v0.2.0
New
- Simple HTTP API example (#530) by @briancaffey
- Command-line streaming inference example (#512) by @ZaymeShaw
- Batch Vocos decoding: the batch matrix is sized by the longest token sequence and the remaining positions are zero-filled (6e18575) by @fumiama (see the padding sketch after this list)
- Adaptation to new VQ Encoder (9f0b7a0) by @fumiama
- ZeroShot support (b4da237) by @fumiama
- Initial support for vLLM (8e6184e) by @ylzz1997
- Added `--source` and `--custom_path` parameters to the command-line examples (#669) by @weedge (see the sketch after this list)
- Added an `inplace` parameter to the `Speaker` class for fine-tuning (#679) by @ain-soph
- Added an `experimental` parameter to the `Chat.load` function (#682) by @ain-soph
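The batched Vocos decoding entry above relies on a standard padding trick: variable-length token sequences are stacked into one matrix sized by the longest sequence, with the remainder filled with zeros, so the decoder can run once over the whole batch. The sketch below is illustrative plain PyTorch, not ChatTTS code.

```python
# Illustrative only: zero-pad variable-length sequences to the longest one,
# so a vocoder such as Vocos can decode the whole batch in a single call.
import torch
from torch.nn.utils.rnn import pad_sequence

seqs = [torch.randint(0, 100, (n,)) for n in (5, 9, 3)]        # dummy token sequences
batch = pad_sequence(seqs, batch_first=True, padding_value=0)  # shape (3, 9)
lengths = torch.tensor([len(s) for s in seqs])                 # keep lengths to trim padded output later

print(batch.shape, lengths)
# decoded = vocos.decode(features_from(batch))  # hypothetical batched decode call
```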
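A minimal usage sketch of the new parameters. The call pattern, the `source="custom"` value, and the assumption that `--source`/`--custom_path` are simply forwarded to `Chat.load` are mine, not stated in the changelog.

```python
# Hedged sketch: parameter names follow the entries above; the call pattern is assumed.
import ChatTTS

chat = ChatTTS.Chat()
chat.load(
    source="custom",                # assumed to mirror the new --source CLI option
    custom_path="/path/to/models",  # assumed to mirror the new --custom_path CLI option
    compile=False,                  # new default, see the Optimized section below
    experimental=False,             # new Chat.load flag from #682
)
```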
Fixed
- Intermittent glitches in streaming inference audio (7ee5426) by @fumiama
- Different accent per generation even under the same parameters in WebUI (3edd47c) by @fumiama
- `normalizer` changed the tag format, causing the model to read the tag aloud (c6bae90) by @fumiama
- Replaced the inaccessible GitCode mirror (c06f1d4) by @fumiama
- Error in handling repetition penalty idx (#738) by @niuzheng168
Optimized
- Completely removed the `pretrain_models` dictionary (77c7e20) by @fumiama
- Made `tokenizer` a standalone class (77c7e20) by @fumiama
- `normalizer` now removes all unsupported characters to avoid inference errors (0f47a87) by @fumiama
- Removed the `config` folder; settings are now embedded directly in the code for easier changes (27331c3) by @fumiama
- Removed extra whitespace at the end of streaming inference (#564) by @Ox0400
- Added a `manual_seed` parameter that passes a `generator` directly to `multinomial`, so sampling no longer touches the global torch RNG state (e675a59) by @fumiama (see the sketch after this list)
- Switched `tokenizer` loading from a `.pt` file to the built-in `from_pretrained` method to eliminate potential malicious code loading (80b24e6) by @fumiama (see the loading sketch after this list)
- Made the `speaker` class standalone, moved the `spk_stat`-related content into it, and wrote its values directly into the settings class since the model is small (f3dcd97) by @fumiama
- `Chat.load` now sets `compile=False` by default (7e33889) by @fumiama
- Switched GPT to a `safetensors` model (8a503fd) by @fumiama
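The `manual_seed` entry above boils down to seeding a dedicated `torch.Generator` and handing it to `torch.multinomial`, instead of calling the global `torch.manual_seed`. A minimal sketch in plain PyTorch with illustrative names, not the ChatTTS implementation:

```python
import torch

def sample_token(logits, seed=None):
    """Draw one token id; only this draw is affected by `seed`, the global RNG is untouched."""
    probs = torch.softmax(logits, dim=-1)
    gen = None
    if seed is not None:
        gen = torch.Generator(device=probs.device)
        gen.manual_seed(seed)
    return torch.multinomial(probs, num_samples=1, generator=gen)

logits = torch.randn(1, 32)
print(sample_token(logits, seed=42))  # reproducible draw
print(torch.rand(1))                  # global RNG state unaffected by the seed above
```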
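The tokenizer-loading change above matters because `torch.load` on an untrusted `.pt` file unpickles arbitrary objects and can execute code, while a Hugging Face `from_pretrained` call only reads config and vocabulary files. A hedged illustration; the model id is a placeholder, not ChatTTS's actual asset:

```python
from transformers import AutoTokenizer

# Old, risky pattern with untrusted files (pickle can run arbitrary code):
#   tokenizer = torch.load("asset/tokenizer.pt")

# New pattern: load from config/vocab files only.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder model id
print(tokenizer("hello world")["input_ids"])
```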
Dependencies
- Changed code license to open-source AGPL3.0 (9f402ba)