From ca182b650e1061f0630ad275baf047e260c864df Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?P=C3=A1l=20Zs=C3=A1mboki?= <79205753+zsamboki@users.noreply.github.com>
Date: Thu, 13 Jul 2023 18:37:20 +0200
Subject: [PATCH] Fix code typo in int8-asr.mdx

Having `bias="None"` in `LoraConfig` raised a `NotImplementedError`. Replaced it
with `bias="none"` as per the
[`LoraConfig` reference](https://huggingface.co/docs/peft/main/en/package_reference/tuners#peft.LoraConfig),
and now the code works; I can run training.
---
 docs/source/task_guides/int8-asr.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/task_guides/int8-asr.mdx b/docs/source/task_guides/int8-asr.mdx
index f1ace5ba00..37d63b6d6d 100644
--- a/docs/source/task_guides/int8-asr.mdx
+++ b/docs/source/task_guides/int8-asr.mdx
@@ -205,7 +205,7 @@ Let's also apply LoRA to the training to make it even more efficient. Load a [`~
 ```py
 from peft import LoraConfig, PeftModel, LoraModel, LoraConfig, get_peft_model
 
-config = LoraConfig(r=32, lora_alpha=64, target_modules=["q_proj", "v_proj"], lora_dropout=0.05, bias="None")
+config = LoraConfig(r=32, lora_alpha=64, target_modules=["q_proj", "v_proj"], lora_dropout=0.05, bias="none")
 ```
 
 After you set up the [`~peft.LoraConfig`], wrap it and the base model with the [`get_peft_model`] function to create a [`PeftModel`]. Print out the number of trainable parameters to see how much more efficient LoRA is compared to fully training the model!
@@ -375,4 +375,4 @@ with torch.cuda.amp.autocast():
     text = pipe(audio, generate_kwargs={"forced_decoder_ids": forced_decoder_ids}, max_new_tokens=255)["text"]
 text
 "मी तुमच्यासाठी काही करू शकतो का?"
-```
\ No newline at end of file
+```
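
For reference, a minimal standalone sketch of the corrected configuration from the hunk above, assuming `peft` is installed and using the Whisper attention projections (`q_proj`, `v_proj`) targeted in the guide. Per the linked `LoraConfig` reference, `bias` accepts the lowercase strings `"none"`, `"all"`, or `"lora_only"`; other values, such as the capitalized `"None"`, fall through to a `NotImplementedError` once the adapter is applied.

```py
from peft import LoraConfig

# `bias` must be "none", "all", or "lora_only" (lowercase).
# Passing "None" raises NotImplementedError when the adapter is
# applied to the base model with get_peft_model().
config = LoraConfig(
    r=32,                                 # rank of the LoRA update matrices
    lora_alpha=64,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",                          # train no bias parameters
)
```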