
v0.11

@Sxela Sxela released this 27 Aug 12:12
· 1 commit to v0.11-AGPL since this release
214f60c

Changelog:

  • add lora support
  • add loras schedule parsing from prompts
  • add custom path for loras
  • add custom path for embeddings
  • add torch built-in raft implementation
  • fix threads error on video export
  • disable guidance for lora
  • add compile option for raft @ torch v2 (a100 only)
  • force torch downgrade for T4 GPU on colab
  • add faces controlnet from https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace
  • make gui not reset on run cell (there is still javascript delay before input is saved)
  • add custom download folder for controlnets
  • fix face controlnet download url
  • fix controlnet depth_init for cond_video with no preprocess

Lora Support

Added lora support.

First, download loras and place them in a folder. When downloading, mind the base model each lora was trained for: v1 and v2 loras may not be compatible.

Specify their folder in the LORA & embedding paths cell. After you run this cell, a list of detected loras will be printed.
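The detection step amounts to a folder scan; a minimal sketch, assuming the checkpoint extensions are .safetensors, .ckpt, or .pt (the actual extension list is an assumption):

```python
from pathlib import Path

def list_loras(lora_dir):
    """Scan a folder for lora checkpoint files and return their base names."""
    exts = {".safetensors", ".ckpt", ".pt"}  # assumed set of extensions
    return sorted(p.stem for p in Path(lora_dir).iterdir()
                  if p.is_file() and p.suffix.lower() in exts)
```

The names this returns are the ones you would reference in a prompt tag.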

To use loras, add them to your main prompt, preferably at the end (the tag will be detected and removed from the prompt). Use the following format: <lora:lora_name:lora_weight>

For example:

<lora:urbanSamuraiClothing_urbansamuraiV03:1> where urbanSamuraiClothing_urbansamuraiV03 is the detected lora name printed in LORA & embedding paths cell.

The full prompt may look like this: {0: ['a beautiful highly detailed cyberpunk mechanical augmented most beautiful (man) ever, cyberpunk 2077, neon, dystopian, hightech, trending on artstation, <lora:urbanSamuraiClothing_urbansamuraiV03:1>']}
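The tag-parsing step described above can be sketched with a regex; this is an illustrative sketch, not the notebook's actual parser:

```python
import re

# Matches tags like <lora:urbanSamuraiClothing_urbansamuraiV03:1>
LORA_TAG = re.compile(r"<lora:([^:>]+):([0-9.]+)>")

def extract_loras(prompt):
    """Return (clean_prompt, {lora_name: weight}) with lora tags stripped out."""
    loras = {name: float(w) for name, w in LORA_TAG.findall(prompt)}
    clean = LORA_TAG.sub("", prompt).strip().rstrip(",").strip()
    return clean, loras
```

Run on the example above, it would yield the prompt without the tag plus a mapping of lora name to weight.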

Scheduling:

You may schedule loras in your prompts. This requires blend_json_schedules to be enabled; otherwise the last weight value is used without blending.

For example:

{0: ['a prompt, <lora:urbanSamuraiClothing_urbansamuraiV03:1>'],
100: ['a prompt, <lora:urbanSamuraiClothing_urbansamuraiV03:0>']}

will gradually reduce lora weight from 1 to 0 across 100 frames.

You can use multiple loras. For example:

{0: ['a prompt, <lora:urbanSamuraiClothing_urbansamuraiV03:1> <lora:zahaHadid_v10:0>'],  
100: ['a prompt, <lora:urbanSamuraiClothing_urbansamuraiV03:0> <lora:zahaHadid_v10:1>']}

will gradually reduce urbanSamuraiClothing_urbansamuraiV03 weight from 1 to 0 across 100 frames, and increase zahaHadid_v10 weight from 0 to 1.

If you don't specify a 0-weight keyframe, the lora will pop in abruptly. For example:

{0: ['a prompt'], 100: ['a prompt, <lora:urbanSamuraiClothing_urbansamuraiV03:1>']}

will have no lora for frames 0-99, and urbanSamuraiClothing_urbansamuraiV03 lora with weight 1 at frame 100.
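The blending behavior above can be modeled as linear interpolation between keyframes; a minimal sketch of the idea (the notebook's actual blend_json_schedules implementation may differ):

```python
def lora_weight_at(frame, keyframes):
    """Linearly blend a lora weight between scheduled keyframes.

    keyframes: dict mapping frame number -> weight, e.g. {0: 1.0, 100: 0.0}.
    Before the first / after the last keyframe, the nearest value is held.
    """
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    for lo, hi in zip(frames, frames[1:]):
        if lo <= frame <= hi:
            t = (frame - lo) / (hi - lo)
            return keyframes[lo] * (1 - t) + keyframes[hi] * t
```

For the schedule {0: 1, 100: 0}, frame 50 would get weight 0.5.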

Init scale and latent scale (guidance) are disabled when loras are used.

Faces ControlNet

Added Faces ControlNet support. To use it, set its weight above 0 in the cell with the other controlnets.

Settings: GUI -> controlnet -> max_faces: maximum number of faces to detect.

If no faces were detected in a frame, the controlnet will not be used for that frame even if it was enabled.

QOL improvements

The GUI will now keep its values even when the cell is re-run. Keep in mind that it's a javascript app and may still lag, not saving the most recent changes if your system is under load.

Added user paths for loras, embeddings, controlnet models. You can now store all of these in a common (non-warp) folder / google drive.

Added a built-in raft implementation to replace the pre-compiled one used before; it supports torch v2. To use it, uncheck use_jit_raft. You can compile the built-in raft model for a 30% speed-up (available only on A100 in google colab / torch v2 on a local install).