v0.13

@Sxela released this 11 Sep 17:11


Changelog:

  • add alternative consistency algo (also faster)
  • auto skip install for our docker env
  • clean some discodiffusion legacy code (it's been a year :D)
  • add controlnet default main model (v1.5)
  • add reference controlnet (attention injection)
  • add reference mode and source image
  • skip flow preview generation if it fails
  • downgrade to torch v1.13 for colab hosted env
  • save schedules to settings before applying templates
  • keep pre-template settings in the GUI
  • add gui options to load settings, keep state on rerun/load from previous cells
  • fix schedules not kept on GUI rerun
  • rename depth_source to cond_image_src to reflect its actual purpose
  • fix outer not defined error for reference
  • remove torch downgrade for colab
  • remove xformers for torch v2/colab
  • add sdp attention from AUTOMATIC1111 to replace xformers (for torch v2)
  • fix reference controlnet infinite recursion loop
  • fix prompt schedules not working with "0"-like keys

New consistency algorithm

The new algorithm is cleaner and should reduce flicker related to missed consistency masks.

Consistency is now calculated simultaneously with the flow.

use_legacy_cc:
The alternative consistency algorithm is on by default. To revert to the older algorithm, check use_legacy_cc in the "Generate optical flow and consistency maps" cell.

missed_consistency_dilation:
"Width" of the missed consistency mask dilation. Default: 1.

edge_consistency_width:
Edge consistency width. Odd numbers only. Default: 11.
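
For illustration, here is a minimal sketch of how these two parameters could be applied to a per-frame consistency mask, assuming an OpenCV-style pipeline. The function name and mask handling are hypothetical and not taken from the notebook.

```python
# Hypothetical sketch: post-process a consistency mask using the two GUI
# parameters described above. Not the actual WarpFusion implementation.
import cv2
import numpy as np

def postprocess_consistency(mask: np.ndarray,
                            missed_consistency_dilation: int = 1,
                            edge_consistency_width: int = 11) -> np.ndarray:
    """mask: float32 array in [0, 1]; 0 marks missed-consistency (occluded) pixels."""
    assert edge_consistency_width % 2 == 1, "edge_consistency_width must be odd"

    mask_u8 = (mask * 255).astype(np.uint8)

    # Grow the missed (0-valued) regions: eroding the consistent area is
    # equivalent to dilating the missed-consistency mask.
    kernel = np.ones((3, 3), np.uint8)
    grown = cv2.erode(mask_u8, kernel, iterations=missed_consistency_dilation)

    # Soften the mask borders over edge_consistency_width pixels so the
    # transition between consistent and inconsistent areas blends smoothly.
    ksize = (edge_consistency_width, edge_consistency_width)
    softened = cv2.GaussianBlur(grown, ksize, 0)

    return softened.astype(np.float32) / 255.0
```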

Reference controlnet (aka attention injection)

By Lvmin Zhang
https://github.com/Mikubill/sd-webui-controlnet

Added attention injection: you can mix attention data from your reference image with the image being generated. It runs about 2x slower, as it effectively samples two images in parallel (the stylized frame and the reference).

Works with any model, not only controlnet multi, as it's just a hack on the attention layers. We still call it controlnet to honor its author's naming decision.
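
Conceptually, the reference pass runs the reference image through the same UNet and mixes its self-attention keys/values into the attention of the image being generated, with a weight controlling the blend. A minimal PyTorch-flavored sketch under those assumptions follows; the function name, the cached ref_k/ref_v tensors, and the blending formula are illustrative, not the sd-webui-controlnet code.

```python
# Illustrative attention-injection sketch (not the actual implementation).
# ref_k / ref_v are self-attention keys/values cached from the reference
# image's own forward pass through the same attention layer.
import torch
import torch.nn.functional as F

def injected_self_attention(q, k, v, ref_k, ref_v, reference_weight=0.5):
    """q, k, v: (batch, tokens, dim) tensors from the image being generated."""
    # Let the current sample attend to the reference image's features
    # by concatenating the reference keys/values along the token axis.
    k_mix = torch.cat([k, ref_k], dim=1)
    v_mix = torch.cat([v, ref_v], dim=1)

    attn_plain = F.scaled_dot_product_attention(q, k, v)        # no reference
    attn_ref = F.scaled_dot_product_attention(q, k_mix, v_mix)  # with reference

    # Higher reference_weight pulls the result toward the reference image.
    return (1 - reference_weight) * attn_plain + reference_weight * attn_ref
```

Because the reference image needs its own forward pass to produce ref_k/ref_v, the sampler effectively processes two images per step, which is why the feature roughly doubles render time.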

Reference controlnet (attention injection) ->

use_reference: Check to enable

reference_weight: strength of the reference image versus the current image

reference_source: source of the reference image. Options: ['None', 'stylized', 'init', 'prev_frame','color_video']
None - off
stylized - use current input image
prev_frame - previously stylized frame
init - raw video frame
color_video - frame from color video

reference_mode:
Defines whether the prompt or the reference should affect the result more.
Options: ['Balanced', 'Controlnet', 'Prompt']
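
Put together, a typical reference setup maps to values like these. This is a hypothetical settings snippet using the option names above; the notebook's actual settings format may differ.

```python
# Hypothetical reference-controlnet settings using the documented options.
reference_settings = {
    "use_reference": True,
    "reference_weight": 0.5,         # 0 = ignore reference, 1 = reference only
    "reference_source": "stylized",  # 'None', 'stylized', 'init', 'prev_frame', 'color_video'
    "reference_mode": "Balanced",    # 'Balanced', 'Controlnet', 'Prompt'
}
```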