Replies: 2 comments
-
cc @asomoza
-
Hi, are you asking whether you can train a ControlNet for this? The best and most common solution for what you're describing is to train a LoRA of the person you want to keep consistent; it also depends on the model architecture you're using and what for. Character/person consistency is one of the most frequently asked questions, and to this day there's no way to achieve it with 100% accuracy all the time, at least with photorealism; with anime/cartoons it's not that hard, and there are plenty of web apps that do this. There have been a lot of solutions and papers about it; you can try searching for and reading them. Also, if you search for "stable diffusion character consistency", you'll see what I mean about it being one of the most asked questions. As an example, there's MimicMotion, which does what you want, but with OpenPose and an image as conditions.
-
I am training a ControlNet using depth maps as the condition. I want to achieve consistency in the images produced. For example:
I have frames of a video where a person is walking from point A to point B. I give the depth maps for each frame to the ControlNet. I would like the ControlNet to generate an image of the same person for all those frames: wearing the same clothes, with the same body, and so on.
I have played around with different seeds, including fixing the seed; although that gives some consistency in the background, it's never the same person, body, or clothes.
Can someone guide me on how to achieve this?
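To illustrate why fixing the seed helps the background but not the identity: the seed only pins the initial noise that the denoising process starts from, and the per-frame depth condition still reinterprets that noise differently. Here's a minimal NumPy sketch of just the seeded-noise part (the latent shape is a hypothetical SD-style example, not tied to any specific model):

```python
import numpy as np

# Fixing the seed pins only the *initial latent noise* the diffusion
# process starts from. Two runs with the same seed begin from identical
# latents (hence similar backgrounds), but the denoising trajectory is
# still steered per-frame by the depth condition, so identity can drift.
# Hypothetical latent shape (batch, channels, height, width) for illustration.
def initial_latents(seed, shape=(1, 4, 64, 64)):
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = initial_latents(42)
b = initial_latents(42)
c = initial_latents(43)
print(np.array_equal(a, b))  # same seed -> identical starting noise
print(np.array_equal(a, c))  # different seed -> different starting noise
```

So a fixed seed keeps the starting point constant across frames, but it cannot by itself bind the generated person's identity; that's what the LoRA suggestion above addresses.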