Anyone can test it.
https://drive.google.com/drive/folders/1y7fslnoy1n2z71r0t91ty8yrq5iuwole
Refiner and CN Passes Extractor: https://drive.google.com/drive/folders/1hozxkux7wag7wag7obqp00r4oiv48sxceryq
What is Animate Anyone? Here is some information: https://www.youtube.com/watch?v=LK12oC2N87g&ab_channel=SebastianKamph
Workflow Previews:
1) Main Animate Anyone Setup:
1) Background and Exporting Setup
Uploading Reference Image Tips:
- Upload a full body reference.
- The face should be fully visible and the background should be simple and clean.
- Hands should not overlap other body parts.
- Dimensions close to 512×768 work best; it's even better if you also have a matching pose image.
Resize Tips:
- Use 512×768 or a close dimension; images will be resized to this dimension when saving.
- It may stretch or squash a little, which is fine; the refiner pass will nudge the proportions into place, but avoid too much stretch or squash.
Also keep in mind that this Animate Anyone model is trained on TikTok videos, so it works best in portrait mode (a small resize sketch follows).
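For reference, here is a minimal Python sketch of the kind of resize described above, using Pillow; only the 512×768 target comes from the tips, while the function name, warning threshold and example filenames are just illustrative:

from PIL import Image

TARGET_SIZE = (512, 768)  # width x height suggested in the tips above

def resize_reference(path_in, path_out):
    # Open the reference image and force RGB.
    img = Image.open(path_in).convert("RGB")
    w, h = img.size
    # A little stretch/squash is fine; warn when the aspect ratio is far
    # from 512:768 (2:3), which is the case the tips say to avoid.
    if abs((w / h) - (TARGET_SIZE[0] / TARGET_SIZE[1])) > 0.15:
        print(f"Warning: {w}x{h} is far from 512x768 proportions, expect distortion")
    img.resize(TARGET_SIZE, Image.LANCZOS).save(path_out)

# Example:
# resize_reference("reference_full_body.png", "reference_512x768.png")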
3) Controlnet:
– You should have the OpenPose Pass exported to a directory by the "Passes Exporter" file; paste that directory path here.
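As an illustration only, a small Python sketch of what a directory of exported OpenPose frames looks like to a loader; the helper name and example path are hypothetical, not part of the workflow:

import os

def list_openpose_frames(pass_dir):
    # Collect the exported frame images in (sorted) frame order.
    frames = [
        os.path.join(pass_dir, name)
        for name in sorted(os.listdir(pass_dir))
        if name.lower().endswith((".png", ".jpg", ".jpeg"))
    ]
    if not frames:
        raise FileNotFoundError(f"No OpenPose frames found in {pass_dir}")
    return frames

# Example: the folder exported by the Passes Exporter workflow
# print(len(list_openpose_frames(r"D:\AnimateAnyone\openpose_pass")))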
4) Auto Process
Lap Counter: automatically increments the skipped frames according to the batch range you set (a rough sketch of this arithmetic follows the steps below).
1) Enter Batch Range (10 is good)
2) After the first run, the Laps Needed node will tell you the number n of queues you need
3) Change the control_after_generate setting to "Increment" when ready for automation
4) Press the Queue Prompt button n times until your Queue Size = n in the side menu, or use the batch count under Extra Options.
If you mistakenly queue more laps than needed, it will just stop automatically with a "nothing found" type of error; that's fine, don't worry about it.
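A rough Python sketch of the lap arithmetic described above; this is an assumption based on the steps, not the Lap Counter node's actual code:

import math

def laps_needed(total_frames, batch_range):
    # How many queue runs are needed to cover every frame in batches.
    return math.ceil(total_frames / batch_range)

def skip_frames_for_lap(lap_index, batch_range):
    # Frames skipped at the start of a given lap (lap_index starts at 0).
    return lap_index * batch_range

# Example: 95 pose frames with a Batch Range of 10
print(laps_needed(95, 10))         # -> 10 laps
print(skip_frames_for_lap(3, 10))  # -> lap 3 skips the first 30 frames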
—————————————————————————————————–
INSTALLATION
Installation Video Help: https://www.youtube.com/watch?v=4z0wlS0JZ2Q&ab_channel=OlivioSarikas
GitHub Link: https://github.com/mrforexample/comfyui-animateanyone-evolved
- 1) Download this: https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/unet/diffusion_pytorch_model.bin and put it here: ComfyUI\custom_nodes\ComfyUI-AnimateAnyone-Evolved\pretrained_weights\stable-diffusion-v1-5\unet
- 4) Download pytorch_model.bin from here: https://huggingface.co/lambdalabs/sd-image-variations-diffusers/tree/main/image_encoder, rename it to clip_vision_animate_anyone and put it here: ComfyUI\models\clip_vision
- 5) You can use any SD VAE with this.
- Download denoising_unet.pth, motion_module.pth, pose_guider.pth and reference_unet.pth from here: https://huggingface.co/patrolli/AnimateAnyone/tree/main and put them here: ComfyUI\custom_nodes\ComfyUI-AnimateAnyone-Evolved\pretrained_weights (a quick file check is sketched after these steps)
** The clip vision model is the exception here: using any other clip vision model will give errors.
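If you want to double-check the downloads, here is a small Python sketch that only verifies the expected files exist; the paths assume a default Windows ComfyUI layout and that the renamed clip vision file kept a .bin extension, so adjust both to your setup:

import os

# Adjust this to your own ComfyUI install location.
COMFYUI_ROOT = r"C:\ComfyUI"

PRETRAINED = os.path.join(COMFYUI_ROOT, "custom_nodes",
                          "ComfyUI-AnimateAnyone-Evolved", "pretrained_weights")

expected = [
    os.path.join(PRETRAINED, "stable-diffusion-v1-5", "unet",
                 "diffusion_pytorch_model.bin"),
    os.path.join(PRETRAINED, "denoising_unet.pth"),
    os.path.join(PRETRAINED, "motion_module.pth"),
    os.path.join(PRETRAINED, "pose_guider.pth"),
    os.path.join(PRETRAINED, "reference_unet.pth"),
    # The renamed clip vision file; the exact name depends on how you renamed it.
    os.path.join(COMFYUI_ROOT, "models", "clip_vision",
                 "clip_vision_animate_anyone.bin"),
]

for path in expected:
    print(("OK     " if os.path.isfile(path) else "MISSING"), path)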
__________________________________________________________________________
Also pip install diffusers in the venv if you are getting a 'diffusers.models.embeddings' error (a quick import check is sketched below)
https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved/issues/2#issuecomment-1901089199
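A quick diagnostic sketch to check whether your venv's diffusers build actually exposes that module; run it inside the same venv ComfyUI uses:

# Run this inside the same venv that ComfyUI uses.
try:
    import diffusers
    import diffusers.models.embeddings  # the module named in the error
    print("diffusers", diffusers.__version__, "- embeddings module found")
except ImportError as exc:
    print("diffusers problem:", exc)
    print("Try: pip install -U diffusers (inside the venv)")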
Troubleshooting errors: https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved/issues
——————————————————————————————————————————————————————-
NOTES:
1) This Animate Anyone does best with simple backgrounds; with complex backgrounds it gets wobbly.
2) The video generation depends only on the OpenPose pass, and OpenPose motion tends to be jittery, which produces wobbly animations. It would be better if the OpenPose data were cleaned up with some 3D motion tracking or by retargeting the OpenPose rig.
3) This tech has a small training dataset and is still very glitchy and clumsy right now, so it can produce strange results... Anyway, it will improve in the future, and it's a good time to dip into it.
https://www.runcomfy.com/?ref=jerrydavos
-Jerry Davos