Steerable Motion, a ComfyUI custom node for steering videos with batches of images

Steerable Motion is a ComfyUI node for batch creative interpolation. Our goal is to feature the best quality and most precise and powerful methods for steering motion with images as video models evolve. This node is best used via Dough - a creative tool which simplifies the settings and provides a nice creative flow - or in Discord - by joining this channel.

🆕 Interpolation Method 🌟

I've recently updated our interpolation method for better performance and smoother transitions. Check the demos below:

New Interpolation Demo · New Interpolation Demo 2

This new method enhances the quality of motion steering, producing more fluid and natural transitions. I'm constantly working to improve the algorithms and provide the best possible experience.

Features

  • Batch creative interpolation
  • Improved motion steering
  • Fluid and natural transitions
  • Image upscaling

Please use this updated workflow



Main example

Installation in Comfy

  1. If you haven't already, install ComfyUI and Comfy Manager - you can find instructions on their pages.
  2. Download this workflow and drop it into ComfyUI - or you can use one of the workflows others in the community made below.
  3. When the workflow opens, install the dependent nodes by pressing "Install Missing Custom Nodes" in Comfy Manager. Download the required models through Comfy Manager as well - make sure that the models you download have the same names as the ones in the workflow, or that you're confident they're the same.

Usage

The main settings are:

  • Key frame position: how many frames to generate between each main key frame you provide.
  • Length of influence: what range of frames to apply the IP-Adapter (IPA) influence to.
  • Strength of influence: the low point and high point that each frame's influence should reach.
  • Image adherence: how much we should force adherence to the input images.

Other than image adherence, which is set for the entire generation, these are set either linearly - the same for each frame - or dynamically - varying for each frame. You can find detailed instructions on how to tweak these settings inside the workflow above.
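To make these settings concrete, here is a minimal sketch - not the node's actual implementation; the function and parameter names are invented for illustration - of how a per-frame IP-Adapter weight curve could be derived from the key frame spacing, the length of influence, and the low/high strength values:

```python
import numpy as np

def keyframe_weights(num_keyframes: int, frames_between: int,
                     influence_range: int,
                     low: float = 0.3, high: float = 1.0) -> np.ndarray:
    """Hypothetical sketch: one IPA weight curve per key frame.

    Key frame k sits at frame k * frames_between. Its weight peaks at
    `high` on that frame, falls off linearly to `low` at the edge of
    its influence range, and is 0.0 outside the range.
    """
    total_frames = (num_keyframes - 1) * frames_between + 1
    frames = np.arange(total_frames)
    weights = np.zeros((num_keyframes, total_frames))
    for k in range(num_keyframes):
        center = k * frames_between
        dist = np.abs(frames - center)
        inside = dist <= influence_range
        # Linear ramp: `high` at the key frame, `low` at the range edge.
        ramp = high - (high - low) * dist / max(influence_range, 1)
        weights[k, inside] = ramp[inside]
    return weights

# "Linear" settings: the same spacing, range, and strengths for every key frame.
w = keyframe_weights(num_keyframes=4, frames_between=16, influence_range=8)
print(w.shape)            # (4, 49)
print(w[1, [8, 16, 24]])  # [0.3 1.  0.3] - ramps up to the key frame, then back down
```

A "dynamic" schedule would simply pass per-key-frame values for frames_between, influence_range, low, and high instead of single numbers, which is effectively what varying the settings per frame in the workflow does.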

Tweaking the settings can greatly influence the motion - for example, below are two animations of the same images with a single setting changed, the length of each frame's influence:

Tweaking settings example

Philosophy for getting the most from this

This isn't a tool like text-to-video that will perform well out of the box; it's more like a paintbrush - an artistic tool that you need to figure out how to get the best from.

Through trial and error, you'll need to build an understanding of how the motion and settings work, what its limitations are, which input images work best with it, and so on.

It won't work for everything but if you can figure out how to wield it, this approach can provide enough control for you to make beautiful things that match your imagination precisely.

5 basic workflows to get started

Below are 5 basic workflows, each with its own weird and unique characteristics, all with differing levels of adherence and different types of motion. Most of the differences come from tweaking the IPA configuration and switching out base models.

You can see each in action below:

basic workflows

2 examples of things others have built on top of this:

The workflows I share above are just basic examples of it in action - below are two other workflows people in our community have created on top of this node that leverage the same underlying mechanism in creative and interesting ways:

Looped LCM by @idgallagher

First, @idgallagher uses LCM and different settings to achieve a really interesting realistic motion effect. You can grab it here and see an example output here:

Flipping Sigmas

Smooth & Deep by @Superbeasts.ai

Next, Superbeasts.ai uses depth maps to control the motion in different layers - creating a smoother motion effect. You can grab this workflow here and see an example of it in action here:

Superbeasts Example

I believe there are endless ways to expand upon and extend the ideas in this node - if you do anything cool, please share!

Want to give feedback, or join a community that's pushing open source models to their artistic and technical limits?

You're very welcome to drop into our Discord here.

Credits

This code draws heavily from Cubiq's IPAdapter_plus, while the workflow uses Kosinkadink's AnimateDiff-Evolved and ComfyUI-Advanced-ControlNet, FizzleDorf's FizzNodes, Fannovel16's Frame Interpolation, and more. Thanks to all of them, the AnimateDiff team, ControlNet, others, and of course our supportive community!
