The LoRAScheduler output image is different from a normal LoraLoader using the same checkpoint. (Sometimes it occurs on the first run, sometimes on later runs after changing the seed, prompt, etc.)
Interestingly, if the LoRAScheduler uses a separate checkpoint loader then the results become the same.
This looks like the model isn't being unpatched properly after sampling, but I'm not sure where the actual problem is. Maybe ComfyUI doesn't always unpatch models after sampling.
ComfyUI's model object is actually a thin wrapper over the "real" model object that contains the loaded weights, and applying LoRAs modifies those weights directly. The patcher maintains a backup of modified weights that should get restored when sampling is done. It seems that's not happening. It might be that ComfyUI is being smart and not "needlessly" unpatching a model that doesn't change.
With a separate checkpoint loader, the ModelPatcher objects will not share the same underlying weights, so you don't get problems like this.
Maybe I'll figure out a fix at some point, but for now I'd just suggest not mixing LoRAScheduler or ScheduleToModel with LoRALoader.
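The failure mode described above can be sketched in a few lines. This is not ComfyUI's actual ModelPatcher API, just a hypothetical, simplified model of it: two patcher wrappers share the same underlying weights, patching modifies them in place with a backup, and skipping the unpatch step poisons the other wrapper's view of the weights.

```python
# Hypothetical sketch of the shared-weights problem (not ComfyUI's real API).
class ModelPatcherSketch:
    def __init__(self, shared_weights):
        self.weights = shared_weights   # shared reference, not a copy
        self.backup = {}

    def patch(self, deltas):
        # Back up the original values, then modify the shared weights in place.
        for key, delta in deltas.items():
            self.backup[key] = self.weights[key]
            self.weights[key] = self.weights[key] + delta

    def unpatch(self):
        # Restore the backed-up values, undoing the in-place modification.
        for key, value in self.backup.items():
            self.weights[key] = value
        self.backup.clear()


weights = {"layer.0": 1.0}              # stand-in for a model tensor
scheduler = ModelPatcherSketch(weights)  # e.g. LoRAScheduler's patcher
loader = ModelPatcherSketch(weights)     # e.g. LoRALoader's patcher

scheduler.patch({"layer.0": 0.5})       # sample with the scheduler's LoRA...
# ...if unpatch() is skipped here, the loader now sees 1.5 instead of 1.0
assert loader.weights["layer.0"] == 1.5

scheduler.unpatch()                     # proper cleanup restores the original
assert loader.weights["layer.0"] == 1.0
```

Loading the checkpoint twice works around this because each patcher then wraps its own copy of the weights, so one wrapper's leftover patches can never be visible to the other.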
asagi4 changed the title from "LoRAScheduler output differs from Lora Loader" to "LoRAScheduler output can differ from LoraLoader if both are used in the same workflow" on Apr 13, 2024.
@asagi4 Thanks for the explanation. I tried using multiple LoRASchedulers in the same workflow and the results look fine, so for now I'll only use LoRAScheduler when I need to schedule LoRAs. Thanks again for bringing this feature to ComfyUI :)