
Support inference with LyCORIS GLora networks #13610

Merged 1 commit into AUTOMATIC1111:dev on Oct 14, 2023

Conversation

@v0xie (Contributor) commented Oct 12, 2023

Description

This PR adds support for inference with networks trained with LyCORIS GLoRA. The implementation is based on the excellent one by @KohakuBlueleaf here: https://github.com/KohakuBlueleaf/LyCORIS/blob/main/lycoris/modules/glora.py (a rough sketch of the weight-merge math is given after the change list below).

Changes:

  • Added a new Python file network_glora.py in extensions-builtin/Lora.
  • Modified extensions-builtin/Lora/networks.py to use the new module type.
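
For orientation, here is a minimal sketch of what a GLoRA weight merge can look like, assuming the low-rank factors are stored as four tensors a1/a2/b1/b2 (the names, orientation, and helper function are illustrative assumptions, not the PR's exact code): the adapted weight keeps the original W and adds both a low-rank additive term A = a2 @ a1 and a low-rank term B = b2 @ b1 routed through W itself.

```python
import torch

def glora_updown(orig_weight: torch.Tensor,
                 a1: torch.Tensor, a2: torch.Tensor,
                 b1: torch.Tensor, b2: torch.Tensor) -> torch.Tensor:
    """Sketch of a GLoRA weight delta, W' - W = A + W @ B.

    Assumed shapes for a linear layer with W of shape [out, in]:
      a2: [out, rank], a1: [rank, in]  -> A: [out, in]  (additive term)
      b2: [in, rank],  b1: [rank, in]  -> B: [in, in]   (term applied through W)
    """
    A = a2 @ a1                  # low-rank additive update
    B = b2 @ b1                  # low-rank update routed through the original weight
    return A + orig_weight @ B   # delta to add onto orig_weight at load time
```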

Other notes:

  • The test network was trained with kohya_ss using the arguments --network_module=lycoris.kohya --network_args "conv_dim=4" "conv_alpha=4" "algo=glora"
  • GLoRA paper: https://arxiv.org/abs/2306.07967

@v0xie changed the title support inference with LyCORIS GLora networks → Support inference with LyCORIS GLora networks on Oct 12, 2023
@AUTOMATIC1111 merged commit 26500b8 into AUTOMATIC1111:dev on Oct 14, 2023
3 checks passed
@oliverban

Does it work with SDXL?

@KohakuBlueleaf (Collaborator)

@v0xie The implementation you linked doesn't use the conv_dim and conv_alpha arguments?

I haven't implemented GLoRA for convolution, since there is more than one way to implement W' = A + WB. (A and B are both low-rank matrices, and in a conv layer there is more than one way to form WB.)

I think I will implement B directly as a linear layer between channels when I have time.
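
One natural reading of "B as a linear layer between channels" is a 1x1 convolution, which mixes channels at each spatial position without touching the KxK spatial kernel. A hedged sketch (the module and its names are illustrative, not LyCORIS code):

```python
import torch
import torch.nn as nn

class LowRankChannelMix(nn.Module):
    """Two stacked 1x1 convolutions form a low-rank linear map across
    channels at every spatial location, sidestepping the ambiguity of
    defining W @ B for a KxK convolution kernel."""

    def __init__(self, channels: int, rank: int):
        super().__init__()
        self.down = nn.Conv2d(channels, rank, kernel_size=1, bias=False)
        self.up = nn.Conv2d(rank, channels, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))
```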

@v0xie (Contributor, Author) commented Oct 24, 2023

@oliverban Yes, I've only tested it with SDXL.

@KohakuBlueleaf Thank you! Even without the conv layers, GLoRA is making way better images than any normal LoRA I've trained.

@KohakuBlueleaf (Collaborator)

@v0xie That's great!
I'm investigating the GLoKr idea (mentioned in our paper).
I think I will release it in LyCORIS 2.0.0.

@Blackhol3

Thanks for your work, @v0xie!

GLoRA looks promising, but I can't make inference work with SD1.5: I only get seemingly random noise like the image below. The training with bmaltais' kohya_ss goes well, the samples generated during training look good, your new module is correctly called, and yet…

Do you have any idea what could cause the issue?

[image: example of the random-noise output]

@v0xie (Contributor, Author) commented Oct 27, 2023

> GLoRA looks promising, but I can't make the inference work with SD1.5. […] Do you have any idea what could cause the issue?

That's super strange. I hadn't tested SD1.5 with GLoRA before, so I just trained one with the latest dev branch of sd-scripts to check, and it appears to work correctly.

Can you try this LoRA and see if you're able to generate images with it? https://huggingface.co/v0xie/sd15-glora-monster_toy/blob/main/monster_toy_sd15_glora-000010.safetensors

Both the picture and the safetensors file have the training/inference settings embedded in their metadata.

[image: "a monster_toy toy in sky"]

@Blackhol3

Yeah, your LoRA works great. Thanks to the embedded metadata, I've been able to track down the problem, and it seems to be network_alpha. I'm used to keeping mine at 1, which caused the issue; everything's fine when it's the same value as network_rank (or close enough).

It's strange, though: as I said, the samples generated during training looked good, so I don't think the training itself is the problem; and yet, I don't see how network_alpha could have any impact during inference…

@ckmlai commented Oct 28, 2023

For me, GLoRA looks totally fine in the training previews, as previously mentioned, but at inference anything above 0.3 weight produces fried images or noisy nonsense. I have tested multiple configurations of lr, dim, and alpha, but the problem doesn't go away.

@w-e-w mentioned this pull request on Dec 4, 2023
@wcde commented Dec 6, 2023

> I don't see how network_alpha can have any impact during inference…

It should be scaled at inference too, but this implementation misses the scaling, so it works correctly only when alpha == network_dim. Everything else will be broken.
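
For context, LoRA-family trainers conventionally scale the learned delta by alpha / dim at inference time; if that factor is omitted, outputs match training only when alpha == dim (scale == 1.0), which fits what @Blackhol3 observed with network_alpha kept at 1. A minimal sketch of the convention (the function name is illustrative):

```python
import torch

def apply_alpha_scale(updown: torch.Tensor, alpha: float, dim: int) -> torch.Tensor:
    """Scale a learned weight delta by alpha / dim, the usual LoRA-style
    convention; skipping this is harmless only when alpha == dim."""
    return updown * (alpha / dim)
```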

@w-e-w mentioned this pull request on Dec 16, 2023