
[bug]: many models fail to import when coming from Auto1111 #6964

Open · 1 task done
Jonseed opened this issue Sep 27, 2024 · 0 comments

Labels: bug (Something isn't working)

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

RTX 3060

GPU VRAM

12GB

Version number

5.0.0

Browser

Edge 128.0.2739.79 (Official build) (64-bit)

Python dependencies

No response

What happened

I went to the model manager and scanned the models folder from my Auto1111 install, and many of the models fail to import into Invoke. Here are some of them:

Unable to determine model type:

ViT-L-14-BEST-smooth-GmP-TE-only-HF-format.safetensors (text encoder, clip-l fine-tune)
ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors (text encoder, clip-l fine-tune)
t5xxl_fp8_e4m3fn.safetensors (text encoder, t5 fp8)
t5xxl_fp16.safetensors (text encoder, t5 fp16)
ControlNetHED.pth (ControlNet HED preprocessor)
clip_l.safetensors (text encoder, clip-l)
body_pose_model.pth (openpose controlnet model)
ip-adapter-plus-face_sd15.pth (ip-adapter plus face model for SD 1.5)
kohya_controllllite_xl_blur.safetensors
t2i-adapter_diffusers_xl_openpose.safetensors
kohya_controllllite_xl_depth.safetensors
t2i-adapter_xl_openpose.safetensors
kohya_controllllite_xl_canny.safetensors
bdsqlsz_controlllite_xl_tile_realistic.safetensors
ip-adapter-plus-face_sd15.pth

Unknown LoRA type:

fluxlora.safetensors (Flux LoRA, trained in Flux Gym)
SameFace_fix.safetensors (Flux LoRA)

Cannot determine base type:

sd3_medium_incl_clips.safetensors (sd3 medium, including clip)
sd3_medium_incl_clips_t5xxlfp8.safetensors (sd3 medium, including clip and t5)
ae.safetensors (flux VAE)
control-lora-openposeXL2-rank256.safetensors
thibaud_xl_openpose_256lora.safetensors

Unsupported model file extension .bin:

ip-adapter-faceid-plusv2_sd15.bin (ip-adapter face ID plus v2 for SD 1.5)
ip-adapter-faceid-plusv2_sdxl.bin (ip-adapter face ID plus v2 for SDXL)

What do I do with these failed imports? Can I import them manually and specify the model type myself? Some are probably unsupported, or not yet supported, but others should be usable. Are alternative T5 quantizations not supported for Flux? CLIP-L fine-tunes? Many SDXL ControlNets? How do I know what is and isn't supported in Invoke?
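For reference, here is a small sketch I've been using to peek at the tensor key prefixes inside one of these `.safetensors` files. It assumes the `safetensors` Python package is installed, and the hints in the comments are just my own guesses about what the key layouts mean, not Invoke's actual probing logic:

```python
# Sketch: dump the most common tensor key prefixes of a .safetensors file.
# Assumes the `safetensors` package is installed (pip install safetensors).
import sys
from collections import Counter

from safetensors import safe_open

path = sys.argv[1]  # e.g. "t5xxl_fp16.safetensors"

# Reading only the header/keys is cheap; no tensors are loaded here.
with safe_open(path, framework="pt", device="cpu") as f:
    keys = list(f.keys())

# Group keys by their first two dotted components to get a rough picture
# of what the checkpoint contains.
prefixes = Counter(".".join(k.split(".")[:2]) for k in keys)
for prefix, count in prefixes.most_common(20):
    print(f"{count:5d}  {prefix}")

# Rough, unofficial hints (my assumptions):
#   text_model.encoder...              -> CLIP text encoder (e.g. CLIP-L)
#   encoder.block... / shared.weight   -> T5 text encoder
#   control_model... / input_blocks... -> ControlNet-style checkpoint
```

Running it on e.g. `clip_l.safetensors` versus `t5xxl_fp16.safetensors` shows clearly different key layouts, which is why I'd expect the scanner to be able to tell these apart.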

What you expected to happen

Completed import.

How to reproduce the problem

I just tried to install these models via the model manager.

Additional context

No response

Discord username

No response

Jonseed added the bug label on Sep 27, 2024