Add Batched Inference Script for VCoder #3

Open · wants to merge 1 commit into main
Conversation

@dg845 commented Jan 10, 2024

This PR adds a script to vcoder_llava/eval/ that performs batched inference with a VCoder checkpoint, based on vcoder_llava/eval/model_seg_loader.py and vcoder_llava/eval/model_depth_loader.py.
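The core batching pattern such a script relies on can be sketched as follows. This is a minimal, generic illustration (the `batched` helper and file names are hypothetical, not taken from the PR): collect the inputs, split them into fixed-size batches, and run each batch through the model in turn.

```python
from typing import Iterator, List

def batched(items: List[str], batch_size: int) -> Iterator[List[str]]:
    """Yield successive fixed-size batches from a list of inputs."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Example: 5 image paths with batch size 2 -> batches of sizes 2, 2, 1.
paths = [f"img_{i}.jpg" for i in range(5)]
batches = list(batched(paths, 2))
print([len(b) for b in batches])  # [2, 2, 1]
```

Each batch would then be preprocessed and passed to `model.generate` in a single call, which is what makes this faster than per-image inference.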

@pribadihcr

Hi, I ran the following example script:

python model_batched_inference.py --model-path shi-labs/vcoder_llava-v1.5-7b --load-4bit --image-folder ../input_folder --output-folder ./out_vcoder --prompt "describe this image in details" 

I got the following error:

File "/VCoder/model_batched_inference.py", line 238, in evaluate_model
    output_ids = model.generate(
File "/Anaconda3/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "/Anaconda3/lib/python3.10/site-packages/transformers/generation/utils.py", line 1282, in generate
    self._validate_model_kwargs(model_kwargs.copy())
File "/Anaconda3/lib/python3.10/site-packages/transformers/generation/utils.py", line 1155, in _validate_model_kwargs
    raise ValueError(
ValueError: The following model_kwargs are not used by the model: ['depths'] (note: typos in the generate arguments will also show up in this list)
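This error means `generate` was handed a `depths` keyword that the loaded model's `forward` does not declare, so transformers' `_validate_model_kwargs` rejects it. One defensive pattern (a sketch only, not the PR's actual fix; `accepted_kwargs` and the `forward` below are hypothetical) is to filter extra model kwargs against the forward signature before calling `generate`:

```python
import inspect

def accepted_kwargs(fn, kwargs):
    """Keep only the keyword arguments that appear in fn's signature.

    Mirrors the check transformers performs in _validate_model_kwargs:
    kwargs the model's forward does not declare raise a ValueError.
    Note this simple filter is defeated if fn accepts **kwargs.
    """
    params = inspect.signature(fn).parameters
    return {k: v for k, v in kwargs.items() if k in params}

# Hypothetical forward of a model loaded without depth support:
def forward(input_ids=None, images=None, segs=None):
    pass

extra = {"images": "img_tensor", "segs": "seg_tensor", "depths": "depth_tensor"}
print(sorted(accepted_kwargs(forward, extra)))  # ['images', 'segs']
```

Dropping `depths` only sidesteps the crash; if the checkpoint is supposed to consume depth maps, the underlying cause is more likely that the wrong model class (one without depth inputs) was instantiated.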
