Evaluation Scripts Clarification #7

Open
ryhnhao opened this issue Jan 8, 2024 · 11 comments

ryhnhao commented Jan 8, 2024

Hi @deepcs233
Thanks for your great work.

Our team is investigating this topic and hopes to re-run the code to reproduce the metrics reported in your paper.

However, we are having difficulties with the evaluation (perhaps because we are unfamiliar with running the CARLA server).

Could you add more details to the evaluation part of readme.md? It would be great if your guidance could help us replicate the metrics claimed in your paper.

Thanks again.

deepcs233 (Collaborator) commented:

Hi!
Thank you for your interest in this project. However, I'm not sure what additional details to add to the evaluation section. Could you describe the specific problems or points of confusion you ran into while reproducing the results?

yanfushan commented:

Hi @deepcs233
Thanks for your great work.
I am a current student. Following the pre-training instructions in readme.md, I found the vision encoder model (vision_encoder/timm/models/memfuser.py), but during model fine-tuning (LAVIS, bash run.sh) I couldn't find a file like "vision_encoder/timm/models/memfuser.py", so I don't know how the model is created. Where is it located in the LAVIS folder?
Thanks again.

deepcs233 (Collaborator) commented:

Hi!
You need to install the modified timm package (follow the setup section):

```sh
cd vision_encoder
pip3 install -r requirements.txt
python setup.py develop  # if you have installed timm before, please uninstall it first
```
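Once the modified package is installed, the model is created by name through timm's registry; here is a minimal sketch (the model name is taken from lmdrive_config.py, and whether extra constructor arguments are needed is an assumption to verify against the repo):

```python
import timm

# 'memfuser_baseline_e1d3_return_feature' is registered by the modified
# timm installed from vision_encoder/, not by upstream timm, so this only
# resolves after the setup step above
model = timm.create_model('memfuser_baseline_e1d3_return_feature')
```

That is presumably why no memfuser.py appears under the LAVIS folder: the fine-tuning code imports the model through the installed timm package rather than from a local file.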

yanfushan commented:

@deepcs233 Thank you for your reply!


ryhnhao commented Jan 16, 2024

Hi @deepcs233

I'm trying to run the evaluation through run_evaluation.sh:

```sh
CUDA_VISIBLE_DEVICES=0 ./leaderboard/scripts/run_evaluation.sh
```

There are two specific issues here:

1. As shown in figure 1, it seems CARLA isn't working properly. How can I solve this?

   [screenshot]

BTW, I have followed your instructions in the "Download and setup CARLA 0.9.10.1" part of readme.md.

BTW, I notice the code is based on CARLA 0.9.10.1, but the pip-installed carla version is 0.9.15. Does that matter? Most users will directly pip install carla following your instructions in the "Download and setup CARLA 0.9.10.1" part of readme.md (we cannot pip install carla==0.9.10 or 0.9.10.1, which are not released versions), so if others don't run into the same problem, it may not matter.

2. leaderboard/scripts/run_evaluation.sh mentions that the checkpoint info is in results/lmdrive_result.json, and leaderboard/team_code/lmdrive_config.py also mentions the checkpoint info as follows:

   ```python
   llm_model = '/data/llava-v1.5-7b'
   preception_model = 'memfuser_baseline_e1d3_return_feature'
   preception_model_ckpt = 'sensor_pretrain.pth.tar.r50'
   lmdrive_ckpt = 'lmdrive_llava.pth'
   ```

   So how are the checkpoints organized in results/lmdrive_result.json? Could you provide an example lmdrive_result.json?

Thanks a lot.

Best regards.

deepcs233 (Collaborator) commented:

Hi!

  1. Can you show the command that starts your CARLA server? The port set for the CARLA server and the one in run_evaluation.sh need to be the same (see the sketch below). The version is fine: 0.9.10.1 is the version of the CARLA application, while 0.9.15 is the version of the carla Python API; the two don't need to match.
  2. results/lmdrive_result.json only records the evaluation results and details. The file will be generated when you finish the evaluation.
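A minimal sketch of what matching ports looks like, assuming the stock CARLA 0.9.10.1 launch script and that run_evaluation.sh picks up the port from a variable (the variable name PORT below is an assumption; check the actual script):

```sh
# Terminal 1: start the CARLA server on an explicit RPC port
./CarlaUE4.sh --world-port=2000 -opengl

# Terminal 2: the leaderboard evaluator must target that same port
CUDA_VISIBLE_DEVICES=0 PORT=2000 ./leaderboard/scripts/run_evaluation.sh
```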


ryhnhao commented Jan 17, 2024


Hi!
Thanks for your prompt reply!

  1. The CARLA server start command and the leaderboard evaluator command are the same as in run_evaluation.sh.

     As I understand it, the CARLA server start command is in the red box and the leaderboard evaluator command is in the green box. The ports for the CARLA server and the leaderboard evaluator are the same.

     [screenshot]

  2. OK, got it.

Thanks again.

deepcs233 (Collaborator) commented:

Hi!
Could you try sleeping for 10 seconds to wait for your CARLA server to start? Also, use nvidia-smi or the GUI to check whether the server is ready.
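A quick sketch of that readiness check, assuming a Linux install and the stock launch script (the grep pattern is an assumption; the process name shown by nvidia-smi may differ on your system):

```sh
# launch the server in the background, give it time to come up,
# then confirm it shows up on the GPU before starting the evaluator
./CarlaUE4.sh --world-port=2000 -opengl &
sleep 10
nvidia-smi | grep -i carla && echo "server is up"
```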

yanfushan commented:

[screenshot]
Hello! Thank you for your excellent work. I have a question about reading the dataset; it may be a fairly simple problem, but I would like to ask for your advice. When reading the file "navigation_instruction_list.txt", I am not clear on the meanings of strings such as "Follow-03-s1 dis", "Other-01", "Turn-04-S", and "Turn-06-S-L", especially the parts such as "s1", "dis", "Other", "S-L", and "S". Could you explain them? Thank you again.


MayDGT commented Jul 5, 2024

(quoting @ryhnhao's comment from Jan 16 above)

Hi @deepcs233 @ryhnhao, I ran into the same problem when executing ./leaderboard/scripts/run_evaluation.sh for the evaluation:

  • UnboundLocalError: local variable 'leaderboard_evaluator' referenced before assignment
  • AttributeError: 'LeaderboardEvaluator' object has no attribute 'manager'

How did you solve it?


MayDGT commented Jul 6, 2024

Update: in my case, the two errors from my previous comment occurred after another error:

  • ImportError: cannot import name '_expand_mask' from 'transformers.models.clip.modeling_clip'

Following this solution: salesforce/LAVIS#571 (comment), the original two errors disappeared and the evaluation started successfully.
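For reference, the linked fix amounts to pinning transformers to a release that still exposes _expand_mask (the exact version below is an assumption based on that thread; check the linked comment for the recommended pin):

```sh
pip install "transformers==4.33.2"  # a version that predates the _expand_mask removal
```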
