Evaluation Scripts Clarification #7
Hi!

Hi @deepcs233
Hi!

cd vision_encoder
pip3 install -r requirements.txt
python setup.py develop  # if you have installed timm before, please uninstall it
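As a hedged sanity check, assuming `python setup.py develop` installs a modified `timm` (as the uninstall note above suggests), you can confirm which `timm` Python actually picks up; the expected location is the copy built under `vision_encoder`, not an older site-packages install:

```bash
# Sketch only: verify the locally built timm is the one being imported
pip3 show timm                                        # Location should point at vision_encoder, not site-packages
python3 -c "import timm; print(timm.__version__, timm.__file__)"
```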
@deepcs233 Thank you for your reply!
Hi @deepcs233, I'm trying to run the evaluation through run_evaluation.sh. There are two specific issues here:

By the way, I have followed your instructions in the "Download and setup CARLA 0.9.10.1" part of readme.md.

Also, I notice the code is based on CARLA 0.9.10.1, but the pip-installed carla version is 0.9.15. Does that matter? Most users will directly pip install carla, as your instructions in the "Download and setup CARLA 0.9.10.1" part of readme.md suggest (we cannot pip install carla==0.9.10 or 0.9.10.1, since those versions were never released on PyPI), so if others don't run into the same question, it may not matter.

Thanks a lot. Best regards.
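In case the version mismatch does matter, one possible workaround (a sketch only; `CARLA_ROOT` and the exact egg filename are assumptions that depend on where and how the 0.9.10.1 release was unpacked) is to point `PYTHONPATH` at the Python client bundled with the CARLA 0.9.10.1 server instead of relying on the pip-installed 0.9.15 package:

```bash
# Sketch: use the client API shipped with the 0.9.10.1 server (paths are assumptions)
export CARLA_ROOT=/path/to/CARLA_0.9.10.1
export PYTHONPATH=$PYTHONPATH:$CARLA_ROOT/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg
pip3 uninstall -y carla                               # optional: remove the 0.9.15 client so it cannot shadow the egg
python3 -c "import carla; print(carla.__file__)"      # should resolve to a path inside CARLA_ROOT
```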
Hi!
Hi!
According to my understanding, the CARLA server start command is the one in the red box, and the leaderboard evaluator command is the one in the green box. The ports in the CARLA server command and the leaderboard evaluator command are the same.
Thanks again.
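For reference, a minimal sketch of what that port pairing usually looks like (the flag names and the leaderboard script path follow the standard CARLA leaderboard setup and are assumptions here, not lines taken from run_evaluation.sh):

```bash
# Sketch: the server and the evaluator must agree on the RPC port (2000 is just an example)
./CarlaUE4.sh --world-port=2000 -opengl &             # CARLA 0.9.10.1 server
python3 leaderboard/leaderboard_evaluator.py \
    --port=2000 \
    --trafficManagerPort=8000 \
    --routes=<routes_file> \
    --scenarios=<scenarios_file> \
    --agent=<agent_file>
```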
Hi!
Hi @deepcs233 @ryhnhao, I met the same problem when executing
How did you solve it?
Update: In my case, the two errors occurred after another error:
Following this solution: salesforce/LAVIS#571 (comment), the original two errors disappeared and the evaluation process started successfully.
Hi @deepcs233,
Thanks for your great work.
Our team is investigating this topic and hopes to run the code to reproduce the metrics reported in your paper.
However, we are having difficulties with the evaluation (perhaps because we are unfamiliar with running the CARLA server).
Could you add more details to the evaluation part of readme.md?
It would be great if your guidance could help us replicate the metrics claimed in your paper.
Thanks again.