
feat(perception_benchmark_tool): add perception benchmark tool #603

Conversation

@kaancolak (Contributor) commented Mar 29, 2022

Signed-off-by: kaancolak kcolak@leodrive.ai

Description

Resolves #565
Builds on top of #565

Related links

Tests performed

Notes for reviewers

Pre-review checklist for the PR author

The PR author must check the checkboxes below when creating the PR.

In-review checklist for the PR reviewers

The PR reviewers must check the checkboxes below before approval.

  • The PR follows the pull request guidelines.
  • The PR has been properly tested.
  • The PR has been reviewed by the code owners.

Post-review checklist for the PR author

The PR author must check the checkboxes below before merging.

  • There are no open discussions or they are tracked via tickets.
  • The PR is ready for merge.

After all checkboxes are checked, anyone who has write access can merge the PR.

@kaancolak self-assigned this Mar 30, 2022
@aohsato added the component:perception label on Apr 14, 2022
@kaancolak (Contributor Author)

@1222-takeshi I approved your review request, but this pull request is still a draft.

@kaancolak force-pushed the 565-perception-benchmark-tool branch from e9c2deb to c920d44 on April 21, 2022 16:48
@kaancolak force-pushed the 565-perception-benchmark-tool branch from c920d44 to a342974 on April 21, 2022 16:55
kaancolak and others added 13 commits April 21, 2022 20:03
…enchmark tool package

Signed-off-by: Kaan Colak <kcolak@leodrive.ai>
Signed-off-by: Kaan Colak <kcolak@leodrive.ai>
Signed-off-by: Kaan Colak <kcolak@leodrive.ai>
Signed-off-by: Kaan Colak <kcolak@leodrive.ai>
Signed-off-by: Kaan Colak <kcolak@leodrive.ai>
Signed-off-by: Kaan Çolak <kaancolak95@gmail.com>
Signed-off-by: Kaan Çolak <kaancolak95@gmail.com>
@mitsudome-r marked this pull request as ready for review on May 10, 2022 07:59
@kaancolak (Contributor Author)

I shared the initial 3D tracking benchmark results in the README file linked in the PR. It contains only the results of the lidar-only pipeline. Running the camera-lidar fusion pipeline requires multiple GPUs, since the Waymo dataset contains 5 cameras. I will add its results after PR #736.

For vehicles, there is no blocker. For pedestrians, however, we have some problems. In Autoware.Universe we assign a constant length and width of 1 meter to pedestrian bounding boxes. The Waymo dataset, though, uses strict 3D IoU thresholds for matching tracked ground-truth objects with tracked object predictions (Vehicle: 0.7, Pedestrian and Cyclist: 0.5). With a fixed pedestrian size, the IoU falls below the cutoff, so pedestrian scores are almost zero.

I see only two options. We could use the bounding boxes coming directly from the 3D detection nodes just for evaluation, but that requires an external change to the perception stack, and reproducing the benchmark results could be really hard. Alternatively, we could lower the score threshold in the dataset config file, but then we could no longer compare our results with others.
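
To make the cutoff problem concrete, here is a minimal sketch (not part of this PR; the ground-truth dimensions below are assumed for illustration) showing that even a perfectly centered prediction with the fixed 1 m x 1 m footprint can land below the 0.5 threshold:

# Illustrative sketch only: axis-aligned 3D IoU between a prediction with the
# fixed 1 m x 1 m footprint and a hypothetical ground-truth pedestrian box.
# Both boxes share the same center, which is the best possible case.
def iou_3d_centered(dims_a, dims_b):
    """IoU of two co-centered, axis-aligned boxes given (length, width, height)."""
    inter = 1.0
    for a, b in zip(dims_a, dims_b):
        inter *= min(a, b)  # overlap along each axis is the smaller extent
    vol_a = dims_a[0] * dims_a[1] * dims_a[2]
    vol_b = dims_b[0] * dims_b[1] * dims_b[2]
    return inter / (vol_a + vol_b - inter)

pred = (1.0, 1.0, 1.7)  # fixed footprint assigned by the tracker
gt = (0.7, 0.7, 1.7)    # assumed typical pedestrian ground truth
print(f"{iou_3d_centered(pred, gt):.2f}")  # 0.49, already below the 0.5 cutoff

Any center offset or height mismatch lowers this further, which matches the near-zero pedestrian scores.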

If you have any suggestions or advice please share them.

@kaancolak (Contributor Author)

@aohsato @yukkysaito @miursh @yukke42 @1222-takeshi @mitsudome-r @xmfcx

@xmfcx (Contributor) commented May 17, 2022

For pedestrians, however, we have some problems. In Autoware.Universe we assign a constant length and width of 1 meter to pedestrian bounding boxes. The Waymo dataset, though, uses strict 3D IoU thresholds for matching tracked ground-truth objects with tracked object predictions. With a fixed pedestrian size, the IoU falls below the cutoff.

If the current stack gets 0 points for pedestrians, that's OK. We can work on improving the detection pipeline to raise the score later on.

@1222-takeshi when can you review this? Will you be able to reproduce the results by following the instructions in the README file?

kaancolak and others added 2 commits May 23, 2022 00:45
@miursh (Contributor) commented May 31, 2022

@kaancolak I believe this benchmark should be placed under the "perception" directory. Is there any reason for putting it directly at the top level?

@kaancolak (Contributor Author)

Thanks, @miursh for your feedback.

Actually, I talked to Fatih when I first started this tool, and he advised this directory structure. It could also live under Perception, but some developers from Robotec are working on generic evaluation tools that will contain multiple packages for metric calculation (localization, control, etc.). We could collect all benchmarking and evaluation tools under a single top-level folder, but this is optional.

For now, I think reproducing the benchmark results of this package is enough. Our goal was to compare our tracking results with the other submissions in the Waymo 3D Tracking Challenge.

Also, after the generic evaluation tools are finished, I am planning to connect them with the perception benchmark tool to build a more generic evaluation / benchmarking toolset.

@mitsudome-r (Member)

@ktro2828 Are you done with the review of the PR? If so, I would like you to give approval so that we can merge.

@ktro2828 (Contributor)

@mitsudome-r Sorry, not yet. As mentioned above (his reply from 15 days ago), @kaancolak is still updating the code. Once he is ready, I will re-review.

Signed-off-by: Kaan Colak <kcolak@leodrive.ai>
@kaancolak (Contributor Author) commented Sep 19, 2022

@ktro2828, sorry for the delay; I was dealing with other issues related to the Bus ODD field tests.

I made the updates; the package is now ready for review.

kaancolak and others added 2 commits September 20, 2022 11:17
@ktro2828 (Contributor) commented Sep 26, 2022

@kaancolak Sorry for the late reaction. I'm reviewing, but build-and-test fails in CI/CD. Can you fix it?
Also, please add copyright headers to the top of the source files.

@kaancolak (Contributor Author)

Thanks, I added the license headers. Currently, the Autoware docker image doesn't contain TensorFlow, but all data in the Waymo dataset is in TensorFlow's "tfrecord" format. For this reason, it doesn't pass the CI/CD pipeline at the moment.
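
For context, even just reading a Waymo segment pulls in TensorFlow, because each segment is a TFRecord of serialized Frame protos. A minimal sketch (the file name is a placeholder; tensorflow and waymo-open-dataset are assumed to be installed):

# Sketch: decode one frame from a Waymo segment. The file name is a placeholder.
import tensorflow as tf
from waymo_open_dataset import dataset_pb2

dataset = tf.data.TFRecordDataset("segment-XXXX.tfrecord")
for raw_record in dataset:
    frame = dataset_pb2.Frame()
    frame.ParseFromString(raw_record.numpy())  # each record is one Frame proto
    print(frame.context.name, len(frame.lasers), "lidars")
    break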

Signed-off-by: Kaan Colak <kcolak@leodrive.ai>
@xmfcx (Contributor) commented Sep 26, 2022

@kenji-miyake What is the right way to handle pip packages like tensorflow and waymo_open_dataset?

  • Should we put them to ansible?
  • Or should we skip this test?
  • Is it possible to add pip package dependencies to package.xml?

Here I suspect launch-testing-ros causes this error:
https://github.com/autowarefoundation/autoware.universe/actions/runs/3088532428/jobs/5011962436

@kenji-miyake (Contributor)

Is it possible to add pip package dependencies to package.xml?

@xmfcx Yes, it is possible. You can send pull requests to add your dependencies here.
https://github.com/ros/rosdistro/tree/master/rosdep

If you have trouble with rosdistro, I think we can consider using Ansible instead.

And I believe skipping tests is only acceptable as a tentative workaround. In this case, you should create a follow-up issue.
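
For reference, once a pip package has a rosdep key in ros/rosdistro, a package consumes it from package.xml like any other dependency. A sketch, where the key name below is an assumption for illustration rather than the key actually registered:

<!-- Sketch: the rosdep key name below is assumed for illustration; rosdep's pip
     installer resolves such keys via rosdistro's python.yaml. -->
<exec_depend>python3-waymo-open-dataset-tf-2-6-0-pip</exec_depend>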

@kaancolak (Contributor Author)

I created a PR for waymo_open_dataset (related link). After that PR is merged, waymo_open_dataset will install the relevant TensorFlow version.

@ktro2828 (Contributor) commented Oct 3, 2022

@kaancolak It seems the PR has been merged, so please update package.xml to install waymo-open-dataset with rosdep.

@kaancolak force-pushed the 565-perception-benchmark-tool branch from caf3cc5 to 036c873 on October 3, 2022 15:00
@kaancolak (Contributor Author)

waymo-open-dataset-tf-2-6-0 depends on protobuf==3.9.2, but the system has protobuf==4.21.7 for some reason.

This causes the Python unit tests to fail.

I couldn't find what installs protobuf==4.21.7.

But if I install waymo-open-dataset without sudo (pip install waymo-open-dataset-tf-2-6-0), pip puts protobuf under ~/.local/lib/python3.8/site-packages/google/protobuf/, which takes precedence over the system copy, and everything works.
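
A quick way to confirm which protobuf the interpreter actually picks up (a sketch, not from this PR; it relies on user site-packages preceding system site-packages on sys.path):

# Sketch: verify that the user-site protobuf shadows the system-wide one.
import google.protobuf
from importlib.metadata import version  # Python 3.8+

print(google.protobuf.__file__)  # expect a path under ~/.local/... after the pip install
print(version("protobuf"))       # expect the version pinned by waymo-open-dataset-tf-2-6-0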

@kenji-miyake Is there a clean way to skip colcon test for only this package (without needing to modify GitHub Actions)?

If so, I can update the README instructions to install it that way.

If not, I will close this PR and add it as a separate repository in autowarefoundation org.

@kaancolak force-pushed the 565-perception-benchmark-tool branch from 036c873 to adc7951 on October 4, 2022 10:28
@kenji-miyake (Contributor)

@kaancolak I guess the dependencies of waymo-open-dataset-tf-2-6-0 aren't declared properly.

$ docker run --rm -it ubuntu:20.04 /bin/bash
$ apt update && apt install -y python3-pip
$ pip3 install -U -q waymo-open-dataset-tf-2-6-0
$ pip list | grep protobuf
protobuf                    4.21.7

Also, the build fails on Humble. In that case, you can't merge this PR.
https://github.com/autowarefoundation/autoware.universe/actions/runs/3181365356/jobs/5186030427#step:6:118

$ docker run --rm -it ubuntu:22.04 /bin/bash
$ apt update && apt install -y python3-pip
$ pip3 install -U -q waymo-open-dataset-tf-2-6-0
ERROR: Could not find a version that satisfies the requirement waymo-open-dataset-tf-2-6-0 (from versions: none)
ERROR: No matching distribution found for waymo-open-dataset-tf-2-6-0

@xmfcx (Contributor) commented Oct 4, 2022

You can move the contents of this package to https://github.com/autowarefoundation/perception_benchmark_tool

@kaancolak (Contributor Author)

This PR has moved to autowarefoundation/perception_benchmark_tool#1.

@kaancolak closed this on Oct 10, 2022
kyoichi-sugahara pushed a commit that referenced this pull request Sep 16, 2023
feat: change launch repo to awf/autoware_launch

Signed-off-by: Takayuki Murooka <takayuki5168@gmail.com>
Labels
component:perception (Advanced sensor data processing and environment understanding; auto-assigned)

Merging this pull request may close the following issue:
Evaluate performance of object detection pipeline (#565)

8 participants