
Could you give an example of pandaset? #274

Closed
cuge1995 opened this issue Sep 2, 2020 · 9 comments
Labels
enhancement (New feature or request), stale

Comments

@cuge1995

cuge1995 commented Sep 2, 2020

No description provided.

@jacoblambert

jacoblambert commented Sep 3, 2020

It's not the prettiest approach, but I personally wrote a piece of code to (see the sketch after this list):

  • Load PandaSet sequence by sequence, frame by frame,
  • Bring the LiDAR data and annotations into the Pandar64 ego frame,
  • Normalize the LiDAR intensity to the 0-1 range,
  • Rename classes containing spaces, which would break the kitti_utils loading code, e.g. map 'Emergency Vehicle' -> 'Emergency-Vehicle',
  • Pick my preferred camera and obtain the necessary TF information,
  • Save the point clouds, labels, images, and calibration files in KITTI format,
  • Create training / validation split files.
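
Roughly, the ego-frame part of that conversion can look like the sketch below (not my exact code; the devkit accessors and field names it relies on — seq.lidar.poses, the position.x / dimensions.x cuboid columns, the 0-255 raw intensity range — are assumptions to check against your pandaset-devkit version):

```python
import numpy as np
from scipy.spatial.transform import Rotation
from pandaset import DataSet

CLASS_RENAMES = {'Emergency Vehicle': 'Emergency-Vehicle'}  # extend as needed

def pose_to_matrix(pose):
    """Build a 4x4 world-from-sensor matrix from a devkit pose dict."""
    q, t = pose['heading'], pose['position']        # {'w','x','y','z'}, {'x','y','z'}
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat([q['x'], q['y'], q['z'], q['w']]).as_matrix()
    T[:3, 3] = [t['x'], t['y'], t['z']]
    return T

def world_to_ego(points_xyz, pose):
    """Transform Nx3 world-frame points into the sensor (ego) frame."""
    T_inv = np.linalg.inv(pose_to_matrix(pose))
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return (homo @ T_inv.T)[:, :3]

dataset = DataSet('/data/pandaset')                 # example path
seq = dataset['002']
seq.load()

frame_idx = 0
lidar_df = seq.lidar[frame_idx]                     # columns x, y, z, i, t, d (world frame)
pose = seq.lidar.poses[frame_idx]

points_ego = world_to_ego(lidar_df[['x', 'y', 'z']].to_numpy(), pose)
intensity = lidar_df['i'].to_numpy() / 255.0        # assuming raw intensity in [0, 255]

cuboids = seq.cuboids[frame_idx]                    # world-frame boxes
centers_ego = world_to_ego(
    cuboids[['position.x', 'position.y', 'position.z']].to_numpy(), pose)
R_ego = pose_to_matrix(pose)[:3, :3]
ego_yaw = np.arctan2(R_ego[1, 0], R_ego[0, 0])      # heading of the sensor in the world frame
yaw_ego = cuboids['yaw'].to_numpy() - ego_yaw       # approximate: ignores ego roll/pitch
labels = cuboids['label'].replace(CLASS_RENAMES)
```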

Then I created a dataset class following the KITTI template:

  • Change the loading functions,
  • Make sure to remove the camera->lidar frame TF that standard KITTI expects (set it to identity; sketched below).
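
The identity-TF part in practice just means writing calib files like the sketch below, so the KITTI-style "camera frame" boxes live directly in the lidar/ego frame (the P matrices here are placeholders; substitute the intrinsics of whichever camera you export):

```python
import numpy as np

def write_identity_calib(path, P=None):
    """Write a KITTI-style calib file whose velo->cam TF and rectification are identity."""
    if P is None:
        P = np.hstack([np.eye(3), np.zeros((3, 1))])   # placeholder 3x4 projection matrix
    identity_3x4 = np.hstack([np.eye(3), np.zeros((3, 1))])

    def fmt(m):
        return ' '.join(f'{v:.12e}' for v in np.asarray(m).flatten())

    with open(path, 'w') as f:
        for cam in ('P0', 'P1', 'P2', 'P3'):
            f.write(f'{cam}: {fmt(P)}\n')
        f.write(f'R0_rect: {fmt(np.eye(3))}\n')
        f.write(f'Tr_velo_to_cam: {fmt(identity_3x4)}\n')
        f.write(f'Tr_imu_to_velo: {fmt(identity_3x4)}\n')
```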

Finally, I made a dataset_config and pandaset_models with modified parameters, similar to the nuscenes models (full point cloud, smaller voxel size). I had decent results with second_multihead and am now training pv_rcnn. But I think evaluating on PandaSet this way is not entirely fair: because of how the dataset was created, there are many boxes without any lidar points. These are ignored during training but, as far as I know, still counted in evaluation. I plan to create a modified label list containing only bounding boxes that have lidar points and are in range, to give a fairer evaluation.
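
The filtering I have in mind is roughly the sketch below, assuming ego-frame points and boxes encoded as (x, y, z, dx, dy, dz, yaw); the range values are only an example:

```python
import numpy as np

def points_in_box(points, box):
    cx, cy, cz, dx, dy, dz, yaw = box
    # Rotate points into the box's local frame, then do an axis-aligned check.
    local = points[:, :3] - np.array([cx, cy, cz])
    c, s = np.cos(-yaw), np.sin(-yaw)
    x = c * local[:, 0] - s * local[:, 1]
    y = s * local[:, 0] + c * local[:, 1]
    z = local[:, 2]
    return (np.abs(x) <= dx / 2) & (np.abs(y) <= dy / 2) & (np.abs(z) <= dz / 2)

def filter_gt_boxes(points, boxes, point_cloud_range=(-70.4, -40, -3, 70.4, 40, 1)):
    """Keep indices of boxes that are in range and contain at least one lidar point."""
    xmin, ymin, _, xmax, ymax, _ = point_cloud_range
    keep = []
    for i, box in enumerate(boxes):
        in_range = (xmin <= box[0] <= xmax) and (ymin <= box[1] <= ymax)
        if in_range and points_in_box(points, box).any():
            keep.append(i)
    return np.array(keep, dtype=np.int64)
```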

Good luck!

@MartinHahner
Contributor

@jacoblambert

It's not the prettiest approach, but I personally wrote a piece of code

Even though you say your code is not the prettiest,
I think all of us would be happy if you created a pull request and shared that piece of code anyway. 🙂

@xujinchang

@jacoblambert Could you share your PandaSet conversion code?

@yangnvzi

@jacoblambert
Hi, I was glad to read your comments.
I have recently also been trying to convert Pandar64 data into KITTI format, but I've run into some problems bringing the annotations (cuboids and semantic segmentation) into the Pandar64 ego frame:
1) According to https://github.com/scaleapi/pandaset-devkit, the cuboid dimensions (dimensions.x, dimensions.y, dimensions.z) are given in world coordinates. How do you bring the cuboid annotations into the Pandar64 ego frame?
2) The semseg data only contains an index and a class_id. How do you tell whether a given lidar point comes from the Pandar64 or the PandarGT?

@sshaoshuai sshaoshuai added the enhancement (New feature or request) label Oct 19, 2020
@dkoguciuk

Hi @jacoblambert ,

can you elaborate on why you suggest bringing the lidar & labels into the ego frame? I'm trying to understand the logic behind it :)

Best,
Daniel

@lea-v
Contributor

lea-v commented Mar 4, 2021

Hi,
in case it helps, I've started a merge request to integrate PandaSet support (#396).
What it contains:

  • training directly on PandaSet data (without converting to KITTI format),
  • using gt_sampling during training,
  • picking the categories to train on,
  • selecting the lidar(s) to load data from,
  • saving the network output in the same format as the PandaSet cuboid annotations (see the sketch after this list),
  • a visual check that pv_rcnn trained on PandaSet gives coherent results.
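
For the output format, something like the sketch below is what I have in mind (not the exact code in the MR; the per-frame <sequence>/annotations/cuboids/<frame>.pkl.gz layout and the column names follow my reading of the devkit ground truth, so verify them against your data):

```python
import gzip
import pickle
import pandas as pd

def save_cuboids(out_path, boxes, labels):
    """boxes: iterable of (x, y, z, dx, dy, dz, yaw) in world frame; labels: class names."""
    rows = []
    for (x, y, z, dx, dy, dz, yaw), label in zip(boxes, labels):
        rows.append({
            'label': label,
            'yaw': yaw,
            'position.x': x, 'position.y': y, 'position.z': z,
            'dimensions.x': dx, 'dimensions.y': dy, 'dimensions.z': dz,
        })
    df = pd.DataFrame(rows)
    # e.g. out_path = '.../annotations/cuboids/00.pkl.gz', matching the ground-truth naming
    with gzip.open(out_path, 'wb') as f:
        pickle.dump(df, f)
```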

The main missing parts are:

  • the evaluation code, which is complicated by the fact that there is no "official" evaluation method for PandaSet,
  • adding trained models to the model zoo.

Feel free to build on this when trying to train with PandaSet.

Best,
Léa

@Leozyc-waseda

@jacoblambert
Hi,

Can you tell me how you evaluated on PandaSet?
It would be nice if there were code for this.
I have already run PandaSet through the dataloader and generated the data infos.

@github-actions

This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale label Mar 12, 2022
@github-actions

This issue was closed because it has been inactive for 14 days since being marked as stale.
