This repository lets you run object detection inference with RKNN or ONNX models and evaluate the results using COCO metrics.
- Multi-Model Support: Compatible with RKNN (`.rknn`) and ONNX (`.onnx`) models.
- Annotation Conversion: Convert Ultralytics-style (`.txt`) annotations to COCO format.
- Configurable Parameters: Easily adjust thresholds, image sizes, and class labels via `config.json`.
- COCO Evaluation: Compute COCO mAP metrics using `pycocotools`.
- Modular Design: Organized into separate modules for preprocessing, postprocessing, and annotation conversion for easier maintenance and extension.
- Clone the Repository

  ```bash
  git clone https://github.com/Applied-Deep-Learning-Lab/rknn-metrics.git
  cd rknn-metrics
  ```

- Install Dependencies

  ```bash
  pip install -r requirements.txt
  ```
All configurable parameters are stored in `config.json`. Modify this file to adjust object detection thresholds, image sizes, and class labels.
```json
{
  "OBJ_THRESH": 0.3,
  "NMS_THRESH": 0.5,
  "IMG_SIZE": [960, 960],
  "CLASSES": {
    "0": "bike",
    "1": "bus",
    "2": "car",
    "3": "construction equipment",
    "4": "emergency",
    "5": "motorbike",
    "6": "personal mobility",
    "7": "quadbike",
    "8": "truck"
  }
}
```
- `OBJ_THRESH`: Object confidence threshold.
- `NMS_THRESH`: Non-Maximum Suppression (NMS) threshold.
- `IMG_SIZE`: Target image size for inference (width, height).
- `CLASSES`: Mapping of class IDs to class names.
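For context on how `IMG_SIZE` is typically consumed: YOLO-style pipelines usually letterbox each image to the target size, scaling with the aspect ratio preserved and padding the remainder. The helper below is an illustrative sketch (not part of this repository's preprocessing module) that computes the resulting scale and padding:

```python
def letterbox_params(src_w, src_h, dst_w, dst_h):
    """Compute scale and per-side padding for an aspect-preserving resize."""
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst_w - new_w) / 2  # horizontal padding per side
    pad_y = (dst_h - new_h) / 2  # vertical padding per side
    return scale, (new_w, new_h), (pad_x, pad_y)

# A 1920x1080 frame letterboxed to the configured 960x960 target:
scale, size, pad = letterbox_params(1920, 1080, 960, 960)
```

Here the frame is halved to 960x540 and padded with 210 pixels above and below to fill the 960x960 canvas.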
Use `main.py` to perform object detection inference on a set of images. The script supports displaying and saving results, as well as evaluating them with COCO mAP metrics.
- `--model_path` (str, required): Path to the model file (`.rknn`, `.onnx`).
- `--img_show` (flag): Display detection results on-screen.
- `--img_save` (flag): Save detection results to disk (default: `./result`).
- `--anno_json` (str): Path to the COCO annotation file (default: `path/to/annotations/instances_val2017.json`).
- `--img_folder` (str): Path to the folder containing images (default: `path/to/images`).
- `--coco_map_test` (flag): Enable COCO mAP evaluation.
- `--ultralytics_dir` (str): Path to the directory of Ultralytics `.txt` annotations.
- `--coco_output` (str): Output path for the converted COCO annotations.
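For readers extending the CLI, the flags above could be declared with `argparse` roughly as follows (an illustrative sketch; see `main.py` for the actual definitions):

```python
import argparse

def build_parser():
    p = argparse.ArgumentParser(
        description="RKNN/ONNX detection inference with COCO mAP evaluation")
    p.add_argument("--model_path", type=str, required=True,
                   help="Path to the .rknn or .onnx model file")
    p.add_argument("--img_show", action="store_true",
                   help="Display detection results on-screen")
    p.add_argument("--img_save", action="store_true",
                   help="Save detection results to disk (./result)")
    p.add_argument("--anno_json", type=str,
                   default="path/to/annotations/instances_val2017.json")
    p.add_argument("--img_folder", type=str, default="path/to/images")
    p.add_argument("--coco_map_test", action="store_true",
                   help="Enable COCO mAP evaluation")
    p.add_argument("--ultralytics_dir", type=str,
                   help="Directory of Ultralytics .txt annotations")
    p.add_argument("--coco_output", type=str,
                   help="Output path for converted COCO annotations")
    return p

# Parse a sample command line:
args = build_parser().parse_args(
    ["--model_path", "models/yolov8.rknn", "--coco_map_test"])
```

Flags declared with `action="store_true"` default to `False`, so only the switches you pass are enabled.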
```bash
python main.py \
    --model_path models/yolov8.rknn \
    --img_folder data/valid/images \
    --img_show \
    --img_save \
    --coco_map_test \
    --anno_json data/valid/instances_val2017.json
```
To convert Ultralytics-style `.txt` annotations to COCO format, use the `--ultralytics_dir` and `--coco_output` arguments.
```bash
python main.py \
    --model_path models/yolov8.rknn \
    --img_folder data/valid/images \
    --ultralytics_dir data/ \
    --coco_output data/valid/coco_annotations.json \
    --coco_map_test
```
This command will:

- Convert Ultralytics `.txt` annotations in `data/valid/labels` to COCO format and save them to `data/valid/coco_annotations.json`.
- Use the converted annotations for COCO mAP evaluation.
Processes raw model outputs into final detection boxes, classes, and scores, specifically for the modified YOLOv8 models from AIRockchip.
- Functions:
  - `filter_boxes(boxes, box_confidences, box_class_probs)`: Filters boxes based on the object confidence threshold.
  - `nms_boxes(boxes, scores)`: Applies NMS to suppress overlapping boxes.
  - `dfl(position: np.ndarray)`: Decodes position data.
  - `box_process(position, IMG_SIZE)`: Converts model output to bounding boxes.
  - `post_process(input_data, IMG_SIZE)`: Aggregates and filters detections.
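As a conceptual reference for what `nms_boxes` does, here is a minimal greedy NMS in NumPy (an illustrative re-implementation, not the repository's code):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS. boxes: (N, 4) array of [x1, y1, x2, y2]; returns kept indices."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top-scoring box against all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores, iou_thresh=0.5)
```

The second box overlaps the first with IoU ≈ 0.68, above the 0.5 threshold, so it is suppressed; the distant third box survives.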
Converts Ultralytics-style `.txt` annotations to COCO format.
- Functions:
  - `ultralytics_to_coco(ultralytics_dir, coco_output_path, categories)`: Performs the conversion.
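The core of such a conversion is the bbox transform: Ultralytics lines store `class cx cy w h` normalized to [0, 1], while COCO uses absolute `[x_min, y_min, width, height]`. A sketch of that step (the actual `ultralytics_to_coco` also builds the surrounding `images`/`annotations`/`categories` JSON):

```python
def yolo_to_coco_bbox(cx, cy, w, h, img_w, img_h):
    """Convert a normalized YOLO box (center x/y, width, height) to COCO xywh."""
    abs_w, abs_h = w * img_w, h * img_h
    x_min = cx * img_w - abs_w / 2
    y_min = cy * img_h - abs_h / 2
    return [x_min, y_min, abs_w, abs_h]

# A box centered in a 960x960 image, covering half of each dimension:
box = yolo_to_coco_bbox(0.5, 0.5, 0.5, 0.5, 960, 960)
```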
Loads configuration parameters from `config.json`.
- Variables:
  - `config`: Dictionary containing configuration parameters.
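A module like this usually amounts to a few lines of JSON loading; the sketch below is illustrative (the real `config.py` may differ), demonstrated against a temporary file:

```python
import json
import os
import tempfile
from pathlib import Path

def load_config(path):
    """Read the JSON configuration into a plain dict."""
    return json.loads(Path(path).read_text())

# Demonstrate with a minimal config written to a temp file:
sample = {"OBJ_THRESH": 0.3, "NMS_THRESH": 0.5, "IMG_SIZE": [960, 960]}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
config = load_config(f.name)
os.unlink(f.name)
```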
The main script to perform inference, handle annotation conversion, and evaluate results.
- Key Steps:
  - Parse command-line arguments.
  - Convert annotations if Ultralytics format is provided.
  - Initialize the specified model.
  - Perform inference on each image.
  - Optionally display and/or save results.
  - Record detections for COCO mAP evaluation.
  - Compute and display mAP and average latency.
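For reference on the "record detections" step: `pycocotools` evaluates a list of per-box records in the COCO results format. Building one such record might look like the sketch below (field names follow the COCO results format; the exact glue code in `main.py` may differ):

```python
def coco_detection(image_id, category_id, bbox_xyxy, score):
    """Build one COCO-results-format record from an [x1, y1, x2, y2] box."""
    x1, y1, x2, y2 = bbox_xyxy
    return {
        "image_id": image_id,
        "category_id": category_id,
        "bbox": [x1, y1, x2 - x1, y2 - y1],  # COCO uses [x, y, width, height]
        "score": round(float(score), 5),
    }

detections = [coco_detection(42, 2, [100, 50, 300, 200], 0.8731)]
# Dumping `detections` to JSON would then feed COCOeval via loadRes().
```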
- Prepare Data
  - Place your images in `data/valid/images`.
  - If using Ultralytics annotations, place `.txt` files in `data/valid/ultralytics`.
- Configure `config.json`

  Adjust thresholds, image sizes, and class labels as needed.
- Run Inference and Evaluation

  ```bash
  python main.py \
      --model_path models/yolov8.rknn \
      --img_folder data/valid/images \
      --ultralytics_dir data/valid/ultralytics \
      --coco_output data/valid/coco_annotations.json \
      --img_save \
      --coco_map_test
  ```
- View Results
  - Saved images with detections will be in the `./result` directory.
  - COCO evaluation metrics will be printed to the console.