This repository contains Python scripts and YOLOv5, YOLOv6, YOLOv7 and YOLOv8 object detection models (.blob format) for testing and deploying the Insect Detect DIY camera trap for automated insect monitoring.
The camera trap system is composed of low-cost off-the-shelf hardware components (Raspberry Pi Zero 2 W, Luxonis OAK-1, Witty Pi 4 L3V7 or PiJuice Zero pHAT) combined with open-source software, and can be easily assembled and set up with the provided instructions.
Important
Please make sure that you have followed all steps to set up your Raspberry Pi.
Install all required dependencies for RPi + OAK:
wget -qO- https://raw.githubusercontent.com/maxsitt/insect-detect/main/install_dependencies_oak.sh | sudo bash
Install Rclone and configure a remote (e.g. by running `rclone config` after installation) if you want to use the upload feature:
wget -qO- https://rclone.org/install.sh | sudo bash
Clone the insect-detect GitHub repo:
git clone https://github.com/maxsitt/insect-detect
Create a virtual environment with access to the system site-packages:
python3 -m venv --system-site-packages env_insdet
Update pip in the virtual environment:
env_insdet/bin/python3 -m pip install --upgrade pip
Install all required packages in the virtual environment:
env_insdet/bin/python3 -m pip install -r insect-detect/requirements.txt
Run the scripts with the Python interpreter from the virtual environment:
env_insdet/bin/python3 insect-detect/webapp.py
Check out the Usage section for more details about the scripts.
| Model | size (pixels) | mAP<sup>val</sup> 50-95 | mAP<sup>val</sup> 50 | Precision<sup>val</sup> | Recall<sup>val</sup> | Speed<sup>OAK</sup> (fps) | params (M) |
|---|---|---|---|---|---|---|---|
| YOLOv5n | 320 | 53.8 | 96.9 | 95.5 | 96.1 | 49 | 1.76 |
| YOLOv6n | 320 | 50.3 | 95.1 | 96.9 | 89.8 | 60 | 4.63 |
| YOLOv7-tiny | 320 | 53.2 | 95.7 | 94.7 | 94.2 | 52 | 6.01 |
| YOLOv8n | 320 | 55.4 | 94.4 | 92.2 | 89.9 | 39 | 3.01 |
Table Notes
- All models were trained to 300 epochs with batch size 32 and default hyperparameters. Reproduce the model training with the provided Google Colab notebooks.
- Trained on Insect_Detect_detection dataset version 7, downscaled to 320x320 pixels, with only 1 class ("insect").
- Model metrics (mAP, Precision, Recall) are shown for the original PyTorch (.pt) model before conversion to ONNX -> OpenVINO -> .blob format. Reproduce metrics by using the respective model validation method.
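As an illustration of the validation step mentioned in the notes, reproducing the metrics for the YOLOv8n model could look like the following sketch using the ultralytics package. The weights and dataset file names are placeholders, not the repository's actual paths; the other models would use their respective framework's validation script.

```python
# Hedged sketch: reproduce validation metrics for the YOLOv8n model.
# Weights and dataset paths are illustrative placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n_insect.pt")                          # trained weights (placeholder)
metrics = model.val(data="insect_detect.yaml", imgsz=320)  # dataset config (placeholder)

print(metrics.box.map)    # mAP 50-95
print(metrics.box.map50)  # mAP 50
print(metrics.box.mp)     # Precision
print(metrics.box.mr)     # Recall
```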
All configuration parameters can be customized in the web app or by directly modifying the config_custom.yaml file. You can generate multiple custom configuration files and select the active config either in the web app or by modifying the config_selector.yaml file.
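As a minimal sketch of how such a selector setup can be read in Python (assuming PyYAML, and assuming config_selector.yaml stores the name of the active config file under an illustrative key; this is not the repository's actual schema):

```python
# Minimal sketch: load the active config file referenced by config_selector.yaml.
# The key name "config_active" and the file locations are assumptions.
from pathlib import Path
import yaml

config_dir = Path("insect-detect/configs")  # hypothetical directory

selector = yaml.safe_load((config_dir / "config_selector.yaml").read_text())
active_file = selector["config_active"]     # assumed key name

config = yaml.safe_load((config_dir / active_file).read_text())
print(f"Using config: {active_file}")
```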
Processing pipeline for the yolo_tracker_save_hqsync.py script that can be used for automated insect monitoring:
- A custom YOLO insect detection model is run in real time on device (OAK) and uses a continuous stream of downscaled LQ frames as input.
- An object tracker uses the bounding box coordinates of detected insects to assign a unique tracking ID to each individual present in the frame and track its movement through time.
- The tracker + model output from inference on LQ frames is synchronized with MJPEG-encoded HQ frames (default: 3840x2160 px) on device (OAK); a sketch of these on-device steps follows after this list.
- The HQ frames are saved to the microSD card at the configured capture intervals while an insect is detected (triggered capture) and independently of detections (time-lapse capture).
- Corresponding metadata from the detection model and tracker output (timestamp, label, confidence score, tracking ID, tracking status and bounding box coordinates) is saved to a .csv file for each detected and tracked insect.
- The bounding box coordinates can be used to crop detected insects from the corresponding HQ frames and save them as individual .jpg images. Depending on the post-processing configuration, the original HQ frames are optionally deleted to save storage space.
- If a power management board (Witty Pi 4 L3V7 or PiJuice Zero) is connected and enabled in the configuration, intelligent power management is activated, which includes battery charge level monitoring and conditional recording durations.
- With the default configuration, running the recording consumes ~3.8 W of power.
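The first three steps above (on-device detection, tracking and HQ frame synchronization) can be sketched with the DepthAI Python API. This is a minimal, hedged sketch rather than the actual script: the model path, thresholds and tracker type are illustrative assumptions, and model-specific YOLO settings (class count, anchors, IoU threshold) are omitted.

```python
# Minimal DepthAI pipeline sketch: on-device YOLO detection + object tracking
# on LQ frames, with MJPEG-encoded HQ frames; parameter values are illustrative.
import depthai as dai

pipeline = dai.Pipeline()

# Color camera: 4K HQ stream + downscaled LQ preview as model input
cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)
cam.setVideoSize(3840, 2160)  # HQ frames
cam.setPreviewSize(320, 320)  # LQ frames for on-device inference
cam.setInterleaved(False)

# Custom YOLO insect detection model (.blob) running on the OAK
nn = pipeline.create(dai.node.YoloDetectionNetwork)
nn.setBlobPath("models/yolov5n_insect_320.blob")  # hypothetical path
nn.setConfidenceThreshold(0.5)                    # illustrative value
cam.preview.link(nn.input)

# Object tracker assigns a unique ID to each detected insect
tracker = pipeline.create(dai.node.ObjectTracker)
tracker.setTrackerType(dai.TrackerType.ZERO_TERM_IMAGELESS)
nn.passthrough.link(tracker.inputTrackerFrame)
nn.passthrough.link(tracker.inputDetectionFrame)
nn.out.link(tracker.inputDetections)

# MJPEG-encode the HQ stream on device
encoder = pipeline.create(dai.node.VideoEncoder)
encoder.setDefaultProfilePreset(cam.getFps(), dai.VideoEncoderProperties.Profile.MJPEG)
cam.video.link(encoder.input)

# Send tracker output and encoded HQ frames to the host (RPi)
xout_tracker = pipeline.create(dai.node.XLinkOut)
xout_tracker.setStreamName("track")
tracker.out.link(xout_tracker.input)

xout_hq = pipeline.create(dai.node.XLinkOut)
xout_hq.setStreamName("frame")
encoder.bitstream.link(xout_hq.input)
```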
More information about the processing pipeline can be found in the Insect Detect Docs 📑.
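As a rough illustration of the cropping step described above, a detected insect could be cut out of a saved HQ frame with Pillow. The normalized coordinate names used here are assumptions, not the exact metadata column names:

```python
# Hedged example: crop a detected insect from a saved HQ frame using
# normalized bounding box coordinates; names and paths are assumptions.
from PIL import Image

def crop_detection(hq_frame_path, x_min, y_min, x_max, y_max, out_path):
    """Crop one detection from an HQ frame and save it as .jpg."""
    with Image.open(hq_frame_path) as frame:
        w, h = frame.size  # e.g. 3840x2160
        box = (int(x_min * w), int(y_min * h), int(x_max * w), int(y_max * h))
        frame.crop(box).save(out_path, "JPEG")

crop_detection("frames/2024-05-01_12-00-00.jpg", 0.41, 0.37, 0.48, 0.45,
               "crops/track_42.jpg")
```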
Check out the classification instructions and the insect-detect-ml GitHub repo for information on how to classify the cropped detections with the provided classification model and script.
Take a look at the post-processing instructions for information on how to post-process the metadata with classification results.
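For illustration only, combining the tracking metadata with per-crop classification results could look like the following pandas sketch; all file and column names are assumptions rather than the actual post-processing schema:

```python
# Hedged sketch: join tracking metadata with classification results.
# File names and the shared key column are assumptions.
import pandas as pd

metadata = pd.read_csv("metadata/2024-05-01_metadata.csv")
classes = pd.read_csv("results/classification_results.csv")

# Assumed shared key: the crop image filename for each tracked insect
merged = metadata.merge(classes, on="crop_filename", how="left")
merged.to_csv("results/metadata_classified.csv", index=False)
```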
This repository is licensed under the terms of the GNU General Public License v3.0 (GNU GPLv3).
If you use resources from this repository, please cite our paper:
Sittinger M, Uhler J, Pink M, Herz A (2024) Insect detect: An open-source DIY camera trap for automated insect monitoring. PLOS ONE 19(4): e0295474. https://doi.org/10.1371/journal.pone.0295474