
Real-Time-Anomaly-Segmentation [Course Project]

This repository provides the code for the Real-Time Anomaly Segmentation project of the Machine Learning course. The code is submitted for the 27/02/2024 exam by:

  • Marco Colangelo, s309798
  • Federica Amato, s310275
  • Roberto Pulvirenti, s317704

Baselines - MSP, MaxLogit and MaxEntropy

The goal of this step is to evaluate an anomaly segmentation approach for urban scenes using a pre-trained ERF-Net model and a test dataset. The evaluation consists of running the model on the test dataset and analyzing how well it detects anomalies. Three scoring methods are compared: MSP, MaxLogit, and MaxEntropy. The code for the inference analysis can be found in the eval folder, in the evalAnomaly.py file; the code for the mIoU analysis can be found in the eval folder, in the eval_iou.py file.
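As a quick illustration of the three scoring rules, here is a minimal sketch of how a per-pixel anomaly map can be derived from segmentation logits. The function name and tensor layout are assumptions for illustration, not the exact code in evalAnomaly.py.

```python
import torch
import torch.nn.functional as F

def anomaly_scores(logits: torch.Tensor, method: str = "msp") -> torch.Tensor:
    """Per-pixel anomaly map from segmentation logits of shape (C, H, W).

    Higher values mean "more anomalous".
    """
    if method == "msp":
        # Maximum Softmax Probability: 1 - max class probability
        return 1.0 - F.softmax(logits, dim=0).max(dim=0).values
    if method == "maxlogit":
        # MaxLogit: negative of the largest unnormalized logit
        return -logits.max(dim=0).values
    if method == "maxentropy":
        # MaxEntropy: entropy of the softmax distribution
        probs = F.softmax(logits, dim=0)
        return -(probs * torch.log(probs + 1e-12)).sum(dim=0)
    raise ValueError(f"unknown method: {method}")
```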

The results can be found in results_msp_ml_me.txt for the inference and in miou_msp_ml_me.txt for the mIoU.

Baselines - Temperature Scaling

The goal of the second step of our project is to find the optimal temperature for a neural classification model, i.e. the one that minimizes the calibration error and the negative log-likelihood of the predictions. The method uses a validation dataset and an optimization algorithm to tune the temperature parameter that scales the model outputs. The result is a better calibrated model that outputs more reliable probabilities and predictions.

The code for this part is available in the evalAnomaly.py and eval_iou.py files; to choose the best temperature we use the code in temperature_scaling.py.
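Below is a minimal sketch of the temperature-search idea, assuming validation logits flattened to shape (N, C) with labels of shape (N,). It follows the standard NLL-minimization recipe and is not necessarily how temperature_scaling.py implements it.

```python
import torch
import torch.nn as nn

def tune_temperature(val_logits: torch.Tensor, val_labels: torch.Tensor) -> float:
    """Find the scalar T that minimizes the NLL of val_logits / T."""
    temperature = nn.Parameter(torch.ones(1) * 1.5)  # initial guess
    nll = nn.CrossEntropyLoss()
    optimizer = torch.optim.LBFGS([temperature], lr=0.01, max_iter=50)

    def closure():
        optimizer.zero_grad()
        loss = nll(val_logits / temperature, val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return temperature.item()
```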

The results we obtain can be found in results_temperature.txt for the inference and in miou_temperature.txt for the mIoU.

Void Classifier

For this step we provide a method for anomaly detection based on a semantic segmentation network with an extra class for anomalies. We use the Cityscapes dataset, with its void class as a source of anomaly data, and train two networks, ENet and BiSeNet, with this method.

The code for ENet and BiSeNet can be found in the ENet-Github and BiSeNet-Github repositories, respectively. For our evaluation we used the code in the eval_voidClassifier.py and eval_iou.py files.
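At inference time, the anomaly map for the void classifier can simply be the predicted probability of the void class. Here is a minimal sketch; the class index and function name are assumptions, not necessarily those used in eval_voidClassifier.py.

```python
import torch
import torch.nn.functional as F

VOID_INDEX = 19  # hypothetical position of the void/anomaly class in the output

def void_anomaly_map(logits: torch.Tensor) -> torch.Tensor:
    """logits of shape (C, H, W) -> (H, W) anomaly map from the void-class probability."""
    return F.softmax(logits, dim=0)[VOID_INDEX]
```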

The results can be found in results_void.txt for the inference and in miou_void.txt for the mIoU.

Project Extension - Effect of the Training Loss Function

We explore loss functions designed for anomaly detection, such as Jaccard Loss and Logit Normalization Loss, and investigate how combining them with other common losses, such as Focal Loss and Cross-Entropy Loss, affects the model's performance in segmenting and identifying anomalies in road scenes.

For this step, we modify the main.py file to implement these new loss functions.
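As a rough sketch of the two anomaly-oriented objectives, the snippets below show common formulations; the class names, the temperature value, and the exact details are illustrative and not necessarily what main.py uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LogitNormLoss(nn.Module):
    """Cross-entropy computed on L2-normalized logits (Logit Normalization)."""
    def __init__(self, t: float = 0.04):
        super().__init__()
        self.t = t

    def forward(self, logits, target):
        # logits: (N, C, H, W), target: (N, H, W) with valid class indices only
        norm = torch.norm(logits, p=2, dim=1, keepdim=True) + 1e-7
        return F.cross_entropy(logits / (norm * self.t), target)

class SoftJaccardLoss(nn.Module):
    """Differentiable Jaccard/IoU loss over softmax probabilities."""
    def forward(self, logits, target):
        # Assumes target contains no ignore-index labels
        num_classes = logits.shape[1]
        probs = F.softmax(logits, dim=1)
        one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
        inter = (probs * one_hot).sum(dim=(2, 3))
        union = (probs + one_hot - probs * one_hot).sum(dim=(2, 3))
        return 1.0 - (inter / (union + 1e-7)).mean()
```

Either loss can then be combined with Focal or Cross-Entropy Loss, for example as a weighted sum of the two terms.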

The results can be found in results_loss.txt for the inference and in miou_loss.txt for the mIoU.

Commands used

All the commands used in the evaluation part can be found in the googleColab file; the files for the training part are in the training_file folder.

