✨ Enable regression & misc #27

Merged · 73 commits · Jul 8, 2023
Commits
e4c00ec
Start the regression branch :sparkles:
o-laurent Jun 13, 2023
44e552a
Remove GNNL duplicate code :fire:
o-laurent Jun 13, 2023
2fca85e
Fix validation split arg. :bug:
o-laurent Jun 14, 2023
678990e
Fix UCIRegression dataset :bug:
o-laurent Jun 14, 2023
ad81902
Add energy appliances to UCI :sparkles:
o-laurent Jun 15, 2023
f3c5130
Add *.out to .gitignore :wrench:
o-laurent Jun 15, 2023
df77a53
Improve datamodules' docstrings :books:
o-laurent Jun 15, 2023
1695530
Improve UCI datamodule :zap:
o-laurent Jun 15, 2023
52ae849
Enable standard regression :sparkles:
o-laurent Jun 15, 2023
953d033
Improve the MLP's flexibility :sparkles:
o-laurent Jun 16, 2023
01d8151
:sparkles: Enable binary classification
o-laurent Jun 23, 2023
28cea05
:bug: Fix binary classification routine
o-laurent Jun 23, 2023
17e764d
:bug: Finish fixing binary cls
o-laurent Jun 23, 2023
c9b8d80
:sparkles: Add a VGG prototype
o-laurent Jun 23, 2023
caa617a
:sparkles: Add style argument to VGG
o-laurent Jun 23, 2023
2d47c48
Remove task arg from cli_main :hammer:
o-laurent Jun 26, 2023
d49b4c8
:bug: finish changing arg
o-laurent Jun 26, 2023
7abe004
:sparkles: Add VGG experiments
o-laurent Jun 26, 2023
93e4a19
:bug: Fix optim recipes
o-laurent Jun 26, 2023
1fa8b61
:bug: Fix Packed VGG & add note
o-laurent Jun 26, 2023
6e227e1
:bug: :books: Fix note on poetry bugs
o-laurent Jun 27, 2023
f9f012e
:hammer: Move ood_detection param to dms
o-laurent Jun 27, 2023
acd5e5b
:book: Add documentation to MLP
o-laurent Jun 27, 2023
1e66ab3
:sparkles: Start supporting ensemble regression
o-laurent Jun 27, 2023
182d3a3
:sparkles: Add dropout to MLP
o-laurent Jun 29, 2023
13bb000
Merge branch 'dev' of github.com:ENSTA-U2IS/torch-uncertainty into re…
o-laurent Jun 30, 2023
5a196e5
:wrench: Add setup.py
o-laurent Jun 30, 2023
940af73
:book: Add setup.py install to doc
o-laurent Jun 30, 2023
dd63306
:bug: Fix ensemble univariate regression
o-laurent Jun 30, 2023
1843fa9
:bug: Fix missing MNIST parameter
o-laurent Jun 30, 2023
f7939a8
:sparkles: Start multivariate regression
o-laurent Jul 1, 2023
049ab2a
:sparkles: Add LeNet model
o-laurent Jul 1, 2023
0da9a2d
Enable loading lightning checkpoints for ResNet baseline :sparkles:
alafage Jul 1, 2023
7fc7d02
:bug: Update packed layers
o-laurent Jul 1, 2023
84449a3
Add Deep Ensembles baseline proposition :construction:
alafage Jul 1, 2023
e35e0d5
:heavy_check_mark: Update tests
o-laurent Jul 1, 2023
eae6b38
Add save_hyperparameters() to routines :hammer:
alafage Jul 1, 2023
b4a42b7
:hammer: Rework stochastic models
o-laurent Jul 1, 2023
2241a3a
:fire: Remove DE useless param & :book: add ref
o-laurent Jul 1, 2023
c6f3c67
:fire: Remove dataset duplicate
o-laurent Jul 1, 2023
8fb9070
:book: Rename architecture reference section
o-laurent Jul 1, 2023
9a5b7f5
:bug: Fix bayes layers #28
o-laurent Jul 1, 2023
b54d0b1
:sparkles: Add weight in ELBO
o-laurent Jul 1, 2023
31210fc
:art: Improve LeNet and MLP models
o-laurent Jul 1, 2023
4a173c5
:art: Change default dataloader_idx to 0
o-laurent Jul 1, 2023
c9e478c
:heavy_check_mark: Reduce bias in bayesian layers tests
o-laurent Jul 2, 2023
202b648
:book: Make init args opt. and add help strs
o-laurent Jul 2, 2023
17eccda
:bug: Fix bayesian networks
o-laurent Jul 4, 2023
585b1ef
:art: Improve LeNet
o-laurent Jul 4, 2023
f2e8f46
:bug: Finish fixing BNNs
o-laurent Jul 4, 2023
024b750
:sparkles: Add bayesian LeNet experiment
o-laurent Jul 4, 2023
f7bd96f
:sparkles: Add LeNet experiment
o-laurent Jul 4, 2023
3a08d34
:shirt: Improve typing and misc.
o-laurent Jul 4, 2023
95f10f6
:book: Start BNN documentation
o-laurent Jul 4, 2023
a8d2cb5
:bug: Finish renaming BNN losses
o-laurent Jul 4, 2023
d1354ba
:book: Add bayesian tutorial & slight conf. changees
o-laurent Jul 4, 2023
a2d44b8
:art: Continue improve typing & misc
o-laurent Jul 4, 2023
c8d30f0
:bug: Fix gallery intro
o-laurent Jul 5, 2023
2fff0ed
:fire: Remove empty test file
o-laurent Jul 5, 2023
62b5f12
:sparkles: cli_main returns test results
o-laurent Jul 5, 2023
f9f955e
:sparkles: Add CIFAR-N
o-laurent Jul 5, 2023
e0d1d2f
:bug: Fix non-dist ensemble regression
o-laurent Jul 5, 2023
48a7f94
:art: Misc
o-laurent Jul 5, 2023
b3abd8d
:bug: Fix bayesian conv
o-laurent Jul 5, 2023
9dd884f
:bug: Fix BrierScore update when handling binary classification
alafage Jul 5, 2023
287e3f8
:bug: Fix DeepEnsembles model
alafage Jul 5, 2023
adf40ba
:sparkles: DeepEnsemble baseline on classification tasks
alafage Jul 5, 2023
359d682
:heavy_plus_sign: pandas is now a dependency of torch-uncertainty
alafage Jul 5, 2023
dd357c7
:sparkles: DeepEnsemble baseline on regression tasks
alafage Jul 5, 2023
8326649
:ok_hand: Fix PR comments
o-laurent Jul 5, 2023
b2232eb
:bug: Fix TinyImageNet loader
o-laurent Jul 6, 2023
579788d
:art: Further improve Tiny-ImageNet
o-laurent Jul 6, 2023
b43c797
:bug: Fix validation split in C10/100
o-laurent Jul 8, 2023
1 change: 1 addition & 0 deletions .gitignore
@@ -6,6 +6,7 @@ lightning_logs/
docs/*/generated/
docs/*/auto_tutorials/
*.pth
*.out

# Byte-compiled / optimized / DLL files
__pycache__/
3 changes: 2 additions & 1 deletion README.md
@@ -51,6 +51,7 @@ To date, the following baselines are implemented:
- BatchEnsemble
- Masksembles
- Packed-Ensembles (see [blog post](https://medium.com/@adrien.lafage/make-your-neural-networks-more-reliable-with-packed-ensembles-7ad0b737a873))
- Bayesian Neural Networks

### Post-processing methods

@@ -62,7 +63,7 @@ To date, the following post-processing methods are implemented:

## Awesome Uncertainty repositories

-You may find a lot of information about modern uncertainty estimation techniques on the [Awesome Uncertainty in Deep Learning](https://github.com/ENSTA-U2IS/awesome-uncertainty-deeplearning).
+You may find a lot of papers about modern uncertainty estimation techniques on the [Awesome Uncertainty in Deep Learning](https://github.com/ENSTA-U2IS/awesome-uncertainty-deeplearning).

## Other References

3 changes: 2 additions & 1 deletion auto_tutorials_source/README.rst
@@ -1,4 +1,5 @@
Tutorials
=========

-Below is a gallery of examples.
+On this page, you will find tutorials and insights on TorchUncertainty. Don't
+hesitate to open an issue if you have any questions or suggestions for tutorials.
144 changes: 144 additions & 0 deletions auto_tutorials_source/tutorial_bayesian.py
@@ -0,0 +1,144 @@
# -*- coding: utf-8 -*-
# fmt: off
# flake8: noqa
"""
Train a Bayesian Neural Network in Three Minutes
================================================

In this tutorial, we will train a Bayesian Neural Network (BNN) LeNet classifier on the MNIST dataset.

Foreword on Bayesian Neural Networks
------------------------------------

Bayesian Neural Networks (BNNs) are a class of neural networks that can estimate the uncertainty of their predictions via uncertainty on their weights. This is achieved by considering the weights of the neural network as random variables, and by learning their posterior distribution. This is in contrast to standard neural networks, which only learn a single set of weights, which can be seen as Dirac distributions on the weights.
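
In practice, the exact posterior over the weights is intractable, so variational BNNs such as the one trained in this tutorial learn an approximate posterior :math:`q(w)` by maximizing the evidence lower bound (ELBO):

.. math::
    \mathrm{ELBO} = \mathbb{E}_{q(w)}\left[\log p(\mathcal{D} \mid w)\right] - \mathrm{KL}\left(q(w) \,\|\, p(w)\right),

where :math:`p(w)` is the prior over the weights and :math:`\mathcal{D}` the training data. The ELBOLoss used later in this tutorial corresponds to a (negative) objective of this form, with the criterion playing the role of the log-likelihood term.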

For more information on Bayesian Neural Networks, we refer the reader to the following resources:

- Weight Uncertainty in Neural Networks [ICML2015](https://arxiv.org/pdf/1505.05424.pdf)
- Hands-on Bayesian Neural Networks - a Tutorial for Deep Learning Users [IEEE Computational Intelligence Magazine](https://arxiv.org/pdf/2007.06823.pdf)

Training a Bayesian LeNet using TorchUncertainty models and PyTorch Lightning
-----------------------------------------------------------------------------

In this part, we train a Bayesian LeNet, based on the already implemented method.

1. Loading the utilities
~~~~~~~~~~~~~~~~~~~~~~~~

To train a BNN using TorchUncertainty, we have to load the following utilities:

- the model: bayesian_lenet, which lies in the torch_uncertainty.models module
- the classification training routine, in the torch_uncertainty.routines.classification module
- the Bayesian objective: the ELBOLoss, which lies in the torch_uncertainty.losses module
- the datamodule that handles dataloaders: MNISTDataModule, which lies in torch_uncertainty.datamodules
- the CLI handler: cli_main and the argument parser: init_args
"""
from torch_uncertainty import cli_main, init_args
from torch_uncertainty.datamodules import MNISTDataModule
from torch_uncertainty.losses import ELBOLoss
from torch_uncertainty.models.lenet import bayesian_lenet
from torch_uncertainty.routines.classification import ClassificationSingle

########################################################################
# We will also need to define an optimizer using torch.optim, the neural
# network utilities within torch.nn, and the partial function from functools
# to provide the modified default arguments for the ELBO loss.

# We also import ArgvContext to avoid using the Jupyter arguments as CLI
# arguments, and therefore avoid errors.
import torch.nn as nn
import torch.optim as optim

from functools import partial
from pathlib import Path
import os
from cli_test_helpers import ArgvContext

########################################################################
# Creating the Optimizer Wrapper
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# We will use the Adam optimizer with the default learning rate of 0.001.
def optim_lenet(model: nn.Module) -> dict:
    optimizer = optim.Adam(
        model.parameters(),
        lr=1e-3,
    )
    return {"optimizer": optimizer}

########################################################################
# Creating the necessary variables
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

# In the following, we need to define the root of the datasets and of the
# logs, and to fake-parse the arguments needed by the PyTorch Lightning
# Trainer. We also create the datamodule that handles the MNIST dataset, its
# dataloaders and transforms. Finally, we create the model using the
# blueprint from torch_uncertainty.models.
root = Path(os.path.abspath("")).parent.absolute().parents[2]

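# Note: cli_test_helpers' ArgvContext expects each CLI token as a separate
# argument (e.g., ArgvContext("file.py", "--max_epochs", "10")), which is
# likely why the single-string form below has no effect and max_epochs is
# set manually afterwards.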
with ArgvContext("--max_epochs 10"): #TODO: understand why it doesn't work
args = init_args(datamodule=MNISTDataModule)

args.max_epochs = 10
net_name = "bayesian-lenet-mnist"

# datamodule
args.root = str(root / "data")
dm = MNISTDataModule(**vars(args))

# model
model = bayesian_lenet(dm.num_channels, dm.num_classes)

########################################################################
# The Loss and the Training Routine
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Then, we just have to define the loss to be used during training. To do this,
# we redefine the default parameters of the ELBO loss using the partial
# function from functools. We use the hyperparameters proposed in the blitz
# library. As we are training a classification model, we use the CrossEntropyLoss
# as the likelihood.
# We then define the training routine using the classification routine from
# torch_uncertainty.routines.classification. We provide the model, the ELBO
# loss and the optimizer, as well as all the default arguments.
loss = partial(
    ELBOLoss,
    model=model,
    criterion=nn.CrossEntropyLoss(),
    kl_weight=1 / 50000,
    num_samples=3,
)
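# Note: a common heuristic, following Blundell et al. (2015), is to set
# kl_weight to roughly 1/N with N the number of training samples, so that
# the KL term is spread across the minibatches of an epoch; the 1/50000
# above approximates this heuristic for MNIST.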

baseline = ClassificationSingle(
    model=model,
    num_classes=dm.num_classes,
    in_channels=dm.num_channels,
    loss=loss,
    optimization_procedure=optim_lenet,
    **vars(args),
)

########################################################################
# Gathering Everything and Training the Model
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Now that we have prepared all of this, we just have to gather everything in
# the main function and to train the model using the PyTorch Lightning Trainer.
# Specifically, it needs the baseline, that includes the model as well as the
# training routine, the datamodule, the root for the datasets and the logs, the
# name of the model for the logs and all the training arguments.
# The dataset will be downloaded automatically in the root/data folder, and the
# logs will be saved in the root/logs folder.
cli_main(baseline, dm, root, net_name, args)

########################################################################
# References
# ----------
# **LeNet & MNIST:**
# LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based
# learning applied to document recognition.
# [Proceedings of the IEEE](http://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf).
# **Bayesian Neural Networks:**
# Weight Uncertainty in Neural Networks
# [ICML2015](https://arxiv.org/pdf/1505.05424.pdf)
# **The Adam optimizer:**
# Kingma, Diederik P., and Jimmy Ba. "Adam: A method for stochastic optimization."
# [ICLR 2015](https://arxiv.org/pdf/1412.6980.pdf)
# The [Blitz library](https://github.com/piEsposito/blitz-bayesian-deep-learning/tree/master)
# (for the hyperparameters)
@@ -23,8 +23,8 @@

cifar10

-Training a image Packed-Ensemble classifier
--------------------------------------------
+Training an image Packed-Ensemble classifier
+--------------------------------------------

Here is the outline of the process:

13 changes: 13 additions & 0 deletions docs/source/api.rst
@@ -161,3 +161,16 @@ Metrics
JensenShannonDivergence
MutualInformation
NegativeLogLikelihood

Losses
------

.. currentmodule:: torch_uncertainty.losses

.. autosummary::
:toctree: generated/
:nosignatures:
:template: class.rst

KLDiv
ELBOLoss
5 changes: 3 additions & 2 deletions docs/source/conf.py
@@ -2,13 +2,14 @@
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
from datetime import datetime
import pytorch_sphinx_theme

# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information

project = "Torch Uncertainty"
copyright = "2023, Adrien Lafage and Olivier Laurent"
copyright = f"{str(datetime.utcnow().year)}, Adrien Lafage and Olivier Laurent"
author = "Adrien Lafage and Olivier Laurent"
release = "0.1.3"

@@ -28,7 +29,7 @@
sphinx_gallery_conf = {
    "examples_dirs": ["../../auto_tutorials_source"],
    "gallery_dirs": "auto_tutorials",
-   "filename_pattern": r"pe_",
+   "filename_pattern": r"tutorial_",
    "plot_gallery": "True",
    "promote_jupyter_magic": True,
    "backreferences_dir": None,
34 changes: 33 additions & 1 deletion docs/source/installation.rst
@@ -28,6 +28,12 @@ To update the package, run:
From source
-----------

To install the project from source, you may use `Poetry <https://python-poetry.org/>`_
or install the package using pip directly.

With poetry
^^^^^^^^^^^

**Installing Poetry**

Installation guidelines for poetry are available `here <https://python-poetry.org/docs/>`_.
@@ -64,10 +70,36 @@ Install the package using poetry:

Depending on your system, you may encounter poetry errors. If so, kill the
process and add :bash:`PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring`
-at the beginning of every :bash:`poetry install` command.
+at the beginning of every :bash:`poetry` command.
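
For instance:

.. parsed-literal::

    PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring poetry install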

To update the package, run:

.. parsed-literal::

git pull && poetry update

With pip
^^^^^^^^

Clone the repository with:

.. parsed-literal::

git clone https://github.com/ENSTA-U2IS/torch-uncertainty.git
cd torch-uncertainty

Create a new conda environment and activate it:

.. parsed-literal::

conda create -n uncertainty python=3.10
conda activate uncertainty

Install the package using pip in editable mode:

.. parsed-literal::

pip install -e .

For now, you will have to install the optional dependencies manually.
Check the pyproject.toml file for the list of dependencies.
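For example, if an extra named ``dev`` were defined there (a hypothetical
name, to be checked against pyproject.toml), it could be installed with:

.. parsed-literal::

    pip install -e ".[dev]"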
6 changes: 3 additions & 3 deletions docs/source/quickstart.rst
@@ -16,7 +16,7 @@ Procedure
^^^^^^^^^

The library provides a full-fledged trainer which can be used directly, via
-CLI. To do so, create a file in the experiments folder and use the `cls_main`
+CLI. To do so, create a file in the experiments folder and use the `cli_main`
routine, which takes as arguments:

* a Lightning Module corresponding to the model, its own arguments, and
@@ -50,7 +50,7 @@ trains any ResNet architecture on CIFAR10:

import torch.nn as nn

-from torch_uncertainty import cls_main, init_args
+from torch_uncertainty import cli_main, init_args
from torch_uncertainty.baselines import ResNet
from torch_uncertainty.datamodules import CIFAR10DataModule
from torch_uncertainty.optimization_procedures import get_procedure
@@ -77,7 +77,7 @@
**vars(args),
)

-cls_main(model, dm, root, net_name, args)
+cli_main(model, dm, root, net_name, args)

Run this model with, for instance:

23 changes: 21 additions & 2 deletions docs/source/references.rst
@@ -8,6 +8,17 @@ Uncertainty Models

The following uncertainty models are implemented.

Bayesian Neural Networks
^^^^^^^^^^^^^^^^^^^^^^^^

For Bayesian Neural Networks, consider citing:

**Weight Uncertainty in Neural Networks**

* Authors: *Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra*
* Paper: `ICML 2015 <https://arxiv.org/pdf/1505.05424>`__.


Deep Ensembles
^^^^^^^^^^^^^^

@@ -120,6 +131,14 @@ CIFAR-10 H
* Authors: *Joshua C. Peterson, Ruairidh M. Battleday, Thomas L. Griffiths, and Olga Russakovsky*
* Paper: `ICCV 2019 <https://arxiv.org/pdf/1908.07086.pdf>`__.

CIFAR-10 N / CIFAR-100 N
^^^^^^^^^^^^^^^^^^^^^^^^

**Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations**

* Authors: *Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, Yang Liu*
* Paper: `ICLR 2022 <https://arxiv.org/pdf/2110.12088.pdf>`__.

SVHN
^^^^

@@ -170,8 +189,8 @@ Textures
* Authors: *Haoqi Wang, Zhizhong Li, Litong Feng, and Wayne Zhang*
* Paper: `CVPR 2022 <https://arxiv.org/pdf/2203.10807.pdf>`__.

-Classic Models
---------------
+Architectures
+-------------

ResNet
^^^^^^
9 changes: 8 additions & 1 deletion experiments/classification/README.md
@@ -1,15 +1,22 @@
# Classification Benchmarks

-*Work in progress*
+In this folder, you will find the different experiment files for classification
+datasets.

## Image Classification

Note on VGG: We haven't figured out hyperparameters that would be acceptable
for VGG, even more so for Packed-Ensembles. Just adding groups to the standard
network impedes convergence for all tested hyperparameters.

### CIFAR-10

* ResNet
* WideResNet
* VGG

### CIFAR-100

* ResNet
* WideResNet
* VGG