
Releases: deel-ai/deel-lip

1.5.0

28 Nov 17:53
13d6137

New features and improvements

  • Two new losses based on the standard Keras cross-entropy losses, with a settable softmax temperature:
    • TauSparseCategoricalCrossentropy, equivalent to Keras SparseCategoricalCrossentropy
    • TauBinaryCrossentropy, equivalent to Keras BinaryCrossentropy
  • New module deel.lip.compute_layer_sv to compute the largest and smallest singular values of a single layer (compute_layer_sv()) or of a whole model (compute_model_sv()), as shown in the sketch after this list.
  • Power iteration algorithm for convolution.
  • New "Getting Started" tutorial to introduce 1-Lipschitz neural networks.
  • Documentation migration from Sphinx to MkDocs.
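
As a rough illustration (not part of the original notes), the sketch below combines the new temperature-scaled loss with compute_model_sv(); the tau value and the return format of compute_model_sv() are assumptions to check against the API documentation.

```python
# Minimal sketch: tau value and compute_model_sv() output format are assumptions.
import tensorflow as tf

from deel.lip.compute_layer_sv import compute_model_sv
from deel.lip.layers import SpectralDense
from deel.lip.losses import TauSparseCategoricalCrossentropy
from deel.lip.model import Sequential

# A small 1-Lipschitz classifier.
model = Sequential([
    tf.keras.layers.Input(shape=(28 * 28,)),
    SpectralDense(64),
    SpectralDense(10),
])

# Temperature-scaled cross-entropy on integer labels.
model.compile(optimizer="adam", loss=TauSparseCategoricalCrossentropy(tau=10.0))

# Largest and smallest singular values, layer by layer, for the whole model.
singular_values = compute_model_sv(model)
print(singular_values)
```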

API changes

  • Activations are now imported via the deel.lip.layers submodule, e.g. deel.lip.layers.GroupSort instead of deel.lip.activations.GroupSort. This follows the Keras convention. The legacy submodule is still available for backward compatibility but will be removed in a future release.
  • Unconstrained layers must now be imported from the deel.lip.layers.unconstrained submodule, e.g. deel.lip.layers.unconstrained.PadConv2D. The new import paths are shown below.
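
For instance, the new import paths read:

```python
# New-style imports (the legacy deel.lip.activations path still works for now).
from deel.lip.layers import GroupSort
from deel.lip.layers.unconstrained import PadConv2D
```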

Fixes

  • Fix InvertibleUpSampling __call__() returning None.

Full changelog: v1.4.0...v1.5.0

1.4.0

10 Jan 17:16
9c2f1a7

New features and improvements

  • Two new layers:
    • SpectralConv2DTranspose, a Lipschitz version of the Keras Conv2DTranspose layer
    • Householder, an activation layer that is a parametrized generalization of GroupSort2
  • Two new regularizers to foster orthogonality:
    • LorthRegularizer for an orthogonal convolution
    • OrthDenseRegularizer for an orthogonal Dense matrix kernel
  • Two new losses for Lipschitz networks:
    • TauCategoricalCrossentropy, a categorical cross-entropy loss with temperature scaling tau
    • CategoricalHinge, a hinge loss for multi-class problems based on the implementation of the Keras CategoricalHinge
  • Two new custom callbacks:
    • LossParamScheduler to change loss hyper-parameters during training, e.g. min_margin, alpha and tau
    • LossParamLog to log the value of loss parameters
  • The Björck orthogonalization algorithm was accelerated.
  • Normalizers (power iteration and Björck) now use tf.while_loop, and the swap_memory argument can be set globally with set_swap_memory(bool). The default is True, which reduces GPU memory usage.
  • The new function set_stop_grad_spectral(bool) allows bypassing back-propagation through the power iteration algorithm that computes the spectral norm. The default is True; stopping gradient propagation reduces runtime. Both global switches are shown in the sketch after this list.
  • Due to bugs in the TensorFlow serialization of custom losses and metrics (versions 2.0 and 2.1), deel-lip now only supports TensorFlow >= 2.2.
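
The sketch below illustrates the two global switches together with the new losses; the import location of set_swap_memory/set_stop_grad_spectral is an assumption (check the API documentation), and the tau and min_margin values are only illustrative.

```python
# Minimal sketch: import location of the global switches is an assumption,
# and the tau / min_margin values are illustrative.
from deel.lip.losses import CategoricalHinge, TauCategoricalCrossentropy
from deel.lip.normalizers import set_stop_grad_spectral, set_swap_memory  # assumed location

# Global switches for the iterative normalizers (both already default to True).
set_swap_memory(True)          # let tf.while_loop swap tensors to host memory on GPU
set_stop_grad_spectral(True)   # do not back-propagate through the power iteration

# Temperature-scaled cross-entropy and multi-class hinge loss.
cce = TauCategoricalCrossentropy(tau=5.0)
hinge = CategoricalHinge(min_margin=1.0)
```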

Fixes

  • SpectralInitializer no longer reuses the same base initializer across multiple instances.

Full Changelog: v1.3.0...v1.4.0

1.3.0

29 Aug 10:17
bb0db0b

New features and improvements

  • New layer PadConv2D to handle additional padding modes in convolutional layers, in particular circular padding
  • Losses now handle multi-label classification
  • Losses are now element-wise: the reduction parameter of custom losses can be set to None
  • New metrics are introduced: ProvableAvgRobustness and ProvableRobustAccuracy (see the sketch after this list)
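
A minimal sketch of these features, assuming a toy Keras model; the metric's module location and constructor arguments, as well as the padding keyword value, are assumptions to verify in the documentation.

```python
# Minimal sketch: metric location, its constructor defaults and the padding
# keyword value are assumptions; the model is purely illustrative.
import tensorflow as tf

from deel.lip.layers import PadConv2D, SpectralDense
from deel.lip.losses import HKR
from deel.lip.metrics import ProvableRobustAccuracy  # assumed location

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    PadConv2D(16, (3, 3), padding="circular"),  # circular padding handled by PadConv2D
    tf.keras.layers.Flatten(),
    SpectralDense(10),
])

# Element-wise loss: with reduction disabled, the loss returns one value per sample.
elementwise_loss = HKR(alpha=10.0, reduction=tf.keras.losses.Reduction.NONE)

model.compile(
    optimizer="adam",
    loss=HKR(alpha=10.0),
    metrics=[ProvableRobustAccuracy()],  # assumed constructor defaults
)
```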

API changes

  • KR is no longer a function but a class derived from tf.keras.losses.Loss.
  • The negative_KR function was removed; use the loss HKR(alpha=0) instead.
  • The stopping criterion for spectral normalization and Björck orthogonalization (iterative methods) is no longer a number of iterations (niter_spectral and niter_bjorck). The methods now stop based on the difference between two successive iterations, controlled by eps_spectral and eps_bjorck (see the sketch after this list). This API change occurs in:
    • Lipschitz layers, such as SpectralDense and SpectralConv2D
    • normalizer reshaped_kernel_orthogonalization
    • constraint SpectralConstraint
    • initializer SpectralInitializer
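
A minimal sketch of the new tolerance-based arguments; the 1e-3 values are illustrative, not the library defaults.

```python
# Minimal sketch: the 1e-3 tolerances are illustrative, not the library defaults.
from deel.lip.layers import SpectralConv2D, SpectralDense
from deel.lip.losses import HKR, KR

# niter_spectral / niter_bjorck are replaced by eps_spectral / eps_bjorck.
dense = SpectralDense(64, eps_spectral=1e-3, eps_bjorck=1e-3)
conv = SpectralConv2D(16, (3, 3), eps_spectral=1e-3, eps_bjorck=1e-3)

kr = KR()              # KR is now a Loss subclass rather than a function
neg_kr = HKR(alpha=0)  # replaces the removed negative_KR function
```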

Full Changelog: v1.2.0...v1.3.0

1.2.0

10 Sep 17:28
5e14d25

This revision contains:

  • Code refactoring: wbar is now stored in a tf.Variable
  • Updated documentation notebooks
  • Updates to callbacks, initializers, constraints, etc.
  • Updated losses and their tests
  • Improved loss stability for small batches
  • Added the ScaledGlobalL2NormPooling2D layer (see the sketch below)
  • New way to export Keras serializable objects

This release ends support for TensorFlow 2.0; only versions >= 2.1 are supported.
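
A minimal sketch of the new pooling layer (constructor arguments are assumed to be optional):

```python
# Minimal sketch: constructor arguments assumed optional.
from deel.lip.layers import ScaledGlobalL2NormPooling2D

pool = ScaledGlobalL2NormPooling2D()  # global pooling based on the L2 norm of each channel
```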

1.1.1

26 May 09:54
56e0006

This revision contains:

  • Bug fixes in losses.py: fixed a data type problem in HKR_loss and a weighting problem in KR_multiclass_loss.
  • Changed behavior of FrobeniusDense in the multi-class setup: a FrobeniusDense layer with 10 output neurons is now equivalent to stacking 10 FrobeniusDense layers with 1 output neuron each. The L2 normalization is performed on each neuron instead of on the full weight matrix (see the sketch below).
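
A minimal sketch of the new behavior:

```python
# Minimal sketch: each output neuron is now L2-normalized independently,
# as if ten single-output FrobeniusDense layers were stacked.
from deel.lip.layers import FrobeniusDense

layer = FrobeniusDense(10)
```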

1.1.0

26 Feb 18:45
843a30c

This version adds new features:

  • InvertibleDownSampling and InvertibleUpSampling
  • multi-class extension of the HKR loss

It also contains fixes for:

  • bug with L2NormPooling
  • bug with vanilla_export
  • bug with the tf.function annotation causing an incorrect Lipschitz constant in Sequential (for constants other than 1).

Breaking changes:

  • the true_values parameter has been removed from the binary HKR loss, since both (1, -1) and (1, 0) label encodings are now handled automatically (see the sketch below).
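
A minimal sketch of the change (only alpha is shown; other arguments keep their defaults):

```python
# Minimal sketch: only alpha is shown, other arguments keep their defaults.
from deel.lip.losses import HKR_loss

# Before: HKR_loss(alpha=10.0, true_values=(1, -1))
# Now: labels encoded as (1, -1) or (1, 0) are both handled automatically.
loss = HKR_loss(alpha=10.0)
```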

1.0.2

14 Sep 14:24
bfff576

Features

  • TensorFlow 2.3 support.

1.0.1

27 Jul 12:45
b7bd6bc

Features

  • Improvements for Björck initializers.
  • Stride handling in convolutional layers.

Bug fixes

  • Fixed a bug with ScaledL2NormPooling that caused NaN values to appear after the first training step.

Initial release - v1.0.0

26 Jun 18:00

Controlling the Lipschitz constant of a layer or a whole neural network has many applications ranging from adversarial robustness to Wasserstein distance estimation.

This library provides implementations of k-Lipschitz layers for Keras.
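
A minimal sketch of a 1-Lipschitz model built with deel-lip, using the import paths of the early releases; the layer choices are purely illustrative:

```python
# Minimal sketch: layer choices are illustrative; imports follow the early API paths.
import tensorflow as tf

from deel.lip.activations import GroupSort2
from deel.lip.layers import SpectralDense
from deel.lip.model import Sequential

model = Sequential([
    tf.keras.layers.Input(shape=(28 * 28,)),
    SpectralDense(128),
    GroupSort2(),
    SpectralDense(10),
])
```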