
Add assertions to MNIST test in TensorFlow test notebooks #1261

Open
@coderabbitai

Description

The MNIST model test in TensorFlow test notebooks currently runs through the training process but lacks proper assertions to validate model performance. This makes it a smoke test rather than a proper unit test.

Current Behavior

The test_mnist_model() method in test notebooks (like jupyter/rocm/tensorflow/ubi9-python-3.12/test/test_notebook.ipynb) executes the MNIST training workflow but doesn't verify:

  • Model prediction shapes
  • Training accuracy thresholds
  • Loss values within reasonable ranges
  • Probability model outputs

Proposed Solution

Add assertions to validate (a sketch follows the list below):

  1. Prediction tensor shapes match expected dimensions
  2. Model achieves minimum accuracy threshold (e.g., >50% for basic sanity check)
  3. Test loss remains within reasonable bounds
  4. Probability model outputs have correct shape
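A minimal sketch of what these checks could look like. The model, training setup, and variable names below mirror the standard TensorFlow MNIST beginner tutorial and are only assumptions about what `test_mnist_model()` does; the notebook's actual architecture, identifiers, and training settings may differ. The thresholds come from the acceptance criteria below.

```python
import tensorflow as tf

# Load and normalize MNIST (mirrors the standard TensorFlow beginner tutorial).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Illustrative model; the notebook's actual architecture may differ.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=1, verbose=2)

# 1. Prediction tensor shape matches (batch, num_classes).
predictions = model(x_test[:1]).numpy()
assert predictions.shape == (1, 10), f"unexpected prediction shape {predictions.shape}"

# 2./3. Accuracy and loss sanity thresholds from the acceptance criteria.
test_loss, test_accuracy = model.evaluate(x_test, y_test, verbose=2)
assert test_accuracy > 0.5, f"accuracy {test_accuracy:.3f} below sanity threshold"
assert test_loss < 10.0, f"loss {test_loss:.3f} outside reasonable bounds"

# 4. Probability model outputs have the correct shape.
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
probabilities = probability_model(x_test[:5]).numpy()
assert probabilities.shape == (5, 10), f"unexpected probability shape {probabilities.shape}"
```

A single training epoch keeps the check fast while still clearing a lenient >50% accuracy bar; the loose thresholds are deliberate so the test catches real regressions without introducing flakiness.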

Acceptance Criteria

  • Add shape assertions for model predictions
  • Add minimum accuracy threshold check (test_accuracy > 0.5)
  • Add reasonable loss bounds check (test_loss < 10.0)
  • Add shape assertions for probability model outputs
  • Apply similar improvements to other TensorFlow test notebooks if present
  • Ensure tests remain stable and don't introduce flakiness

Impact

This enhancement will convert smoke tests into proper unit tests that can catch regressions in model functionality and ensure the TensorFlow environment is working correctly.

Additional Context

This issue was identified during review of the ROCm TensorFlow Python 3.12 image implementation.
