
[RHOAIENG-17006] chore(pyproject.toml): migrate test dependencies from pipenv to uv #1204


Open
wants to merge 1 commit into base: feature-uv

Conversation

mtchoum1
Contributor

@mtchoum1 mtchoum1 commented Jun 26, 2025

https://issues.redhat.com/browse/RHOAIENG-17006

Description

Moved all notebook image dependencies into a single pyproject.toml file; using the uv lock and export commands, I was able to create updated requirements.txt files that are used to build the images with uv.
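The lock-and-export workflow described here roughly corresponds to the following commands (the group name and paths below are illustrative assumptions, not copied verbatim from this PR):

```shell
# Resolve all dependency groups declared in pyproject.toml into uv.lock
uv lock

# Export one image's dependency group to a pinned requirements.txt
# (group and path names are assumptions for illustration)
uv export --frozen --no-hashes \
  --group jupyter-minimal-image \
  -o jupyter/minimal/ubi9-python-3.11/requirements.txt

# Build the image from the exported requirements
podman build -f jupyter/minimal/ubi9-python-3.11/Dockerfile.cpu .
```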

How Has This Been Tested?

After generating the requirements files, I ran podman build to build the images and verified there were no errors.

Review Points

Updating dependencies for just a single dependency group is currently not possible with uv.
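For context, uv can upgrade a single package across the whole lockfile, but it offers no flag to restrict re-resolution to one dependency group (a sketch of the relevant commands; verify against your uv version's help output):

```shell
# Upgrade one package everywhere it appears in uv.lock
uv lock --upgrade-package numpy

# Upgrade everything; there is no per-group equivalent,
# so this re-resolves all groups at once
uv lock --upgrade
```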

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

Summary by CodeRabbit

  • Chores
    • Updated all images and workflows to use the "uv" tool and requirements.txt files for Python dependency management, replacing "micropipenv" and Pipfile/Pipfile.lock.
    • Removed all Pipfile and Pipfile.lock files from Jupyter image directories.
    • Updated scripts and automation to generate requirements.txt from uv.lock and manage dependencies via pyproject.toml.
    • Enhanced dependency grouping, source mapping, and conflict management in pyproject.toml.
    • Broadened Python version support to include Python 3.11 and above.

@openshift-ci openshift-ci bot requested review from caponetto and paulovmr June 26, 2025 17:52
Contributor

openshift-ci bot commented Jun 26, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign jiridanek for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Contributor

Caution

There are some errors in your PipelineRun template.

PipelineRun Error
unknown `Object 'Kind' is missing in '

List of images referenced from the Python code generation scripts for Tekton pipelines.

The structure of this file must be compatible with

https://docs.renovatebot.com/modules/manager/tekton/

Specifically, see function getDeps and function getBundleValue() in

https://github.com/renovatebot/renovate/blob/main/lib/modules/manager/tekton/extract.ts

This is using the 'older-style' bundle references (see ^^^), because they are a bit less verbose

Konflux (MintMaker) will then update the hashes in this yaml together with the generated Tekton pipelines

because the default renovate.json config includes .tekton/**.yaml (and .yml) files

https://github.com/konflux-ci/mintmaker/blob/289fefb5c7ac18c978b96080c2628d55d0712e83/config/renovate/renovate.json#L62-L70

items:
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-buildah-remote-oci-ta:0.4@sha256:1d26a89f1ad48279999cdcad3cb5ce43dc08620a6c07d8dfe5cc9c9e17622551
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-show-sbom:0.1@sha256:04f15cbce548e1db7770eee3f155ccb2cc0140a6c371dc67e9a34d83673ea0c0
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-init:0.2@sha256:737682d073a65a486d59b2b30e3104b93edd8490e0cd5e9b4a39703e47363f0f
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-git-clone-oci-ta:0.1@sha256:9709088bf3c581d4763e9804d9ee3a1f06ad6a61c23237277057c4f0cdc4f9c3
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-prefetch-dependencies-oci-ta:0.2@sha256:153ef0382deef840d155f5146f134f39b480523a7d5c38ba9fea2b58792dd4b5
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-build-image-index:0.1@sha256:95be274b6d0432d4671e2c41294ec345121bdf01284b1c6c46b5537dc6b37e15
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-source-build-oci-ta:0.2@sha256:9fe82c9511f282287686f918bf1a543fcef417848e7a503357e988aab2887cee
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-deprecated-image-check:0.5@sha256:5d63b920b71192906fe4d6c4903f594e6f34c5edcff9d21714a08b5edcfbc667
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-clair-scan:0.2@sha256:712afcf63f3b5a97c371d37e637efbcc9e1c7ad158872339d00adc6413cd8851
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-ecosystem-cert-preflight-checks:0.2@sha256:00b13d06d17328e105b11619ee4db98b215ca6ac02314a4776aa5fc2a974f9c1
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-sast-snyk-check-oci-ta:0.3@sha256:a1cb59ed66a7be1949c9720660efb0a006e95ef05b3f67929dd8e310e1d7baef
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-clamav-scan:0.2@sha256:62c835adae22e36fce6684460b39206bc16752f1a4427cdbba4ee9afdd279670
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-sast-coverity-check-oci-ta:0.2@sha256:044412899f847dad17a64ae84f43ace5fd6fb976acbe64a42eb0a06bbff92499
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-coverity-availability-check:0.2@sha256:0b35292eed661c5e3ca307c0ba7f594d17555db2a1da567903b0b47697fa23ed
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-sast-shell-check-oci-ta:0.1@sha256:a591675c72f06fb9c5b1a3d60e6e4c58e4df5f7da180c7a4691a692a6e7e6496
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-sast-unicode-check-oci-ta:0.1@sha256:424f2f659c02998dc3a43e1ce869e3148982c59adb74f953f8fa91ff1c9ab86e
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-apply-tags:0.1@sha256:61c90b1c94a2a11cb11211a0d65884089b758c34254fcec164d185a402beae22
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-push-dockerfile-oci-ta:0.1@sha256:55a4ff2910ae2e4502f3841719935d37578bd52156bc789fcdf45ff48c2b048b
  - spec:
      taskRef:
        bundle: quay.io/konflux-ci/tekton-catalog/task-rpms-signature-scan:0.2@sha256:7b80f5a319d4ff1817fa097cbdbb9473635562f8ea3022e64933e387d3b68715

'`

Contributor

openshift-ci bot commented Jun 26, 2025

Hi @mtchoum1. Thanks for your PR.

I'm waiting for an opendatahub-io member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Contributor

coderabbitai bot commented Jun 26, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

This change migrates Python dependency management from Pipenv and Pipfile.lock files to the uv tool and requirements.txt files across multiple Jupyter-related Docker images and workflows. All Pipfile and Pipfile.lock usage is removed, Dockerfiles are updated to use uv for installation, and the dependency structure is consolidated and restructured in pyproject.toml with new dependency groups, source mappings, and conflict rules.

Changes

  • .github/workflows/uvlock-renewal.yaml: Workflow updated to manage uv.lock files using uv instead of Pipfile.lock with pipenv.
  • jupyter/*/ubi9-python-3.11/Dockerfile.*: Dockerfiles switched from micropipenv/Pipfile.lock to uv/requirements.txt for Python dependency installation.
  • jupyter/*/ubi9-python-3.11/Pipfile: All Pipfile dependency specification files deleted.
  • pyproject.toml: Added dependency groups, new source mappings, indices, conflict rules, and a relaxed Python version requirement.
  • scripts/sync-requirements-txt.sh: Script updated to use uv for generating requirements.txt files from uv.lock and the new dependency groups.
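The per-image export in scripts/sync-requirements-txt.sh presumably follows a loop along these lines (the directory-to-group mapping below is an assumption for illustration, not copied from the script):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Map each image directory to its dependency group in pyproject.toml
# (names here are illustrative assumptions)
declare -A groups=(
  ["jupyter/minimal/ubi9-python-3.11"]="jupyter-minimal-image"
  ["jupyter/datascience/ubi9-python-3.11"]="jupyter-datascience-image"
)

for dir in "${!groups[@]}"; do
  uv export --frozen --no-hashes \
    --group "${groups[$dir]}" \
    -o "${dir}/requirements.txt"
done
```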

Sequence Diagram(s)

sequenceDiagram
    participant Workflow
    participant uv
    participant Git
    Workflow->>uv: Install uv
    Workflow->>uv: Run `uv lock --python-version`
    uv-->>Workflow: Generates uv.lock
    Workflow->>Git: Add & commit uv.lock
sequenceDiagram
    participant Dockerfile
    participant uv
    participant requirements.txt
    Dockerfile->>uv: Install uv via pip
    Dockerfile->>requirements.txt: Copy requirements.txt into image
    Dockerfile->>uv: Run `uv pip install -r requirements.txt`
    uv-->>Dockerfile: Installs dependencies
    Dockerfile->>requirements.txt: Remove requirements.txt after install

Suggested labels

tide/merge-method-squash

Poem

🐇✨
Out with the Pipfile, in with the uv,
Dependencies grouped, oh what a move!
Docker builds faster, requirements anew,
The lock is refreshed, the workflow too.
With pyproject’s wisdom and scripts that align,
This bunny approves—your Python will shine!



Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@jiridanek
Member

@coderabbitai review

Contributor

coderabbitai bot commented Jun 26, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@openshift-ci openshift-ci bot added size/xxl and removed size/xxl labels Jun 26, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (8)
jupyter/datascience/ubi9-python-3.11/Dockerfile.cpu (1)

30-31: Duplicate of the stale-comment / unpinned uv issue noted for the TrustyAI Dockerfile

Please apply the same fix there.

jupyter/pytorch/ubi9-python-3.11/Dockerfile.cuda (1)

30-31: Same stale comment / unpinned uv concern as raised earlier – please align.

jupyter/rocm/pytorch/ubi9-python-3.11/Dockerfile.rocm (1)

30-31: Same stale comment / unpinned uv concern as raised earlier – please align.

jupyter/minimal/ubi9-python-3.11/Dockerfile.cuda (1)

176-182: Duplicate comment/layer optimisation remark

The “Install Python dependencies from Pipfile.lock file” note is obsolete and the two-layer copy/remove can be merged as previously suggested.

jupyter/minimal/ubi9-python-3.11/Dockerfile.cpu (2)

17-19: Keep comments in sync with implementation

Same adjustment as above.


56-62: Obsolete wording + layer slimming

See earlier CUDA/ROCm notes.

jupyter/rocm/tensorflow/ubi9-python-3.11/Dockerfile.rocm (2)

30-32: Comment/tool mismatch

Swap “micropipenv” → “uv”.


153-157: Comment accuracy + one-layer install

Same suggestion as in TensorFlow-CUDA Dockerfile.

🧹 Nitpick comments (10)
jupyter/datascience/ubi9-python-3.11/Dockerfile.cpu (1)

108-110: Prefer system-site installation to avoid $HOME/.local bloat

Running inside OpenShift as an arbitrary UID, uv pip install defaults to user-site. Installing into the system site-packages keeps the layer clean and avoids permission gymnastics later:

-    uv pip install -r requirements.txt && \
+    uv pip install --system -r requirements.txt && \
jupyter/tensorflow/ubi9-python-3.11/Dockerfile.cuda (2)

30-32: Comment talks about micropipenv, but code switched to uv

The in-line comment still mentions “micropipenv”/Pipfile.lock although the command now installs uv. This will confuse future maintainers.

-# Install micropipenv to deploy packages from Pipfile.lock
+# Install uv to deploy packages from requirements.txt
 RUN pip install --no-cache-dir -U uv

238-244: Outdated comment + minor image-size optimisation

  1. Comment again references Pipfile.lock; update for accuracy.
  2. requirements.txt is copied in one layer and deleted in the next, so it still bloats the previous layer. Copy & install in a single RUN keeps the file out of all layers:
-# Install Python packages and Jupyterlab extensions from Pipfile.lock
-COPY ${TENSORFLOW_SOURCE_CODE}/requirements.txt ./
-RUN echo "Installing softwares and packages" && \
-    uv pip install -r requirements.txt && \
-    rm -f ./requirements.txt && \
+# Install Python packages and JupyterLab extensions with uv
+RUN echo "Installing softwares and packages" && \
+    curl -sSL ${TENSORFLOW_SOURCE_CODE}/requirements.txt -o /tmp/req.txt && \
+    uv pip install -r /tmp/req.txt && \
+    rm -f /tmp/req.txt && \
jupyter/minimal/ubi9-python-3.11/Dockerfile.rocm (2)

17-19: Update misleading comment & pin uv for reproducibility

Same mismatch between comment and tooling as above; additionally, consider pinning uv to a known good version instead of always pulling latest.

-# Install micropipenv to deploy packages from Pipfile.lock
-RUN pip install --no-cache-dir -U uv
+# Install uv for requirements.txt based workflow
+RUN pip install --no-cache-dir -U uv==<version>

Ensure the chosen version exists: curl -s https://pypi.org/pypi/uv/json | jq '.releases|keys[-5:]'


90-96: Comment & layer issue identical to CUDA path

Update wording and collapse COPY+install to keep layers slim (see earlier diff example). This pattern repeats across several Dockerfiles; consider abstracting to a common script.

jupyter/minimal/ubi9-python-3.11/Dockerfile.cuda (1)

17-19: Same stale comment

Replace “micropipenv” with “uv”.

scripts/sync-requirements-txt.sh (2)

7-7: Update the comment – it no longer reflects reality

Line 7 still mentions “Pipfile.lock”, but the script now operates exclusively on requirements.txt generated from uv.lock. This will confuse future maintainers.


8-8: Prefer a more robust uv check / install

uv --version || pip install uv will install into whichever pip happens to be first on PATH, which can differ from the Python used by the script. Consider:

-uv --version || pip install uv
+command -v uv >/dev/null 2>&1 || python -m pip install --upgrade --no-cache-dir uv

Keeps the interpreter-pip pairing intact and avoids stale versions.

.github/workflows/uvlock-renewal.yaml (2)

2-3: Header & name still reference Pipfile locks

Line 2 comment and the workflow name should mention uv.lock, not pipfile.locks, for clarity and grep-ability.


62-65: Step title outdated

The step below is now installing uv; rename the title accordingly to avoid future confusion when scanning CI logs.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e98a63e and 51ec784.

⛔ Files ignored due to path filters (8)
  • jupyter/datascience/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
  • jupyter/minimal/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
  • jupyter/pytorch/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
  • jupyter/rocm/pytorch/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
  • jupyter/rocm/tensorflow/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
  • jupyter/tensorflow/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
  • jupyter/trustyai/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (19)
  • .github/workflows/uvlock-renewal.yaml (3 hunks)
  • jupyter/datascience/ubi9-python-3.11/Dockerfile.cpu (2 hunks)
  • jupyter/datascience/ubi9-python-3.11/Pipfile (0 hunks)
  • jupyter/minimal/ubi9-python-3.11/Dockerfile.cpu (2 hunks)
  • jupyter/minimal/ubi9-python-3.11/Dockerfile.cuda (2 hunks)
  • jupyter/minimal/ubi9-python-3.11/Dockerfile.rocm (2 hunks)
  • jupyter/minimal/ubi9-python-3.11/Pipfile (0 hunks)
  • jupyter/pytorch/ubi9-python-3.11/Dockerfile.cuda (2 hunks)
  • jupyter/pytorch/ubi9-python-3.11/Pipfile (0 hunks)
  • jupyter/rocm/pytorch/ubi9-python-3.11/Dockerfile.rocm (2 hunks)
  • jupyter/rocm/pytorch/ubi9-python-3.11/Pipfile (0 hunks)
  • jupyter/rocm/tensorflow/ubi9-python-3.11/Dockerfile.rocm (2 hunks)
  • jupyter/rocm/tensorflow/ubi9-python-3.11/Pipfile (0 hunks)
  • jupyter/tensorflow/ubi9-python-3.11/Dockerfile.cuda (2 hunks)
  • jupyter/tensorflow/ubi9-python-3.11/Pipfile (0 hunks)
  • jupyter/trustyai/ubi9-python-3.11/Dockerfile.cpu (2 hunks)
  • jupyter/trustyai/ubi9-python-3.11/Pipfile (0 hunks)
  • pyproject.toml (2 hunks)
  • scripts/sync-requirements-txt.sh (1 hunks)
💤 Files with no reviewable changes (7)
  • jupyter/minimal/ubi9-python-3.11/Pipfile
  • jupyter/rocm/pytorch/ubi9-python-3.11/Pipfile
  • jupyter/pytorch/ubi9-python-3.11/Pipfile
  • jupyter/trustyai/ubi9-python-3.11/Pipfile
  • jupyter/rocm/tensorflow/ubi9-python-3.11/Pipfile
  • jupyter/tensorflow/ubi9-python-3.11/Pipfile
  • jupyter/datascience/ubi9-python-3.11/Pipfile
🧰 Additional context used
🧠 Learnings (9)
📓 Common learnings
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: In the opendatahub-io/notebooks repository, there is a known issue with missing `runtimes/rocm/pytorch/ubi9-python-3.11/kustomize/base/kustomization.yaml` file that causes rocm runtime tests to fail with "no such file or directory" error. This is tracked in JIRA RHOAIENG-22044 and was intended to be fixed in PR #1015.
jupyter/tensorflow/ubi9-python-3.11/Dockerfile.cuda (2)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1154
File: manifests/base/jupyter-pytorch-notebook-imagestream.yaml:0-0
Timestamp: 2025-06-16T11:06:33.139Z
Learning: In the opendatahub-io/notebooks repository, N-1 versions of images in manifest files (like imagestream.yaml files) should not be updated regularly. The versions of packages like codeflare-sdk in N-1 images are frozen to what was released when the image was moved from N to N-1 version. N-1 images are only updated for security vulnerabilities of packages, not for regular version bumps. This is why the version of packages in N-1 images may be quite old compared to the latest N version.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: In the opendatahub-io/notebooks repository, there is a known issue with missing `runtimes/rocm/pytorch/ubi9-python-3.11/kustomize/base/kustomization.yaml` file that causes rocm runtime tests to fail with "no such file or directory" error. This is tracked in JIRA RHOAIENG-22044 and was intended to be fixed in PR #1015.
jupyter/rocm/pytorch/ubi9-python-3.11/Dockerfile.rocm (1)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: In the opendatahub-io/notebooks repository, there is a known issue with missing `runtimes/rocm/pytorch/ubi9-python-3.11/kustomize/base/kustomization.yaml` file that causes rocm runtime tests to fail with "no such file or directory" error. This is tracked in JIRA RHOAIENG-22044 and was intended to be fixed in PR #1015.
jupyter/pytorch/ubi9-python-3.11/Dockerfile.cuda (2)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: In the opendatahub-io/notebooks repository, there is a known issue with missing `runtimes/rocm/pytorch/ubi9-python-3.11/kustomize/base/kustomization.yaml` file that causes rocm runtime tests to fail with "no such file or directory" error. This is tracked in JIRA RHOAIENG-22044 and was intended to be fixed in PR #1015.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1154
File: manifests/base/jupyter-pytorch-notebook-imagestream.yaml:0-0
Timestamp: 2025-06-16T11:06:33.139Z
Learning: In the opendatahub-io/notebooks repository, N-1 versions of images in manifest files (like imagestream.yaml files) should not be updated regularly. The versions of packages like codeflare-sdk in N-1 images are frozen to what was released when the image was moved from N to N-1 version. N-1 images are only updated for security vulnerabilities of packages, not for regular version bumps. This is why the version of packages in N-1 images may be quite old compared to the latest N version.
jupyter/minimal/ubi9-python-3.11/Dockerfile.cuda (1)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1154
File: manifests/base/jupyter-pytorch-notebook-imagestream.yaml:0-0
Timestamp: 2025-06-16T11:06:33.139Z
Learning: In the opendatahub-io/notebooks repository, N-1 versions of images in manifest files (like imagestream.yaml files) should not be updated regularly. The versions of packages like codeflare-sdk in N-1 images are frozen to what was released when the image was moved from N to N-1 version. N-1 images are only updated for security vulnerabilities of packages, not for regular version bumps. This is why the version of packages in N-1 images may be quite old compared to the latest N version.
jupyter/rocm/tensorflow/ubi9-python-3.11/Dockerfile.rocm (1)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: In the opendatahub-io/notebooks repository, there is a known issue with missing `runtimes/rocm/pytorch/ubi9-python-3.11/kustomize/base/kustomization.yaml` file that causes rocm runtime tests to fail with "no such file or directory" error. This is tracked in JIRA RHOAIENG-22044 and was intended to be fixed in PR #1015.
jupyter/datascience/ubi9-python-3.11/Dockerfile.cpu (1)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1154
File: manifests/base/jupyter-pytorch-notebook-imagestream.yaml:0-0
Timestamp: 2025-06-16T11:06:33.139Z
Learning: In the opendatahub-io/notebooks repository, N-1 versions of images in manifest files (like imagestream.yaml files) should not be updated regularly. The versions of packages like codeflare-sdk in N-1 images are frozen to what was released when the image was moved from N to N-1 version. N-1 images are only updated for security vulnerabilities of packages, not for regular version bumps. This is why the version of packages in N-1 images may be quite old compared to the latest N version.
jupyter/minimal/ubi9-python-3.11/Dockerfile.rocm (1)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: In the opendatahub-io/notebooks repository, there is a known issue with missing `runtimes/rocm/pytorch/ubi9-python-3.11/kustomize/base/kustomization.yaml` file that causes rocm runtime tests to fail with "no such file or directory" error. This is tracked in JIRA RHOAIENG-22044 and was intended to be fixed in PR #1015.
pyproject.toml (5)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1154
File: manifests/base/jupyter-pytorch-notebook-imagestream.yaml:0-0
Timestamp: 2025-06-16T11:06:33.139Z
Learning: In the opendatahub-io/notebooks repository, N-1 versions of images in manifest files (like imagestream.yaml files) should not be updated regularly. The versions of packages like codeflare-sdk in N-1 images are frozen to what was released when the image was moved from N to N-1 version. N-1 images are only updated for security vulnerabilities of packages, not for regular version bumps. This is why the version of packages in N-1 images may be quite old compared to the latest N version.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: Runtime deployment tests in opendatahub-io/notebooks may show PodSecurity warnings about allowPrivilegeEscalation, capabilities, runAsNonRoot, and seccompProfile settings. These warnings occur on OpenShift but not on GitHub Actions because GitHub Actions uses upstream Kubernetes without SecurityContextConstraints (SCC).
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-26T16:19:31.249Z
Learning: In the opendatahub-io/notebooks repository, the Playwright Docker image version in `.github/workflows/build-notebooks-TEMPLATE.yaml` (format: `mcr.microsoft.com/playwright:v1.53.1-noble`) must always match the `@playwright/test` version specified in the `tests/browser/package.json` file. Both versions need to be updated together to maintain consistency between CI/CD pipeline and project dependencies.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-26T16:19:31.249Z
Learning: In the opendatahub-io/notebooks repository, the Playwright Docker image version in `.github/workflows/build-notebooks-TEMPLATE.yaml` (format: `mcr.microsoft.com/playwright:v1.53.1-noble`) must always match the `@playwright/test` version specified in the `tests/browser/package.json` file. Both versions need to be updated together to maintain consistency between CI/CD pipeline and project dependencies.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1154
File: manifests/base/jupyter-tensorflow-notebook-imagestream.yaml:45-45
Timestamp: 2025-06-13T08:34:01.300Z
Learning: When updating dependency versions in `manifests/base/*-imagestream.yaml`, the project convention is to modify only the newest tag (e.g., "2025.1") and intentionally leave earlier tags (e.g., "2024.2") unchanged.
⏰ Context from checks skipped due to timeout of 90000ms (10)
  • GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.11, false) / build
  • GitHub Check: build (jupyter-trustyai-ubi9-python-3.11, false) / build
  • GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.11, false) / build
  • GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.11, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.11, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.11, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.11, false) / build
  • GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.11, false) / build
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.11, false) / build
  • GitHub Check: check-generated-code
🔇 Additional comments (1)
pyproject.toml (1)

118-128: Hard-pin to unreleased torch==2.6.0 risks blocking the build

torch==2.6.0 is referenced in three groups (pytorchcuda, pytorchrocm, trustyai). No such wheel exists on PyPI or the NVIDIA/ROCm indices (latest published is 2.3.*). A resolver run from a clean cache will error out.

Action: relax the pin (torch~=2.3 or leave unpinned) or stage your own wheel repository before merging.
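One way to verify the pin before merging, mirroring the PyPI JSON check suggested elsewhere in this review (requires curl and jq):

```shell
# List the most recent torch releases published on PyPI
curl -s https://pypi.org/pypi/torch/json | jq -r '.releases | keys[]' | sort -V | tail -5
```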

Comment on lines +76 to +87
datascience-base = [
"boto3 ~=1.37.8",
"kafka-python-ng ~=2.2.3",
"kfp ~=2.12.1",
"plotly ~=6.0.0",
"scipy ~=1.15.2",
"skl2onnx ~=1.18.0",
"onnxconverter-common ~=1.13.0",
"codeflare-sdk ~=0.27.0",
"kubeflow-training ==1.9.0"
]

Contributor


⚠️ Potential issue

NumPy/SciPy version clash in TensorFlow image build path

datascience-base pins scipy~=1.15.2 (needs NumPy ≥ 2.x) while datascience-tensorflow still pins numpy~=1.26.4.
Both groups are combined in jupyter-tensorflow-image (lines 156-165). The resulting image will end up with a NumPy too old for the SciPy wheel and will fail at import-time (ImportError: numpy.core.multiarray failed to import).

Either:

  1. Bump NumPy in datascience-tensorflow to ~=2.*, or
  2. Lower SciPy to the last build compatible with NumPy 1.26 (≈ 1.11).

Also applies to: 95-100, 156-165

🤖 Prompt for AI Agents
In pyproject.toml around lines 76 to 87, there is a version conflict where
datascience-base pins scipy to 1.15.2 requiring NumPy ≥ 2.x, but
datascience-tensorflow pins numpy to 1.26.4, causing import errors when combined
in jupyter-tensorflow-image (lines 156-165). To fix this, either update the
numpy version in datascience-tensorflow to ~=2.* to match scipy's requirement or
downgrade scipy in datascience-base to a version compatible with numpy 1.26,
such as around 1.11. Apply the same fix to the dependencies listed at lines
95-100 and 156-165 to ensure consistency.
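The clash can be checked mechanically with the `packaging` library. In this sketch, the ">= 2.0" NumPy floor for scipy ~=1.15.2 is taken from the review comment as an assumption, not read from the actual scipy wheel metadata:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Assumed from the review comment, not from scipy's published metadata.
scipy_numpy_req = SpecifierSet(">=2.0")

# Pin carried by the datascience-tensorflow group.
pinned_numpy = Version("1.26.4")

# → False: the combined image cannot satisfy both groups at once.
print(pinned_numpy in scipy_numpy_req)
```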

Comment on lines 30 to 31
# Install micropipenv to deploy packages from Pipfile.lock
RUN pip install --no-cache-dir -U "micropipenv[toml]"
RUN pip install --no-cache-dir -U uv
Contributor

🛠️ Refactor suggestion

Stale comment & unpinned uv hinder reproducibility

The comment still refers to micropipenv, but the actual tool is uv.
Also, pip install -U uv pulls the latest release on every build, making image digests non-deterministic.

-# Install micropipenv to deploy packages from Pipfile.lock
-RUN pip install --no-cache-dir -U uv
+# Install uv (resolver/installer used by the image)
+RUN pip install --no-cache-dir 'uv==0.2.11'
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Install micropipenv to deploy packages from Pipfile.lock
RUN pip install --no-cache-dir -U "micropipenv[toml]"
RUN pip install --no-cache-dir -U uv
# Install uv (resolver/installer used by the image)
RUN pip install --no-cache-dir 'uv==0.2.11'
🤖 Prompt for AI Agents
In jupyter/trustyai/ubi9-python-3.11/Dockerfile.cpu at lines 30-31, update the
comment to correctly reference the tool `uv` instead of the outdated
`micropipenv`. Additionally, pin the `uv` package to a specific version in the
pip install command to ensure reproducible builds and deterministic image
digests, avoiding the use of the `-U` flag which fetches the latest version
every time.

Comment on lines 16 to 21
  temp_dir="${full_path%/*}"
  image_name="${temp_dir##*/}"

  if [[ "$full_path" == *"/rocm/"* ]]; then
    image_name="${image_name}-rocm"
  fi
Contributor

⚠️ Potential issue

Image name derivation is wrong – produces jupyter or rocm-rocm

temp_dir="${full_path%/*}"; image_name="${temp_dir##*/}" yields the parent directory, not the directory that actually contains the image.
Examples:
jupyter/minimal → image_name=jupyter
jupyter/rocm/pytorch → image_name=rocm → later suffixed to rocm-rocm

This breaks the --group jupyter-${image_name}-image lookup and exports the wrong dependency set.
Fix:

-  temp_dir="${full_path%/*}"
-  image_name="${temp_dir##*/}"
+  image_name="$(basename "$full_path")"
 
   if [[ "$full_path" == *"/rocm/"* ]]; then
     image_name="${image_name}-rocm"
   fi
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
  temp_dir="${full_path%/*}"
  image_name="${temp_dir##*/}"

  if [[ "$full_path" == *"/rocm/"* ]]; then
    image_name="${image_name}-rocm"
  fi
  image_name="$(basename "$full_path")"

  if [[ "$full_path" == *"/rocm/"* ]]; then
    image_name="${image_name}-rocm"
  fi
🤖 Prompt for AI Agents
In scripts/sync-requirements-txt.sh around lines 16 to 21, the image_name is
incorrectly derived from the parent directory of full_path, causing wrong values
like 'jupyter' or 'rocm-rocm'. To fix this, derive the name from the directory
that actually contains the image, for example image_name="$(basename "$full_path")",
keeping the existing -rocm suffix logic. This will ensure the group lookup and
dependency exports use the correct image name.
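The difference between the two derivations is easy to see on a couple of sample paths (the paths here are illustrative, mirroring the layout the review describes):

```shell
for full_path in jupyter/minimal jupyter/rocm/pytorch; do
  temp_dir="${full_path%/*}"
  parent="${temp_dir##*/}"           # current logic: yields the parent directory
  leaf="$(basename "$full_path")"    # suggested fix: yields the image directory
  echo "$full_path -> parent=$parent leaf=$leaf"
done
# prints:
# jupyter/minimal -> parent=jupyter leaf=minimal
# jupyter/rocm/pytorch -> parent=rocm leaf=pytorch
```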

Comment on lines 67 to 72
      - name: Run make refresh-pipfilelock-files and push the changes back to the branch
        run: |
          make refresh-pipfilelock-files PYTHON_VERSION=${{ env.PYTHON_VERSION }} INCLUDE_OPT_DIRS=${{ env.INCLUDE_OPT_DIRS }}
          git add .
          git commit -m "Update Pipfile.lock files by piplock-renewal.yaml action"
          uv lock --python ${{ env.PYTHON_VERSION }}
          git add uv.lock
          git commit -m "Update uv.lock files by uvlock-renewal.yaml action"
          git push origin ${{ env.BRANCH }}
Contributor

⚠️ Potential issue

Workflow fails when there is nothing to commit

git commit exits with status 1 if the index is empty, causing the whole job to fail on a no-op run. Guard the commit:

-      - name: Run make refresh-pipfilelock-files and push the changes back to the branch
+      - name: Refresh uv.lock and push if there are changes
         run: |
-          uv lock --python ${{ env.PYTHON_VERSION }}
-          git add uv.lock
-          git commit -m "Update uv.lock files by uvlock-renewal.yaml action"
-          git push origin ${{ env.BRANCH }}
+          uv lock --python ${{ env.PYTHON_VERSION }}
+          git add uv.lock
+          if ! git diff --cached --quiet; then
+            git commit -m "Update uv.lock file by uvlock-renewal workflow"
+            git push origin ${{ env.BRANCH }}
+          else
+            echo "uv.lock already up to date – nothing to commit"
+          fi
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
      - name: Run make refresh-pipfilelock-files and push the changes back to the branch
        run: |
          make refresh-pipfilelock-files PYTHON_VERSION=${{ env.PYTHON_VERSION }} INCLUDE_OPT_DIRS=${{ env.INCLUDE_OPT_DIRS }}
          git add .
          git commit -m "Update Pipfile.lock files by piplock-renewal.yaml action"
          uv lock --python ${{ env.PYTHON_VERSION }}
          git add uv.lock
          git commit -m "Update uv.lock files by uvlock-renewal.yaml action"
          git push origin ${{ env.BRANCH }}
      - name: Refresh uv.lock and push if there are changes
        run: |
          uv lock --python ${{ env.PYTHON_VERSION }}
          git add uv.lock
          if ! git diff --cached --quiet; then
            git commit -m "Update uv.lock file by uvlock-renewal workflow"
            git push origin ${{ env.BRANCH }}
          else
            echo "uv.lock already up to date – nothing to commit"
          fi
🤖 Prompt for AI Agents
In .github/workflows/uvlock-renewal.yaml around lines 67 to 72, the workflow
fails if there is nothing to commit because `git commit` returns an error when
the index is empty. To fix this, add a condition to check if there are any
changes staged before running `git commit`. You can do this by running `git diff
--cached --quiet` and only committing if there are changes, preventing the job
from failing on no-op runs.
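The failure mode and the guard can be reproduced in a throwaway repository (assumes `git` is on PATH; the repo location and committer identity are illustrative):

```shell
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git -c user.name=ci -c user.email=ci@example.com commit -q --allow-empty -m "init"

# Unguarded: exits non-zero because nothing is staged, which fails a CI step.
if ! git commit -m "update" >/dev/null 2>&1; then
  echo "unguarded commit failed"
fi

# Guarded: a clean no-op when the index is empty.
if ! git diff --cached --quiet; then
  git commit -m "update"
else
  echo "nothing staged - skipping commit"
fi
```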

@atheo89
Member

atheo89 commented Jun 30, 2025

/hold

@jiridanek
Member

/hold

This pr goes to feature branch and not main, just saying because it's easy to miss.

@mtchoum1
Contributor Author

mtchoum1 commented Jul 3, 2025

@jiridanek Thank you for the questions on Slack. I have created a document with the answers to most of them and am still looking into some of the remaining questions. https://docs.google.com/document/d/1kocr25L4E_GTlAYWi75Bt4jK5xE7F-WHx0RvlzsUh-k/edit?usp=sharing

@atheo89
Member

atheo89 commented Jul 3, 2025

/hold

This pr goes to feature branch and not main, just saying because it's easy to miss.

Oh thanks, didn't notice that!
/unhold
then!

Need to take a look at that series of PRs one day though..
