[RHOAIENG-17006] chore(pyproject.toml): migrate test dependencies from pipenv to uv #1204
base: feature-uv
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: (none yet). The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing
Caution There are some errors in your PipelineRun template.
List of images referenced from the Python code generation scripts for Tekton pipelines. The structure of this file must be compatible with https://docs.renovatebot.com/modules/manager/tekton/. Specifically, see
Hi @mtchoum1. Thanks for your PR. I'm waiting for an opendatahub-io member to verify that this patch is reasonable to test. If it is, they should reply with the appropriate command. Once the patch is verified, the new status will be reflected by the corresponding label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI or the configuration file. You can disable this status message by setting the corresponding option.

Walkthrough
This change migrates Python dependency management from Pipenv to uv.

Changes
Sequence Diagram(s)

sequenceDiagram
    participant Workflow
    participant uv
    participant Git
    Workflow->>uv: Install uv
    Workflow->>uv: Run `uv lock --python-version`
    uv-->>Workflow: Generates uv.lock
    Workflow->>Git: Add & commit uv.lock

sequenceDiagram
    participant Dockerfile
    participant uv
    participant requirements.txt
    Dockerfile->>uv: Install uv via pip
    Dockerfile->>requirements.txt: Copy requirements.txt into image
    Dockerfile->>uv: Run `uv pip install -r requirements.txt`
    uv-->>Dockerfile: Installs dependencies
    Dockerfile->>requirements.txt: Remove requirements.txt after install
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 4
♻️ Duplicate comments (8)
jupyter/datascience/ubi9-python-3.11/Dockerfile.cpu (1)
Lines 30-31: Duplicate of the stale-comment / unpinned uv issue noted for the TrustyAI Dockerfile. Please apply the same fix there.
jupyter/pytorch/ubi9-python-3.11/Dockerfile.cuda (1)
Lines 30-31: Same stale comment / unpinned uv concern as raised earlier – please align.
jupyter/rocm/pytorch/ubi9-python-3.11/Dockerfile.rocm (1)
Lines 30-31: Same stale comment / unpinned uv concern as raised earlier – please align.
jupyter/minimal/ubi9-python-3.11/Dockerfile.cuda (1)
Lines 176-182: Duplicate comment / layer-optimisation remark. The “Install Python dependencies from Pipfile.lock file” note is obsolete, and the two-layer copy/remove can be merged as previously suggested.
jupyter/minimal/ubi9-python-3.11/Dockerfile.cpu (2)
Lines 17-19: Keep comments in sync with implementation. Same adjustment as above.
Lines 56-62: Obsolete wording + layer slimming. See earlier CUDA/ROCm notes.
jupyter/rocm/tensorflow/ubi9-python-3.11/Dockerfile.rocm (2)
Lines 30-32: Comment/tool mismatch. Swap “micropipenv” → “uv”.
Lines 153-157: Comment accuracy + one-layer install. Same suggestion as in the TensorFlow-CUDA Dockerfile.
🧹 Nitpick comments (10)
jupyter/datascience/ubi9-python-3.11/Dockerfile.cpu (1)
Lines 108-110: Prefer system-site installation to avoid $HOME/.local bloat. Running inside OpenShift as an arbitrary UID, uv pip install defaults to user-site. Installing into the system site-packages keeps the layer clean and avoids permission gymnastics later:

- uv pip install -r requirements.txt && \
+ uv pip install --system -r requirements.txt && \

jupyter/tensorflow/ubi9-python-3.11/Dockerfile.cuda (2)
Lines 30-32: Comment talks about micropipenv, but code switched to uv. The in-line comment still mentions “micropipenv”/Pipfile.lock although the command now installs uv. This will confuse future maintainers.

-# Install micropipenv to deploy packages from Pipfile.lock
+# Install uv to deploy packages from requirements.txt
 RUN pip install --no-cache-dir -U uv
Lines 238-244: Outdated comment + minor image-size optimisation.
- The comment again references Pipfile.lock; update it for accuracy.
- requirements.txt is copied in one layer and deleted in the next, so it still bloats the previous layer. Copying and installing in a single RUN keeps the file out of all layers:

-# Install Python packages and Jupyterlab extensions from Pipfile.lock
-COPY ${TENSORFLOW_SOURCE_CODE}/requirements.txt ./
-RUN echo "Installing softwares and packages" && \
-    uv pip install -r requirements.txt && \
-    rm -f ./requirements.txt && \
+# Install Python packages and JupyterLab extensions with uv
+RUN echo "Installing softwares and packages" && \
+    curl -sSL ${TENSORFLOW_SOURCE_CODE}/requirements.txt -o /tmp/req.txt && \
+    uv pip install -r /tmp/req.txt && \
+    rm -f /tmp/req.txt && \

jupyter/minimal/ubi9-python-3.11/Dockerfile.rocm (2)
Lines 17-19: Update misleading comment & pin uv for reproducibility. Same mismatch between comment and tooling as above; additionally, consider pinning uv to a known good version instead of always pulling the latest.

-# Install micropipenv to deploy packages from Pipfile.lock
-RUN pip install --no-cache-dir -U uv
+# Install uv for requirements.txt based workflow
+RUN pip install --no-cache-dir -U uv==<version>

Ensure the chosen version exists:
curl -s https://pypi.org/pypi/uv/json | jq '.releases|keys[-5:]'
Lines 90-96: Comment & layer issue identical to the CUDA path. Update the wording and collapse COPY+install to keep layers slim (see the earlier diff example). This pattern repeats across several Dockerfiles; consider abstracting it into a common script.
jupyter/minimal/ubi9-python-3.11/Dockerfile.cuda (1)
Lines 17-19: Same stale comment. Replace “micropipenv” with “uv”.
scripts/sync-requirements-txt.sh (2)
Line 7: Update the comment – it no longer reflects reality. Line 7 still mentions “Pipfile.lock”, but the script now operates exclusively on requirements.txt generated from uv.lock. This will confuse future maintainers.
Line 8: Prefer a more robust uv check / install. uv --version || pip install uv will install into whichever pip happens to be first on PATH, which can differ from the Python used by the script. Consider:

-uv --version || pip install uv
+command -v uv >/dev/null 2>&1 || python -m pip install --upgrade --no-cache-dir uv

This keeps the interpreter-pip pairing intact and avoids stale versions.
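The interpreter-pip pairing point can be sketched as a small guard. This is a hedged illustration, not the script's actual code: the helper name `ensure_tool` is hypothetical, and the fallback installs through `python3 -m pip` so pip and the interpreter it serves stay paired.

```shell
# Hypothetical helper: install a tool only if it is missing, and always via
# `python3 -m pip` so the pip used matches the interpreter that will run it.
ensure_tool() {
  tool="$1"      # executable to look for on PATH
  package="$2"   # pip package that provides it
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool already present"
    return 0
  fi
  python3 -m pip install --upgrade --no-cache-dir "$package"
}

ensure_tool sh sh   # 'sh' always exists, so no install is attempted
```

Because the guard checks `command -v` first, the network-dependent install branch only runs when the tool is genuinely absent.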
.github/workflows/uvlock-renewal.yaml (2)
Lines 2-3: Header & name still reference Pipfile locks. The line 2 comment and the workflow name should mention uv.lock, not pipfile.locks, for clarity and grep-ability.
Lines 62-65: Step title outdated. The step below now installs uv; rename the title accordingly to avoid future confusion when scanning CI logs.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (8)
- jupyter/datascience/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
- jupyter/minimal/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
- jupyter/pytorch/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
- jupyter/rocm/pytorch/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
- jupyter/rocm/tensorflow/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
- jupyter/tensorflow/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
- jupyter/trustyai/ubi9-python-3.11/Pipfile.lock is excluded by !**/*.lock
- uv.lock is excluded by !**/*.lock
📒 Files selected for processing (19)
- .github/workflows/uvlock-renewal.yaml (3 hunks)
- jupyter/datascience/ubi9-python-3.11/Dockerfile.cpu (2 hunks)
- jupyter/datascience/ubi9-python-3.11/Pipfile (0 hunks)
- jupyter/minimal/ubi9-python-3.11/Dockerfile.cpu (2 hunks)
- jupyter/minimal/ubi9-python-3.11/Dockerfile.cuda (2 hunks)
- jupyter/minimal/ubi9-python-3.11/Dockerfile.rocm (2 hunks)
- jupyter/minimal/ubi9-python-3.11/Pipfile (0 hunks)
- jupyter/pytorch/ubi9-python-3.11/Dockerfile.cuda (2 hunks)
- jupyter/pytorch/ubi9-python-3.11/Pipfile (0 hunks)
- jupyter/rocm/pytorch/ubi9-python-3.11/Dockerfile.rocm (2 hunks)
- jupyter/rocm/pytorch/ubi9-python-3.11/Pipfile (0 hunks)
- jupyter/rocm/tensorflow/ubi9-python-3.11/Dockerfile.rocm (2 hunks)
- jupyter/rocm/tensorflow/ubi9-python-3.11/Pipfile (0 hunks)
- jupyter/tensorflow/ubi9-python-3.11/Dockerfile.cuda (2 hunks)
- jupyter/tensorflow/ubi9-python-3.11/Pipfile (0 hunks)
- jupyter/trustyai/ubi9-python-3.11/Dockerfile.cpu (2 hunks)
- jupyter/trustyai/ubi9-python-3.11/Pipfile (0 hunks)
- pyproject.toml (2 hunks)
- scripts/sync-requirements-txt.sh (1 hunks)
💤 Files with no reviewable changes (7)
- jupyter/minimal/ubi9-python-3.11/Pipfile
- jupyter/rocm/pytorch/ubi9-python-3.11/Pipfile
- jupyter/pytorch/ubi9-python-3.11/Pipfile
- jupyter/trustyai/ubi9-python-3.11/Pipfile
- jupyter/rocm/tensorflow/ubi9-python-3.11/Pipfile
- jupyter/tensorflow/ubi9-python-3.11/Pipfile
- jupyter/datascience/ubi9-python-3.11/Pipfile
🧰 Additional context used
🧠 Learnings (9)
📓 Common learnings
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: In the opendatahub-io/notebooks repository, there is a known issue with missing `runtimes/rocm/pytorch/ubi9-python-3.11/kustomize/base/kustomization.yaml` file that causes rocm runtime tests to fail with "no such file or directory" error. This is tracked in JIRA RHOAIENG-22044 and was intended to be fixed in PR #1015.
jupyter/tensorflow/ubi9-python-3.11/Dockerfile.cuda (2)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1154
File: manifests/base/jupyter-pytorch-notebook-imagestream.yaml:0-0
Timestamp: 2025-06-16T11:06:33.139Z
Learning: In the opendatahub-io/notebooks repository, N-1 versions of images in manifest files (like imagestream.yaml files) should not be updated regularly. The versions of packages like codeflare-sdk in N-1 images are frozen to what was released when the image was moved from N to N-1 version. N-1 images are only updated for security vulnerabilities of packages, not for regular version bumps. This is why the version of packages in N-1 images may be quite old compared to the latest N version.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: In the opendatahub-io/notebooks repository, there is a known issue with missing `runtimes/rocm/pytorch/ubi9-python-3.11/kustomize/base/kustomization.yaml` file that causes rocm runtime tests to fail with "no such file or directory" error. This is tracked in JIRA RHOAIENG-22044 and was intended to be fixed in PR #1015.
jupyter/rocm/pytorch/ubi9-python-3.11/Dockerfile.rocm (1)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: In the opendatahub-io/notebooks repository, there is a known issue with missing `runtimes/rocm/pytorch/ubi9-python-3.11/kustomize/base/kustomization.yaml` file that causes rocm runtime tests to fail with "no such file or directory" error. This is tracked in JIRA RHOAIENG-22044 and was intended to be fixed in PR #1015.
jupyter/pytorch/ubi9-python-3.11/Dockerfile.cuda (2)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: In the opendatahub-io/notebooks repository, there is a known issue with missing `runtimes/rocm/pytorch/ubi9-python-3.11/kustomize/base/kustomization.yaml` file that causes rocm runtime tests to fail with "no such file or directory" error. This is tracked in JIRA RHOAIENG-22044 and was intended to be fixed in PR #1015.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1154
File: manifests/base/jupyter-pytorch-notebook-imagestream.yaml:0-0
Timestamp: 2025-06-16T11:06:33.139Z
Learning: In the opendatahub-io/notebooks repository, N-1 versions of images in manifest files (like imagestream.yaml files) should not be updated regularly. The versions of packages like codeflare-sdk in N-1 images are frozen to what was released when the image was moved from N to N-1 version. N-1 images are only updated for security vulnerabilities of packages, not for regular version bumps. This is why the version of packages in N-1 images may be quite old compared to the latest N version.
jupyter/minimal/ubi9-python-3.11/Dockerfile.cuda (1)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1154
File: manifests/base/jupyter-pytorch-notebook-imagestream.yaml:0-0
Timestamp: 2025-06-16T11:06:33.139Z
Learning: In the opendatahub-io/notebooks repository, N-1 versions of images in manifest files (like imagestream.yaml files) should not be updated regularly. The versions of packages like codeflare-sdk in N-1 images are frozen to what was released when the image was moved from N to N-1 version. N-1 images are only updated for security vulnerabilities of packages, not for regular version bumps. This is why the version of packages in N-1 images may be quite old compared to the latest N version.
jupyter/rocm/tensorflow/ubi9-python-3.11/Dockerfile.rocm (1)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: In the opendatahub-io/notebooks repository, there is a known issue with missing `runtimes/rocm/pytorch/ubi9-python-3.11/kustomize/base/kustomization.yaml` file that causes rocm runtime tests to fail with "no such file or directory" error. This is tracked in JIRA RHOAIENG-22044 and was intended to be fixed in PR #1015.
jupyter/datascience/ubi9-python-3.11/Dockerfile.cpu (1)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1154
File: manifests/base/jupyter-pytorch-notebook-imagestream.yaml:0-0
Timestamp: 2025-06-16T11:06:33.139Z
Learning: In the opendatahub-io/notebooks repository, N-1 versions of images in manifest files (like imagestream.yaml files) should not be updated regularly. The versions of packages like codeflare-sdk in N-1 images are frozen to what was released when the image was moved from N to N-1 version. N-1 images are only updated for security vulnerabilities of packages, not for regular version bumps. This is why the version of packages in N-1 images may be quite old compared to the latest N version.
jupyter/minimal/ubi9-python-3.11/Dockerfile.rocm (1)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: In the opendatahub-io/notebooks repository, there is a known issue with missing `runtimes/rocm/pytorch/ubi9-python-3.11/kustomize/base/kustomization.yaml` file that causes rocm runtime tests to fail with "no such file or directory" error. This is tracked in JIRA RHOAIENG-22044 and was intended to be fixed in PR #1015.
pyproject.toml (5)
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1154
File: manifests/base/jupyter-pytorch-notebook-imagestream.yaml:0-0
Timestamp: 2025-06-16T11:06:33.139Z
Learning: In the opendatahub-io/notebooks repository, N-1 versions of images in manifest files (like imagestream.yaml files) should not be updated regularly. The versions of packages like codeflare-sdk in N-1 images are frozen to what was released when the image was moved from N to N-1 version. N-1 images are only updated for security vulnerabilities of packages, not for regular version bumps. This is why the version of packages in N-1 images may be quite old compared to the latest N version.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: Runtime deployment tests in opendatahub-io/notebooks may show PodSecurity warnings about allowPrivilegeEscalation, capabilities, runAsNonRoot, and seccompProfile settings. These warnings occur on OpenShift but not on GitHub Actions because GitHub Actions uses upstream Kubernetes without SecurityContextConstraints (SCC).
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-26T16:19:31.249Z
Learning: In the opendatahub-io/notebooks repository, the Playwright Docker image version in `.github/workflows/build-notebooks-TEMPLATE.yaml` (format: `mcr.microsoft.com/playwright:v1.53.1-noble`) must always match the `@playwright/test` version specified in the `tests/browser/package.json` file. Both versions need to be updated together to maintain consistency between CI/CD pipeline and project dependencies.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1154
File: manifests/base/jupyter-tensorflow-notebook-imagestream.yaml:45-45
Timestamp: 2025-06-13T08:34:01.300Z
Learning: When updating dependency versions in `manifests/base/*-imagestream.yaml`, the project convention is to modify only the newest tag (e.g., "2025.1") and intentionally leave earlier tags (e.g., "2024.2") unchanged.
⏰ Context from checks skipped due to timeout of 90000ms (10)
- GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.11, false) / build
- GitHub Check: build (jupyter-trustyai-ubi9-python-3.11, false) / build
- GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.11, false) / build
- GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.11, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.11, false) / build
- GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.11, false) / build
- GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.11, false) / build
- GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.11, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.11, false) / build
- GitHub Check: check-generated-code
🔇 Additional comments (1)
pyproject.toml (1)
Lines 118-128: Hard-pin to unreleased torch==2.6.0 risks blocking the build. torch==2.6.0 is referenced in three groups (pytorchcuda, pytorchrocm, trustyai). No such wheel exists on PyPI or the NVIDIA/ROCm indices (latest published is 2.3.*). A resolver run from a clean cache will error out.
Action: relax the pin (torch~=2.3 or leave it unpinned) or stage your own wheel repository before merging.
datascience-base = [
    "boto3 ~=1.37.8",
    "kafka-python-ng ~=2.2.3",
    "kfp ~=2.12.1",
    "plotly ~=6.0.0",
    "scipy ~=1.15.2",
    "skl2onnx ~=1.18.0",
    "onnxconverter-common ~=1.13.0",
    "codeflare-sdk ~=0.27.0",
    "kubeflow-training ==1.9.0"
]
NumPy/SciPy version clash in TensorFlow image build path
datascience-base pins scipy ~=1.15.2 (needs NumPy ≥ 2.x), while datascience-tensorflow still pins numpy ~=1.26.4.
Both groups are combined in jupyter-tensorflow-image (lines 156-165). The resulting image will end up with a NumPy too old for the SciPy wheel and will fail at import time (ImportError: numpy.core.multiarray failed to import).
Either:
- bump NumPy in datascience-tensorflow to ~=2.*, or
- lower SciPy to the last build compatible with NumPy 1.26 (≈ 1.11).
Also applies to: 95-100, 156-165
🤖 Prompt for AI Agents
In pyproject.toml around lines 76 to 87, there is a version conflict where
datascience-base pins scipy to 1.15.2 requiring NumPy ≥ 2.x, but
datascience-tensorflow pins numpy to 1.26.4, causing import errors when combined
in jupyter-tensorflow-image (lines 156-165). To fix this, either update the
numpy version in datascience-tensorflow to ~=2.* to match scipy's requirement or
downgrade scipy in datascience-base to a version compatible with numpy 1.26,
such as around 1.11. Apply the same fix to the dependencies listed at lines
95-100 and 156-165 to ensure consistency.
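As a rough illustration of how such clashes can be surfaced early, the sketch below greps every numpy/scipy pin out of a sample file so conflicting constraints across groups sit side by side. The sample pyproject content is hypothetical and only mirrors the clash described above; this does not resolve dependencies, it just makes the pins visible for review.

```shell
# Write a minimal sample pyproject fragment (hypothetical content)
cat > /tmp/pyproject-sample.toml <<'EOF'
datascience-base = [
    "scipy ~=1.15.2",
]
datascience-tensorflow = [
    "numpy ~=1.26.4",
]
EOF

# List every numpy/scipy pin with its line number; a reviewer can then check
# the constraints are mutually satisfiable (scipy 1.15 needs NumPy >= 2).
grep -nE '"(numpy|scipy) *~=' /tmp/pyproject-sample.toml
```

A real resolver run (for example with uv from a clean cache) would catch the import-time breakage too, but a pin listing like this is a cheap first pass during review.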
# Install micropipenv to deploy packages from Pipfile.lock
RUN pip install --no-cache-dir -U "micropipenv[toml]"
RUN pip install --no-cache-dir -U uv
🛠️ Refactor suggestion
Stale comment & unpinned uv hinder reproducibility
The comment still refers to micropipenv, but the actual tool is uv. Also, pip install -U uv pulls the latest release on every build, making image digests non-deterministic.

-# Install micropipenv to deploy packages from Pipfile.lock
-RUN pip install --no-cache-dir -U uv
+# Install uv (resolver/installer used by the image)
+RUN pip install --no-cache-dir 'uv==0.2.11'
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# Install uv (resolver/installer used by the image)
RUN pip install --no-cache-dir 'uv==0.2.11'
🤖 Prompt for AI Agents
In jupyter/trustyai/ubi9-python-3.11/Dockerfile.cpu at lines 30-31, update the
comment to correctly reference the tool `uv` instead of the outdated
`micropipenv`. Additionally, pin the `uv` package to a specific version in the
pip install command to ensure reproducible builds and deterministic image
digests, avoiding the use of the `-U` flag which fetches the latest version
every time.
scripts/sync-requirements-txt.sh (outdated)
temp_dir="${full_path%/*}"
image_name="${temp_dir##*/}"

if [[ "$full_path" == *"/rocm/"* ]]; then
    image_name="${image_name}-rocm"
fi
Image name derivation is wrong – produces jupyter or rocm-rocm
temp_dir="${full_path%/*}"; image_name="${temp_dir##*/}" yields the parent directory, not the directory that actually contains the image. Examples:
- jupyter/minimal → image_name=jupyter
- jupyter/rocm/pytorch → image_name=rocm → later suffixed to rocm-rocm
This breaks the --group jupyter-${image_name}-image lookup and exports the wrong dependency set.
Fix:

-temp_dir="${full_path%/*}"
-image_name="${temp_dir##*/}"
+image_name="$(basename "$full_path")"
 if [[ "$full_path" == *"/rocm/"* ]]; then
     image_name="${image_name}-rocm"
 fi
📝 Committable suggestion
image_name="$(basename "$full_path")"
if [[ "$full_path" == *"/rocm/"* ]]; then
    image_name="${image_name}-rocm"
fi
🤖 Prompt for AI Agents
In scripts/sync-requirements-txt.sh around lines 16 to 21, the image_name is incorrectly derived from the parent directory of full_path, causing wrong values like 'jupyter' or 'rocm-rocm'. To fix this, derive the name from the directory that actually contains the image, for example image_name="$(basename "$full_path")", keeping the existing -rocm suffixing. This ensures the group lookup and dependency exports use the correct image name.
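The suggested basename-based derivation can be checked in isolation. The sketch below (function name hypothetical, paths illustrative) reproduces the fix together with the existing rocm suffixing:

```shell
# Derive the image group name from the directory path that contains the image:
# take the last path segment, then suffix "-rocm" when the path goes through
# a rocm/ directory.
derive_image_name() {
  full_path="$1"
  image_name="$(basename "$full_path")"
  if [[ "$full_path" == *"/rocm/"* ]]; then
    image_name="${image_name}-rocm"
  fi
  printf '%s\n' "$image_name"
}

derive_image_name "jupyter/minimal"        # → minimal
derive_image_name "jupyter/rocm/pytorch"   # → pytorch-rocm
```

With the original parent-directory logic the same inputs produced jupyter and rocm-rocm, which is exactly the breakage the review describes.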
- name: Run make refresh-pipfilelock-files and push the changes back to the branch
  run: |
    make refresh-pipfilelock-files PYTHON_VERSION=${{ env.PYTHON_VERSION }} INCLUDE_OPT_DIRS=${{ env.INCLUDE_OPT_DIRS }}
    git add .
    git commit -m "Update Pipfile.lock files by piplock-renewal.yaml action"
    uv lock --python ${{ env.PYTHON_VERSION }}
    git add uv.lock
    git commit -m "Update uv.lock files by uvlock-renewal.yaml action"
    git push origin ${{ env.BRANCH }}
Workflow fails when there is nothing to commit
git commit exits with status 1 if the index is empty, causing the whole job to fail on a no-op run. Guard the commit:

-- name: Run make refresh-pipfilelock-files and push the changes back to the branch
+- name: Refresh uv.lock and push if there are changes
   run: |
-    uv lock --python ${{ env.PYTHON_VERSION }}
-    git add uv.lock
-    git commit -m "Update uv.lock files by uvlock-renewal.yaml action"
-    git push origin ${{ env.BRANCH }}
+    uv lock --python ${{ env.PYTHON_VERSION }}
+    git add uv.lock
+    if ! git diff --cached --quiet; then
+      git commit -m "Update uv.lock file by uvlock-renewal workflow"
+      git push origin ${{ env.BRANCH }}
+    else
+      echo "uv.lock already up to date – nothing to commit"
+    fi
📝 Committable suggestion
- name: Refresh uv.lock and push if there are changes
  run: |
    uv lock --python ${{ env.PYTHON_VERSION }}
    git add uv.lock
    if ! git diff --cached --quiet; then
      git commit -m "Update uv.lock file by uvlock-renewal workflow"
      git push origin ${{ env.BRANCH }}
    else
      echo "uv.lock already up to date – nothing to commit"
    fi
🤖 Prompt for AI Agents
In .github/workflows/uvlock-renewal.yaml around lines 67 to 72, the workflow
fails if there is nothing to commit because `git commit` returns an error when
the index is empty. To fix this, add a condition to check if there are any
changes staged before running `git commit`. You can do this by running `git diff
--cached --quiet` and only committing if there are changes, preventing the job
from failing on no-op runs.
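The guard pattern can be exercised locally in a throwaway repository. Helper name, file name, and messages below are illustrative, not the workflow's actual code:

```shell
# Commit staged changes only when the index differs from HEAD, so a no-op
# run exits 0 instead of failing the job.
guarded_commit() {
  git add -A
  if ! git diff --cached --quiet; then
    git commit -q -m "$1"
    echo "committed"
  else
    echo "nothing to commit"
  fi
}

# Demonstration in a temporary repository
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci
guarded_commit "initial"        # nothing staged yet -> "nothing to commit"
echo lockdata > uv.lock
guarded_commit "add uv.lock"    # file staged -> "committed"
guarded_commit "no-op rerun"    # no new changes -> "nothing to commit", exit 0
```

`git diff --cached --quiet` compares the index against HEAD (or the empty tree before the first commit), so the guard behaves correctly even on a fresh branch.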
/hold
This PR goes to a feature branch, not main – just saying, because it's easy to miss.
@jiridanek Thank you for the questions on Slack. I have created a document with the answers to most of them and am still looking into some of the remaining questions. https://docs.google.com/document/d/1kocr25L4E_GTlAYWi75Bt4jK5xE7F-WHx0RvlzsUh-k/edit?usp=sharing
Oh thanks, I hadn't noticed that! I need to take a look at that series of PRs one day though...
…e new Python package manager uv
https://issues.redhat.com/browse/RHOAIENG-17006
Description
Moved all notebook image dependencies into one pyproject.toml file; using the uv lock and export commands, I was able to create updated requirements.txt files that are used to build the images with uv.
How Has This Been Tested?
After creating the requirements files, I ran podman build to build the images and made sure there were no errors.
Review Points
Updating dependencies with uv currently isn't possible for just one group: --upgrade-group (astral-sh/uv#13705).
Merge criteria:
Summary by CodeRabbit