nvidia-container-cli reports incorrect CUDA driver version on WSL2 #148
Comments
The same with me. Status: Downloaded newer image for nvidia/cuda:10.2-base
@opptimus seems to have a different issue, but the original issue may be related to:
@danfairs I solved my problems by upgrading my Win10 to version 20257.1. Follow the official WSL2 guidelines.
Hey @danfairs. Thanks for reporting the issue. We have a fix in progress to address the fact that we report CUDA version 11.0 on WSL. In the meantime you could use the
For reference: here is the merge request extending WSL support.
Hi. I have a problem with
@archee8 which version of the NVIDIA container toolkit is this? The version 1.4.0 of
@archee8 Your issue appears to be related to this:
The following command works, but it doesn't work with docker-compose. Does anyone know the cause?
I have the following environment. The reason for Ubuntu 16.04 is that it cannot be upgraded due to company security issues.
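One possible explanation for the docker-compose failure (my assumption, not confirmed in this thread): older docker-compose releases have no equivalent of the --gpus flag, so the legacy nvidia runtime has to be requested explicitly in the compose file. A minimal sketch using the 2.x file format, assuming the nvidia runtime is registered in /etc/docker/daemon.json (service name and image are illustrative):

```yaml
# Hypothetical compose file; service name and image tag are illustrative.
version: "2.3"
services:
  gpu-test:
    image: nvidia/cuda:10.2-base
    runtime: nvidia   # requires nvidia-container-runtime registered with dockerd
    command: nvidia-smi
```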
This issue is still present when following the current instructions on the official nvidia documentation for this: https://docs.nvidia.com/cuda/wsl-user-guide/index.html#ch05-running-containers
While trying to run https://github.com/borisdayma/dalle-mini in WSL2 I encountered the same error message as @danfairs
When I check my currently installed version with nvidia-smi I see that I have version 11.7 installed (the error message above requires 11.6):
I'm kinda stuck right now. Any advice?
@psychofisch as a workaround please start the container with
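The workaround referred to above appears to be the NVIDIA_DISABLE_REQUIRE environment variable, which tells the NVIDIA container runtime to skip its CUDA version requirement check. A sketch of the invocation, printed as a dry run since it only makes sense on a WSL2 host with the NVIDIA runtime installed (the image tag is illustrative, and treat the variable as an assumption if your toolkit version differs):

```shell
#!/bin/sh
# Dry-run sketch: print the docker invocation instead of executing it.
# NVIDIA_DISABLE_REQUIRE=1 skips the "cuda>=X.Y" requirement check
# performed by the NVIDIA container runtime.
cmd='docker run --rm --gpus all -e NVIDIA_DISABLE_REQUIRE=1 nvidia/cuda:11.1-base nvidia-smi'
echo "$cmd"
```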
I ran into this issue and this workaround worked. Thank you @elezar
Sorry, but I'm not at all convinced. The most precise error message resulting from the use of
1. Issue or feature description
nvidia-container-cli on WSL2 is reporting CUDA 11.0 (and thus refusing to run containers with cuda>=11.1) even though CUDA toolkit 11.1 is installed in Linux. Windows 10 is build 20251.fe_release.201030-1438. Everything is installed as per the install guide, and CUDA containers do actually work (for example, docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark successfully returns a benchmark).
The machine is a Dell XPS 15 9500 with an i9-10885H CPU, 64 GB RAM and an NVIDIA GeForce GTX 1650 Ti.
2. Steps to reproduce the issue
- docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
  correctly outputs benchmarks.
- nvidia-container-cli info
  incorrectly outputs CUDA version 11.0. This command will also fail:
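The refusal itself is just a version comparison: the runtime reports CUDA 11.0, the container requires cuda>=11.1, and 11.0 < 11.1. A minimal shell sketch of that kind of check, as my own illustration and not the actual nvidia-container-cli logic:

```shell
#!/bin/sh
# Illustrative version check, not the real nvidia-container-cli implementation.
# version_ge A B: succeeds if version A >= version B (sort -V does the ordering).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

reported="11.0"   # what nvidia-container-cli info reports on WSL2
required="11.1"   # what the container image requires (cuda>=11.1)

if version_ge "$reported" "$required"; then
  echo "requirement satisfied"
else
  echo "unsatisfied condition: cuda>=$required (driver reports $reported)"
fi
# prints: unsatisfied condition: cuda>=11.1 (driver reports 11.0)
```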
3. Information to attach (optional if deemed irrelevant)
- Some nvidia-container information: nvidia-container-cli -k -d /dev/tty info
  ncc.txt
- Kernel version from uname -a
  Linux aphid 5.4.72-microsoft-standard-WSL2 #1 SMP Wed Oct 28 23:40:43 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- Any relevant kernel output lines from dmesg
- Driver information from nvidia-smi -a
  nvidia-smi.txt
- Docker version from docker version
  19.03.13
- NVIDIA packages version from dpkg -l '*nvidia*' or rpm -qa '*nvidia*'
  packages.txt
- NVIDIA container library version from nvidia-container-cli -V
  ncc-version.txt
- NVIDIA container library logs (see troubleshooting)
- Docker command, image and tag used