
Bug: --gpu option cannot work on win10, not friendly to WIN. #487

Open
liuye1992 opened this issue Jul 6, 2024 · 2 comments

Comments

@liuye1992

Contact Details

No response

What happened?

According to some warnings in the uploaded log (such as: get_nvcc_path: note: /usr/local/cuda/bin/nvcc.exe does not exist), it seems that llamafile with --gpu only checks Linux CUDA installation locations.

So how do I specify which of my two graphics cards to use on Windows?
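If both cards are NVIDIA devices, one common way to select a specific one is the standard CUDA runtime environment variable CUDA_VISIBLE_DEVICES. This is a CUDA feature rather than a documented llamafile flag, so whether llamafile honors it here is an assumption worth testing:

```shell
# PowerShell: restrict the CUDA runtime to device index 0 only.
# Device indices follow nvidia-smi ordering; adjust as needed.
$env:CUDA_VISIBLE_DEVICES = "0"

# Then launch llamafile in the same session, e.g.:
# .\llamafile.exe --gpu nvidia -m model.gguf
```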

Version

llamafile-0.6.2.exe
CUDA Version: 11.7
GTX 1650 4G

What operating system are you seeing the problem on?

Windows

Relevant log output

import_cuda_impl: initializing gpu module...
get_nvcc_path: note: nvcc.exe not found on $PATH
get_nvcc_path: note: $CUDA_PATH/bin/nvcc.exe does not exist
get_nvcc_path: note: /opt/cuda/bin/nvcc.exe does not exist
get_nvcc_path: note: /usr/local/cuda/bin/nvcc.exe does not exist
link_cuda_dso: note: dynamically linking C:\Users\Administrator/.llamafile/ggml-cuda.dll
ggml_cuda_link: welcome to CUDA SDK with tinyBLAS
link_cuda_dso: GPU support linked
link_cuda_dso: GPU support not possible
fatal error: support for --gpu nvidia was explicitly requested, but it wasn't available
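Per the lookup order in the log above, nvcc.exe is probed on $PATH, then under $CUDA_PATH/bin, then in two Linux-style locations. On Windows, one could satisfy the $CUDA_PATH probe by setting the variable before launching llamafile. The path below is the default CUDA 11.7 install directory and is an assumption; verify it on your machine:

```shell
# PowerShell: point the nvcc probe at a Windows CUDA install.
$env:CUDA_PATH = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7"

# Check that nvcc.exe actually exists there before relaunching llamafile:
Test-Path "$env:CUDA_PATH\bin\nvcc.exe"
```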
@jart
Collaborator

jart commented Jul 6, 2024

Have you tried using the latest version? 0.6.2 is from a very long time ago.

@liuye1992
Author

Have you tried using the latest version? 0.6.2 is from a very long time ago.

Several hours ago I installed the latest version and ran it successfully in PowerShell, but some warnings were still output and the GPU still could not be used for model training and inference.

It seems the reason is that the CUDA SDK, as called through the ggml-cuda.dll library, cannot find any GPU devices.

Warning output below:


ggml_cuda_link: welcome to CUDA SDK with tinyBLAS
link_cuda_dso: No GPU devices found
