# Description

# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new bug or useful enhancement to share.
# Expected Behavior

I expect llama-cpp-python to load the model and run.
# Current Behavior

I followed the official install guide:
- installed CMake
- installed Git
- installed Visual Studio with the "Desktop development with C++" and "Linux embedded development" workloads
- ran `python setup.py install`
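The install itself completes, and the traceback below reaches `_lib.llama_load_model_from_file`, so the shared library loads and only the model load crashes. A quick way to confirm the DLL handle is alive, using the `_lib` name that appears in the traceback:

```
python -c "import llama_cpp; print(llama_cpp.llama_cpp._lib)"
```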
When using llama-cpp-python, loading a model fails with the following error:
```
Traceback (most recent call last):
  File "C:\Users\luca.giulianini2\Desktop\cuda-test\main.py", line 10, in <module>
    llm = Llama(model_path=r"llama-2-7b.ggmlv3.q2_K.bin")
  File "C:\Users\luca.giulianini2\AppData\Local\anaconda3\envs\ai\lib\site-packages\llama_cpp_python-0.1.77-py3.10-win-amd64.egg\llama_cpp\llama.py", line 320, in __init__
    self.model = llama_cpp.llama_load_model_from_file(
  File "C:\Users\luca.giulianini2\AppData\Local\anaconda3\envs\ai\lib\site-packages\llama_cpp_python-0.1.77-py3.10-win-amd64.egg\llama_cpp\llama_cpp.py", line 428, in llama_load_model_from_file
    return _lib.llama_load_model_from_file(path_model, params)
OSError: [WinError -1073741795] Windows Error 0xc000001d
```
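For reference, a minimal script reproducing the crash (a sketch: only the import and the `Llama(...)` call are taken from the traceback above; the generation call is illustrative):

```python
# minimal reproduction sketch -- the Llama(...) line is from the traceback,
# the completion call below it is illustrative and is never reached
from llama_cpp import Llama

llm = Llama(model_path=r"llama-2-7b.ggmlv3.q2_K.bin")  # crashes here with 0xc000001d
output = llm("Q: What is llama.cpp? A:", max_tokens=32)
print(output["choices"][0]["text"])
```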
# Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
- Physical (or virtual) hardware you are using, e.g. for Linux:
Intel Core i7-3770 (4 cores / 8 threads)
- Operating System, e.g. for Linux:
Windows 10 LTSC
- SDK version, e.g. for Linux:
```
$ python3 --version  # Python 3.10.12
$ cmake --version    # cmake version 3.27.2
$ g++ --version      # don't know
```
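Possibly relevant: 0xc000001d is STATUS_ILLEGAL_INSTRUCTION, and the i7-3770 (Ivy Bridge) supports AVX but not AVX2 or FMA, so a binary compiled with AVX2 enabled would crash exactly like this at model load. A rebuild with those instruction sets disabled might be worth trying; a sketch for a Windows cmd shell, using llama.cpp's standard CMake toggles and the CMAKE_ARGS/FORCE_CMAKE mechanism from the README:

```
:: sketch (Windows cmd): rebuild with AVX2/FMA/F16C disabled
set CMAKE_ARGS=-DLLAMA_AVX2=off -DLLAMA_FMA=off -DLLAMA_F16C=off
set FORCE_CMAKE=1
pip install --force-reinstall --no-cache-dir llama-cpp-python
```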
Try the following:
- `git clone https://github.com/abetlen/llama-cpp-python`
- `cd llama-cpp-python`
- `rm -rf _skbuild/` # delete any old builds
- `python setup.py develop`
- `cd ./vendor/llama.cpp`
- Follow llama.cpp's instructions to `cmake` llama.cpp
- Run llama.cpp's `./main` with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp. (A concrete sketch of these last two steps follows below.)
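A sketch of the build-and-run steps on Windows (llama.cpp's standard CMake flow; the exact binary path depends on the generator, with MSVC it is typically `build\bin\Release\`):

```
:: sketch: build the vendored llama.cpp and run main directly
cmake -B build
cmake --build build --config Release
.\build\bin\Release\main.exe -m llama-2-7b.ggmlv3.q2_K.bin -p "Hello"
```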