Make runs without error but the ./models folder is empty #7

Closed
xndai opened this issue Mar 11, 2023 · 4 comments
Labels
model Model specific wontfix This will not be worked on

Comments


xndai commented Mar 11, 2023

Did I miss anything?


jlei523 commented Mar 11, 2023

Same.

I can't perform this step from the README:

```
ls ./models
65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model
```

What I get instead:

```
➜  llama.cpp git:(master) cd models
➜  models git:(master) 65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model
zsh: command not found: 65B
```
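
Note: the second line in that README excerpt is the expected output of `ls ./models`, not a command, which is why zsh reports `command not found: 65B`. A minimal sketch of the intended check, assuming the weights are already in place:

```sh
# List the models directory; the listing in the comment below is the expected
# output once the LLaMA weights and tokenizer files are in place (it is not a
# command to run).
ls ./models
# 65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model
```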


al03 commented Mar 11, 2023

Download models from here: https://huggingface.co/nyanko7/LLaMA-7B
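
One possible way to fetch that repository is via git-lfs; a minimal sketch, assuming git-lfs is installed and the Hugging Face repo above is still available:

```sh
# Clone the (assumed) Hugging Face weights repo; the large weight files are
# fetched through git-lfs.
git lfs install
git clone https://huggingface.co/nyanko7/LLaMA-7B
```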

@imWildCat

> Download models from here: https://huggingface.co/nyanko7/LLaMA-7B

We have to organize these files according to the project convention:

```
models
├── 7B
│   ├── consolidated.00.pth
│   └── params.json
├── checklist.chk
└── tokenizer.model
```
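
A minimal shell sketch of that arrangement, assuming the files from the link above were downloaded into a local LLaMA-7B/ directory (the source file names are assumptions; adjust them to whatever the download actually contains):

```sh
# Create the per-model directory and move the weight shard and params into it.
mkdir -p models/7B
mv LLaMA-7B/consolidated.00.pth LLaMA-7B/params.json models/7B/
# The tokenizer and checklist sit at the top level of ./models per the tree above.
mv LLaMA-7B/tokenizer.model LLaMA-7B/checklist.chk models/
```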

xndai (author) commented Mar 11, 2023

Works now. Thanks.

gjmulder added the model (Model specific) and wontfix (This will not be worked on) labels on Mar 15, 2023

SlyEcho pushed a commit to SlyEcho/llama.cpp that referenced this issue on May 31, 2023 (Reuse format_generation_settings for logging)
cebtenzzre added a commit to cebtenzzre/llama.cpp that referenced this issue on Nov 7, 2023
chsasank pushed two commits to chsasank/llama.cpp that referenced this issue on Dec 20, 2023
Dyke-F mentioned this issue on Dec 21, 2023
ggerganov pushed a commit that referenced this issue on Aug 6, 2024
slaren mentioned this issue on Aug 15, 2024