
Server docs: fix default values and add n_probs #3506

Merged
Merged 1 commit into ggerganov:master on Oct 6, 2023

Conversation

@Mihaiii (Contributor) commented on Oct 6, 2023:

The code for the n_probs functionality is already in the master branch, but it can easily be missed because it is not mentioned in the docs.
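
For context, here is a minimal sketch of how n_probs might be used against the server's completion endpoint. The endpoint path, port, payload keys, and response field names below are assumptions based on typical llama.cpp server usage and are not taken from this PR:

```python
# Hypothetical sketch: ask a locally running llama.cpp server for per-token
# probabilities via n_probs. Endpoint, port, and field names are assumptions.
import json
import urllib.request

payload = {
    "prompt": "The capital of France is",
    "n_predict": 8,
    "n_probs": 5,  # request the top-5 token probabilities for each generated token
}

req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

print(result.get("content"))
# When n_probs > 0, the response is expected to also carry per-token candidate
# probabilities (e.g. under a field such as "completion_probabilities").
print(json.dumps(result.get("completion_probabilities"), indent=2))
```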

@ggerganov merged commit cb13d73 into ggerganov:master on Oct 6, 2023
10 checks passed
yusiwen pushed a commit to yusiwen/llama.cpp that referenced this pull request Oct 7, 2023
joelkuiper added a commit to vortext/llama.cpp that referenced this pull request Oct 12, 2023
…example

* 'master' of github.com:ggerganov/llama.cpp:
  py : change version of numpy requirement to 1.24.4 (ggerganov#3515)
  quantize : fail fast on write errors (ggerganov#3521)
  metal : support default.metallib load & reuse code for swift package (ggerganov#3522)
  llm : support Adept Persimmon 8B (ggerganov#3410)
  Fix for ggerganov#3454 (ggerganov#3455)
  readme : update models, cuda + ppl instructions (ggerganov#3510)
  server : docs fix default values and add n_probs (ggerganov#3506)