
Support for server APIs of apps specialised in local LLMs #11872

Open
feature
1 of 1 issue completed
@ThiloteE

Description

This would add support for:

  • local LLMs
  • local hardware
  • GPU acceleration
  • custom and additional LLM architectures

Describe the solution you'd like

Support applications that adhere to the OpenAI API, which has become an unofficial standard for serving LLMs.
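To make the request concrete, here is a minimal sketch of what talking to an OpenAI-compatible local server looks like, using only the Python standard library. The base URL, port, and model name are assumptions for illustration; the actual values depend on the application (llama.cpp's server, LM Studio, etc. each choose their own defaults), but the `/v1/chat/completions` route and JSON payload shape are what the OpenAI API convention prescribes.

```python
import json
import urllib.request

# Hypothetical local endpoint; the port and path prefix vary per application,
# but OpenAI-compatible servers expose a /v1/chat/completions route.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for a local server."""
    payload = {
        "model": model,  # placeholder; local servers often ignore or map this
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("local-model", "Summarize this paper.")
print(req.full_url)      # http://localhost:8080/v1/chat/completions
print(req.get_method())  # POST
```

Because every application listed below speaks this same wire format, a single client implementation with a configurable base URL covers all of them.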

Progress:

Most of the applications listed here are wrappers around llama.cpp, though each has its own strengths and weaknesses. Except for LM Studio, they are all open source.

Related issues:

InAnYan#71

Notes:

This issue is purely about LLMs, not embedding models. For embedding models, see InAnYan#85 (comment).

Sub-issues
