
llm provides a simple way to talk to LLMs available via OpenRouter right from many programmers' home: the terminal.
Clone the repository and install the llm executable to your Go binary path ($(go env GOPATH)/bin):
git clone https://github.com/flacial/llm
cd llm
go install .
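If the llm command isn't found after installing, your Go binary directory may not be on your PATH. As a quick sketch using standard Go tooling (the --help flag is assumed to exist, as with most CLIs):
export PATH="$PATH:$(go env GOPATH)/bin"
llm --help # should print usage information if the install worked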
You can send a prompt in a few ways:
- As a direct argument:
  llm "What's the capital of Sudan?"
- Via standard input (stdin); a multi-line example follows this list:
  echo "Tell me a short story about a brave dragon and a sleeping cow." | llm
- From a file (-f or --file):
  echo "Summarize the key points of the paper Attention Is All You Need." > summary.txt
  Then, run llm with the file:
  llm -f summary.txt
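Because llm reads the whole prompt from stdin, multi-line prompts work as well. Here is a sketch using only plain shell (a heredoc), with no extra llm flags assumed:
llm <<'EOF'
Compare merge sort and quicksort.
Focus on worst-case behaviour and memory usage.
EOF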
Override your default model (if set) or specify a particular model for a single query with the -m flag:
llm -m google/gemini-flash-1.5 "Explain the concept of recursion in programming as if I'm a grug programmer."
View all available LLM models from OpenRouter:
llm models
By default, llm streams responses live.
llm "Write a haiku about a bustling city at sunset."
Automatically copy the LLM's response to your system clipboard with the -C flag.
llm -C "What is the chemical symbol for gold?"
(After running, you can paste the answer (Au) into any text field.)
Use predefined prompt templates for common tasks.
Setup: Add a new template to your ~/.llm/templates/ folder.
# ~/.llm/templates/brainstorm.tmpl.yaml
name: "brainstorm"
description: "Generates creative ideas, concepts, or solutions for a given topic."
system_message: |
  You are a creative brainstorming assistant. Your role is to generate a diverse range of ideas, concepts, or solutions based on the user's input. Think broadly, explore different angles, and provide innovative suggestions. Encourage out-of-the-box thinking.
user_prompt_template: |
  I need some brainstorming ideas for:
  {{.UserPrompt}}
  Please provide at least 5 distinct ideas or approaches.
Usage: Provide a prompt and apply the template with the -t flag.
llm "Vacation plans for going to paris" -t brainstorm
Set a default model or other options in your configuration file so you don't have to specify them every time.
Configuration File Location: The configuration file is located at $XDG_CONFIG_HOME/llm/config.yaml (typically ~/.config/llm/config.yaml on most systems). If XDG_CONFIG_HOME is not set, it defaults to ~/.config. You can also specify a custom location using the --config flag.
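For example, to point llm at a project-specific configuration file (the path below is just a placeholder):
llm --config ./project/llm-config.yaml "Explain this project's build system."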
Setup: Add a default model to your configuration file:
# ~/.config/llm/config.yaml
model: openai/gpt-3.5-turbo # This will be used if -m is not specified
api_key: "your-api-key-here" # Optional if using environment variable
Model Aliases: You can use shorter aliases for commonly used models:
# ~/.config/llm/config.yaml
model: fast # Uses the built-in alias for openai/gpt-4.1-nano
# Or define custom aliases:
models:
  aliases:
    my-model: "anthropic/claude-3-5-sonnet"
    quick: "google/gemini-flash-1.5"
Built-in Aliases:
- fast: openai/gpt-4.1-nano
- 10x: anthropic/claude-sonnet-4
- smart: google/gemini-2.5-pro
- gpt4: openai/gpt-4o
Usage: Now you can run llm without the -m flag:
llm "What is the capital of Sudan?"
# Or use an alias
llm -m fast "Quick question here"
Ensure your LLM_API_KEY environment variable is set, or include api_key: "YOUR_KEY_HERE" in your configuration file (~/.config/llm/config.yaml).
# Example of setting an API key via environment variable (for current session)
export LLM_API_KEY="sk-or-..."
llm "Hello!"
Bash: To load completions for the current session:
$ source <(llm completion bash)
To load completions for each new session, execute this once:
- Linux:
$ llm completion bash > /etc/bash_completion.d/llm
- macOS:
$ llm completion bash > /usr/local/etc/bash_completion.d/llm
Zsh: To load completions for each session, execute this once:
$ llm completion zsh > ~/.zsh/_llm
Restart your terminal for it to take effect (the output directory must be on your zsh fpath and compinit must be enabled).
Fish: To load completions for the current session:
$ llm completion fish | source
To load completions for each new session, execute this once:
$ llm completion fish > ~/.config/fish/completions/llm.fish
See detailed output, including API requests and responses; this is useful for debugging.
llm -v "hello world" # For debugging use --debug
This is a personal tool. It works well, but isn't built for production workloads. Use at your own risk.