The open-source rules engine for LLM APIs.
Explore Features »

Getting Started · Deployment · Contributing · Issues
Modelrules is a rules engine for LLM APIs. It provides a simple way to override any API parameters for OpenAI-compatible LLM providers. It's ideal for environments where LLM clients are constrained to specific parameters or can't offer flexible customization.
All configuration rules are applied server-side, and you can securely store your LLM provider credentials.
- ⚙️ Customizable Rules: Create custom rules to overwrite your LLM API parameters and apply them per model or provider.
- 🔒 Secure Credential Storage: Securely store provider API keys and credentials.
- 🔑 Virtual API Key Management: Create, manage, and revoke virtual API keys for your applications.
- 🔄 OpenAI-Compatible: Drop-in replacement for any OpenAI-compatible API.
- 🚀 Built with Modern Tech: Server-side rendering with React Router, Vite for fast development, and TailwindCSS for styling.
While Modelrules is designed to be a drop-in replacement in your OpenAI SDK client by changing the `baseURL`, it's important to understand how compatibility works. Modelrules acts as a proxy, forwarding requests to the upstream LLM provider you configure in your rulesets.
The actual OpenAI API compatibility depends entirely on the provider you are routing to:
- Guaranteed Compatibility: If your ruleset points to an OpenAI model, you'll have full compatibility.
- Provider-Dependent Compatibility: If you're using a different provider like Anthropic, you must use their OpenAI-compatible endpoint for the `baseURL` in your ruleset.
- Runtime Errors: If you configure a ruleset with a `baseURL` that is not OpenAI-compatible and then try to use it with an OpenAI SDK (e.g., `openai.chat.completions.create`), you will encounter runtime errors because the request and response formats will not match what the SDK expects.
In short, Modelrules doesn't translate between different API schemas; it enriches requests and routes them. The provider's endpoint specified in your rule must be compatible with the SDK you are using on the client-side.
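As a minimal sketch, the drop-in swap with the official `openai` TypeScript SDK only changes the `baseURL` (the deployment URL below matches the request example further down; substitute your own):

```ts
import OpenAI from "openai";

// Drop-in replacement: only the baseURL changes.
// The URL is illustrative; point it at your own Modelrules deployment.
const client = new OpenAI({
  baseURL: "https://rules.exectx.run/api",
  apiKey: process.env.RULES_API_KEY, // a Modelrules virtual API key
});
```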
- Hono
- Cloudflare Workers
- Drizzle ORM
- Turso (SQLite)
- Clerk for Authentication
- Cloudflare KV for Caching
- Create a Virtual API Key: Generate a new API key within the Modelrules application.
- Define a Ruleset: Create a ruleset for a specific LLM provider or model. In the ruleset, you can override API parameters (like `temperature`, `top_p`, etc.) and securely provide the credentials for the target LLM provider.
- Make a Request: Send a request to the Modelrules API as you would to the OpenAI API. To specify which ruleset to use, prepend its name and two colons to the model name. For example, with a ruleset named "my-ruleset" and the "gpt-3.5-turbo" model, set the model to `"my-ruleset::gpt-3.5-turbo"`.
Here's how you can make a request:
```sh
curl -X POST https://rules.exectx.run/api/chat/completions \
  -H "Authorization: Bearer $RULES_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-ruleset::o4-mini",
    "messages": [{
      "role": "user",
      "content": "What is the capital of France?"
    }]
  }'
```
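The equivalent request through the `openai` TypeScript SDK, as a self-contained sketch (the ruleset and model names are just examples):

```ts
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://rules.exectx.run/api", // your Modelrules deployment
  apiKey: process.env.RULES_API_KEY,       // a Modelrules virtual API key
});

// "my-ruleset::o4-mini" = ruleset name, two colons, then the upstream model.
const completion = await client.chat.completions.create({
  model: "my-ruleset::o4-mini",
  messages: [{ role: "user", content: "What is the capital of France?" }],
});

console.log(completion.choices[0].message.content);
```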
For detailed instructions on how to run the project locally, please see the local development guide.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License.
- Unkey for the inspiration on key generation strategy.