Fix Ollama models ValueError when response_format is used #3083
base: main
Conversation
- Add _is_ollama_model method to detect Ollama models consistently
- Skip response_format validation for Ollama models in _validate_call_params
- Filter out response_format parameter for Ollama models in _prepare_completion_params
- Add comprehensive tests for Ollama response_format handling
- Maintain backward compatibility for other LLM providers

Fixes #3082

Co-Authored-By: João <joao@crewai.com>
🤖 Devin AI Engineer: I'll be helping with this pull request! Here's what you should know:
Note: I can only respond to comments from users who have write access to this repository.
Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment for PR #3083 - Ollama Response Format Fix

Overview
This PR addresses issue #3082 by implementing improved handling of the response_format parameter for Ollama models.

File-by-File Analysis
…ering, and test coverage
- Refactor _is_ollama_model to use constants for better maintainability
- Make parameter filtering more explicit with clear comments
- Add type hints for better code clarity
- Add comprehensive edge case tests for model detection
- Improve test docstrings with detailed descriptions
- Move integration test to proper tests/ directory structure
- Fix lint error in test script by adding assertion
- All tests passing locally with improved code quality

Co-Authored-By: João <joao@crewai.com>
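For illustration, a minimal sketch of the constant-based detection helper described in this commit might look like the following; the constant name, signature, and matching rules are assumptions, not the code merged in this PR.

```python
# Sketch only: OLLAMA_PROVIDER_PREFIX and the matching rules are assumptions,
# not the constants or logic actually merged in this PR.
OLLAMA_PROVIDER_PREFIX = "ollama"


def is_ollama_model(model: str | None) -> bool:
    """Return True when a model identifier refers to an Ollama-served model."""
    if not model:
        # Handles the None / empty-string edge cases that tests would cover.
        return False
    normalized = model.lower()
    # Accept both a bare "ollama" provider and prefixed ids such as "ollama/llama3.2:8b".
    return normalized == OLLAMA_PROVIDER_PREFIX or normalized.startswith(f"{OLLAMA_PROVIDER_PREFIX}/")
```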
- Change provider type annotation from str to Optional[str] in _validate_call_params
- Update test_ollama_model_with_response_format to handle APIConnectionError gracefully
- Commit uv.lock changes from dependency updates

Co-Authored-By: João <joao@crewai.com>
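As a rough illustration of the graceful APIConnectionError handling mentioned above, a test can tolerate a missing local Ollama server as sketched here; the constructor arguments and test body are assumptions, not the actual test_ollama_model_with_response_format from this PR.

```python
# Hypothetical sketch: assumes crewai.LLM accepts a response_format argument and
# exposes .call(), and that litellm raises APIConnectionError when no Ollama
# server is reachable.
import litellm
import pytest
from pydantic import BaseModel

from crewai import LLM


class Answer(BaseModel):
    text: str


def test_ollama_model_with_response_format():
    llm = LLM(model="ollama/llama3.2", response_format=Answer)
    try:
        result = llm.call("Reply with a short JSON answer")
    except litellm.APIConnectionError:
        # No local Ollama server available (e.g. in CI): skip rather than fail.
        pytest.skip("Ollama server not available")
    else:
        assert result is not None
```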
Fix Ollama models ValueError when response_format is used
Summary
This PR resolves issue #3082, where Ollama models throw a ValueError when the response_format parameter is provided. The error occurred because Ollama models don't support structured output via response_format, but the LLM class was still attempting to validate and pass this parameter to the provider.

The fix implements a two-pronged approach, sketched in the example below:
1. Ollama models skip the response_format validation check in _validate_call_params
2. The response_format parameter is excluded from the completion parameters for Ollama models in _prepare_completion_params
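A minimal sketch of how the two prongs could fit together (the class name, method bodies, and the provider-capability hook are assumptions, not the exact code in this PR):

```python
# Illustrative stand-in for the LLM class showing the validation skip and the
# parameter filtering described above.
from typing import Any, Dict, List, Optional


class LLMSketch:
    def __init__(self, model: str, response_format: Optional[type] = None) -> None:
        self.model = model
        self.response_format = response_format

    def _is_ollama_model(self, model: str) -> bool:
        # Same detection idea as the helper added in this PR (exact rules assumed).
        normalized = (model or "").lower()
        return normalized == "ollama" or normalized.startswith("ollama/")

    def _validate_call_params(self, provider: Optional[str] = None) -> None:
        # Prong 1: skip the structured-output check for Ollama models instead of
        # raising a ValueError for an unsupported response_format.
        if self.response_format is None or self._is_ollama_model(self.model):
            return
        self._ensure_provider_supports_response_format(provider)

    def _prepare_completion_params(self, messages: List[Dict[str, str]], **kwargs: Any) -> Dict[str, Any]:
        params: Dict[str, Any] = {"model": self.model, "messages": messages, **kwargs}
        # Prong 2: only forward response_format to providers that support it.
        if self.response_format is not None and not self._is_ollama_model(self.model):
            params["response_format"] = self.response_format
        return params

    def _ensure_provider_supports_response_format(self, provider: Optional[str]) -> None:
        # Placeholder for the capability check performed for non-Ollama providers.
        pass
```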
Key changes:
- Added _is_ollama_model() method for consistent Ollama model detection
- Skipped response_format checks for Ollama providers

Review & Testing Checklist for Human
- Verify _is_ollama_model() correctly identifies various Ollama model naming formats (ollama/model:tag, etc.)
- Confirm that providing response_format to an Ollama model no longer raises a ValueError
- Check that Ollama models don't include response_format in completion calls while other models do (see the test sketch below)

Recommended test plan: Run the "Build your first flow" tutorial from the CrewAI docs with an Ollama model to verify the fix works in the original failure scenario.
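To make the completion-call checklist item concrete, here is a hedged test sketch; it assumes crewai.LLM accepts response_format and that _prepare_completion_params takes a messages list, which may not match the real signatures.

```python
# Hypothetical check that Ollama models drop response_format while other
# providers keep it; constructor arguments and the private helper's signature
# are assumptions.
from pydantic import BaseModel

from crewai import LLM


class Answer(BaseModel):
    text: str


def test_response_format_is_filtered_for_ollama():
    messages = [{"role": "user", "content": "hi"}]

    ollama_params = LLM(model="ollama/llama3.2", response_format=Answer)._prepare_completion_params(messages)
    openai_params = LLM(model="gpt-4o-mini", response_format=Answer)._prepare_completion_params(messages)

    assert "response_format" not in ollama_params
    assert openai_params.get("response_format") is not None
```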
Diagram
Notes