Adding support for O1 #991
Conversation
👍 Looks good to me! Reviewed everything up to f35b04c in 25 seconds

More details
- Looked at 89 lines of code in 5 files
- Skipped 0 files when reviewing
- Skipped posting 2 drafted comments based on config settings
1. `instructor/client.py:426`
   - Draft comment: Consider adding specific handling for `Mode.JSON_O1` in the `from_openai` function if it requires special processing, similar to other modes like `MD_JSON` or `TOOLS_STRICT`.
   - Reason this comment was not posted: Confidence changes required: 50%. The PR introduces a new mode `JSON_O1` but does not update the `from_openai` function to handle this mode specifically. This could lead to unexpected behavior if the mode requires special handling.
2. `instructor/process_response.py:263`
   - Draft comment: Ensure that `Mode.JSON_O1` is handled correctly in `handle_response_model`. The current implementation appends a message to `new_kwargs["messages"]`, but verify whether additional handling is needed.
   - Reason this comment was not posted: Confidence changes required: 50%. The PR adds a new mode `JSON_O1` but does not update the `handle_response_model` function to handle this mode specifically. This could lead to unexpected behavior if the mode requires special handling.
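Both draft comments concern mode-specific message handling. As an illustration only, here is a minimal sketch of the kind of preprocessing a `JSON_O1` path could perform — o1-family models reject the `system` role, so system content can be folded into the first user message. The helper name is hypothetical; this is not instructor's actual implementation.

```python
# Illustrative sketch only: o1 models reject the "system" role, so a
# JSON_O1 mode could fold system content into the first user message.
# prepare_o1_messages is a hypothetical helper, not instructor's code.

def prepare_o1_messages(messages: list[dict]) -> list[dict]:
    """Return messages with system content merged into the first user turn."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    rest = [dict(m) for m in messages if m["role"] != "system"]
    if system_parts and rest and rest[0]["role"] == "user":
        rest[0]["content"] = "\n".join(system_parts) + "\n\n" + rest[0]["content"]
    return rest

msgs = [
    {"role": "system", "content": "Respond only with JSON."},
    {"role": "user", "content": "Extract the user's name."},
]
prepared = prepare_o1_messages(msgs)
```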
Workflow ID: wflow_YHsDjroDOx3yW1gZ
Skipped PR review on ee1c5af because no changed files had a supported extension. If you think this was in error, please contact us and we'll fix it right away. Generated with ❤️ by ellipsis.dev
Important here to note that the current …
This adds a new mode which should support the o1 model family using the OpenAI client SDK. We'll need to upgrade the supported version to support the new `max_completion_tokens` parameter though. This allows us to do something like what we see below.

feat: add support for JSON_O1 mode with OpenAI SDK
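The original example referenced in the description above was not captured here; as a hedged stand-in, this is a runnable sketch of the parameter change the PR needs. Only the fact that o1 requests take `max_completion_tokens` rather than `max_tokens` comes from the source — the helper name is hypothetical, not instructor's code.

```python
# Hypothetical sketch of the kwargs translation an o1 mode would need:
# o1-family requests take max_completion_tokens rather than max_tokens.
# Not instructor's actual implementation.

def translate_token_kwargs(model: str, kwargs: dict) -> dict:
    """Rename max_tokens to max_completion_tokens for o1-family models."""
    out = dict(kwargs)
    if model.startswith("o1") and "max_tokens" in out:
        out["max_completion_tokens"] = out.pop("max_tokens")
    return out

translated = translate_token_kwargs("o1-mini", {"max_tokens": 1024})
```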
Summary:

Adds support for `JSON_O1` mode using the OpenAI client SDK, requiring an updated `openai` dependency.

Key points:
- Added `JSON_O1` mode to `from_openai()` in `client.py`.
- Updated `handle_response_model()` in `process_response.py` to handle `JSON_O1` mode, ensuring no system messages are included.
- Updated `from_response()` in `function_calls.py` to parse `JSON_O1` mode using `parse_json()`.
- Bumped the `openai` dependency to `^1.45.0` in `pyproject.toml` to support `max_completion_tokens`.
- Added `JSON_O1` to the `Mode` enum in `mode.py`.
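The `from_response()`/`parse_json()` step listed above can be sketched roughly as follows. This is a hypothetical implementation for illustration only — the real `parse_json()` lives inside instructor and may behave differently.

```python
import json
import re

# Rough illustration of a parse_json()-style step: pull the first JSON
# object out of a raw completion, whether wrapped in a markdown code
# fence or bare. Hypothetical code, not instructor's implementation.

FENCE = "`" * 3  # a markdown code fence, built up to keep this block nestable
_FENCED = re.compile(FENCE + r"(?:json)?\s*(\{.*?\})\s*" + FENCE, re.DOTALL)

def extract_json(text: str) -> dict:
    """Parse the first JSON object found in a completion string."""
    match = _FENCED.search(text)
    payload = match.group(1) if match else text[text.index("{") : text.rindex("}") + 1]
    return json.loads(payload)

raw = f'Model said:\n{FENCE}json\n{{"name": "Jason", "age": 25}}\n{FENCE}'
parsed = extract_json(raw)
```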