
[eval] Add IMO problems with exact answers #1528

Merged: 9 commits from justinlinw/imo_solutions_only into openai:main on Jul 13, 2024

Conversation

@justinlinw (Contributor) commented May 15, 2024

Eval details 📑

Eval name

IMO Problems with Exact Answers

Eval description

A small set of IMO problems with exact answers (e.g., yes/no or numeric), which makes them easy to evaluate automatically (as opposed to informal-to-informal proofs).
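
For illustration, samples for an eval like this live in the standard evals JSONL format. A hypothetical row (the problem text and answer are placeholders, not taken from the submitted data) might look like:

```jsonl
{"input": [{"role": "system", "content": "Solve the following competition problem."}, {"role": "user", "content": "<IMO problem statement with an exact yes/no or numeric answer>"}], "ideal": "42"}
```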

What makes this a useful eval?

This eval contributes to the set of math/reasoning evals that are significantly harder than MATH/GSM8K (the problems are sourced from past IMO contests). GPT-4 will fail this eval; if a GPT-4-level model does answer a question correctly, that is most likely luck or eval memorization at this point in time. While one could argue this makes it a poor eval, since a GPT-4-level model cannot perform the task, I'm interested in the resulting reasoning steps (i.e., CoT traces) and in using it for model-graded evals of stronger models in the future.

Criteria for a good eval ✅

Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals).

Your eval should:

  • Be thematically consistent: we'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world.
  • Contain failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not.
  • Include a good signal around what the right behavior is. This means either a correct answer for Basic evals or the Fact Model-graded eval, or an exhaustive rubric for evaluating answers for the Criteria Model-graded eval.
  • Include at least 15 high-quality examples.

If there is anything else that makes your eval worth including, please document it below.

Unique eval value

Describe anything that makes your eval high quality that was not mentioned above. (Not required)

Eval structure 🏗️

Your eval should:

  • Check that your data is in evals/registry/data/{name}
  • Check that your YAML is registered at evals/registry/evals/{name}.yaml (see the layout sketch after this list)
  • Ensure you have the right to use the data you submit via this eval
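
Under the eval name used later in this PR, that layout would be as follows (assumed from the oaieval command in the thread, not verified against the diff):

```
evals/registry/data/imo_exact_answers/samples.jsonl    (stored via Git LFS)
evals/registry/evals/imo_exact_answers.yaml
```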

(For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.)

Final checklist 👀

Submission agreement

By contributing to Evals, you are agreeing to make your evaluation logic and data available under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).

  • I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.

Email address validation

If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the commits on the merged pull request.

  • I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.

Limited availability acknowledgment

We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and the high volume of submissions, we will not be able to accept all submissions, and thus cannot grant GPT-4 access to everyone who opens a PR. We know this is disappointing, but we hope to set the right expectations before you open this PR.

  • I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access be granted.

Submit eval

  • I have filled out all required fields of this form
  • I have used Git LFS for the Eval JSON data (a sketch of the Git LFS step follows this list)
  • (Ignore if not submitting code) I have run pip install pre-commit; pre-commit install and have verified that mypy, black, isort, autoflake and ruff are running when I commit and push
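
A minimal sketch of the Git LFS step, assuming the eval name used in this PR (the repository's existing .gitattributes may already track these patterns, in which case the track command can be skipped):

```sh
git lfs install                                                # enable LFS hooks locally
git lfs track "evals/registry/data/imo_exact_answers/*.jsonl"  # skip if already tracked
git add evals/registry/data/imo_exact_answers/samples.jsonl
git lfs ls-files                                               # verify the JSONL is stored via LFS
```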

Failure to fill out all required fields will result in the PR being closed.

Eval JSON data

Since we are using Git LFS, we ask eval submitters to paste a representative set of Eval Samples (at least 5) from their contribution here:

View evals in JSON

Eval

INSERT_EVAL_HERE

@justinlinw changed the title from "[eval] Add auto-evaluable IMO problems" to "[eval] Add IMO problems with exact answers" on May 15, 2024
@usama-openai (Collaborator) left a comment

Thanks for the contribution. I would like to request some changes.

  1. In the .yaml file, kindly replace the <eval_name> placeholder with the name of the eval.

  2. The prompt is ambiguous about the format of the answer. It states "Please respond with the correct answer only wrapped in []" and then states "<|start_of_answer|>\nA specific answer (e.g. $n=42$, yes, no, 3.14)\n<|end_of_answer|>", so it isn't clear what format the model's output should take. The provided ideal answer doesn't follow either of the formats given in the prompt. You need to add clear instructions about the output format and provide the ideal answer in that format.

  3. Complex mathematical problems and multistep reasoning questions can't be solved by the model in a single shot. You need to ask the model to provide its reasoning first and then give the final answer in a specific format, and use the "Include" evaluation method to evaluate the completion (a sketch of such a sample follows this list). Asking the model to reason before answering gives it a fair chance to solve the question.
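
For concreteness, a sample pairing an explicit reasoning instruction with a bracket-delimited final answer might look like the following (a sketch only; the wording, problem text, and answer are placeholders, not content from this PR):

```jsonl
{"input": [{"role": "system", "content": "Reason through the problem step by step. On the last line, give only the exact answer wrapped in square brackets, e.g. [yes] or [n=42]."}, {"role": "user", "content": "<IMO problem statement>"}], "ideal": "[n=42]"}
```

With the Include method, the completion is marked correct as long as the ideal string (here, the bracketed answer) appears anywhere in the model's output, so the model is free to reason at length before the final line.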

We would love to review the PR again after the suggested changes.

@justinlinw (Contributor, Author) commented May 17, 2024

Thanks for the fast review @usama-openai! Addressed your review comments -- lmk if there's anything else.

  1. Resolved in 67542df.

  2. Updated the system prompt and response format in 66a00b6. I ended up sticking with the square-bracket delimiter, to be more explicit for the "Include" evaluation, and included a reasoning instruction in the system prompt.

  3. Updated the evaluation method to "Include" in bd7ab27 (a sketch of the resulting registry entry follows this list).
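
For reference, a registry entry wired to the Includes eval class would look roughly like this (the file name, version suffix, and paths are assumed from the oaieval command below, not copied from the diff):

```yaml
# evals/registry/evals/imo_exact_answers.yaml (assumed path)
imo_exact_answers:
  id: imo_exact_answers.dev.v0
  metrics: [accuracy]

imo_exact_answers.dev.v0:
  class: evals.elsuite.basic.includes:Includes
  args:
    samples_jsonl: imo_exact_answers/samples.jsonl
```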

Eval Run Result

oaieval gpt-4 imo_exact_answers
[2024-05-17 09:12:41,658] [eval.py:36] Evaluating 19 samples
[2024-05-17 09:12:41,663] [eval.py:144] Running in threaded mode with 10 threads!
[2024-05-17 09:13:51,547] [oaieval.py:275] Found 19/19 sampling events with usage data
[2024-05-17 09:13:51,548] [oaieval.py:283] Token usage from 19 sampling events:
completion_tokens: 8,859
prompt_tokens: 4,922
total_tokens: 13,781
[2024-05-17 09:13:51,549] [record.py:371] Final report: {'accuracy': 0.21052631578947367, 'boostrap_std': 0.10161358662922158, 'usage_completion_tokens': 8859, 'usage_prompt_tokens': 4922, 'usage_total_tokens': 13781}. Logged to /tmp/evallogs/240517141241UKZXTUZ4_gpt-4_imo_exact_answers.jsonl
[2024-05-17 09:13:51,549] [oaieval.py:233] Final report:
[2024-05-17 09:13:51,549] [oaieval.py:235] accuracy: 0.21052631578947367
[2024-05-17 09:13:51,549] [oaieval.py:235] boostrap_std: 0.10161358662922158
[2024-05-17 09:13:51,549] [oaieval.py:235] usage_completion_tokens: 8859
[2024-05-17 09:13:51,549] [oaieval.py:235] usage_prompt_tokens: 4922
[2024-05-17 09:13:51,549] [oaieval.py:235] usage_total_tokens: 13781
[2024-05-17 09:13:51,562] [record.py:360] Logged 38 rows of events to /tmp/evallogs/240517141241UKZXTUZ4_gpt-4_imo_exact_answers.jsonl: insert_time=11.976ms

@justinlinw closed this on May 17, 2024
@justinlinw deleted the justinlinw/imo_solutions_only branch on May 17, 2024 at 20:41
@justinlinw restored the justinlinw/imo_solutions_only branch on May 17, 2024 at 20:42
@justinlinw reopened this on May 17, 2024
@usama-openai (Collaborator) left a comment

This PR looks in good shape now; I'm approving it.

@kliu128 merged commit 234bcde into openai:main on Jul 13, 2024
@justinlinw deleted the justinlinw/imo_solutions_only branch on July 19, 2024 at 03:30