add llama3 tasks #2556

Open · wants to merge 25 commits into main
3 changes: 3 additions & 0 deletions lm_eval/tasks/llama3/README.md
@@ -29,6 +29,9 @@ BibTeX-formatted citation goes here

#### Tasks

* `mgsm_chat`: 0-shot MGSM benchmark; use with a chat template.
* `mmlu_llama`: generation variant of MMLU.
* `arc_chalenge_chat`: generation variant of ARC-Challenge using the MMLU format.
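
A minimal usage sketch for these chat-formatted tasks via the Python API; the checkpoint name and the availability of the `apply_chat_template` keyword in `simple_evaluate` are assumptions about your local lm-eval install, not part of this PR:

```python
# Hedged usage sketch (not part of this PR). The model checkpoint and the
# `apply_chat_template` keyword are assumptions about your local setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-3.1-8B-Instruct",
    tasks=["mgsm_chat", "mmlu_llama"],
    apply_chat_template=True,  # these tasks are meant to be run with a chat template
)
print(results["results"])
```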

24 changes: 24 additions & 0 deletions lm_eval/tasks/llama3/base/arc_challenge.yaml
@@ -0,0 +1,24 @@
tag:
  - llama
task: llama_arc_challenge
dataset_path: allenai/ai2_arc
dataset_name: ARC-Challenge
output_type: multiple_choice
training_split: train
validation_split: validation
test_split: test
fewshot_split: train
doc_to_text: "Question: {{question.strip()}}\nA. {{choices.text[0]}}\nB. {{choices.text[1]}}\nC. {{choices.text[2]}}{% if choices.text|length > 3 %}\nD. {{choices.text[3]}}{% endif %}\nAnswer:"
fewshot_delimiter: "\n\n"
doc_to_target: "{{ 'ABCD'[answerKey|int - 1] if answerKey|string in '1234' else answerKey }}"
doc_to_choice: "{{ choices.label|map('replace', '1', 'A')|map('replace', '2', 'B')|map('replace', '3', 'C')|map('replace', '4', 'D')|list if choices.label[0] in '1234' else choices.label }}"
num_fewshot: 25
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
  - metric: acc_norm
    aggregation: mean
    higher_is_better: true
metadata:
  version: 1.0
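
To make the answer-key normalization above concrete, here is a small illustrative sketch (not part of the PR) that renders the `doc_to_target` template with `jinja2` for an invented ARC-style document whose answer labels are numeric:

```python
# Illustrative sketch only; the sample document is invented.
from jinja2 import Template

doc = {
    "question": "Which gas do plants absorb from the atmosphere?",
    "choices": {
        "text": ["Oxygen", "Carbon dioxide", "Nitrogen", "Helium"],
        "label": ["1", "2", "3", "4"],
    },
    "answerKey": "2",
}

# Template copied from doc_to_target above: numeric keys are remapped to letters.
doc_to_target = Template(
    "{{ 'ABCD'[answerKey|int - 1] if answerKey|string in '1234' else answerKey }}"
)
print(doc_to_target.render(**doc))  # -> B
```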
15 changes: 15 additions & 0 deletions lm_eval/tasks/llama3/base/utils.py
@@ -0,0 +1,15 @@
import datasets


def process_arc_c_docs(dataset: datasets.Dataset) -> datasets.Dataset:
    # Columns of the raw dataset; they are dropped after mapping so that only
    # the derived fields below remain.
    COLUMNS = dataset.column_names

    def map_(doc):
        # Prompt text: the pre-built prompt with its trailing two characters removed.
        doc["doc_to_text"] = doc["input_final_prompts"][0].strip()[:-2].strip()
        # Answer choices: the completions with the leading "Answer:" prefix stripped.
        doc["doc_to_choice"] = [
            x.replace("Answer:", "").strip() for x in doc["output_choice_completions"]
        ]
        # Gold target: the final character (the answer letter) of the correct response.
        doc["doc_to_target"] = doc["input_correct_responses"][0].strip()[-1]
        return doc

    return dataset.map(map_, remove_columns=COLUMNS)
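
A tiny usage sketch (not part of the PR): the single row below is invented but mirrors the field names the function expects, which follow the meta-llama `*-evals` datasets; it assumes `process_arc_c_docs` from the file above is in scope.

```python
# Invented one-row dataset for illustration; field names mirror those used above.
import datasets

raw = datasets.Dataset.from_list([
    {
        "input_final_prompts": ["Question: What is 2 + 2?\nA. 3\nB. 4\nAnswer: ("],
        "output_choice_completions": ["Answer: A", "Answer: B"],
        "input_correct_responses": ["Answer: B"],
    }
])

processed = process_arc_c_docs(raw)
print(processed[0]["doc_to_choice"])  # ['A', 'B']
print(processed[0]["doc_to_target"])  # B
print(processed[0]["doc_to_text"])    # prompt with the trailing " (" removed, ending in "Answer:"
```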
66 changes: 66 additions & 0 deletions lm_eval/tasks/llama3/instruct/math/README.md
@@ -0,0 +1,66 @@
# MATH
ℹ️ This is the 0-shot variant, reproducing https://huggingface.co/datasets/meta-llama/Llama-3.1-8B-Instruct-evals/viewer/Llama-3.1-8B-Instruct-evals__math__details?row=0
## Paper
Measuring Mathematical Problem Solving With the MATH Dataset
https://arxiv.org/abs/2103.03874

Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations.

NOTE: The few-shot prompts and the extraction of generated answers are based on [Minerva](https://arxiv.org/abs/2206.14858), and exact-match equivalence is checked using the `sympy` library. This requires additional dependencies, which can be installed via the `lm-eval[math]` extra.
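
As a rough illustration of what `sympy`-based equivalence checking looks like (a hedged sketch only, not the harness's actual implementation):

```python
# Hedged sketch of symbolic exact-match checking; not the harness's code.
from sympy import simplify, sympify


def is_equiv(a: str, b: str) -> bool:
    try:
        # Two expressions are treated as equal if their difference simplifies to zero.
        return simplify(sympify(a) - sympify(b)) == 0
    except Exception:
        # Fall back to plain string comparison when parsing fails.
        return a.strip() == b.strip()


print(is_equiv("2*x + 2", "2*(x + 1)"))  # True
print(is_equiv("3/4", "0.75"))           # True
```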

Homepage: https://github.com/hendrycks/math


## Citation
```
@article{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},
journal={NeurIPS},
year={2021}
}

@misc{2206.14858,
Author = {Aitor Lewkowycz and Anders Andreassen and David Dohan and Ethan Dyer and Henryk Michalewski and Vinay Ramasesh and Ambrose Slone and Cem Anil and Imanol Schlag and Theo Gutman-Solo and Yuhuai Wu and Behnam Neyshabur and Guy Gur-Ari and Vedant Misra},
Title = {Solving Quantitative Reasoning Problems with Language Models},
Year = {2022},
Eprint = {arXiv:2206.14858},
}
```

### Groups and Tasks

#### Groups

- `llama_math`

#### Tasks

- `llama_math_algebra`
- `llama_math_counting_and_prob`
- `llama_math_geometry`
- `llama_math_intermediate_algebra`
- `llama_math_num_theory`
- `llama_math_prealgebra`
- `llama_math_precalc`

### Checklist

The checklist is the following:

For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
  * The implementation in the original paper is one where the model is first fine-tuned on the data. They do have a few-shot evaluation for GPT-3; however, the few-shot context used here is sourced from [Lewkowycz et al](https://arxiv.org/abs/2206.14858). The achieved accuracy on Llama-2 models is comparable to that provided in the paper, though not identical.


If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?

### Variant Wishlist

- [ ] zero-shot variant
25 changes: 25 additions & 0 deletions lm_eval/tasks/llama3/instruct/math/llama_math_algebra.yaml
@@ -0,0 +1,25 @@
task: llama_math_algebra
dataset_path: EleutherAI/hendrycks_math
process_docs: !function utils.process_docs
dataset_name: algebra
output_type: generate_until
training_split: train
test_split: test
doc_to_text: "Solve the following math problem efficiently and clearly:\n\n- For simple problems (2 steps or fewer):\nProvide a concise solution with minimal explanation.\n\n- For complex problems (3 steps or more):\nUse this step-by-step format:\n\n## Step 1: [Concise description]\n[Brief explanation and calculations]\n\n## Step 2: [Concise description]\n[Brief explanation and calculations]\n\n...\n\nRegardless of the approach, always conclude with:\n\nTherefore, the final answer is: $\\\\boxed{answer}$. I hope it is correct.\n\nWhere [answer] is just the final number or expression that solves the problem.\n\nProblem: {{ problem }}"
process_results: !function utils.process_results
doc_to_target: "{{answer if few_shot is undefined else solution}}"
generation_kwargs:
  until:
    - "Problem:"
  max_gen_toks: 5120
  do_sample: false
  temperature: 0
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
num_fewshot: 0
metadata:
  version: 1.0
dataset_kwargs:
  trust_remote_code: true
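
The PR's `utils.process_results` is not included in this diff; as a hedged sketch, answer extraction for this prompt format typically means pulling the contents of the final `\boxed{...}` span out of the generation before comparing it to `doc_to_target`, roughly like this:

```python
# Hedged sketch only; the PR's actual utils.process_results is not shown in this diff.
def extract_boxed(text: str):
    """Return the contents of the last \\boxed{...} in `text`, or None if absent."""
    start = text.rfind(r"\boxed{")
    if start == -1:
        return None
    i, depth, out = start + len(r"\boxed{"), 1, []
    while i < len(text) and depth:
        ch = text[i]
        depth += ch == "{"
        depth -= ch == "}"
        if depth:  # stop collecting once the matching closing brace is reached
            out.append(ch)
        i += 1
    return "".join(out)


gen = r"Therefore, the final answer is: $\boxed{\frac{1}{2}}$. I hope it is correct."
print(extract_boxed(gen))  # \frac{1}{2}
```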
@@ -0,0 +1,3 @@
include: llama_math_algebra.yaml
dataset_name: counting_and_probability
task: llama_math_counting_and_prob
3 changes: 3 additions & 0 deletions lm_eval/tasks/llama3/instruct/math/llama_math_geometry.yaml
@@ -0,0 +1,3 @@
include: llama_math_algebra.yaml
dataset_name: geometry
task: llama_math_geometry
@@ -0,0 +1,3 @@
include: llama_math_algebra.yaml
dataset_name: intermediate_algebra
task: llama_math_intermediate_algebra
@@ -0,0 +1,3 @@
include: llama_math_algebra.yaml
dataset_name: number_theory
task: llama_math_num_theory
@@ -0,0 +1,3 @@
include: llama_math_algebra.yaml
dataset_name: prealgebra
task: llama_math_prealgebra
3 changes: 3 additions & 0 deletions lm_eval/tasks/llama3/instruct/math/llama_math_precalc.yaml
@@ -0,0 +1,3 @@
include: llama_math_algebra.yaml
dataset_name: precalculus
task: llama_math_precalc
14 changes: 14 additions & 0 deletions lm_eval/tasks/llama3/instruct/math/math.yaml
@@ -0,0 +1,14 @@
group: llama_math
task:
  - llama_math_algebra
  - llama_math_counting_and_prob
  - llama_math_geometry
  - llama_math_intermediate_algebra
  - llama_math_num_theory
  - llama_math_prealgebra
  - llama_math_precalc
aggregate_metric_list:
  - metric: exact_match
    weight_by_size: True
metadata:
  version: 1
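
For clarity, `weight_by_size: True` makes the group score a size-weighted (micro) average of the subtask scores rather than a uniform (macro) average; a rough illustrative sketch with invented numbers:

```python
# Illustrative sketch only; accuracies and document counts are invented.
subtasks = {  # task -> (exact_match, number of test documents)
    "llama_math_algebra": (0.62, 1187),
    "llama_math_geometry": (0.41, 479),
}

macro = sum(acc for acc, _ in subtasks.values()) / len(subtasks)
micro = sum(acc * n for acc, n in subtasks.values()) / sum(n for _, n in subtasks.values())
print(f"macro = {macro:.3f}, micro (weight_by_size) = {micro:.3f}")
```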