Commit message:

* add custom filter
* fix type casting of references
* add humaneval
* fix a bug in humaneval
* add greedy version of humaneval
* update tasks README
* test humaneval
* return multiple metrics
* nit
* add confirmation to run code tasks
* nit
* nit

Co-authored-by: Hojin Lee <[email protected]>
Co-authored-by: Baber <[email protected]>
1 parent bb098f1 · commit 4c11206 · 11 changed files with 184 additions and 8 deletions.
**New file** — custom filter (17 additions):
```python
from lm_eval.api.filter import Filter
from lm_eval.api.registry import register_filter


@register_filter("custom")
class CustomFilter(Filter):
    """
    Custom filter that applies a custom, user-defined function to the model responses.
    """

    def __init__(self, **kwargs) -> None:
        self.filter_fn = kwargs.pop("filter_fn")

        super().__init__(**kwargs)

    def apply(self, resps, docs):
        return self.filter_fn(resps, docs)
```
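`CustomFilter` simply forwards to whatever callable the task config supplies as `filter_fn`. A minimal sketch of such a callable, standalone so it needs no lm_eval install; the function name and the stop-sequence handling here are illustrative, not part of this commit:

```python
# A hypothetical filter_fn with the (resps, docs) signature that
# CustomFilter.apply forwards to: resps is one list of model responses
# per document, docs is the matching list of documents.
def strip_after_stop(resps, docs):
    # Illustrative post-processing: truncate each response at the first
    # "\nif" stop sequence, the kind of cleanup a code task might need.
    return [[r.split("\nif")[0] for r in resp] for resp in resps]


resps = [["    return a + b\nif __name__ == '__main__':\n    pass"]]
docs = [{"prompt": "def add(a, b):\n"}]
print(strip_after_stop(resps, docs))  # [['    return a + b']]
```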
**New file** — HumanEval task README (46 additions):
# HumanEval

## Paper
Evaluating Large Language Models Trained on Code
https://arxiv.org/abs/2107.03374

We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics.

Homepage: https://github.com/openai/human-eval

## Citation
```
@article{chen2021codex,
  title={Evaluating Large Language Models Trained on Code},
  author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
  year={2021},
  eprint={2107.03374},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```

### Groups and Tasks

#### Groups

* Not part of a group yet.

#### Tasks

- `humaneval`: pass@1
- `humaneval_64`: pass@64 variant (64 samples per problem)

### Checklist

For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
  * [ ] Have you referenced the original paper that introduced the task?
  * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
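Because the task config sets `unsafe_code: true`, generated code is executed locally and the run must be explicitly confirmed. A sketch of an invocation, assuming the `lm_eval` CLI; the model name is a placeholder:

```shell
# Illustrative only: the pretrained model is a placeholder.
# --confirm_run_unsafe_code acknowledges that model-generated code will
# be executed, and the code_eval metric additionally requires the
# HF_ALLOW_CODE_EVAL=1 environment variable.
HF_ALLOW_CODE_EVAL=1 lm_eval \
  --model hf \
  --model_args pretrained=bigcode/santacoder \
  --tasks humaneval \
  --confirm_run_unsafe_code
```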
**New file** — `humaneval.yaml` (30 additions):
```yaml
task: humaneval
dataset_path: openai/openai_humaneval
unsafe_code: true
output_type: generate_until
test_split: test
doc_to_text: "{{prompt}}"
doc_to_target: "{{test}}\ncheck({{entry_point}})"
metric_list:
  - metric: !function utils.pass_at_k
    aggregation: mean
    higher_is_better: true
    k: [1]
generation_kwargs:
  until:
    - "\nclass"
    - "\ndef"
    - "\n#"
    - "\nif"
    - "\nprint"
  max_gen_toks: 1024
  do_sample: false
repeats: 1
num_fewshot: 0
filter_list:
  - name: "create_test"
    filter:
      - function: "custom"
        filter_fn: !function utils.build_predictions
metadata:
  version: 1.0
```
**New file** — `humaneval_64.yaml` (19 additions):
```yaml
include: humaneval.yaml
task: humaneval_64
repeats: 64
metric_list:
  - metric: !function utils.pass_at_k
    aggregation: mean
    higher_is_better: true
    k: [2, 8, 16, 32, 64]
generation_kwargs:
  until:
    - "\nclass"
    - "\ndef"
    - "\n#"
    - "\nif"
    - "\nprint"
  max_gen_toks: 1024
  do_sample: true
  temperature: 0.2
  top_p: 0.95
```
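The pass@k values this config reports come from the unbiased estimator in the Codex paper referenced above: for n samples of which c pass, pass@k = 1 − C(n−c, k)/C(n, k). A standalone sketch of that formula (the function name is ours, not part of this commit):

```python
from math import comb


def pass_at_k_estimate(n: int, c: int, k: int) -> float:
    """Unbiased pass@k from the Codex paper: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer failing samples than draws: at least one draw must pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# With 64 samples of which 16 pass, the probability that at least one
# of k randomly drawn samples passes:
print(pass_at_k_estimate(64, 16, 1))   # 0.25
print(pass_at_k_estimate(64, 0, 8))    # 0.0
print(pass_at_k_estimate(64, 64, 8))   # 1.0
```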
**New file** — `utils.py` (27 additions):
```python
import evaluate as hf_evaluate


# Load the code_eval metric once at import time and run a quick sanity
# check, so misconfiguration (e.g. HF_ALLOW_CODE_EVAL not set) fails
# fast rather than midway through an evaluation run.
try:
    compute_ = hf_evaluate.load("code_eval")
    test_cases = ["assert add(2, 3)==5"]
    candidates = [["def add(a,b): return a*b"]]
    results = compute_.compute(references=test_cases, predictions=candidates, k=[1])
except Exception as e:
    raise e


def pass_at_k(references: list[str], predictions: list[list[str]], k: list[int] = None):
    global compute_
    assert k is not None
    if isinstance(k, int):
        k = [k]
    # code_eval returns (pass_at_k dict, per-sample results); keep the metrics.
    res = compute_.compute(
        references=references,
        predictions=predictions,
        k=k,
    )
    return res[0]


def build_predictions(resps: list[list[str]], docs: list[dict]) -> list[list[str]]:
    # Prepend each doc's prompt so every completion becomes a full function body.
    return [[doc["prompt"] + r for r in resp] for resp, doc in zip(resps, docs)]
```
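A small standalone check of what `build_predictions` (the `filter_fn` wired up in the task YAML) produces; the sample prompt and responses are illustrative:

```python
def build_predictions(resps, docs):
    # Same logic as in utils.py: prepend each doc's prompt to its responses.
    return [[doc["prompt"] + r for r in resp] for resp, doc in zip(resps, docs)]


docs = [{"prompt": "def add(a, b):\n"}]
resps = [["    return a + b", "    return a * b"]]
print(build_predictions(resps, docs))
# [['def add(a, b):\n    return a + b', 'def add(a, b):\n    return a * b']]
```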