Emnlp #7

Open
wants to merge 93 commits into main

93 commits
a2ca02a
Add ATP-specific generate DVs workflow
vinhowe May 12, 2023
095912c
Apparently we could generate DVs automatically
vinhowe May 12, 2023
298d028
Add survey cost estimation code
vinhowe May 16, 2023
b6a536e
Add code to automatically generate ATP configs
vinhowe May 16, 2023
a65478c
Lowercase variables because it works for neurips
vinhowe May 16, 2023
39026e1
Add 'culling sampled below n' code (note!)
vinhowe May 16, 2023
f937f59
Merge branch 'main' into neurips
vinhowe May 19, 2023
80110d9
Add output for debugging prompt construction
vinhowe May 19, 2023
08e9bb5
Revert "Lowercase variables because it works for neurips"
vinhowe May 19, 2023
7e7adef
Undo ValidOption lowercasing
vinhowe May 19, 2023
7435097
Revert "Add 'culling sampled below n' code (note!)"
vinhowe May 20, 2023
e0df35c
Remove n_culle_sampled_below everywhere
vinhowe May 20, 2023
97e84c8
Unsort imports in survey.py
vinhowe May 20, 2023
b013cb9
Ignore .DS_Store
vinhowe May 20, 2023
298cf23
Merge branch 'main' into emnlp
vinhowe May 20, 2023
a990667
s/config_filename/variables_filename/g
vinhowe May 20, 2023
5fc9c00
Filter out out-of-schema variable values
vinhowe May 20, 2023
bd7a057
Working async openai sampler impl
vinhowe May 20, 2023
490e07c
Reformat
vinhowe May 20, 2023
81c22f7
Remove extra comment
vinhowe May 20, 2023
3bd43df
Update example_configure_survey.py
vinhowe May 20, 2023
7b68fa6
Remove unused argparse import
vinhowe May 20, 2023
fea3a1f
Reformat
vinhowe May 20, 2023
b4a8fe9
Merge branch 'emnlp' into async-openai-sampling
vinhowe May 20, 2023
3728470
Remove unused prompt printing
vinhowe May 20, 2023
a218599
Merge branch 'emnlp' into async-openai-sampling
vinhowe May 20, 2023
2877aab
Remove comments for copilot
vinhowe May 20, 2023
4e8155d
Merge branch 'emnlp' into async-openai-sampling
vinhowe May 20, 2023
10b50a5
replace responses.csv -> data.csv
vinhowe May 20, 2023
a41a1bd
Update folder structure.
alexgshaw May 22, 2023
27615c8
Add style check GitHub workflow
vinhowe May 22, 2023
056761b
Merge branch 'emnlp' into async-openai-sampling
vinhowe May 22, 2023
8035a00
Bug fix + add slot for response object.
alexgshaw May 22, 2023
98b4acf
Fix index typing error.
alexgshaw May 22, 2023
687dcd0
Merge branch 'emnlp' into async-openai-sampling
vinhowe May 22, 2023
bd381ba
Merge branch 'emnlp' into async-openai-sampling
vinhowe May 22, 2023
a7febdf
Update async openai sampler to return response
vinhowe May 22, 2023
82c356c
Add ordinal property to variables.json
vinhowe May 22, 2023
08f02c5
Handle AutoModel async
vinhowe May 22, 2023
05ddeef
Merge pull request #8 from BYU-PCCL/async-openai-sampling
vinhowe May 22, 2023
2ed52ae
Merge branch 'emnlp' into add-ordinal-to-variables
vinhowe May 22, 2023
2c31a63
Merge pull request #11 from BYU-PCCL/add-ordinal-to-variables
alexgshaw May 22, 2023
64afba1
Fix survey sampling.
alexgshaw May 22, 2023
23bb834
Merge branch 'emnlp' of https://github.com/BYU-PCCL/lm-survey into emnlp
alexgshaw May 22, 2023
2b82b88
Update atp configuration
vinhowe May 22, 2023
f126bc5
Add ordinal functionality to Question class.
alexgshaw May 22, 2023
639d687
Merge branch 'emnlp' of https://github.com/BYU-PCCL/lm-survey into emnlp
alexgshaw May 22, 2023
3740cfc
Rework folder structure.
alexgshaw May 22, 2023
d7198fb
Temp. revert "Add style check GitHub workflow"
vinhowe May 23, 2023
b0b14e5
bug fix.
alexgshaw May 23, 2023
aee5f3c
Minor bug fix on question.
alexgshaw May 23, 2023
f260e71
rename schema to variable
alexgshaw May 23, 2023
83bdc97
gitignore update
alexgshaw May 23, 2023
ea66dd5
Added crude representativeness scoring
chrisrytting May 24, 2023
8008a9f
Merge branch 'emnlp' of github.com:BYU-PCCL/lm-survey into emnlp
chrisrytting May 24, 2023
ac835b2
Update estimate survey to experiment config file
vinhowe May 24, 2023
d47640a
Merge branch 'emnlp' of github.com:BYU-PCCL/lm-survey into emnlp
vinhowe May 24, 2023
e3c4c8e
breadth experiment
chrisrytting May 24, 2023
f9347ac
Merge branch 'emnlp' of github.com:BYU-PCCL/lm-survey into emnlp
chrisrytting May 24, 2023
67bbf30
Merge branch 'emnlp' of github.com:BYU-PCCL/lm-survey into emnlp
vinhowe May 24, 2023
5940ff0
Add create atp experiment script
vinhowe May 24, 2023
39c2fe4
Check in variables
vinhowe May 24, 2023
95da00f
Clean up create atp experiment imports
vinhowe May 24, 2023
562fb9d
Do formatting for create atp experiment
vinhowe May 24, 2023
28e3ac9
Update check survey prompts for experiments
vinhowe May 24, 2023
4e79da7
Update estimate survey to experiment config file
vinhowe May 24, 2023
cde6e24
Fix ATP configuration script
vinhowe May 24, 2023
9076e10
Use ordinals to find invalid options
vinhowe May 24, 2023
9eba1c9
Fix
vinhowe May 24, 2023
f0cf51d
Fix rate limit error import
vinhowe May 25, 2023
f510c3e
Bump up rate limit to what OpenAI says we have
vinhowe May 25, 2023
9aaead3
Push updates variables
vinhowe May 26, 2023
ac8e3fb
Add force flag.
alexgshaw May 26, 2023
6e9d77f
Sampler fix.
alexgshaw May 26, 2023
0a24acd
Added logging, added functionality to fill in missing response_objects
chrisrytting May 26, 2023
ad4525d
Merge branch 'emnlp' of github.com:BYU-PCCL/lm-survey into emnlp
chrisrytting May 26, 2023
b058a5f
Bug fix.
alexgshaw May 26, 2023
34115df
Merge branch 'emnlp' of github.com:BYU-PCCL/lm-survey into emnlp
chrisrytting May 27, 2023
5b3c884
added some helpers
chrisrytting May 29, 2023
07fdbbe
Added tests for infilling
chrisrytting May 29, 2023
21b8e75
Added infilling ability, some more logging, and some extra DVS functi…
chrisrytting May 29, 2023
446672c
Removed data push mistake
chrisrytting May 29, 2023
835da77
Merge branch 'emnlp' of github.com:BYU-PCCL/lm-survey into emnlp
vinhowe May 29, 2023
1aff991
Remove unused import in async sampler
vinhowe May 29, 2023
8f7e6ad
Add updated estimate_survey.py
vinhowe May 30, 2023
849e91e
Minor fixes.
alexgshaw May 31, 2023
91b8041
Get rid of horrible rate limit print
vinhowe May 31, 2023
e1b1c52
Merge branch 'emnlp' of github.com:BYU-PCCL/lm-survey into emnlp
vinhowe May 31, 2023
f65c797
Made the rep calculation neater
chrisrytting Jun 2, 2023
c96a889
Merge branch 'emnlp' of github.com:BYU-PCCL/lm-survey into emnlp
chrisrytting Jun 2, 2023
0909387
Merge branch 'emnlp' of github.com:BYU-PCCL/lm-survey into emnlp
vinhowe Jun 2, 2023
df555c6
Added weighting and an ability to extract D_H from opinionqa dataset
chrisrytting Jun 3, 2023
a0ba33e
Merge branch 'emnlp' of github.com:BYU-PCCL/lm-survey into emnlp
chrisrytting Jun 3, 2023
7 changes: 5 additions & 2 deletions .gitignore
@@ -129,10 +129,13 @@ dmypy.json
# Pyre type checker
.pyre/


data
docker
results
nb_*
.Trash-0
*.sh
*.sh
.DS_Store
experiments
*.tar.gz
*.zip
54 changes: 54 additions & 0 deletions calculate_representativeness.py
@@ -0,0 +1,54 @@
from lm_survey.survey import SurveyResults
import numpy as np
from lm_survey.helpers import *
import os
import json
from pathlib import Path
from lm_survey.survey import DependentVariableSample
import glob
import pandas as pd
import sys


# Grab all the files called "results.json" in the "experiments" directory
input_filepaths = Path("experiments/breadth").glob(
    "**/*davinci*/results.json",
)


# read input filepaths into pandas dfs
mean_reps = {}

question_samples = []
for input_filepath in list(input_filepaths):
    question_samples += filepath_to_dvs_list(input_filepath, add_weights=True)
    # wave = input_filepath.split("/")[3][-3:]
    # mean_reps[wave] = survey_results.get_representativeness()

survey_results = SurveyResults(question_samples=question_samples)
rep = survey_results.get_representativeness(
    survey_results.df.groupby("variable_name")
)
print(rep.mean())


# print("Average representativeness: ", np.mean(list(mean_reps.values())))
# print(
# "Average representativeness per : \n",
# [f"{k}: {v}\n" for k, v in mean_reps.items()],
# )


# with open(input_filepath, "r") as file:
# results = json.load(file)

# question_samples = [
# DependentVariableSample(
# **sample_dict,
# )
# for sample_dict in results["llama-7b-hf"]
# ]

# survey_results = SurveyResults(question_samples=question_samples)

# # Print with 2 decimal places
# print(survey_results.get_mean_score(slice_by=["gender"]).round(2))
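
The per-wave aggregation that is commented out above could be reinstated along these lines. This is only a sketch: it assumes a directory layout of experiments/breadth/<wave>/<model>/results.json and reuses the same filepath_to_dvs_list and SurveyResults.get_representativeness calls the script already makes.

# Sketch: per-wave mean representativeness.
# Assumption: results live at experiments/breadth/<wave>/<model>/results.json.
from pathlib import Path

from lm_survey.helpers import filepath_to_dvs_list
from lm_survey.survey import SurveyResults

mean_reps = {}
for input_filepath in Path("experiments/breadth").glob("**/*davinci*/results.json"):
    wave = input_filepath.parent.parent.name  # assumed to be the wave directory
    samples = filepath_to_dvs_list(input_filepath, add_weights=True)
    results = SurveyResults(question_samples=samples)
    rep = results.get_representativeness(results.df.groupby("variable_name"))
    mean_reps[wave] = rep.mean()

for wave, mean_rep in sorted(mean_reps.items()):
    print(f"{wave}: {mean_rep}")
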
70 changes: 70 additions & 0 deletions check_survey_prompts.py
@@ -0,0 +1,70 @@
import argparse
import json
import os
import typing
from pathlib import Path

from tqdm import tqdm

from lm_survey.survey import Survey


def check_survey_prompts(
    survey_name: str,
    experiment_name: str,
):
    data_dir = os.path.join("data", survey_name)
    variables_dir = os.path.join("variables", survey_name)
    experiment_dir = os.path.join("experiments", experiment_name, survey_name)

    with open(os.path.join(experiment_dir, "config.json"), "r") as file:
        config = json.load(file)

    print(os.path.join(variables_dir, "variables.json"))

    survey = Survey(
        name=survey_name,
        data_filename=os.path.join(data_dir, "data.csv"),
        variables_filename=os.path.join(variables_dir, "variables.json"),
        independent_variable_names=config["independent_variable_names"],
        dependent_variable_names=config["dependent_variable_names"],
    )

    next_survey_sample = next(survey.iterate())
    print(f"## EXAMPLE PROMPT FOR {data_dir}:")
    print()
    print('"""')
    print(
        f"{next_survey_sample.prompt}█{next_survey_sample.completion.correct_completion}"
    )
    print('"""')
    print()
    print(f"## DEMOGRAPHICS NATURAL LANGUAGE SUMMARY FOR {data_dir}:")
    print()
    survey.print_demographics_natural_language_summary()


def main(survey_directories: typing.List[Path], experiment_name: str) -> None:
    for survey_directory in survey_directories:
        check_survey_prompts(survey_directory, experiment_name)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()

    # Positional argument for survey dir(s)
    parser.add_argument(
        "survey_directory",
        nargs="+",
        type=Path,
    )
    parser.add_argument(
        "-e",
        "--experiment_name",
        type=str,
        default="default",
    )

    args = parser.parse_args()

    main(survey_directories=args.survey_directory, experiment_name=args.experiment_name)
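
A typical invocation looks like the following; the wave directory name is illustrative, and the experiment name matches the breadth experiment referenced elsewhere in this PR:

python check_survey_prompts.py ATP/American_Trends_Panel_W92 -e breadth

For each survey directory this prints one fully rendered prompt (the correct completion follows the █ marker) and the natural-language summary of the demographic variables, which is useful for eyeballing prompt construction before spending on API calls.
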
39 changes: 39 additions & 0 deletions configure_atp.py
@@ -0,0 +1,39 @@
import argparse
import json
import os
from pathlib import Path

from lm_survey.survey.survey import Survey

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "wave", type=Path, nargs="+", help="Path(s) to wave of ATP to configure"
    )
    parser.add_argument("output_path", type=Path, help="Path to output directory")
    parser.add_argument(
        "--base-variables", type=Path, help="Path to optional base variables"
    )
    args = parser.parse_args()

    base_variables = None
    if args.base_variables:
        with args.base_variables.open("r") as f:
            base_variables = json.load(f)

    for wave in args.wave:
        survey = Survey(name="ATP_W92", data_filename=wave / "data.csv")

        wave_output_dir = args.output_path / wave
        wave_output_dir.mkdir(parents=True, exist_ok=True)

        output_variables_path = wave_output_dir / "variables.json"
        survey.generate_atp_variables(wave, output_variables_path)

        # This is a simple way to put some extra stuff in the variables file
        if base_variables:
            with output_variables_path.open("r") as f:
                variables = json.load(f)
            variables.extend(base_variables)
            with output_variables_path.open("w") as f:
                json.dump(variables, f, indent=2)
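
Illustrative usage; the wave path, output directory, and base-variables file are assumptions rather than paths checked into this PR:

python configure_atp.py data/ATP/American_Trends_Panel_W92 variables --base-variables base_variables.json

For each wave directory passed, the script writes a generated variables.json under a directory mirroring the wave path inside output_path and, when --base-variables is given, appends those base entries to the generated list.
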
56 changes: 56 additions & 0 deletions create_atp_experiment.py
@@ -0,0 +1,56 @@
import argparse
import json
from pathlib import Path

import pandas as pd
from tqdm import tqdm


def main(survey_name: str, experiment_name: str) -> None:
    data_dir = Path("data") / survey_name
    experiment_dir = Path("experiments") / experiment_name / survey_name

    # create experiment dir
    if not experiment_dir.exists():
        experiment_dir.mkdir(parents=True, exist_ok=True)

    info_csv_path = data_dir / "info.csv"
    metadata_csv_path = data_dir / "metadata.csv"

    info_df = pd.read_csv(info_csv_path)
    metadata_df = pd.read_csv(metadata_csv_path)

    experiment_config = {
        "independent_variable_names": list(metadata_df["key"]),
        "dependent_variable_names": list(info_df["key"]),
    }

    with (experiment_dir / "config.json").open("w") as file:
        json.dump(experiment_config, file, indent=4)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()

    parser.add_argument(
        "-s",
        "--survey_name",
        type=str,
        default="all",
    )
    parser.add_argument(
        "-e",
        "--experiment_name",
        type=str,
        default="default",
    )

    args = parser.parse_args()

    if args.survey_name == "all":
        paths = sorted(Path("data").glob("ATP/American*/"))
        for path in tqdm(paths):
            args.survey_name = str(path.relative_to("data"))
            main(survey_name=args.survey_name, experiment_name=args.experiment_name)
    else:
        main(survey_name=args.survey_name, experiment_name=args.experiment_name)
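
The script writes experiments/<experiment_name>/<survey_name>/config.json containing just the variable keys pulled from metadata.csv (independent) and info.csv (dependent). A hypothetical example of the dictionary it dumps, with placeholder variable names rather than real ATP keys:

experiment_config = {
    "independent_variable_names": ["DEMOGRAPHIC_KEY_1", "DEMOGRAPHIC_KEY_2"],  # placeholders
    "dependent_variable_names": ["OPINION_KEY_1", "OPINION_KEY_2"],  # placeholders
}
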
147 changes: 147 additions & 0 deletions estimate_survey.py
@@ -0,0 +1,147 @@
import argparse
import json
import os
import typing

import numpy as np
import pandas as pd
from tqdm import tqdm

from lm_survey.samplers import AutoSampler, BaseSampler
from lm_survey.survey import Survey
from pathlib import Path


def estimate_survey_costs(
    sampler: BaseSampler,
    survey_name: str,
    experiment_name: str,
    *,
    n_samples_per_dependent_variable: typing.Optional[int] = None,
):
    data_dir = os.path.join("data", survey_name)
    variables_dir = os.path.join("variables", survey_name)
    experiment_dir = os.path.join("experiments", experiment_name, survey_name)

    with open(os.path.join(experiment_dir, "config.json"), "r") as file:
        config = json.load(file)

    survey = Survey(
        name=survey_name,
        data_filename=os.path.join(data_dir, "data.csv"),
        variables_filename=os.path.join(variables_dir, "variables.json"),
        independent_variable_names=config["independent_variable_names"],
        dependent_variable_names=config["dependent_variable_names"],
    )

    dependent_variable_samples = list(
        survey.iterate(
            n_samples_per_dependent_variable=n_samples_per_dependent_variable
        )
    )

    prompt_count = len(dependent_variable_samples)

    if hasattr(sampler, "batch_estimate_prompt_cost"):
        completion_costs = sampler.batch_estimate_prompt_cost(
            [
                dependent_variable_sample.prompt
                for dependent_variable_sample in dependent_variable_samples
            ]
        )
    else:
        completion_costs = []
        for dependent_variable_sample in tqdm(dependent_variable_samples):
            completion_cost = sampler.estimate_prompt_cost(
                dependent_variable_sample.prompt
            )
            completion_costs.append(completion_cost)

    total_completion_cost = np.sum(completion_costs)

    return {
        "prompt_count": prompt_count,
        "cost": total_completion_cost,
    }


def main(
    model_name: str,
    survey_names: typing.List[str],
    experiment_name: str,
    n_samples_per_dependent_variable: typing.Optional[int] = None,
) -> None:
    sampler = AutoSampler(model_name=model_name)

    survey_costs = {}
    for survey_name in tqdm(survey_names):
        estimate = estimate_survey_costs(
            sampler=sampler,
            survey_name=survey_name,
            experiment_name=experiment_name,
            n_samples_per_dependent_variable=n_samples_per_dependent_variable,
        )
        survey_costs[survey_name] = estimate

    total_cost = sum([estimate["cost"] for estimate in survey_costs.values()])

    total_prompt_count = sum(
        [estimate["prompt_count"] for estimate in survey_costs.values()]
    )

    if len(survey_names) > 1:
        print("Cost per survey:")
        for survey_name, survey_cost in survey_costs.items():
            print(
                f"{survey_name}: ${(survey_cost['cost'] / 100):.2f} ({survey_cost['prompt_count']}"
                " prompts)"
            )

    print(f"Total cost: ${(total_cost / 100):.2f} ({total_prompt_count} prompts)")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()

    parser.add_argument(
        "-m",
        "--model_name",
        type=str,
        required=True,
    )
    parser.add_argument(
        "-n",
        "--n_samples_per_dependent_variable",
        type=int,
    )
    parser.add_argument(
        "-e",
        "--experiment_name",
        type=str,
        default="default",
    )
    # Positional argument for survey dir(s)
    parser.add_argument(
        "survey_name",
        # nargs="+",
        type=str,
    )

    args = parser.parse_args()

    if args.survey_name == "all":
        paths = sorted(Path("data").glob("ATP/American*/"))
        survey_names = [str(path.relative_to("data")) for path in paths]
        main(
            model_name=args.model_name,
            survey_names=survey_names,
            experiment_name=args.experiment_name,
            n_samples_per_dependent_variable=args.n_samples_per_dependent_variable,
        )
    else:
        main(
            model_name=args.model_name,
            # A single survey name still needs to be passed as a list so that
            # main() iterates over surveys, not characters of a string.
            survey_names=[args.survey_name],
            experiment_name=args.experiment_name,
            n_samples_per_dependent_variable=args.n_samples_per_dependent_variable,
        )
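
An illustrative run; the model and survey names here are placeholders:

python estimate_survey.py -m text-davinci-003 -e default ATP/American_Trends_Panel_W92

Passing the literal survey name all instead expands to every data/ATP/American*/ directory. The printed dollar figures divide the sampler's estimates by 100, so estimate_prompt_cost and batch_estimate_prompt_cost are assumed to return costs in cents.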