Caching model responses for quicker development and branching of runs #40

Open
jas-ho opened this issue Jul 29, 2023 · 0 comments · May be fixed by #45
Comments


jas-ho commented Jul 29, 2023

Note: this needs to be defined more carefully.
Currently, this is mostly a bookmark for a discussion we had with @spirali and @gavento.

The rough idea would be to allow an actor to read responses from a context instead of querying an API.

How would this help?

  • could speed up development by avoiding slow API calls while debugging
  • could help debugging by providing reproducible / deterministic actor responses
  • could allow branching experiments where, e.g., a run of a debate which looks promising up to round 10 can be repeated with several possible continuations after round 10 (see the sketch below)
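To make the replay idea concrete, here is a minimal sketch. This is not the project's actual API: `CachedActor`, `query_fn`, and `fork` are hypothetical names, and it assumes the actor's model call can be wrapped in a single query function.

```python
import hashlib
from typing import Callable, Dict, Optional


class CachedActor:
    """Wraps a query function and replays stored responses instead of re-querying the API."""

    def __init__(self, query_fn: Callable[[str], str], cache: Optional[Dict[str, str]] = None):
        self.query_fn = query_fn  # the real (slow, nondeterministic) API call
        self.cache = cache if cache is not None else {}

    def _key(self, prompt: str) -> str:
        # Hash the prompt so cache keys stay stable across runs.
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def query(self, prompt: str) -> str:
        key = self._key(prompt)
        if key in self.cache:
            return self.cache[key]  # hit: deterministic replay, no API call
        response = self.query_fn(prompt)  # miss: fall through to the API
        self.cache[key] = response
        return response

    def fork(self) -> "CachedActor":
        # Branching: copy the cache so a continuation (e.g. after round 10)
        # can diverge without mutating the original run's stored responses.
        return CachedActor(self.query_fn, dict(self.cache))
```

In this sketch, a forked actor replays the shared prefix of a run from the cache, while prompts it has not seen before fall through to the API, so several continuations can be explored from the same promising state.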
@jas-ho changed the title from "Caching model responses for quicker development" to "Caching model responses for quicker development and branching of runs" on Jul 29, 2023
@spirali linked pull request #45 on Jul 31, 2023 that will close this issue