Unified Configuration File in TOML Format #1174

Draft · wants to merge 11 commits into base: main
34 changes: 34 additions & 0 deletions config.toml
@@ -0,0 +1,34 @@
# config.toml
Collaborator:

I'm confused, isn't this configuration file gpt-engineer specific?

It would probably be a confusingly general name for most projects.

Collaborator (author):

Fully agreed. I will do that in the following commits.

# Unified configuration file for the gpt-engineer project

# API Configuration
[api]
# API key for OpenAI
# Set OPENAI_API_KEY to your personal OpenAI API key from https://platform.openai.com/account/api-keys
OPENAI_API_KEY = "your_api_key_here"
ANTHROPIC_API_KEY = "your_anthropic_api_key_here"
Collaborator @ErikBjare commented on Jun 24, 2024:

I'd suggest not adding this here, since the configuration file is checked into git. Let users handle it with env vars/dotenv imo, or by a separate config file (~/.config/gpt-engineer/config.toml).

Collaborator (author):

@ATheorell and I have agreed to keep all the current behaviour intact while providing an additional option to toggle all configurations in a unified way. The current interfaces, like CLI commands and env files, will still have higher priority. If the user doesn't provide one, GPT-engineer will default to using the config.toml file, offering a more convenient way for users to view and adjust all configurations simultaneously. This approach ensures flexibility while maintaining ease of use.

Collaborator (author):

Additionally, the reason for placing the config.toml file in the root directory is to accommodate users who might not be very tech-savvy. Having a one-stop configuration file in the root can save them a lot of time and make the setup process much more straightforward.

# Model configurations
[model]
model_name = "gpt-4o"
# Controls randomness: lower values for more focused, deterministic outputs
temperature = 0.1
# Endpoint for your Azure OpenAI Service (https://xx.openai.azure.com).
# If set, model_name above is treated as the deployment name chosen in Azure AI Studio.
azure_endpoint = ""

# Improve Mode Configuration
[improve]
# Enable or disable linting (true/false)
is_linting = true
# Enable or disable file selection. "true" will open your default editor to select the file. (true/false)
is_file_selection = true

# Git Filter Configuration
[git_filter]
Collaborator:

I'd suggest this section be renamed.

Collaborator (author):

Fully agreed. Once all the functions are finalized, we'll rename this section and revise all the names in a subsequent commit. For now, I've just placed some placeholders here to help identify and locate all configurations.

# File extension settings for the git filter
file_extensions = ["py", "toml", "md"]

# Self-Healing Mechanism Configuration
[self_healing]
# Number of retry attempts for self-healing mechanisms (0-2)
retry_attempts = 1
8 changes: 7 additions & 1 deletion gpt_engineer/applications/cli/main.py
@@ -57,6 +57,7 @@
from gpt_engineer.core.files_dict import FilesDict
from gpt_engineer.core.git import stage_uncommitted_to_git
from gpt_engineer.core.preprompts_holder import PrepromptsHolder
from gpt_engineer.core.project_config import Config
from gpt_engineer.core.prompt import Prompt
from gpt_engineer.tools.custom_steps import clarified_gen, lite_gen, self_heal

@@ -250,7 +251,7 @@ def prompt_yesno() -> bool:
def main(
project_path: str = typer.Argument(".", help="path"),
model: str = typer.Option(
os.environ.get("MODEL_NAME", "gpt-4o"), "--model", "-m", help="model id string"
os.environ.get("MODEL_NAME", "gpt-4"), "--model", "-m", help="model id string"
),
temperature: float = typer.Option(
0.1,
@@ -410,6 +411,11 @@ def main(
path = Path(project_path)
print("Running gpt-engineer in", path.absolute(), "\n")

# Read the configuration file from the project root directory.
# Config.from_toml is a classmethod returning a Config instance, so assign its result directly.
config = Config.from_toml(Path(os.getcwd()) / "config.toml")
# TODO: apply the configuration here

prompt = load_prompt(
DiskMemory(path),
improve_mode,
72 changes: 39 additions & 33 deletions gpt_engineer/core/project_config.py
@@ -8,7 +8,7 @@

import tomlkit

default_config_filename = "gpt-engineer.toml"
default_config_filename = "config.toml"

example_config = """
[run]
@@ -38,22 +38,22 @@ class _PathsConfig:


@dataclass
class _RunConfig:
build: str | None = None
test: str | None = None
lint: str | None = None
format: str | None = None
class _ApiConfig:
OPENAI_API_KEY: str | None = None
ANTHROPIC_API_KEY: str | None = None


@dataclass
class _OpenApiConfig:
url: str
class _ModelConfig:
model_name: str | None = None
temperature: float | None = None
azure_endpoint: str | None = None


@dataclass
class _GptEngineerAppConfig:
project_id: str
openapi: list[_OpenApiConfig] | None = None
class _ImproveConfig:
is_linting: bool | None = None
is_file_selection: bool | None = None


def filter_none(d: dict) -> dict:
@@ -74,8 +74,9 @@ class Config:
"""Configuration for the GPT Engineer CLI and gptengineer.app via `gpt-engineer.toml`."""

paths: _PathsConfig = field(default_factory=_PathsConfig)
run: _RunConfig = field(default_factory=_RunConfig)
gptengineer_app: _GptEngineerAppConfig | None = None
api_config: _ApiConfig = field(default_factory=_ApiConfig)
model_config: _ModelConfig = field(default_factory=_ModelConfig)
improve_config: _ImproveConfig = field(default_factory=_ImproveConfig)

@classmethod
def from_toml(cls, config_file: Path | str):
@@ -86,31 +87,36 @@ def from_toml(cls, config_file: Path | str):

@classmethod
def from_dict(cls, config_dict: dict):
run = _RunConfig(**config_dict.get("run", {}))
paths = _PathsConfig(**config_dict.get("paths", {}))

# load optional gptengineer-app section
gptengineer_app_dict = config_dict.get("gptengineer-app", {})
gptengineer_app = None
if gptengineer_app_dict:
assert (
"project_id" in gptengineer_app_dict
), "project_id is required in gptengineer-app section"
gptengineer_app = _GptEngineerAppConfig(
# required if gptengineer-app section is present
project_id=gptengineer_app_dict["project_id"],
openapi=[
_OpenApiConfig(**openapi)
for openapi in gptengineer_app_dict.get("openapi", [])
]
or None,
paths = _PathsConfig(**config_dict.get("paths", {"base": None, "src": None}))
api_config = _ApiConfig(
**config_dict.get(
"api", {"OPENAI_API_KEY": None, "ANTHROPIC_API_KEY": None}
)
)
model_config = _ModelConfig(
**config_dict.get(
"model",
{"model_name": None, "temperature": None, "azure_endpoint": None},
)
)
improve_config = _ImproveConfig(
**config_dict.get(
"improve", {"is_linting": None, "is_file_selection": None}
)
)

return cls(paths=paths, run=run, gptengineer_app=gptengineer_app)
return cls(
paths=paths,
api_config=api_config,
model_config=model_config,
improve_config=improve_config,
)

def to_dict(self) -> dict:
d = asdict(self)
d["gptengineer-app"] = d.pop("gptengineer_app", None)
d["api"] = d.pop("api_config", None)
d["model"] = d.pop("model_config", None)
d["improve"] = d.pop("improve_config", None)

# Drop None values and empty dictionaries
# Needed because tomlkit.dumps() doesn't handle None values,
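The comment in to_dict() above hints at why None values must be dropped: TOML has no null, so tomlkit.dumps() fails on them. A standalone sketch of that pattern (independent of the PR's actual `filter_none` helper):

```python
def filter_none(d: dict) -> dict:
    """Recursively drop None values and now-empty sub-dicts, since TOML
    has no null and tomlkit.dumps() cannot serialize None."""
    out = {}
    for key, value in d.items():
        if isinstance(value, dict):
            nested = filter_none(value)
            if nested:                  # drop tables that became empty
                out[key] = nested
        elif value is not None:
            out[key] = value
    return out

config_dict = {
    "api": {"OPENAI_API_KEY": "your_api_key_here", "ANTHROPIC_API_KEY": None},
    "model": {"model_name": "gpt-4o", "temperature": None},
    "improve": {},
}
cleaned = filter_none(config_dict)
# cleaned == {"api": {"OPENAI_API_KEY": "your_api_key_here"},
#             "model": {"model_name": "gpt-4o"}}
```

The cleaned dict can then be passed to tomlkit.dumps() to write the file back without serialization errors.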