diff --git a/pages/applications/_meta.en.json b/pages/applications/_meta.en.json
index 8f2dab78e..2384b4d0a 100644
--- a/pages/applications/_meta.en.json
+++ b/pages/applications/_meta.en.json
@@ -1,4 +1,5 @@
 {
+    "finetuning-gpt4o": "Fine-tuning GPT-4o",
     "function_calling": "Function Calling",
     "context-caching": "Context Caching with LLMs",
     "generating": "Generating Data",
diff --git a/pages/applications/finetuning-gpt4o.en.mdx b/pages/applications/finetuning-gpt4o.en.mdx
new file mode 100644
index 000000000..8a84c38e0
--- /dev/null
+++ b/pages/applications/finetuning-gpt4o.en.mdx
@@ -0,0 +1,31 @@
+# Fine-Tuning with GPT-4o Models
+
+OpenAI recently [announced](https://openai.com/index/gpt-4o-fine-tuning/) the availability of fine-tuning for its latest models, GPT-4o and GPT-4o mini. This capability lets developers customize the GPT-4o models for specific use cases, improving performance and tailoring outputs.
+
+## Fine-Tuning Details and Costs
+
+Developers can now fine-tune the `gpt-4o-2024-08-06` checkpoint through the dedicated [fine-tuning dashboard](https://platform.openai.com/finetune). Fine-tuning lets developers customize response structure and tone, and improve adherence to complex, domain-specific instructions.
+
+Fine-tuning GPT-4o costs \$25 per million training tokens; inference with a fine-tuned model costs \$3.75 per million input tokens and \$15 per million output tokens. This feature is available only to developers on paid usage tiers.
+
+## Free Training Tokens for Experimentation
+
+To encourage exploration of this new feature, OpenAI is running a limited-time promotion until September 23rd: developers get 1 million free training tokens per day for GPT-4o and 2 million free training tokens per day for GPT-4o mini. This is a good opportunity to experiment and discover innovative applications for fine-tuned models.
+
+## Use Case: Emotion Classification
+
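+As a minimal sketch of what this could look like with the OpenAI Python SDK (the file name, label set, and prompts below are illustrative assumptions, not details from the announcement):
+
+```python
+import json
+
+from openai import OpenAI  # pip install openai
+
+client = OpenAI()  # reads OPENAI_API_KEY from the environment
+
+# Toy chat-format training examples; a real dataset needs at least 10.
+SYSTEM_PROMPT = "Classify the emotion of the user's message as one of: joy, sadness, anger, fear."
+examples = [
+    ("I just got the job I always wanted!", "joy"),
+    ("My flight was cancelled and nobody will help me.", "anger"),
+]
+
+# Write the examples in the JSONL chat format expected by the fine-tuning API.
+with open("emotion_train.jsonl", "w") as f:
+    for text, label in examples:
+        record = {
+            "messages": [
+                {"role": "system", "content": SYSTEM_PROMPT},
+                {"role": "user", "content": text},
+                {"role": "assistant", "content": label},
+            ]
+        }
+        f.write(json.dumps(record) + "\n")
+
+# Upload the training file, then start the fine-tuning job.
+training_file = client.files.create(
+    file=open("emotion_train.jsonl", "rb"),
+    purpose="fine-tune",
+)
+job = client.fine_tuning.jobs.create(
+    training_file=training_file.id,
+    model="gpt-4o-2024-08-06",
+)
+print(job.id, job.status)
+```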
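+
+Once the job succeeds, the fine-tuned model is called through the standard Chat Completions API; its name is reported as `job.fine_tuned_model`, and the id below is a hypothetical placeholder:
+
+```python
+from openai import OpenAI
+
+client = OpenAI()
+response = client.chat.completions.create(
+    model="ft:gpt-4o-2024-08-06:my-org::abc123",  # hypothetical fine-tuned model id
+    messages=[
+        {"role": "system", "content": "Classify the emotion of the user's message as one of: joy, sadness, anger, fear."},
+        {"role": "user", "content": "I can't believe they broke my trust again."},
+    ],
+)
+print(response.choices[0].message.content)  # e.g., "anger"
+```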