Commit 9c35ed4

finetuning gpt4o

omarsar committed Aug 22, 2024
1 parent 8a6a3c3 commit 9c35ed4
Showing 2 changed files with 32 additions and 0 deletions.
1 change: 1 addition & 0 deletions pages/applications/_meta.en.json
@@ -1,4 +1,5 @@
{
"finetuning-gpt4o": "Fine-tuning GPT-4o",
"function_calling": "Function Calling",
"context-caching": "Context Caching with LLMs",
"generating": "Generating Data",
31 changes: 31 additions & 0 deletions pages/applications/finetuning-gpt4o.en.mdx
@@ -0,0 +1,31 @@
# Fine-Tuning with GPT-4o Models

OpenAI recently [announced](https://openai.com/index/gpt-4o-fine-tuning/) the availability of fine-tuning for its latest models, GPT-4o and GPT-4o mini. This new capability enables developers to customize the GPT-4o models for specific use cases, enhancing performance and tailoring outputs.

## Fine-Tuning Details and Costs

Developers can now fine-tune the `gpt-4o-2024-08-06` checkpoint through the dedicated [fine-tuning dashboard](https://platform.openai.com/finetune). Fine-tuning lets you customize response structure and tone and improve adherence to complex, domain-specific instructions.
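
The same steps are also available programmatically. Below is a minimal sketch using the official OpenAI Python SDK (the file name `train.jsonl` and environment-based authentication are assumptions, not part of the announcement):

```python
# Minimal sketch: upload a training file and start a fine-tuning job.
# Assumes OPENAI_API_KEY is set and train.jsonl is a chat-formatted dataset.
from openai import OpenAI

client = OpenAI()

# Upload the JSONL training data for fine-tuning
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Create a fine-tuning job on the GPT-4o checkpoint
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print(job.id, job.status)
```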

Fine-tuning GPT-4o costs \$25 per million training tokens. Inference with a fine-tuned model costs \$3.75 per million input tokens and \$15 per million output tokens. This feature is available only to developers on paid usage tiers.
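
As a rough worked example: a 200,000-token training file run for three epochs consumes about 600,000 billed training tokens, which comes to roughly \$15 at the \$25-per-million rate.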

## Free Training Tokens for Experimentation

To encourage exploration of this new feature, OpenAI is running a limited-time promotion through September 23rd: 1 million free training tokens per day for GPT-4o and 2 million free training tokens per day for GPT-4o mini. This is a good opportunity to experiment and discover new applications for fine-tuned models.

## Use Case: Emotion Classification

<iframe width="100%"
height="415px"
src="https://www.youtube.com/embed/UJ7ry7Qp2Js?si=ZU3K0ZVNfQjnlZgo" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
/>

In the video above, we walk through a practical example of fine-tuning: training a model for emotion classification. Using a [JSONL formatted dataset](https://github.com/dair-ai/datasets/tree/main/openai) of text samples labeled with their corresponding emotions, GPT-4o mini can be fine-tuned to classify text by emotional tone.
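
Each line of such a dataset follows OpenAI's chat fine-tuning format: a JSON object with a `messages` array ending in the desired assistant reply. A minimal sketch of preparing a file in this format (the texts, labels, and system prompt below are illustrative, not taken from the linked dataset):

```python
# Sketch of writing an emotion-classification dataset in OpenAI's
# chat fine-tuning JSONL format (one JSON object per line).
import json

examples = [
    {"text": "I can't believe I won the lottery!", "label": "joy"},
    {"text": "Everything feels hopeless lately.", "label": "sadness"},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": "Classify the emotion of the text."},
                {"role": "user", "content": ex["text"]},
                {"role": "assistant", "content": ex["label"]},
            ]
        }
        f.write(json.dumps(record) + "\n")
```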

This demonstration highlights the potential of fine-tuning to improve model performance on specific tasks, yielding significant accuracy gains over the corresponding base models.

## Accessing and Evaluating Fine-Tuned Models

Once the fine-tuning process is complete, developers can access and evaluate their custom models through the OpenAI playground. The playground allows for interactive testing with various inputs and provides insights into the model's performance. For more comprehensive evaluation, developers can integrate the fine-tuned model into their applications via the OpenAI API and conduct systematic testing.
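
A minimal sketch of the API route is shown below; the `ft:`-prefixed model ID is a placeholder, and you should substitute the identifier reported by your completed fine-tuning job:

```python
# Sketch: query a fine-tuned model through the chat completions API.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    # Placeholder fine-tuned model ID; use the one your job returns.
    model="ft:gpt-4o-mini-2024-07-18:your-org::abc123",
    messages=[
        {"role": "system", "content": "Classify the emotion of the text."},
        {"role": "user", "content": "What a wonderful surprise!"},
    ],
)
print(response.choices[0].message.content)
```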

OpenAI's introduction of fine-tuning for GPT-4o models unlocks new possibilities for developers seeking to leverage the power of LLMs for specialized tasks.
