Merge pull request #660 from iamarunbrahma/fix/wandb_broken_link
fix: broken link in RESOURCES.md
koreyspace authored Jan 24, 2025
2 parents 0d49fa0 + 5b49380 commit b39aae2
Showing 2 changed files with 4 additions and 4 deletions.
4 changes: 2 additions & 2 deletions 18-fine-tuning/RESOURCES.md
@@ -14,7 +14,7 @@ The lesson was built using a number of core resources from OpenAI and Azure Open
| [Fine-tuning and function calling](https://learn.microsoft.com/azure/ai-services/openai/how-to/fine-tuning-functions?WT.mc_id=academic-105485-koreyst) | Fine-tuning your model **with function calling examples** can improve model output by producing more accurate, consistent, and similarly-formatted responses - with cost savings |
| [Fine-tuning Models: Azure OpenAI Guidance](https://learn.microsoft.com/azure/ai-services/openai/concepts/models#fine-tuning-models?WT.mc_id=academic-105485-koreyst) | Look up this table to understand **what models can be fine-tuned** in Azure OpenAI, and which regions these are available in. Look up their token limits and training data expiry dates if needed. |
| [To Fine Tune or Not To Fine Tune? That is the Question](https://learn.microsoft.com/shows/ai-show/to-fine-tune-or-not-fine-tune-that-is-the-question?WT.mc_id=academic-105485-koreyst) | This 30-min **Oct 2023** episode of the AI Show discusses benefits, drawbacks and practical insights that help you make this decision. |
- | [Getting Started With LLM Fine-Tuning](https://learn.microsoft.com/ai/playbook/technology-guidance/generative-ai/working-with-llms/fine-tuning?WT.mc_id=academic-105485-koreyst) | This **AI Playbook** resource walks you through data requirements, formatting, hyperparameter fine-tuning and challenges/limitations you should know. |
+ | [Getting Started With LLM Fine-Tuning](https://learn.microsoft.com/ai/playbook/technology-guidance/generative-ai/working-with-llms/fine-tuning-recommend?WT.mc_id=academic-105485-koreyst) | This **AI Playbook** resource walks you through data requirements, formatting, hyperparameter fine-tuning and challenges/limitations you should know. |
| **Tutorial**: [Azure OpenAI GPT3.5 Turbo Fine-Tuning](https://learn.microsoft.com/azure/ai-services/openai/tutorials/fine-tune?tabs=python%2Ccommand-line?WT.mc_id=academic-105485-koreyst) | Learn to create a sample fine-tuning dataset, prepare for fine-tuning, create a fine-tuning job, and deploy the fine-tuned model on Azure. |
| **Tutorial**: [Fine-tune a Llama 2 model in Azure AI Studio](https://learn.microsoft.com/azure/ai-studio/how-to/fine-tune-model-llama?WT.mc_id=academic-105485-koreyst) | Azure AI Studio lets you tailor large language models to your personal datasets _using a UI-based workflow suitable for low-code developers_. See this example. |
| **Tutorial**: [Fine-tune Hugging Face models for a single GPU on Azure](https://learn.microsoft.com/azure/databricks/machine-learning/train-model/huggingface/fine-tune-model?WT.mc_id=academic-105485-koreyst) | This article describes how to fine-tune a Hugging Face model with the Hugging Face transformers library on a single GPU with Azure Databricks + Hugging Face Trainer libraries. |
@@ -30,7 +30,7 @@ This section captures additional resources that are worth exploring, but that we
| :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **OpenAI Cookbook**: [Data preparation and analysis for chat model fine-tuning](https://cookbook.openai.com/examples/chat_finetuning_data_prep?WT.mc_id=academic-105485-koreyst) | This notebook serves as a tool to preprocess and analyze the chat dataset used for fine-tuning a chat model. It checks for format errors, provides basic statistics, and estimates token counts for fine-tuning costs. See: [Fine-tuning method for gpt-3.5-turbo](https://platform.openai.com/docs/guides/fine-tuning?WT.mc_id=academic-105485-koreyst). |
| **OpenAI Cookbook**: [Fine-Tuning for Retrieval Augmented Generation (RAG) with Qdrant](https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant?WT.mc_id=academic-105485-koreyst) | The aim of this notebook is to walk through a comprehensive example of how to fine-tune OpenAI models for Retrieval Augmented Generation (RAG). We will also be integrating Qdrant and Few-Shot Learning to boost model performance and reduce fabrications. |
- | **OpenAI Cookbook**: [Fine-tuning GPT with Weights & Biases](https://cookbook.openai.com/examples/third_party/gpt_finetuning_with_wandb?WT.mc_id=academic-105485-koreyst) | Weights & Biases (W&B) is the AI developer platform, with tools for training models, fine-tuning models, and leveraging foundation models. Read their [OpenAI Fine-Tuning](https://docs.wandb.ai/guides/integrations/openai?WT.mc_id=academic-105485-koreyst) guide first, then try the Cookbook exercise. |
+ | **OpenAI Cookbook**: [Fine-tuning GPT with Weights & Biases](https://cookbook.openai.com/examples/third_party/gpt_finetuning_with_wandb?WT.mc_id=academic-105485-koreyst) | Weights & Biases (W&B) is the AI developer platform, with tools for training models, fine-tuning models, and leveraging foundation models. Read their [OpenAI Fine-Tuning](https://docs.wandb.ai/guides/integrations/openai-fine-tuning/?WT.mc_id=academic-105485-koreyst) guide first, then try the Cookbook exercise. |
| **Community Tutorial** [Phinetuning 2.0](https://huggingface.co/blog/g-ronimo/phinetuning?WT.mc_id=academic-105485-koreyst) - fine-tuning for Small Language Models | Meet [Phi-2](https://www.microsoft.com/research/blog/phi-2-the-surprising-power-of-small-language-models/?WT.mc_id=academic-105485-koreyst), Microsoft’s new small model, remarkably powerful yet compact. This tutorial will guide you through fine-tuning Phi-2, demonstrating how to build a unique dataset and fine-tune the model using QLoRA. |
| **Hugging Face Tutorial** [How to Fine-Tune LLMs in 2024 with Hugging Face](https://www.philschmid.de/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst) | This blog post walks you through how to fine-tune open LLMs using Hugging Face TRL, Transformers & datasets in 2024. You define a use case, set up a dev environment, prepare a dataset, fine-tune the model, test and evaluate it, then deploy it to production. |
| **Hugging Face: [AutoTrain Advanced](https://github.com/huggingface/autotrain-advanced?WT.mc_id=academic-105485-koreyst)** | Brings faster and easier training and deployment of [state-of-the-art machine learning models](https://twitter.com/abhi1thakur/status/1755167674894557291?WT.mc_id=academic-105485-koreyst). The repo has Colab-friendly tutorials with YouTube video guidance for fine-tuning. **Reflects the recent [local-first](https://twitter.com/abhi1thakur/status/1750828141805777057?WT.mc_id=academic-105485-koreyst) update**. Read the [AutoTrain documentation](https://huggingface.co/autotrain?WT.mc_id=academic-105485-koreyst). |
4 changes: 2 additions & 2 deletions 18-fine-tuning/translations/tw/RESOURCES.md
@@ -14,7 +14,7 @@
| [Fine-tuning and function calling](https://learn.microsoft.com/azure/ai-services/openai/how-to/fine-tuning-functions?WT.mc_id=academic-105485-koreyst) | Fine-tuning your model **with function calling examples** can improve model output by producing more accurate, consistent, and similarly-formatted responses - with cost savings |
| [Fine-tuning Models: Azure OpenAI Guidance](https://learn.microsoft.com/azure/ai-services/openai/concepts/models#fine-tuning-models?WT.mc_id=academic-105485-koreyst) | Look up this table to understand **which models can be fine-tuned** in Azure OpenAI, and which regions they are available in. Look up their token limits and training data expiry dates if needed. |
| [To Fine Tune or Not To Fine Tune? That is the Question](https://learn.microsoft.com/shows/ai-show/to-fine-tune-or-not-fine-tune-that-is-the-question?WT.mc_id=academic-105485-koreyst) | This 30-minute **Oct 2023** episode of the AI Show discusses the benefits, drawbacks and practical insights that help you make this decision. |
- | [Getting Started With LLM Fine-Tuning](https://learn.microsoft.com/ai/playbook/technology-guidance/generative-ai/working-with-llms/fine-tuning?WT.mc_id=academic-105485-koreyst) | This **AI Playbook** resource walks you through data requirements, formatting, hyperparameter fine-tuning and the challenges/limitations you should know. |
+ | [Getting Started With LLM Fine-Tuning](https://learn.microsoft.com/ai/playbook/technology-guidance/generative-ai/working-with-llms/fine-tuning-recommend?WT.mc_id=academic-105485-koreyst) | This **AI Playbook** resource walks you through data requirements, formatting, hyperparameter fine-tuning and the challenges/limitations you should know. |
| **Tutorial**: [Azure OpenAI GPT3.5 Turbo Fine-Tuning](https://learn.microsoft.com/azure/ai-services/openai/tutorials/fine-tune?tabs=python%2Ccommand-line?WT.mc_id=academic-105485-koreyst) | Learn to create a sample fine-tuning dataset, prepare for fine-tuning, create a fine-tuning job, and deploy the fine-tuned model on Azure. |
| **Tutorial**: [Fine-tune a Llama 2 model in Azure AI Studio](https://learn.microsoft.com/azure/ai-studio/how-to/fine-tune-model-llama?WT.mc_id=academic-105485-koreyst) | Azure AI Studio lets you tailor large language models to your personal datasets using a UI-based workflow _suitable for low-code developers_. See this example. |
| **Tutorial**: [Fine-tune Hugging Face models for a single GPU on Azure](https://learn.microsoft.com/azure/databricks/machine-learning/train-model/huggingface/fine-tune-model?WT.mc_id=academic-105485-koreyst) | This article describes how to fine-tune a Hugging Face model with the Hugging Face transformers library on a single GPU with Azure Databricks + Hugging Face Trainer libraries. |
@@ -30,7 +30,7 @@
| :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **OpenAI Cookbook**: [Data preparation and analysis for chat model fine-tuning](https://cookbook.openai.com/examples/chat_finetuning_data_prep?WT.mc_id=academic-105485-koreyst) | This notebook serves as a tool to preprocess and analyze the chat dataset used for fine-tuning a chat model. It checks for format errors, provides basic statistics, and estimates token counts for fine-tuning costs. See: [Fine-tuning method for gpt-3.5-turbo](https://platform.openai.com/docs/guides/fine-tuning?WT.mc_id=academic-105485-koreyst). |
| **OpenAI Cookbook**: [Fine-Tuning for Retrieval Augmented Generation (RAG) with Qdrant](https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant?WT.mc_id=academic-105485-koreyst) | The aim of this notebook is to walk through a comprehensive example of how to fine-tune OpenAI models for Retrieval Augmented Generation (RAG). We will also integrate Qdrant and Few-Shot Learning to boost model performance and reduce fabrications. |
- | **OpenAI Cookbook**: [Fine-tuning GPT with Weights & Biases](https://cookbook.openai.com/examples/third_party/gpt_finetuning_with_wandb?WT.mc_id=academic-105485-koreyst) | Weights & Biases (W&B) is the AI developer platform, with tools for training models, fine-tuning models, and leveraging foundation models. Read their [OpenAI Fine-Tuning](https://docs.wandb.ai/guides/integrations/openai?WT.mc_id=academic-105485-koreyst) guide first, then try the Cookbook exercise. |
+ | **OpenAI Cookbook**: [Fine-tuning GPT with Weights & Biases](https://cookbook.openai.com/examples/third_party/gpt_finetuning_with_wandb?WT.mc_id=academic-105485-koreyst) | Weights & Biases (W&B) is the AI developer platform, with tools for training models, fine-tuning models, and leveraging foundation models. Read their [OpenAI Fine-Tuning](https://docs.wandb.ai/guides/integrations/openai-fine-tuning/?WT.mc_id=academic-105485-koreyst) guide first, then try the Cookbook exercise. |
| **Community Tutorial** [Phinetuning 2.0](https://huggingface.co/blog/g-ronimo/phinetuning?WT.mc_id=academic-105485-koreyst) - fine-tuning for Small Language Models | Meet [Phi-2](https://www.microsoft.com/research/blog/phi-2-the-surprising-power-of-small-language-models/?WT.mc_id=academic-105485-koreyst), Microsoft’s new small model, remarkably powerful yet compact. This tutorial will guide you through fine-tuning Phi-2, demonstrating how to build a unique dataset and fine-tune the model using QLoRA. |
| **Hugging Face Tutorial** [How to Fine-Tune LLMs in 2024 with Hugging Face](https://www.philschmid.de/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst) | This blog post walks you through how to fine-tune open LLMs using Hugging Face TRL, Transformers & datasets in 2024. You define a use case, set up a dev environment, prepare a dataset, fine-tune the model, test and evaluate it, then deploy it to production. |
| **Hugging Face: [AutoTrain Advanced](https://github.com/huggingface/autotrain-advanced?WT.mc_id=academic-105485-koreyst)** | Brings faster and easier training and deployment of [state-of-the-art machine learning models](https://twitter.com/abhi1thakur/status/1755167674894557291?WT.mc_id=academic-105485-koreyst). The repo has Colab-friendly tutorials with YouTube video guidance for fine-tuning. **Reflects the recent [local-first](https://twitter.com/abhi1thakur/status/1750828141805777057?WT.mc_id=academic-105485-koreyst) update**. Read the [AutoTrain documentation](https://huggingface.co/autotrain?WT.mc_id=academic-105485-koreyst). |
