From a6100e80ff49bde73f66dc223f19f4fa5601bea7 Mon Sep 17 00:00:00 2001
From: gusmally
Date: Wed, 11 Oct 2023 14:53:36 -0700
Subject: [PATCH] Correct legacy fine-tuning note (#770)

---
 examples/Chat_finetuning_data_prep.ipynb | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/examples/Chat_finetuning_data_prep.ipynb b/examples/Chat_finetuning_data_prep.ipynb
index 3c0f7fd..55ab9b0 100644
--- a/examples/Chat_finetuning_data_prep.ipynb
+++ b/examples/Chat_finetuning_data_prep.ipynb
@@ -10,8 +10,8 @@
     "\n",
     "This notebook serves as a tool to preprocess and analyze the chat dataset used for fine-tuning a chat model. \n",
     "It checks for format errors, provides basic statistics, and estimates token counts for fine-tuning costs.\n",
-    "The method shown here corresponds to [legacy fine-tuning](https://platform.openai.com/docs/guides/legacy-fine-tuning) for models like babbage-002 and davinci-002.\n",
-    "For fine-tuning gpt-3.5-turbo, see [the current fine-tuning page](https://platform.openai.com/docs/guides/fine-tuning)."
+    "The method shown here corresponds to the [current fine-tuning method](https://platform.openai.com/docs/guides/fine-tuning) for gpt-3.5-turbo.\n",
+    "See [legacy fine-tuning](https://platform.openai.com/docs/guides/legacy-fine-tuning) for models like babbage-002 and davinci-002."
    ]
   },
   {