diff --git a/examples/azure/embeddings.ipynb b/examples/azure/embeddings.ipynb
index f008a08..544367d 100644
--- a/examples/azure/embeddings.ipynb
+++ b/examples/azure/embeddings.ipynb
@@ -141,7 +141,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "embeddings = openai.Embedding.create(engine=deployment_id,\n",
+    "embeddings = openai.Embedding.create(deployment_id=deployment_id,\n",
     "                                     input=\"The food was delicious and the waiter...\")\n",
     "                                     \n",
     "print(embeddings)"
diff --git a/examples/azure/finetuning.ipynb b/examples/azure/finetuning.ipynb
index 8e59dc5..ba3405c 100644
--- a/examples/azure/finetuning.ipynb
+++ b/examples/azure/finetuning.ipynb
@@ -4,7 +4,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Azure Fune tuning example\n",
+    "# Azure Fine tuning example\n",
     "In this example we'll try to go over all operations that can be done using the Azure endpoints and their differences with the openAi endpoints (if any).\n",
     "This example focuses on finetuning but touches on the majority of operations that are also available using the API. This example is meant to be a quick way of showing simple operations and is not meant as a finetune model adaptation tutorial.\n"
    ]
@@ -414,7 +414,7 @@
    "source": [
     "print('Sending a test completion job')\n",
     "start_phrase = 'When I go to the store, I want a'\n",
-    "response = openai.Completion.create(engine=deployment_id, prompt=start_phrase, max_tokens=4)\n",
+    "response = openai.Completion.create(deployment_id=deployment_id, prompt=start_phrase, max_tokens=4)\n",
     "text = response['choices'][0]['text'].replace('\\n', '').replace(' .', '.').strip()\n",
     "print(f'\"{start_phrase} {text}\"')\n"
    ]
diff --git a/examples/fine-tuned_qa/olympics-2-create-qa.ipynb b/examples/fine-tuned_qa/olympics-2-create-qa.ipynb
index 9834cec..7b279f0 100644
--- a/examples/fine-tuned_qa/olympics-2-create-qa.ipynb
+++ b/examples/fine-tuned_qa/olympics-2-create-qa.ipynb
@@ -298,8 +298,10 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## 2.5 Search file\n",
-    "We create a search file ([API reference](https://beta.openai.com/docs/api-reference/files/list)), which can be used to retrieve the relevant context when a question is asked.\n"
+    "## 2.5 Search file (DEPRECATED)\n",
+    "We create a search file ([API reference](https://beta.openai.com/docs/api-reference/files/list)), which can be used to retrieve the relevant context when a question is asked.\n",
+    "\n",
+    "**DEPRECATED: The /search endpoint is deprecated in favour of using embeddings. Embeddings are cheaper, faster and can support a better search experience. See [Question Answering Guide](https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb) for a search implementation using the embeddings**\n"
    ]
   },
   {
@@ -725,11 +727,9 @@
   }
  ],
  "metadata": {
-  "interpreter": {
-   "hash": "be4b5d5b73a21c599de40d6deb1129796d12dc1cc33a738f7bac13269cfcafe8"
-  },
   "kernelspec": {
-   "display_name": "Python 3.7.3 64-bit ('base': conda)",
+   "display_name": "Python 3.9.9 64-bit ('3.9.9')",
+   "language": "python",
    "name": "python3"
   },
   "language_info": {
@@ -742,9 +742,14 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.3"
+   "version": "3.9.9"
   },
-  "orig_nbformat": 4
+  "orig_nbformat": 4,
+  "vscode": {
+   "interpreter": {
+    "hash": "cb9817b186a29e4e9713184d901f26c1ee05ad25243d878baff7f31bb1fef480"
+   }
+  }
  },
  "nbformat": 4,
  "nbformat_minor": 2
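
For readers following the deprecation note added in the last hunks, below is a minimal sketch of the kind of embeddings-based search it points to, written against the pre-1.0 `openai` Python package these notebooks use. The Azure settings, the deployment name, and the sample documents are illustrative placeholders rather than values taken from the notebooks, and this is not the cookbook's own implementation (see the linked Question Answering Guide for that). The `deployment_id=` keyword mirrors the change made in the first two hunks.

```python
import numpy as np
import openai

# Azure OpenAI configuration -- all values below are placeholders.
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2022-12-01"
openai.api_key = "YOUR-AZURE-OPENAI-KEY"

# Hypothetical name of an embeddings model deployment in your Azure resource.
deployment_id = "YOUR-EMBEDDINGS-DEPLOYMENT"

# Toy document collection standing in for the notebook's context sections.
documents = [
    "The 2020 Summer Olympics were held in Tokyo, Japan.",
    "The opening ceremony took place on 23 July 2021.",
    "The United States topped the medal table.",
]


def embed(text):
    """Return the embedding vector for `text` from the Azure deployment."""
    response = openai.Embedding.create(deployment_id=deployment_id, input=text)
    return np.array(response["data"][0]["embedding"])


def search(query, docs):
    """Return the document whose embedding is most similar to the query's."""
    query_vec = embed(query)
    doc_vecs = [embed(doc) for doc in docs]
    scores = [
        np.dot(query_vec, vec) / (np.linalg.norm(query_vec) * np.linalg.norm(vec))
        for vec in doc_vecs
    ]
    return docs[int(np.argmax(scores))]


print(search("When did the Tokyo Olympics open?", documents))
```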