mirror of https://github.com/james-m-jordan/openai-cookbook.git
synced 2025-05-09 19:32:38 +00:00

Add notebooks for Azure OpenAI with the v1 Python SDK (#833)

parent 54c34c79ac
commit feef1bf398
@@ -7,6 +7,8 @@
   "source": [
    "# Azure DALL·E image generation example\n",
    "\n",
+   "> Note: There is a newer version of the openai library available. See https://github.com/openai/openai-python/discussions/742\n",
+   "\n",
    "This notebook shows how to generate images with the Azure OpenAI service."
   ]
  },
@@ -26,7 +28,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "! pip install openai\n",
+    "! pip install \"openai>=0.28.1,<1.0.0\"\n",
    "# We need requests to retrieve the generated image\n",
    "! pip install requests\n",
    "# We use Pillow to display the generated image\n",
287 examples/azure/archive/chat.ipynb Normal file
# Azure chat completions example (preview)

> Note: There is a newer version of the openai library available. See https://github.com/openai/openai-python/discussions/742

This example will cover chat completions using the Azure OpenAI service.

## Setup

First, we install the necessary dependencies.

```python
! pip install "openai>=0.28.1,<1.0.0"
```

For the following sections to work properly, we first have to set up a few things. Let's start with the `api_base` and `api_version`. To find your `api_base`, go to https://portal.azure.com, find your resource, and then under "Resource Management" -> "Keys and Endpoints" look for the "Endpoint" value.

```python
import os
import openai
```

```python
openai.api_version = '2023-05-15'
openai.api_base = ''  # Please add your endpoint here
```

Next we have to set up the `api_type` and `api_key`. We can either get the key from the portal or through Microsoft Active Directory authentication. Depending on which you use, the `api_type` is either `azure` or `azure_ad`.

### Setup: Portal

Let's first look at getting the key from the portal. Go to https://portal.azure.com, find your resource, and then under "Resource Management" -> "Keys and Endpoints" look for one of the "Keys" values.

```python
openai.api_type = 'azure'
openai.api_key = os.environ["OPENAI_API_KEY"]
```

> Note: In this example, we configured the library to use the Azure API by setting the variables in code. For development, consider setting the environment variables instead:

```
OPENAI_API_BASE
OPENAI_API_KEY
OPENAI_API_TYPE
OPENAI_API_VERSION
```

### (Optional) Setup: Microsoft Active Directory Authentication

Let's now see how we can get a key via Microsoft Active Directory authentication. Uncomment the following code if you want to use Active Directory authentication instead of keys from the portal.

```python
# from azure.identity import DefaultAzureCredential

# default_credential = DefaultAzureCredential()
# token = default_credential.get_token("https://cognitiveservices.azure.com/.default")

# openai.api_type = 'azure_ad'
# openai.api_key = token.token
```

A token is valid for a period of time, after which it will expire. To ensure a valid token is sent with every request, you can refresh an expiring token by hooking into `requests.auth`:

```python
import typing
import time
import requests

if typing.TYPE_CHECKING:
    from azure.core.credentials import TokenCredential

class TokenRefresh(requests.auth.AuthBase):

    def __init__(self, credential: "TokenCredential", scopes: typing.List[str]) -> None:
        self.credential = credential
        self.scopes = scopes
        # Caches the AccessToken returned by get_token; it carries both
        # the token string and its expiry timestamp.
        self.cached_token = None

    def __call__(self, req):
        # Refresh when no token is cached or it expires within five minutes
        if not self.cached_token or self.cached_token.expires_on - time.time() < 300:
            self.cached_token = self.credential.get_token(*self.scopes)
        req.headers["Authorization"] = f"Bearer {self.cached_token.token}"
        return req

# Requires `default_credential` from the Active Directory cell above
session = requests.Session()
session.auth = TokenRefresh(default_credential, ["https://cognitiveservices.azure.com/.default"])

openai.requestssession = session
```

## Deployments

In this section we are going to create a deployment using the `gpt-35-turbo` model that we can then use to create chat completions.

### Deployments: Create manually

Let's create a deployment using the `gpt-35-turbo` model. Go to https://portal.azure.com, find your resource, and then under "Resource Management" -> "Model deployments" create a new `gpt-35-turbo` deployment.

```python
deployment_id = ''  # Fill in the deployment id from the portal here
```

### Create chat completion

Now let's send a sample chat completion to the deployment.

```python
# For all possible arguments see https://platform.openai.com/docs/api-reference/chat-completions/create
response = openai.ChatCompletion.create(
    deployment_id=deployment_id,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Knock knock."},
        {"role": "assistant", "content": "Who's there?"},
        {"role": "user", "content": "Orange."},
    ],
    temperature=0,
)

print(f"{response.choices[0].message.role}: {response.choices[0].message.content}")
```

We can also stream the response.

```python
response = openai.ChatCompletion.create(
    deployment_id=deployment_id,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Knock knock."},
        {"role": "assistant", "content": "Who's there?"},
        {"role": "user", "content": "Orange."},
    ],
    temperature=0,
    stream=True
)

for chunk in response:
    if len(chunk.choices) > 0:
        delta = chunk.choices[0].delta

        if "role" in delta.keys():
            print(delta.role + ": ", end="", flush=True)
        if "content" in delta.keys():
            print(delta.content, end="", flush=True)
```
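The streaming loop above prints each delta as it arrives. A minimal sketch of how the same deltas can instead be accumulated into one complete message; the hard-coded `chunks` list is a made-up stand-in for what the API would yield:

```python
# Hypothetical chunks, shaped like the streamed API response above.
chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Orange "}}]},
    {"choices": [{"delta": {"content": "who?"}}]},
    {"choices": [{"delta": {}}]},  # final chunk carries no delta fields
]

role, parts = None, []
for chunk in chunks:
    if not chunk["choices"]:
        continue
    delta = chunk["choices"][0]["delta"]
    if "role" in delta:
        role = delta["role"]            # first chunk announces the role
    if "content" in delta:
        parts.append(delta["content"])  # later chunks carry text fragments

message = {"role": role, "content": "".join(parts)}
print(message)  # -> {'role': 'assistant', 'content': 'Orange who?'}
```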
448 examples/azure/archive/chat_with_your_own_data.ipynb Normal file
File diff suppressed because one or more lines are too long
247 examples/azure/archive/completions.ipynb Normal file
# Azure completions example

> Note: There is a newer version of the openai library available. See https://github.com/openai/openai-python/discussions/742

This example will cover completions using the Azure OpenAI service.

## Setup

First, we install the necessary dependencies.

```python
! pip install "openai>=0.28.1,<1.0.0"
```

For the following sections to work properly, we first have to set up a few things. Let's start with the `api_base` and `api_version`. To find your `api_base`, go to https://portal.azure.com, find your resource, and then under "Resource Management" -> "Keys and Endpoints" look for the "Endpoint" value.

```python
import os
import openai
```

```python
openai.api_version = '2023-05-15'
openai.api_base = ''  # Please add your endpoint here
```

Next we have to set up the `api_type` and `api_key`. We can either get the key from the portal or through Microsoft Active Directory authentication. Depending on which you use, the `api_type` is either `azure` or `azure_ad`.

### Setup: Portal

Let's first look at getting the key from the portal. Go to https://portal.azure.com, find your resource, and then under "Resource Management" -> "Keys and Endpoints" look for one of the "Keys" values.

```python
openai.api_type = 'azure'
openai.api_key = os.environ["OPENAI_API_KEY"]
```

> Note: In this example, we configured the library to use the Azure API by setting the variables in code. For development, consider setting the environment variables instead:

```
OPENAI_API_BASE
OPENAI_API_KEY
OPENAI_API_TYPE
OPENAI_API_VERSION
```

### (Optional) Setup: Microsoft Active Directory Authentication

Let's now see how we can get a key via Microsoft Active Directory authentication. Uncomment the following code if you want to use Active Directory authentication instead of keys from the portal.

```python
# from azure.identity import DefaultAzureCredential

# default_credential = DefaultAzureCredential()
# token = default_credential.get_token("https://cognitiveservices.azure.com/.default")

# openai.api_type = 'azure_ad'
# openai.api_key = token.token
```

A token is valid for a period of time, after which it will expire. To ensure a valid token is sent with every request, you can refresh an expiring token by hooking into `requests.auth`:

```python
import typing
import time
import requests

if typing.TYPE_CHECKING:
    from azure.core.credentials import TokenCredential

class TokenRefresh(requests.auth.AuthBase):

    def __init__(self, credential: "TokenCredential", scopes: typing.List[str]) -> None:
        self.credential = credential
        self.scopes = scopes
        # Caches the AccessToken returned by get_token; it carries both
        # the token string and its expiry timestamp.
        self.cached_token = None

    def __call__(self, req):
        # Refresh when no token is cached or it expires within five minutes
        if not self.cached_token or self.cached_token.expires_on - time.time() < 300:
            self.cached_token = self.credential.get_token(*self.scopes)
        req.headers["Authorization"] = f"Bearer {self.cached_token.token}"
        return req

# Requires `default_credential` from the Active Directory cell above
session = requests.Session()
session.auth = TokenRefresh(default_credential, ["https://cognitiveservices.azure.com/.default"])

openai.requestssession = session
```

## Deployments

In this section we are going to create a deployment using the `text-davinci-002` model that we can then use to create completions.

### Deployments: Create manually

Create a new deployment by going to your Resource in your portal under "Resource Management" -> "Model deployments". Select `text-davinci-002` as the model.

```python
deployment_id = ''  # Fill in the deployment id from the portal here
```

### Completions

Now let's send a sample completion to the deployment.

```python
prompt = "The food was delicious and the waiter"
completion = openai.Completion.create(deployment_id=deployment_id,
                                      prompt=prompt, stop=".", temperature=0)

print(f"{prompt}{completion['choices'][0]['text']}.")
```
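The `TokenRefresh` helper above only calls out for a new token when none is cached or the cached one is within five minutes of expiry. That policy can be exercised on its own with a stub credential (no Azure account or network needed; `StubCredential` and `get_bearer` are invented for illustration):

```python
import time

class StubCredential:
    """Invented stand-in for an azure.identity credential; counts refreshes."""
    def __init__(self):
        self.calls = 0

    def get_token(self, *scopes):
        self.calls += 1
        # Minimal token object with the two fields TokenRefresh relies on
        return type("Token", (), {"token": f"tok-{self.calls}",
                                  "expires_on": time.time() + 3600})()

def get_bearer(credential, cache):
    # Same policy as TokenRefresh.__call__: refresh if the token is
    # absent or has fewer than 300 seconds of validity left.
    if cache["token"] is None or cache["token"].expires_on - time.time() < 300:
        cache["token"] = credential.get_token("scope")
    return f"Bearer {cache['token'].token}"

cred = StubCredential()
cache = {"token": None}
print(get_bearer(cred, cache))  # first call triggers a refresh
print(get_bearer(cred, cache))  # cached token is reused, no refresh
```

Both calls print `Bearer tok-1`: the second request reuses the cached token because it is still a full hour from expiry.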
279 examples/azure/archive/embeddings.ipynb Normal file
# Azure embeddings example

> Note: There is a newer version of the openai library available. See https://github.com/openai/openai-python/discussions/742

This example will cover embeddings using the Azure OpenAI service.

## Setup

First, we install the necessary dependencies.

```python
! pip install "openai>=0.28.1,<1.0.0"
```

For the following sections to work properly, we first have to set up a few things. Let's start with the `api_base` and `api_version`. To find your `api_base`, go to https://portal.azure.com, find your resource, and then under "Resource Management" -> "Keys and Endpoints" look for the "Endpoint" value.

```python
import os
import openai
```

```python
openai.api_version = '2023-05-15'
openai.api_base = ''  # Please add your endpoint here
```

Next we have to set up the `api_type` and `api_key`. We can either get the key from the portal or through Microsoft Active Directory authentication. Depending on which you use, the `api_type` is either `azure` or `azure_ad`.

### Setup: Portal

Let's first look at getting the key from the portal. Go to https://portal.azure.com, find your resource, and then under "Resource Management" -> "Keys and Endpoints" look for one of the "Keys" values.

```python
openai.api_type = 'azure'
openai.api_key = os.environ["OPENAI_API_KEY"]
```

> Note: In this example, we configured the library to use the Azure API by setting the variables in code. For development, consider setting the environment variables instead:

```
OPENAI_API_BASE
OPENAI_API_KEY
OPENAI_API_TYPE
OPENAI_API_VERSION
```

### (Optional) Setup: Microsoft Active Directory Authentication

Let's now see how we can get a key via Microsoft Active Directory authentication. Uncomment the following code if you want to use Active Directory authentication instead of keys from the portal.

```python
# from azure.identity import DefaultAzureCredential

# default_credential = DefaultAzureCredential()
# token = default_credential.get_token("https://cognitiveservices.azure.com/.default")

# openai.api_type = 'azure_ad'
# openai.api_key = token.token
```

A token is valid for a period of time, after which it will expire. To ensure a valid token is sent with every request, you can refresh an expiring token by hooking into `requests.auth`:

```python
import typing
import time
import requests

if typing.TYPE_CHECKING:
    from azure.core.credentials import TokenCredential

class TokenRefresh(requests.auth.AuthBase):

    def __init__(self, credential: "TokenCredential", scopes: typing.List[str]) -> None:
        self.credential = credential
        self.scopes = scopes
        # Caches the AccessToken returned by get_token; it carries both
        # the token string and its expiry timestamp.
        self.cached_token = None

    def __call__(self, req):
        # Refresh when no token is cached or it expires within five minutes
        if not self.cached_token or self.cached_token.expires_on - time.time() < 300:
            self.cached_token = self.credential.get_token(*self.scopes)
        req.headers["Authorization"] = f"Bearer {self.cached_token.token}"
        return req

# Requires `default_credential` from the Active Directory cell above
session = requests.Session()
session.auth = TokenRefresh(default_credential, ["https://cognitiveservices.azure.com/.default"])

openai.requestssession = session
```

## Deployments

In this section we are going to create a deployment that we can use to create embeddings.

### Deployments: Create manually

Let's create a deployment using the `text-similarity-curie-001` model. Create a new deployment by going to your Resource in your portal under "Resource Management" -> "Model deployments".

```python
deployment_id = ''  # Fill in the deployment id from the portal here
```

### Deployments: Listing

Because creating a new deployment takes a while, let's look in the subscription for an existing deployment that has already succeeded.

```python
print('While deployment running, selecting a completed one that supports embeddings.')
deployment_id = None
result = openai.Deployment.list()
for deployment in result.data:
    if deployment["status"] != "succeeded":
        continue

    model = openai.Model.retrieve(deployment["model"])
    if not model["capabilities"]["embeddings"]:
        continue

    deployment_id = deployment["id"]
    break

if not deployment_id:
    print('No deployment with status: succeeded found.')
else:
    print(f'Found a succeeded deployment that supports embeddings with id: {deployment_id}.')
```

### Embeddings

Now let's send a sample embedding request to the deployment.

```python
embeddings = openai.Embedding.create(deployment_id=deployment_id,
                                     input="The food was delicious and the waiter...")

print(embeddings)
```
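The embeddings notebook stops at printing the raw vector. A common next step is comparing two embeddings by cosine similarity; a minimal pure-Python sketch (the short example vectors are made up, real embeddings have many more dimensions):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|); closer to 1.0 means more similar direction
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

v1 = [0.1, 0.2, 0.3]  # hypothetical embedding vectors
v2 = [0.2, 0.4, 0.6]  # same direction as v1, so similarity is ~1.0
print(cosine_similarity(v1, v2))
```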
449 examples/azure/archive/functions.ipynb Normal file
@ -0,0 +1,449 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Azure functions example\n",
|
||||
"\n",
|
||||
"> Note: There is a newer version of the openai library available. See https://github.com/openai/openai-python/discussions/742\n",
|
||||
"\n",
|
||||
"This notebook shows how to use the function calling capability with the Azure OpenAI service. Functions allow a caller of chat completions to define capabilities that the model can use to extend its\n",
|
||||
"functionality into external tools and data sources.\n",
|
||||
"\n",
|
||||
"You can read more about chat functions on OpenAI's blog: https://openai.com/blog/function-calling-and-other-api-updates\n",
|
||||
"\n",
|
||||
"**NOTE**: Chat functions require model versions beginning with gpt-4 and gpt-35-turbo's `-0613` labels. They are not supported by older versions of the models."
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"First, we install the necessary dependencies."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"! pip install \"openai>=0.28.1,<1.0.0\"\n",
|
||||
"# (Optional) If you want to use Microsoft Active Directory\n",
|
||||
"! pip install azure-identity"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import openai"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"Additionally, to properly access the Azure OpenAI Service, we need to create the proper resources at the [Azure Portal](https://portal.azure.com) (you can check a detailed guide on how to do this in the [Microsoft Docs](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal))\n",
|
||||
"\n",
|
||||
"Once the resource is created, the first thing we need to use is its endpoint. You can get the endpoint by looking at the *\"Keys and Endpoints\"* section under the *\"Resource Management\"* section. Having this, we will set up the SDK using this information:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 27,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"openai.api_base = \"\" # Add your endpoint here\n",
|
||||
"\n",
|
||||
"# functions is only supported by the 2023-07-01-preview API version\n",
|
||||
"openai.api_version = \"2023-07-01-preview\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Authentication\n",
|
||||
"\n",
|
||||
"The Azure OpenAI service supports multiple authentication mechanisms that include API keys and Azure credentials."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"use_azure_active_directory = False"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"#### Authentication using API key\n",
|
||||
"\n",
|
||||
"To set up the OpenAI SDK to use an *Azure API Key*, we need to set up the `api_type` to `azure` and set `api_key` to a key associated with your endpoint (you can find this key in *\"Keys and Endpoints\"* under *\"Resource Management\"* in the [Azure Portal](https://portal.azure.com))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if not use_azure_active_directory:\n",
|
||||
" openai.api_type = \"azure\"\n",
|
||||
" openai.api_key = os.environ[\"OPENAI_API_KEY\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"> Note: In this example, we configured the library to use the Azure API by setting the variables in code. For development, consider setting the environment variables instead:\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"OPENAI_API_BASE\n",
|
||||
"OPENAI_API_KEY\n",
|
||||
"OPENAI_API_TYPE\n",
|
||||
"OPENAI_API_VERSION\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Authentication using Azure Active Directory\n",
|
||||
"Let's now see how we can get a key via Azure Active Directory authentication."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azure.identity import DefaultAzureCredential\n",
|
||||
"\n",
|
||||
"if use_azure_active_directory:\n",
|
||||
" default_credential = DefaultAzureCredential()\n",
|
||||
" token = default_credential.get_token(\"https://cognitiveservices.azure.com/.default\")\n",
|
||||
"\n",
|
||||
" openai.api_type = \"azure_ad\"\n",
|
||||
" openai.api_key = token.token"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"A token is valid for a period of time, after which it will expire. To ensure a valid token is sent with every request, you can refresh an expiring token by hooking into requests.auth:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import typing\n",
|
||||
"import time\n",
|
||||
"import requests\n",
|
||||
"\n",
|
||||
"if typing.TYPE_CHECKING:\n",
|
||||
"    from azure.core.credentials import TokenCredential, AccessToken\n",
|
||||
"\n",
|
||||
"class TokenRefresh(requests.auth.AuthBase):\n",
|
||||
"\n",
|
||||
" def __init__(self, credential: \"TokenCredential\", scopes: typing.List[str]) -> None:\n",
|
||||
" self.credential = credential\n",
|
||||
" self.scopes = scopes\n",
|
||||
" self.cached_token: typing.Optional[str] = None\n",
|
||||
"\n",
|
||||
" def __call__(self, req):\n",
|
||||
" if not self.cached_token or self.cached_token.expires_on - time.time() < 300:\n",
|
||||
" self.cached_token = self.credential.get_token(*self.scopes)\n",
|
||||
" req.headers[\"Authorization\"] = f\"Bearer {self.cached_token.token}\"\n",
|
||||
" return req\n",
|
||||
"\n",
|
||||
"if use_azure_active_directory:\n",
|
||||
" session = requests.Session()\n",
|
||||
" session.auth = TokenRefresh(default_credential, [\"https://cognitiveservices.azure.com/.default\"])\n",
|
||||
"\n",
|
||||
" openai.requestssession = session"
|
||||
]
|
||||
},
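The expiry check in `TokenRefresh.__call__` above can be exercised offline. Below is a simplified sketch of the same refresh logic; `FakeCredential` and `FakeToken` are made-up stand-ins for `azure.identity` types, not real library classes:

```python
import time

class FakeToken:
    """Stand-in for an Azure AccessToken (token string plus expiry timestamp)."""
    def __init__(self, token: str, expires_on: float) -> None:
        self.token = token
        self.expires_on = expires_on

class FakeCredential:
    """Hypothetical credential that mints a fresh token on each fetch."""
    def __init__(self) -> None:
        self.calls = 0
    def get_token(self, *scopes: str) -> FakeToken:
        self.calls += 1
        return FakeToken(f"token-{self.calls}", time.time() + 3600)

def refreshed_header(cache: dict, credential: FakeCredential, scopes: list) -> str:
    # Re-fetch when no token is cached or it expires within 5 minutes,
    # mirroring the check in TokenRefresh.__call__.
    token = cache.get("token")
    if token is None or token.expires_on - time.time() < 300:
        token = credential.get_token(*scopes)
        cache["token"] = token
    return f"Bearer {token.token}"

cache: dict = {}
cred = FakeCredential()
first = refreshed_header(cache, cred, ["scope"])
second = refreshed_header(cache, cred, ["scope"])  # still valid, so no new fetch
```

With a one-hour token, the second call reuses the cached token and the credential is only hit once.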
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Functions\n",
|
||||
"\n",
|
||||
"With setup and authentication complete, you can now use functions with the Azure OpenAI service. This will be split into a few steps:\n",
|
||||
"\n",
|
||||
"1. Define the function(s)\n",
|
||||
"2. Pass function definition(s) into chat completions API\n",
|
||||
"3. Call function with arguments from the response\n",
|
||||
"4. Feed function response back into chat completions API"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### 1. Define the function(s)\n",
|
||||
"\n",
|
||||
"A list of functions can be defined, each containing the name of the function, an optional description, and the parameters the function accepts (described as a JSON schema)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 21,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"functions = [\n",
|
||||
" {\n",
|
||||
" \"name\": \"get_current_weather\",\n",
|
||||
" \"description\": \"Get the current weather\",\n",
|
||||
" \"parameters\": {\n",
|
||||
" \"type\": \"object\",\n",
|
||||
" \"properties\": {\n",
|
||||
" \"location\": {\n",
|
||||
" \"type\": \"string\",\n",
|
||||
" \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
|
||||
" },\n",
|
||||
" \"format\": {\n",
|
||||
" \"type\": \"string\",\n",
|
||||
" \"enum\": [\"celsius\", \"fahrenheit\"],\n",
|
||||
"                    \"description\": \"The temperature unit to use. Infer this from the user's location.\",\n",
|
||||
" },\n",
|
||||
" },\n",
|
||||
" \"required\": [\"location\"],\n",
|
||||
" },\n",
|
||||
" }\n",
|
||||
"]"
|
||||
]
|
||||
},
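Since the `parameters` block is plain JSON schema, its `required` list can be sanity-checked locally before dispatching a call. A small hand-rolled sketch (no schema-validation library assumed):

```python
def missing_required_args(arguments: dict, parameters: dict) -> list:
    """Return names listed in the schema's `required` that are absent from arguments."""
    return [name for name in parameters.get("required", []) if name not in arguments]

# Same shape as the get_current_weather parameters defined above.
weather_params = {
    "type": "object",
    "properties": {
        "location": {"type": "string"},
        "format": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["location"],
}

missing = missing_required_args({"format": "celsius"}, weather_params)   # lacks location
complete = missing_required_args({"location": "Seattle, WA"}, weather_params)
```

This catches a malformed model response before it reaches your function.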
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### 2. Pass function definition(s) into chat completions API\n",
|
||||
"\n",
|
||||
"Now we can pass the function into the chat completions API. If the model determines it should call the function, a `finish_reason` of \"function_call\" will be populated on the choice and the details of which function to call and its arguments will be present in the `message`. Optionally, you can set the `function_call` keyword argument to force the model to call a particular function (e.g. `function_call={\"name\": \"get_current_weather\"}`). By default, this is set to `auto`, allowing the model to choose whether to call the function or not."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{\n",
|
||||
" \"choices\": [\n",
|
||||
" {\n",
|
||||
" \"content_filter_results\": {},\n",
|
||||
" \"finish_reason\": \"function_call\",\n",
|
||||
" \"index\": 0,\n",
|
||||
" \"message\": {\n",
|
||||
" \"function_call\": {\n",
|
||||
" \"arguments\": \"{\\n \\\"location\\\": \\\"Seattle, WA\\\"\\n}\",\n",
|
||||
" \"name\": \"get_current_weather\"\n",
|
||||
" },\n",
|
||||
" \"role\": \"assistant\"\n",
|
||||
" }\n",
|
||||
" }\n",
|
||||
" ],\n",
|
||||
" \"created\": 1689702512,\n",
|
||||
" \"id\": \"chatcmpl-7dj6GkYdM7Vw9eGn02bc2qqjN70Ps\",\n",
|
||||
" \"model\": \"gpt-4\",\n",
|
||||
" \"object\": \"chat.completion\",\n",
|
||||
" \"prompt_annotations\": [\n",
|
||||
" {\n",
|
||||
" \"content_filter_results\": {\n",
|
||||
" \"hate\": {\n",
|
||||
" \"filtered\": false,\n",
|
||||
" \"severity\": \"safe\"\n",
|
||||
" },\n",
|
||||
" \"self_harm\": {\n",
|
||||
" \"filtered\": false,\n",
|
||||
" \"severity\": \"safe\"\n",
|
||||
" },\n",
|
||||
" \"sexual\": {\n",
|
||||
" \"filtered\": false,\n",
|
||||
" \"severity\": \"safe\"\n",
|
||||
" },\n",
|
||||
" \"violence\": {\n",
|
||||
" \"filtered\": false,\n",
|
||||
" \"severity\": \"safe\"\n",
|
||||
" }\n",
|
||||
" },\n",
|
||||
" \"prompt_index\": 0\n",
|
||||
" }\n",
|
||||
" ],\n",
|
||||
" \"usage\": {\n",
|
||||
" \"completion_tokens\": 18,\n",
|
||||
" \"prompt_tokens\": 115,\n",
|
||||
" \"total_tokens\": 133\n",
|
||||
" }\n",
|
||||
"}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"messages = [\n",
|
||||
" {\"role\": \"system\", \"content\": \"Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.\"},\n",
|
||||
" {\"role\": \"user\", \"content\": \"What's the weather like today in Seattle?\"}\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"chat_completion = openai.ChatCompletion.create(\n",
|
||||
" deployment_id=\"gpt-35-turbo-0613\",\n",
|
||||
" messages=messages,\n",
|
||||
" functions=functions,\n",
|
||||
")\n",
|
||||
"print(chat_completion)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### 3. Call function with arguments from the response\n",
|
||||
"\n",
|
||||
"The name of the function call will be one that was provided initially and the arguments will include JSON matching the schema included in the function definition."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"get_current_weather\n",
|
||||
"{\n",
|
||||
" \"location\": \"Seattle, WA\"\n",
|
||||
"}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"import json\n",
|
||||
"\n",
|
||||
"def get_current_weather(request):\n",
|
||||
" \"\"\"\n",
|
||||
" This function is for illustrative purposes.\n",
|
||||
" The location and unit should be used to determine weather\n",
|
||||
" instead of returning a hardcoded response.\n",
|
||||
" \"\"\"\n",
|
||||
" location = request.get(\"location\")\n",
|
||||
"    unit = request.get(\"format\")  # the schema names the unit parameter \"format\"\n",
|
||||
" return {\"temperature\": \"22\", \"unit\": \"celsius\", \"description\": \"Sunny\"}\n",
|
||||
"\n",
|
||||
"function_call = chat_completion.choices[0].message.function_call\n",
|
||||
"print(function_call.name)\n",
|
||||
"print(function_call.arguments)\n",
|
||||
"\n",
|
||||
"if function_call.name == \"get_current_weather\":\n",
|
||||
" response = get_current_weather(json.loads(function_call.arguments))"
|
||||
]
|
||||
},
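When more than one function is exposed to the model, the name check above generalizes to a name-to-callable registry. A hedged sketch (the registry pattern is our addition, not from the notebook):

```python
import json

def get_current_weather(request: dict) -> dict:
    # Hardcoded illustrative response, as in the notebook cell above.
    return {"temperature": "22", "unit": "celsius", "description": "Sunny"}

# Map the function names advertised to the model onto local callables.
FUNCTION_REGISTRY = {"get_current_weather": get_current_weather}

def dispatch_function_call(name: str, arguments_json: str) -> dict:
    """Decode the model-supplied JSON arguments and invoke the matching function."""
    if name not in FUNCTION_REGISTRY:
        raise ValueError(f"model requested unknown function: {name}")
    return FUNCTION_REGISTRY[name](json.loads(arguments_json))

result = dispatch_function_call("get_current_weather", '{"location": "Seattle, WA"}')
```

Raising on an unknown name guards against the model hallucinating a function you never defined.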
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### 4. Feed function response back into chat completions API\n",
|
||||
"\n",
|
||||
"The response from the function should be serialized into a new message with the role set to \"function\". Now the model will use the response data to formulate its answer."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Today in Seattle, the weather is sunny with a temperature of 22 degrees celsius.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"messages.append(\n",
|
||||
" {\n",
|
||||
" \"role\": \"function\",\n",
|
||||
" \"name\": \"get_current_weather\",\n",
|
||||
" \"content\": json.dumps(response)\n",
|
||||
" }\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"function_completion = openai.ChatCompletion.create(\n",
|
||||
" deployment_id=\"gpt-35-turbo-0613\",\n",
|
||||
" messages=messages,\n",
|
||||
" functions=functions,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"print(function_completion.choices[0].message.content.strip())"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.0"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
"hash": "3a5103089ab7e7c666b279eeded403fcec76de49a40685dbdfe9f9c78ad97c17"
|
||||
}
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
276
examples/azure/archive/whisper.ipynb
Normal file
@ -0,0 +1,276 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Azure audio whisper (preview) example\n",
|
||||
"\n",
|
||||
"> Note: There is a newer version of the openai library available. See https://github.com/openai/openai-python/discussions/742\n",
|
||||
"\n",
|
||||
"This example shows how to use the Azure OpenAI Whisper model to transcribe audio files."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"First, we install the necessary dependencies."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"! pip install \"openai>=0.28.1,<1.0.0\"\n",
|
||||
"! pip install python-dotenv"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Next, we'll import our libraries and configure the Python OpenAI SDK to work with the Azure OpenAI service."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"> Note: In this example, we configured the library to use the Azure API by setting the variables in code. For development, consider setting the environment variables instead:\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"OPENAI_API_BASE\n",
|
||||
"OPENAI_API_KEY\n",
|
||||
"OPENAI_API_TYPE\n",
|
||||
"OPENAI_API_VERSION\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"True"
|
||||
]
|
||||
},
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import dotenv\n",
|
||||
"import openai\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"dotenv.load_dotenv()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"To properly access the Azure OpenAI Service, we need to create the proper resources at the [Azure Portal](https://portal.azure.com) (you can check a detailed guide on how to do this in the [Microsoft Docs](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal))\n",
|
||||
"\n",
|
||||
"Once the resource is created, the first thing we need is its endpoint. You can find the endpoint in the *\"Keys and Endpoints\"* section under *\"Resource Management\"*. With that information, we can set up the SDK:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"openai.api_base = os.environ[\"OPENAI_API_BASE\"]\n",
|
||||
"\n",
|
||||
"# Min API version that supports Whisper\n",
|
||||
"openai.api_version = \"2023-09-01-preview\"\n",
|
||||
"\n",
|
||||
"# Enter the deployment_id to use for the Whisper model\n",
|
||||
"deployment_id = \"<deployment-id-for-your-whisper-model>\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Authentication\n",
|
||||
"\n",
|
||||
"The Azure OpenAI service supports multiple authentication mechanisms that include API keys and Azure credentials."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# set to True if using Azure Active Directory authentication\n",
|
||||
"use_azure_active_directory = False"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"#### Authentication using API key\n",
|
||||
"\n",
|
||||
"To set up the OpenAI SDK to use an *Azure API Key*, we need to set `api_type` to `azure` and set `api_key` to a key associated with your endpoint (you can find this key in *\"Keys and Endpoints\"* under *\"Resource Management\"* in the [Azure Portal](https://portal.azure.com))."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if not use_azure_active_directory:\n",
|
||||
" openai.api_type = 'azure'\n",
|
||||
" openai.api_key = os.environ[\"OPENAI_API_KEY\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Authentication using Azure Active Directory\n",
|
||||
"Let's now see how we can get a key via Azure Active Directory authentication."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azure.identity import DefaultAzureCredential\n",
|
||||
"\n",
|
||||
"if use_azure_active_directory:\n",
|
||||
" default_credential = DefaultAzureCredential()\n",
|
||||
" token = default_credential.get_token(\"https://cognitiveservices.azure.com/.default\")\n",
|
||||
"\n",
|
||||
" openai.api_type = 'azure_ad'\n",
|
||||
" openai.api_key = token.token"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"A token is valid for a period of time, after which it will expire. To ensure a valid token is sent with every request, you can refresh an expiring token by hooking into requests.auth:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import typing\n",
|
||||
"import time\n",
|
||||
"import requests\n",
|
||||
"\n",
|
||||
"if typing.TYPE_CHECKING:\n",
|
||||
"    from azure.core.credentials import TokenCredential, AccessToken\n",
|
||||
"\n",
|
||||
"class TokenRefresh(requests.auth.AuthBase):\n",
|
||||
"\n",
|
||||
" def __init__(self, credential: \"TokenCredential\", scopes: typing.List[str]) -> None:\n",
|
||||
" self.credential = credential\n",
|
||||
" self.scopes = scopes\n",
|
||||
" self.cached_token: typing.Optional[str] = None\n",
|
||||
"\n",
|
||||
" def __call__(self, req):\n",
|
||||
" if not self.cached_token or self.cached_token.expires_on - time.time() < 300:\n",
|
||||
" self.cached_token = self.credential.get_token(*self.scopes)\n",
|
||||
" req.headers[\"Authorization\"] = f\"Bearer {self.cached_token.token}\"\n",
|
||||
" return req\n",
|
||||
"\n",
|
||||
"if use_azure_active_directory:\n",
|
||||
" session = requests.Session()\n",
|
||||
" session.auth = TokenRefresh(default_credential, [\"https://cognitiveservices.azure.com/.default\"])\n",
|
||||
"\n",
|
||||
" openai.requestssession = session"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Audio transcription\n",
|
||||
"\n",
|
||||
"Audio transcription, or speech-to-text, is the process of converting spoken words into text. Use the `openai.Audio.transcribe` method to transcribe an audio file stream to text.\n",
|
||||
"\n",
|
||||
"You can get sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# download sample audio file\n",
|
||||
"import requests\n",
|
||||
"\n",
|
||||
"sample_audio_url = \"https://github.com/Azure-Samples/cognitive-services-speech-sdk/raw/master/sampledata/audiofiles/wikipediaOcelot.wav\"\n",
|
||||
"audio_file = requests.get(sample_audio_url)\n",
|
||||
"with open(\"wikipediaOcelot.wav\", \"wb\") as f:\n",
|
||||
" f.write(audio_file.content)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"transcription = openai.Audio.transcribe(\n",
|
||||
" file=open(\"wikipediaOcelot.wav\", \"rb\"),\n",
|
||||
" model=\"whisper-1\",\n",
|
||||
" deployment_id=deployment_id,\n",
|
||||
")\n",
|
||||
"print(transcription.text)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "venv",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.0"
|
||||
},
|
||||
"orig_nbformat": 4
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
@ -5,9 +5,28 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Azure chat completions example (preview)\n",
|
||||
"In this example we'll try to go over all operations needed to get chat completions working using the Azure endpoints. \\\n",
|
||||
"This example focuses on chat completions but also touches on some other operations that are also available using the API. This example is meant to be a quick way of showing simple operations and is not meant as a tutorial."
|
||||
"# Azure chat completions example\n",
|
||||
"\n",
|
||||
"This example will cover chat completions using the Azure OpenAI service. It also includes information on content filtering."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"First, we install the necessary dependencies and import the libraries we will be using."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"! pip install \"openai>=1.0.0,<2.0.0\"\n",
|
||||
"! pip install python-dotenv"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -17,16 +36,64 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import openai"
|
||||
"import openai\n",
|
||||
"import dotenv\n",
|
||||
"\n",
|
||||
"dotenv.load_dotenv()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"For the following sections to work properly we first have to setup some things. Let's start with the `api_base` and `api_version`. To find your `api_base` go to https://portal.azure.com, find your resource and then under \"Resource Management\" -> \"Keys and Endpoints\" look for the \"Endpoint\" value."
|
||||
"### Authentication\n",
|
||||
"\n",
|
||||
"The Azure OpenAI service supports multiple authentication mechanisms that include API keys and Azure Active Directory token credentials."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"use_azure_active_directory = False # Set this flag to True if you are using Azure Active Directory"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Authentication using API key\n",
|
||||
"\n",
|
||||
"To set up the OpenAI SDK to use an *Azure API Key*, we need to set `api_key` to a key associated with your endpoint (you can find this key in *\"Keys and Endpoints\"* under *\"Resource Management\"* in the [Azure Portal](https://portal.azure.com)). You'll also find the endpoint for your resource here."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if not use_azure_active_directory:\n",
|
||||
" endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
|
||||
" api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n",
|
||||
"\n",
|
||||
" client = openai.AzureOpenAI(\n",
|
||||
" azure_endpoint=endpoint,\n",
|
||||
" api_key=api_key,\n",
|
||||
" api_version=\"2023-09-01-preview\"\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Authentication using Azure Active Directory\n",
|
||||
"Let's now see how we can authenticate via Azure Active Directory. We'll start by installing the `azure-identity` library. This library will provide the token credentials we need to authenticate and help us build a token credential provider through the `get_bearer_token_provider` helper function. It's recommended to use `get_bearer_token_provider` over providing a static token to `AzureOpenAI` because this API will automatically cache and refresh tokens for you.\n",
|
||||
"\n",
|
||||
"For more information on how to set up Azure Active Directory authentication with Azure OpenAI, see the [documentation](https://learn.microsoft.com/azure/ai-services/openai/how-to/managed-identity)."
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -35,113 +102,38 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"openai.api_version = '2023-05-15'\n",
|
||||
"openai.api_base = '' # Please add your endpoint here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We next have to setup the `api_type` and `api_key`. We can either get the key from the portal or we can get it through Microsoft Active Directory Authentication. Depending on this the `api_type` is either `azure` or `azure_ad`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Setup: Portal\n",
|
||||
"Let's first look at getting the key from the portal. Go to https://portal.azure.com, find your resource and then under \"Resource Management\" -> \"Keys and Endpoints\" look for one of the \"Keys\" values."
|
||||
"! pip install \"azure-identity>=1.15.0\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"openai.api_type = 'azure'\n",
|
||||
"openai.api_key = os.environ[\"OPENAI_API_KEY\"]\n"
|
||||
"from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
|
||||
"\n",
|
||||
"if use_azure_active_directory:\n",
|
||||
" endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
|
||||
" api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n",
|
||||
"\n",
|
||||
" client = openai.AzureOpenAI(\n",
|
||||
" azure_endpoint=endpoint,\n",
|
||||
" azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\"),\n",
|
||||
" api_version=\"2023-09-01-preview\"\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"> Note: In this example, we configured the library to use the Azure API by setting the variables in code. For development, consider setting the environment variables instead:\n",
|
||||
"> Note: the `AzureOpenAI` client infers the following arguments from their corresponding environment variables if they are not provided:\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"OPENAI_API_BASE\n",
|
||||
"OPENAI_API_KEY\n",
|
||||
"OPENAI_API_TYPE\n",
|
||||
"OPENAI_API_VERSION\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### (Optional) Setup: Microsoft Active Directory Authentication\n",
|
||||
"Let's now see how we can get a key via Microsoft Active Directory Authentication. Uncomment the following code if you want to use Active Directory Authentication instead of keys from the portal."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# from azure.identity import DefaultAzureCredential\n",
|
||||
"\n",
|
||||
"# default_credential = DefaultAzureCredential()\n",
|
||||
"# token = default_credential.get_token(\"https://cognitiveservices.azure.com/.default\")\n",
|
||||
"\n",
|
||||
"# openai.api_type = 'azure_ad'\n",
|
||||
"# openai.api_key = token.token"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"A token is valid for a period of time, after which it will expire. To ensure a valid token is sent with every request, you can refresh an expiring token by hooking into requests.auth:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import typing\n",
|
||||
"import time\n",
|
||||
"import requests\n",
|
||||
"if typing.TYPE_CHECKING:\n",
|
||||
" from azure.core.credentials import TokenCredential\n",
|
||||
"\n",
|
||||
"class TokenRefresh(requests.auth.AuthBase):\n",
|
||||
"\n",
|
||||
" def __init__(self, credential: \"TokenCredential\", scopes: typing.List[str]) -> None:\n",
|
||||
" self.credential = credential\n",
|
||||
" self.scopes = scopes\n",
|
||||
" self.cached_token: typing.Optional[str] = None\n",
|
||||
"\n",
|
||||
" def __call__(self, req):\n",
|
||||
" if not self.cached_token or self.cached_token.expires_on - time.time() < 300:\n",
|
||||
" self.cached_token = self.credential.get_token(*self.scopes)\n",
|
||||
" req.headers[\"Authorization\"] = f\"Bearer {self.cached_token.token}\"\n",
|
||||
" return req\n",
|
||||
"\n",
|
||||
"session = requests.Session()\n",
|
||||
"session.auth = TokenRefresh(default_credential, [\"https://cognitiveservices.azure.com/.default\"])\n",
|
||||
"\n",
|
||||
"openai.requestssession = session"
|
||||
"- `api_key` from `AZURE_OPENAI_API_KEY`\n",
|
||||
"- `azure_ad_token` from `AZURE_OPENAI_AD_TOKEN`\n",
|
||||
"- `api_version` from `OPENAI_API_VERSION`\n",
|
||||
"- `azure_endpoint` from `AZURE_OPENAI_ENDPOINT`\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -150,7 +142,8 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Deployments\n",
|
||||
"In this section we are going to create a deployment using the `gpt-35-turbo` model that we can then use to create chat completions."
|
||||
"\n",
|
||||
"In this section we are going to create a deployment of a GPT model that we can use to create chat completions."
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -158,17 +151,17 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Deployments: Create manually\n",
|
||||
"Let's create a deployment using the `gpt-35-turbo` model. Go to https://portal.azure.com, find your resource and then under \"Resource Management\" -> \"Model deployments\" create a new `gpt-35-turbo` deployment. "
|
||||
"### Deployments: Create in the Azure OpenAI Studio\n",
|
||||
"Let's deploy a model to use with chat completions. Go to https://portal.azure.com, find your Azure OpenAI resource, and then navigate to the Azure OpenAI Studio. Click on the \"Deployments\" tab and then create a deployment for the model you want to use for chat completions. The deployment name that you give the model will be used in the code below."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"deployment_id = '' # Fill in the deployment id from the portal here"
|
||||
"deployment = \"\" # Fill in the deployment name from the portal here"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -176,8 +169,9 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create chat completion\n",
|
||||
"Now let's send a sample chat completion to the deployment."
|
||||
"## Create chat completions\n",
|
||||
"\n",
|
||||
"Now let's create a chat completion using the client we built."
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -187,8 +181,8 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# For all possible arguments see https://platform.openai.com/docs/api-reference/chat-completions/create\n",
|
||||
"response = openai.ChatCompletion.create(\n",
|
||||
" deployment_id=deployment_id,\n",
|
||||
"response = client.chat.completions.create(\n",
|
||||
" model=deployment,\n",
|
||||
" messages=[\n",
|
||||
" {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
|
||||
" {\"role\": \"user\", \"content\": \"Knock knock.\"},\n",
|
||||
@ -206,7 +200,9 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can also stream the response.\n"
|
||||
"### Create a streaming chat completion\n",
|
||||
"\n",
|
||||
"We can also stream the response."
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -215,8 +211,8 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"response = openai.ChatCompletion.create(\n",
|
||||
" deployment_id=deployment_id,\n",
|
||||
"response = client.chat.completions.create(\n",
|
||||
" model=deployment,\n",
|
||||
" messages=[\n",
|
||||
" {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
|
||||
" {\"role\": \"user\", \"content\": \"Knock knock.\"},\n",
|
||||
@ -228,13 +224,96 @@
|
||||
")\n",
|
||||
"\n",
|
||||
"for chunk in response:\n",
|
||||
" if len(chunk.choices) > 0:\n",
|
||||
" delta = chunk.choices[0].delta\n",
|
||||
"\n",
|
||||
" if \"role\" in delta.keys():\n",
|
||||
" if delta.role:\n",
|
||||
" print(delta.role + \": \", end=\"\", flush=True)\n",
|
||||
" if \"content\" in delta.keys():\n",
|
||||
" if delta.content:\n",
|
||||
" print(delta.content, end=\"\", flush=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Content filtering\n",
|
||||
"\n",
|
||||
"Azure OpenAI service includes content filtering of prompts and completion responses. You can learn more about content filtering and how to configure it [here](https://learn.microsoft.com/azure/ai-services/openai/concepts/content-filter).\n",
|
||||
"\n",
|
||||
"If the prompt is flagged by the content filter, the library will raise a `BadRequestError` exception with a `content_filter` error code. Otherwise, you can access the `prompt_filter_results` and `content_filter_results` on the response to see the results of the content filtering and what categories were flagged."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Prompt flagged by content filter"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import json\n",
|
||||
"\n",
|
||||
"messages = [\n",
|
||||
" {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
|
||||
" {\"role\": \"user\", \"content\": \"<text violating the content policy>\"}\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"try:\n",
|
||||
" completion = client.chat.completions.create(\n",
|
||||
" messages=messages,\n",
|
||||
" model=deployment,\n",
|
||||
" )\n",
|
||||
"except openai.BadRequestError as e:\n",
|
||||
" err = json.loads(e.response.text)\n",
|
||||
" if err[\"error\"][\"code\"] == \"content_filter\":\n",
|
||||
" print(\"Content filter triggered!\")\n",
|
||||
" content_filter_result = err[\"error\"][\"innererror\"][\"content_filter_result\"]\n",
|
||||
" for category, details in content_filter_result.items():\n",
|
||||
" print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Checking the result of the content filter"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"messages = [\n",
|
||||
" {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
|
||||
" {\"role\": \"user\", \"content\": \"What's the biggest city in Washington?\"}\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"completion = client.chat.completions.create(\n",
|
||||
" messages=messages,\n",
|
||||
" model=deployment,\n",
|
||||
")\n",
|
||||
"print(f\"Answer: {completion.choices[0].message.content}\")\n",
|
||||
"\n",
|
||||
"# prompt content filter result in \"model_extra\" for azure\n",
|
||||
"prompt_filter_result = completion.model_extra[\"prompt_filter_results\"][0][\"content_filter_results\"]\n",
|
||||
"print(\"\\nPrompt content filter results:\")\n",
|
||||
"for category, details in prompt_filter_result.items():\n",
|
||||
" print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")\n",
|
||||
"\n",
|
||||
"# completion content filter result\n",
|
||||
"print(\"\\nCompletion content filter results:\")\n",
|
||||
"completion_filter_result = completion.choices[0].model_extra[\"content_filter_results\"]\n",
|
||||
"for category, details in completion_filter_result.items():\n",
|
||||
" print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
@ -253,7 +332,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.3"
|
||||
"version": "3.10.0"
|
||||
},
|
||||
"orig_nbformat": 4
|
||||
},
|
||||
|
File diff suppressed because one or more lines are too long
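The content-filter handling in the chat notebook above iterates a `content_filter_results` mapping of category names to `filtered`/`severity` details. That traversal can be exercised without calling the service; a minimal sketch, using a mock payload shaped like the one Azure returns (the `mock_response` dict is illustrative, not real API output):

```python
# Sketch: walking an Azure-style content filter result payload.
# `mock_response` mimics the shape of `prompt_filter_results`; it is mock data.
mock_response = {
    "prompt_filter_results": [
        {
            "prompt_index": 0,
            "content_filter_results": {
                "hate": {"filtered": False, "severity": "safe"},
                "violence": {"filtered": False, "severity": "safe"},
            },
        }
    ]
}

def summarize_filter_results(results: dict) -> list:
    """Flatten per-category filter results into printable lines."""
    lines = []
    for category, details in results.items():
        lines.append(
            f"{category}: filtered={details['filtered']} severity={details['severity']}"
        )
    return lines

prompt_results = mock_response["prompt_filter_results"][0]["content_filter_results"]
for line in summarize_filter_results(prompt_results):
    print(line)
```

The same helper works on `completion.choices[0].model_extra["content_filter_results"]` from a real response, since both carry the same category-to-details shape.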
@ -6,8 +6,27 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Azure completions example\n",
|
||||
"In this example we'll try to go over all operations needed to get completions working using the Azure endpoints. \\\n",
|
||||
"This example focuses on completions but also touches on some other operations that are also available using the API. This example is meant to be a quick way of showing simple operations and is not meant as a tutorial."
|
||||
"\n",
|
||||
"This example will cover completions using the Azure OpenAI service. It also includes information on content filtering."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"First, we install the necessary dependencies and import the libraries we will be using."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"! pip install \"openai>=1.0.0,<2.0.0\"\n",
|
||||
"! pip install python-dotenv"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -17,16 +36,64 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import openai"
|
||||
"import openai\n",
|
||||
"import dotenv\n",
|
||||
"\n",
|
||||
"dotenv.load_dotenv()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"For the following sections to work properly we first have to setup some things. Let's start with the `api_base` and `api_version`. To find your `api_base` go to https://portal.azure.com, find your resource and then under \"Resource Management\" -> \"Keys and Endpoints\" look for the \"Endpoint\" value."
|
||||
"### Authentication\n",
|
||||
"\n",
|
||||
"The Azure OpenAI service supports multiple authentication mechanisms that include API keys and Azure Active Directory token credentials."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"use_azure_active_directory = False # Set this flag to True if you are using Azure Active Directory"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Authentication using API key\n",
|
||||
"\n",
|
||||
"To set up the OpenAI SDK to use an *Azure API Key*, we need to set `api_key` to a key associated with your endpoint (you can find this key in *\"Keys and Endpoints\"* under *\"Resource Management\"* in the [Azure Portal](https://portal.azure.com)). You'll also find the endpoint for your resource here."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if not use_azure_active_directory:\n",
|
||||
" endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
|
||||
" api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n",
|
||||
"\n",
|
||||
" client = openai.AzureOpenAI(\n",
|
||||
" azure_endpoint=endpoint,\n",
|
||||
" api_key=api_key,\n",
|
||||
" api_version=\"2023-09-01-preview\"\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Authentication using Azure Active Directory\n",
|
||||
"Let's now see how we can authenticate via Azure Active Directory. We'll start by installing the `azure-identity` library. This library will provide the token credentials we need to authenticate and help us build a token credential provider through the `get_bearer_token_provider` helper function. It's recommended to use `get_bearer_token_provider` over providing a static token to `AzureOpenAI` because this API will automatically cache and refresh tokens for you.\n",
|
||||
"\n",
|
||||
"For more information on how to set up Azure Active Directory authentication with Azure OpenAI, see the [documentation](https://learn.microsoft.com/azure/ai-services/openai/how-to/managed-identity)."
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -35,113 +102,38 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"openai.api_version = '2023-05-15'\n",
|
||||
"openai.api_base = '' # Please add your endpoint here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We next have to setup the `api_type` and `api_key`. We can either get the key from the portal or we can get it through Microsoft Active Directory Authentication. Depending on this the `api_type` is either `azure` or `azure_ad`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Setup: Portal\n",
|
||||
"Let's first look at getting the key from the portal. Go to https://portal.azure.com, find your resource and then under \"Resource Management\" -> \"Keys and Endpoints\" look for one of the \"Keys\" values."
|
||||
"! pip install \"azure-identity>=1.15.0\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"openai.api_type = 'azure'\n",
|
||||
"openai.api_key = os.environ[\"OPENAI_API_KEY\"]"
|
||||
"from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
|
||||
"\n",
|
||||
"if use_azure_active_directory:\n",
|
||||
" endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
|
||||
" api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n",
|
||||
"\n",
|
||||
" client = openai.AzureOpenAI(\n",
|
||||
" azure_endpoint=endpoint,\n",
|
||||
" azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\"),\n",
|
||||
" api_version=\"2023-09-01-preview\"\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"> Note: In this example, we configured the library to use the Azure API by setting the variables in code. For development, consider setting the environment variables instead:\n",
|
||||
"> Note: the `AzureOpenAI` client infers the following arguments from their corresponding environment variables if they are not provided:\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"OPENAI_API_BASE\n",
|
||||
"OPENAI_API_KEY\n",
|
||||
"OPENAI_API_TYPE\n",
|
||||
"OPENAI_API_VERSION\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### (Optional) Setup: Microsoft Active Directory Authentication\n",
|
||||
"Let's now see how we can get a key via Microsoft Active Directory Authentication. Uncomment the following code if you want to use Active Directory Authentication instead of keys from the portal."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# from azure.identity import DefaultAzureCredential\n",
|
||||
"\n",
|
||||
"# default_credential = DefaultAzureCredential()\n",
|
||||
"# token = default_credential.get_token(\"https://cognitiveservices.azure.com/.default\")\n",
|
||||
"\n",
|
||||
"# openai.api_type = 'azure_ad'\n",
|
||||
"# openai.api_key = token.token"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"A token is valid for a period of time, after which it will expire. To ensure a valid token is sent with every request, you can refresh an expiring token by hooking into requests.auth:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import typing\n",
|
||||
"import time\n",
|
||||
"import requests\n",
|
||||
"if typing.TYPE_CHECKING:\n",
|
||||
" from azure.core.credentials import TokenCredential\n",
|
||||
"\n",
|
||||
"class TokenRefresh(requests.auth.AuthBase):\n",
|
||||
"\n",
|
||||
" def __init__(self, credential: \"TokenCredential\", scopes: typing.List[str]) -> None:\n",
|
||||
" self.credential = credential\n",
|
||||
" self.scopes = scopes\n",
|
||||
" self.cached_token: typing.Optional[str] = None\n",
|
||||
"\n",
|
||||
" def __call__(self, req):\n",
|
||||
" if not self.cached_token or self.cached_token.expires_on - time.time() < 300:\n",
|
||||
" self.cached_token = self.credential.get_token(*self.scopes)\n",
|
||||
" req.headers[\"Authorization\"] = f\"Bearer {self.cached_token.token}\"\n",
|
||||
" return req\n",
|
||||
"\n",
|
||||
"session = requests.Session()\n",
|
||||
"session.auth = TokenRefresh(default_credential, [\"https://cognitiveservices.azure.com/.default\"])\n",
|
||||
"\n",
|
||||
"openai.requestssession = session"
|
||||
"- `api_key` from `AZURE_OPENAI_API_KEY`\n",
|
||||
"- `azure_ad_token` from `AZURE_OPENAI_AD_TOKEN`\n",
|
||||
"- `api_version` from `OPENAI_API_VERSION`\n",
|
||||
"- `azure_endpoint` from `AZURE_OPENAI_ENDPOINT`\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -150,7 +142,8 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Deployments\n",
|
||||
"In this section we are going to create a deployment using the `text-davinci-002` model that we can then use to create completions."
|
||||
"\n",
|
||||
"In this section we are going to create a deployment of a model that we can use to create completions."
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -158,17 +151,17 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Deployments: Create manually\n",
|
||||
"Create a new deployment by going to your Resource in your portal under \"Resource Management\" -> \"Model deployments\". Select `text-davinci-002` as the model."
|
||||
"### Deployments: Create in the Azure OpenAI Studio\n",
|
||||
"Let's deploy a model to use with completions. Go to https://portal.azure.com, find your Azure OpenAI resource, and then navigate to the Azure OpenAI Studio. Click on the \"Deployments\" tab and then create a deployment for the model you want to use for completions. The deployment name that you give the model will be used in the code below."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"deployment_id = '' # Fill in the deployment id from the portal here"
|
||||
"deployment = \"\" # Fill in the deployment name from the portal here"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -176,8 +169,9 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Completions\n",
|
||||
"Now let's send a sample completion to the deployment."
|
||||
"## Completions\n",
|
||||
"\n",
|
||||
"Now let's create a completion using the client we built."
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -187,10 +181,113 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"prompt = \"The food was delicious and the waiter\"\n",
|
||||
"completion = openai.Completion.create(deployment_id=deployment_id,\n",
|
||||
" prompt=prompt, stop=\".\", temperature=0)\n",
|
||||
"completion = client.completions.create(\n",
|
||||
" model=deployment,\n",
|
||||
" prompt=prompt,\n",
|
||||
" stop=\".\",\n",
|
||||
" temperature=0\n",
|
||||
")\n",
|
||||
" \n",
|
||||
"print(f\"{prompt}{completion['choices'][0]['text']}.\")"
|
||||
"print(f\"{prompt}{completion.choices[0].text}.\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create a streaming completion\n",
|
||||
"\n",
|
||||
"We can also stream the response."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"prompt = \"The food was delicious and the waiter\"\n",
|
||||
"response = client.completions.create(\n",
|
||||
" model=deployment,\n",
|
||||
" prompt=prompt,\n",
|
||||
" stream=True,\n",
|
||||
")\n",
|
||||
"for completion in response:\n",
|
||||
" if len(completion.choices) > 0:\n",
|
||||
" print(f\"{completion.choices[0].text}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Content filtering\n",
|
||||
"\n",
|
||||
"Azure OpenAI service includes content filtering of prompts and completion responses. You can learn more about content filtering and how to configure it [here](https://learn.microsoft.com/azure/ai-services/openai/concepts/content-filter).\n",
|
||||
"\n",
|
||||
"If the prompt is flagged by the content filter, the library will raise a `BadRequestError` exception with a `content_filter` error code. Otherwise, you can access the `prompt_filter_results` and `content_filter_results` on the response to see the results of the content filtering and what categories were flagged."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Prompt flagged by content filter"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import json\n",
|
||||
"\n",
|
||||
"try:\n",
|
||||
" completion = client.completions.create(\n",
|
||||
" prompt=\"<text violating the content policy>\",\n",
|
||||
" model=deployment,\n",
|
||||
" )\n",
|
||||
"except openai.BadRequestError as e:\n",
|
||||
" err = json.loads(e.response.text)\n",
|
||||
" if err[\"error\"][\"code\"] == \"content_filter\":\n",
|
||||
" print(\"Content filter triggered!\")\n",
|
||||
" content_filter_result = err[\"error\"][\"innererror\"][\"content_filter_result\"]\n",
|
||||
" for category, details in content_filter_result.items():\n",
|
||||
" print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Checking the result of the content filter"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"completion = client.completions.create(\n",
|
||||
" prompt=\"What's the biggest city in Washington?\",\n",
|
||||
" model=deployment,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"print(f\"Answer: {completion.choices[0].text}\")\n",
|
||||
"\n",
|
||||
"# prompt content filter result in \"model_extra\" for azure\n",
|
||||
"prompt_filter_result = completion.model_extra[\"prompt_filter_results\"][0][\"content_filter_results\"]\n",
|
||||
"print(\"\\nPrompt content filter results:\")\n",
|
||||
"for category, details in prompt_filter_result.items():\n",
|
||||
" print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")\n",
|
||||
"\n",
|
||||
"# completion content filter result\n",
|
||||
"print(\"\\nCompletion content filter results:\")\n",
|
||||
"completion_filter_result = completion.choices[0].model_extra[\"content_filter_results\"]\n",
|
||||
"for category, details in completion_filter_result.items():\n",
|
||||
" print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
@ -210,7 +307,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.3"
|
||||
"version": "3.10.0"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
|
@ -6,8 +6,27 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Azure embeddings example\n",
|
||||
"In this example we'll try to go over all operations for embeddings that can be done using the Azure endpoints. \\\n",
|
||||
"This example focuses on embeddings but also touches some other operations that are also available using the API. This example is meant to be a quick way of showing simple operations and is not meant as a tutorial."
|
||||
"\n",
|
||||
"This example will cover embeddings using the Azure OpenAI service."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"First, we install the necessary dependencies and import the libraries we will be using."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"! pip install \"openai>=1.0.0,<2.0.0\"\n",
|
||||
"! pip install python-dotenv"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -17,16 +36,64 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import openai"
|
||||
"import openai\n",
|
||||
"import dotenv\n",
|
||||
"\n",
|
||||
"dotenv.load_dotenv()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"For the following sections to work properly we first have to setup some things. Let's start with the `api_base` and `api_version`. To find your `api_base` go to https://portal.azure.com, find your resource and then under \"Resource Management\" -> \"Keys and Endpoints\" look for the \"Endpoint\" value."
|
||||
"### Authentication\n",
|
||||
"\n",
|
||||
"The Azure OpenAI service supports multiple authentication mechanisms that include API keys and Azure Active Directory token credentials."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"use_azure_active_directory = False # Set this flag to True if you are using Azure Active Directory"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Authentication using API key\n",
|
||||
"\n",
|
||||
"To set up the OpenAI SDK to use an *Azure API Key*, we need to set `api_key` to a key associated with your endpoint (you can find this key in *\"Keys and Endpoints\"* under *\"Resource Management\"* in the [Azure Portal](https://portal.azure.com)). You'll also find the endpoint for your resource here."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if not use_azure_active_directory:\n",
|
||||
" endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
|
||||
" api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n",
|
||||
"\n",
|
||||
" client = openai.AzureOpenAI(\n",
|
||||
" azure_endpoint=endpoint,\n",
|
||||
" api_key=api_key,\n",
|
||||
" api_version=\"2023-09-01-preview\"\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Authentication using Azure Active Directory\n",
|
||||
"Let's now see how we can authenticate via Azure Active Directory. We'll start by installing the `azure-identity` library. This library will provide the token credentials we need to authenticate and help us build a token credential provider through the `get_bearer_token_provider` helper function. It's recommended to use `get_bearer_token_provider` over providing a static token to `AzureOpenAI` because this API will automatically cache and refresh tokens for you.\n",
|
||||
"\n",
|
||||
"For more information on how to set up Azure Active Directory authentication with Azure OpenAI, see the [documentation](https://learn.microsoft.com/azure/ai-services/openai/how-to/managed-identity)."
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -35,113 +102,38 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"openai.api_version = '2023-05-15'\n",
|
||||
"openai.api_base = '' # Please add your endpoint here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We next have to setup the `api_type` and `api_key`. We can either get the key from the portal or we can get it through Microsoft Active Directory Authentication. Depending on this the `api_type` is either `azure` or `azure_ad`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Setup: Portal\n",
|
||||
"Let's first look at getting the key from the portal. Go to https://portal.azure.com, find your resource and then under \"Resource Management\" -> \"Keys and Endpoints\" look for one of the \"Keys\" values."
|
||||
"! pip install \"azure-identity>=1.15.0\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"openai.api_type = 'azure'\n",
|
||||
"openai.api_key = os.environ[\"OPENAI_API_KEY\"]"
|
||||
"from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
|
||||
"\n",
|
||||
"if use_azure_active_directory:\n",
|
||||
" endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
|
||||
" api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n",
|
||||
"\n",
|
||||
" client = openai.AzureOpenAI(\n",
|
||||
" azure_endpoint=endpoint,\n",
|
||||
" azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\"),\n",
|
||||
" api_version=\"2023-09-01-preview\"\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"> Note: In this example, we configured the library to use the Azure API by setting the variables in code. For development, consider setting the environment variables instead:\n",
|
||||
"> Note: the `AzureOpenAI` client infers the following arguments from their corresponding environment variables if they are not provided:\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"OPENAI_API_BASE\n",
|
||||
"OPENAI_API_KEY\n",
|
||||
"OPENAI_API_TYPE\n",
|
||||
"OPENAI_API_VERSION\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### (Optional) Setup: Microsoft Active Directory Authentication\n",
|
||||
"Let's now see how we can get a key via Microsoft Active Directory Authentication. Uncomment the following code if you want to use Active Directory Authentication instead of keys from the portal."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# from azure.identity import DefaultAzureCredential\n",
|
||||
"\n",
|
||||
"# default_credential = DefaultAzureCredential()\n",
|
||||
"# token = default_credential.get_token(\"https://cognitiveservices.azure.com/.default\")\n",
|
||||
"\n",
|
||||
"# openai.api_type = 'azure_ad'\n",
|
||||
"# openai.api_key = token.token"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"A token is valid for a period of time, after which it will expire. To ensure a valid token is sent with every request, you can refresh an expiring token by hooking into requests.auth:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import typing\n",
|
||||
"import time\n",
|
||||
"import requests\n",
|
||||
"if typing.TYPE_CHECKING:\n",
|
||||
" from azure.core.credentials import TokenCredential\n",
|
||||
"\n",
|
||||
"class TokenRefresh(requests.auth.AuthBase):\n",
|
||||
"\n",
|
||||
" def __init__(self, credential: \"TokenCredential\", scopes: typing.List[str]) -> None:\n",
|
||||
" self.credential = credential\n",
|
||||
" self.scopes = scopes\n",
|
||||
" self.cached_token: typing.Optional[str] = None\n",
|
||||
"\n",
|
||||
" def __call__(self, req):\n",
|
||||
" if not self.cached_token or self.cached_token.expires_on - time.time() < 300:\n",
|
||||
" self.cached_token = self.credential.get_token(*self.scopes)\n",
|
||||
" req.headers[\"Authorization\"] = f\"Bearer {self.cached_token.token}\"\n",
|
||||
" return req\n",
|
||||
"\n",
|
||||
"session = requests.Session()\n",
|
||||
"session.auth = TokenRefresh(default_credential, [\"https://cognitiveservices.azure.com/.default\"])\n",
|
||||
"\n",
|
||||
"openai.requestssession = session"
|
||||
"- `api_key` from `AZURE_OPENAI_API_KEY`\n",
|
||||
"- `azure_ad_token` from `AZURE_OPENAI_AD_TOKEN`\n",
|
||||
"- `api_version` from `OPENAI_API_VERSION`\n",
|
||||
"- `azure_endpoint` from `AZURE_OPENAI_ENDPOINT`\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -150,60 +142,8 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Deployments\n",
|
||||
"In this section we are going to create a deployment that we can use to create embeddings."
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Deployments: Create manually\n",
|
||||
"Let's create a deployment using the `text-similarity-curie-001` model. Create a new deployment by going to your Resource in your portal under \"Resource Management\" -> \"Model deployments\"."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"deployment_id = '' # Fill in the deployment id from the portal here"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Deployments: Listing\n",
|
||||
"Now because creating a new deployment takes a long time, let's look in the subscription for an already finished deployment that succeeded."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print('While deployment running, selecting a completed one that supports embeddings.')\n",
|
||||
"deployment_id = None\n",
|
||||
"result = openai.Deployment.list()\n",
|
||||
"for deployment in result.data:\n",
|
||||
" if deployment[\"status\"] != \"succeeded\":\n",
|
||||
" continue\n",
|
||||
" \n",
|
||||
" model = openai.Model.retrieve(deployment[\"model\"])\n",
|
||||
" if model[\"capabilities\"][\"embeddings\"] != True:\n",
|
||||
" continue\n",
|
||||
" \n",
|
||||
" deployment_id = deployment[\"id\"]\n",
|
||||
" break\n",
|
||||
"\n",
|
||||
"if not deployment_id:\n",
|
||||
" print('No deployment with status: succeeded found.')\n",
|
||||
"else:\n",
|
||||
" print(f'Found a succeeded deployment that supports embeddings with id: {deployment_id}.')"
|
||||
"In this section we are going to create a deployment of a model that we can use to create embeddings."
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -211,8 +151,27 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Embeddings\n",
"Now let's send a sample embedding to the deployment."
"### Deployments: Create in the Azure OpenAI Studio\n",
"Let's deploy a model to use with embeddings. Go to https://portal.azure.com, find your Azure OpenAI resource, and then navigate to the Azure OpenAI Studio. Click on the \"Deployments\" tab and then create a deployment for the model you want to use for embeddings. The deployment name that you give the model will be used in the code below."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"deployment = \"\" # Fill in the deployment name from the portal here"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Embeddings\n",
"\n",
"Now let's create embeddings using the client we built."
]
},
{
@ -221,8 +180,10 @@
"metadata": {},
"outputs": [],
"source": [
"embeddings = openai.Embedding.create(deployment_id=deployment_id,\n",
"                                     input=\"The food was delicious and the waiter...\")\n",
"embeddings = client.embeddings.create(\n",
"    model=deployment,\n",
"    input=\"The food was delicious and the waiter...\"\n",
")\n",
"\n",
"print(embeddings)"
]
@ -244,7 +205,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.0"
},
"vscode": {
"interpreter": {
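The hunks above migrate the embeddings call from the 0.28-style `openai.Embedding.create(deployment_id=...)` to the v1 `client.embeddings.create(model=...)`. As a rough, hedged sketch of how the migrated call is typically used end to end — the deployment name, environment variable names, and the cosine-similarity helper are illustrative assumptions, not part of the diff:

```python
# Sketch only: assumes the v1 openai SDK and an Azure embeddings deployment.
import math
import os

try:
    import openai  # requires `pip install "openai>=1.0.0,<2.0.0"`
except ImportError:
    openai = None


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors, stdlib only."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))


def embed(client, deployment, texts):
    """Return one embedding vector per input text, in input order."""
    response = client.embeddings.create(model=deployment, input=texts)
    return [item.embedding for item in response.data]


# The network call only runs when credentials are actually configured.
if openai and os.getenv("AZURE_OPENAI_ENDPOINT") and os.getenv("AZURE_OPENAI_API_KEY"):
    client = openai.AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2023-09-01-preview",
    )
    # "my-embeddings-deployment" is a hypothetical deployment name.
    vectors = embed(client, "my-embeddings-deployment",
                    ["The food was delicious.", "The meal was tasty."])
    print(cosine_similarity(vectors[0], vectors[1]))
```

Unlike the old API, the v1 client takes the deployment name through the `model` argument, so the same helper works against both Azure and non-Azure endpoints.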
@ -16,13 +16,12 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"First, we install the necessary dependencies."
"First, we install the necessary dependencies and import the libraries we will be using."
]
},
{
@ -31,9 +30,48 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install openai\n",
"# (Optional) If you want to use Microsoft Active Directory\n",
"! pip install azure-identity"
"! pip install \"openai>=1.0.0,<2.0.0\"\n",
"! pip install python-dotenv"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import openai\n",
"import dotenv\n",
"\n",
"dotenv.load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Authentication\n",
"\n",
"The Azure OpenAI service supports multiple authentication mechanisms that include API keys and Azure Active Directory token credentials."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"use_azure_active_directory = False # Set this flag to True if you are using Azure Active Directory"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Authentication using API key\n",
"\n",
"To set up the OpenAI SDK to use an *Azure API Key*, we need to set `api_key` to a key associated with your endpoint (you can find this key in *\"Keys and Endpoints\"* under *\"Resource Management\"* in the [Azure Portal](https://portal.azure.com)). You'll also find the endpoint for your resource here."
]
},
{
@ -42,41 +80,82 @@
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import openai"
"if not use_azure_active_directory:\n",
"    endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
"    api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n",
"\n",
"    client = openai.AzureOpenAI(\n",
"        azure_endpoint=endpoint,\n",
"        api_key=api_key,\n",
"        api_version=\"2023-09-01-preview\"\n",
"    )"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Authentication using Azure Active Directory\n",
"Let's now see how we can authenticate via Azure Active Directory. We'll start by installing the `azure-identity` library. This library will provide the token credentials we need to authenticate and help us build a token credential provider through the `get_bearer_token_provider` helper function. It's recommended to use `get_bearer_token_provider` over providing a static token to `AzureOpenAI` because this API will automatically cache and refresh tokens for you.\n",
"\n",
"Additionally, to properly access the Azure OpenAI Service, we need to create the proper resources at the [Azure Portal](https://portal.azure.com) (you can check a detailed guide on how to do this in the [Microsoft Docs](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal))\n",
"\n",
"Once the resource is created, the first thing we need to use is its endpoint. You can get the endpoint by looking at the *\"Keys and Endpoints\"* section under the *\"Resource Management\"* section. Having this, we will set up the SDK using this information:"
"For more information on how to set up Azure Active Directory authentication with Azure OpenAI, see the [documentation](https://learn.microsoft.com/azure/ai-services/openai/how-to/managed-identity)."
]
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"openai.api_base = \"\" # Add your endpoint here\n",
"\n",
"# functions is only supported by the 2023-07-01-preview API version\n",
"openai.api_version = \"2023-07-01-preview\""
"! pip install \"azure-identity>=1.15.0\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
"\n",
"if use_azure_active_directory:\n",
"    endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
"    api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n",
"\n",
"    client = openai.AzureOpenAI(\n",
"        azure_endpoint=endpoint,\n",
"        azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\"),\n",
"        api_version=\"2023-09-01-preview\"\n",
"    )"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Authentication\n",
"> Note: the `AzureOpenAI` client infers the following arguments from their corresponding environment variables if they are not provided:\n",
"\n",
"The Azure OpenAI service supports multiple authentication mechanisms that include API keys and Azure credentials."
"- `api_key` from `AZURE_OPENAI_API_KEY`\n",
"- `azure_ad_token` from `AZURE_OPENAI_AD_TOKEN`\n",
"- `api_version` from `OPENAI_API_VERSION`\n",
"- `azure_endpoint` from `AZURE_OPENAI_ENDPOINT`\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deployments\n",
"\n",
"In this section we are going to create a deployment of a GPT model that we can use to call functions."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deployments: Create in the Azure OpenAI Studio\n",
"Let's deploy a model to use with chat completions. Go to https://portal.azure.com, find your Azure OpenAI resource, and then navigate to the Azure OpenAI Studio. Click on the \"Deployments\" tab and then create a deployment for the model you want to use for chat completions. The deployment name that you give the model will be used in the code below."
]
},
{
@ -85,110 +164,7 @@
"metadata": {},
"outputs": [],
"source": [
"use_azure_active_directory = False"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Authentication using API key\n",
"\n",
"To set up the OpenAI SDK to use an *Azure API Key*, we need to set up the `api_type` to `azure` and set `api_key` to a key associated with your endpoint (you can find this key in *\"Keys and Endpoints\"* under *\"Resource Management\"* in the [Azure Portal](https://portal.azure.com))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not use_azure_active_directory:\n",
"    openai.api_type = \"azure\"\n",
"    openai.api_key = os.environ[\"OPENAI_API_KEY\"]"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"> Note: In this example, we configured the library to use the Azure API by setting the variables in code. For development, consider setting the environment variables instead:\n",
"\n",
"```\n",
"OPENAI_API_BASE\n",
"OPENAI_API_KEY\n",
"OPENAI_API_TYPE\n",
"OPENAI_API_VERSION\n",
"```"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Authentication using Microsoft Active Directory\n",
"Let's now see how we can get a key via Microsoft Active Directory Authentication."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azure.identity import DefaultAzureCredential\n",
"\n",
"if use_azure_active_directory:\n",
"    default_credential = DefaultAzureCredential()\n",
"    token = default_credential.get_token(\"https://cognitiveservices.azure.com/.default\")\n",
"\n",
"    openai.api_type = \"azure_ad\"\n",
"    openai.api_key = token.token"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"A token is valid for a period of time, after which it will expire. To ensure a valid token is sent with every request, you can refresh an expiring token by hooking into requests.auth:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import typing\n",
"import time\n",
"import requests\n",
"\n",
"if typing.TYPE_CHECKING:\n",
"    from azure.core.credentials import TokenCredential\n",
"\n",
"class TokenRefresh(requests.auth.AuthBase):\n",
"\n",
"    def __init__(self, credential: \"TokenCredential\", scopes: typing.List[str]) -> None:\n",
"        self.credential = credential\n",
"        self.scopes = scopes\n",
"        self.cached_token: typing.Optional[str] = None\n",
"\n",
"    def __call__(self, req):\n",
"        if not self.cached_token or self.cached_token.expires_on - time.time() < 300:\n",
"            self.cached_token = self.credential.get_token(*self.scopes)\n",
"        req.headers[\"Authorization\"] = f\"Bearer {self.cached_token.token}\"\n",
"        return req\n",
"\n",
"if use_azure_active_directory:\n",
"    session = requests.Session()\n",
"    session.auth = TokenRefresh(default_credential, [\"https://cognitiveservices.azure.com/.default\"])\n",
"\n",
"    openai.requestssession = session"
"deployment = \"\" # Fill in the deployment name from the portal here"
]
},
{
@ -218,7 +194,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@ -257,72 +233,17 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
"  \"choices\": [\n",
"    {\n",
"      \"content_filter_results\": {},\n",
"      \"finish_reason\": \"function_call\",\n",
"      \"index\": 0,\n",
"      \"message\": {\n",
"        \"function_call\": {\n",
"          \"arguments\": \"{\\n  \\\"location\\\": \\\"Seattle, WA\\\"\\n}\",\n",
"          \"name\": \"get_current_weather\"\n",
"        },\n",
"        \"role\": \"assistant\"\n",
"      }\n",
"    }\n",
"  ],\n",
"  \"created\": 1689702512,\n",
"  \"id\": \"chatcmpl-7dj6GkYdM7Vw9eGn02bc2qqjN70Ps\",\n",
"  \"model\": \"gpt-4\",\n",
"  \"object\": \"chat.completion\",\n",
"  \"prompt_annotations\": [\n",
"    {\n",
"      \"content_filter_results\": {\n",
"        \"hate\": {\n",
"          \"filtered\": false,\n",
"          \"severity\": \"safe\"\n",
"        },\n",
"        \"self_harm\": {\n",
"          \"filtered\": false,\n",
"          \"severity\": \"safe\"\n",
"        },\n",
"        \"sexual\": {\n",
"          \"filtered\": false,\n",
"          \"severity\": \"safe\"\n",
"        },\n",
"        \"violence\": {\n",
"          \"filtered\": false,\n",
"          \"severity\": \"safe\"\n",
"        }\n",
"      },\n",
"      \"prompt_index\": 0\n",
"    }\n",
"  ],\n",
"  \"usage\": {\n",
"    \"completion_tokens\": 18,\n",
"    \"prompt_tokens\": 115,\n",
"    \"total_tokens\": 133\n",
"  }\n",
"}\n"
]
}
],
"outputs": [],
"source": [
"messages = [\n",
"    {\"role\": \"system\", \"content\": \"Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.\"},\n",
"    {\"role\": \"user\", \"content\": \"What's the weather like today in Seattle?\"}\n",
"]\n",
"\n",
"chat_completion = openai.ChatCompletion.create(\n",
"    deployment_id=\"gpt-35-turbo-0613\",\n",
"chat_completion = client.chat.completions.create(\n",
"    model=deployment,\n",
"    messages=messages,\n",
"    functions=functions,\n",
")\n",
@ -341,20 +262,9 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"get_current_weather\n",
"{\n",
"  \"location\": \"Seattle, WA\"\n",
"}\n"
]
}
],
"outputs": [],
"source": [
"import json\n",
"\n",
@ -388,17 +298,9 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Today in Seattle, the weather is sunny with a temperature of 22 degrees celsius.\n"
]
}
],
"outputs": [],
"source": [
"messages.append(\n",
"    {\n",
@ -408,8 +310,8 @@
"    }\n",
")\n",
"\n",
"function_completion = openai.ChatCompletion.create(\n",
"    deployment_id=\"gpt-35-turbo-0613\",\n",
"function_completion = client.chat.completions.create(\n",
"    model=deployment,\n",
"    messages=messages,\n",
"    functions=functions,\n",
")\n",
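The chat hunks above migrate the function-calling notebook piecewise; as a compact, hedged sketch of the whole round trip with the v1 client in one place — the deployment name, the function schema, and the weather stub are illustrative assumptions, not the notebook's exact code:

```python
# Sketch only: function calling with the v1 AzureOpenAI client.
import json
import os

try:
    import openai  # requires `pip install "openai>=1.0.0,<2.0.0"`
except ImportError:
    openai = None


def get_current_weather(location, format="celsius"):
    # Stand-in for a real weather lookup.
    return {"location": location, "temperature": 22, "format": format}


FUNCTIONS = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state, e.g. Seattle, WA"},
                "format": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]


def dispatch(function_call):
    """Run the locally registered function named in the model's function_call."""
    registry = {"get_current_weather": get_current_weather}
    arguments = json.loads(function_call.arguments)
    return registry[function_call.name](**arguments)


# The network call only runs when credentials are actually configured.
if openai and os.getenv("AZURE_OPENAI_ENDPOINT") and os.getenv("AZURE_OPENAI_API_KEY"):
    client = openai.AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2023-09-01-preview",
    )
    messages = [{"role": "user", "content": "What's the weather like today in Seattle?"}]
    completion = client.chat.completions.create(
        model="gpt-35-turbo",  # hypothetical deployment name
        messages=messages,
        functions=FUNCTIONS,
    )
    call = completion.choices[0].message.function_call
    if call is not None:
        result = dispatch(call)
        # Feed the result back as a "function" message for the final answer.
        messages.append({"role": "function", "name": call.name, "content": json.dumps(result)})
```

The key migration detail the diff shows is the same throughout: `deployment_id=` becomes `model=` and the module-level `openai.ChatCompletion.create` becomes a method on the client instance.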
228
examples/azure/whisper.ipynb
Normal file
@ -0,0 +1,228 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure audio whisper (preview) example\n",
"\n",
"This example shows how to use the Azure OpenAI Whisper model to transcribe audio files.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"First, we install the necessary dependencies and import the libraries we will be using."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! pip install \"openai>=1.0.0,<2.0.0\"\n",
"! pip install python-dotenv"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import openai\n",
"import dotenv\n",
"\n",
"dotenv.load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Authentication\n",
"\n",
"The Azure OpenAI service supports multiple authentication mechanisms that include API keys and Azure Active Directory token credentials."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"use_azure_active_directory = False # Set this flag to True if you are using Azure Active Directory"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Authentication using API key\n",
"\n",
"To set up the OpenAI SDK to use an *Azure API Key*, we need to set `api_key` to a key associated with your endpoint (you can find this key in *\"Keys and Endpoints\"* under *\"Resource Management\"* in the [Azure Portal](https://portal.azure.com)). You'll also find the endpoint for your resource here."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"if not use_azure_active_directory:\n",
"    endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
"    api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n",
"\n",
"    client = openai.AzureOpenAI(\n",
"        azure_endpoint=endpoint,\n",
"        api_key=api_key,\n",
"        api_version=\"2023-09-01-preview\"\n",
"    )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Authentication using Azure Active Directory\n",
"Let's now see how we can authenticate via Azure Active Directory. We'll start by installing the `azure-identity` library. This library will provide the token credentials we need to authenticate and help us build a token credential provider through the `get_bearer_token_provider` helper function. It's recommended to use `get_bearer_token_provider` over providing a static token to `AzureOpenAI` because this API will automatically cache and refresh tokens for you.\n",
"\n",
"For more information on how to set up Azure Active Directory authentication with Azure OpenAI, see the [documentation](https://learn.microsoft.com/azure/ai-services/openai/how-to/managed-identity)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! pip install \"azure-identity>=1.15.0\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
"\n",
"if use_azure_active_directory:\n",
"    endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
"    api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n",
"\n",
"    client = openai.AzureOpenAI(\n",
"        azure_endpoint=endpoint,\n",
"        azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\"),\n",
"        api_version=\"2023-09-01-preview\"\n",
"    )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> Note: the `AzureOpenAI` client infers the following arguments from their corresponding environment variables if they are not provided:\n",
"\n",
"- `api_key` from `AZURE_OPENAI_API_KEY`\n",
"- `azure_ad_token` from `AZURE_OPENAI_AD_TOKEN`\n",
"- `api_version` from `OPENAI_API_VERSION`\n",
"- `azure_endpoint` from `AZURE_OPENAI_ENDPOINT`\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deployments\n",
"\n",
"In this section we are going to create a deployment using the `whisper-1` model to transcribe audio files."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deployments: Create in the Azure OpenAI Studio\n",
"Let's deploy a model to use with whisper. Go to https://portal.azure.com, find your Azure OpenAI resource, and then navigate to the Azure OpenAI Studio. Click on the \"Deployments\" tab and then create a deployment for the model you want to use for whisper. The deployment name that you give the model will be used in the code below."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"deployment = \"whisper-deployment\" # Fill in the deployment name from the portal here"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Audio transcription\n",
"\n",
"Audio transcription, or speech-to-text, is the process of converting spoken words into text. Use the `client.audio.transcriptions.create` method to transcribe an audio file stream to text.\n",
"\n",
"You can get sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles)."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"# download sample audio file\n",
"import requests\n",
"\n",
"sample_audio_url = \"https://github.com/Azure-Samples/cognitive-services-speech-sdk/raw/master/sampledata/audiofiles/wikipediaOcelot.wav\"\n",
"audio_file = requests.get(sample_audio_url)\n",
"with open(\"wikipediaOcelot.wav\", \"wb\") as f:\n",
"    f.write(audio_file.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"transcription = client.audio.transcriptions.create(\n",
"    file=open(\"wikipediaOcelot.wav\", \"rb\"),\n",
"    model=deployment,\n",
")\n",
"print(transcription.text)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.0"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
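The new whisper notebook's note points out that `AzureOpenAI` reads its arguments from the environment when they are omitted. A hedged sketch of the transcription call built that way — the deployment name and the client-side extension check are assumptions layered on top of the notebook:

```python
# Sketch only: env-var-driven AzureOpenAI client plus a cheap local format check.
import os

try:
    import openai  # requires `pip install "openai>=1.0.0,<2.0.0"`
except ImportError:
    openai = None

# Commonly accepted Whisper input formats (assumed; check the service docs).
SUPPORTED = {".m4a", ".mp3", ".mp4", ".mpeg", ".mpga", ".wav", ".webm"}


def is_supported_audio(path):
    """Client-side check of the file extension before uploading."""
    _, ext = os.path.splitext(path)
    return ext.lower() in SUPPORTED


# The network call only runs when credentials are actually configured.
if openai and os.getenv("AZURE_OPENAI_ENDPOINT") and os.getenv("AZURE_OPENAI_API_KEY"):
    # azure_endpoint and api_key are picked up from AZURE_OPENAI_ENDPOINT /
    # AZURE_OPENAI_API_KEY; only api_version is pinned explicitly here.
    client = openai.AzureOpenAI(api_version="2023-09-01-preview")
    path = "wikipediaOcelot.wav"
    if is_supported_audio(path):
        with open(path, "rb") as audio:
            transcription = client.audio.transcriptions.create(
                file=audio,
                model="whisper-deployment",  # hypothetical deployment name
            )
        print(transcription.text)
```

Opening the file in a `with` block (rather than the bare `open(...)` in the notebook cell) also ensures the handle is closed after the upload.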