{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Orchestrating Agents: Routines and Handoffs\n",
    "\n",
    "When working with language models, quite often all you need for solid performance is a good prompt and the right tools. However, when dealing with many unique flows, things may get hairy. This cookbook will walk through one way to tackle this.\n",
    "\n",
    "We'll introduce the notion of **routines** and **handoffs**, then walk through the implementation and show how they can be used to orchestrate multiple agents in a simple, powerful, and controllable way.\n",
    "\n",
    "Finally, we provide a sample repo, [Swarm](https://github.com/openai/swarm), that implements these ideas along with examples."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's start by setting up our imports."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "from pydantic import BaseModel\n",
    "from typing import Optional\n",
    "import json\n",
    "\n",
    "\n",
    "client = OpenAI()"
   ]
  },
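  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: `OpenAI()` reads your API key from the `OPENAI_API_KEY` environment variable. If you want to fail fast when the key is missing, here is a small optional check:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# optional: fail fast with a clear message if no key is configured\n",
    "assert os.getenv(\"OPENAI_API_KEY\"), \"Set the OPENAI_API_KEY environment variable first.\""
   ]
  },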
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Routines\n",
    "\n",
    "The notion of a \"routine\" is not strictly defined; it's meant to capture the idea of a set of steps. Concretely, let's define a routine to be a list of instructions in natural language (which we'll represent with a system prompt), along with the tools necessary to complete them.\n",
    "\n",
    "Let's take a look at an example. Below, we've defined a routine for a customer service agent, instructing it to triage the user's issue, then either suggest a fix or provide a refund. We've also defined the necessary functions `execute_refund` and `look_up_item`. We can call this a customer service routine, agent, assistant, etc. – however, the idea itself is the same: a set of steps and the tools to execute them."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Customer Service Routine\n",
    "\n",
    "system_message = (\n",
    "    \"You are a customer support agent for ACME Inc. \"\n",
    "    \"Always answer in a sentence or less. \"\n",
    "    \"Follow this routine with the user:\\n\"\n",
    "    \"1. First, ask probing questions and understand the user's problem deeper.\\n\"\n",
    "    \"   - unless the user has already provided a reason.\\n\"\n",
    "    \"2. Propose a fix (make one up).\\n\"\n",
    "    \"3. ONLY if not satisfied, offer a refund.\\n\"\n",
    "    \"4. If accepted, search for the ID and then execute refund.\"\n",
    ")\n",
    "\n",
    "\n",
    "def look_up_item(search_query):\n",
    "    \"\"\"Use to find item ID.\n",
    "    Search query can be a description or keywords.\"\"\"\n",
    "\n",
    "    # return hard-coded item ID - in reality would be a lookup\n",
    "    return \"item_132612938\"\n",
    "\n",
    "\n",
    "def execute_refund(item_id, reason=\"not provided\"):\n",
    "    print(\"Summary:\", item_id, reason)  # lazy summary\n",
    "    return \"success\"\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The main power of routines is their simplicity and robustness. Notice that these instructions contain conditionals, much like a state machine or branching in code (see the sketch below). LLMs can actually handle these cases quite robustly for small and medium-sized routines, with the added benefit of \"soft\" adherence – the LLM can naturally steer the conversation without getting stuck in dead-ends."
   ]
  },
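  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see the analogy, here is a purely illustrative sketch of the routine above written as rigid branching in code – every state and transition must be enumerated by hand, whereas the prompt version lets the model fill in the gaps. The `state` keys here are hypothetical:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# purely illustrative: the routine above as explicit branching\n",
    "def next_step(state):\n",
    "    if not state.get(\"reason\"):\n",
    "        return \"ask probing questions\"\n",
    "    if not state.get(\"fix_proposed\"):\n",
    "        return \"propose a fix (make one up)\"\n",
    "    if not state.get(\"satisfied\") and not state.get(\"refund_accepted\"):\n",
    "        return \"offer a refund\"\n",
    "    if state.get(\"refund_accepted\"):\n",
    "        return \"look up the item ID, then execute the refund\"\n",
    "    return \"done\"\n",
    "\n",
    "\n",
    "print(next_step({\"reason\": \"cracked anvil\"}))  # -> propose a fix (make one up)"
   ]
  },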
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Executing Routines\n",
    "\n",
    "To execute a routine, let's implement a simple loop that:\n",
    "1. Gets user input.\n",
    "2. Appends the user message to `messages`.\n",
    "3. Calls the model.\n",
    "4. Appends the model response to `messages`.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def run_full_turn(system_message, messages):\n",
    "    response = client.chat.completions.create(\n",
    "        model=\"gpt-4o-mini\",\n",
    "        messages=[{\"role\": \"system\", \"content\": system_message}] + messages,\n",
    "    )\n",
    "    message = response.choices[0].message\n",
    "    messages.append(message)\n",
    "\n",
    "    if message.content:\n",
    "        print(\"Assistant:\", message.content)\n",
    "\n",
    "    return message\n",
    "\n",
    "\n",
    "messages = []\n",
    "while True:\n",
    "    user = input(\"User: \")\n",
    "    messages.append({\"role\": \"user\", \"content\": user})\n",
    "\n",
    "    run_full_turn(system_message, messages)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As you can see, this currently ignores function calls, so let's add that.\n",
    "\n",
    "Models require functions to be formatted as a function schema. For convenience, we can define a helper function that turns Python functions into the corresponding function schema."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "import inspect\n",
    "\n",
    "\n",
    "def function_to_schema(func) -> dict:\n",
    "    type_map = {\n",
    "        str: \"string\",\n",
    "        int: \"integer\",\n",
    "        float: \"number\",\n",
    "        bool: \"boolean\",\n",
    "        list: \"array\",\n",
    "        dict: \"object\",\n",
    "        type(None): \"null\",\n",
    "    }\n",
    "\n",
    "    try:\n",
    "        signature = inspect.signature(func)\n",
    "    except ValueError as e:\n",
    "        raise ValueError(\n",
    "            f\"Failed to get signature for function {func.__name__}: {str(e)}\"\n",
    "        )\n",
    "\n",
    "    parameters = {}\n",
    "    for param in signature.parameters.values():\n",
    "        # fall back to \"string\" for missing or unrecognized annotations\n",
    "        param_type = type_map.get(param.annotation, \"string\")\n",
    "        parameters[param.name] = {\"type\": param_type}\n",
    "\n",
    "    required = [\n",
    "        param.name\n",
    "        for param in signature.parameters.values()\n",
    "        if param.default is inspect.Parameter.empty\n",
    "    ]\n",
    "\n",
    "    return {\n",
    "        \"type\": \"function\",\n",
    "        \"function\": {\n",
    "            \"name\": func.__name__,\n",
    "            \"description\": (func.__doc__ or \"\").strip(),\n",
    "            \"parameters\": {\n",
    "                \"type\": \"object\",\n",
    "                \"properties\": parameters,\n",
    "                \"required\": required,\n",
    "            },\n",
    "        },\n",
    "    }"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\n",
      "  \"type\": \"function\",\n",
      "  \"function\": {\n",
      "    \"name\": \"sample_function\",\n",
      "    \"description\": \"This is my docstring. Call this function when you want.\",\n",
      "    \"parameters\": {\n",
      "      \"type\": \"object\",\n",
      "      \"properties\": {\n",
      "        \"param_1\": {\n",
      "          \"type\": \"string\"\n",
      "        },\n",
      "        \"param_2\": {\n",
      "          \"type\": \"string\"\n",
      "        },\n",
      "        \"the_third_one\": {\n",
      "          \"type\": \"integer\"\n",
      "        },\n",
      "        \"some_optional\": {\n",
      "          \"type\": \"string\"\n",
      "        }\n",
      "      },\n",
      "      \"required\": [\n",
      "        \"param_1\",\n",
      "        \"param_2\",\n",
      "        \"the_third_one\"\n",
      "      ]\n",
      "    }\n",
      "  }\n",
      "}\n"
     ]
    }
   ],
   "source": [
    "def sample_function(param_1, param_2, the_third_one: int, some_optional=\"John Doe\"):\n",
    "    \"\"\"\n",
    "    This is my docstring. Call this function when you want.\n",
    "    \"\"\"\n",
    "    print(\"Hello, world\")\n",
    "\n",
    "\n",
    "schema = function_to_schema(sample_function)\n",
    "print(json.dumps(schema, indent=2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we can use this function to pass the tools to the model when we call it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Function(arguments='{\"search_query\":\"black boot\"}', name='look_up_item')"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages = []\n",
    "\n",
    "tools = [execute_refund, look_up_item]\n",
    "tool_schemas = [function_to_schema(tool) for tool in tools]\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=[{\"role\": \"user\", \"content\": \"Look up the black boot.\"}],\n",
    "    tools=tool_schemas,\n",
    ")\n",
    "message = response.choices[0].message\n",
    "\n",
    "message.tool_calls[0].function"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, when the model calls a tool, we need to execute the corresponding function and provide the result back to the model.\n",
    "\n",
    "We can do this by mapping the name of the tool to the Python function in a `tools_map`, then looking it up in `execute_tool_call` and calling it. Finally, we add the result to the conversation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Assistant: look_up_item({'search_query': 'black boot'})\n"
     ]
    }
   ],
   "source": [
    "tools_map = {tool.__name__: tool for tool in tools}\n",
    "\n",
    "\n",
    "def execute_tool_call(tool_call, tools_map):\n",
    "    name = tool_call.function.name\n",
    "    args = json.loads(tool_call.function.arguments)\n",
    "\n",
    "    print(f\"Assistant: {name}({args})\")\n",
    "\n",
    "    # call corresponding function with provided arguments\n",
    "    return tools_map[name](**args)\n",
    "\n",
    "\n",
    "for tool_call in message.tool_calls:\n",
    "    result = execute_tool_call(tool_call, tools_map)\n",
    "\n",
    "    # add result back to conversation\n",
    "    result_message = {\n",
    "        \"role\": \"tool\",\n",
    "        \"tool_call_id\": tool_call.id,\n",
    "        \"content\": result,\n",
    "    }\n",
    "    messages.append(result_message)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In practice, we'll also want to let the model use the result to produce another response. That response might _also_ contain a tool call, so we can just run this in a loop until there are no more tool calls.\n",
    "\n",
    "If we put everything together, it will look something like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tools = [execute_refund, look_up_item]\n",
    "\n",
    "\n",
    "def run_full_turn(system_message, tools, messages):\n",
    "\n",
    "    num_init_messages = len(messages)\n",
    "    messages = messages.copy()\n",
    "\n",
    "    while True:\n",
    "\n",
    "        # turn Python functions into tools and save a reverse map\n",
    "        tool_schemas = [function_to_schema(tool) for tool in tools]\n",
    "        tools_map = {tool.__name__: tool for tool in tools}\n",
    "\n",
    "        # === 1. get openai completion ===\n",
    "        response = client.chat.completions.create(\n",
    "            model=\"gpt-4o-mini\",\n",
    "            messages=[{\"role\": \"system\", \"content\": system_message}] + messages,\n",
    "            tools=tool_schemas or None,\n",
    "        )\n",
    "        message = response.choices[0].message\n",
    "        messages.append(message)\n",
    "\n",
    "        if message.content:  # print assistant response\n",
    "            print(\"Assistant:\", message.content)\n",
    "\n",
    "        if not message.tool_calls:  # if finished handling tool calls, break\n",
    "            break\n",
    "\n",
    "        # === 2. handle tool calls ===\n",
    "        for tool_call in message.tool_calls:\n",
    "            result = execute_tool_call(tool_call, tools_map)\n",
    "\n",
    "            result_message = {\n",
    "                \"role\": \"tool\",\n",
    "                \"tool_call_id\": tool_call.id,\n",
    "                \"content\": result,\n",
    "            }\n",
    "            messages.append(result_message)\n",
    "\n",
    "    # ==== 3. return new messages =====\n",
    "    return messages[num_init_messages:]\n",
    "\n",
    "\n",
    "def execute_tool_call(tool_call, tools_map):\n",
    "    name = tool_call.function.name\n",
    "    args = json.loads(tool_call.function.arguments)\n",
    "\n",
    "    print(f\"Assistant: {name}({args})\")\n",
    "\n",
    "    # call corresponding function with provided arguments\n",
    "    return tools_map[name](**args)\n",
    "\n",
    "\n",
    "messages = []\n",
    "while True:\n",
    "    user = input(\"User: \")\n",
    "    messages.append({\"role\": \"user\", \"content\": user})\n",
    "\n",
    "    new_messages = run_full_turn(system_message, tools, messages)\n",
    "    messages.extend(new_messages)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we have a routine, let's say we want to add more steps and more tools. We can, up to a point, but eventually, if we try growing the routine with too many different tasks, it may start to struggle. This is where we can leverage the notion of multiple routines – given a user request, we can load the right routine, with the appropriate steps and tools, to address it (see the sketch below).\n",
    "\n",
    "Dynamically swapping system instructions and tools may seem daunting. However, if we view \"routines\" as \"agents\", then this notion of **handoffs** allows us to represent these swaps simply – as one agent handing off a conversation to another."
   ]
  },
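  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make \"loading the right routine\" concrete before we formalize it, here is a minimal, hypothetical sketch: a dictionary of routines (each a system prompt plus tools), selected by a hard-coded `classify_request` stub that a real system might replace with a model call. None of these names are used later – they are for illustration only."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# hypothetical sketch: one routine per task, picked per user request\n",
    "routines = {\n",
    "    \"refunds\": {\n",
    "        \"instructions\": \"You handle refunds. Follow the refund routine.\",\n",
    "        \"tools\": [execute_refund, look_up_item],\n",
    "    },\n",
    "    \"sales\": {\n",
    "        \"instructions\": \"You sell ACME products. Follow the sales routine.\",\n",
    "        \"tools\": [],\n",
    "    },\n",
    "}\n",
    "\n",
    "\n",
    "def classify_request(user_message):\n",
    "    # stand-in for a classifier or a model call\n",
    "    return \"refunds\" if \"refund\" in user_message.lower() else \"sales\"\n",
    "\n",
    "\n",
    "user_message = \"I want a refund for my boots.\"\n",
    "routine = routines[classify_request(user_message)]\n",
    "# run_full_turn(routine[\"instructions\"], routine[\"tools\"], messages) would then use it"
   ]
  },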
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Handoffs\n",
    "\n",
    "Let's define a **handoff** as an agent (or routine) handing off an active conversation to another agent, much like when you get transferred to someone else on a phone call. Except in this case, the agents have complete knowledge of your prior conversation!\n",
    "\n",
    "To see handoffs in action, let's start by defining a basic class for an `Agent`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Agent(BaseModel):\n",
    "    name: str = \"Agent\"\n",
    "    model: str = \"gpt-4o-mini\"\n",
    "    instructions: str = \"You are a helpful Agent\"\n",
    "    tools: list = []"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, to make our code support it, we can change `run_full_turn` to take an `Agent` instead of separate `system_message` and `tools` arguments:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "def run_full_turn(agent, messages):\n",
    "\n",
    "    num_init_messages = len(messages)\n",
    "    messages = messages.copy()\n",
    "\n",
    "    while True:\n",
    "\n",
    "        # turn Python functions into tools and save a reverse map\n",
    "        tool_schemas = [function_to_schema(tool) for tool in agent.tools]\n",
    "        tools_map = {tool.__name__: tool for tool in agent.tools}\n",
    "\n",
    "        # === 1. get openai completion ===\n",
    "        response = client.chat.completions.create(\n",
    "            model=agent.model,\n",
    "            messages=[{\"role\": \"system\", \"content\": agent.instructions}] + messages,\n",
    "            tools=tool_schemas or None,\n",
    "        )\n",
    "        message = response.choices[0].message\n",
    "        messages.append(message)\n",
    "\n",
    "        if message.content:  # print assistant response\n",
    "            print(\"Assistant:\", message.content)\n",
    "\n",
    "        if not message.tool_calls:  # if finished handling tool calls, break\n",
    "            break\n",
    "\n",
    "        # === 2. handle tool calls ===\n",
    "        for tool_call in message.tool_calls:\n",
    "            result = execute_tool_call(tool_call, tools_map)\n",
    "\n",
    "            result_message = {\n",
    "                \"role\": \"tool\",\n",
    "                \"tool_call_id\": tool_call.id,\n",
    "                \"content\": result,\n",
    "            }\n",
    "            messages.append(result_message)\n",
    "\n",
    "    # ==== 3. return new messages =====\n",
    "    return messages[num_init_messages:]\n",
    "\n",
    "\n",
    "def execute_tool_call(tool_call, tools_map):\n",
    "    name = tool_call.function.name\n",
    "    args = json.loads(tool_call.function.arguments)\n",
    "\n",
    "    print(f\"Assistant: {name}({args})\")\n",
    "\n",
    "    # call corresponding function with provided arguments\n",
    "    return tools_map[name](**args)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can now run multiple agents easily:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "User: Place an order for a black boot.\n",
      "Assistant: place_order({'item_name': 'black boot'})\n",
      "Assistant: Your order for a black boot has been successfully placed! If you need anything else, feel free to ask!\n",
      "User: Actually, I want a refund.\n",
      "Assistant: execute_refund({'item_name': 'black boot'})\n",
      "Assistant: Your refund for the black boot has been successfully processed. If you need further assistance, just let me know!\n"
     ]
    }
   ],
   "source": [
    "def execute_refund(item_name):\n",
    "    return \"success\"\n",
    "\n",
    "\n",
    "refund_agent = Agent(\n",
    "    name=\"Refund Agent\",\n",
    "    instructions=\"You are a refund agent. Help the user with refunds.\",\n",
    "    tools=[execute_refund],\n",
    ")\n",
    "\n",
    "\n",
    "def place_order(item_name):\n",
    "    return \"success\"\n",
    "\n",
    "\n",
    "sales_assistant = Agent(\n",
    "    name=\"Sales Assistant\",\n",
    "    instructions=\"You are a sales assistant. Sell the user a product.\",\n",
    "    tools=[place_order],\n",
    ")\n",
    "\n",
    "\n",
    "messages = []\n",
    "user_query = \"Place an order for a black boot.\"\n",
    "print(\"User:\", user_query)\n",
    "messages.append({\"role\": \"user\", \"content\": user_query})\n",
    "\n",
    "response = run_full_turn(sales_assistant, messages)  # sales assistant\n",
    "messages.extend(response)\n",
    "\n",
    "\n",
    "user_query = \"Actually, I want a refund.\"  # implicitly refers to the last item\n",
    "print(\"User:\", user_query)\n",
    "messages.append({\"role\": \"user\", \"content\": user_query})\n",
    "response = run_full_turn(refund_agent, messages)  # refund agent"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Great! But we did the handoff manually here – we want the agents themselves to decide when to perform a handoff. A simple but surprisingly effective way to do this is to give them a `transfer_to_XXX` function, where `XXX` is some agent. The model is smart enough to know to call this function when it makes sense to make a handoff! (The quick check below shows what the model actually sees.)"
   ]
  },
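  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of why this works: the model only ever sees a function's name and docstring via its schema, so descriptive names like `transfer_to_refunds` do the heavy lifting. A minimal sketch, reusing the `function_to_schema` helper from above (the stub below is illustrative; the real transfer functions are defined in the next section):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def transfer_to_refunds_example():\n",
    "    \"\"\"Call this when the user asks about refunds.\"\"\"\n",
    "    ...\n",
    "\n",
    "\n",
    "# the name and description are all the model sees when deciding to hand off\n",
    "print(json.dumps(function_to_schema(transfer_to_refunds_example), indent=2))"
   ]
  },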
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Handoff Functions\n",
    "\n",
    "Now that an agent can express the _intent_ to make a handoff, we must make it actually happen. There are many ways to do this, but one is particularly clean.\n",
    "\n",
    "The agent functions we've defined so far, like `execute_refund` and `place_order`, return a string, which is provided to the model. What if, instead, we return an `Agent` object to indicate which agent we want to transfer to? Like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "refund_agent = Agent(\n",
    "    name=\"Refund Agent\",\n",
    "    instructions=\"You are a refund agent. Help the user with refunds.\",\n",
    "    tools=[execute_refund],\n",
    ")\n",
    "\n",
    "\n",
    "def transfer_to_refunds():\n",
    "    return refund_agent\n",
    "\n",
    "\n",
    "sales_assistant = Agent(\n",
    "    name=\"Sales Assistant\",\n",
    "    instructions=\"You are a sales assistant. Sell the user a product.\",\n",
    "    tools=[place_order, transfer_to_refunds],\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can then update our code to check the return type of a function response, and if it's an `Agent`, update the agent in use! Additionally, `run_full_turn` will now need to return the latest agent in use, in case there are handoffs. (We can do this in a `Response` class to keep things neat.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Response(BaseModel):\n",
    "    agent: Optional[Agent]\n",
    "    messages: list"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now for the updated `run_full_turn`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def run_full_turn(agent, messages):\n",
    "\n",
    "    current_agent = agent\n",
    "    num_init_messages = len(messages)\n",
    "    messages = messages.copy()\n",
    "\n",
    "    while True:\n",
    "\n",
    "        # turn Python functions into tools and save a reverse map\n",
    "        tool_schemas = [function_to_schema(tool) for tool in current_agent.tools]\n",
    "        tools = {tool.__name__: tool for tool in current_agent.tools}\n",
    "\n",
    "        # === 1. get openai completion ===\n",
    "        response = client.chat.completions.create(\n",
    "            model=current_agent.model,  # use the current agent's model, even after a handoff\n",
    "            messages=[{\"role\": \"system\", \"content\": current_agent.instructions}]\n",
    "            + messages,\n",
    "            tools=tool_schemas or None,\n",
    "        )\n",
    "        message = response.choices[0].message\n",
    "        messages.append(message)\n",
    "\n",
    "        if message.content:  # print agent response\n",
    "            print(f\"{current_agent.name}:\", message.content)\n",
    "\n",
    "        if not message.tool_calls:  # if finished handling tool calls, break\n",
    "            break\n",
    "\n",
    "        # === 2. handle tool calls ===\n",
    "        for tool_call in message.tool_calls:\n",
    "            result = execute_tool_call(tool_call, tools, current_agent.name)\n",
    "\n",
    "            if isinstance(result, Agent):  # if agent transfer, update current agent\n",
    "                current_agent = result\n",
    "                result = (\n",
    "                    f\"Transferred to {current_agent.name}. Adopt persona immediately.\"\n",
    "                )\n",
    "\n",
    "            result_message = {\n",
    "                \"role\": \"tool\",\n",
    "                \"tool_call_id\": tool_call.id,\n",
    "                \"content\": result,\n",
    "            }\n",
    "            messages.append(result_message)\n",
    "\n",
    "    # ==== 3. return last agent used and new messages =====\n",
    "    return Response(agent=current_agent, messages=messages[num_init_messages:])\n",
    "\n",
    "\n",
    "def execute_tool_call(tool_call, tools, agent_name):\n",
    "    name = tool_call.function.name\n",
    "    args = json.loads(tool_call.function.arguments)\n",
    "\n",
    "    print(f\"{agent_name}:\", f\"{name}({args})\")\n",
    "\n",
    "    # call corresponding function with provided arguments\n",
    "    return tools[name](**args)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's look at an example with more Agents."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def escalate_to_human(summary):\n",
    "    \"\"\"Only call this if explicitly asked to.\"\"\"\n",
    "    print(\"Escalating to human agent...\")\n",
    "    print(\"\\n=== Escalation Report ===\")\n",
    "    print(f\"Summary: {summary}\")\n",
    "    print(\"=========================\\n\")\n",
    "    exit()\n",
    "\n",
    "\n",
    "def transfer_to_sales_agent():\n",
    "    \"\"\"Use for anything sales or buying related.\"\"\"\n",
    "    return sales_agent\n",
    "\n",
    "\n",
    "def transfer_to_issues_and_repairs():\n",
    "    \"\"\"Use for issues, repairs, or refunds.\"\"\"\n",
    "    return issues_and_repairs_agent\n",
    "\n",
    "\n",
    "def transfer_back_to_triage():\n",
    "    \"\"\"Call this if the user brings up a topic outside of your purview,\n",
    "    including escalating to human.\"\"\"\n",
    "    return triage_agent\n",
    "\n",
    "\n",
    "triage_agent = Agent(\n",
    "    name=\"Triage Agent\",\n",
    "    instructions=(\n",
    "        \"You are a customer service bot for ACME Inc. \"\n",
    "        \"Introduce yourself. Always be very brief. \"\n",
    "        \"Gather information to direct the customer to the right department. \"\n",
    "        \"But make your questions subtle and natural.\"\n",
    "    ),\n",
    "    tools=[transfer_to_sales_agent, transfer_to_issues_and_repairs, escalate_to_human],\n",
    ")\n",
    "\n",
    "\n",
    "def execute_order(product, price: int):\n",
    "    \"\"\"Price should be in USD.\"\"\"\n",
    "    print(\"\\n\\n=== Order Summary ===\")\n",
    "    print(f\"Product: {product}\")\n",
    "    print(f\"Price: ${price}\")\n",
    "    print(\"=================\\n\")\n",
    "    confirm = input(\"Confirm order? y/n: \").strip().lower()\n",
    "    if confirm == \"y\":\n",
    "        print(\"Order execution successful!\")\n",
    "        return \"Success\"\n",
    "    else:\n",
    "        print(\"Order cancelled!\")\n",
    "        return \"User cancelled order.\"\n",
    "\n",
    "\n",
    "sales_agent = Agent(\n",
    "    name=\"Sales Agent\",\n",
    "    instructions=(\n",
    "        \"You are a sales agent for ACME Inc. \"\n",
    "        \"Always answer in a sentence or less. \"\n",
    "        \"Follow this routine with the user:\\n\"\n",
    "        \"1. Ask them about any problems in their life related to catching roadrunners.\\n\"\n",
    "        \"2. Casually mention that one of ACME's crazy made-up products can help.\\n\"\n",
    "        \"   - Don't mention price.\\n\"\n",
    "        \"3. Once the user is bought in, drop a ridiculous price.\\n\"\n",
    "        \"4. Only after everything, and if the user says yes, \"\n",
    "        \"tell them a crazy caveat and execute their order.\\n\"\n",
    "    ),\n",
    "    tools=[execute_order, transfer_back_to_triage],\n",
    ")\n",
    "\n",
    "\n",
    "def look_up_item(search_query):\n",
    "    \"\"\"Use to find item ID.\n",
    "    Search query can be a description or keywords.\"\"\"\n",
    "    item_id = \"item_132612938\"\n",
    "    print(\"Found item:\", item_id)\n",
    "    return item_id\n",
    "\n",
    "\n",
    "def execute_refund(item_id, reason=\"not provided\"):\n",
    "    print(\"\\n\\n=== Refund Summary ===\")\n",
    "    print(f\"Item ID: {item_id}\")\n",
    "    print(f\"Reason: {reason}\")\n",
    "    print(\"=================\\n\")\n",
    "    print(\"Refund execution successful!\")\n",
    "    return \"success\"\n",
    "\n",
    "\n",
    "issues_and_repairs_agent = Agent(\n",
    "    name=\"Issues and Repairs Agent\",\n",
    "    instructions=(\n",
    "        \"You are a customer support agent for ACME Inc. \"\n",
    "        \"Always answer in a sentence or less. \"\n",
    "        \"Follow this routine with the user:\\n\"\n",
    "        \"1. First, ask probing questions and understand the user's problem deeper.\\n\"\n",
    "        \"   - unless the user has already provided a reason.\\n\"\n",
    "        \"2. Propose a fix (make one up).\\n\"\n",
    "        \"3. ONLY if not satisfied, offer a refund.\\n\"\n",
    "        \"4. If accepted, search for the ID and then execute refund.\"\n",
    "    ),\n",
    "    tools=[execute_refund, look_up_item, transfer_back_to_triage],\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, we can run this in a loop (this won't run in Python notebooks, so you can try it in a separate Python file, or use the scripted variant further below):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent = triage_agent\n",
    "messages = []\n",
    "\n",
    "while True:\n",
    "    user = input(\"User: \")\n",
    "    messages.append({\"role\": \"user\", \"content\": user})\n",
    "\n",
    "    response = run_full_turn(agent, messages)\n",
    "    agent = response.agent\n",
    "    messages.extend(response.messages)"
   ]
  },
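  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you'd rather try it inside the notebook, here is a minimal scripted sketch of the same turn loop: it replays a fixed list of user messages instead of blocking on `input`. The sample messages are just placeholders."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# scripted variant: replay canned user turns through the same agent loop\n",
    "scripted_turns = [\n",
    "    \"Hi, my anvil arrived cracked.\",\n",
    "    \"I'd like a refund, please.\",\n",
    "]\n",
    "\n",
    "agent = triage_agent\n",
    "messages = []\n",
    "\n",
    "for user in scripted_turns:\n",
    "    print(\"User:\", user)\n",
    "    messages.append({\"role\": \"user\", \"content\": user})\n",
    "\n",
    "    response = run_full_turn(agent, messages)\n",
    "    agent = response.agent  # keep whichever agent the turn ended on\n",
    "    messages.extend(response.messages)"
   ]
  },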
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Swarm\n",
    "\n",
    "As a proof of concept, we've packaged these ideas into a sample library called [Swarm](https://github.com/openai/swarm). It is meant as an example only, and should not be directly used in production. However, feel free to take the ideas and code to build your own!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}