Tompakeman (#1797)
parent 3c2a4de1ca
commit 69027a4b70
@@ -8,11 +8,11 @@
 "OpenAI now offers function calling using [reasoning models](https://platform.openai.com/docs/guides/reasoning?api-mode=responses). Reasoning models are trained to follow logical chains of thought, making them better suited for complex or multi-step tasks.\n",
 "> _Reasoning models like o3 and o4-mini are LLMs trained with reinforcement learning to perform reasoning. Reasoning models think before they answer, producing a long internal chain of thought before responding to the user. Reasoning models excel in complex problem solving, coding, scientific reasoning, and multi-step planning for agentic workflows. They're also the best models for Codex CLI, our lightweight coding agent._\n",
 "\n",
-"For the most part, using these models via the API is very simple and comparable to using familiar classic 'chat' models. \n",
+"For the most part, using these models via the API is very simple and comparable to using familiar 'chat' models. \n",
 "\n",
 "However, there are some nuances to bear in mind, particularly when it comes to using features such as function calling. \n",
 "\n",
-"All examples in this notebook use the newer [Responses API](https://community.openai.com/t/introducing-the-responses-api/1140929) which provides convenient abstractions for managing conversation state. The principles here are however relevant when using the older chat completions API."
+"All examples in this notebook use the newer [Responses API](https://community.openai.com/t/introducing-the-responses-api/1140929), which provides convenient abstractions for managing conversation state. However, the principles here are relevant when using the older chat completions API."
 ]
},
{
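For context, a minimal call of this kind via the Responses API looks like the sketch below (assuming the `openai` Python SDK and an `OPENAI_API_KEY` environment variable; the model name and prompt are illustrative rather than taken from the notebook):

```python
# Minimal sketch: calling a reasoning model through the Responses API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="o4-mini",  # illustrative reasoning model
    input="Which numbers between 1 and 20 are prime, and what is their sum?",
)
print(response.output_text)
```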
@@ -86,9 +86,10 @@
 "source": [
 "Nice and easy!\n",
 "\n",
-"We're asking relatively complex questions that may requires the model to reason out a plan and proceed through it in steps, but this reasoning is hidden from us. We simply wait a little longer before being shown the output. \n",
+"We're asking relatively complex questions that may require the model to reason out a plan and proceed through it in steps, but this reasoning is hidden from us; we simply wait a little longer before being shown the response. \n",
 "\n",
 "However, if we inspect the output we can see that the model has made use of a hidden set of 'reasoning' tokens that were included in the model context window, but not exposed to us as end users.\n",
-"We can see these tokens and a summary of the reasoning (but not the literal tokens used) in the response"
+"We can see these tokens and a summary of the reasoning (but not the literal tokens used) in the response."
 ]
},
{
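Concretely, that hidden usage can be read off the response object. A sketch, assuming `response` is the result of a `client.responses.create` call and that a summary was requested via the `reasoning` parameter (e.g. `reasoning={"summary": "auto"}`):

```python
# Sketch: inspecting reasoning token usage on a Responses API result.
# Reasoning tokens are billed as output tokens but never shown verbatim.
usage = response.usage
print("Total output tokens:", usage.output_tokens)
print("...of which hidden reasoning tokens:", usage.output_tokens_details.reasoning_tokens)

# If summaries were requested, the output list carries a 'reasoning' item
# whose summary parts describe (but do not reproduce) the chain of thought.
for item in response.output:
    if item.type == "reasoning":
        for part in item.summary:
            print(part.text)
```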
@@ -129,7 +130,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"It is important to know about these reasoning tokens, because it means we will consume our available context window more quickly than with traditional chat models. More on this later.\n",
+"It is important to know about these reasoning tokens, because it means we will consume our available context window more quickly than with traditional chat models.\n",
 "\n",
 "## Calling custom functions\n",
 "What happens if we ask the model a complex request that also requires the use of custom tools?\n",
@@ -182,7 +183,10 @@
 "# Let's add this to our defaults so we don't have to pass it every time\n",
 "MODEL_DEFAULTS[\"tools\"] = tools\n",
 "\n",
-"response = client.responses.create(input=\"What's the internal ID for the lowest-temperature city?\", previous_response_id=response.id, **MODEL_DEFAULTS)\n",
+"response = client.responses.create(\n",
+"    input=\"What's the internal ID for the lowest-temperature city?\",\n",
+"    previous_response_id=response.id,\n",
+"    **MODEL_DEFAULTS)\n",
 "print(response.output_text)\n"
 ]
},
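`MODEL_DEFAULTS` is defined earlier in the notebook and its contents are not shown in this diff; a plausible shape for a defaults dictionary splatted into every `client.responses.create` call would be:

```python
# Hypothetical reconstruction of MODEL_DEFAULTS; the notebook's actual values
# are outside this hunk.
MODEL_DEFAULTS = {
    "model": "o4-mini",                # a reasoning model
    "reasoning": {"summary": "auto"},  # request reasoning summaries
}
# The cell above then adds: MODEL_DEFAULTS["tools"] = tools
```

Splatting a single defaults dict keeps every call consistent while still letting individual calls pass extra parameters such as `previous_response_id`.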
@@ -219,7 +223,8 @@
 "metadata": {},
 "source": [
 "Along with the reasoning step, the model has successfully identified the need for a tool call and passed back instructions to send to our function call. \n",
-"Let's invoke the function and pass the results back to the model so it can continue.\n",
+"\n",
+"Let's invoke the function and send the results to the model so it can continue reasoning.\n",
 "Function responses are a special kind of message, so we need to structure our next message as a special kind of input:\n",
 "```json\n",
 "{\n",
@@ -410,7 +415,7 @@
 "* We may wish to store messages in our own database for audit purposes rather than relying on OpenAI's storage and orchestration\n",
 "* etc.\n",
 "\n",
-"In these situations we will treat the API as stateless - rather than using `previous_message_id` we will instead make and maintain an array of conversation items that we add to and pass as input. This allows us full control of the conversation.\n",
+"In these situations we may wish to take full control of the conversation. Rather than using `previous_response_id` we can instead treat the API as 'stateless' and make and maintain an array of conversation items that we send to the model as input each time.\n",
 "\n",
 "This poses some reasoning-model-specific nuances to consider. \n",
 "* In particular, it is essential that we preserve any reasoning and function call responses in our conversation history.\n",