Msingh agentic systems code interpreter (#1639)

This commit is contained in:
Mandeep Singh 2025-01-26 16:34:18 -08:00 committed by GitHub
parent 0ce13027d4
commit 47858653d6
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
22 changed files with 10173 additions and 0 deletions

@ -0,0 +1,562 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5fde7616369c1e1e",
"metadata": {},
"source": [
"## Build Your Own Code Interpreter: Empowering LLM Agents with Dynamic Tool Calling\n",
"\n",
"Many API providers—such as OpenAIs Assistants API—offer built-in code interpreter functionality. These built-in tools can be immensely powerful, but there are situations where developers may need to create their own custom code interpreter. For example:\n",
"\n",
"1. **Language or library support**: The built-in interpreter may not support the specific programming language (e.g., C++, Java, etc.) or libraries required for your task.\n",
"2. **Task compatibility**: Your use case may not be compatible with the providers built-in solution.\n",
"3. **Model constraints**: You might require a language model that isnt supported by the providers interpreter.\n",
"4. **Cost considerations**: The cost structure for code execution or model usage may not fit your budget or constraints. \n",
"5. **File size**: The file size of input data is too large or not supported by the provider's interpreter.\n",
"6. **Integrating with internal systems**: The provider's interpreter may not be able to integrate with your internal systems.\n",
"\n",
"At the core of this approach is “function (or tool) calling,” where a Large Language Model (LLM) can invoke a function with arguments. Typically, these functions are predefined by the developer, along with their expected arguments and outputs. However, in this Cookbook, we explore a more flexible paradigm.\n",
"\n",
"### Dynamically Generated Tool Calling with Code Interpreter\n",
"A Dynamically Generated Tool is a function or code block created by the LLM itself at runtime based on the users prompt. This means you dont have to predefine every possible scenario in your codebase—enabling far more open-ended, creative, and adaptive problem-solving.\n",
"\n",
"Dynamically Generated Tool Calling goes a step further by granting the LLM the ability to generate tools and execute code blocks on the fly. This dynamic approach is particularly useful for tasks that involve:\n",
"\n",
"- Data analysis and visualization\n",
"- Data manipulation and transformation\n",
"- Machine learning workflow generation and execution\n",
"- Process automation and scripting\n",
"- And much more, as new possibilities emerge through experimentation\n",
"\n",
"### What Youll Learn\n",
"By following this Cookbook, you will learn how to:\n",
"\n",
"- Set up an isolated Python code execution environment using Docker\n",
"- Configure your own code interpreter tool for LLM agents\n",
"- Establish a clear separation of “Agentic” concerns for security and safety\n",
"- Orchestrate agents to efficiently accomplish a given task\n",
"- Design an agentic application that can dynamically generate and execute code\n",
"\n",
"Youll see how to build a custom code interpreter tool from the ground up, leverage the power of LLMs to generate sophisticated code, and safely execute that code in an isolated environment—all in pursuit of making your AI-powered applications more flexible, powerful, and cost-effective."
]
},
{
"cell_type": "markdown",
"id": "fcd93887",
"metadata": {},
"source": [
"### Example Scenario\n",
"\n",
"We'll use the sample data provided at [Key Factors Traffic Accidents](https://www.kaggle.com/datasets/willianoliveiragibin/key-factors-traffic-accidents) to answer a set of questions. These questions do not require to be pre-defined, we will give LLM the ability to generate code to answer such question. \n",
"\n",
"Sample questions could be: \n",
"- What factors contribute the most to accident frequency? (Feature importance analysis)\n",
"- Which areas are at the highest risk of accidents? (Classification/Clustering)\n",
"- How does traffic fine amount influence the number of accidents? (Regression/Causal inference)\n",
"- Can we determine the optimal fine amounts to reduce accident rates? (Optimization models)\n",
"- Do higher fines correlate with lower average speeds or reduced accidents? (Correlation/Regression)\n",
"- and so on ...\n",
"\n",
"Using the traditional **Predefined Tool Calling** approach, developer would need to pre-define the function for each of these questions. This limits the LLM's ability to answer any other questions not defined in the pre-defined set of functions. We overcome this limitation by using the **Dynamic Tool Calling** approach where the LLM generates code and uses a Code Interpretter tool to execute the code. "
]
},
{
"cell_type": "markdown",
"id": "e301abe8",
"metadata": {},
"source": [
"## Overview\n",
"Let's dive into the steps to build this Agentic Applicaiton with Dynamically generated tool calling. There are three components to this application:"
]
},
{
"cell_type": "markdown",
"id": "8ad2269f",
"metadata": {},
"source": [
"#### Step 1: Set up an isolated code execution container environment\n",
"\n",
"We need a secure environment where our LLM generated function calls can be executed. We want to avoid directly running the LLM generated code on the host machine so will create a Docker container environment with restricted resource access (e.g., no network access). By default, Docker containers cannot access the host machines file system, which helps ensure that any code generated by the LLM remains contained. \n",
"\n",
"##### ⚠️ A WORD OF CAUTION: Implement Strong Gaurdrails for the LLM generated code\n",
"LLMs could generate harmful code with unintended consequences. As a best practice, isolate the code execution environment with only required access to resources as needed by the task. Avoid running the LLM generated code on your host machine or laptop. \n",
"\n",
"#### Step 2: Define and Test the Agents\n",
"\n",
"\"**What is an Agent?**\" In the context of this Cookbook, an Agent is:\n",
"1. Set of instructions for the LLM to follow, i.e. the developer prompt\n",
"2. A LLM model, and ability to call the model via the API \n",
"3. Tool call access to a function, and ability to execute the function \n",
"\n",
"We will define two agents:\n",
"1. FileAccessAgent: This agent will read the file and provide the context to the PythonCodeExecAgent.\n",
"2. PythonCodeExecAgent: This agent will generate the Python code to answer the user's question and execute the code in the Docker container.\n",
"\n",
"#### Step 3: Set up Agentic Orchestration to run the application \n",
"There are various ways to orchestrate the Agents based on the application requirements. In this example, we will use a simple orchestration where the user provides a task and the agents are called in sequence to accomplish the task. \n",
"\n",
"The overall orchestration is shown below:\n",
"\n",
"![Agentic Workflow Orchestration](./resources/images/AgenticWorkflow.png)"
]
},
{
"cell_type": "markdown",
"id": "5a651f8d",
"metadata": {},
"source": [
"## Let's get started\n",
"\n",
"\n",
"### Prerequisites\n",
"Before you begin, ensure you have the following installed and configured on your host machine:\n",
"\n",
"1. Docker: installed and running on your local machine. You can learn more about Docker and [install it from here](https://www.docker.com/). \n",
"2. Python: installed on your local machine. You can learn more about Python and [install it from here](https://www.python.org/downloads/). \n",
"3. OpenAI API key: set up on your local machine as an environment variable or in the .env file in the root directory. You can learn more about OpenAI API key and [set it up from here](https://platform.openai.com/docs/api-reference/introduction). \n"
]
},
{
"cell_type": "markdown",
"id": "e1ea3005fbd91c61",
"metadata": {},
"source": [
"### Step 1: Set up an Isolated Code Execution Environment \n",
"\n",
"Lets define a Dockerized container environment that will be used to execute our code. I have defined the **[dockerfile](./resources/docker/dockerfile)** under `resources/docker` directory that will be used to create the container environment with the following specifications:\n",
"- Python 3.10 as the base \n",
"- A non-root user \n",
"- Preinstall the packages in requirements.txt \n",
"\n",
"The requirements.txt included in the docker image creation process contains all the potential packages our LLM generated code may need to accomplish its tasks. Given we will restrict the container from network access, so we need to pre-install the packages that are required for the task. Our LLM will not be allowed to install any additional packages for security purposes. \n",
"\n",
"You could create your own docker image with the language requirements (such as Python 3.10) and pre-install the packages that are required for the task, or create a custom docker image with the specific language (such as Java, C++, etc.) and packages that are required for the task. "
]
},
{
"cell_type": "markdown",
"id": "cf2af004",
"metadata": {},
"source": [
"Let's build the docker image with the following command. For the sake of brevity, I have redirected the output to grep the success message and print a message if the build fails."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "fe136739b0bd164a",
"metadata": {
"ExecuteTime": {
"end_time": "2025-01-27T00:13:35.949224Z",
"start_time": "2025-01-27T00:13:33.524485Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"View build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/kl8fo02q7rgbindi9b42pn1zr\r\n"
]
}
],
"source": [
"!docker build -t python_sandbox:latest ./resources/docker 2>&1 | grep -E \"View build details|ERROR\" || echo \"Build failed.\""
]
},
{
"cell_type": "markdown",
"id": "c8c0e9024894d45d",
"metadata": {},
"source": [
"Let's run the container in restricted mode. The container will run in the background. This is our opportunity to define the security policies for the container. It is good practice to only allow the bare minimum features to the container that are required for the task. By default, the container cannot access the host file system from within the container. Let's also restrict its access to network so it cannot access the Internet or any other network resources. "
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cfaa0418f90fde09",
"metadata": {
"ExecuteTime": {
"end_time": "2025-01-27T00:13:43.561453Z",
"start_time": "2025-01-27T00:13:43.213479Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"8446d1e9a7972f2e00a5d1799451c1979d34a2962aa6b4c35a9868af8d321b0e\r\n"
]
}
],
"source": [
"# Run the container in restricted mode. The container will run in the background.\n",
"!docker run -d --name sandbox --network none --cap-drop all --pids-limit 64 --tmpfs /tmp:rw,size=64M python_sandbox:latest sleep infinity"
]
},
{
"cell_type": "markdown",
"id": "c21cafbcfdc09e2c",
"metadata": {},
"source": [
"Let's make sure container is running using the `docker ps` that should list our container. "
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "e4df845011b77a21",
"metadata": {
"ExecuteTime": {
"end_time": "2025-01-27T00:13:45.473413Z",
"start_time": "2025-01-27T00:13:45.316092Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\r\n",
"8446d1e9a797 python_sandbox:latest \"sleep infinity\" 2 seconds ago Up 2 seconds sandbox\r\n"
]
}
],
"source": [
"!docker ps "
]
},
{
"cell_type": "markdown",
"id": "dd42a2c8f40710c2",
"metadata": {},
"source": [
"### Step 2: Define and Test the Agents\n",
"\n",
"For our purposes, we will define two agents. \n",
"1.\t**Agent 1: File Access Agent (with Pre-defined Tool Calling)**\n",
"- Instructions to understand the contents of the file to provide as context to Agent 2.\n",
"- Has access to the host machines file system. \n",
"- Can read a file from the host and copy it into the Docker container.\n",
"- Cannot access the code interpreter tool. \n",
"\n",
"2.\t**Agent 2: Python Code Generator and Executor (with Dynamically Generated Tool Calling and Code Execution)**\n",
"- Recieve the file content's context from Agent 1.\n",
"- Instructions to generate a Python script to answer the user's question.\n",
"- Has access to the code interpreter within the Docker container, which is used to execute Python code.\n",
"- Has access only to the file system inside the Docker container (not the host).\n",
"- Cannot access the host machines file system or the network.\n",
"\n",
"This separation concerns of the File Access (Agent 1) and the Code Generator and Executor (Agent 2) is crucial to prevent the LLM from directly accessing or modifying the host machine. \n",
"\n",
"\n",
"**Limit the Agent 1 to Static Tool Calling as it has access to the host file system.** \n",
"\n",
"\n",
"| Agent | Type of Tool Call | Access to Host File System | Access to Docker Container File System | Access to Code Interpreter |\n",
"|-------|-------------------|----------------------------|----------------------------------------|----------------------------|\n",
"| Agent 1: File Access | Pre-defined Tools | Yes | Yes | No |\n",
"| Agent 2: Python Code Generator and Executor | Dynamically Generated Tools | No | Yes | Yes |\n"
]
},
{
"cell_type": "markdown",
"id": "844d4d2ba5d47226",
"metadata": {},
"source": [
"To keep the Agents and Tools organized, we've defined a set of **core classes** that will be used to create the two agents for consistency using Object Oriented Programming principles.\n",
"\n",
"- **BaseAgent**: We start with an abstract base class that enforces common method signatures such as `task()`. Base class also provides a logger for debugging, a language model interface and other common functions such as `add_context()` to add context to the agent. \n",
"- **ChatMessages**: A class to store the conversation history given ChatCompletions API is stateless. \n",
"- **ToolManager**: A class to manage the tools that an agent can call.\n",
"- **ToolInterface**: An abstract class for any 'tool' that an agent can call so that the tools will have a consistent interface.\n",
"\n",
"These classes are defined in the [object_oriented_agents/core_classes](./resources/object_oriented_agents/core_classes) directory. "
]
},
{
"cell_type": "markdown",
"id": "b90cda38",
"metadata": {},
"source": [
"#### UML Class Diagram for Core Classes\n",
"The following class diagram shows the relationship between the core classes. This UML (Unified Modeling Language) has been generated using [Mermaid](https://mermaid)\n",
"![Class Diagram](./resources/diagrams/images/class_diagram_mermaid.png)"
]
},
{
"cell_type": "markdown",
"id": "c6eb923b",
"metadata": {},
"source": [
"**Define Agent 1: FileAccessAgent with FileAccessTool**\n",
"\n",
"Let's start with definin the FileAccessTool that inherits from the ToolInterface class. The **FileAccessTool** tool is defined in the [file_access_agent.py](./resources/registry/tools/file_access_tool.py) file in the `resources/registry/tools` directory. \n",
"\n",
"- FileAccessTool implements the ToolInterface class, which ensures that the tools will have a consistent interface. \n",
"- Binding the tool definition for the OpenAI Function Calling API in the `get_definition` method and the tool's `run` method ensures maintainability, scalability, and reusability. "
]
},
{
"cell_type": "markdown",
"id": "643f349e",
"metadata": {},
"source": [
"Now, let's define the **FileAccessAgent** that extends the BaseAgent class and bind the **FileAccessTool** to the agent. The FileAccessAgent is defined in the [file_acess_agent.py](./resources/registry/agents/file_access_agent.py) file in `resources/registry/agents` directory. The FileAccessAgent is: \n",
"\n",
"- A concrete implementation of the BaseAgent class. \n",
"- Initialized with the developer prompt, model name, logger, and language model interface. These values can be overridden by the developer if needed. \n",
"- Has a setup_tools method that registers the FileAccessTool to the tool manager. \n",
"- Has a `task` method that calls the FileAccessTool to read the file and provide the context to the PythonCodeExecAgent. \n"
]
},
{
"cell_type": "markdown",
"id": "c06cd2e5447b4ff9",
"metadata": {},
"source": [
"**Define Agent 2: PythonExecAgent with PythonExecTool** \n",
"\n",
"Similarly, PythonExecTool inherits from the ToolInterface class and implements the get_definition and run methods. The get_definition method returns the tool definition in the format expected by the OpenAI Function Calling API. The run method executes the Python code in a Docker container and returns the output. This tool is defined in the [python_code_interpreter_tool.py](./resources/registry/tools/python_code_interpreter_tool.py) file in the `resources/registry/tools` directory. \n",
"\n",
"Likewise, PythonExecAgent is a concrete implementation of the BaseAgent class. It is defined in the [python_code_exec_agent.py](./resources/registry/agents/python_code_exec_agent.py) file in the `resources/registry/agents` directory. The PythonExecAgent is: \n",
"\n",
"- A concrete implementation of the BaseAgent class. \n",
"- Initialized with the developer prompt, model name, logger, and language model interface. These values can be overridden by the developer if needed. \n",
"- Has a setup_tools method that registers the PythonExecTool to the tool manager. \n",
"- Has a `task` method that calls the OpenAI API to perform the user's task, which in this case involves generating a Python script to answer the user's question and run it with Code Interpretter tool. "
]
},
{
"cell_type": "markdown",
"id": "bb93488f",
"metadata": {},
"source": [
"### Step 3: Set up Agentic Orchestration to run the application \n",
"\n",
"With the Agents defined, now we can define the orchestration loop that will run the application. This loop will prompt the user for a question or task, and then call the FileAccessAgent to read the file and provide the context to the PythonExecAgent. The PythonExecAgent will generate the Python code to answer the user's question and execute the code in the Docker container. The output from the code execution will be displayed to the user. \n",
"\n",
"User can type 'exit' to stop the application. Our question: **What factors contribute the most to accident frequency?** Note that we did not pre-define the function to answer this question.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "866b7eb2",
"metadata": {
"ExecuteTime": {
"end_time": "2025-01-27T00:16:03.490020Z",
"start_time": "2025-01-27T00:15:36.487115Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Setup: \n",
"Use the file traffic_accidents.csv for your analysis. The column names are:\n",
"Variable\tDescription\n",
"accidents\tNumber of recorded accidents, as a positive integer.\n",
"traffic_fine_amount\tTraffic fine amount, expressed in thousands of USD.\n",
"traffic_density\tTraffic density index, scale from 0 (low) to 10 (high).\n",
"traffic_lights\tProportion of traffic lights in the area (0 to 1).\n",
"pavement_quality\tPavement quality, scale from 0 (very poor) to 5 (excellent).\n",
"urban_area\tUrban area (1) or rural area (0), as an integer.\n",
"average_speed\tAverage speed of vehicles in km/h.\n",
"rain_intensity\tRain intensity, scale from 0 (no rain) to 3 (heavy rain).\n",
"vehicle_count\tEstimated number of vehicles, in thousands, as an integer.\n",
"time_of_day\tTime of day in 24-hour format (0 to 24).\n",
"accidents\ttraffic_fine_amount\n",
"\n",
"Setting up the agents... \n",
"Understanding the contents of the file...\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"2025-01-26 16:15:37,348 - MyApp - INFO - Handling tool call: safe_file_access\n",
"2025-01-26 16:15:37,350 - MyApp - INFO - Tool arguments: {'filename': './resources/data/traffic_accidents.csv'}\n",
"2025-01-26 16:15:37,512 - MyApp - INFO - Tool 'safe_file_access' response: Copied ./resources/data/traffic_accidents.csv into sandbox:/home/sandboxuser/.\n",
"The file content for the first 15 rows is:\n",
" accidents traffic_fine_amount traffic_density traffic_lights pavement_quality urban_area average_speed rain_intensity vehicle_count time_of_day\n",
"0 20 4.3709 2.3049 753.000 0.7700 1 321.592 1.1944 290.8570 160.4320\n",
"1 11 9.5564 3.2757 5.452 4.0540 1 478.623 6.2960 931.8120 8.9108\n",
"2 19 7.5879 2.0989 6.697 345.0000 0 364.476 2.8584 830.0860 5.5727\n",
"3 23 6.3879 4.9188 9.412 4.7290 0 20.920 2.1065 813.1590 131.4520\n",
"4 23 2.4042 1.9610 7.393 1.7111 1 37.378 1.7028 1.4663 6.9610\n",
"5 31 2.4040 6.7137 5.411 5.9050 1 404.621 1.8936 689.0410 8.1801\n",
"6 29 1.5228 5.2316 9.326 2.3785 1 16.292 2.5213 237.9710 12.6622\n",
"7 18 8.7956 8.9864 4.784 1.9984 0 352.566 1.9072 968.0670 8.0602\n",
"8 15 6.4100 1.6439 5.612 3.6090 1 217.198 3.4380 535.4440 8.2904\n",
"9 22 7.3727 8.0411 5.961 4.7650 1 409.261 2.0919 569.0560 203.5910\n",
"10 28 1.1853 7.9196 0.410 3.7678 1 147.689 1.6946 362.9180 224.1580\n",
"11 17 9.7292 1.2718 8.385 8.9720 0 46.888 2.8990 541.3630 198.5740\n",
"12 14 8.4920 3.9856 1.852 4.6776 0 287.393 2.2012 75.2240 2.3728\n",
"13 21 2.9111 1.7015 5.548 1.9607 1 176.652 1.0320 566.3010 6.9538\n",
"14 22 2.6364 2.5472 7.222 2.3709 0 209.686 4.0620 64.4850 170.7110\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Type your question related to the data in the file. Type 'exit' to exit.\n",
"User question: What factors contribute the most to accident frequency?\n",
"Generating dynamic tools and using code interpreter...\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"2025-01-26 16:15:50,144 - MyApp - INFO - Handling tool call: execute_python_code\n",
"2025-01-26 16:15:50,144 - MyApp - INFO - Tool arguments: {'python_code': \"import pandas as pd\\nimport numpy as np\\nfrom sklearn.ensemble import RandomForestRegressor\\n\\n# 1. Load the CSV file\\nfile_path = '/home/sandboxuser/traffic_accidents.csv'\\ndf = pd.read_csv(file_path)\\n\\n# 2. Prepare the data\\n# We'll treat 'accidents' as our target.\\nX = df.drop('accidents', axis=1)\\ny = df['accidents']\\n\\n# 3. Train a simple Random Forest Regressor to estimate feature importance.\\nrf = RandomForestRegressor(random_state=0)\\nrf.fit(X, y)\\n\\n# 4. Extract feature importances\\nfeature_importances = rf.feature_importances_\\ncolumns = X.columns\\n\\n# 5. Combine feature names and importances, and sort\\nimportances_df = pd.DataFrame({'Feature': columns, 'Importance': feature_importances})\\nimportances_df = importances_df.sort_values(by='Importance', ascending=False)\\n\\n# 6. Print the results\\nprint('Feature importances based on Random Forest Regressor:')\\nprint(importances_df.to_string(index=False))\"}\n",
"2025-01-26 16:15:53,260 - MyApp - INFO - Tool 'execute_python_code' response: Feature importances based on Random Forest Regressor:\n",
" Feature Importance\n",
"traffic_fine_amount 0.580858\n",
" traffic_density 0.164679\n",
" rain_intensity 0.095392\n",
" time_of_day 0.035838\n",
" average_speed 0.035590\n",
" pavement_quality 0.032545\n",
" traffic_lights 0.022789\n",
" vehicle_count 0.021246\n",
" urban_area 0.011062\n",
"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Output...\n",
"Based on a simple Random Forest analysis, the factor with the highest feature importance is “traffic_fine_amount.” The next most influential factors are “traffic_density” and “rain_intensity,” followed by “time_of_day” and “average_speed.”\n",
"Type your question related to the data in the file. Type 'exit' to exit.\n",
"Exiting the application.\n"
]
}
],
"source": [
"# Import the agents from registry/agents\n",
"\n",
"from resources.registry.agents.file_access_agent import FileAccessAgent\n",
"from resources.registry.agents.python_code_exec_agent import PythonExecAgent\n",
"\n",
"\n",
"prompt = \"\"\"Use the file traffic_accidents.csv for your analysis. The column names are:\n",
"Variable\tDescription\n",
"accidents\tNumber of recorded accidents, as a positive integer.\n",
"traffic_fine_amount\tTraffic fine amount, expressed in thousands of USD.\n",
"traffic_density\tTraffic density index, scale from 0 (low) to 10 (high).\n",
"traffic_lights\tProportion of traffic lights in the area (0 to 1).\n",
"pavement_quality\tPavement quality, scale from 0 (very poor) to 5 (excellent).\n",
"urban_area\tUrban area (1) or rural area (0), as an integer.\n",
"average_speed\tAverage speed of vehicles in km/h.\n",
"rain_intensity\tRain intensity, scale from 0 (no rain) to 3 (heavy rain).\n",
"vehicle_count\tEstimated number of vehicles, in thousands, as an integer.\n",
"time_of_day\tTime of day in 24-hour format (0 to 24).\n",
"accidents\ttraffic_fine_amount\n",
"\"\"\"\n",
"\n",
"\n",
"print(\"Setup: \")\n",
"print(prompt)\n",
"\n",
"print(\"Setting up the agents... \")\n",
"\n",
"# Instantiate the agents\n",
"\n",
"# Developer can accept the default prompt, model, logger, and language model interface or accept the default values from the constructor\n",
"\n",
"file_ingestion_agent = FileAccessAgent()\n",
"data_analysis_agent = PythonExecAgent()\n",
"\n",
"print(\"Understanding the contents of the file...\")\n",
"# Give a task to the file ingestion agent to read the file and provide the context to the data analysis agent \n",
"file_ingestion_agent_output = file_ingestion_agent.task(prompt)\n",
"\n",
"# add the file content as context to the data analysis agent \n",
"# The context is added to the agent's tool manager so that the tool manager can use the context to generate the code \n",
"\n",
"data_analysis_agent.add_context(prompt)\n",
"data_analysis_agent.add_context(file_ingestion_agent_output)\n",
"\n",
"while True:\n",
"\n",
" print(\"Type your question related to the data in the file. Type 'exit' to exit.\")\n",
" user_input = input(\"Type your question.\")\n",
"\n",
" if user_input == \"exit\":\n",
" print(\"Exiting the application.\")\n",
" break\n",
"\n",
" print(f\"User question: {user_input}\")\n",
"\n",
" print(\"Generating dynamic tools and using code interpreter...\")\n",
" data_analysis_agent_output = data_analysis_agent.task(user_input)\n",
"\n",
" print(\"Output...\")\n",
" print(data_analysis_agent_output)\n"
]
},
{
"cell_type": "markdown",
"id": "29f96b97",
"metadata": {},
"source": [
"In this example, the LLM dynamically generated a tool (a Python script) to analyze the data and answer the user's question that show cases the following:\n",
"\n",
"**Dynamically Generated Tool Calling**: This tool (the Python script) to analyze the data was not manually written or predetermined by the developer. Instead, the LLM itself created the relevant data exploration, correlation analysis, and machine learning code at runtime.\n",
"\n",
"**Isolated Code Execution**: To ensure security and avoid running untrusted code on the host machine, the Python script was executed inside a Docker container using the `execute_python_code` tool. This container had restricted resource access (e.g., no network and limited filesystem access), minimizing potential risks posed by arbitrary code execution.\n"
]
},
{
"cell_type": "markdown",
"id": "bb1ed586",
"metadata": {},
"source": [
"### Conclusion\n",
"\n",
"The Cookbook provides a guide for developing a **custom code interpreter** tailored to specific application needs, addressing limitations found in vendor-provided solutions such as language constraints, cost considerations, and the need for flexibility with different LLMs or models.\n",
"\n",
"**Approach for Managing Agents and Tools**: We also defined a set of core classes to manage the agents and tools. This approach ensures that the agents and tools will have a consistent interface and can be reused across different applications. A repository of agents and tools such as the [registry](./resources/registry) folder can be created to manage the agents and tools.\n",
"\n",
"To recap, the three steps to build an Agentic Application with Dynamic Tool Calling are:\n",
"1. Set up an isolated code execution container environment\n",
"2. Define and Test the Agents\n",
"3. Set up Agentic Orchestration to run the application \n",
"\n",
"We discussed the importance of isolating the code execution environment to ensure security and avoid running untrusted code on the host machine. With the use case of a CSV file, we demonstrated how to dynamically generate a tool (a Python script) to analyze the data and answer the user's question. We also showed how to execute the code in a Docker container and return the output to the user."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because it is too large

Binary file not shown (image, 409 KiB)

@ -0,0 +1,92 @@
classDiagram
%% ==========================
%% Abstract Classes
%% ==========================
class BaseAgent {
- model_name: str
- messages: ChatMessages
- tool_manager: ToolManager
- logger: Logger
- language_model_interface: LanguageModelInterface
+ __init__(developer_prompt: str, model_name: str)
+ setup_tools(): void %% abstract
+ add_context(content: str): void
+ add_message(content: str): void
+ task(user_task: str, tool_call_enabled: bool, return_tool_response_as_is: bool): str
}
class ToolInterface {
+ get_definition(): Dict[str, Any]
+ run(arguments: Dict[str, Any]): str
}
class LanguageModelInterface {
+ generate_completion(
model: str,
messages: List[Dict[str, str]],
tools: List[Dict[str, Any]]
): Dict[str, Any]
}
%% ==========================
%% Concrete Classes
%% ==========================
class ChatMessages {
- messages: List[Dict[str, str]]
+ __init__(developer_prompt: str)
+ add_developer_message(content: str): void
+ add_user_message(content: str): void
+ add_assistant_message(content: str): void
+ get_messages(): List[Dict[str, str]]
}
class ToolManager {
- tools: Dict[str, ToolInterface]
- logger: Logger
- language_model_interface: LanguageModelInterface
+ __init__()
+ register_tool(tool: ToolInterface): void
+ get_tool_definitions(): List[Dict[str, Any]]
+ handle_tool_call_sequence(
response: str,
return_tool_response_as_is: bool,
messages: ChatMessages,
model_name: str
): str
}
class OpenAILanguageModel {
- openai_client: OpenAI
- logger: Logger
+ __init__()
+ generate_completion(
model: str,
messages: List[Dict[str, str]],
tools: List[Dict[str, Any]]
): Dict[str, Any]
}
class OpenAIClientFactory {
+ create_client(api_key: str): OpenAI
- _resolve_api_key(api_key: str): str
}
%% ==========================
%% Relationships
%% ==========================
BaseAgent --> ChatMessages : uses
BaseAgent --> LanguageModelInterface : uses
BaseAgent --> ToolManager : optionally uses
ToolManager "0..*" --> "1" ToolInterface : manages
ToolManager --> ChatMessages : uses
ToolManager --> LanguageModelInterface : uses
LanguageModelInterface <|-- OpenAILanguageModel
OpenAILanguageModel --> OpenAIClientFactory : uses

@ -0,0 +1,15 @@
FROM python:3.10
RUN apt-get update && \
apt-get install -y build-essential && \
rm -rf /var/lib/apt/lists/*
# Create a non-root user
RUN useradd -m sandboxuser
USER sandboxuser
WORKDIR /home/sandboxuser
COPY requirements.txt .
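# pip runs as the non-root user here, so packages are installed to the user's site-packages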
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "--version"]

@ -0,0 +1,5 @@
numpy==1.23.5
pandas==1.5.3
matplotlib==3.7.2
seaborn==0.12.2
scikit-learn==1.2.2

Binary file not shown (image, 200 KiB)

@ -0,0 +1,37 @@
# object_oriented_agents/core_classes/agent_signature.py
from typing import Optional, Dict, Any, List
from .tool_manager import ToolManager
class AgentSignature:
"""
Encapsulates the logic to produce an agent's 'signature' data:
- The developer prompt
- The model name
- The list of tool definitions
"""
def __init__(self, developer_prompt: str, model_name: str, tool_manager: Optional[ToolManager] = None):
self.developer_prompt = developer_prompt
self.model_name = model_name
self.tool_manager = tool_manager
def to_dict(self) -> Dict[str, Any]:
"""
Return a dictionary containing:
1. The developer prompt
2. The model name
3. A list of tool definitions (function schemas)
"""
if self.tool_manager:
# Each item in get_tool_definitions() looks like {"type": "function", "function": {...}}
tool_definitions = self.tool_manager.get_tool_definitions()
# We need the whole definition for the final signature
functions = list(tool_definitions)
else:
functions = []
return {
"developer_prompt": self.developer_prompt,
"model_name": self.model_name,
"tools": functions
}

@ -0,0 +1,95 @@
# object_oriented_agents/core_classes/base_agent.py
from abc import ABC, abstractmethod
from typing import Optional
from .chat_messages import ChatMessages
from .tool_manager import ToolManager
from ..utils.logger import get_logger
from ..services.language_model_interface import LanguageModelInterface
from .agent_signature import AgentSignature
class BaseAgent(ABC):
"""
An abstract base agent that defines the high-level approach to handling user tasks
and orchestrating calls to the OpenAI API.
"""
def __init__(
self,
developer_prompt: str,
model_name: str,
logger=None,
language_model_interface: LanguageModelInterface = None
):
self.developer_prompt = developer_prompt
self.model_name = model_name
self.messages = ChatMessages(developer_prompt)
self.tool_manager: Optional[ToolManager] = None
self.logger = logger or get_logger(self.__class__.__name__)
self.language_model_interface = language_model_interface
@abstractmethod
def setup_tools(self) -> None:
pass
def add_context(self, content: str) -> None:
self.logger.debug(f"Adding context: {content}")
self.messages.add_user_message(content)
def add_message(self, content: str) -> None:
self.logger.debug(f"Adding user message: {content}")
self.messages.add_user_message(content)
def task(self, user_task: str, tool_call_enabled: bool = True, return_tool_response_as_is: bool = False) -> str:
if self.language_model_interface is None:
error_message = "Error: Cannot execute task without the LanguageModelInterface."
self.logger.error(error_message)
raise ValueError(error_message)
self.logger.debug(f"Starting task: {user_task} (tool_call_enabled={tool_call_enabled})")
# Add user message
self.add_message(user_task)
tools = []
if tool_call_enabled and self.tool_manager:
tools = self.tool_manager.get_tool_definitions()
self.logger.debug(f"Tools available: {tools}")
# Submit to OpenAI
self.logger.debug("Sending request to language model interface...")
response = self.language_model_interface.generate_completion(
model=self.model_name,
messages=self.messages.get_messages(),
tools=tools,
)
tool_calls = response.choices[0].message.tool_calls
if tool_call_enabled and self.tool_manager and tool_calls:
self.logger.debug(f"Tool calls requested: {tool_calls}")
return self.tool_manager.handle_tool_call_sequence(
response,
return_tool_response_as_is,
self.messages,
self.model_name
)
# No tool call, normal assistant response
response_message = response.choices[0].message.content
self.messages.add_assistant_message(response_message)
self.logger.debug("Task completed successfully.")
return response_message
def signature(self) -> dict:
"""
Return a dictionary with:
- The developer prompt
- The model name
- The tool definitions (function schemas)
"""
signature_obj = AgentSignature(
developer_prompt=self.developer_prompt,
model_name=self.model_name,
tool_manager=self.tool_manager
)
return signature_obj.to_dict()

@ -0,0 +1,23 @@
# object_oriented_agents/core_classes/chat_messages.py
from typing import List, Dict
class ChatMessages:
"""
Stores all messages in a conversation (developer, user, assistant).
"""
def __init__(self, developer_prompt: str):
self.messages: List[Dict[str, str]] = []
self.add_developer_message(developer_prompt)
def add_developer_message(self, content: str) -> None:
self.messages.append({"role": "developer", "content": content})
def add_user_message(self, content: str) -> None:
self.messages.append({"role": "user", "content": content})
def add_assistant_message(self, content: str) -> None:
self.messages.append({"role": "assistant", "content": content})
def get_messages(self) -> List[Dict[str, str]]:
return self.messages

@ -0,0 +1,33 @@
# object_oriented_agents/core_classes/tool_interface.py
from abc import ABC, abstractmethod
from typing import Dict, Any
class ToolInterface(ABC):
"""
An abstract class for any 'tool' that an agent can call.
Every tool must provide two things:
1) A definition (in JSON schema format) as expected by OpenAI function calling specifications.
2) A 'run' method to handle the logic given the arguments.
"""
@abstractmethod
def get_definition(self) -> Dict[str, Any]:
"""
Return the JSON/dict definition of the tool's function.
Example:
{
"function": {
"name": "<tool_function_name>",
"description": "<what this function does>",
"parameters": { <JSON schema> }
}
}
"""
pass
@abstractmethod
def run(self, arguments: Dict[str, Any]) -> str:
"""
Execute the tool using the provided arguments and return a result as a string.
"""
pass

@ -0,0 +1,102 @@
# object_oriented_agents/core_classes/tool_manager.py
import json
from typing import Dict, Any, List
from .chat_messages import ChatMessages
from .tool_interface import ToolInterface
from ..utils.logger import get_logger
from ..services.language_model_interface import LanguageModelInterface
class ToolManager:
"""
Manages one or more tools. Allows you to:
- Register multiple tools
- Retrieve their definitions
- Invoke the correct tool by name
- Handle the entire tool call sequence
"""
def __init__(self, logger=None, language_model_interface: LanguageModelInterface = None):
self.tools = {}
self.logger = logger or get_logger(self.__class__.__name__)
self.language_model_interface = language_model_interface
def register_tool(self, tool: ToolInterface) -> None:
"""
Register a tool by using its function name as the key.
"""
tool_def = tool.get_definition()
tool_name = tool_def["function"]["name"]
self.tools[tool_name] = tool
self.logger.debug(f"Registered tool '{tool_name}': {tool_def}")
def get_tool_definitions(self) -> List[Dict[str, Any]]:
"""
Return the list of tool definitions in the format expected by the OpenAI API.
"""
definitions = []
for name, tool in self.tools.items():
tool_def = tool.get_definition()["function"]
self.logger.debug(f"Tool definition retrieved for '{name}': {tool_def}")
definitions.append({"type": "function", "function": tool_def})
return definitions
def handle_tool_call_sequence(
self,
response,
return_tool_response_as_is: bool,
messages: ChatMessages,
model_name: str
) -> str:
"""
If the model wants to call a tool, parse the function arguments, invoke the tool,
then optionally return the tool's raw output or feed it back to the model for a final answer.
"""
# We take the first tool call from the model's response
first_tool_call = response.choices[0].message.tool_calls[0]
tool_name = first_tool_call.function.name
self.logger.info(f"Handling tool call: {tool_name}")
args = json.loads(first_tool_call.function.arguments)
self.logger.info(f"Tool arguments: {args}")
if tool_name not in self.tools:
error_message = f"Error: The requested tool '{tool_name}' is not registered."
self.logger.error(error_message)
raise ValueError(error_message)
# 1. Invoke the tool
self.logger.debug(f"Invoking tool '{tool_name}'")
tool_response = self.tools[tool_name].run(args)
self.logger.info(f"Tool '{tool_name}' response: {tool_response}")
# If returning the tool response "as is," just store and return it
if return_tool_response_as_is:
self.logger.debug("Returning tool response as-is without further LLM calls.")
messages.add_assistant_message(tool_response)
return tool_response
self.logger.debug(f"Tool call: {first_tool_call}")
# Otherwise, feed the tool's response back to the LLM for a final answer
function_call_result_message = {
"role": "tool",
"content": tool_response,
"tool_call_id": first_tool_call.id
}
complete_payload = messages.get_messages()
complete_payload.append(response.choices[0].message)
complete_payload.append(function_call_result_message)
self.logger.debug("Calling the model again with the tool response to get the final answer.")
# Use the injected language model interface to ask the model for a final answer
response_after_tool_call = self.language_model_interface.generate_completion(
model=model_name,
messages=complete_payload
)
final_message = response_after_tool_call.choices[0].message.content
self.logger.debug("Received final answer from model after tool call.")
messages.add_assistant_message(final_message)
return final_message

@ -0,0 +1,28 @@
# object_oriented_agents/services/language_model_interface.py
from abc import ABC, abstractmethod
from typing import Dict, Any, List, Optional
class LanguageModelInterface(ABC):
"""
Interface for interacting with a language model.
Decouples application logic from a specific LLM provider (e.g., OpenAI).
"""
@abstractmethod
def generate_completion(
self,
model: str,
messages: List[Dict[str, str]],
tools: Optional[List[Dict[str, Any]]] = None
) -> Dict[str, Any]:
"""
Generate a completion (response) from the language model given a set of messages and optional tool definitions.
:param model: The name of the model to call.
:param messages: A list of messages, where each message is a dict with keys 'role' and 'content'.
:param tools: Optional list of tool definitions.
:return: A dictionary representing the model's response. The shape of this dict follows the provider's format.
"""
pass

@ -0,0 +1,27 @@
# object_oriented_agents/services/openai_factory.py
import os
from openai import OpenAI
from ..utils.logger import get_logger
logger = get_logger("OpenAIFactory")
class OpenAIClientFactory:
@staticmethod
def create_client(api_key: str = None) -> OpenAI:
"""
Create and return an OpenAI client instance.
The API key can be passed explicitly or read from the environment.
"""
final_api_key = OpenAIClientFactory._resolve_api_key(api_key)
return OpenAI(api_key=final_api_key)
@staticmethod
def _resolve_api_key(api_key: str = None) -> str:
if api_key:
return api_key
env_key = os.getenv("OPENAI_API_KEY")
if env_key:
return env_key
error_msg = "No OpenAI API key provided. Set OPENAI_API_KEY env variable or provide as an argument."
logger.error(error_msg)
raise ValueError(error_msg)

@ -0,0 +1,46 @@
# object_oriented_agents/services/openai_language_model.py
from typing import List, Dict, Any, Optional
from .language_model_interface import LanguageModelInterface
from .openai_factory import OpenAIClientFactory
from ..utils.logger import get_logger
class OpenAILanguageModel(LanguageModelInterface):
"""
A concrete implementation of LanguageModelInterface that uses the OpenAI API.
"""
def __init__(self, openai_client=None, api_key: Optional[str] = None, logger=None):
self.logger = logger or get_logger(self.__class__.__name__)
# If no client is provided, create one using the factory
self.openai_client = openai_client or OpenAIClientFactory.create_client(api_key)
def generate_completion(
self,
model: str,
messages: List[Dict[str, str]],
tools: Optional[List[Dict[str, Any]]] = None
) -> Dict[str, Any]:
"""
Calls the OpenAI API to generate a chat completion using the provided messages and tools.
"""
kwargs = {
"model": model,
"messages": messages
}
if tools:
# Passing tools directly to the API depends on how the OpenAI implementation expects them.
# Adjust this as necessary if the API format changes.
kwargs["tools"] = tools
self.logger.debug("Generating completion with OpenAI model.")
self.logger.debug(f"Request: {kwargs}")
try:
response = self.openai_client.chat.completions.create(**kwargs)
self.logger.debug("Received response from OpenAI.")
self.logger.debug(f"Response: {response}")
return response
except Exception as e:
self.logger.error(f"OpenAI call failed: {str(e)}", exc_info=True)
raise

@ -0,0 +1,28 @@
# object_oriented_agents/utils/logger.py
import logging
from typing import Optional
def get_logger(name: str, level: int = logging.INFO, formatter: Optional[logging.Formatter] = None) -> logging.Logger:
"""
Return a logger instance with a given name and logging level.
If no formatter is provided, a default formatter will be used.
"""
logger = logging.getLogger(name)
logger.setLevel(level)
if not logger.handlers:
# Create a console handler
ch = logging.StreamHandler()
ch.setLevel(level)
# Use a default formatter if none is provided
if formatter is None:
formatter = logging.Formatter(
'%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
ch.setFormatter(formatter)
# Add the handler to the logger
logger.addHandler(ch)
return logger

@ -0,0 +1,36 @@
# object_oriented_agents/utils/openai_util.py
from typing import List, Dict, Any
from .logger import get_logger
from ..services.openai_factory import OpenAIClientFactory
logger = get_logger("OpenAIUtils")
def call_openai_chat_completion(
model: str,
messages: List[Dict[str, str]],
tools: List[Dict[str, Any]] = None,
openai_client=None,
api_key: str = None
) -> Any:
"""
A utility function to call OpenAI's chat completion.
If openai_client is provided, use it, otherwise create a new one.
"""
if openai_client is None:
openai_client = OpenAIClientFactory.create_client(api_key=api_key)
kwargs = {
"model": model,
"messages": messages,
}
if tools:
kwargs["tools"] = tools
try:
response = openai_client.chat.completions.create(**kwargs)
return response
except Exception as e:
logger.error(f"OpenAI call failed: {str(e)}")
raise

@ -0,0 +1,47 @@
import logging
import os
# Import base classes
from ...object_oriented_agents.utils.logger import get_logger
from ...object_oriented_agents.core_classes.base_agent import BaseAgent
from ...object_oriented_agents.core_classes.tool_manager import ToolManager
from ...object_oriented_agents.services.openai_language_model import OpenAILanguageModel
# Import the Tool
from ..tools.file_access_tool import FileAccessTool
# Set the verbosity level: DEBUG for verbose output, INFO for normal output, and WARNING/ERROR for minimal output
myapp_logger = get_logger("MyApp", level=logging.INFO)
# Create a LanguageModelInterface instance using the OpenAILanguageModel
language_model_api_interface = OpenAILanguageModel(api_key=os.getenv("OPENAI_API_KEY"), logger=myapp_logger)
class FileAccessAgent(BaseAgent):
"""
Agent that can only use the 'safe_file_access' tool to read CSV files.
"""
# We pass the Agent attributes in the constructor
def __init__(self,
developer_prompt: str = """
You are a helpful data science assistant. The user will provide the name of a CSV file that contains relational data. The file is in the directory ./resources/data
Instructions:
1. When the user provides the CSV file name, use the 'safe_file_access' tool to read and display the first 15 lines of that file.
2. If the specified file does not exist in the provided directory, return an appropriate error message (e.g., "Error: The specified file does not exist in the provided directory").
3. The user may request data analysis based on the file's contents, but you should NOT perform or write code for any data analysis. Your only task is to read and return the first 15 lines of the file.
Do not include any additional commentary or code not related to reading the file.
""",
model_name: str = "gpt-4o",
logger = myapp_logger,
language_model_interface = language_model_api_interface):
super().__init__(developer_prompt=developer_prompt, model_name=model_name, logger=logger, language_model_interface=language_model_interface)
self.setup_tools()
def setup_tools(self) -> None:
self.logger.debug("Setting up tools for FileAccessAgent.")
# Pass the openai_client to ToolManager
self.tool_manager = ToolManager(logger=self.logger, language_model_interface=self.language_model_interface)
# Register the one tool this agent is allowed to use
self.tool_manager.register_tool(FileAccessTool(logger=self.logger))

@ -0,0 +1,61 @@
import logging
import os
# Import base classes
from ...object_oriented_agents.utils.logger import get_logger
from ...object_oriented_agents.core_classes.base_agent import BaseAgent
from ...object_oriented_agents.core_classes.tool_manager import ToolManager
from ...object_oriented_agents.services.openai_language_model import OpenAILanguageModel
# Import the Python Code Interpreter tool
from ..tools.python_code_interpreter_tool import PythonExecTool
# Set the verbosity level: DEBUG for verbose output, INFO for normal output, and WARNING/ERROR for minimal output
myapp_logger = get_logger("MyApp", level=logging.INFO)
# Create a LanguageModelInterface instance using the OpenAILanguageModel
language_model_api_interface = OpenAILanguageModel(api_key=os.getenv("OPENAI_API_KEY"), logger=myapp_logger)
class PythonExecAgent(BaseAgent):
"""
An agent specialized in executing Python code in a Docker container.
"""
def __init__(
self,
developer_prompt: str = """
You are a helpful data science assistant. Your tasks include analyzing CSV data and generating Python code to address user queries. Follow these guidelines:
1. The user will provide the name of a CSV file located in the directory `/home/sandboxuser`.
2. The user will also supply context, including:
- Column names and their descriptions.
- Sample data from the CSV (headers and a few rows) to help understand data types.
3. Analyze the provided data using Python machine learning libraries and generate appropriate code to fulfill the user's request.
4. Generate Python code to analyze the data and call the tool `execute_python_code` to run the code inside a Docker container.
5. Execute the code in the container and return the output.
Note: All files referenced in the prompt are located in `/home/sandboxuser`.
""",
model_name: str = "o1",
logger=myapp_logger,
language_model_interface = language_model_api_interface
):
super().__init__(
developer_prompt=developer_prompt,
model_name=model_name,
logger=logger,
language_model_interface=language_model_interface
)
self.setup_tools()
def setup_tools(self) -> None:
"""
Create a ToolManager, instantiate the PythonExecTool and register it with the ToolManager.
"""
self.tool_manager = ToolManager(logger=self.logger, language_model_interface=self.language_model_interface)
# Create the Python execution tool
python_exec_tool = PythonExecTool()
# Register the Python execution tool
self.tool_manager.register_tool(python_exec_tool)

@ -0,0 +1,105 @@
from typing import Dict, Any
import pandas as pd
import subprocess
import os
from ...object_oriented_agents.utils.logger import get_logger
from ...object_oriented_agents.core_classes.tool_interface import ToolInterface
class FileAccessTool(ToolInterface):
"""
A tool to read CSV files and copy them to a Docker container.
"""
def __init__(self, logger=None):
self.logger = logger or get_logger(self.__class__.__name__)
def get_definition(self) -> Dict[str, Any]:
self.logger.debug("Returning tool definition for safe_file_access")
return {
"function": {
"name": "safe_file_access",
"description": (
"Read the contents of a file in a secure manner "
"and transfer it to the Python code interpreter docker container"
),
"parameters": {
"type": "object",
"properties": {
"filename": {
"type": "string",
"description": "Name of the file to read"
}
},
"required": ["filename"]
}
}
}
def run(self, arguments: Dict[str, Any]) -> str:
filename = arguments["filename"]
self.logger.debug(f"Running safe_file_access with filename: {filename}")
return self.safe_file_access(filename)
def safe_file_access(self, filename: str) -> str:
if not filename.endswith('.csv'):
error_msg = "Error: The file is not a CSV file."
self.logger.warning(f"{error_msg} - Filename provided: {filename}")
return error_msg
# Ensure the path is correct
if not os.path.dirname(filename):
filename = os.path.join('./resources/data', filename)
self.logger.debug(f"Attempting to read file at path: {filename}")
try:
df = pd.read_csv(filename)
self.logger.debug(f"File '{filename}' loaded successfully.")
copy_output = self.copy_file_to_container(filename)
head_str = df.head(15).to_string()
return f"{copy_output}\nThe file content for the first 15 rows is:\n{head_str}"
except FileNotFoundError:
error_msg = f"Error: The file '{filename}' was not found."
self.logger.error(error_msg)
return error_msg
except Exception as e:
error_msg = f"Error while reading the CSV file: {str(e)}"
self.logger.error(error_msg, exc_info=True)
return error_msg
def copy_file_to_container(self, local_file_name: str, container_name: str = "sandbox") -> str:
container_home_path = "/home/sandboxuser"
self.logger.debug(f"Copying '{local_file_name}' to container '{container_name}'.")
if not os.path.isfile(local_file_name):
error_msg = f"The local file '{local_file_name}' does not exist."
self.logger.error(error_msg)
raise FileNotFoundError(error_msg)
# Check if container is running
check_container_cmd = ["docker", "inspect", "-f", "{{.State.Running}}", container_name]
result = subprocess.run(check_container_cmd, capture_output=True, text=True)
if result.returncode != 0 or result.stdout.strip() != "true":
error_msg = f"The container '{container_name}' is not running."
self.logger.error(error_msg)
raise RuntimeError(error_msg)
# Copy the file into the container
container_path = f"{container_name}:{container_home_path}/{os.path.basename(local_file_name)}"
self.logger.debug(f"Running command: docker cp {local_file_name} {container_path}")
subprocess.run(["docker", "cp", local_file_name, container_path], check=True)
# Verify the file was copied
verify_cmd = ["docker", "exec", container_name, "test", "-f",
f"{container_home_path}/{os.path.basename(local_file_name)}"]
verify_result = subprocess.run(verify_cmd, capture_output=True, text=True)
if verify_result.returncode != 0:
error_msg = f"Failed to verify the file '{local_file_name}' in the container '{container_name}'."
self.logger.error(error_msg)
raise RuntimeError(error_msg)
success_msg = f"Copied {local_file_name} into {container_name}:{container_home_path}/."
self.logger.debug(success_msg)
return success_msg

@ -0,0 +1,66 @@
import subprocess
from typing import Tuple, Dict, Any
from ...object_oriented_agents.utils.logger import get_logger
from ...object_oriented_agents.core_classes.tool_interface import ToolInterface
class PythonExecTool(ToolInterface):
"""
A Tool that executes Python code securely in a container.
"""
def get_definition(self) -> Dict[str, Any]:
"""
Return the JSON/dict definition of the tool's function
in the format expected by the OpenAI function calling API.
"""
return {
"function": {
"name": "execute_python_code",
"description": "Executes Python code securely in a container. Python version 3.10 is installed in the container. pandas, numpy, matplotlib, seaborn, and scikit-learn are installed in the container.",
"parameters": {
"type": "object",
"properties": {
"python_code": {
"type": "string",
"description": "The Python code to execute"
}
},
"required": ["python_code"]
}
}
}
def run(self, arguments: Dict[str, Any]) -> str:
"""
Execute the Python code in a Docker container and return the output.
"""
python_code = arguments["python_code"]
python_code_stripped = python_code.strip('"""')
output, errors = self._run_code_in_container(python_code_stripped)
if errors:
return f"[Error]\n{errors}"
return output
@staticmethod
def _run_code_in_container(code: str, container_name: str = "sandbox") -> Tuple[str, str]:
"""
Helper function that actually runs Python code inside a Docker container named `sandbox` (by default).
"""
cmd = [
"docker", "exec", "-i",
container_name,
"python", "-c", "import sys; exec(sys.stdin.read())"
]
process = subprocess.Popen(
cmd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True
)
out, err = process.communicate(code)
return out, err

@ -1788,3 +1788,11 @@
tags:
- usage-api
- cost-api
- title: "Build Your Own Code Interpreter: Empowering LLM Agents with Dynamic Tool Calling"
path: examples/object_oriented_agentic_approach/Secure_code_interpreter_tool_for_LLM_agents.ipynb
date: 2025-01-26
authors:
- msingh-openai
tags:
- completions