Aaronwilkowitz/sharepoint (#1299)

Co-authored-by: maxreid-openai <max.reid@openai.com>
Aaron Wilkowitz 2024-07-22 15:23:47 -04:00 committed by GitHub
parent 015ba208aa
commit 8a288eb83a
10 changed files with 1922 additions and 14 deletions

View File

@ -33,6 +33,24 @@
"Note: When setting up the GPT Action, for authentication, leave it with \"None\". This is a public API and does not require any Authentication" "Note: When setting up the GPT Action, for authentication, leave it with \"None\". This is a public API and does not require any Authentication"
] ]
}, },
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Value + Example Business Use Cases"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Value**: Users can now leverage ChatGPT's natural language capability to forecast the weather\n",
"\n",
"**Example Use Cases**: \n",
"- Users can plan out their day based on weather patterns\n",
"- Users can quickly visualize (including graphs) what the weather is forecasted to look like"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@ -33,6 +33,25 @@
"Note: these instructions return back a functioning SQL statement, rather than the result itself. Currently middleware is required to return back a CSV file well be posting instructions on an example of that soon" "Note: these instructions return back a functioning SQL statement, rather than the result itself. Currently middleware is required to return back a CSV file well be posting instructions on an example of that soon"
] ]
}, },
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Value + Example Business Use Cases"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Value**: Users can now leverage ChatGPT's natural language capability to connect directly to BigQuery's DWH. \n",
"\n",
"**Example Use Cases**: \n",
"- Data scientists can connect to tables and run data analyses using ChatGPT's Data Analysis\n",
"- Citizen data users can ask basic questions of their transactional data\n",
"- Users gain more visibility into their data & potential anomalies"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@ -31,6 +31,24 @@
"This particular GPT Action provides an overview of how to connect to Outlook, Microsoft's web service for emailing and calendar events. This action assumes a users context and allows them to send and retrieve emails and calendar events from Outlook." "This particular GPT Action provides an overview of how to connect to Outlook, Microsoft's web service for emailing and calendar events. This action assumes a users context and allows them to send and retrieve emails and calendar events from Outlook."
] ]
}, },
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Value + Example Business Use Cases"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Value**: Users can now leverage ChatGPT's natural language capability to connect directly to Outlook\n",
"\n",
"**Example Use Cases**: \n",
"- A user can look up all of their meetings for the day and have ChatGPT summarize the day\n",
"- A user can email a ChatGPT output to someone directly"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@ -0,0 +1,640 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# GPT Action Library: Sharepoint (Return file for Data Analysis / Document Summarization)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This page provides an instruction & guide for developers building a GPT Action for a specific application. Before you proceed, make sure to first familiarize yourself with the following information: \n",
"- [Introduction to GPT Actions](https://platform.openai.com/docs/actions)\n",
"- [Introduction to GPT Actions Library](https://platform.openai.com/docs/actions/actions-library)\n",
"- [Example of Building a GPT Action from Scratch](https://platform.openai.com/docs/actions/getting-started)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This solution enables a GPT action to answer a users question with the context of files the user can access in SharePoint or Office365, using Microsofts Graph API [search capabilities](https://learn.microsoft.com/en-us/graph/api/resources/search-api-overview?view=graph-rest-1.0) and the ability to [retrieve files](https://learn.microsoft.com/en-us/graph/api/driveitem-get?view=graph-rest-1.0\\&tabs=http). It uses Azure Functions to process the Graph API response and convert it to a human readable format or structure it in a way ChatGPT understands. This code is meant to be directional, and you should modify it to your requirements.\n",
"\n",
"This solution uses the ability to[ retrieve files in Actions](https://platform.openai.com/docs/actions/sending-files) and use them as if you had uploaded them directly to a conversation. The Azure Function returns a base64 string that ChatGPT converts into a file. This solution can handle both structured and unstructured data, but does have size volume limitations (see docs [here](https://platform.openai.com/docs/actions/sending-files))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Value + Example Business Use Cases"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Value**: Users can now leverage ChatGPT's natural language capability to connect directly to files in Sharpeoint\n",
"\n",
"**Example Use Cases**: \n",
"- A user needs to look up which files relate to a certain topic\n",
"- A user needs an answer to a critical question, buried deep in documents"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Architecture / Example"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](../../../images/solution_1.gif)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"This solution uses a Node.js Azure Function to, based on the logged in user:\n",
"\n",
"1. Search for a relevant file that the user has access to, based on the users initial question. \n",
"\n",
"2. For each file that is found, convert it to a base64 string.\n",
"\n",
"3. Format the data in the structure ChatGPT is expecting [here](https://platform.openai.com/docs/actions/sending-files/inline-option).\n",
"\n",
"4. Return that to ChatGPT. The GPT then can use those files as if you had uploaded it to the conversation."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](../../../images/solution_1_architecture.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Application Information"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Application Key Links"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check out these links from the application before you get started:\n",
"- Application Website: https://www.microsoft.com/en-us/microsoft-365/sharepoint/collaboration\n",
"- Application API Documentation: https://learn.microsoft.com/en-us/previous-versions/office/developer/sharepoint-rest-reference/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Application Prerequisites"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before you get started, make sure you go through the following steps in your application environment:\n",
"- Access to a Sharepoint environment \n",
"- Postman (and knowledge of APIs and OAuth)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Middleware Information"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you follow the [search concept files guide](https://learn.microsoft.com/en-us/graph/search-concept-files), the [Microsoft Graph Search API](https://learn.microsoft.com/en-us/graph/search-concept-files) returns references to files that fit the criteria, but not the file contents themselves. Therefore, middleware is required, rather than hitting the MSFT endpoints directly.\n",
"\n",
"We need to restructure the response from that API so that it matches the expected structure in `openaiFileResponse` outlined [here](https://platform.openai.com/docs/actions/getting-started/inline-option)."
]
},
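{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a minimal sketch of the shape the Azure Function ultimately returns. The field names follow the `openaiFileResponse` format linked above; the values shown are placeholders:\n",
"\n",
"```javascript\n",
"// Hypothetical example of the response body the function builds for ChatGPT\n",
"const exampleResponse = {\n",
"  openaiFileResponse: [\n",
"    {\n",
"      name: \"Benefits_Overview.pdf\",   // file name from the Graph API metadata\n",
"      mime_type: \"application/pdf\",    // MIME type reported by the Graph API\n",
"      content: \"JVBERi0xLjcK...\"       // base64-encoded file contents (truncated)\n",
"    }\n",
"  ]\n",
"};\n",
"```"
]
},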
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Additional Steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Set up Azure Function"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Set up an Azure Function using the steps in the [Azure Function cookbook](https://cookbook.openai.com/examples/chatgpt/gpt_actions_library/gpt_middleware_azure_function)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Add in Function Code"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that you have an authenticated Azure Function, we can update the function to search SharePoint / O365\n",
"\n",
"2. Go to your test function and paste in the code from [this file](https://github.com/openai/openai-cookbook/blob/main/examples/chatgpt/sharepoint_azure_function/solution_one_file_retrieval.js). Save the function.\n",
"\n",
"> **This code is meant to be directional** - while it should work out of the box, it is designed to be customized to your needs (see examples towards the end of this document).\n",
"\n",
"3. Set up the following env variables by going to the **Configuration** tab on the left under **Settings.** Note that this may be listed directly in **Environment Variables** depending on your Azure UI.\n",
"\n",
" 1. `TENANT_ID`: copied from previous section\n",
"\n",
" 2. `CLIENT_ID`: copied from previous section\n",
"\n",
"4. Go to the **Console** tab under the **Development Tools**\n",
"\n",
" 1. Install the following packages in console\n",
"\n",
" 1. `npm install @microsoft/microsoft-graph-client`\n",
"\n",
" 2. `npm install axios`\n",
"\n",
"5. Once this is complete, try calling the function (POST call) from Postman again, putting the below into body (using a query and search term you think will generate responses).\n",
"\n",
" ```json\n",
" {\n",
" \"searchTerm\": \"<choose a search term>\"\n",
" }\n",
" ```\n",
"\n",
"6. If you get a response, you are ready to set this up with a Custom GPT! See the ChatGPT Section of the Azure Function page for more details on setting this up\n"
]
},
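{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you prefer to test from a script rather than Postman, the sketch below shows one way to call the deployed function. All angle-bracket values are placeholders for your function app name, function name, function key, and a user access token obtained as described in the Azure Function cookbook:\n",
"\n",
"```javascript\n",
"// Minimal test call to the deployed function (placeholder values throughout)\n",
"const axios = require('axios');\n",
"\n",
"async function testSearch() {\n",
"  const url = 'https://<your_function_app_name>.azurewebsites.net/api/<your_function_name>?code=<your_function_key>';\n",
"  const response = await axios.post(\n",
"    url,\n",
"    { searchTerm: '<choose a search term>' },\n",
"    { headers: { Authorization: 'Bearer <user_access_token>', 'Content-Type': 'application/json' } }\n",
"  );\n",
"  console.log(JSON.stringify(response.data, null, 2));\n",
"}\n",
"\n",
"testSearch().catch(console.error);\n",
"```"
]
},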
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## More Detailed Walkthrough"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"The below walks through setup instructions and walkthrough unique to this solution. You can find the entire code [here](https://github.com/openai/openai-cookbook/blob/main/examples/chatgpt/sharepoint_azure_function/solution_one_file_retrieval.js).\n",
"\n",
"### Code Walkthrough\n",
"\n",
"The below walks through the different parts of the function. Before you begin, ensure you have the required packages installed and environment variables set up (see the Installation Steps section).\n",
"\n",
"\n",
"#### Implementing the Authentication \n",
"\n",
"Below we have a few helper functions that well use in the function.\n",
"\n",
"\n",
"##### Initializing the Microsoft Graph Client\n",
"\n",
"Create a function to initialize the Graph client with an access token. This will be used to search through Office 365 and SharePoint.\n",
"\n",
"```javascript\n",
"const { Client } = require('@microsoft/microsoft-graph-client');\n",
"\n",
"function initGraphClient(accessToken) {\n",
" return Client.init({\n",
" authProvider: (done) => {\n",
" done(null, accessToken);\n",
" }\n",
" });\n",
"}\n",
"```\n",
"\n",
"##### Obtaining an On-Behalf-Of (OBO) Token\n",
"\n",
"This function uses an existing bearer token to request an OBO token from Microsoft's identity platform. This enables passing through the credentials to ensure the search only returns files the logged-in user can access.\n",
"\n",
"```javascript\n",
"const axios = require('axios');\n",
"const qs = require('querystring');\n",
"\n",
"async function getOboToken(userAccessToken) {\n",
"    const { TENANT_ID, CLIENT_ID, MICROSOFT_PROVIDER_AUTHENTICATION_SECRET } = process.env;\n",
"    const params = {\n",
"        client_id: CLIENT_ID,\n",
"        client_secret: MICROSOFT_PROVIDER_AUTHENTICATION_SECRET,\n",
"        grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',\n",
"        assertion: userAccessToken,\n",
"        requested_token_use: 'on_behalf_of',\n",
"        scope: 'https://graph.microsoft.com/.default'\n",
"    };\n",
"\n",
"    const url = `https\\://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token`;\n",
"    try {\n",
"        const response = await axios.post(url, qs.stringify(params), {\n",
"            headers: { 'Content-Type': 'application/x-www-form-urlencoded' }\n",
"        });\n",
"        return response.data.access\\_token;\n",
"    } catch (error) {\n",
"        console.error('Error obtaining OBO token:', error.response?.data || error.message);\n",
"        throw error;\n",
"    }\n",
"}\n",
"```\n",
"\n",
"#### Retrieving Content from O365 / SharePoint Items\n",
"\n",
"This function fetches the content of drive items, converts it to a base64 string, and restructures to match the `openaiFileResponse` format.\n",
"```javascript\n",
"const getDriveItemContent = async (client, driveId, itemId, name) => {\n",
"   try\n",
"       const filePath = `/drives/${driveId}/items/${itemId}`;\n",
"       const downloadPath = filePath + `/content`\n",
"       // this is where we get the contents and convert to base64\n",
"       const fileStream = await client.api(downloadPath).getStream();\n",
"       let chunks = [];\n",
"           for await (let chunk of fileStream) {\n",
"               chunks.push(chunk);\n",
"           }\n",
"       const base64String = Buffer.concat(chunks).toString('base64');\n",
"       // this is where we get the other metadata to include in response\n",
"       const file = await client.api(filePath).get();\n",
"       const mime_type = file.file.mimeType;\n",
"       const name = file.name;\n",
"       return {\"name\":name, \"mime_type\":mime_type, \"content\":base64String}\n",
"   } catch (error) {\n",
"       console.error('Error fetching drive content:', error);\n",
"       throw new Error(`Failed to fetch content for ${name}: ${error.message}`);\n",
"   }\n",
"```\n",
"\n",
"#### Creating the Azure Function to Handle Requests\n",
"\n",
"Now that we have all these helper functions, the Azure Function will orchestrate the flow, by authenticating the user, performing the search, and iterating through the search results to extract the text and retrieve the relevant parts of the text to the GPT.\n",
"\n",
"**Handling HTTP Requests:** The function starts by extracting the query and searchTerm from the HTTP request. It checks if the Authorization header is present and extracts the bearer token.\n",
"\n",
"**Authentication:** Using the bearer token, it obtains an OBO token from Microsoft's identity platform using getOboToken defined above.\n",
"\n",
"**Initializing the Graph Client:** With the OBO token, it initializes the Microsoft Graph client using initGraphClient defined above.\n",
"\n",
"**Document Search:** It constructs a search query and sends it to the Microsoft Graph API to find documents based on the searchTerm.\n",
"\n",
"**Document Processing**: For each document returned by the search:\n",
"\n",
"- It retrieves the document content using getDriveItemContent.\n",
"\n",
"- It converts the document to base64 string and restructures it to match the `openaiFileResponse` structure.\n",
"\n",
"**Response**: The function sends them back in the HTTP response.\n",
"```javascript\n",
"module.exports = async function (context, req) {\n",
" // const query = req.query.query || (req.body && req.body.query);\n",
" const searchTerm = req.query.searchTerm || (req.body && req.body.searchTerm);\n",
" if (!req.headers.authorization) {\n",
" context.res = {\n",
" status: 400,\n",
" body: 'Authorization header is missing'\n",
" };\n",
" return;\n",
" }\n",
" /// The below takes the token passed to the function, to use to get an OBO token.\n",
" const bearerToken = req.headers.authorization.split(' ')[1];\n",
" let accessToken;\n",
" try {\n",
" accessToken = await getOboToken(bearerToken);\n",
" } catch (error) {\n",
" context.res = {\n",
" status: 500,\n",
" body: `Failed to obtain OBO token: ${error.message}`\n",
" };\n",
" return;\n",
" }\n",
" // Initialize the Graph Client using the initGraphClient function defined above\n",
" let client = initGraphClient(accessToken);\n",
" // this is the search body to be used in the Microsft Graph Search API: https://learn.microsoft.com/en-us/graph/search-concept-files\n",
" const requestBody = {\n",
" requests: [\n",
" {\n",
" entityTypes: ['driveItem'],\n",
" query: {\n",
" queryString: searchTerm\n",
" },\n",
" from: 0,\n",
" // the below is set to summarize the top 10 search results from the Graph API, but can configure based on your documents.\n",
" size: 10\n",
" }\n",
" ]\n",
" };\n",
"\n",
"\n",
" try {\n",
" // This is where we are doing the search\n",
" const list = await client.api('/search/query').post(requestBody);\n",
" const processList = async () => {\n",
" // This will go through and for each search response, grab the contents of the file and summarize with gpt-3.5-turbo\n",
" const results = [];\n",
" await Promise.all(list.value[0].hitsContainers.map(async (container) => {\n",
" for (const hit of container.hits) {\n",
" if (hit.resource[\"@odata.type\"] === \"#microsoft.graph.driveItem\") {\n",
" const { name, id } = hit.resource;\n",
" // The below is where the file lives\n",
" const driveId = hit.resource.parentReference.driveId;\n",
" // we use the helper function we defined above to get the contents, convert to base64, and restructure it\n",
" const contents = await getDriveItemContent(client, driveId, id, name);\n",
" results.push(contents)\n",
" }\n",
" }));\n",
" return results;\n",
" };\n",
" let results;\n",
" if (list.value[0].hitsContainers[0].total == 0) {\n",
" // Return no results found to the API if the Microsoft Graph API returns no results\n",
" results = 'No results found';\n",
" } else {\n",
" // If the Microsoft Graph API does return results, then run processList to iterate through.\n",
" results = await processList();\n",
" // this is where we structure the response so ChatGPT knows they are files\n",
" results = {'openaiFileResponse': results}\n",
" }\n",
" context.res = {\n",
" status: 200,\n",
" body: results\n",
" };\n",
" } catch (error) {\n",
" context.res = {\n",
" status: 500,\n",
" body: `Error performing search or processing results: ${error.message}`,\n",
" };\n",
" }\n",
"};\n",
"```\n",
"### Customizations\n",
"\n",
"Below are some potential areas to customize. \n",
"\n",
"- You can customize the GPT prompt to search again a certain amount of times if nothing is found.\n",
"\n",
"- You can customize the code to only search through specific SharePoint sites or O365 Drives by customizing the search query. This will help focus the search and improve the retrieval. The function as setup now looks through all files the logged-in user can access.\n",
"\n",
"- You can update the code to only return certain types of files. For example, only return structured data / CSVs. \n",
"\n",
"- You can customize the amount of files it searches through within the call to Microsoft Graph. Note that you should only put a maximum of 10 files based on the documentation [here](https://platform.openai.com/docs/actions/getting-started). \n",
"\n",
"### Considerations\n",
"\n",
"Note that all the same limitations of Actions apply here, with regards to returning 100K characters or less and the [45 second timeout](https://platform.openai.com/docs/actions/production/timeouts).\n",
"\n",
"- Make sure you read the documentation here around [returning files](https://platform.openai.com/docs/actions/sending-files) and [file uploads](https://help.openai.com/en/articles/8555545-file-uploads-faq), as those limitations apply here."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ChatGPT Steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom GPT Instructions "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you've created a Custom GPT, copy the text below in the Instructions panel. Have questions? Check out [Getting Started Example](https://platform.openai.com/docs/actions/getting-started) to see how this step works in more detail."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"You are a Q&A helper that helps answer users questions. You have access to a documents repository through your API action. When a user asks a question, you pass in the \"searchTerm\" a single keyword or term you think you should use for the search.\n",
"\n",
"****\n",
"\n",
"Scenario 1: There are answers\n",
"\n",
"If your action returns results, then you take the results from the action and try to answer the users question. \n",
"\n",
"****\n",
"\n",
"Scenario 2: No results found\n",
"\n",
"If the response you get from the action is \"No results found\", stop there and let the user know there were no results and that you are going to try a different search term, and explain why. You must always let the user know before conducting another search.\n",
"\n",
"Example:\n",
"\n",
"****\n",
"\n",
"I found no results for \"DEI\". I am now going to try [insert term] because [insert explanation]\n",
"\n",
"****\n",
"\n",
"Then, try a different searchTerm that is similar to the one you tried before, with a single word. \n",
"\n",
"Try this three times. After the third time, then let the user know you did not find any relevant documents to answer the question, and to check SharePoint. \n",
"Be sure to be explicit about what you are searching for at each step.\n",
"\n",
"****\n",
"\n",
"In either scenario, try to answer the user's question. If you cannot answer the user's question based on the knowledge you find, let the user know and ask them to go check the HR Docs in SharePoint. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### OpenAPI Schema "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you've created a Custom GPT, copy the text below in the Actions panel. Have questions? Check out [Getting Started Example](https://platform.openai.com/docs/actions/getting-started) to see how this step works in more detail."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This expects a response that matches the file retrieval structure in our doc [here](https://platform.openai.com/docs/actions/sending-files) and passes in a `searchTerm` parameter to inform the search.\n",
">Make sure to switch the function app name, function name and code based on link copied in screenshot above"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "yaml"
}
},
"outputs": [],
"source": [
"openapi: 3.1.0\n",
"info:\n",
" title: SharePoint Search API\n",
" description: API for searching SharePoint documents.\n",
" version: 1.0.0\n",
"servers:\n",
" - url: https://{your_function_app_name}.azurewebsites.net/api\n",
" description: SharePoint Search API server\n",
"paths:\n",
" /{your_function_name}?code={enter your specific endpoint id here}:\n",
" post:\n",
" operationId: searchSharePoint\n",
" summary: Searches SharePoint for documents matching a query and term.\n",
" requestBody:\n",
" required: true\n",
" content:\n",
" application/json:\n",
" schema:\n",
" type: object\n",
" properties:\n",
" searchTerm:\n",
" type: string\n",
" description: A specific term to search for within the documents.\n",
" responses:\n",
" '200':\n",
" description: A CSV file of query results encoded in base64.\n",
" content:\n",
" application/json:\n",
" schema:\n",
" type: object\n",
" properties:\n",
" openaiFileResponseData:\n",
" type: array\n",
" items:\n",
" type: object\n",
" properties:\n",
" name:\n",
" type: string\n",
" description: The name of the file.\n",
" mime_type:\n",
" type: string\n",
" description: The MIME type of the file.\n",
" content:\n",
" type: string\n",
" format: byte\n",
" description: The base64 encoded contents of the file.\n",
" '400':\n",
" description: Bad request when the SQL query parameter is missing.\n",
" '413':\n",
" description: Payload too large if the response exceeds the size limit.\n",
" '500':\n",
" description: Server error when there are issues executing the query or encoding the results."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Authentication Instructions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below are instructions on setting up authentication with this 3rd party application. Have questions? Check out [Getting Started Example](https://platform.openai.com/docs/actions/getting-started) to see how this step works in more detail."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"*See above and on the [Azure Function cookbook](https://cookbook.openai.com/examples/chatgpt/gpt_actions_library/gpt_middleware_azure_function) for more detailed instructions on authentication.*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## FAQ & Troubleshooting"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Why are you using the Microsoft Graph API in your code instead of the [SharePoint API](https://learn.microsoft.com/en-us/sharepoint/dev/sp-add-ins/get-to-know-the-sharepoint-rest-service?tabs=csom)?\n",
"\n",
" - The SharePoint API is legacy - per the Microsoft documentation [here](https://learn.microsoft.com/en-us/sharepoint/dev/apis/sharepoint-rest-graph), “For SharePoint Online, innovation using a REST API against SharePoint is driven via the Microsoft Graph REST API's.” The Graph API gives us more flexibility, and the SharePoint API still runs into the same file issues listed in the [Why is this necessary instead of interacting with the Microsoft Graph API directly?](#why-is-this-necessary-instead-of-interacting-with-the-microsoft-api-directly) section.\n",
"\n",
"- What types of files does this support?\n",
"\n",
" It follows the same guidelines as the documentation [here](https://help.openai.com/en/articles/8555545-file-uploads-faq) about file uploads. \n",
"\n",
"- Why do I need to request an OBO token?\n",
"\n",
" - When you try to use the same token to authenticate to the Graph API as the one you use to authenticate into the Azure Function, you get an “invalid audience” token. This is because the audience for the token can only be user\\_impersonation.\n",
"\n",
" - To address this, the function requests a new token scoped to Files.Read.All within the app using the [On Behalf Of flow](https://learn.microsoft.com/en-us/entra/identity-platform/v2-oauth2-on-behalf-of-flow). This will inherit the permissions of the logged in user, meaning this function will only search through files the logged-in user has access to. \n",
"\n",
" - We are purposefully requesting a new On Behalf Of token with each request, because Azure Function Apps are meant to be stateless. You could potentially integrate this with Azure Key Vault to store the secret and retrieve programmatically. "
]
},
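{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you do pull the client secret from Azure Key Vault rather than from an app setting, a minimal sketch (assuming a vault and secret name of your choosing, and the `@azure/identity` and `@azure/keyvault-secrets` packages installed) could look like this:\n",
"\n",
"```javascript\n",
"// Hypothetical: fetch the client secret from Key Vault instead of process.env\n",
"const { DefaultAzureCredential } = require('@azure/identity');\n",
"const { SecretClient } = require('@azure/keyvault-secrets');\n",
"\n",
"async function getClientSecret() {\n",
"  const vaultUrl = 'https://<your_vault_name>.vault.azure.net';\n",
"  const client = new SecretClient(vaultUrl, new DefaultAzureCredential());\n",
"  const secret = await client.getSecret('<your_secret_name>');\n",
"  return secret.value;\n",
"}\n",
"```"
]
},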
{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Are there integrations that youd like us to prioritize? Are there errors in our integrations? File a PR or issue in our github, and well take a look.*\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -0,0 +1,701 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# GPT Action Library: Sharepoint (Return as Document)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This page provides an instruction & guide for developers building a GPT Action for a specific application. Before you proceed, make sure to first familiarize yourself with the following information: \n",
"- [Introduction to GPT Actions](https://platform.openai.com/docs/actions)\n",
"- [Introduction to GPT Actions Library](https://platform.openai.com/docs/actions/actions-library)\n",
"- [Example of Building a GPT Action from Scratch](https://platform.openai.com/docs/actions/getting-started)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This solution enables a GPT action to answer a users question with the context of files the user can access in SharePoint or Office365, using Microsofts Graph API [search capabilities](https://learn.microsoft.com/en-us/graph/api/resources/search-api-overview?view=graph-rest-1.0) and the ability to [retrieve files](https://learn.microsoft.com/en-us/graph/api/driveitem-get?view=graph-rest-1.0\\&tabs=http). It uses Azure Functions to process the Graph API response and convert it to a human readable format or structure it in a way ChatGPT understands. This code is meant to be directional, and you should modify it to your requirements.\n",
"\n",
"This solution pre-processes the file within the Azure Function. The Azure Function returns text, instead of the base64 encoded file. Due to the pre-processing and the conversion to text, this solution is best used for large, unstructured documents, and for when you want to analyze more than the amount of files supported in the first solution (see documentation [here](https://platform.openai.com/docs/actions/sending-files))."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Value + Example Business Use Cases"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Value**: Users can now leverage ChatGPT's natural language capability to connect directly to files in Sharpeoint\n",
"\n",
"**Example Use Cases**: \n",
"- A user needs to look up which files relate to a certain topic\n",
"- A user needs an answer to a critical question, buried deep in documents"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Architecture / Example"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](../../../images/solution_2.gif)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This solution uses a Node.js Azure Function to, based on the logged in user:\n",
"\n",
"1. Search for a relevant file that the user has access to, based on the users initial question.\n",
"\n",
"2. For each file that is found, convert it to a consistent readable format and retrieve all the text.\n",
"\n",
"3. Use GPT 4o mini (gpt-4o-mini) to extract the relevant text from the files based on the initial users question. Note the pricing of GPT 4o mini [here](https://openai.com/pricing#language-models) - since we are dealing with small token chunks, the cost of this step is nominal.  \n",
"\n",
"4. Returns that data to ChatGPT. The GPT then uses that information to respond to the user's initial question.\n",
"\n",
"As you can see from the below architecture diagram, the first three steps are the same as Solution 1. The main difference is that this solution converts the file to text instead of a base64 string, and then summarizes that text using GPT 4o mini."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](../../../images/solution_2_architecture.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Application Information"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Application Key Links"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check out these links from the application before you get started:\n",
"- Application Website: https://www.microsoft.com/en-us/microsoft-365/sharepoint/collaboration\n",
"- Application API Documentation: https://learn.microsoft.com/en-us/previous-versions/office/developer/sharepoint-rest-reference/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Application Prerequisites"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before you get started, make sure you go through the following steps in your application environment:\n",
"- Access to a Sharepoint environment \n",
"- Postman (and knowledge of APIs and OAuth)\n",
"- An OpenAI API Key from platform.openai.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Middleware Information"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you follow the [search concept files guide](https://learn.microsoft.com/en-us/graph/search-concept-files), the [Microsoft Graph Search API](https://learn.microsoft.com/en-us/graph/search-concept-files) returns references to files that fit the criteria, but not the file contents themselves. Therefore, middleware is required, rather than hitting the MSFT endpoints directly.\n",
"\n",
"Steps: \n",
"\n",
"1. loop through the returned files and download the files using the [Download File endpoint](https://learn.microsoft.com/en-us/graph/api/driveitem-get-content?view=graph-rest-1.0\\&tabs=http) or [Convert File endpoint](https://learn.microsoft.com/en-us/graph/api/driveitem-get-content-format?view=graph-rest-1.0\\&tabs=http)\n",
"\n",
"2. convert that Binary stream to human readable text using [pdf-parse](https://www.npmjs.com/package/pdf-parse)\n",
"\n",
"3. Then, we can optimize further by summarizing using gpt-4o-mini in the function to help with the 100,000 character limit we impose on Actions today. "
]
},
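{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustration of step 2, the sketch below shows how a downloaded (or PDF-converted) file buffer can be turned into plain text with `pdf-parse`; the `buffer` variable is a placeholder for the bytes streamed back from the Graph API:\n",
"\n",
"```javascript\n",
"// Minimal sketch: extract text from a PDF buffer using pdf-parse\n",
"const pdfParse = require('pdf-parse');\n",
"\n",
"async function extractText(buffer) {\n",
"  const parsed = await pdfParse(buffer);\n",
"  return parsed.text; // plain text pulled from the PDF\n",
"}\n",
"```"
]
},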
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Additional Steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Set up Azure Function"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Set up an Azure Function using the steps in the [Azure Function cookbook](https://cookbook.openai.com/examples/chatgpt/gpt_actions_library/gpt_middleware_azure_function)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Add in Function Code"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that you have an authenticated Azure Function, we can update the function to search SharePoint / O365\n",
"\n",
"2. Go to your test function and paste in the code from [this file]((https://github.com/openai/openai-cookbook/blob/main/examples/chatgpt/sharepoint_azure_function/solution_two_preprocessing.js)). Save the function.\n",
"\n",
"> **This code is meant to be directional** - while it should work out of the box, it is designed to be customized to your needs (see examples towards the end of this document).\n",
"\n",
"3. Set up the following env variables by going to the **Configuration** tab on the left under **Settings.** Note that this may be listed directly in **Environment Variables** depending on your Azure UI.\n",
"\n",
" 1. `TENANT_ID`: copied from previous section\n",
"\n",
" 2. `CLIENT_ID`: copied from previous section \n",
"\n",
" 3. _Solution 2 only:_\n",
"\n",
" 1. `OPENAI_API_KEY:` spin up an OpenAI API key on platform.openai.com.\n",
"\n",
"4. Go to the **Console** tab under the **Development Tools**\n",
"\n",
" 1. Install the following packages in console\n",
"\n",
" 1. `npm install @microsoft/microsoft-graph-client`\n",
"\n",
" 2. `npm install axios`\n",
"\n",
" 3. _Solution 2 only:_\n",
"\n",
" 1. `npm install pdf-parse`\n",
"\n",
" 2. `npm install openai`\n",
"\n",
"5. Once this is complete, try calling the function (POST call) from Postman again, putting the below into body (using a query and search term you think will generate responses).\n",
"\n",
" ```json\n",
" {\n",
" \"query\": \"<choose a question>\",\n",
" \"searchTerm\": \"<choose a search term>\"\n",
" }\n",
" ```\n",
"\n",
"6. If you get a response, you are ready to set this up with a Custom GPT!\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Detailed Walkthrough"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The below walks through setup instructions and walkthrough unique to this solution of pre-processing the files and extracting summaries in the Azure Function. You can find the entire code [here](https://github.com/openai/openai-cookbook/blob/main/examples/chatgpt/sharepoint_azure_function/solution_two_preprocessing.js).\n",
"\n",
"### Code Walkthrough\n",
"\n",
"#### Implementing the Authentication \n",
"\n",
"This solution follows the same authentication steps as solution 1 above - see [Initializing the Microsoft Graph Client](#initializing-the-microsoft-graph-client) and [Obtaining an On-Behalf-Of (OBO) Token](#obtaining-an-on-behalf-of-obo-token) sections.\n",
"\n",
"\n",
"#### Retrieving Content from O365 / SharePoint Items\n",
"\n",
"This function fetches the content of drive items, handling different file types and converting files to PDF when necessary for text extraction. This uses the [download endpoint](https://learn.microsoft.com/en-us/graph/api/driveitem-get-content?view=graph-rest-1.0\\&tabs=http) for PDFs and the [convert endpoint](https://learn.microsoft.com/en-us/graph/api/driveitem-get-content-format?view=graph-rest-1.0\\&tabs=http) for other supported file types.\n",
"```javascript\n",
"const getDriveItemContent = async (client, driveId, itemId, name) => {\n",
" try {\n",
" const fileType = path.extname(name).toLowerCase();\n",
" // the below files types are the ones that are able to be converted to PDF to extract the text. See https://learn.microsoft.com/en-us/graph/api/driveitem-get-content-format?view=graph-rest-1.0&tabs=http\n",
" const allowedFileTypes = ['.pdf', '.doc', '.docx', '.odp', '.ods', '.odt', '.pot', '.potm', '.potx', '.pps', '.ppsx', '.ppsxm', '.ppt', '.pptm', '.pptx', '.rtf'];\n",
" // filePath changes based on file type, adding ?format=pdf to convert non-pdf types to pdf for text extraction, so all files in allowedFileTypes above are converted to pdf\n",
" const filePath = `/drives/${driveId}/items/${itemId}/content` + ((fileType === '.pdf' || fileType === '.txt' || fileType === '.csv') ? '' : '?format=pdf');\n",
" if (allowedFileTypes.includes(fileType)) {\n",
" response = await client.api(filePath).getStream();\n",
" // The below takes the chunks in response and combines\n",
" let chunks = [];\n",
" for await (let chunk of response) {\n",
" chunks.push(chunk);\n",
" }\n",
" let buffer = Buffer.concat(chunks);\n",
" // the below extracts the text from the PDF.\n",
" const pdfContents = await pdfParse(buffer);\n",
" return pdfContents.text;\n",
" } else if (fileType === '.txt') {\n",
" // If the type is txt, it does not need to create a stream and instead just grabs the content\n",
" response = await client.api(filePath).get();\n",
" return response;\n",
" } else if (fileType === '.csv') {\n",
" response = await client.api(filePath).getStream();\n",
" let chunks = [];\n",
" for await (let chunk of response) {\n",
" chunks.push(chunk);\n",
" }\n",
" let buffer = Buffer.concat(chunks);\n",
" let dataString = buffer.toString('utf-8');\n",
" return dataString\n",
" \n",
" } else {\n",
" return 'Unsupported File Type';\n",
" }\n",
" \n",
" } catch (error) {\n",
" console.error('Error fetching drive content:', error);\n",
" throw new Error(`Failed to fetch content for ${name}: ${error.message}`);\n",
" }\n",
"};\n",
"```\n",
"\n",
"#### Integrating GPT 4o mini for Text Analysis\n",
"\n",
"This function utilizes the OpenAI SDK to analyze text extracted from documents and find relevant information based on a user query. This helps to ensure only relevant text to the users question is returned to the GPT. \n",
"\n",
"```javascript\n",
"const getRelevantParts = async (text, query) => {\n",
" try {\n",
" // We use your OpenAI key to initialize the OpenAI client\n",
" const openAIKey = process.env[\"OPENAI_API_KEY\"];\n",
" const openai = new OpenAI({\n",
" apiKey: openAIKey,\n",
" });\n",
" const response = await openai.chat.completions.create({\n",
" // Using gpt-4o-mini due to speed to prevent timeouts. You can tweak this prompt as needed\n",
" model: \"gpt-4o-mini\",\n",
" messages: [\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant that finds relevant content in text based on a query. You only return the relevant sentences, and you return a maximum of 10 sentences\"},\n",
" {\"role\": \"user\", \"content\": `Based on this question: **\"${query}\"**, get the relevant parts from the following text:*****\\n\\n${text}*****. If you cannot answer the question based on the text, respond with 'No information provided'`}\n",
" ],\n",
" // using temperature of 0 since we want to just extract the relevant content\n",
" temperature: 0,\n",
" // using max_tokens of 1000, but you can customize this based on the number of documents you are searching. \n",
" max_tokens: 1000\n",
" });\n",
" return response.choices[0].message.content;\n",
" } catch (error) {\n",
" console.error('Error with OpenAI:', error);\n",
" return 'Error processing text with OpenAI' + error;\n",
" }\n",
"};\n",
"```\n",
"\n",
"#### Creating the Azure Function to Handle Requests\n",
"\n",
"Now that we have all these helper functions, the Azure Function will orchestrate the flow, by authenticating the user, performing the search, and iterating through the search results to extract the text and retrieve the relevant parts of the text to the GPT.\n",
"\n",
"**Handling HTTP Requests:** The function starts by extracting the query and searchTerm from the HTTP request. It checks if the Authorization header is present and extracts the bearer token.\n",
"\n",
"**Authentication:** Using the bearer token, it obtains an OBO token from Microsoft's identity platform using getOboToken defined above.\n",
"\n",
"**Initializing the Graph Client:** With the OBO token, it initializes the Microsoft Graph client using initGraphClient defined above.\n",
"\n",
"**Document Search:** It constructs a search query and sends it to the Microsoft Graph API to find documents based on the searchTerm.\n",
"\n",
"**Document Processing**: For each document returned by the search:\n",
"\n",
"- It retrieves the document content using getDriveItemContent.\n",
"\n",
"- If the file type is supported, it analyzes the content using getRelevantParts, which sends the text to OpenAI's model for extracting relevant information based on the query.\n",
"\n",
"- It collects the analysis results and includes metadata like the document name and URL.\n",
"\n",
"**Response**: The function sorts the results by relevance and sends them back in the HTTP response.\n",
"\n",
"```javascript\n",
"module.exports = async function (context, req) {\n",
" const query = req.query.query || (req.body && req.body.query);\n",
" const searchTerm = req.query.searchTerm || (req.body && req.body.searchTerm);\n",
" if (!req.headers.authorization) {\n",
" context.res = {\n",
" status: 400,\n",
" body: 'Authorization header is missing'\n",
" };\n",
" return;\n",
" }\n",
" /// The below takes the token passed to the function, to use to get an OBO token.\n",
" const bearerToken = req.headers.authorization.split(' ')[1];\n",
" let accessToken;\n",
" try {\n",
" accessToken = await getOboToken(bearerToken);\n",
" } catch (error) {\n",
" context.res = {\n",
" status: 500,\n",
" body: `Failed to obtain OBO token: ${error.message}`\n",
" };\n",
" return;\n",
" }\n",
" // Initialize the Graph Client using the initGraphClient function defined above\n",
" let client = initGraphClient(accessToken);\n",
" // this is the search body to be used in the Microsft Graph Search API: https://learn.microsoft.com/en-us/graph/search-concept-files\n",
" const requestBody = {\n",
" requests: [\n",
" {\n",
" entityTypes: ['driveItem'],\n",
" query: {\n",
" queryString: searchTerm\n",
" },\n",
" from: 0,\n",
" // the below is set to summarize the top 10 search results from the Graph API, but can configure based on your documents. \n",
" size: 10\n",
" }\n",
" ]\n",
" };\n",
"\n",
" try { \n",
" // Function to tokenize content (e.g., based on words). \n",
" const tokenizeContent = (content) => {\n",
" return content.split(/\\s+/);\n",
" };\n",
"\n",
" // Function to break tokens into 10k token windows for gpt-4o-mini\n",
" const breakIntoTokenWindows = (tokens) => {\n",
" const tokenWindows = []\n",
" const maxWindowTokens = 10000; // 10k tokens\n",
" let startIndex = 0;\n",
"\n",
" while (startIndex < tokens.length) {\n",
" const window = tokens.slice(startIndex, startIndex + maxWindowTokens);\n",
" tokenWindows.push(window);\n",
" startIndex += maxWindowTokens;\n",
" }\n",
"\n",
" return tokenWindows;\n",
" };\n",
" // This is where we are doing the search\n",
" const list = await client.api('/search/query').post(requestBody);\n",
"\n",
" const processList = async () => {\n",
" // This will go through and for each search response, grab the contents of the file and summarize with gpt-4o-mini\n",
" const results = [];\n",
"\n",
" await Promise.all(list.value[0].hitsContainers.map(async (container) => {\n",
" for (const hit of container.hits) {\n",
" if (hit.resource[\"@odata.type\"] === \"#microsoft.graph.driveItem\") {\n",
" const { name, id } = hit.resource;\n",
" // We use the below to grab the URL of the file to include in the response\n",
" const webUrl = hit.resource.webUrl.replace(/\\s/g, \"%20\");\n",
" // The Microsoft Graph API ranks the reponses, so we use this to order it\n",
" const rank = hit.rank;\n",
" // The below is where the file lives\n",
" const driveId = hit.resource.parentReference.driveId;\n",
" const contents = await getDriveItemContent(client, driveId, id, name);\n",
" if (contents !== 'Unsupported File Type') {\n",
" // Tokenize content using function defined previously\n",
" const tokens = tokenizeContent(contents);\n",
"\n",
" // Break tokens into 10k token windows\n",
" const tokenWindows = breakIntoTokenWindows(tokens);\n",
"\n",
" // Process each token window and combine results\n",
" const relevantPartsPromises = tokenWindows.map(window => getRelevantParts(window.join(' '), query));\n",
" const relevantParts = await Promise.all(relevantPartsPromises);\n",
" const combinedResults = relevantParts.join('\\n'); // Combine results\n",
"\n",
" results.push({ name, webUrl, rank, contents: combinedResults });\n",
" } \n",
" else {\n",
" results.push({ name, webUrl, rank, contents: 'Unsupported File Type' });\n",
" }\n",
" }\n",
" }\n",
" }));\n",
"\n",
" return results;\n",
" };\n",
" let results;\n",
" if (list.value[0].hitsContainers[0].total == 0) {\n",
" // Return no results found to the API if the Microsoft Graph API returns no results\n",
" results = 'No results found';\n",
" } else {\n",
" // If the Microsoft Graph API does return results, then run processList to iterate through.\n",
" results = await processList();\n",
" results.sort((a, b) => a.rank - b.rank);\n",
" }\n",
" context.res = {\n",
" status: 200,\n",
" body: results\n",
" };\n",
" } catch (error) {\n",
" context.res = {\n",
" status: 500,\n",
" body: `Error performing search or processing results: ${error.message}`,\n",
" };\n",
" }\n",
"};\n",
"```\n",
"\n",
"### Customizations\n",
"\n",
"Below are some potential areas to customize. \n",
"\n",
"- You can customize the GPT prompt to search again a certain amount of times if nothing is found.\n",
"\n",
"- You can customize the code to only search through specific SharePoint sites or O365 Drives by customizing the search query. This will help focus the search and improve the retrieval. The function as setup now looks through all files the logged-in user can access.\n",
"\n",
"- You could use gpt-4o instead of gpt-4o-mini. This would slightly increase the cost and latency, but you may get higher quality summarizations.\n",
"\n",
"- You can customize the amount of files it searches through within the call to Microsoft Graph.\n",
"\n",
"\n",
"### Considerations\n",
"\n",
"Note that all the same limitations of Actions apply here, with regards to returning 100K characters or less and the [45 second timeout](https://platform.openai.com/docs/actions/production/timeouts).\n",
"\n",
"\n",
"- This only works for text, not for images. With some additional code in the Azure Function, you could customize this by using GPT-4o to extract summarizations of images.\n",
"\n",
"- This does not work for structured data. We recommend Solution 1 if structured data is a major part of your use case."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ChatGPT Steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom GPT Instructions "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you've created a Custom GPT, copy the text below in the Instructions panel. Have questions? Check out [Getting Started Example](https://platform.openai.com/docs/actions/getting-started) to see how this step works in more detail."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"You are a Q&A helper that helps answer users questions. You have access to a documents repository through your API action. When a user asks a question, you pass in that question exactly as stated to the \"query\" parameter, and for the \"searchTerm\" you use a single keyword or term you think you should use for the search.\n",
"\n",
"****\n",
"\n",
"Scenario 1: There are answers\n",
"\n",
"If your action returns results, then you take the results from the action and summarize concisely with the webUrl returned from the action. You answer the users question to the best of your knowledge from the action\n",
"\n",
"****\n",
"\n",
"Scenario 2: No results found\n",
"\n",
"If the response you get from the action is \"No results found\", stop there and let the user know there were no results and that you are going to try a different search term, and explain why. You must always let the user know before conducting another search.\n",
"\n",
"Example:\n",
"\n",
"****\n",
"\n",
"I found no results for \"DEI\". I am now going to try [insert term] because [insert explanation]\n",
"\n",
"****\n",
"\n",
"Then, try a different searchTerm that is similar to the one you tried before, with a single word. \n",
"\n",
"Try this three times. After the third time, then let the user know you did not find any relevant documents to answer the question, and to check SharePoint. Be sure to be explicit about what you are searching for at each step.\n",
"\n",
"****\n",
"\n",
"In either scenario, try to answer the user's question. If you cannot answer the user's question based on the knowledge you find, let the user know and ask them to go check the HR Docs in SharePoint. If the file is a CSV, XLSX, or XLS, you can tell the user to download the file using the link and re-upload to use Advanced Data Analysis."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### OpenAPI Schema "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you've created a Custom GPT, copy the text below in the Actions panel. Have questions? Check out [Getting Started Example](https://platform.openai.com/docs/actions/getting-started) to see how this step works in more detail."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The below spec passes in the `query` parameter to inform the pre-processing and a `searchTerm` to find the right files in Microsoft Graph.\n",
">Make sure to switch the function app name, function name and code based on link copied in screenshot above"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "yaml"
}
},
"outputs": [],
"source": [
"openapi: 3.1.0\n",
"info:\n",
" title: SharePoint Search API\n",
" description: API for searching SharePoint documents.\n",
" version: 1.0.0\n",
"servers:\n",
" - url: https://{your_function_app_name}.azurewebsites.net/api\n",
" description: SharePoint Search API server\n",
"paths:\n",
" /{your_function_name}?code={enter your specific endpoint id here}:\n",
" post:\n",
" operationId: searchSharePoint\n",
" summary: Searches SharePoint for documents matching a query and term.\n",
" requestBody:\n",
" required: true\n",
" content:\n",
" application/json:\n",
" schema:\n",
" type: object\n",
" properties:\n",
" query:\n",
" type: string\n",
" description: The full query to search for in SharePoint documents.\n",
" searchTerm:\n",
" type: string\n",
" description: A specific term to search for within the documents.\n",
" responses:\n",
" '200':\n",
" description: Search results\n",
" content:\n",
" application/json:\n",
" schema:\n",
" type: array\n",
" items:\n",
" type: object\n",
" properties:\n",
" documentName:\n",
" type: string\n",
" description: The name of the document.\n",
" snippet:\n",
" type: string\n",
" description: A snippet from the document containing the search term.\n",
" url:\n",
" type: string\n",
" description: The URL to access the document."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Authentication Instructions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below are instructions on setting up authentication with this 3rd party application. Have questions? Check out [Getting Started Example](https://platform.openai.com/docs/actions/getting-started) to see how this step works in more detail."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"*See above and on the [Azure Function cookbook](https://cookbook.openai.com/examples/chatgpt/gpt_actions_library/gpt_middleware_azure_function) for more detailed instructions on authentication.*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## FAQ & Troubleshooting"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Why are you using the Microsoft Graph API in your code instead of the [SharePoint API](https://learn.microsoft.com/en-us/sharepoint/dev/sp-add-ins/get-to-know-the-sharepoint-rest-service?tabs=csom)?\n",
"\n",
" - The SharePoint API is legacy - per the Microsoft documentation [here](https://learn.microsoft.com/en-us/sharepoint/dev/apis/sharepoint-rest-graph), “For SharePoint Online, innovation using a REST API against SharePoint is driven via the Microsoft Graph REST API's.” The Graph API gives us more flexibility, and the SharePoint API still runs into the same file issues listed in the [Why is this necessary instead of interacting with the Microsoft Graph API directly?](#why-is-this-necessary-instead-of-interacting-with-the-microsoft-api-directly) section.\n",
"\n",
"- What types of files does this support?\n",
" 1. This supports all files listed in the documentation for the Convert File endpoint [_here_](https://learn.microsoft.com/en-us/graph/api/driveitem-get-content-format?view=graph-rest-1.0\\&tabs=http). Specifically, it supports _pdf, doc, docx, odp, ods, odt, pot, potm, potx, pps, ppsx, ppsxm, ppt, pptm, pptx, rtf_.\n",
"\n",
" 2. When a search result returns XLS, XLSX, or CSV, this prompts the user to download the file and re-upload to ask questions using Advanced Data Analysis. As stated above, we recommend solution 1 if structured data is part of your use case.\n",
"\n",
"- Why do I need to request an OBO token?\n",
"\n",
" - When you try to use the same token to authenticate to the Graph API as the one you use to authenticate into the Azure Function, you get an “invalid audience” token. This is because the audience for the token can only be user\\_impersonation.\n",
"\n",
" - To address this, the function requests a new token scoped to Files.Read.All within the app using the [On Behalf Of flow](https://learn.microsoft.com/en-us/entra/identity-platform/v2-oauth2-on-behalf-of-flow). This will inherit the permissions of the logged in user, meaning this function will only search through files the logged-in user has access to. \n",
"\n",
" - We are purposefully requesting a new On Behalf Of token with each request, because Azure Function Apps are meant to be stateless. You could potentially integrate this with Azure Key Vault to store the secret and retrieve programmatically. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Are there integrations that youd like us to prioritize? Are there errors in our integrations? File a PR or issue in our github, and well take a look.*\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -0,0 +1,485 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# GPT Action Library (Middleware): Azure Function"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This page provides an instruction & guide for developers building middleware to connect a GPT Action to a specific application. Before you proceed, make sure to first familiarize yourself with the following information: \n",
"- [Introduction to GPT Actions](https://platform.openai.com/docs/actions)\n",
"- [Introduction to GPT Actions Library](https://platform.openai.com/docs/actions/actions-library)\n",
"- [Example of Building a GPT Action from Scratch](https://platform.openai.com/docs/actions/getting-started)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This particular GPT Action provides an overview of how to build an **Azure Function**, MSFT's cloud-based function builder. This documentation helps a user set up an OAuth-protected Azure Function to connect to a GPT Action, and to a sample application. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Value + Example Business Use Cases"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Value**: Users can now leverage ChatGPT's natural language capability to connect directly to Azure Function. This can in a few ways:\n",
"\n",
"- 100k character limit in GPT Actions: users can use the middleware to pre-process the text response from an API. For example, you can use OpenAIs API in the middleware to summarize the text before sending it back to ChatGPT.\n",
"- Typically for actions, users are relying on the SaaS API to return text. You can convert the response for the vendor API into easily digestible text, and it can handle different data types such as structured and unstructured data.\n",
"- It can return files instead of just text. This can be useful to surface CSV files for Data Analysis, or bring back an PDF file and ChatGPT will treat it like an upload. \n",
"\n",
"\n",
"**Example Use Cases**: \n",
"- A user needs to look up files in Sharepoint, but needs a middleware app between ChatGPT and Sharepoint\n",
"- A user has built several steps in a row in an Azure function, and needs to be able to kick off that process using ChatGPT"
]
},
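{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough illustration of the pre-processing point above, the middleware can call the OpenAI API to condense a long upstream response before handing it back to ChatGPT. The sketch below is illustrative only: it assumes the `openai` Node.js package (v4+) and an `OPENAI_API_KEY` environment variable, and the model and prompt are placeholders you would tune for your use case."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "javascript"
}
},
"outputs": [],
"source": [
"// Sketch: summarize a long upstream API response inside the middleware so the\n",
"// text returned to the GPT Action stays well under the response size limit.\n",
"const OpenAI = require(\"openai\");\n",
"\n",
"const summarizeForAction = async (longText, userQuery) => {\n",
"  const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });\n",
"  const completion = await openai.chat.completions.create({\n",
"    model: \"gpt-4o-mini\", // placeholder - choose a fast model to avoid Action timeouts\n",
"    messages: [\n",
"      { role: \"system\", content: \"Summarize the provided text so it answers the user's question as concisely as possible.\" },\n",
"      { role: \"user\", content: \"Question: \" + userQuery + \" --- Text: \" + longText }\n",
"    ]\n",
"  });\n",
"  return completion.choices[0].message.content;\n",
"};"
]
},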
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Application Information"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Application Key Links"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check out these links from the application before you get started:\n",
"- Application Website: https://learn.microsoft.com/en-us/azure/azure-functions/\n",
"- Application API Documentation: https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Application Prerequisites"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before you get started, make sure you go through the following steps in your application environment:\n",
"- Azure Portal with access to create Azure Function Apps and Azure Entra App Registrations"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Application Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installing the app"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can read more about languages and deployment options for Azure Functions on the left hand side of the documentation [here](https://learn.microsoft.com/en-us/azure/azure-functions/functions-overview?pivots=programming-language-csharp). "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Option 1: Use VSCode\n",
"\n",
"See Microsofts documentation [here](https://learn.microsoft.com/en-us/azure/azure-functions/functions-develop-vs-code?tabs=node-v4,python-v2,isolated-process\\&pivots=programming-language-javascript) for how to deploy using VSCode. If you have familiarity with this approach, feel free to use it. \n",
"\n",
"\n",
"#### Option 2: Directly in Azure Portal\n",
"\n",
"See the documentation [here](https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-function-app-portal?pivots=programming-language-javascript) for how to deploy using the Azure portal. Well walk through an example here step by step.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Part 1: Create Function"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](../../../images/create_function_app.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Create an [Azure Function app](https://learn.microsoft.com/en-us/azure/azure-functions/functions-overview?pivots=programming-language-csharp). I used the following settings but you can use anything you are comfortable with. Note that not every language / operating system allows for editing the functions in the console directly - the combination I chose below does. For my walkthrough, I left everything as default and made the selections below. The below settings work out of the box for the SharePoint Node.js solutions [here](https://cookbook.openai.com/examples/chatgpt/gpt_actions_library/gpt_action_sharepoint_doc) and [here](https://cookbook.openai.com/examples/chatgpt/gpt_actions_library/gpt_action_sharepoint_text). \n",
"\n",
" 1. Basics\n",
"\n",
" 1. _Do you want to deploy code or container image?:_  **Code**\n",
"\n",
" 2. _Runtime stack:_ **Node.js**\n",
"\n",
" 3. _Operating system:_ **Windows**\n",
"\n",
" 2. Networking\n",
"\n",
" 1. _Enable public access_: **on (need this on to connect to the GPT)**\n",
"\n",
"2. After completing the above, youll land on the “Deployments” page. Once the deployment completes (which should only take a few minutes) click on **“Go to Resource”** to go back to the Function App\n",
"\n",
" > You may get an error the first time you attempt this, click create again and it will likely work. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Part 2: Set up Auth"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"3. On the left-hand side menu of the Azure Function App, click on **Authentication** under the **Settings** menu. \n",
"\n",
" 1. Add identity provider\n",
"\n",
" 2. Select **Microsoft** as identity provider. \n",
"\n",
" 3. **Workforce** as tenant type\n",
"\n",
" 4. **Create a new application.** The instructions are fairly similar if you are using an existing application, but it is easier to create a new application as it will have the callback URLs and the API exposed automatically using “Easy Auth”. You can read more about that [**here**](https://learn.microsoft.com/en-us/azure/app-service/overview-authentication-authorization).\n",
"\n",
" 5. Leave all the other settings on this page as the default, but feel free to change based on your internal guidelines.\n",
"\n",
" 6. On the **permissions** tab, click **Add Permission** and add **Files.Read.All** and **Sites.ReadAll**, then **Add.** This allows this application to read files which is important in order to use the Microsoft Graph Search API. If you are not using this for the SharePoint solution [here](https://cookbook.openai.com/examples/chatgpt/gpt_actions_library/gpt_action_sharepoint_doc) and [here](https://cookbook.openai.com/examples/chatgpt/gpt_actions_library/gpt_action_sharepoint_text) you can skip this.\n",
"\n",
"4. Once it is created, **click on the enterprise application you just created** (so, leave the Function App page and land on the Enterprise Application that you just spun up)**.** We are now going to give it one more permission, to execute the Azure Function by impersonating the user logging into the application. See [here](https://learn.microsoft.com/en-us/azure/app-service/configure-authentication-provider-aad?tabs=workforce-tenant) for more details.\n",
"\n",
" 1. On the main page, click “**View API Permissions”**\n",
"\n",
" 2. Search for **Microsoft Azure App Service** in the **APIs my organization uses** and find **user\\_impersonation** \n",
"\n",
" 3. Add it, then youll need an Admin on Azure Portal to **Grant Admin Consent.**\n",
"\n",
"5) **Within that enterprise application**, Click on **“Expose an API”** on the left hand menu under **Manage,** then copy the **scope** that was created using the **Copy to Clipboard** button. The scope should look like “api://\\<insert-uuid>/user\\_impersonation”. **Save this for later as** `SCOPE`**.**\n",
"\n",
"6) Click on **“Authentication”** on the left hand menu under **Manage**\n",
"\n",
" 1. Under the **Web** section, youll notice one callback URI was added automatically. Add the Postman redirect URI (<https://oauth.pstmn.io/v1/callback>) for testing.\n",
"\n",
"7) On the left-hand side, go to **Overview**. Copy the **application (client) ID** and and the **directory (tenant) ID** and **save for later as** `CLIENT_ID` **and** `TENANT_ID`**.**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Part 3: Set up Test Function"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"8. Leave the page by going home and then back to your **Function App.**\n",
"\n",
"9. Click on **Create Function.** For this example, Im going to develop it in the portal, but you can also use VSCode or another IDE.\n",
"\n",
" 1. Choose **HTTP trigger**\n",
"\n",
" 2. For **Authorization Level,** you can choose any key type you want.\n",
"\n",
" 1. Note this may error out the first time, but it is likely the Function did create, do a refresh of the page to check.\n",
"\n",
"10. Click on the function you just created (You may need to click refresh to see it). Click on **Get Function URL** and save it to test in Postman. You will also use this when creating the OpenAPI spec later when you put it into the GPT. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](../../../images/get_function_url.png)"
]
},
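{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the portal-generated Node.js HTTP trigger (v3 programming model) looks roughly like the sketch below - it is what produces the success message you will test against in Part 4. Treat it as illustrative; the template you get may differ slightly depending on runtime version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "javascript"
}
},
"outputs": [],
"source": [
"// Default-style HTTP trigger: returns a greeting if a name is provided, otherwise a generic success message.\n",
"module.exports = async function (context, req) {\n",
"  context.log(\"JavaScript HTTP trigger function processed a request.\");\n",
"\n",
"  const name = req.query.name || (req.body && req.body.name);\n",
"  const responseMessage = name\n",
"    ? \"Hello, \" + name + \". This HTTP triggered function executed successfully.\"\n",
"    : \"This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.\";\n",
"\n",
"  context.res = { body: responseMessage };\n",
"};"
]
},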
{
"cell_type": "markdown",
"metadata": {},
"source": [
"11. Go back to the function app and click on **Configuration.** Show the value for the `MICROSOFT_PROVIDER_AUTHENTICATION_SECRET` variable, copy it (click advanced edit to copy it), and **save it for later.**  \n",
"\n",
"At this point, you should have a test function created, and you should have saved a **client id, tenant id, secret, scope, and function URL**. You are now ready to test out the authentication in Postman"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Part 4: Test Authentication in Postman"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"12. Try to hit endpoint you created in Postman using those OAuth settings:\n",
"\n",
" 1. **Grant Type:** Authorization Code\n",
"\n",
" 2. **Auth URL**: https://login.microsoftonline.com/`TENANT_ID`/oauth2/v2.0/authorize\n",
"\n",
" 3. **Auth Token URL**: https://login.microsoftonline.com/`TENANT_ID`/oauth2/v2.0/token\n",
"\n",
" 4. **Client ID:** `CLIENT_ID` from step 7 above\n",
"\n",
" 5. **Client secret:** `MICROSOFT_PROVIDER_AUTHENTICATION_SECRET `from step 11 above\n",
"\n",
" 6. **Scope**: `SCOPE` from step 5 above\n",
"\n",
" 7. **Client credentials**: Send client credentials in body\n",
"\n",
"13. You will need to click **Get New Access Token**, and then hit the endpoint you saved in step 10 above. If it was successful, you should get this response: `”This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.”`"
]
},
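{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you prefer to sanity-check outside of Postman, the equivalent request is simply a call to the function URL from step 10 with the access token in the Authorization header. Below is a minimal sketch, assuming Node 18+ (for the global `fetch`) and that you already obtained `accessToken` through the OAuth flow above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "javascript"
}
},
"outputs": [],
"source": [
"// Sketch: call the OAuth-protected function endpoint with a bearer token.\n",
"const testFunction = async (functionUrl, accessToken) => {\n",
"  const response = await fetch(functionUrl, {\n",
"    headers: { Authorization: `Bearer ${accessToken}` }\n",
"  });\n",
"  console.log(response.status, await response.text());\n",
"  // Expect: \"This HTTP triggered function executed successfully. ...\"\n",
"};"
]
},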
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Part 5: Set up your Application on an Azure Function"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This should be done separately and is specific to your app. See the [Sharepoint Cookbook]((https://cookbook.openai.com/examples/chatgpt/gpt_actions_library/gpt_action_sharepoint_doc)) for an example of that. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Part 6: Set up ChatGPT"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"14. Generate an OpenAPI spec for your endpoint. \n",
"\n",
"15. Paste that into the Actions section of a GPT, and choose OAuth as the authentication type. Fill out the OAuth settings the same way you did for Postman above. \n",
"\n",
"16. Once you save the action, you will see a callback URI at the bottom of the GPT configuration. Copy that URL, then go **back to your Function App in the Azure Portal**.\n",
"\n",
"17. Click on **Authentication** under **Settings**, then click on your Entra application.\n",
"\n",
"18. Once you are there, then click **Authentication** under the **Manage** section.\n",
"\n",
"19. Add a new Redirect URI under the **Web** section of that page, and paste in the Callback URI you got from step 16, then click Save. \n",
"\n",
"20. Test out the GPT and it should work as expected."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ChatGPT Steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom GPT Instructions "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"*This is application specific. See [Sharepoint Cookbook]((https://cookbook.openai.com/examples/chatgpt/gpt_actions_library/gpt_action_sharepoint_doc)) for an example*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### OpenAPI Schema "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you've created a Custom GPT, copy the text below in the Actions panel. Have questions? Check out [Getting Started Example](https://platform.openai.com/docs/actions/getting-started) to see how this step works in more detail."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is an example of what connecting to this Middlware might look like. You'll need to insert your application's & function's information in this section. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "yaml"
}
},
"outputs": [],
"source": [
"openapi: 3.1.0\n",
"info:\n",
" title: {insert title}\n",
" description: {insert description}\n",
" version: 1.0.0\n",
"servers:\n",
" - url: https://{your_function_app_name}.azurewebsites.net/api\n",
" description: {insert description}\n",
"paths:\n",
" /{your_function_name}?code={enter your specific endpoint id here}:\n",
" post:\n",
" operationId: {insert operationID}\n",
" summary: {insert summary}\n",
" requestBody: \n",
"{the rest of this is specific to your application}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Authentication Instructions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below are instructions on setting up authentication with this 3rd party application. Have questions? Check out [Getting Started Example](https://platform.openai.com/docs/actions/getting-started) to see how this step works in more detail."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pre-Action Steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before you set up authentication in ChatGPT, please take the following steps in the application."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Follow steps 2 & 4 above to setting up authentication*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### In ChatGPT"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In ChatGPT, click on \"Authentication\" and choose **\"OAuth\"**. Enter in the information below. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- **Client ID**: *see step 12 above*\n",
"- **Client Secret**: *ditto*\n",
"- **Authorization URL**: *ditto*\n",
"- **Token URL**: *ditto*\n",
"- **Scope**: *ditto*\n",
"- **Token**: *ditto*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Post-Action Steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you've set up authentication in ChatGPT, follow the steps below in the application to finalize the Action. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"*See above for testing out this application*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Are there integrations that youd like us to prioritize? Are there errors in our integrations? File a PR or issue in our github, and well take a look.*\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -578,7 +578,6 @@ paths:
description: Payload too large if the response exceeds the size limit. description: Payload too large if the response exceeds the size limit.
'500': '500':
description: Server error when there are issues executing the query or encoding the results. description: Server error when there are issues executing the query or encoding the results.
``` ```
## Solution 2 Detailed Walkthrough: Converting the file to text in the Azure Function ## Solution 2 Detailed Walkthrough: Converting the file to text in the Azure Function

View File

@@ -88,7 +88,7 @@ const getDriveItemContent = async (client, driveId, itemId, name) => {
} }
}; };
// Function to get relevant parts of text using gpt-3.5-turbo. // Function to get relevant parts of text using gpt-4o-mini.
const getRelevantParts = async (text, query) => { const getRelevantParts = async (text, query) => {
try { try {
// We use your OpenAI key to initialize the OpenAI client // We use your OpenAI key to initialize the OpenAI client
@@ -97,8 +97,8 @@ const getRelevantParts = async (text, query) => {
apiKey: openAIKey, apiKey: openAIKey,
}); });
const response = await openai.chat.completions.create({ const response = await openai.chat.completions.create({
// Using gpt-3.5-turbo due to speed to prevent timeouts. You can tweak this prompt as needed // Using gpt-4o-mini due to speed to prevent timeouts. You can tweak this prompt as needed
model: "gpt-3.5-turbo-0125", model: "got-4o-mini",
messages: [ messages: [
{"role": "system", "content": "You are a helpful assistant that finds relevant content in text based on a query. You only return the relevant sentences, and you return a maximum of 10 sentences"}, {"role": "system", "content": "You are a helpful assistant that finds relevant content in text based on a query. You only return the relevant sentences, and you return a maximum of 10 sentences"},
{"role": "user", "content": `Based on this question: **"${query}"**, get the relevant parts from the following text:*****\n\n${text}*****. If you cannot answer the question based on the text, respond with 'No information provided'`} {"role": "user", "content": `Based on this question: **"${query}"**, get the relevant parts from the following text:*****\n\n${text}*****. If you cannot answer the question based on the text, respond with 'No information provided'`}
@@ -162,7 +162,7 @@ module.exports = async function (context, req) {
return content.split(/\s+/); return content.split(/\s+/);
}; };
// Function to break tokens into 10k token windows for gpt-3.5-turbo // Function to break tokens into 10k token windows for gpt-4o-mini
const breakIntoTokenWindows = (tokens) => { const breakIntoTokenWindows = (tokens) => {
const tokenWindows = [] const tokenWindows = []
const maxWindowTokens = 10000; // 10k tokens const maxWindowTokens = 10000; // 10k tokens
@@ -180,7 +180,7 @@ module.exports = async function (context, req) {
const list = await client.api('/search/query').post(requestBody); const list = await client.api('/search/query').post(requestBody);
const processList = async () => { const processList = async () => {
// This will go through and for each search response, grab the contents of the file and summarize with gpt-3.5-turbo // This will go through and for each search response, grab the contents of the file and summarize with gpt-4o-mini
const results = []; const results = [];
await Promise.all(list.value[0].hitsContainers.map(async (container) => { await Promise.all(list.value[0].hitsContainers.map(async (container) => {

Binary file not shown.


View File

@@ -1316,14 +1316,14 @@
- vision - vision
- whisper - whisper
- title: Using Azure Function and the Microsoft Graph Search API to Query Office365 and SharePoint with ChatGPT # - title: Using Azure Function and the Microsoft Graph Search API to Query Office365 and SharePoint with ChatGPT
path: examples/chatgpt/sharepoint_azure_function/Using_Azure_Functions_and_Microsoft_Graph_to_Query_SharePoint.md # path: examples/chatgpt/sharepoint_azure_function/Using_Azure_Functions_and_Microsoft_Graph_to_Query_SharePoint.md
date: 2024-05-24 # date: 2024-05-24
authors: # authors:
- maxreid-openai # - maxreid-openai
tags: # tags:
- completions # - completions
- chatgpt # - chatgpt
- title: GPT Actions library - getting started - title: GPT Actions library - getting started
path: examples/chatgpt/gpt_actions_library/.gpt_action_getting_started.ipynb path: examples/chatgpt/gpt_actions_library/.gpt_action_getting_started.ipynb
@@ -1361,6 +1361,33 @@
tags: tags:
- gpt-actions-library - gpt-actions-library
- chatgpt - chatgpt
- title: GPT Actions library - Sharepoint (Return Docs)
path: examples/chatgpt/gpt_actions_library/gpt_action_sharepoint_doc.ipynb
date: 2024-05-24
authors:
- maxreid-openai
tags:
- completions
- chatgpt
- title: GPT Actions library - Sharepoint (Return Text)
path: examples/chatgpt/gpt_actions_library/gpt_action_sharepoint_text.ipynb
date: 2024-05-24
authors:
- maxreid-openai
tags:
- completions
- chatgpt
- title: GPT Actions library (Middleware) - Azure Functions
path: examples/chatgpt/gpt_actions_library/gpt_middleware_azure_function.ipynb
date: 2024-05-24
authors:
- maxreid-openai
tags:
- completions
- chatgpt
- title: GPT Actions library - Canvas LMS - title: GPT Actions library - Canvas LMS
path: examples/chatgpt/gpt_actions_library/gpt_action_canvaslms.ipynb path: examples/chatgpt/gpt_actions_library/gpt_action_canvaslms.ipynb
@@ -1379,3 +1406,4 @@
tags: tags:
- gpt-actions-library - gpt-actions-library
- chatgpt - chatgpt