{ "cells": [ { "cell_type": "code", "execution_count": 167, "id": "9e3839a6-9146-4f60-b74b-19abbc24278d", "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import openai\n", "import numpy as np\n", "from tqdm.notebook import tqdm\n", "import pickle\n", "from transformers import GPT2TokenizerFast\n", "\n", "ENGINE_NAME = \"curie\"\n", "\n", "DOC_EMBEDDING_MODEL = f\"text-search-{ENGINE_NAME}-doc-001\"\n", "QUERY_EMBEDDING_MODEL = f\"text-search-{ENGINE_NAME}-query-001\"\n", "\n", "COMPLETIONS_MODEL = \"text-davinci-002\"" ] }, { "cell_type": "markdown", "id": "9312f62f-e208-4030-a648-71ad97aee74f", "metadata": {}, "source": [ "# Question Answering\n", "\n", "Many use cases require GPT to respond to user questions with insightful answers. For example, a customer support chatbot may need to provide answers to common questions. The GPT models have picked up a lot of general knowledge in training, but we often need to ingest and use a body of more specific information.\n", "\n", "In this notebook we will demonstrate a method for augmenting GPT with a large body of additional contextual information by using embeddings search and retrieval. We'll be using a dataset of Wikipedia articles about the 2020 Summer Olympic Games. 
Please see [this notebook](examples/fine-tuned_qa/olympics-1-collect-data.ipynb) to follow the data gathering process.\n", "\n", "GPT-3 isn't an expert on the Olympics by default:" ] }, { "cell_type": "code", "execution_count": 176, "id": "a167516c-7c19-4bda-afa5-031aa0ae13bb", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"The 2020 Summer Olympics men's high jump was won by Mariusz Przybylski of Poland.\"" ] }, "execution_count": 176, "metadata": {}, "output_type": "execute_result" } ], "source": [ "prompt = \"Who won the 2020 Summer Olympics men's high jump?\"\n", "\n", "openai.Completion.create(\n", " prompt=prompt,\n", " temperature=0,\n", " max_tokens=300,\n", " top_p=1,\n", " frequency_penalty=0,\n", " presence_penalty=0,\n", " engine=COMPLETIONS_MODEL\n", ")[\"choices\"][0][\"text\"].strip(\"\\n\")" ] }, { "cell_type": "markdown", "id": "47204cce-a7d5-4c81-ab6e-53323026e08c", "metadata": {}, "source": [ "Mariusz Przybylski is a professional footballer from Poland, and not much of a high jumper! Evidently GPT-3 needs some assistance here. (In fact we'd ideally like the model to be more conservative and say \"I don't know\" rather than making a guess - see [this guide](examples/fine-tuned_qa) for an exploration of that topic.)\n", "\n", "When the total required context is short, we can include it in the prompt directly. 
In this case, we can use this information taken from Wikipedia:" ] }, { "cell_type": "code", "execution_count": 179, "id": "fceaf665-2602-4788-bc44-9eb256a6f955", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"Gianmarco Tamberi and Mutaz Essa Barshim won the 2020 Summer Olympics men's high jump.\"" ] }, "execution_count": 179, "metadata": {}, "output_type": "execute_result" } ], "source": [ "prompt = \"\"\"\n", "The men's high jump event at the 2020 Summer Olympics took place between 30 July and 1 August 2021 at the Olympic Stadium.\n", "33 athletes from 24 nations competed; the total possible number depended on how many nations would use universality places \n", "to enter athletes in addition to the 32 qualifying through mark or ranking (no universality places were used in 2021).\n", "Italian athlete Gianmarco Tamberi along with Qatari athlete Mutaz Essa Barshim emerged as joint winners of the event following\n", "a tie between both of them as they cleared 2.37m. Both Tamberi and Barshim agreed to share the gold medal in a rare instance\n", "where the athletes of different nations had agreed to share the same medal in the history of Olympics. \n", "Barshim in particular was heard to ask a competition official \"Can we have two golds?\" in response to being offered a \n", "'jump off'. Maksim Nedasekau of Belarus took bronze. The medals were the first ever in the men's high jump for Italy and \n", "Belarus, the first gold in the men's high jump for Italy and Qatar, and the third consecutive medal in the men's high jump\n", "for Qatar (all by Barshim). 
Barshim became only the second man to earn three medals in high jump, joining Patrik Sjöberg\n", "of Sweden (1984 to 1992).\n", "\n", "Who won the 2020 Summer Olympics men's high jump?\"\"\"\n", "\n", "openai.Completion.create(\n", " prompt=prompt,\n", " temperature=0,\n", " max_tokens=300,\n", " top_p=1,\n", " frequency_penalty=0,\n", " presence_penalty=0,\n", " engine=COMPLETIONS_MODEL\n", ")[\"choices\"][0][\"text\"].strip(\"\\n\")" ] }, { "cell_type": "markdown", "id": "ee85ee77-d8d2-4788-b57e-0785f2d7e2e3", "metadata": {}, "source": [ "But this technique only works when the dataset of extra content that the model may need to know is small enough to fit in a single prompt. What do we do when we need the model to choose relevant contextual information from within a large body of information?\n", "\n", "**In this notebook we demonstrate a method for augmenting GPT with a large body of additional contextual information by using embeddings search and retrieval.** This method answers queries in two steps: first it retrieves the information relevant to the query, then it writes an answer tailored to the question based on the retrieved information. The first step uses the Embedding API; the second uses the Completion API.\n", " \n", "The steps are:\n", "* Preprocess the contextual information by splitting it into chunks and creating an embedding vector for each chunk.\n", "* On receiving a query, embed the query in the same vector space as the context chunks and find the context embeddings which are most similar to the query.\n", "* Prepend the text of the most relevant context chunks to the query prompt.\n", "* Submit the question along with the most relevant context to GPT, and receive an answer which makes use of the provided contextual information."
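] }, { "cell_type": "markdown", "id": "retrieval-sketch-intro", "metadata": {}, "source": [ "As a minimal sketch of the ranking step above, assuming the query and context embeddings have already been computed (`vector_similarity` and `order_by_similarity` are illustrative helpers, not library functions): because the OpenAI search embeddings are normalised to length 1, the dot product of two embeddings is equivalent to their cosine similarity." ] }, { "cell_type": "code", "execution_count": null, "id": "retrieval-sketch-code", "metadata": {}, "outputs": [], "source": [ "def vector_similarity(x, y):\n", "    # Embeddings are normalised to length 1, so the dot product is\n", "    # equivalent to cosine similarity.\n", "    return np.dot(np.array(x), np.array(y))\n", "\n", "def order_by_similarity(query_embedding, context_embeddings):\n", "    # Rank context chunks by similarity to the query, most similar first.\n", "    return sorted(\n", "        ((vector_similarity(query_embedding, embedding), key)\n", "         for key, embedding in context_embeddings.items()),\n", "        reverse=True,\n", "    )"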
] }, { "cell_type": "markdown", "id": "0c9bfea5-a028-4191-b9f1-f210d76ec4e3", "metadata": {}, "source": [ "# 1) Preprocess the contextual information\n", "\n", "We preprocess the contextual information by creating an embedding vector for each chunk of context in the knowledge base. An embedding is a vector of numbers that represents the meaning of a piece of text: the closer two embeddings are to each other, the more similar the texts they represent.\n", "\n", "This indexing stage can be executed offline and only runs once to precompute the indexes for the dataset so that each piece of content can be retrieved later. Since this is a small example, we will store and search the embeddings locally. If you have a larger dataset, consider using a vector search engine like Pinecone or Weaviate to power the search.\n", "\n", "For the purposes of this tutorial we chose to use Curie embeddings, which offer a good balance of price and performance. Since we will be using the embeddings for retrieval, we’ll use the search models: the doc model for the context chunks and the query model for the questions. " ] }, { "cell_type": "code", "execution_count": 22, "id": "cc9c8d69-e234-48b4-87e3-935970e1523a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "3980 rows in the data.\n" ] }, { "data": { "text/html": [ "
<div>\n",
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr style=\"text-align: right;\">\n",
"      <th></th>\n",
"      <th></th>\n",
"      <th>content</th>\n",
"      <th>tokens</th>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>title</th>\n",
"      <th>heading</th>\n",
"      <th></th>\n",
"      <th></th>\n",
"    </tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr>\n",
"      <th>United States at the 2020 Summer Olympics</th>\n",
"      <th>Diving</th>\n",
"      <td>U.S. divers qualified for the following indivi...</td>\n",
"      <td>89</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>Austria at the 2020 Summer Olympics</th>\n",
"      <th>Summary</th>\n",
"      <td>Austria competed at the 2020 Summer Olympics i...</td>\n",
"      <td>115</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>2020 Women's Rugby Sevens Final Olympic Qualification Tournament</th>\n",
"      <th>Knockout stage</th>\n",
"      <td>With two Olympic places available, the top eig...</td>\n",
"      <td>49</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>Italy at the 2020 Summer Olympics</th>\n",
"      <th>Karate</th>\n",
"      <td>Italy entered five karateka into the inaugural...</td>\n",
"      <td>148</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>2020 United States Olympic Team Trials (wrestling)</th>\n",
"      <th>Summary</th>\n",
"      <td>The 2020 United States Olympic Team Trials for...</td>\n",
"      <td>119</td>\n",
"    </tr>\n",
"