{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "Ol5OkztZqoAW" }, "source": [ "# Question Answering with LangChain, Deep Lake, & OpenAI\n", "\n", "This notebook shows how to implement a question answering system with LangChain, [Deep Lake](https://activeloop.ai/) as a vector store, and OpenAI embeddings. We will take the following steps to achieve this:\n", "\n", "1. Load a Deep Lake text dataset\n", "2. Initialize a [Deep Lake vector store with LangChain](https://docs.activeloop.ai/tutorials/vector-store/deep-lake-vector-store-in-langchain)\n", "3. Add text to the vector store\n", "4. Run queries on the database\n", "5. Done!\n", "\n", "You can also follow other tutorials, such as question answering over any type of data (PDFs, JSON, CSV, text): [chatting with any data](https://www.activeloop.ai/resources/data-chad-an-ai-app-with-lang-chain-deep-lake-to-chat-with-any-data/) stored in Deep Lake, [code understanding](https://www.activeloop.ai/resources/lang-chain-gpt-4-for-code-understanding-twitter-algorithm/), [question answering over PDFs](https://www.activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/), or [recommending songs](https://www.activeloop.ai/resources/3-ways-to-build-a-recommendation-engine-for-songs-with-lang-chain/)." ] }, { "cell_type": "markdown", "metadata": { "id": "6uKh5KahrBs3" }, "source": [ "## Install requirements\n", "Let's install the following packages."
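,
"\n",
"If your notebook kernel and your shell do not share the same Python environment, you can use the `%pip` magic, which always installs into the environment of the running kernel (otherwise it is equivalent to the `!pip` form used in this notebook):\n",
"\n",
"```\n",
"%pip install deeplake langchain openai tiktoken\n",
"```"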
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "cPsdluAqqnRH" }, "outputs": [], "source": [ "!pip install deeplake langchain openai tiktoken" ] }, { "cell_type": "markdown", "metadata": { "id": "IUm1NzURrGte" }, "source": [ "## Authentication\n", "Provide your OpenAI API key here:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "Q_-OiwJzrJ8m", "outputId": "b11b0d5c-cbd4-469d-95d1-fcd7149bd493" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "··········\n" ] } ], "source": [ "import getpass\n", "import os\n", "\n", "os.environ['OPENAI_API_KEY'] = getpass.getpass()" ] }, { "cell_type": "markdown", "metadata": { "id": "ok-hgiotrLmS" }, "source": [ "## Load a Deep Lake text dataset\n", "We will use a 20,000-sample subset of the [cohere-wikipedia-22](https://app.activeloop.ai/davitbun/cohere-wikipedia-22) dataset for this example." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "cIj5g4smrwOm", "outputId": "6315bd53-8a2f-40ef-b2f5-2687c90b2231" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "\\" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Opening dataset in read-only mode as you don't have write permissions.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "-" ] }, { "name": "stdout", "output_type": "stream", "text": [ "This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/activeloop/cohere-wikipedia-22-sample\n", "\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "|" ] }, { "name": "stdout", "output_type": "stream", "text": [ "hub://activeloop/cohere-wikipedia-22-sample loaded successfully.\n", "\n", "Dataset(path='hub://activeloop/cohere-wikipedia-22-sample', read_only=True, tensors=['ids', 'metadata', 'text'])\n", "\n", " tensor htype shape dtype 
compression\n", " ------- ------- ------- ------- ------- \n", " ids text (20000, 1) str None \n", " metadata json (20000, 1) str None \n", " text text (20000, 1) str None \n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\r \r\r\r" ] } ], "source": [ "import deeplake\n", "\n", "ds = deeplake.load(\"hub://activeloop/cohere-wikipedia-22-sample\")\n", "ds.summary()" ] }, { "cell_type": "markdown", "metadata": { "id": "oY6FHqovHPfJ" }, "source": [ "Let's take a look at a few samples:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "IWPYDrtUHPEr", "outputId": "91e1b13e-abd0-4709-f65c-87986e90181a" }, "outputs": [ { "data": { "text/plain": [ "['The 24-hour clock is a way of telling the time in which the day runs from midnight to midnight and is divided into 24 hours, numbered from 0 to 23. It does not use a.m. or p.m. This system is also referred to (only in the US and the English speaking parts of Canada) as military time or (only in the United Kingdom and now very rarely) as continental time. In some parts of the world, it is called railway time. Also, the international standard notation of time (ISO 8601) is based on this format.',\n", " 'A time in the 24-hour clock is written in the form hours:minutes (for example, 01:23), or hours:minutes:seconds (01:23:45). Numbers under 10 have a zero in front (called a leading zero); e.g. 09:07. Under the 24-hour clock system, the day begins at midnight, 00:00, and the last minute of the day begins at 23:59 and ends at 24:00, which is identical to 00:00 of the following day. 12:00 can only be mid-day. Midnight is called 24:00 and is used to mean the end of the day and 00:00 is used to mean the beginning of the day. 
For example, you would say \"Tuesday at 24:00\" and \"Wednesday at 00:00\" to mean exactly the same time.',\n", " 'However, the US military prefers not to say 24:00 - they do not like to have two names for the same thing, so they always say \"23:59\", which is one minute before midnight.']" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ds[:3].text.data()[\"value\"]" ] }, { "cell_type": "markdown", "metadata": { "id": "JRFPjoDaGcSa" }, "source": [ "## LangChain's Deep Lake vector store\n", "Let's define a `dataset_path`; this is where your Deep Lake vector store will house the text embeddings." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "Klobw6_T257K" }, "outputs": [], "source": [ "dataset_path = 'wikipedia-embeddings-deeplake'" ] }, { "cell_type": "markdown", "metadata": { "id": "IW6BZubFGgu2" }, "source": [ "We will set up OpenAI's `text-embedding-3-small` as our embedding function and initialize a Deep Lake vector store at `dataset_path`..." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "ykE3HgSl5mcg", "outputId": "dde4d6bb-6c82-473e-f37d-3f03a358ee8b" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "\r\r\r\r" ] } ], "source": [ "from langchain.embeddings.openai import OpenAIEmbeddings\n", "from langchain.vectorstores import DeepLake\n", "\n", "embedding = OpenAIEmbeddings(model=\"text-embedding-3-small\")\n", "db = DeepLake(dataset_path, embedding=embedding, overwrite=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "6mt2S1XpGj-D" }, "source": [ "... and populate it with samples, one batch at a time, using the `add_texts` method."
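,
"\n",
"In outline, such a batched loop looks like the following (a minimal sketch; `batch_size` and the exact slicing here are illustrative assumptions, not necessarily the code used in the next cell):\n",
"\n",
"```python\n",
"batch_size = 100\n",
"texts = ds.text.data()[\"value\"]\n",
"for i in range(0, len(texts), batch_size):\n",
"    db.add_texts(texts[i : i + batch_size])\n",
"```"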
] }, { "cell_type": "code", "execution_count": 7, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 275, "referenced_widgets": [ "30a05f9f55ae454ba75137634896e82a", "0add33db728844a59c1ffa53e18fab98", "26bf0f01ac414ab0b0da34971ba8cbdf", "b595729257c34311a1c21b103a20bbb8", "6a75dce7a6b84148a0515e30f116ee07", "1dbe1466e8ba47b1898864ca5aa22f30", "90c56b9af48d480b93c027032e44c9dd", "06099626b6e34bf6acf06e53673d08e7", "b8af7a2bffad44cea5264191b5079995", "d397a65b169647588cf2eaf8342dde5e", "2f9e6758a17441359021a6b66cff1dea" ] }, "id": "hFJTvNGE53lS", "outputId": "200e3808-1309-4520-9b42-6b59cfc506e6" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "30a05f9f55ae454ba75137634896e82a", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/1 [00:00