diff --git a/examples/Truncate_prompts_to_context_length.ipynb b/examples/Embedding_long_inputs.ipynb
similarity index 99%
rename from examples/Truncate_prompts_to_context_length.ipynb
rename to examples/Embedding_long_inputs.ipynb
index 1ed48f2..be629d2 100644
--- a/examples/Truncate_prompts_to_context_length.ipynb
+++ b/examples/Embedding_long_inputs.ipynb
@@ -5,7 +5,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "# Embedding texts that are larger than the model's context length\n",
+    "# Embedding texts that are longer than the model's context length\n",
     "\n",
     "All models have a maximum context length for the input text they take in. However, this maximum length is defined in terms of _tokens_ instead of string length. If you are unfamiliar with tokenization, you can check out the [\"How to count tokens with tiktoken\"](How_to_count_tokens_with_tiktoken.ipynb) notebook in this same cookbook.\n",
     "\n",