doc: highlight text and tool name

1. "cannot": highlighted to emphasize its importance
2. `Pinecone`: highlighted the tool name
liuliu 2023-03-17 14:03:01 +08:00
parent 0421728133
commit afbf055b08


@@ -212,7 +212,7 @@
"The best training method to use for fine-tuning a pre-trained model with sentence transformers is the Masked Language Model (MLM) training. MLM training involves randomly masking some of the words in a sentence and then training the model to predict the masked words. This helps the model to learn the context of the sentence and better understand the relationships between words.\n",
"```\n",
"\n",
"This answer seems pretty convincing right? Yet, it's wrong. MLM is typically used in the pretraining step of a transformer model but *cannot* be used to fine-tune a sentence-transformer, and has nothing to do with having _\"pairs of related sentences\"_.\n",
"This answer seems pretty convincing right? Yet, it's wrong. MLM is typically used in the pretraining step of a transformer model but *\"cannot\"* be used to fine-tune a sentence-transformer, and has nothing to do with having _\"pairs of related sentences\"_.\n",
"\n",
"An alternative answer we receive (and the one we returned above) is about `supervised learning approach` being the most suitable. This is completely true, but it's not specific and doesn't answer the question.\n",
"\n",
@@ -555,7 +555,7 @@
"id": "VMyJjt1cnwcH"
},
"source": [
"Now we need a place to store these embeddings and enable a efficient _vector search_ through them all. To do that we use Pinecone, we can get a [free API key](https://app.pinecone.io) and enter it below where we will initialize our connection to Pinecone and create a new index."
"Now we need a place to store these embeddings and enable a efficient _vector search_ through them all. To do that we use **`Pinecone`**, we can get a [free API key](https://app.pinecone.io) and enter it below where we will initialize our connection to `Pinecone` and create a new index."
]
},
{
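
For reference, a minimal sketch of the setup the cell above describes, using the pre-3.0 `pinecone` Python client; the API key, environment, index name, and embedding dimension below are placeholders rather than values from the notebook:

```python
import pinecone

# Placeholder credentials; the free API key and environment come from app.pinecone.io
pinecone.init(api_key="YOUR_API_KEY", environment="us-east1-gcp")

index_name = "example-index"  # hypothetical index name
if index_name not in pinecone.list_indexes():
    # Dimension must match the embedding model (e.g. 1536 for OpenAI's text-embedding-ada-002)
    pinecone.create_index(index_name, dimension=1536, metric="cosine")

# Connect to the new index so embeddings can be upserted and queried
index = pinecone.Index(index_name)
```
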