// tag::cohere[]
[source,console]
----
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "test-data",
    "size": 50 <1>
  },
  "dest": {
    "index": "cohere-embeddings",
    "pipeline": "cohere_embeddings"
  }
}
----
// TEST[skip:TBD]
<1> The default batch size for reindexing is 1000. Reducing `size` to a smaller
number makes the reindexing process update more frequently, which enables you to
follow the progress closely and detect errors early.

NOTE: The
https://dashboard.cohere.com/billing[rate limit of your Cohere account]
may affect the throughput of the reindexing process.
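
Because the request sets `wait_for_completion=false`, the reindex call returns a
task ID. As a minimal sketch, you can follow the progress of the reindexing
process with the task management API, where `<task_id>` is a placeholder for the
ID returned by the reindex call:

[source,console]
----
GET _tasks/<task_id>
----
// TEST[skip:TBD]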
// end::cohere[]
// tag::openai[]
[source,console]
----
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "test-data",
    "size": 50 <1>
  },
  "dest": {
    "index": "openai-embeddings",
    "pipeline": "openai_embeddings"
  }
}
----
// TEST[skip:TBD]
<1> The default batch size for reindexing is 1000. Reducing `size` to a smaller
number makes the reindexing process update more frequently, which enables you to
follow the progress closely and detect errors early.

NOTE: The
https://platform.openai.com/account/limits[rate limit of your OpenAI account]
may affect the throughput of the reindexing process. If this happens, reduce
`size` to `3` or a similarly small value.
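
Because the request sets `wait_for_completion=false`, the reindex call returns a
task ID. As a minimal sketch, you can follow the progress of the reindexing
process with the task management API, where `<task_id>` is a placeholder for the
ID returned by the reindex call:

[source,console]
----
GET _tasks/<task_id>
----
// TEST[skip:TBD]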
// end::openai[]