[[infer-service-mistral]]
=== Mistral {infer} integration

.New API reference
[sidebar]
--
For the most up-to-date API details, refer to {api-es}/group/endpoint-inference[{infer-cap} APIs].
--

Creates an {infer} endpoint to perform an {infer} task with the `mistral` service.

[discrete]
[[infer-service-mistral-api-request]]
==== {api-request-title}

`PUT /_inference/<task_type>/<inference_id>`

[discrete]
[[infer-service-mistral-api-path-params]]
==== {api-path-parms-title}
`<inference_id>`::
(Required, string)
include::inference-shared.asciidoc[tag=inference-id]

`<task_type>`::
(Required, string)
include::inference-shared.asciidoc[tag=task-type]
+
--
Available task types:

* `text_embedding`.
--

[discrete]
[[infer-service-mistral-api-request-body]]
==== {api-request-body-title}
`chunking_settings`::
(Optional, object)
include::inference-shared.asciidoc[tag=chunking-settings]
A request that sets these options is sketched after this list.

`max_chunk_size`:::
(Optional, integer)
include::inference-shared.asciidoc[tag=chunking-settings-max-chunking-size]

`overlap`:::
(Optional, integer)
include::inference-shared.asciidoc[tag=chunking-settings-overlap]

`sentence_overlap`:::
(Optional, integer)
include::inference-shared.asciidoc[tag=chunking-settings-sentence-overlap]

`strategy`:::
(Optional, string)
include::inference-shared.asciidoc[tag=chunking-settings-strategy]

`service`::
(Required, string)
The type of service supported for the specified task type. In this case,
`mistral`.

`service_settings`::
(Required, object)
include::inference-shared.asciidoc[tag=service-settings]
+
--
These settings are specific to the `mistral` service.
--

`api_key`:::
(Required, string)
A valid API key for your Mistral account.
You can find your Mistral API keys, or create a new one, on the
https://console.mistral.ai/api-keys/[API Keys page].
+
--
include::inference-shared.asciidoc[tag=api-key-admonition]
--

`model`:::
(Required, string)
The name of the model to use for the {infer} task.
Refer to the https://docs.mistral.ai/getting-started/models/[Mistral models documentation]
for the list of available text embedding models.

`max_input_tokens`:::
(Optional, integer)
The maximum number of tokens per input before chunking occurs.

`rate_limit`:::
(Optional, object)
By default, the `mistral` service sets the number of requests allowed per minute to `240`.
This helps to minimize the number of rate limit errors returned from the Mistral API.
To modify this, set the `requests_per_minute` setting of this object in your service settings:
+
--
include::inference-shared.asciidoc[tag=request-per-minute-example]
--
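
For instance, the optional `chunking_settings` object and the `rate_limit`
setting described above can be combined in a single create request. The
following sketch is illustrative only: the endpoint ID
`mistral-embeddings-chunked` and the specific values are examples, not
recommendations.

[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/mistral-embeddings-chunked
{
  "service": "mistral",
  "service_settings": {
    "api_key": "<api_key>",
    "model": "mistral-embed",
    "rate_limit": {
      "requests_per_minute": 120 <1>
    }
  },
  "chunking_settings": {
    "strategy": "sentence", <2>
    "max_chunk_size": 250,
    "sentence_overlap": 1
  }
}
------------------------------------------------------------
// TEST[skip:TBD]
<1> Lowers the default of `240` requests per minute; choose a value that matches your Mistral account limits.
<2> With the `sentence` strategy, chunks are split on sentence boundaries and `sentence_overlap` may be `0` or `1`.
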
[discrete]
[[inference-example-mistral]]
==== Mistral service example
The following example shows how to create an {infer} endpoint called
`mistral-embeddings-test` to perform a `text_embedding` task type.

[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/mistral-embeddings-test
{
  "service": "mistral",
  "service_settings": {
    "api_key": "<api_key>",
    "model": "mistral-embed" <1>
  }
}
------------------------------------------------------------
// TEST[skip:TBD]
<1> The `model` must be the ID of a text embedding model. You can find the
available models in the https://docs.mistral.ai/getting-started/models/[Mistral models documentation].
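
Once the endpoint is created, you can send text to it to verify that it
returns embeddings. This quick sanity check is a sketch layered on the example
above; the input string is arbitrary:

[source,console]
------------------------------------------------------------
POST _inference/text_embedding/mistral-embeddings-test
{
  "input": "The quick brown fox jumped over the lazy dog"
}
------------------------------------------------------------
// TEST[skip:TBD]

The response contains one dense vector per input string.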