[role="xpack"]
[[put-inference-api]]
=== Create {infer} API
experimental[]
Creates an {infer} endpoint to perform an {infer} task.
IMPORTANT: The {infer} APIs enable you to use certain services, such as built-in
{ml} models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure
OpenAI, Azure AI Studio or Hugging Face. For built-in models and models uploaded through
Eland, the {infer} APIs offer an alternative way to use and manage trained
models. However, if you do not plan to use the {infer} APIs to use these models
or if you want to use non-NLP models, use the <<ml-df-trained-models-apis>>.
[discrete]
[[put-inference-api-request]]
==== {api-request-title}
`PUT /_inference/<task_type>/<inference_id>`
[discrete]
[[put-inference-api-prereqs]]
==== {api-prereq-title}
* Requires the `manage_inference` <<privileges-list-cluster,cluster privilege>>
(the built-in `inference_admin` role grants this privilege)
[discrete]
[[put-inference-api-desc]]
==== {api-description-title}
The create {infer} API enables you to create an {infer} endpoint and configure a
{ml} model to perform a specific {infer} task.
The following services are available through the {infer} API:
* Cohere
* ELSER
* Hugging Face
* OpenAI
* Azure OpenAI
* Azure AI Studio
* Elasticsearch (for built-in models and models uploaded through Eland)
[discrete]
[[put-inference-api-path-params]]
==== {api-path-parms-title}
`<inference_id>`::
(Required, string)
The unique identifier of the {infer} endpoint.
`<task_type>`::
(Required, string)
The type of the {infer} task that the model will perform. Available task types:
* `completion`,
* `rerank`,
* `sparse_embedding`,
* `text_embedding`.
[discrete]
[[put-inference-api-request-body]]
==== {api-request-body-title}
`service`::
(Required, string)
The type of service supported for the specified task type.
Available services:
* `cohere`: specify the `completion`, `text_embedding` or the `rerank` task type to use the
Cohere service.
* `elser`: specify the `sparse_embedding` task type to use the ELSER service.
* `hugging_face`: specify the `text_embedding` task type to use the Hugging Face
service.
* `openai`: specify the `completion` or `text_embedding` task type to use the
OpenAI service.
* `azureopenai`: specify the `completion` or `text_embedding` task type to use the Azure OpenAI service.
* `azureaistudio`: specify the `completion` or `text_embedding` task type to use the Azure AI Studio service.
* `elasticsearch`: specify the `text_embedding` task type to use the E5
built-in model or text embedding models uploaded by Eland.
`service_settings`::
(Required, object)
Settings used to install the {infer} model. These settings are specific to the
`service` you specified.
+
.`service_settings` for the `cohere` service
[%collapsible%closed]
=====
`api_key`:::
(Required, string)
A valid API key of your Cohere account. You can find your Cohere API keys or you
can create a new one
https://dashboard.cohere.com/api-keys[on the API keys settings page].
IMPORTANT: You need to provide the API key only once, during the {infer} model
creation. The <<get-inference-api>> does not retrieve your API key. After
creating the {infer} model, you cannot change the associated API key. If you
want to use a different API key, delete the {infer} model and recreate it with
the same name and the updated API key.
`embedding_type`:::
(Optional, string)
Only for `text_embedding`. Specifies the types of embeddings you want to get
back. Defaults to `float`.
Valid values are:
* `byte`: use it for signed int8 embeddings (this is a synonym of `int8`).
* `float`: use it for the default float embeddings.
* `int8`: use it for signed int8 embeddings.
`model_id`::
(Optional, string)
The name of the model to use for the {infer} task.
To review the available `rerank` models, refer to the
https://docs.cohere.com/reference/rerank-1[Cohere docs].
To review the available `text_embedding` models, refer to the
https://docs.cohere.com/reference/embed[Cohere docs]. The default value for
`text_embedding` is `embed-english-v2.0`.
=====
+
.`service_settings` for the `elser` service
[%collapsible%closed]
=====
`num_allocations`:::
(Required, integer)
The number of model allocations to create. `num_allocations` must not exceed the
number of available processors per node divided by the `num_threads`.
`num_threads`:::
(Required, integer)
The number of threads used by each model allocation. `num_threads` must not
exceed the number of available processors per node divided by the number of
allocations. Must be a power of 2. Max allowed value is 32.
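For example, on a node with 8 allocated processors, `num_allocations: 2` with
`num_threads: 4` is valid, since `8 / 2 = 4` and 4 is a power of 2.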
=====
+
.`service_settings` for the `hugging_face` service
[%collapsible%closed]
=====
`api_key`:::
(Required, string)
A valid access token of your Hugging Face account. You can find your Hugging
Face access tokens or you can create a new one
https://huggingface.co/settings/tokens[on the settings page].
IMPORTANT: You need to provide the API key only once, during the {infer} model
creation. The <<get-inference-api>> does not retrieve your API key. After
creating the {infer} model, you cannot change the associated API key. If you
want to use a different API key, delete the {infer} model and recreate it with
the same name and the updated API key.
`url`:::
(Required, string)
The URL endpoint to use for the requests.
=====
+
.`service_settings` for the `openai` service
[%collapsible%closed]
=====
`api_key`:::
(Required, string)
A valid API key of your OpenAI account. You can find your OpenAI API keys in
your OpenAI account under the
https://platform.openai.com/api-keys[API keys section].
IMPORTANT: You need to provide the API key only once, during the {infer} model
creation. The <<get-inference-api>> does not retrieve your API key. After
creating the {infer} model, you cannot change the associated API key. If you
want to use a different API key, delete the {infer} model and recreate it with
the same name and the updated API key.
`model_id`:::
(Required, string)
The name of the model to use for the {infer} task. Refer to the
https://platform.openai.com/docs/guides/embeddings/what-are-embeddings[OpenAI documentation]
for the list of available text embedding models.
`organization_id`:::
(Optional, string)
The unique identifier of your organization. You can find the Organization ID in
your OpenAI account under
https://platform.openai.com/account/organization[**Settings** > **Organizations**].
`url`:::
(Optional, string)
The URL endpoint to use for the requests. Can be changed for testing purposes.
Defaults to `https://api.openai.com/v1/embeddings`.
=====
+
.`service_settings` for the `azureopenai` service
[%collapsible%closed]
=====
`api_key` or `entra_id`:::
(Required, string)
You must provide _either_ an API key or an Entra ID.
If you do not provide either, or provide both, you will receive an error when trying to create your model.
See the https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#authentication[Azure OpenAI Authentication documentation] for more details on these authentication types.
IMPORTANT: You need to provide the API key or Entra ID only once, during the {infer} model creation.
The <<get-inference-api>> does not retrieve your authentication credentials.
After creating the {infer} model, you cannot change the associated API key or Entra ID.
If you want to use a different API key or Entra ID, delete the {infer} model and recreate it with the same name and the updated credentials.
`resource_name`:::
(Required, string)
The name of your Azure OpenAI resource.
You can find this from the https://portal.azure.com/#view/HubsExtension/BrowseAll[list of resources] in the Azure Portal for your subscription.
`deployment_id`:::
(Required, string)
The deployment name of your deployed models.
Your Azure OpenAI deployments can be found through the https://oai.azure.com/[Azure OpenAI Studio] portal that is linked to your subscription.
`api_version`:::
(Required, string)
The Azure API version ID to use.
We recommend using the https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#embeddings[latest supported non-preview version].
=====
+
.`service_settings` for the `azureaistudio` service
[%collapsible%closed]
=====
`api_key`:::
(Required, string)
A valid API key of your Azure AI Studio model deployment.
This key can be found on the overview page for your deployment in the management section of your https://ai.azure.com/[Azure AI Studio] account.
IMPORTANT: You need to provide the API key only once, during the {infer} model
creation. The <<get-inference-api>> does not retrieve your API key. After
creating the {infer} model, you cannot change the associated API key. If you
want to use a different API key, delete the {infer} model and recreate it with
the same name and the updated API key.
`target`:::
(Required, string)
The target URL of your Azure AI Studio model deployment.
This can be found on the overview page for your deployment in the management section of your https://ai.azure.com/[Azure AI Studio] account.
`provider`:::
(Required, string)
The model provider for your deployment.
Note that some providers may support only certain task types.
Supported providers include:
* `openai` - available for `text_embedding` and `completion` task types
* `mistral` - available for `completion` task type only
* `meta` - available for `completion` task type only
* `microsoft_phi` - available for `completion` task type only
* `cohere` - available for `text_embedding` and `completion` task types
* `databricks` - available for `completion` task type only
`endpoint_type`:::
(Required, string)
One of `token` or `realtime`.
Specifies the type of endpoint that is used in your model deployment.
There are https://learn.microsoft.com/en-us/azure/ai-studio/concepts/deployments-overview#billing-for-deploying-and-inferencing-llms-in-azure-ai-studio[two endpoint types available] for deployment through Azure AI Studio.
"Pay as you go" endpoints are billed per token.
For these, you must specify `token` for your `endpoint_type`.
For "real-time" endpoints which are billed per hour of usage, specify `realtime`.
`rate_limit`:::
(Optional, object)
By default, the `azureaistudio` service sets the number of requests allowed per minute to `240`.
This helps to minimize the number of rate limit errors returned from Azure AI Studio.
To modify this, set the `requests_per_minute` setting of this object in your service settings:
+
[source,text]
------------------------------------------------------------
"rate_limit": {
    "requests_per_minute": <<number_of_requests>>
}
------------------------------------------------------------
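+
For example, the following sketch creates an Azure AI Studio `completion`
endpoint with a lower limit of 100 requests per minute (the endpoint name and
limit value are illustrative):
+
[source,console]
------------------------------------------------------------
PUT _inference/completion/azure_ai_studio_completion_limited
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "<api_key>",
        "target": "<target_uri>",
        "provider": "<model_provider>",
        "endpoint_type": "<endpoint_type>",
        "rate_limit": {
            "requests_per_minute": 100
        }
    }
}
------------------------------------------------------------
// TEST[skip:TBD]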
=====
+
.`service_settings` for the `elasticsearch` service
[%collapsible%closed]
=====
`model_id`:::
(Required, string)
The name of the model to use for the {infer} task. It can be the
ID of either a built-in model (for example, `.multilingual-e5-small` for E5) or
a text embedding model already
{ml-docs}/ml-nlp-import-model.html#ml-nlp-import-script[uploaded through Eland].
`num_allocations`:::
(Required, integer)
The number of model allocations to create. `num_allocations` must not exceed the
number of available processors per node divided by the `num_threads`.
`num_threads`:::
(Required, integer)
The number of threads to use by each model allocation. `num_threads` must not
exceed the number of available processors per node divided by the number of
allocations. Must be a power of 2. Max allowed value is 32.
=====
`task_settings`::
(Optional, object)
Settings to configure the {infer} task. These settings are specific to the
`<task_type>` you specified.
+
.`task_settings` for the `completion` task type
[%collapsible%closed]
=====
`user`:::
(Optional, string)
For `openai` service only. Specifies the user issuing the request, which can be
used for abuse detection.
`temperature`:::
(Optional, float)
For the `azureaistudio` service only.
A number in the range of 0.0 to 2.0 that specifies the sampling temperature, which controls the apparent creativity of generated completions.
Should not be used if `top_p` is specified.
`top_p`:::
(Optional, float)
For the `azureaistudio` service only.
A number in the range of 0.0 to 2.0 that is an alternative to `temperature`; it makes the model consider only the tokens within the top `top_p` probability mass (nucleus sampling).
Should not be used if `temperature` is specified.
`do_sample`:::
(Optional, float)
For the `azureaistudio` service only.
Instructs the inference process whether to perform sampling.
Has no effect unless `temperature` or `top_p` is specified.
`max_new_tokens`:::
(Optional, integer)
For the `azureaistudio` service only.
Provides a hint for the maximum number of output tokens to be generated.
Defaults to 64.
=====
+
.`task_settings` for the `rerank` task type
[%collapsible%closed]
=====
`return_documents`:::
(Optional, boolean)
For `cohere` service only. Specifies whether to return doc text within the
results.
`top_n`:::
(Optional, integer)
The number of most relevant documents to return. Defaults to the number of
documents.
=====
+
.`task_settings` for the `text_embedding` task type
[%collapsible%closed]
=====
`input_type`:::
(Optional, string)
For `cohere` service only. Specifies the type of input passed to the model.
Valid values are:
* `classification`: use it for embeddings passed through a text classifier.
* `clustering`: use it for embeddings run through a clustering algorithm.
* `ingest`: use it for storing document embeddings in a vector database.
* `search`: use it for storing embeddings of search queries run against a
vector database to find relevant documents.
`truncate`:::
(Optional, string)
For `cohere` service only. Specifies how the API handles inputs longer than the
maximum token length. Defaults to `END`. Valid values are:
* `NONE`: when the input exceeds the maximum input token length an error is
returned.
* `START`: when the input exceeds the maximum input token length the start of
the input is discarded.
* `END`: when the input exceeds the maximum input token length the end of
the input is discarded.
`user`:::
(Optional, string)
For `openai`, `azureopenai` and `azureaistudio` services only. Specifies the user issuing the
request, which can be used for abuse detection.
=====
[discrete]
[[put-inference-api-example]]
==== {api-examples-title}
This section contains example API calls for every service type.
[discrete]
[[inference-example-cohere]]
===== Cohere service
The following example shows how to create an {infer} endpoint called
`cohere-embeddings` to perform a `text_embedding` task type.
[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/cohere-embeddings
{
    "service": "cohere",
    "service_settings": {
        "api_key": "<api_key>",
        "model_id": "embed-english-light-v3.0",
        "embedding_type": "byte"
    }
}
------------------------------------------------------------
// TEST[skip:TBD]
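You can also pass `task_settings` in the same request to control how Cohere
treats the input. A sketch based on the example above (the endpoint name is
illustrative; `truncate: END` is the default, shown here explicitly):
[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/cohere-embeddings-ingest
{
    "service": "cohere",
    "service_settings": {
        "api_key": "<api_key>",
        "model_id": "embed-english-light-v3.0",
        "embedding_type": "byte"
    },
    "task_settings": {
        "input_type": "ingest",
        "truncate": "END"
    }
}
------------------------------------------------------------
// TEST[skip:TBD]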
The following example shows how to create an {infer} endpoint called
`cohere-rerank` to perform a `rerank` task type.
[source,console]
------------------------------------------------------------
PUT _inference/rerank/cohere-rerank
{
    "service": "cohere",
    "service_settings": {
        "api_key": "<API-KEY>",
        "model_id": "rerank-english-v3.0"
    },
    "task_settings": {
        "top_n": 10,
        "return_documents": true
    }
}
------------------------------------------------------------
// TEST[skip:TBD]
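The Cohere service also supports the `completion` task type. A minimal sketch
follows; the `model_id` value `command` is an assumption, so check the Cohere
documentation for the generation models currently available:
[source,console]
------------------------------------------------------------
PUT _inference/completion/cohere-completion
{
    "service": "cohere",
    "service_settings": {
        "api_key": "<api_key>",
        "model_id": "command"
    }
}
------------------------------------------------------------
// TEST[skip:TBD]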
For more examples, also review the
https://docs.cohere.com/docs/elasticsearch-and-cohere#rerank-search-results-with-cohere-and-elasticsearch[Cohere documentation].
[discrete]
[[inference-example-e5]]
===== E5 via the elasticsearch service
The following example shows how to create an {infer} endpoint called
`my-e5-model` to perform a `text_embedding` task type.
[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/my-e5-model
{
    "service": "elasticsearch",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1,
        "model_id": ".multilingual-e5-small" <1>
    }
}
------------------------------------------------------------
// TEST[skip:TBD]
<1> The `model_id` must be the ID of one of the built-in E5 models. Valid values
are `.multilingual-e5-small` and `.multilingual-e5-small_linux-x86_64`. For
further details, refer to the {ml-docs}/ml-nlp-e5.html[E5 model documentation].
[discrete]
[[inference-example-elser]]
===== ELSER service
The following example shows how to create an {infer} endpoint called
`my-elser-model` to perform a `sparse_embedding` task type.
[source,console]
------------------------------------------------------------
PUT _inference/sparse_embedding/my-elser-model
{
    "service": "elser",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1
    }
}
------------------------------------------------------------
// TEST[skip:TBD]
Example response:
[source,console-result]
------------------------------------------------------------
{
    "inference_id": "my-elser-model",
    "task_type": "sparse_embedding",
    "service": "elser",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1
    },
    "task_settings": {}
}
------------------------------------------------------------
// NOTCONSOLE
[discrete]
[[inference-example-hugging-face]]
===== Hugging Face service
The following example shows how to create an {infer} endpoint called
`hugging-face-embeddings` to perform a `text_embedding` task type.
[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/hugging-face-embeddings
{
    "service": "hugging_face",
    "service_settings": {
        "api_key": "<access_token>", <1>
        "url": "<url_endpoint>" <2>
    }
}
------------------------------------------------------------
// TEST[skip:TBD]
<1> A valid Hugging Face access token. You can find it on the
https://huggingface.co/settings/tokens[settings page of your account].
<2> The {infer} endpoint URL you created on Hugging Face.
Create a new {infer} endpoint on
https://ui.endpoints.huggingface.co/[the Hugging Face endpoint page] to get an
endpoint URL. On the new endpoint creation page, select the model you want to
use - for example `intfloat/e5-small-v2` - then select the `Sentence Embeddings`
task under the Advanced configuration section. Create the endpoint and copy the
URL after the endpoint initialization has finished.
[discrete]
[[inference-example-hugging-face-supported-models]]
The list of recommended models for the Hugging Face service:
* https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2[all-MiniLM-L6-v2]
* https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2[all-MiniLM-L12-v2]
* https://huggingface.co/sentence-transformers/all-mpnet-base-v2[all-mpnet-base-v2]
* https://huggingface.co/intfloat/e5-base-v2[e5-base-v2]
* https://huggingface.co/intfloat/e5-small-v2[e5-small-v2]
* https://huggingface.co/intfloat/multilingual-e5-base[multilingual-e5-base]
* https://huggingface.co/intfloat/multilingual-e5-small[multilingual-e5-small]
[discrete]
[[inference-example-eland]]
===== Models uploaded by Eland via the elasticsearch service
The following example shows how to create an {infer} endpoint called
`my-msmarco-minilm-model` to perform a `text_embedding` task type.
[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/my-msmarco-minilm-model
{
    "service": "elasticsearch",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1,
        "model_id": "msmarco-MiniLM-L12-cos-v5" <1>
    }
}
------------------------------------------------------------
// TEST[skip:TBD]
<1> The `model_id` must be the ID of a text embedding model which has already
been
{ml-docs}/ml-nlp-import-model.html#ml-nlp-import-script[uploaded through Eland].
[discrete]
[[inference-example-openai]]
===== OpenAI service
The following example shows how to create an {infer} endpoint called
`openai-embeddings` to perform a `text_embedding` task type.
[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/openai-embeddings
{
    "service": "openai",
    "service_settings": {
        "api_key": "<api_key>",
        "model_id": "text-embedding-ada-002"
    }
}
------------------------------------------------------------
// TEST[skip:TBD]
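If your OpenAI account belongs to more than one organization, you can add the
optional `organization_id` setting to attribute usage to a specific one. A
sketch based on the previous example (the endpoint name is illustrative):
[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/openai-embeddings-org
{
    "service": "openai",
    "service_settings": {
        "api_key": "<api_key>",
        "organization_id": "<organization_id>",
        "model_id": "text-embedding-ada-002"
    }
}
------------------------------------------------------------
// TEST[skip:TBD]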
The next example shows how to create an {infer} endpoint called
`openai-completion` to perform a `completion` task type.
[source,console]
------------------------------------------------------------
PUT _inference/completion/openai-completion
{
    "service": "openai",
    "service_settings": {
        "api_key": "<api_key>",
        "model_id": "gpt-3.5-turbo"
    }
}
------------------------------------------------------------
// TEST[skip:TBD]
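Optional task settings can be included in the same request. The following
sketch adds the `user` task setting described above (the endpoint name and
user value are illustrative):
[source,console]
------------------------------------------------------------
PUT _inference/completion/openai-completion-with-user
{
    "service": "openai",
    "service_settings": {
        "api_key": "<api_key>",
        "model_id": "gpt-3.5-turbo"
    },
    "task_settings": {
        "user": "<user_name>"
    }
}
------------------------------------------------------------
// TEST[skip:TBD]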
[discrete]
[[inference-example-azureopenai]]
===== Azure OpenAI service
The following example shows how to create an {infer} endpoint called
`azure_openai_embeddings` to perform a `text_embedding` task type.
Note that we do not specify a model here, as it is defined already via our Azure OpenAI deployment.
The list of embeddings models that you can choose from in your deployment can be found in the https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#embeddings[Azure models documentation].
[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/azure_openai_embeddings
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "<api_key>",
        "resource_name": "<resource_name>",
        "deployment_id": "<deployment_id>",
        "api_version": "2024-02-01"
    }
}
------------------------------------------------------------
// TEST[skip:TBD]
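If you authenticate with an Entra ID instead of an API key, provide `entra_id`
in place of `api_key`. A sketch (the endpoint name is illustrative):
[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/azure_openai_embeddings_entra
{
    "service": "azureopenai",
    "service_settings": {
        "entra_id": "<entra_id>",
        "resource_name": "<resource_name>",
        "deployment_id": "<deployment_id>",
        "api_version": "2024-02-01"
    }
}
------------------------------------------------------------
// TEST[skip:TBD]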
The next example shows how to create an {infer} endpoint called
`azure_openai_completion` to perform a `completion` task type.
[source,console]
------------------------------------------------------------
PUT _inference/completion/azure_openai_completion
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "<api_key>",
        "resource_name": "<resource_name>",
        "deployment_id": "<deployment_id>",
        "api_version": "2024-02-01"
    }
}
------------------------------------------------------------
// TEST[skip:TBD]
The list of chat completion models that you can choose from in your Azure OpenAI deployment can be found at the following places:
* https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#gpt-4-and-gpt-4-turbo-models[GPT-4 and GPT-4 Turbo models]
* https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#gpt-35[GPT-3.5]
[discrete]
[[inference-example-azureaistudio]]
===== Azure AI Studio service
The following example shows how to create an {infer} endpoint called
`azure_ai_studio_embeddings` to perform a `text_embedding` task type.
Note that we do not specify a model here, as it is defined already via our Azure AI Studio deployment.
The list of embeddings models that you can choose from in your deployment can be found in the https://ai.azure.com/explore/models?selectedTask=embeddings[Azure AI Studio model explorer].
[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/azure_ai_studio_embeddings
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "<api_key>",
        "target": "<target_uri>",
        "provider": "<model_provider>",
        "endpoint_type": "<endpoint_type>"
    }
}
------------------------------------------------------------
// TEST[skip:TBD]
The next example shows how to create an {infer} endpoint called
`azure_ai_studio_completion` to perform a `completion` task type.
[source,console]
------------------------------------------------------------
PUT _inference/completion/azure_ai_studio_completion
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "<api_key>",
        "target": "<target_uri>",
        "provider": "<model_provider>",
        "endpoint_type": "<endpoint_type>"
    }
}
------------------------------------------------------------
// TEST[skip:TBD]
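Completion-specific task settings such as `temperature` and `max_new_tokens`
can be added to the request. A sketch with illustrative values (the endpoint
name is also illustrative):
[source,console]
------------------------------------------------------------
PUT _inference/completion/azure_ai_studio_completion_sampling
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "<api_key>",
        "target": "<target_uri>",
        "provider": "<model_provider>",
        "endpoint_type": "<endpoint_type>"
    },
    "task_settings": {
        "temperature": 0.7,
        "max_new_tokens": 128
    }
}
------------------------------------------------------------
// TEST[skip:TBD]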
The list of chat completion models that you can choose from in your deployment can be found in the https://ai.azure.com/explore/models?selectedTask=chat-completion[Azure AI Studio model explorer].