[role="xpack"]
[[put-inference-api]]
=== Create {infer} API
experimental[]
Creates a model to perform an {infer} task.
[discrete]
[[put-inference-api-request]]
==== {api-request-title}
`PUT /_inference/<task_type>/<model_id>`
[discrete]
[[put-inference-api-prereqs]]
==== {api-prereq-title}
* Requires the `manage` <<privileges-list-cluster,cluster privilege>>.
[discrete]
[[put-inference-api-desc]]
==== {api-description-title}
The create {infer} API enables you to create and configure an {infer} model to
perform a specific {infer} task.
[discrete]
[[put-inference-api-path-params]]
==== {api-path-parms-title}
`<model_id>`::
(Required, string)
The unique identifier of the model.
`<task_type>`::
(Required, string)
The type of the {infer} task that the model will perform. Available task types:
* `sparse_embedding`,
* `text_embedding`.
[discrete]
[[put-inference-api-request-body]]
==== {api-request-body-title}
`service`::
(Required, string)
The type of service supported for the specified task type.
Available services:
* `elser`,
* `openai`.
`service_settings`::
(Required, object)
Settings used to install the {infer} model. These settings are specific to the
`service` you specified.
+
.`service_settings` for `elser`
[%collapsible%closed]
=====
`num_allocations`:::
(Required, integer)
The number of model allocations to create.
`num_threads`:::
(Required, integer)
The number of threads used by each model allocation.
=====
+
.`service_settings` for `openai`
[%collapsible%closed]
=====
`api_key`:::
(Required, string)
A valid API key of your OpenAI account. You can find your OpenAI API keys in
your OpenAI account under the
https://platform.openai.com/api-keys[API keys section].
+
IMPORTANT: You need to provide the API key only once, during the {infer} model
creation. The <<get-inference-api>> does not retrieve your API key. After
creating the {infer} model, you cannot change the associated API key. If you
want to use a different API key, delete the {infer} model and recreate it with
the same name and the updated API key.
`organization_id`:::
(Optional, string)
The unique identifier of your organization. You can find the Organization ID in
your OpenAI account under
https://platform.openai.com/account/organization[**Settings** > **Organizations**].
`url`:::
(Optional, string)
The URL endpoint to use for the requests. Can be changed for testing purposes.
Defaults to `https://api.openai.com/v1/embeddings`.
=====
`task_settings`::
(Optional, object)
Settings to configure the {infer} task. These settings are specific to the
`<task_type>` you specified.
+
.`task_settings` for `text_embedding`
[%collapsible%closed]
=====
`model`:::
(Optional, string)
The name of the model to use for the {infer} task. Refer to the
https://platform.openai.com/docs/guides/embeddings/what-are-embeddings[OpenAI documentation]
for the list of available text embedding models.
=====
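The required request-body structure described above can be sketched as a small
validation helper. This is a hypothetical illustration in Python, not part of
any official {es} client; the field names follow the parameters documented in
this section:

[source,python]
------------------------------------------------------------
# Hypothetical sketch: check a create-inference request body against the
# required fields documented above. Illustrative only.

REQUIRED_SERVICE_SETTINGS = {
    "elser": {"num_allocations", "num_threads"},
    "openai": {"api_key"},
}

def validate_put_inference_body(body: dict) -> list[str]:
    """Return a list of problems; an empty list means the body looks valid."""
    problems = []
    service = body.get("service")
    if service not in REQUIRED_SERVICE_SETTINGS:
        problems.append(f"unknown or missing service: {service!r}")
        return problems
    settings = body.get("service_settings", {})
    missing = REQUIRED_SERVICE_SETTINGS[service] - settings.keys()
    for field in sorted(missing):
        problems.append(f"service_settings.{field} is required for {service}")
    return problems

# A body that omits `num_threads` for the `elser` service:
print(validate_put_inference_body(
    {"service": "elser", "service_settings": {"num_allocations": 1}}
))
# → ['service_settings.num_threads is required for elser']
------------------------------------------------------------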
[discrete]
[[put-inference-api-example]]
==== {api-examples-title}
The following example shows how to create an {infer} model called
`my-elser-model` to perform a `sparse_embedding` task.
[source,console]
------------------------------------------------------------
PUT _inference/sparse_embedding/my-elser-model
{
  "service": "elser",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  },
  "task_settings": {}
}
------------------------------------------------------------
// TEST[skip:TBD]
Example response:
[source,console-result]
------------------------------------------------------------
{
  "model_id": "my-elser-model",
  "task_type": "sparse_embedding",
  "service": "elser",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  },
  "task_settings": {}
}
------------------------------------------------------------
// NOTCONSOLE
The following example shows how to create an {infer} model called
`openai_embeddings` to perform a `text_embedding` task.
[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/openai_embeddings
{
  "service": "openai",
  "service_settings": {
    "api_key": "<api_key>"
  },
  "task_settings": {
    "model": "text-embedding-ada-002"
  }
}
------------------------------------------------------------
// TEST[skip:TBD]
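
The console examples above can also be assembled programmatically. The
following is a minimal Python sketch, assuming a cluster reachable at
`http://localhost:9200`; the hostname and the use of `urllib` are
illustrative, not part of the API:

[source,python]
------------------------------------------------------------
import json
from urllib import request

ES_URL = "http://localhost:9200"  # hypothetical local cluster address

def build_put_inference(task_type: str, model_id: str, body: dict):
    """Compose a PUT /_inference/<task_type>/<model_id> request."""
    url = f"{ES_URL}/_inference/{task_type}/{model_id}"
    data = json.dumps(body).encode("utf-8")
    return request.Request(
        url,
        data=data,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

req = build_put_inference("sparse_embedding", "my-elser-model", {
    "service": "elser",
    "service_settings": {"num_allocations": 1, "num_threads": 1},
    "task_settings": {},
})
print(req.get_method(), req.full_url)
# → PUT http://localhost:9200/_inference/sparse_embedding/my-elser-model

# Sending the request is omitted here; against a live cluster you would
# call request.urlopen(req) and read the JSON response shown above.
------------------------------------------------------------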