[role="xpack"]
[[get-trained-models]]
= Get trained models API

[subs="attributes"]
++++
<titleabbrev>Get trained models</titleabbrev>
++++

Retrieves configuration information about {ml-docs}/ml-nlp-deploy-models.html[{infer} trained models].

[[ml-get-trained-models-request]]
== {api-request-title}

`GET _ml/trained_models/` +
`GET _ml/trained_models/<model_id>` +
`GET _ml/trained_models/_all` +
`GET _ml/trained_models/<model_id1>,<model_id2>` +
`GET _ml/trained_models/<model_id_pattern*>`

[[ml-get-trained-models-prereq]]
== {api-prereq-title}

Requires the `monitor_ml` cluster privilege. This privilege is included in the
`machine_learning_user` built-in role.

//[[ml-get-trained-models-desc]]
//== {api-description-title}

[[ml-get-trained-models-path-params]]
== {api-path-parms-title}

`<model_id>`::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=model-id-or-alias]
+
You can get information for multiple trained models in a single API request by
using a comma-separated list of model IDs or a wildcard expression.
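+
For example, the following request uses a wildcard expression to match every
model whose ID starts with a hypothetical `my-model` prefix (a sketch;
substitute your own model IDs):
+
[source,console]
--------------------------------------------------
GET _ml/trained_models/my-model-*
--------------------------------------------------
// TEST[skip:TBD]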

[[ml-get-trained-models-query-params]]
== {api-query-parms-title}

`allow_no_match`::
(Optional, Boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=allow-no-match-models]
`decompress_definition`::
(Optional, Boolean)
Specifies whether the included model definition should be returned as a JSON map
(`true`) or in a custom compressed format (`false`). Defaults to `true`.
`exclude_generated`::
(Optional, Boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=exclude-generated]
`from`::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=from-models]
`include`::
(Optional, string)
A comma-delimited string of optional fields to include in the response body. The
default value is empty, indicating that no optional fields are included. Valid
options are:
- `definition`: Includes the model definition.
- `feature_importance_baseline`: Includes the baseline for {feat-imp} values.
- `hyperparameters`: Includes information about the hyperparameters used to
train the model. This information consists of the value, the absolute and
relative importance of each hyperparameter, and an indicator of whether it was
specified by the user or tuned during hyperparameter optimization.
- `total_feature_importance`: Includes the total {feat-imp} for the training
data set.
- `definition_status`: Includes the field `fully_defined`, which indicates
whether the full model definition is present.
The baseline and total {feat-imp} values are returned in the `metadata` field
in the response body.
`size`::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=size-models]
`tags`::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=tags]
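
For example, the following request pages through the first ten models and asks
for the `definition_status` field to be included for each of them (a sketch; the
query values are illustrative):

[source,console]
--------------------------------------------------
GET _ml/trained_models/_all?from=0&size=10&include=definition_status
--------------------------------------------------
// TEST[skip:TBD]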
[role="child_attributes"]
[[ml-get-trained-models-results]]
== {api-response-body-title}

`trained_model_configs`::
(array)
An array of trained model resources, which are sorted by the `model_id` value in
ascending order.
+
.Properties of trained model resources
[%collapsible%open]
====
`created_by`:::
(string)
The creator of the trained model.
`create_time`:::
(<<time-units,time units>>)
The time when the trained model was created.
`default_field_map`:::
(object)
A string object that contains the default field map to use when inferring
against the model. For example, {dfanalytics} may train the model on a specific
multi-field `foo.keyword`. The analytics job would then supply a default field
map entry for `"foo" : "foo.keyword"`.
+
Any field map described in the inference configuration takes precedence.
`description`:::
(string)
The free-text description of the trained model.
`model_size_bytes`:::
(integer)
The estimated size in bytes required to keep the trained model in memory.
`estimated_operations`:::
(integer)
The estimated number of operations required to use the trained model.
`inference_config`:::
(object)
The default configuration for inference. This can be a `regression`,
`classification`, or one of the natural language processing (NLP) configurations
described below. For `regression` and `classification`, it must match the
`target_type` of the underlying `definition.trained_model`.
+
.Properties of `inference_config`
[%collapsible%open]
=====
`classification`::::
(object)
Classification configuration for inference.
+
.Properties of classification inference
[%collapsible%open]
======
`num_top_classes`:::
(integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-num-top-classes]
`num_top_feature_importance_values`:::
(integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-num-top-feature-importance-values]
`prediction_field_type`:::
(string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-prediction-field-type]
`results_field`:::
(string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-results-field]
`top_classes_results_field`:::
(string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-classification-top-classes-results-field]
======
`fill_mask`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-fill-mask]
+
.Properties of fill_mask inference
[%collapsible%open]
======
`mask_token`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-mask-token]
`tokenization`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization]
+
.Properties of tokenization
[%collapsible%open]
=======
`bert`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert]
+
.Properties of bert
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-with-special-tokens]
========
`roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta]
+
.Properties of roberta
[%collapsible%open]
========
`add_prefix_space`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-add-prefix-space]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`mpnet`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet]
+
.Properties of mpnet
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet-with-special-tokens]
========
`xlm_roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-xlm-roberta]
+
.Properties of xlm_roberta
[%collapsible%open]
========
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`bert_ja`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja]
+
.Properties of bert_ja
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja-with-special-tokens]
========
=======
`vocabulary`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-vocabulary]
+
.Properties of vocabulary
[%collapsible%open]
=======
`index`::::
(Required, string)
The index where the vocabulary is stored.
=======
======
`ner`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-ner]
+
.Properties of ner inference
[%collapsible%open]
======
`classification_labels`::::
(Optional, string)
An array of classification labels. NER supports only Inside-Outside-Beginning
(IOB) labels and only the person, organization, location, and miscellaneous
entity types. For example:
`["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]`.
`tokenization`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization]
+
.Properties of tokenization
[%collapsible%open]
=======
`bert`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert]
+
.Properties of bert
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-with-special-tokens]
========
`roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta]
+
.Properties of roberta
[%collapsible%open]
========
`add_prefix_space`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-add-prefix-space]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`mpnet`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet]
+
.Properties of mpnet
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet-with-special-tokens]
========
`xlm_roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-xlm-roberta]
+
.Properties of xlm_roberta
[%collapsible%open]
========
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`bert_ja`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja]
+
.Properties of bert_ja
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja-with-special-tokens]
========
=======
`vocabulary`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-vocabulary]
+
.Properties of vocabulary
[%collapsible%open]
=======
`index`::::
(Required, string)
The index where the vocabulary is stored.
=======
======
`pass_through`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-pass-through]
+
.Properties of pass_through inference
[%collapsible%open]
======
`tokenization`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization]
+
.Properties of tokenization
[%collapsible%open]
=======
`bert`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert]
+
.Properties of bert
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-with-special-tokens]
========
`roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta]
+
.Properties of roberta
[%collapsible%open]
========
`add_prefix_space`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-add-prefix-space]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`mpnet`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet]
+
.Properties of mpnet
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet-with-special-tokens]
========
`xlm_roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-xlm-roberta]
+
.Properties of xlm_roberta
[%collapsible%open]
========
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`bert_ja`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja]
+
.Properties of bert_ja
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja-with-special-tokens]
========
=======
`vocabulary`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-vocabulary]
+
.Properties of vocabulary
[%collapsible%open]
=======
`index`::::
(Required, string)
The index where the vocabulary is stored.
=======
======
`regression`::::
(object)
Regression configuration for inference.
+
.Properties of regression inference
[%collapsible%open]
======
`num_top_feature_importance_values`:::
(integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-regression-num-top-feature-importance-values]
`results_field`:::
(string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-results-field]
======
`text_classification`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-text-classification]
+
.Properties of text_classification inference
[%collapsible%open]
======
`classification_labels`::::
(Optional, string)
An array of classification labels.
`num_top_classes`::::
(Optional, integer)
Specifies the number of top class predictions to return. Defaults to all classes (-1).
`tokenization`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization]
+
.Properties of tokenization
[%collapsible%open]
=======
`bert`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert]
+
.Properties of bert
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`span`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-span]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-with-special-tokens]
========
`roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta]
+
.Properties of roberta
[%collapsible%open]
========
`add_prefix_space`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-add-prefix-space]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`span`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-span]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`mpnet`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet]
+
.Properties of mpnet
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`span`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-span]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet-with-special-tokens]
========
`xlm_roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-xlm-roberta]
+
.Properties of xlm_roberta
[%collapsible%open]
========
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`span`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-span]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`bert_ja`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja]
+
.Properties of bert_ja
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`span`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-span]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja-with-special-tokens]
========
=======
`vocabulary`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-vocabulary]
+
.Properties of vocabulary
[%collapsible%open]
=======
`index`::::
(Required, string)
The index where the vocabulary is stored.
=======
======
`text_embedding`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-text-embedding]
+
.Properties of text_embedding inference
[%collapsible%open]
======
`embedding_size`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-text-embedding-size]
`results_field`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-results-field]
`tokenization`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization]
+
.Properties of tokenization
[%collapsible%open]
=======
`bert`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert]
+
.Properties of bert
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-with-special-tokens]
========
`roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta]
+
.Properties of roberta
[%collapsible%open]
========
`add_prefix_space`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-add-prefix-space]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`mpnet`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet]
+
.Properties of mpnet
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet-with-special-tokens]
========
`xlm_roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-xlm-roberta]
+
.Properties of xlm_roberta
[%collapsible%open]
========
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`bert_ja`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja]
+
.Properties of bert_ja
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja-with-special-tokens]
========
=======
`vocabulary`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-vocabulary]
+
.Properties of vocabulary
[%collapsible%open]
=======
`index`::::
(Required, string)
The index where the vocabulary is stored.
=======
======
`text_similarity`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-text-similarity]
+
.Properties of text_similarity inference
[%collapsible%open]
======
`span_score_combination_function`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-text-similarity-span-score-func]
`tokenization`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization]
+
.Properties of tokenization
[%collapsible%open]
=======
`bert`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert]
+
.Properties of bert
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`span`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-span]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-with-special-tokens]
========
`roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta]
+
.Properties of roberta
[%collapsible%open]
========
`add_prefix_space`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-add-prefix-space]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`span`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-span]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`mpnet`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet]
+
.Properties of mpnet
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`span`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-span]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet-with-special-tokens]
========
`xlm_roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-xlm-roberta]
+
.Properties of xlm_roberta
[%collapsible%open]
========
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`span`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-span]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`bert_ja`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja]
+
.Properties of bert_ja
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`span`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-span]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja-with-special-tokens]
========
=======
`vocabulary`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-vocabulary]
+
.Properties of vocabulary
[%collapsible%open]
=======
`index`::::
(Required, string)
The index where the vocabulary is stored.
=======
======
`zero_shot_classification`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-zero-shot-classification]
+
.Properties of zero_shot_classification inference
[%collapsible%open]
======
`classification_labels`::::
(Required, array)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-zero-shot-classification-classification-labels]
`hypothesis_template`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-zero-shot-classification-hypothesis-template]
`labels`::::
(Optional, array)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-zero-shot-classification-labels]
`multi_label`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-zero-shot-classification-multi-label]
`tokenization`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization]
+
.Properties of tokenization
[%collapsible%open]
=======
`bert`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert]
+
.Properties of bert
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-with-special-tokens]
========
`roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta]
+
.Properties of roberta
[%collapsible%open]
========
`add_prefix_space`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-add-prefix-space]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`mpnet`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet]
+
.Properties of mpnet
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-mpnet-with-special-tokens]
========
`xlm_roberta`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-xlm-roberta]
+
.Properties of xlm_roberta
[%collapsible%open]
========
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta-with-special-tokens]
========
`bert_ja`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja]
+
.Properties of bert_ja
[%collapsible%open]
========
`do_lower_case`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-do-lower-case]
`max_sequence_length`::::
(Optional, integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-max-sequence-length]
`truncate`::::
(Optional, string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
`with_special_tokens`::::
(Optional, boolean)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-bert-ja-with-special-tokens]
========
=======
`vocabulary`::::
(Optional, object)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-vocabulary]
+
.Properties of vocabulary
[%collapsible%open]
=======
`index`::::
(Required, string)
The index where the vocabulary is stored.
=======
======
=====
`input`:::
(object)
The input field names for the model definition.
+
.Properties of `input`
[%collapsible%open]
=====
`field_names`::::
(string)
An array of input field names for the model.
=====
`fully_defined`::
(boolean)
True if the full model definition is present.
This field is only present if `include=definition_status` was specified in the request.
// Begin location
`location`::
(Optional, object)
The model definition location. It must be provided if neither `definition` nor
`compressed_definition` is provided.
+
.Properties of `location`
[%collapsible%open]
=====
`index`:::
(Required, object)
Indicates that the model definition is stored in an index. This object must be
empty because the index for storing model definitions is configured
automatically.
=====
// End location
`license_level`::
(string)
The license level of the trained model.
`metadata`::
(object)
An object containing metadata about the trained model. For example, models
created by {dfanalytics} contain `analysis_config` and `input` objects.
+
.Properties of metadata
[%collapsible%open]
=====
`feature_importance_baseline`:::
(object)
An object that contains the baseline for {feat-imp} values. For {reganalysis},
it is a single value. For {classanalysis}, there is a value for each class.
`hyperparameters`:::
(array)
A list of the available hyperparameters, which are either optimized during the
`fine_parameter_tuning` phase or specified by the user.
+
.Properties of hyperparameters
[%collapsible%open]
======
`absolute_importance`::::
(double)
A positive number showing how much the parameter influences the variation of the
{ml-docs}/dfa-regression-lossfunction.html[loss function]. This value is
returned only for hyperparameters whose values are not specified by the user but
tuned during hyperparameter optimization.
`max_trees`::::
(integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=max-trees-trained-models]
`name`::::
(string)
Name of the hyperparameter.
`relative_importance`::::
(double)
A number between 0 and 1 showing the proportion of influence on the variation of
the loss function among all tuned hyperparameters. This value is returned only
for hyperparameters whose values are not specified by the user but tuned during
hyperparameter optimization.
`supplied`::::
(Boolean)
Indicates whether the hyperparameter is specified by the user (`true`) or
optimized (`false`).
`value`::::
(double)
The value of the hyperparameter, either optimized or specified by the user.
======
`total_feature_importance`:::
(array)
An array of the total {feat-imp} for each feature used in the training data
set. This array of objects is returned if the model was trained by {dfanalytics}
and the request includes `total_feature_importance` in the `include` query
parameter.
+
.Properties of total {feat-imp}
[%collapsible%open]
======
`feature_name`:::
(string)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-feature-name]
`importance`:::
(object)
A collection of {feat-imp} statistics related to the training data set for this particular feature.
+
.Properties of {feat-imp}
[%collapsible%open]
=======
`mean_magnitude`:::
(double)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-magnitude]
`max`:::
(integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-max]
`min`:::
(integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-min]
=======
`classes`:::
(array)
If the trained model is a classification model, {feat-imp} statistics are gathered
per target class value.
+
.Properties of class {feat-imp}
[%collapsible%open]
=======
`class_name`:::
(string)
The target class value. It can be a string, boolean, or number.
`importance`:::
(object)
A collection of {feat-imp} statistics related to the training data set for this particular feature.
+
.Properties of {feat-imp}
[%collapsible%open]
========
`mean_magnitude`:::
(double)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-magnitude]
`max`:::
(integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-max]
`min`:::
(integer)
include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-metadata-feature-importance-min]
========
=======
======
=====
`model_id`::
(string)
Identifier for the trained model.
`model_type`::
(Optional, string)
The type of the created model. By default, the model type is `tree_ensemble`.
The available types are:
+
--
* `tree_ensemble`: The model definition is an ensemble model of decision trees.
* `lang_ident`: A special type reserved for language identification models.
* `pytorch`: The stored definition is a PyTorch (specifically a TorchScript) model. Currently only
NLP models are supported.
--
`tags`::
(string)
A comma-delimited string of tags. A trained model can have many tags or none.
`version`::
(string)
The {ml} configuration version number at which the trained model was created.
+
NOTE: From {es} 8.10.0, a new version number is used to
track the configuration and state changes in the {ml} plugin. This new
version number is decoupled from the product version and will increment
independently. The `version` value represents the new version number.
====
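
The API returns an array of trained model resources. The following is an
abridged, illustrative sketch of a response for a hypothetical regression
model; the model ID and field values are placeholders, and many optional fields
are omitted:

[source,js]
--------------------------------------------------
{
  "trained_model_configs" : [
    {
      "model_id" : "my-regression-model",
      "model_type" : "tree_ensemble",
      "created_by" : "a_user",
      "license_level" : "platinum",
      "estimated_operations" : 1500,
      "model_size_bytes" : 53234,
      "input" : {
        "field_names" : [ "feature_1", "feature_2" ]
      },
      "inference_config" : {
        "regression" : {
          "results_field" : "predicted_value",
          "num_top_feature_importance_values" : 0
        }
      }
    }
  ]
}
--------------------------------------------------
// NOTCONSOLE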

[[ml-get-trained-models-response-codes]]
== {api-response-codes-title}

`400`::
If `include_model_definition` is `true`, this code indicates that more than
one model matches the ID pattern.

`404` (Missing resources)::
If `allow_no_match` is `false`, this code indicates that there are no
resources that match the request or only partial matches for the request.
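
For example, with a hypothetical wildcard pattern that matches no models, the
following request returns a `404` error because `allow_no_match` is set to
`false`:

[source,console]
--------------------------------------------------
GET _ml/trained_models/nonexistent-model-*?allow_no_match=false
--------------------------------------------------
// TEST[skip:TBD]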

[[ml-get-trained-models-example]]
== {api-examples-title}

The following example gets configuration information for all the trained models:

[source,console]
--------------------------------------------------
GET _ml/trained_models/
--------------------------------------------------
// TEST[skip:TBD]
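
As another example, the following request retrieves a single model by ID (the
ID is hypothetical) and asks for the definition status and the total {feat-imp}
to be included in the response `metadata`:

[source,console]
--------------------------------------------------
GET _ml/trained_models/my-regression-model?include=definition_status,total_feature_importance
--------------------------------------------------
// TEST[skip:TBD]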