elasticsearch/docs/reference/ml/df-analytics/apis/ml-df-analytics-apis.asciidoc
Benjamin Trent a68c6acdb3
[ML] adding new PUT trained model vocabulary endpoint (#77387)
This commit removes the ability to set the vocabulary location in the model config.
Instead, sane defaults are set and used, and the behavior is wrapped up in a new API.

The index is now always the internally managed .ml-inference-native index
and the document ID is always <model_id>_vocabulary.

This API only works for PyTorch/NLP type models.
2021-09-08 10:21:45 -04:00


[role="xpack"]
[testenv="platinum"]
[[ml-df-analytics-apis]]
= {ml-cap} {dfanalytics} APIs

You can use the following APIs to perform {ml} {dfanalytics} activities (a minimal
example request follows this list):

* <<put-dfanalytics,Create {dfanalytics-jobs}>>
* <<delete-dfanalytics,Delete {dfanalytics-jobs}>>
* <<get-dfanalytics,Get {dfanalytics-jobs} info>>
* <<get-dfanalytics-stats,Get {dfanalytics-jobs} statistics>>
* <<evaluate-dfanalytics,Evaluate {dfanalytics}>>
* <<explain-dfanalytics,Explain {dfanalytics}>>
* <<preview-dfanalytics,Preview {dfanalytics}>>
* <<start-dfanalytics,Start {dfanalytics-jobs}>>
* <<stop-dfanalytics,Stop {dfanalytics-jobs}>>
* <<update-dfanalytics,Update {dfanalytics-jobs}>>
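
The sketch below shows what a minimal create {dfanalytics-jobs} request can look
like. The job ID and index names (`my-dfanalytics-job`, `my-source-index`,
`my-dest-index`) are placeholders, and outlier detection with default settings is
used purely as an illustration; see <<put-dfanalytics>> for the full set of
parameters.

[source,console]
----
PUT _ml/data_frame/analytics/my-dfanalytics-job
{
  "source": {
    "index": "my-source-index" <1>
  },
  "dest": {
    "index": "my-dest-index" <2>
  },
  "analysis": {
    "outlier_detection": {} <3>
  }
}
----
<1> Placeholder source index that contains the documents to analyze.
<2> Placeholder destination index where the results are written.
<3> Outlier detection with default settings, shown here only as an illustration.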

You can use the following APIs to perform {infer} operations (a rough example
follows this list):

* <<put-trained-models>>
* <<put-trained-model-definition-part>>
* <<put-trained-model-vocabulary>>
* <<put-trained-models-aliases>>
* <<delete-trained-models>>
* <<delete-trained-models-aliases>>
* <<get-trained-models>>
* <<get-trained-models-stats>>
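
As a rough illustration of the {infer} APIs, the sketch below uploads a
vocabulary for a trained model. The model ID `my-nlp-model` and the vocabulary
terms are placeholders; see <<put-trained-model-vocabulary>> for the exact
request format and constraints.

[source,console]
----
PUT _ml/trained_models/my-nlp-model/vocabulary
{
  "vocabulary": [ "[PAD]", "[UNK]", "these", "are", "placeholder", "tokens" ] <1>
}
----
<1> Placeholder vocabulary terms for the model.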

You can deploy a trained model to make predictions in an ingest pipeline or in
an aggregation, as sketched after this list. Refer to the following documentation
to learn more:

* <<search-aggregations-pipeline-inference-bucket-aggregation,{infer-cap} bucket aggregation>>
* <<inference-processor,{infer-cap} processor>>
* <<infer-trained-model-deployment>>
* <<start-trained-model-deployment>>
* <<stop-trained-model-deployment>>
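
For example, a trained model can be referenced from an
<<inference-processor,{infer} processor>> in an ingest pipeline, as in the rough
sketch below. The pipeline and model IDs are placeholders, and additional
processor options may be required depending on the model; see
<<inference-processor>> for details.

[source,console]
----
PUT _ingest/pipeline/my-inference-pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "my-trained-model" <1>
      }
    }
  ]
}
----
<1> Placeholder ID of the trained model used to enrich ingested documents.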

See also <<ml-apis>>.