This commit adds a new field, `deployment_stats`, that is optionally set for deployed models.
If a model does not have a deployment, the field is `null`.
It also removes the get deployment stats API and makes the deployment stats action internal only.
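For illustration, a minimal sketch (Python with `requests`) of reading the new field via the get trained models stats API; the `localhost:9200` address, the `my-model` ID, and the exact response keys are assumptions, not confirmed by this commit:

```python
import requests

# Fetch stats for a single trained model (assumes a local, unsecured cluster).
resp = requests.get("http://localhost:9200/_ml/trained_models/my-model/_stats")
resp.raise_for_status()

for stats in resp.json()["trained_model_stats"]:
    # deployment_stats is only set for deployed models; for models without
    # a deployment it is null, which surfaces here as None (or an absent key).
    deployment = stats.get("deployment_stats")
    if deployment is None:
        print(f"{stats['model_id']}: not deployed")
    else:
        print(f"{stats['model_id']}: deployed, state={deployment.get('state')}")
```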
Removes `testenv` annotations and related code. These annotations originally let you skip x-pack snippet tests in the docs, but that is no longer possible.
Relates to #79309, #31619
* [ML] add documentation for get deployment stats API
* Apply suggestions from code review
Co-authored-by: István Zoltán Szabó <istvan.szabo@elastic.co>
This commit removes the ability to set the vocabulary location in the model config.
Instead, sane defaults are set and used, and the behavior is wrapped up in an API.
The index is now always the internally managed `.ml-inference-native` index
and the document ID is always `<model_id>_vocabulary`.
This API only works for pytorch/nlp type models.
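As a hedged sketch of the resulting workflow, assuming the new API is exposed at `_ml/trained_models/<model_id>/vocabulary` and accepts a `vocabulary` array (the path, payload shape, model ID, and cluster address here are illustrative assumptions):

```python
import requests

model_id = "my-nlp-model"  # hypothetical PyTorch/NLP model ID
vocab = {"vocabulary": ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "hello", "world"]}

# The server now controls storage: the vocabulary lands in the internally
# managed .ml-inference-native index as the document <model_id>_vocabulary.
resp = requests.put(
    f"http://localhost:9200/_ml/trained_models/{model_id}/vocabulary",
    json=vocab,
)
resp.raise_for_status()
print(resp.json())
```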
This commit adds a new `_preview` endpoint for data frame analytics.
This allows users to see the data on which their model will be trained, which is especially useful
with the arrival of custom feature processors.
The API design is similar to the datafeed `_preview` and data frame analytics `_explain` endpoints.
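A minimal sketch of calling the endpoint, assuming it follows the `_ml/data_frame/analytics/<id>/_preview` pattern and returns the extracted rows under a `feature_values` key (the job ID, response key, and cluster address are assumptions):

```python
import requests

# Preview the rows an existing data frame analytics job would train on
# (job ID is hypothetical).
resp = requests.get(
    "http://localhost:9200/_ml/data_frame/analytics/my-dfa-job/_preview"
)
resp.raise_for_status()

# Each entry is one feature row, with custom feature processors applied.
for row in resp.json().get("feature_values", []):
    print(row)
```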
A `model_alias` allows trained models to be referred to by a user-defined moniker.
This not only improves the readability and simplicity of numerous API calls, but it also allows for simpler deployment and upgrade procedures for trained models.
Previously, if you referenced a model ID directly within an ingest pipeline and a new model performed better than the referenced one, you had to update the pipeline itself. If the model was used in numerous pipelines, ALL of those pipelines had to be updated.
When using a `model_alias` in an ingest pipeline, only that `model_alias` needs to be updated. Then, the underlying referenced model will change in place for all ingest pipelines automatically.
An additional benefit is that the referenced model is not changed until the new one is fully loaded into cache, so throughput is not hampered by changing models.
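A hedged sketch of that upgrade flow, assuming a create model alias API at `_ml/trained_models/<model_id>/model_aliases/<model_alias>` and a `reassign=true` parameter for re-pointing an existing alias; the model IDs, pipeline name, and cluster address are hypothetical:

```python
import requests

BASE = "http://localhost:9200"  # assumed local, unsecured cluster

# Point the alias at the first model version.
requests.put(f"{BASE}/_ml/trained_models/sentiment-v1/model_aliases/sentiment")

# Pipelines reference the alias rather than a concrete model ID.
pipeline = {"processors": [{"inference": {"model_id": "sentiment"}}]}
requests.put(f"{BASE}/_ingest/pipeline/sentiment-pipeline", json=pipeline)

# Later, re-point the alias at a better model. Every pipeline using the
# alias switches to sentiment-v2 once it is fully loaded into cache;
# no pipeline needs to be edited.
requests.put(
    f"{BASE}/_ml/trained_models/sentiment-v2/model_aliases/sentiment",
    params={"reassign": "true"},
)
```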