Add mask_token field to fill_mask of _ml/trained_models.
This change adds a `mask_token` field to the GET _ml/trained_models API so that users and Kibana can discover the particular mask token a deployed model expects, as an enhancement to support kibana#159577.
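As a sketch, an abbreviated response for a hypothetical fill_mask model whose tokenizer uses `[MASK]` might look like this (model ID, response shape, and token are illustrative):
```
GET _ml/trained_models/my-fill-mask-model

{
  "count": 1,
  "trained_model_configs": [
    {
      "model_id": "my-fill-mask-model",
      "inference_config": {
        "fill_mask": {
          "mask_token": "[MASK]",
          ...
        }
      },
      ...
    }
  ]
}
```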
Many multilingual and newer models use a tokenization scheme similar to SentencePiece. This PR adds support for one of those tokenization schemes, XLM-RoBERTa.
The main changes are:
- Support for the `xlm_roberta` tokenization configuration (see the sketch below)
- Adding `scores` to the stored vocabulary document, requiring that the scores be the same size as the vocabulary
- Adding a new flat text file to resources that is the SPM character normalizer.
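A sketch of the new tokenization configuration, mirroring the pattern used for the existing `bert` and `mpnet` tokenizers (the exact options available are not listed here):
```
"tokenization": {
  "xlm_roberta": {...}
}
```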
Adds a new include flag `definition_status` to the GET trained models API. When present, the trained model configuration returned in the response will have the new boolean field `fully_defined`, indicating whether the full model definition exists.
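A sketch of requesting the new flag (model ID is illustrative):
```
GET _ml/trained_models/my-model?include=definition_status
```
The returned configuration would then carry `"fully_defined": true` when all parts of the model definition are present.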
This prevents docs files from *starting* with a "response" because when
that happens the response is converted to an assertion and appended
to the last snippet that was processed. If that last snippet was in a
different file then it's very hard to reason about the tests. That goes
double because the order we iterate files isn't defined....
Anyway! This adds a guard in the build, removes the offending
"response", and reenables the tests that we'd thought we failing here.
Closes#91081
This adds model_alias support for native pytorch models.
Model aliases can be used in `_infer` or within the inference processor. This way the alias can be atomically switched to another deployed model without downtime.
Restrictions:
- Model alias changes need to be done between two models of the same kind (e.g. pytorch -> pytorch)
- A model alias cannot be changed from a model that is deployed to one that is not deployed
- A model alias cannot be changed from a model that is deployed AND allocated to one that is deployed but NOT allocated (not assigned to any nodes)
- A deployment cannot be stopped (without supplying the `force` parameter) when the model has a model alias that is used by a pipeline.
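A sketch of switching an alias to a newly deployed model with the model aliases API (model IDs and alias are illustrative):
```
PUT _ml/trained_models/my-pytorch-model-v2/model_aliases/my-model-alias?reassign=true
```
Pipelines and `_infer` calls that reference `my-model-alias` then switch atomically to `my-pytorch-model-v2`, subject to the restrictions above.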
closes: https://github.com/elastic/elasticsearch/issues/90960
This adds a new parameter to the start trained model deployment API,
namely `priority`. The available settings are `normal` and `low`.
For normal priority deployments the allocations get distributed so that
node processors are never oversubscribed.
Low priority deployments allow users to test model functionality even if there
are no node processors available. They are limited to 1 allocation with a single thread.
In addition, the process is executed at low priority, which limits the amount of
CPU it can use when the CPU is under pressure. The intention is to
limit the impact of low priority deployments on normal priority deployments.
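For illustration, a low priority deployment might be started like this (model ID is illustrative; `priority` is assumed to be passed as a query parameter of the start API):
```
POST _ml/trained_models/my-model/deployment/_start?priority=low
```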
When we rebalance model assignments we now:
1. compute a plan just for normal priority deployments
2. fix the resources used by normal deployments
3. compute a plan just for low priority deployments
4. merge the two plans
Closes #91024
This commit adds a new API that users can use calling:
```
POST _ml/trained_models/{model_id}/deployment/_update
{
"number_of_allocations": 4
}
```
This allows a user to update the number of allocations for a deployment
that is `started`.
If the allocations are increased we rebalance and let the assignment
planner find how to allocate the additional allocations.
If the allocations are decreased we cannot use the assignment planner.
Instead, we implement the reduction in a new class `AllocationReducer`
that tries to reduce the allocations so that:
1. availability zone balance is maintained
2. assignments that can be completely stopped are preferred to release memory
When starting a trained model deployment, a queue is created.
If the queue_capacity is too large, it can lead to OOM and a node
crash.
This commit adds validation that the queue_capacity cannot be more
than 1M.
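For illustration, a start request with an explicit queue size within the new limit (model ID is illustrative; `queue_capacity` is assumed to be a query parameter of the start API):
```
POST _ml/trained_models/my-model/deployment/_start?queue_capacity=100000
```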
Closes #89555
This adds a new `_ml/trained_models/<model_id>/deployment/cache/_clear` API. This will clear the inference cache on every node where the model is allocated.
Introduced in: #88439
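Calling it looks like this (model ID is illustrative):
```
POST _ml/trained_models/my-model/deployment/cache/_clear
```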
* [ML] add text_similarity nlp task documentation
* Apply suggestions from code review
Co-authored-by: István Zoltán Szabó <istvan.szabo@elastic.co>
* Update docs/reference/ml/trained-models/apis/infer-trained-model.asciidoc
Co-authored-by: István Zoltán Szabó <istvan.szabo@elastic.co>
* Apply suggestions from code review
Co-authored-by: István Zoltán Szabó <istvan.szabo@elastic.co>
* Update docs/reference/ml/ml-shared.asciidoc
Co-authored-by: István Zoltán Szabó <istvan.szabo@elastic.co>
Co-authored-by: István Zoltán Szabó <istvan.szabo@elastic.co>
The inference node stats for deployed PyTorch inference
models now contain two new fields: `inference_cache_hit_count`
and `inference_cache_hit_count_last_minute`.
These indicate how many inferences on that node were served
from the C++-side response cache that was added in
https://github.com/elastic/ml-cpp/pull/2305. Cache hits
occur when exactly the same inference request is sent to the
same node more than once.
The `average_inference_time_ms` and
`average_inference_time_ms_last_minute` fields now refer to
the time taken to do the cache lookup, plus, if necessary,
the time to do the inference. We would expect average inference
time to be vastly reduced in situations where the cache hit
rate is high.
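A heavily abbreviated sketch of where the new fields surface in the stats response (model ID, surrounding structure, and values are illustrative):
```
GET _ml/trained_models/my-model/_stats

"deployment_stats": {
  "nodes": [
    {
      "inference_cache_hit_count": 420,
      "inference_cache_hit_count_last_minute": 15,
      "average_inference_time_ms": 12.3,
      "average_inference_time_ms_last_minute": 1.7,
      ...
    }
  ]
}
```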
With: https://github.com/elastic/ml-cpp/pull/2305 we now support caching pytorch inference responses per node per model.
By default, the cache will be the same size as the model's size on disk. This is because our current best estimate for memory used (for deploying) is 2*model_size + constant_overhead.
This is due to the model having to be loaded in memory twice when serializing to the native process.
But, once the model is in memory and accepting requests, its actual memory usage is reduced vs. what we have "reserved" for it within the node.
Consequently, having a cache layer that takes advantage of that unused (but reserved) memory is effectively free. When used in production, especially in search scenarios, caching inference results is critical for decreasing latency.
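If a different trade-off is wanted, the cache size can presumably be overridden when starting the deployment; a sketch assuming a `cache_size` parameter on the start API (model ID and value are illustrative):
```
POST _ml/trained_models/my-model/deployment/_start?cache_size=256mb
```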
As the number of cores in CPUs is typically a power of 2,
this commit adds a validation that trained model deployments
start with `threads_per_allocation` set to be a power of 2.
When we look for how we distribute the allocations across the
cluster, this prevents situations where we have a lot of wasted
CPU cores.
In addition, we add a max value limit of `32`.
When starting a trained model deployment the user can tweak performance
by setting the `model_threads` and `inference_threads` parameters.
These parameters are hard to understand and cause confusion.
This commit renames these as well as the fields where their values are
reported in the stats API.
- `model_threads` => `number_of_allocations`
- `inference_threads` => `threads_per_allocation`
Now the terminology is as follows.
A model deployment starts with a requested `number_of_allocations`.
Each allocation means the model gets another thread for executing
parallel inference requests. Thus, more allocations should increase
throughput. In turn, each allocation may use a number of threads to parallelize each individual inference request.
This is the `threads_per_allocation` setting and increases inference
speed (which might also result in improved throughput).
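A sketch of a start request using the new names (model ID and values are illustrative; both settings are assumed to be query parameters of the start API):
```
POST _ml/trained_models/my-model/deployment/_start?number_of_allocations=2&threads_per_allocation=4
```
Such a deployment could serve two inference requests in parallel, each request internally parallelized across four threads.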
This commit adds a new `_ml/trained_models/{model_id}/_infer` API. This API works for both native NLP models and supervised models trained via Data Frame analytics.
The format of the API is the same as the old `_ml/trained_models/{model_id}/deployment/_infer`, taking a `docs` and an `inference_config` parameter.
This PR also deprecates the old experimental `_ml/trained_models/{model_id}/deployment/_infer` API.
The biggest difference is that the response now nests all results under an "inference_results" object.
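A sketch of calling the new endpoint (model ID, field name, and text are illustrative):
```
POST _ml/trained_models/my-model/_infer
{
  "docs": [
    { "text_field": "The capital of France is Paris." }
  ]
}
```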
closes: https://github.com/elastic/elasticsearch/issues/86032
This renames the internal concept of a trained model allocation into an assignment.
Now models are assigned to a node and routes created for inference. Not "allocated".
This is an internal rename only. The user facing concepts of trained models and deployments are untouched.
This reverts commit 4eaedb265d.
On further investigation of how to improve allocation of trained models,
we concluded that being able to set `inference_threads` in combination with
`model_threads` is fundamental for scalability.
When starting a trained model deployment the user may set values for `inference_threads`
or `model_threads`. The first improves latency whereas the latter improves throughput.
It is easier to reason on how a model allocation uses resources if we ensure only
one of those two may be greater than one. In addition, it allows us to distribute
the cores of the ML nodes in the cluster across the model allocations in the future.
This commit adds a validation that prevents both `inference_threads` and `model_threads`
from being greater than one.
Throughput is measured as the number of inference requests
processed per minute. The node-level stats `peak_throughput_per_minute`,
`throughput_last_minute`, and `average_inference_time_ms_last_minute` are
added, along with a deployment-level stat `peak_throughput_per_minute`, which
is the summed throughput of all nodes.
This commit adds initial windowing support for text_classification tasks.
Specifically, a user can now provide a non-negative `span` indicating the tokenization windowing span to use when creating sub-sequences.
The default value of `span: -1` indicates that no windowing should take place.
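A sketch of supplying a span in the tokenization options of a text_classification model (the value and surrounding options are illustrative):
```
"tokenization": {
  "bert": {
    "span": 64,
    ...
  }
}
```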
This commit adds support for MPNet based models.
MPNet models differ from BERT style models in that:
- Special tokens are different
- Input to the model doesn't require token positions.
To configure an MPNet tokenizer for your pytorch MPNet based model:
```
"tokenization": {
"mpnet": {...}
}
```
The options provided to `mpnet` are the same as the previously supported `bert` configuration.
This improves reporting of trained model size in the response of the stats API.
In particular, it removes the `model_size_bytes` from the `deployment_stats` section and
replaces it with a top-level `model_size_stats` object that contains:
- `model_size_bytes`: the actual model size
- `required_native_memory_bytes`: the amount of memory required to load a model
In addition, these are now reported for PyTorch models regardless of their deployment state.
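A sketch of the new section of the stats response, with illustrative values:
```
"model_size_stats": {
  "model_size_bytes": 260947500,
  "required_native_memory_bytes": 534151168
}
```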