Commit graph

19 commits

Author SHA1 Message Date
David Kyle
fbb6abd2f4
[ML] Increase the default timeout for start trained model deployment (#92328)
A 30 second timeout is in line with the default value used in most ML APIs.
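Callers can still override the default explicitly; a minimal sketch in Python with `requests`, where the host, credentials, and model ID are placeholder assumptions:

```python
# Sketch: override the (now 30s) default timeout when starting a deployment.
# Host, credentials, and model ID are placeholder assumptions.
import requests

resp = requests.post(
    "http://localhost:9200/_ml/trained_models/my-model/deployment/_start",
    params={"timeout": "45s"},  # explicit override of the 30s default
    auth=("elastic", "changeme"),
)
print(resp.json())
```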
2022-12-14 13:32:23 +00:00
David Roberts
6fa3d73fd5
[ML] Make native inference generally available (#92213)
Previously this functionality was beta. This PR changes it to GA.
2022-12-12 15:43:30 +00:00
István Zoltán Szabó
612a7b673a
[DOCS] Highlights inference caching behavior (#91608) 2022-11-16 13:17:49 +01:00
Dimitris Athanasiou
4e67df8b05
[ML] Low priority trained model deployments (#91234)
This adds a new parameter to the start trained model deployment API,
namely `priority`. The available settings are `normal` and `low`.

For normal priority deployments the allocations get distributed so that
node processors are never oversubscribed.

Low priority deployments allow users to test model functionality even if there
are no node processors available. They are limited to 1 allocation with a single thread.
In addition, the process is executed at low priority, which limits the amount of
CPU it can use when the CPU is under pressure. The intention is to
limit the impact of low priority deployments on normal priority deployments.

When we rebalance model assignments we now:

  1. compute a plan just for normal priority deployments
  2. fix the resources used by normal deployments
  3. compute a plan just for low priority deployments
  4. merge the two plans

Closes #91024
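A minimal sketch of starting a low priority deployment over the REST API, in Python with `requests`; the host, credentials, and model ID are placeholder assumptions:

```python
# Sketch: start a low priority deployment for functional testing.
# Host, credentials, and model ID are placeholder assumptions.
import requests

resp = requests.post(
    "http://localhost:9200/_ml/trained_models/my-model/deployment/_start",
    params={"priority": "low"},  # capped at 1 allocation with a single thread
    auth=("elastic", "changeme"),
)
print(resp.json())
```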
2022-11-04 14:22:30 +02:00
David Roberts
d9ea080d10
[ML] Release native inference functionality as beta (#90418)
Previously this functionality was tech preview (aka experimental).
This PR changes it to beta.
2022-09-28 11:09:02 +01:00
Dimitris Athanasiou
32d512286d
[ML] Validate trained model deployment queue_capacity limit (#89573)
When starting a trained model deployment, a queue is created.
If the queue_capacity is too large, it can lead to OOM and a node
crash.

This commit adds validation that the queue_capacity cannot be more
than 1M.

Closes #89555
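A minimal sketch of setting `queue_capacity` explicitly within the new limit, in Python with `requests`; the host, credentials, and model ID are placeholder assumptions:

```python
# Sketch: request an explicit queue size; values above 1,000,000 (1M)
# are now rejected by the validation. Placeholders: host, credentials, model ID.
import requests

resp = requests.post(
    "http://localhost:9200/_ml/trained_models/my-model/deployment/_start",
    params={"queue_capacity": 10000},
    auth=("elastic", "changeme"),
)
print(resp.json())
```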
2022-08-24 16:52:19 +03:00
Benjamin Trent
afa28d49b4
[ML] add new cache_size parameter to trained_model deployments API (#88450)
With https://github.com/elastic/ml-cpp/pull/2305 we now support caching PyTorch inference responses per node per model.

By default, the cache will be the same size as the model's size on disk. This is because our current best estimate for memory used (for deploying) is 2*model_size + constant_overhead.

This is due to the model having to be loaded in memory twice when serializing to the native process. 

But, once the model is in memory and accepting requests, its actual memory usage is reduced vs. what we have "reserved" for it within the node.

Consequently, having a cache layer that takes advantage of that unused (but reserved) memory is effectively free. When used in production, especially in search scenarios, caching inference results is critical for decreasing latency.
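A minimal sketch of overriding the default cache size, in Python with `requests`; the host, credentials, model ID, and chosen size are placeholder assumptions:

```python
# Sketch: set an explicit per-node inference cache instead of the default
# (the model's size on disk). Placeholders: host, credentials, model ID, size.
import requests

resp = requests.post(
    "http://localhost:9200/_ml/trained_models/my-model/deployment/_start",
    params={"cache_size": "256mb"},
    auth=("elastic", "changeme"),
)
print(resp.json())
```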
2022-07-18 09:19:01 -04:00
Dimitris Athanasiou
f3199e968b
[ML] Adjust docs for distributed model allocation (#87955)
Follow up to #87366
2022-06-23 15:35:58 +03:00
Dimitris Athanasiou
679351e224
[ML] Require that threads_per_allocation is a power of 2 (#87697)
As the number of cores in CPUs is typically a power of 2,
this commit adds a validation that trained model deployments
start with `threads_per_allocation` set to be a power of 2.
When we work out how to distribute the allocations across the
cluster, this prevents situations where many CPU cores are wasted.

In addition, we add a max value limit of `32`.
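A sketch of the rule in Python, using the standard bit trick for power-of-2 checks:

```python
# Sketch of the validation described above: threads_per_allocation must be
# a power of 2 and no greater than 32.
def is_valid_threads_per_allocation(n: int) -> bool:
    return 0 < n <= 32 and (n & (n - 1)) == 0

assert is_valid_threads_per_allocation(16)       # power of 2, within limit
assert not is_valid_threads_per_allocation(12)   # not a power of 2
assert not is_valid_threads_per_allocation(64)   # exceeds the max of 32
```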
2022-06-17 15:12:37 +03:00
Lisa Cawley
6b7320790f
[DOCS] Updates example output for start trained model deployment API (#86824) 2022-05-17 07:27:44 -07:00
Dimitris Athanasiou
68c51f3ada
[ML] Rename threading params in _start trained model deployment API (#86597)
When starting a trained model deployment, the user can tweak performance
by setting the `model_threads` and `inference_threads` parameters.
These parameters are hard to understand and cause confusion.

This commit renames these as well as the fields where their values are
reported in the stats API.

- `model_threads` => `number_of_allocations`
- `inference_threads` => `threads_per_allocation`

Now the terminology is as follows.

A model deployment starts with a requested `number_of_allocations`.
Each allocation means the model gets another thread for executing
parallel inference requests. Thus, more allocations should increase
throughput. In turn, each allocation may use a number of threads
to parallelize each individual inference request.
This is the `threads_per_allocation` setting and increases inference
speed (which might also result in improved throughput).
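A minimal sketch using the renamed parameters, in Python with `requests`; the host, credentials, model ID, and chosen values are placeholder assumptions:

```python
# Sketch: two allocations for throughput, four threads each for
# per-request speed. Placeholders: host, credentials, model ID, values.
import requests

resp = requests.post(
    "http://localhost:9200/_ml/trained_models/my-model/deployment/_start",
    params={"number_of_allocations": 2, "threads_per_allocation": 4},
    auth=("elastic", "changeme"),
)
print(resp.json())
```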
2022-05-10 17:41:00 +03:00
Lisa Cawley
89a3e18e10
[DOCS] Add preview admonition to infer API (#86486) 2022-05-05 13:49:02 -07:00
Benjamin Trent
a907f0bb6f
[ML] add new trained_models/{model_id}/_infer endpoint for all supervised models and deprecate deployment infer api (#86361)
This commit adds a new `_ml/trained_models/{model_id}/_infer` API. This API works for both native NLP models and supervised models trained via Data Frame analytics.

The format of the API is the same as the old `_ml/trained_models/{model_id}/deployment/_infer`, taking `docs` and `inference_config` parameters.

This PR also deprecates the old experimental `_ml/trained_models/{model_id}/deployment/_infer` API.

The biggest difference is that the response now nests all results under an "inference_results" object.

closes: https://github.com/elastic/elasticsearch/issues/86032
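A minimal sketch of calling the new endpoint, in Python with `requests`; the host, credentials, model ID, and input field name are placeholder assumptions:

```python
# Sketch: infer against the new endpoint; results are nested under
# "inference_results". Placeholders: host, credentials, model ID, field name.
import requests

resp = requests.post(
    "http://localhost:9200/_ml/trained_models/my-model/_infer",
    json={"docs": [{"text_field": "example input"}]},
    auth=("elastic", "changeme"),
)
print(resp.json()["inference_results"])
```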
2022-05-05 14:58:59 -04:00
Benjamin Trent
25d1afbe6f
[ML] rename trained model allocations to assignments (#85503)
This renames the internal concept of a trained model allocation into an assignment.

Now models are assigned to a node, and routes are created for inference; they are no longer "allocated".

This is an internal rename only. The user facing concepts of trained models and deployments are untouched.
2022-04-18 11:35:10 -04:00
Dimitris Athanasiou
5d670e45ac
Revert "[ML] Only one of inference_threads and model_threads may be great… (#84794)" (#85089)
This reverts commit 4eaedb265d.

On further investigation of how to improve allocation of trained models,
we concluded that being able to set `inference_threads` in combination with
`model_threads` is fundamental for scalability.
2022-03-18 09:41:27 +02:00
Dimitris Athanasiou
4eaedb265d
[ML] Only one of inference_threads and model_threads may be great… (#84794)
When starting a trained model deployment, the user may set values for `inference_threads`
or `model_threads`. The first improves latency whereas the latter improves throughput.
It is easier to reason about how a model allocation uses resources if we ensure only
one of those two may be greater than one. In addition, it allows us to distribute
the cores of the ML nodes in the cluster across the model allocations in the future.

This commit adds a validation that prevents both `inference_threads` and `model_threads`
from being greater than one.
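A sketch of the rule in Python (since reverted; see the commit above):

```python
# Sketch of the validation this commit added, later reverted: at most one
# of the two thread settings may exceed one.
def validate(inference_threads: int, model_threads: int) -> None:
    if inference_threads > 1 and model_threads > 1:
        raise ValueError(
            "only one of [inference_threads] and [model_threads] "
            "may be greater than 1"
        )
```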
2022-03-09 16:33:35 +02:00
David Kyle
1473b09415
[ML] Add NLP inference configs to the inference processor docs (#82320) 2022-01-11 08:50:45 +00:00
David Kyle
d1ee756da8
[ML][DOCS] Add note about max values of thread settings (#81367) 2021-12-14 13:07:34 +00:00
Lisa Cawley
429bdd9afc
[DOCS] Move trained model APIs out of dataframe analytics (#81315) 2021-12-03 09:21:09 -08:00
Renamed from docs/reference/ml/df-analytics/apis/start-trained-model-deployment.asciidoc