[[search-aggregations-metrics-percentile-rank-aggregation]]
=== Percentile ranks aggregation
++++
<titleabbrev>Percentile ranks</titleabbrev>
++++

A `multi-value` metrics aggregation that calculates one or more percentile ranks
over numeric values extracted from the aggregated documents. These values can be
extracted from specific numeric or <<histogram,histogram fields>> in the documents.

[NOTE]
==================================================
Please see <<search-aggregations-metrics-percentile-aggregation-approximation>>,
<<search-aggregations-metrics-percentile-aggregation-compression>> and
<<search-aggregations-metrics-percentile-aggregation-execution-hint>> for advice
regarding approximation, performance and memory use of the percentile ranks
aggregation.
==================================================

Percentile ranks show the percentage of observed values which are below a certain
value. For example, if a value is greater than or equal to 95% of the observed values
it is said to be at the 95th percentile rank.
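
For illustration only, here is a minimal Python sketch of the exact computation
that this aggregation approximates. Elasticsearch computes percentile ranks with
t-digest or HDR histogram (see below) rather than keeping every sample, so its
results are approximate; the helper below is hypothetical and not part of any
Elasticsearch API.

[source,python]
----
# Minimal sketch of an exact percentile rank computation, assuming all
# observed values fit in memory: the percentage of observed values that
# are less than or equal to each given value.
def percentile_ranks(observed, values):
    return {
        v: 100.0 * sum(1 for x in observed if x <= v) / len(observed)
        for v in values
    }

# Load times in milliseconds: 4 of 6 samples are <= 500, 5 of 6 are <= 600.
print(percentile_ranks([300, 350, 400, 450, 550, 700], [500, 600]))
# {500: 66.66666666666667, 600: 83.33333333333334}
----
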
Assume your data consists of website load times. You may have a service agreement that
95% of page loads complete within 500ms and 99% of page loads complete within 600ms.
Let's look at a range of percentiles representing load time:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_ranks": {
      "percentile_ranks": {
        "field": "load_time", <1>
        "values": [ 500, 600 ]
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> The field `load_time` must be a numeric field

The response will look like this:

[source,console-result]
--------------------------------------------------
{
  ...
  "aggregations": {
    "load_time_ranks": {
      "values": {
        "500.0": 55.0,
        "600.0": 64.0
      }
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]
// TESTRESPONSE[s/"500.0": 55.0/"500.0": 55.00000000000001/]
// TESTRESPONSE[s/"600.0": 64.0/"600.0": 64.0/]

From this information you can determine that only 55% of page loads complete within
500ms and only 64% complete within 600ms, so this service is missing both of its
load time targets.

==== Keyed Response

By default the `keyed` flag is set to `true`, which associates a unique string key
with each bucket and returns the ranges as a hash rather than an array. Setting the
`keyed` flag to `false` will disable this behavior:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_ranks": {
      "percentile_ranks": {
        "field": "load_time",
        "values": [ 500, 600 ],
        "keyed": false
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]

Response:

[source,console-result]
--------------------------------------------------
{
  ...
  "aggregations": {
    "load_time_ranks": {
      "values": [
        {
          "key": 500.0,
          "value": 55.0
        },
        {
          "key": 600.0,
          "value": 64.0
        }
      ]
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]
// TESTRESPONSE[s/"value": 55.0/"value": 55.00000000000001/]
// TESTRESPONSE[s/"value": 64.0/"value": 64.0/]

==== Script

If you need to run the aggregation against values that aren't indexed, use
a <<runtime,runtime field>>. For example, if our load times
are in milliseconds but we want percentile ranks calculated in seconds:

[source,console]
----
GET latency/_search
{
  "size": 0,
  "runtime_mappings": {
    "load_time.seconds": {
      "type": "long",
      "script": {
        "source": "emit(doc['load_time'].value / params.timeUnit)",
        "params": {
          "timeUnit": 1000
        }
      }
    }
  },
  "aggs": {
    "load_time_ranks": {
      "percentile_ranks": {
        "values": [ 500, 600 ],
        "field": "load_time.seconds"
      }
    }
  }
}
----
// TEST[setup:latency]
// TEST[s/_search/_search?filter_path=aggregations/]

////
[source,console-result]
--------------------------------------------------
{
  "aggregations": {
    "load_time_ranks": {
      "values": {
        "500.0": 100.0,
        "600.0": 100.0
      }
    }
  }
}
--------------------------------------------------
////

==== HDR Histogram

https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation
that can be useful when calculating percentile ranks for latency measurements, as it can be faster than the t-digest implementation
with the trade-off of a larger memory footprint. This implementation maintains a fixed worst-case percentage error (specified as a
number of significant digits). This means that if data is recorded with values from 1 microsecond up to 1 hour (3,600,000,000
microseconds) in a histogram set to 3 significant digits, it will maintain a value resolution of 1 microsecond for values up to
1 millisecond and 3.6 seconds (or better) for the maximum tracked value (1 hour).
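
As a rough illustration of that resolution guarantee (a hypothetical helper, not
part of the HdrHistogram API), the worst-case resolution scales with the recorded
value itself:

[source,python]
----
# Hypothetical sketch: worst-case resolution of a value recorded in an HDR
# histogram configured with a given number of significant digits. The
# relative error stays below one part in 10**significant_digits, so the
# absolute resolution grows in proportion to the value.
def hdr_worst_case_resolution(value_us, significant_digits=3):
    if value_us <= 0:
        raise ValueError("HDR histograms only support positive values")
    return value_us / 10 ** significant_digits

print(hdr_worst_case_resolution(1_000))          # 1.0 µs for values up to 1 ms
print(hdr_worst_case_resolution(3_600_000_000))  # 3.6e6 µs, i.e. 3.6 s at 1 hour
----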

The HDR Histogram can be used by specifying the `hdr` object in the request:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_ranks": {
      "percentile_ranks": {
        "field": "load_time",
        "values": [ 500, 600 ],
        "hdr": {                                  <1>
          "number_of_significant_value_digits": 3 <2>
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> `hdr` object indicates that HDR Histogram should be used to calculate the percentiles and specific settings for this algorithm can be specified inside the object
<2> `number_of_significant_value_digits` specifies the resolution of values for the histogram in number of significant digits

The HDRHistogram only supports positive values and will error if it is passed a negative value. It is also not a good idea to use
the HDRHistogram if the range of values is unknown, as this could lead to high memory usage.

==== Missing value

The `missing` parameter defines how documents that are missing a value should be treated.
By default they will be ignored but it is also possible to treat them as if they
had a value.

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_ranks": {
      "percentile_ranks": {
        "field": "load_time",
        "values": [ 500, 600 ],
        "missing": 10 <1>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> Documents without a value in the `load_time` field will fall into the same bucket as documents that have the value `10`.