[[search-aggregations-metrics-percentile-aggregation]]
=== Percentiles aggregation
++++
<titleabbrev>Percentiles</titleabbrev>
++++

A `multi-value` metrics aggregation that calculates one or more percentiles
over numeric values extracted from the aggregated documents. These values can be
extracted from specific numeric or <<histogram,histogram fields>> in the documents.

Percentiles show the point at which a certain percentage of observed values
occur. For example, the 95th percentile is the value which is greater than 95%
of the observed values.

Percentiles are often used to find outliers. In normal distributions, the
0.13th and 99.87th percentiles represent three standard deviations from the
mean. Any data which falls outside three standard deviations is often considered
an anomaly.

When a range of percentiles is retrieved, the values can be used to estimate the
data distribution and determine if the data is skewed, bimodal, etc.

Assume your data consists of website load times. The average and median
load times are not overly useful to an administrator. The max may be interesting,
but it can be easily skewed by a single slow response.

Let's look at a range of percentiles representing load time:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time" <1>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> The field `load_time` must be a numeric field

By default, the `percentile` metric will generate a range of
percentiles: `[ 1, 5, 25, 50, 75, 95, 99 ]`. The response will look like this:

[source,console-result]
--------------------------------------------------
{
  ...

  "aggregations": {
    "load_time_outlier": {
      "values": {
        "1.0": 10.0,
        "5.0": 30.0,
        "25.0": 170.0,
        "50.0": 445.0,
        "75.0": 720.0,
        "95.0": 940.0,
        "99.0": 980.0
      }
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]
// TESTRESPONSE[s/"1.0": 10.0/"1.0": 9.9/]
// TESTRESPONSE[s/"5.0": 30.0/"5.0": 29.5/]
// TESTRESPONSE[s/"25.0": 170.0/"25.0": 167.5/]
// TESTRESPONSE[s/"50.0": 445.0/"50.0": 445.0/]
// TESTRESPONSE[s/"75.0": 720.0/"75.0": 722.5/]
// TESTRESPONSE[s/"95.0": 940.0/"95.0": 940.5/]
// TESTRESPONSE[s/"99.0": 980.0/"99.0": 980.1/]

As you can see, the aggregation will return a calculated value for each percentile
in the default range. If we assume response times are in milliseconds, it is
immediately obvious that the webpage normally loads in 10-725ms, but occasionally
spikes to 945-985ms.

Often, administrators are only interested in outliers -- the extreme percentiles.
We can specify just the percents we are interested in (requested percentiles
must be values between 0 and 100, inclusive):

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "percents": [ 95, 99, 99.9 ] <1>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> Use the `percents` parameter to specify particular percentiles to calculate

==== Keyed Response

By default, the `keyed` flag is set to `true`, which associates a unique string key with each bucket and returns the percentiles as a hash rather than an array. Setting the `keyed` flag to `false` will disable this behavior:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "keyed": false
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]

Response:

[source,console-result]
--------------------------------------------------
{
  ...

  "aggregations": {
    "load_time_outlier": {
      "values": [
        {
          "key": 1.0,
          "value": 10.0
        },
        {
          "key": 5.0,
          "value": 30.0
        },
        {
          "key": 25.0,
          "value": 170.0
        },
        {
          "key": 50.0,
          "value": 445.0
        },
        {
          "key": 75.0,
          "value": 720.0
        },
        {
          "key": 95.0,
          "value": 940.0
        },
        {
          "key": 99.0,
          "value": 980.0
        }
      ]
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]
// TESTRESPONSE[s/"value": 10.0/"value": 9.9/]
// TESTRESPONSE[s/"value": 30.0/"value": 29.5/]
// TESTRESPONSE[s/"value": 170.0/"value": 167.5/]
// TESTRESPONSE[s/"value": 445.0/"value": 445.0/]
// TESTRESPONSE[s/"value": 720.0/"value": 722.5/]
// TESTRESPONSE[s/"value": 940.0/"value": 940.5/]
// TESTRESPONSE[s/"value": 980.0/"value": 980.1/]

==== Script

If you need to run the aggregation against values that aren't indexed, use
a <<runtime,runtime field>>. For example, if your load times
are in milliseconds but you want percentiles calculated in seconds:

[source,console]
----
GET latency/_search
{
  "size": 0,
  "runtime_mappings": {
    "load_time.seconds": {
      "type": "long",
      "script": {
        "source": "emit(doc['load_time'].value / params.timeUnit)",
        "params": {
          "timeUnit": 1000
        }
      }
    }
  },
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time.seconds"
      }
    }
  }
}
----
// TEST[setup:latency]
// TEST[s/_search/_search?filter_path=aggregations/]
// TEST[s/"timeUnit": 1000/"timeUnit": 10/]

////
[source,console-result]
----
{
  "aggregations": {
    "load_time_outlier": {
      "values": {
        "1.0": 0.99,
        "5.0": 2.95,
        "25.0": 16.75,
        "50.0": 44.5,
        "75.0": 72.25,
        "95.0": 94.05,
        "99.0": 98.01
      }
    }
  }
}
----
////

[[search-aggregations-metrics-percentile-aggregation-approximation]]
==== Percentiles are (usually) approximate

There are many different algorithms to calculate percentiles. The naive
implementation simply stores all the values in a sorted array. To find the 50th
percentile, you simply find the value that is at `my_array[count(my_array) * 0.5]`.
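
For illustration, here is a minimal sketch of that naive approach in plain Java
(illustrative only, not Elasticsearch code; the `NaivePercentile` class is made up
and it uses the simple nearest-rank method):

[source,java]
----
import java.util.Arrays;

public class NaivePercentile {

    /** Returns the value at the given percentile (0-100) using a full sorted copy. */
    static double percentile(double[] samples, double percent) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        // Nearest-rank index for the requested percentile, clamped to the array bounds.
        int index = (int) Math.ceil(percent / 100.0 * sorted.length) - 1;
        return sorted[Math.max(0, Math.min(index, sorted.length - 1))];
    }

    public static void main(String[] args) {
        double[] loadTimes = { 10, 30, 170, 445, 720, 940, 980 };
        System.out.println("p50 = " + percentile(loadTimes, 50)); // 445.0
        System.out.println("p95 = " + percentile(loadTimes, 95)); // 980.0
    }
}
----

This is exact, but it keeps every sample in memory, which is exactly the problem
described next.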

Clearly, the naive implementation does not scale -- the sorted array grows
linearly with the number of values in your dataset. To calculate percentiles
across potentially billions of values in an Elasticsearch cluster, _approximate_
percentiles are calculated.

The algorithm used by the `percentile` metric is called TDigest (introduced by
Ted Dunning in
https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf[Computing Accurate Quantiles using T-Digests]).

When using this metric, there are a few guidelines to keep in mind:

- Accuracy is proportional to `q(1-q)`. This means that extreme percentiles (e.g. 99%)
are more accurate than less extreme percentiles, such as the median (see the sketch
after this list).
- For small sets of values, percentiles are highly accurate (and potentially
100% accurate if the data is small enough).
- As the quantity of values in a bucket grows, the algorithm begins to approximate
the percentiles. It is effectively trading accuracy for memory savings. The
exact level of inaccuracy is difficult to generalize, since it depends on your
data distribution and the volume of data being aggregated.
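
To get a feel for the first guideline, the following sketch compares exact
percentiles against t-digest estimates at the median and at an extreme percentile.
It assumes the upstream open-source t-digest library (`com.tdunning:t-digest`),
not the fork bundled with Elasticsearch, and the class and values shown are purely
illustrative:

[source,java]
----
import com.tdunning.math.stats.TDigest;
import java.util.Arrays;
import java.util.Random;

public class TDigestAccuracySketch {
    public static void main(String[] args) {
        Random random = new Random(42);
        int count = 1_000_000;
        double[] samples = new double[count];
        TDigest digest = TDigest.createMergingDigest(100); // default compression

        for (int i = 0; i < count; i++) {
            samples[i] = random.nextDouble(); // uniform distribution
            digest.add(samples[i]);
        }
        Arrays.sort(samples);

        // Per the guideline above, the error at q=0.99 should be smaller than at the median.
        for (double q : new double[] { 0.50, 0.99 }) {
            double exact = samples[(int) (q * (count - 1))];
            double estimate = digest.quantile(q);
            System.out.printf("q=%.2f exact=%.6f tdigest=%.6f error=%.6f%n",
                q, exact, estimate, Math.abs(exact - estimate));
        }
    }
}
----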

The following chart shows the relative error on a uniform distribution depending
on the number of collected values and the requested percentile:

image:images/percentiles_error.png[]

It shows how precision is better for extreme percentiles. The reason why error diminishes
for a large number of values is that the law of large numbers makes the distribution of
values more and more uniform and the t-digest tree can do a better job at summarizing
it. It would not be the case on more skewed distributions.

[WARNING]
====
Percentile aggregations are also
{wikipedia}/Nondeterministic_algorithm[non-deterministic].
This means you can get slightly different results using the same data.
====

[[search-aggregations-metrics-percentile-aggregation-compression]]
==== Compression

Approximate algorithms must balance memory utilization with estimation accuracy.
This balance can be controlled using a `compression` parameter:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "tdigest": {
          "compression": 200 <1>
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]

<1> Compression controls memory usage and approximation error

// tag::t-digest[]
The TDigest algorithm uses a number of "nodes" to approximate percentiles -- the
more nodes available, the higher the accuracy (and larger memory footprint) proportional
to the volume of data. The `compression` parameter limits the maximum number of
nodes to `20 * compression`.

Therefore, by increasing the compression value, you can increase the accuracy of
your percentiles at the cost of more memory. Larger compression values also
make the algorithm slower since the underlying tree data structure grows in size,
resulting in more expensive operations. The default compression value is
`100`.

A "node" uses roughly 32 bytes of memory, so under worst-case scenarios (large amount
of data which arrives sorted and in-order) the default settings will produce a
TDigest roughly 64KB in size. In practice data tends to be more random and
the TDigest will use less memory.
// end::t-digest[]
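
For example, with the default `compression` of `100`, the worst case is
`20 * 100 = 2000` nodes, and `2000 nodes * 32 bytes = 64000 bytes`, which is
where the roughly 64KB figure above comes from.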

[[search-aggregations-metrics-percentile-aggregation-execution-hint]]
==== Execution hint

The default implementation of TDigest is optimized for performance, scaling to millions or even
billions of sample values while maintaining acceptable accuracy levels (close to 1% relative error
for millions of samples in some cases). There's an option to use an implementation optimized
for accuracy by setting the `execution_hint` parameter to `high_accuracy`:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "tdigest": {
          "execution_hint": "high_accuracy" <1>
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]

<1> Optimize TDigest for accuracy, at the expense of performance

This option can lead to improved accuracy (relative error close to 0.01% for millions of samples in some
cases) but then percentile queries take 2x-10x longer to complete.

==== HDR Histogram

https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation
that can be useful when calculating percentiles for latency measurements as it can be faster than the t-digest implementation
with the trade-off of a larger memory footprint. This implementation maintains a fixed worst-case percentage error (specified
as a number of significant digits). This means that if data is recorded with values from 1 microsecond up to 1 hour
(3,600,000,000 microseconds) in a histogram set to 3 significant digits, it will maintain a value resolution of 1 microsecond
for values up to 1 millisecond and 3.6 seconds (or better) for the maximum tracked value (1 hour).
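
As a rough illustration of that resolution guarantee, here is a sketch using the
https://github.com/HdrHistogram/HdrHistogram[HdrHistogram] Java library directly
(the class name and sample values are hypothetical, and this is not how Elasticsearch
invokes the library internally):

[source,java]
----
import org.HdrHistogram.Histogram;

public class HdrSketch {
    public static void main(String[] args) {
        // Track values from 1 microsecond up to 1 hour (3,600,000,000 microseconds)
        // with 3 significant digits, mirroring the example above.
        Histogram histogram = new Histogram(3_600_000_000L, 3);

        histogram.recordValue(1_500);          // 1.5 ms load time, in microseconds
        histogram.recordValue(45_000);         // 45 ms
        histogram.recordValue(3_600_000_000L); // the 1 hour outlier

        System.out.println("p95 = " + histogram.getValueAtPercentile(95.0) + " microseconds");
    }
}
----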

The HDR Histogram can be used by specifying the `hdr` object in the request:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "percents": [ 95, 99, 99.9 ],
        "hdr": { <1>
          "number_of_significant_value_digits": 3 <2>
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]

<1> `hdr` object indicates that HDR Histogram should be used to calculate the percentiles and specific settings for this algorithm can be specified inside the object
<2> `number_of_significant_value_digits` specifies the resolution of values for the histogram in number of significant digits

The HDRHistogram only supports positive values and will error if it is passed a negative value. It is also not a good idea to use
the HDRHistogram if the range of values is unknown as this could lead to high memory usage.

==== Missing value

The `missing` parameter defines how documents that are missing a value should be treated.
By default they will be ignored but it is also possible to treat them as if they
had a value.

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "grade_percentiles": {
      "percentiles": {
        "field": "grade",
        "missing": 10 <1>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]

<1> Documents without a value in the `grade` field will fall into the same bucket as documents that have the value `10`.