Commit graph

532 commits

Nik Everett
0683c90ded
REST tests for normalize agg (#89629)
This adds a REST test for the normalize pipeline agg so we have
backwards compatibility tests for it.
2022-08-26 14:18:46 -04:00
István Zoltán Szabó
7602015384
[DOCS] Improves frequent items aggregation docs (#89122) 2022-08-08 15:46:29 +02:00
Benjamin Trent
46fc42b817
[ML] Make bucket_count_ks_test aggregation generally available (#88657)
Initially released in 7.14, bucket_count_ks_test is now generally available.
2022-07-25 13:30:48 -04:00
Benjamin Trent
239d45a019
[ML] make bucket_correlation aggregation generally available (#88655)
Originally released in 7.14, bucket_correlation is now generally available.
2022-07-21 07:20:09 -04:00
Benjamin Trent
94f2544998
Adding cardinality support for random_sampler agg (#86838)
This adds support for the `cardinality` aggregation within a random_sampler.

This use case is helpful for determining the ratio of unique values to the total number of documents within the sampled set.
2022-07-21 07:19:35 -04:00
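A minimal request sketch of the use case above (not taken from the PR; the index name and `user_id` field are hypothetical):

```
POST my-index/_search?size=0
{
  "aggs": {
    "sampling": {
      "random_sampler": {
        "probability": 0.01      // sample roughly 1% of matching documents
      },
      "aggs": {
        "unique_users": {
          "cardinality": {
            "field": "user_id"   // approximate unique count over the sample
          }
        }
      }
    }
  }
}
```

Dividing `unique_users.value` by the sampled doc count gives the unique-to-total ratio the commit mentions.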
Sean Letendre
67cacde18b
Corrected an incomplete sentence. (#86542)
* Corrected an incomplete sentence.

* Update docs/reference/aggregations/metrics/avg-aggregation.asciidoc

Co-authored-by: Christos Soulios <1561376+csoulios@users.noreply.github.com>

Co-authored-by: David Kilfoyle <41695641+kilfoyle@users.noreply.github.com>
2022-07-12 09:19:58 -04:00
Mark Tozzi
9ee6a19187
Add ability to select execution mode for cardinality aggregation (#87704)
Plumbs through a new parameter for the cardinality aggregation, to allow configuring the execution mode. This can have significant impacts on speed and memory usage. This PR exposes three collection modes and two heuristics that we can tune going forward. All of these are treated as hints and can be silently ignored, e.g. if not applicable to the given field type. I've changed the default behavior to optimize for time, which potentially uses more memory. Users can override this for the old behavior if needed.
2022-07-05 09:11:22 -04:00
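A hedged sketch of how the hint might be supplied, assuming it is exposed as an `execution_hint` setting on the cardinality agg (the parameter name and value are not spelled out in the commit message; index and field names are hypothetical):

```
POST my-index/_search?size=0
{
  "aggs": {
    "unique_products": {
      "cardinality": {
        "field": "product_id",
        "execution_hint": "save_memory_heuristic"   // treated as a hint; may be ignored for some field types
      }
    }
  }
}
```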
apeltop
71234f7464
[DOCS] Fix typos in docs (#88226) 2022-07-05 11:02:29 +02:00
David Roberts
93bc2e382f
[ML] Replace the implementation of the categorize_text aggregation (#85872)
This replaces the implementation of the categorize_text aggregation
with the new algorithm that was added in #80867. The new algorithm
works in the same way as the ML C++ code used for categorization jobs
(and now includes the fixes of elastic/ml-cpp#2277).

The docs are updated to reflect the workings of the new implementation.
2022-05-23 18:46:13 +01:00
Umut Uz
53461f89f1 Remove duplicate text from cardinality aggs docs (#86615)
The same explanation is repeated twice within a section.
2022-05-19 11:51:31 -07:00
Craig Taverner
5f7ea792ac
Soft-deprecation of point/geo_point formats (#86835)
* Soft-deprecation of point/geo_point formats

Since GeoJSON and WKT are now common formats for all three types
(geo_shape, geo_point and point), we decided to soft-deprecate the
other point formats, ordered as follows:
* GeoJSON (object with keys `type` and `coordinates`)
* WKT `POINT(x y)`
* Object with keys `lat` and `lon` (or `x` and `y` for point)
* Array [lon,lat]
* String `"lat,lon"` (or `"x,y"` in point)
* String with geohash (only in `geo_point`)

The geohash is last because it is only in one field type.
The string version is second to last because it is the most
controversial, being the only format that reverses the coordinate
order relative to all the others (for geo_point only, since the
coordinates are not reversed in point).

In addition we replaced many examples in both documentation and tests
to prioritize WKT over the plain string format.

Many remaining examples of array format or object with keys still exist
and could be replaced by, for example, GeoJSON, if we feel the need.

* Incorrect quote position
2022-05-17 23:46:43 +02:00
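A small illustration of the two preferred formats when indexing a `geo_point` (index and field names are hypothetical; GeoJSON uses `[lon, lat]` order, WKT uses `POINT(x y)`):

```
PUT my-locations/_doc/1
{
  "location": {                        // GeoJSON object
    "type": "Point",
    "coordinates": [ -71.34, 41.12 ]
  }
}

PUT my-locations/_doc/2
{
  "location": "POINT (-71.34 41.12)"   // WKT
}
```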
Mark Tozzi
54efc59eff
Clarify risks around ordering terms aggregation (#86528)
Add some details as to why some terms orderings are worse than others.


Co-authored-by: Adam Locke <adam.locke@elastic.co>
2022-05-16 11:05:22 -04:00
István Zoltán Szabó
95ef40656f
[DOCS] Adds more details to the frequent items agg documentation (#86661)
Co-authored-by: Mark Tozzi <mark.tozzi@gmail.com>
2022-05-16 10:24:14 +02:00
István Zoltán Szabó
e590e900a4
[DOCS] Adds frequent items agg docs (#86037)
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
2022-05-05 16:07:24 +02:00
Benjamin Trent
237e345d71
[ML][Docs] fix minimum buckets for change_point agg (#86396) 2022-05-04 09:37:46 -04:00
Benjamin Trent
c49b92e425
Allow bucket paths to specify _count within a bucket (#85720)
Users should be able to reference specific metrics, or the doc count, within a specific bucket key.

An example is `agg["bucket_foo"]._count`. 

This change now allows that.

closes: https://github.com/elastic/elasticsearch/issues/76320
2022-04-29 08:42:46 -04:00
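A hedged sketch of the kind of path this enables, mirroring the `agg["bucket_foo"]._count` form quoted above inside a `bucket_script` (index, field, and bucket names are hypothetical):

```
POST sales/_search?size=0
{
  "aggs": {
    "histo": {
      "date_histogram": { "field": "timestamp", "calendar_interval": "day" },
      "aggs": {
        "sale_type": {
          "filters": {
            "filters": {
              "hats": { "term": { "type": "hat" } },
              "bags": { "term": { "type": "bag" } }
            }
          }
        },
        "hat_ratio": {
          "bucket_script": {
            "buckets_path": {
              "hats": "sale_type['hats']._count",   // doc count of one named bucket
              "total": "_count"
            },
            "script": "params.hats / params.total"
          }
        }
      }
    }
  }
}
```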
James Garside
fca3487395
Updated format parameter description to reference Java decimal format (#86163) 2022-04-25 20:52:44 +01:00
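For context, the `format` parameter on numeric metric aggs takes a Java `DecimalFormat`-style pattern; a brief sketch (index and field names are hypothetical):

```
POST exams/_search?size=0
{
  "aggs": {
    "avg_grade": {
      "avg": {
        "field": "grade",
        "format": "#.0"   // DecimalFormat pattern; returned in value_as_string
      }
    }
  }
}
```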
Elasticsearch addict
7b2511e22b Update histogram-aggregation.asciidoc (#85356)
Fix small grammatical mistake.

Closes #85355
2022-03-28 12:27:32 -07:00
Salvatore Campagna
db6c58ed45
fix: use the correct field name when reading data from multi fields (#84752)
When using a multi-field, we need to extract data from the document
using the correct field name, which is the name of the top-level field.
Here we delegate extraction of the correct name to a method in the
SearchContext that is wrapped by the AggregationContext.

Issue: #82918
2022-03-11 17:11:26 +01:00
Abele Mălan
9ecb96fcf3
Fix some typos in plugins & reference docs (#84667)
This pull request removes a few instances of duplicate words or
punctuation and erroneous spelling from the docs.
2022-03-07 12:29:58 -05:00
Benjamin Trent
cf151b53fe
[ML] adds new change_point pipeline aggregation (#83428)
adds a new `change_point` sibling pipeline aggregation.

This aggregation detects a change point in a multi-bucket aggregation.

Example:
```
POST kibana_sample_data_flights/_search
{
  "size": 0,
  "aggs": {
    "histo": {
      "date_histogram": {
        "field": "timestamp",
        "fixed_interval": "3h"
      },
      "aggs": {
        "ticket_price": {
          "max": {
            "field": "AvgTicketPrice"
          }
        }
      }
    },
    "changes": {
      "change_point": {
        "buckets_path": "histo>ticket_price"
      }
    }
  }
}
```

Response
```
{
  /*<snip>*/ 
  "aggregations" : {
    "histo" : {
      "buckets" : [ /*<snip>*/ ]
    },
    "changes" : {
      "bucket" : {
        "key" : "2022-01-28T23:00:00.000Z",
        "doc_count" : 48,
        "ticket_price" : {
          "value" : 1187.61083984375
        }
      },
      "type" : {
        "distribution_change" : {
          "p_value" : 0.023753965139433175,
          "change_point" : 40
        }
      }
    }
  }
}
```
2022-03-04 07:00:58 -05:00
Benjamin Trent
b592d2bf01
New random_sampler aggregation for sampling documents in aggregations (#84363)
This adds a new sampling aggregation that performs a background sampling over all documents in an index. 

The syntax is as follows:
```
{
  "aggregations": {
    "sampling": {
      "random_sampler": {
        "probability": 0.1
      },
      "aggs": {
        "price_percentiles": {
          "percentiles": {
            "field": "taxful_total_price"
          }
        }
      }
    }
  }
}
```

This aggregation provides fast random sampling over the entire document set in order to speed up costly aggregations.

Testing this over a variety of aggregations and data sets, the median speed-up when sampling at `0.001` over millions of documents is around 70x.

The relative error rate depends on the size of the data and the kind of aggregation. Here are some typical expected numbers when sampling over tens of millions of documents. `p` is the configured probability and `n` is the number of documents matched by your provided filter query.
2022-03-02 14:32:30 -05:00
James Rodewig
74e4add3a8
[DOCS] Update sum aggregation for histograms (#84493) (#84496)
Fixes an error in the sum aggregation example for histograms and updates its test snippets.

Closes #84491

Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>
(cherry picked from commit fb45ac9dea)

Co-authored-by: Maja Grubic <maja.grubic@elastic.co>
2022-03-01 08:42:05 -05:00
Lisa Cawley
4fbbcda494
[DOCS] Fix nesting in bucket correlation aggregation (#83816) 2022-02-11 11:14:11 -08:00
James Rodewig
d31bdd6bf4
[DOCS] Remove unneeded callouts from snippets (#83798)
These callouts aren't referenced anywhere. Leaving them in can be confusing.
2022-02-10 15:04:46 -05:00
James Rodewig
280fd2fff7
[DOCS] Fix min/max agg snippets for histograms (#83695)
* Updates the `min` and `max` snippets for histograms. These should now run as docs integration tests.
* Fixes a copy/paste error in the `max` aggregation snippet for histograms.

Relates to https://github.com/elastic/elasticsearch/pull/83384
2022-02-08 19:48:15 -05:00
Salvatore Campagna
9de75c2ac5
Add an aggregator for IPv4 and IPv6 subnets (#82410)
Parameters accepted by the aggregator include:

* prefix_length (integer, required): defines the network size of the subnet mask;
* is_ipv6 (boolean, optional, default: false): defines whether the prefix applies to IPv6 (true) or IPv4 (false) IP addresses;
* min_doc_count (integer, optional, default: 1): defines the minimum number of documents for a bucket to be returned in the results;
* append_prefix_length (boolean, optional, default: false): defines if the prefix length is appended to the IP address key when returning results;
* keyed (boolean, optional, default: false): defines whether the result is returned keyed or as an array of buckets;

Each bucket returned by the aggregator represents a different subnet. IPv4 subnets also include a netmask field set to the subnet mask value (i.e. "255.255.0.0" for a /16 subnet).

Related to: #57964 and elastic/kibana#68424
2022-01-28 11:59:07 +01:00
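A hedged request sketch, assuming the aggregator is registered as `ip_prefix` (the commit message does not name it; index and field names are hypothetical):

```
POST network-traffic/_search?size=0
{
  "aggs": {
    "ipv4_subnets": {
      "ip_prefix": {
        "field": "ipv4_address",
        "prefix_length": 24        // bucket addresses into /24 subnets
      }
    }
  }
}
```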
Ignacio Vera
0873893bb7
New GeoHexGrid aggregation (#82924)
This commit introduces a new geogrid aggregation called GeoHexGridAggregation,
which is based on Uber's H3 grid. It only supports geo_point fields.
2022-01-27 07:45:51 +01:00
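A hedged sketch, assuming the new aggregation is exposed as `geohex_grid` (the commit names only the internal GeoHexGridAggregation class; index and field names are hypothetical):

```
POST museums/_search?size=0
{
  "aggs": {
    "hex_grid": {
      "geohex_grid": {
        "field": "location",   // must be a geo_point field
        "precision": 4         // H3 resolution
      }
    }
  }
}
```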
James Rodewig
63f228e24e
[DOCS] Re-add paragraph noting doc_count is approximate (#83154)
This paragraph was accidentally removed as part of #79205. Also fixes a minor heading capitalization error.
2022-01-26 11:07:59 -05:00
James Rodewig
ccac525d90
[DOCS] Fix typo (#82344) (#82379)
(cherry picked from commit 129d0fc91d)

Co-authored-by: Oleks <oleks@users.noreply.github.com>
2022-01-10 13:47:03 -05:00
William Chaparro
c8e8104f66
[DOCS] Remove experimental language from HDR Histo percentiles/ranks (#81773)
Per issue #60780, the team decided to remove the experimental language from HDR histogram percentiles and ranks. The feature has been in production for quite some time.
closes #60780
2021-12-15 14:35:08 -05:00
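For reference, a percentiles request using the HDR histogram implementation this commit de-labels (index and field names are hypothetical):

```
POST latency/_search?size=0
{
  "aggs": {
    "load_time_percentiles": {
      "percentiles": {
        "field": "load_time",
        "percents": [ 95, 99, 99.9 ],
        "hdr": {
          "number_of_significant_value_digits": 3   // use HDR histogram instead of TDigest
        }
      }
    }
  }
}
```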
James Rodewig
04318961b9
[DOCS] Clarify supported parameters for terms value source (#81775)
The composite aggregation's `terms` value source doesn't support the same set of
parameters as the `terms` aggregation.

Closes #81431.
2021-12-15 14:32:16 -05:00
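A minimal composite request with a `terms` value source, for contrast with the standalone `terms` aggregation (index and field names are hypothetical; per the commit above, the value source supports a narrower set of parameters than the `terms` agg):

```
POST sales/_search?size=0
{
  "aggs": {
    "my_buckets": {
      "composite": {
        "sources": [
          { "product": { "terms": { "field": "product" } } }
        ]
      }
    }
  }
}
```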
Salvatore Campagna
2b5ebba94a
[DOCS] Fix the weighted average documentation (#81307)
The documentation states that if the `weight` field is missing, and no
explicit missing configuration is provided, a default value of 1 is used.
This is incorrect and does not match the implementation of the weighted
average aggregator. In this specific case, the document is skipped instead.
2021-12-03 23:28:41 +01:00
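Given that behavior, the way to get a default weight of 1 is to set `missing` explicitly; a brief sketch (index and field names are hypothetical):

```
POST exams/_search?size=0
{
  "aggs": {
    "weighted_grade": {
      "weighted_avg": {
        "value": { "field": "grade" },
        "weight": { "field": "weight", "missing": 1 }   // without "missing", docs lacking a weight are skipped
      }
    }
  }
}
```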
James Rodewig
cf30b54a58
[DOCS] Fix typo in gap_policy's default value for serial differencing aggregation (#80893) (#80912)
Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>

Co-authored-by: Simon Stücher <stchr@users.noreply.github.com>
2021-11-22 13:43:16 -05:00
James Rodewig
f56a0f4b66
[DOCS] Remove testenv annotations from doc snippet tests (#80023)
Removes `testenv` annotations and related code. These annotations originally let you skip x-pack snippet tests in the docs. However, that's no longer possible.

Relates to #79309, #31619
2021-11-05 18:38:50 -04:00
Nik Everett
66de804a9e
Rework docs for the size of terms agg (#79205)
The `terms` agg picks the top `size` terms in a single scatter/gather
pass across all the shards. For the default `order` and if you `order`
by `_key` this works quite well. Some errors creep in, but it's fairly
easy to point to them and understand them. But ordering by doc count
ascending is like inviting the error vampire into your agg. It's super
easy to get inaccurate results. This updates the docs to be more stark
about it. Closes #72684
2021-11-01 17:07:31 -04:00
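For illustration, the ordering the reworked docs warn about looks like this (index and field names are hypothetical); ascending doc-count order is the case where the error bounds become hard to reason about:

```
POST products/_search?size=0
{
  "aggs": {
    "rarest_tags": {
      "terms": {
        "field": "tags",
        "size": 10,
        "order": { "_count": "asc" }   // error-prone: rare terms may be missed on some shards
      }
    }
  }
}
```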
Benjamin Trent
f245c477d1
[ML] fail on poor configuration for categorize_text (#79586)
This commit fixes a handful of bugs with the categorize_text agg:

 - The agg now fails on fields that are not text fields
 - Limits the number of tokens categorized
 - Validates the configuration inputs to disallow settings above static maximums
2021-10-21 12:14:27 -04:00
Christos Soulios
de93d95dcf
Fix rate agg with custom _doc_count (#79346)
When running a rate aggregation without setting the field parameter, the result is computed based on the bucket doc_count.

This PR adds support for a custom _doc_count field.

Closes #77734
2021-10-19 13:25:54 +03:00
Benjamin Trent
843fa42c1e
[ML] add new normalize_above parameter to p_value significant terms heuristic (#78833)
This commit adds the new normalize_above parameter to the p_value significant
terms heuristic.

This parameter allows for consistent significance results at various scales. When a total count (in or out of the background set) is above the normalize_above parameter, both the total set and the set including the term are scaled by normalize_above/count, where count is the term count in the set or the total set size.
2021-10-12 10:38:09 -04:00
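A heavily hedged sketch of where the parameter might sit, assuming it is passed inside the `p_value` heuristic of a `significant_terms` aggregation (the query, index, field names, and the companion `background_is_superset` setting are illustrative assumptions, not taken from the commit):

```
POST service-logs/_search?size=0
{
  "query": { "term": { "deployment": "canary" } },
  "aggs": {
    "significant_error_codes": {
      "significant_terms": {
        "field": "error_code",
        "p_value": {
          "background_is_superset": false,
          "normalize_above": 1000   // scale counts above this threshold for consistent significance
        }
      }
    }
  }
}
```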
Stef Nestor
ddc1a0df28
[DOCS] Add prod warning to composite agg (#78723)
The composite aggregation is considered expensive. Users should perform load testing before deploying it in production.

Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>
2021-10-06 13:44:12 -04:00
Benjamin Trent
7a7fffcb5a
[ML] Text/Log categorization multi-bucket aggregation (#71752)
This commit adds a new multi-bucket aggregation: `categorize_text`

The aggregation follows a similar design to significant text in that it reads from `_source`
and re-analyzes the text as it is read.

The key difference is that it does not use the indexed field's analyzer, but instead relies on
the `ml_standard` tokenizer with specialized ML token filters. The tokenizer and filters are the
same ones that machine learning categorization anomaly jobs use.

The high level logical flow is as follows:
 - at each shard, read in the text field with a custom analyzer using `ml_standard` tokenizer
 - Read in the particular tokens from the analyzer
 - Feed these tokens to a token tree algorithm (an adaptation of the drain categorization algorithm)
 - Gather the individual log categories (the leaf nodes), sort them by doc_count, ship those buckets to be merged
 - Merge all buckets that have the EXACT same key
 - Once all buckets are merged, pass those keys + counts to a new token tree for additional merging
 - That tree builds the final buckets and that is returned to the user

Algorithm explanation:

 - Each log is parsed with the ml-standard tokenizer
 - each token is passed into a token tree
 - Up to `max_match_token`, each token is stored in the tree; at `max_match_token+1` (or `len(tokens)`) a log group is created
 - If another log group exists at that leaf, merge it if they have `similarity_threshold` percentage of tokens in common
     - merging simply replaces tokens that are different in the group with `*`
 - If a layer in the tree reaches `max_unique_tokens`, we add a `*` child and any new tokens are passed through there. The catch is that on the final merge we first attempt to merge together the subtrees with the smallest document counts, especially when the new subtree has a higher document count.

## Aggregation configuration.

Here is an example on some openstack logs
```js
POST openstack/_search?size=0
{
  "aggs": {
    "categories": {
      "categorize_text": {
        "field": "message", // The field to categorize
        "similarity_threshold": 20, // merge log groups if they are this similar
        "max_unique_tokens": 20, // Max Number of children per token position
        "max_match_token": 4, // Maximum tokens to build prefix trees
        "size": 1
      }
    }
  }
}
```

This will return buckets like
```json
"aggregations" : {
    "categories" : {
      "buckets" : [
        {
          "doc_count" : 806,
          "key" : "nova-api.log.1.2017-05-16_13 INFO nova.osapi_compute.wsgi.server * HTTP/1.1 status len time"
        }
      ]
    }
  }
```
2021-10-04 11:49:16 -04:00
Lukas Wegmann
421b3e80de
Document missing_order param for composite aggregations (#77839)
Documents the missing_order parameter for composite aggregations, which was introduced in #76740
2021-09-27 09:57:45 +02:00
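A short sketch of the documented parameter (index and field names are hypothetical; `missing_order` is only meaningful together with `missing_bucket`):

```
POST sales/_search?size=0
{
  "aggs": {
    "my_buckets": {
      "composite": {
        "sources": [
          {
            "product": {
              "terms": {
                "field": "product",
                "missing_bucket": true,
                "missing_order": "last"   // place the missing bucket after all other buckets
              }
            }
          }
        ]
      }
    }
  }
}
```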
István Zoltán Szabó
1d367abffc
[DOCS] Modifies aggregations title abbreviation to follow convention. (#78252) 2021-09-23 16:22:27 +02:00
James Rodewig
15baf4017a
[DOCS] Remove _term and _time agg order keys (#78209)
Adds an 8.0 breaking change for the removal of the `_term` and `_time`
agg `order` keys.

Relates to #39450
2021-09-22 15:54:14 -04:00
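For migration context, `_key` is the replacement for the removed keys; a brief sketch of a terms agg ordered by key (index and field names are hypothetical):

```
POST my-index/_search?size=0
{
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "order": { "_key": "asc" }   // use "_key" where "_term" was used before 8.0
      }
    }
  }
}
```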
edh-oss
62a471aefe
Update JSON parser and snippets (#77983)
Related to issue #77823

This does the following:

- Updates several asciidoc files that contained code snippets with
  invalid JSON, most involving unnecessary trailing commas.

- Makes the switch from the Groovy JSON parser to the Jackson parser,
  pursuant to the general goal of eliminating Groovy dependence.

- Makes testing of JSON validity at build time more strict.

Note that this update still allows backslash escaping for any
character. Currently that matters because of the file
"docs/reference/ml/anomaly-detection/apis/get-datafeed-stats.asciidoc",
specifically this part:

    "attributes" : {
      "ml.machine_memory" :
        "$body.datafeeds.0.node.attributes.ml\.machine_memory",
      "ml.max_open_jobs" : "512"
    }

It's not clear to me what change, if any, is appropriate there. So,
I've left in the escaped period and configured the parser to ignore
it for the time being.
2021-09-20 11:08:26 +01:00
James Rodewig
de59fd2b43
[DOCS] Include index in range agg snippets (#77290) (#77568)
Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>

Co-authored-by: xiaozhiliaoo(小知了) <772654204@qq.com>
2021-09-10 12:36:05 -04:00
Benjamin Trent
100f222650
Adds support for the rate aggregation under a composite agg (#76992)
The rate aggregation should support being a sub-aggregation
of a composite agg.

The catch is that the composite aggregation must include a
date histogram source. Other sources can be present, but there
must be exactly one date histogram source; otherwise the rate
aggregation does not know which interval to compare its unit
rate to.

closes https://github.com/elastic/elasticsearch/issues/76988
2021-09-01 07:29:13 -04:00
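A hedged sketch of the supported shape (index and field names are hypothetical): a composite agg with a single date histogram source and a `rate` sub-aggregation.

```
POST sales/_search?size=0
{
  "aggs": {
    "by_day": {
      "composite": {
        "sources": [
          {
            "date": {
              "date_histogram": {
                "field": "timestamp",
                "calendar_interval": "day"   // exactly one date histogram source
              }
            }
          }
        ]
      },
      "aggs": {
        "daily_sales": {
          "rate": {
            "field": "price",
            "unit": "day"   // unit rate compared against the date histogram interval
          }
        }
      }
    }
  }
}
```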
James Rodewig
8ba07b4b97
[DOCS] Add filter example to nested agg docs (#76118)
Changes:
* Simplifies and formats several snippets in the nested agg docs
* Adds a `filter` sub-aggregation example
2021-08-05 09:48:28 -04:00
James Rodewig
fc0ac1923d
[DOCS] Correct spelling for geo terms (#76028)
Changes:
* Use "geopoint" when not referring to the literal field type
* Use "geoshape" when not referring to the literal field type or query type
* Use "GeoJSON" consistently
2021-08-03 09:55:48 -04:00
István Zoltán Szabó
60f3c77e3f
[DOCS] Adds p-value heuristic to significant terms aggregation (#75369)
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
2021-07-27 09:12:45 +02:00