* Documentation for geohex_grid over geo_shape
The feature to add support for geohex_grid aggregations over geo_shape
fields was added in https://github.com/elastic/elasticsearch/pull/91956.
This is the associated documentation for that.
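For reference, a geohex_grid request over a geo_shape field looks roughly like this (the bucket name, field name, and precision value are illustrative):
```
{
  "aggs": {
    "hexes": {
      "geohex_grid": {
        "field": "geometry",
        "precision": 4
      }
    }
  }
}
```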
* Update docs/reference/aggregations/bucket/geohexgrid-aggregation.asciidoc
Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>
* Fix explanation for geo_point vs geo_shape proj
When aggregating geohex_grid over geo_shape we use the equirectangular projection because
the underlying Lucene index indexes and searches the polygons that way.
* Correct spelling
According to Grammarly, "therefor" is not an alternative spelling
of "therefore". We should use the conjunctive form here.
See https://www.grammarly.com/blog/therefore-vs-therefor/
Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>
* [DOCS] typo in date_histogram aggregation example
The field name is fixed.
* Update docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc
Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>
Add a filter to the frequent_items agg that filters documents from the analysis while still calculating support on the full set.
A filter is specified top-level in frequent_items:
"frequent_items": {
"filter": {
"term": {
"host.name.keyword": "i-12345"
}
},
...
The above filters out documents that don't match, but still counts them when calculating support. That's in contrast to
specifying a query at the top level, in which case you find the same item sets but don't know their importance relative
to the full document set.
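For context, a complete request might look roughly like this sketch (the field names, the `minimum_set_size` value, and the surrounding structure are illustrative):
```
{
  "size": 0,
  "aggs": {
    "my_item_sets": {
      "frequent_items": {
        "minimum_set_size": 3,
        "fields": [
          { "field": "category.keyword" },
          { "field": "geoip.city_name" }
        ],
        "filter": {
          "term": { "host.name.keyword": "i-12345" }
        }
      }
    }
  }
}
```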
Adds more detail about the meaning of the result
fields of the `categorize_text` aggregation, and
advice about how to use these fields when searching
for messages that match the categories.
Follow-up to #90723
The new `regex` field in `categorize_text` output is created in
the same way as the `regex` field that appears in the category
definitions created by anomaly detection jobs that do categorization.
It consists of the terms that occur in the same order for every
message that matches the category, separated with a `.+?` wildcard.
It therefore matches the messages in the category and enforces the order
of the terms that was common to all messages used to create the category.
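As a hypothetical illustration (the field values and the exact response keys may differ), a category whose messages all contain the tokens `Node` and `stopped` in that order would come back as a bucket along these lines:
```
{
  "key": "Node stopped",
  "doc_count": 3,
  "regex": ".*?Node.+?stopped.*?",
  "max_matching_length": 30
}
```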
It is not recommended to use the regex as the primary mechanism for
searching for the original documents that were categorized. Search
using a regular expression is very slow. Instead, the terms of the
category should be used to search for matching documents, as a
terms search can use the inverted index and hence be much faster.
However, there may be situations where it is useful to use the
`regex` field to test whether a small set of messages that have not
been indexed match the category.
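For example, to find the documents behind the hypothetical category above, a match query on the category terms with `operator` set to `and` is usually preferable to a regexp query (the `message` field name is an assumption):
```
{
  "query": {
    "match": {
      "message": {
        "query": "Node stopped",
        "operator": "and"
      }
    }
  }
}
```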
Added Cartesian support for centroid aggregation
* First draft of cartesian-centroid docs
However, this is largely a duplicate of the geo-centroid docs, since the behaviour is essentially identical. We should consider merging them.
* Work on isAggregatable caused a minor logic conflict. When that work was done, Point and Shape were not aggregatable, but now they are.
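For reference, the cartesian variant is requested in the same shape as geo_centroid, just against a `point` or `shape` field (the bucket and field names below are illustrative):
```
{
  "aggs": {
    "centroid": {
      "cartesian_centroid": {
        "field": "location"
      }
    }
  }
}
```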
This adds support for the `cardinality` aggregation within a random_sampler.
This use case is helpful in determining the ratio of unique values to the total number of documents within the sampled set.
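A sketch of that combination (the field name and probability are illustrative):
```
{
  "aggs": {
    "sampling": {
      "random_sampler": {
        "probability": 0.01
      },
      "aggs": {
        "unique_hosts": {
          "cardinality": {
            "field": "host.name"
          }
        }
      }
    }
  }
}
```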
Plumbs through a new parameter for the cardinality aggregation, to allow configuring the execution mode. This can have significant impacts on speed and memory usage. This PR exposes three collection modes and two heuristics that we can tune going forward. All of these are treated as hints and can be silently ignored, e.g. if not applicable to the given field type. I've changed the default behavior to optimize for time, which potentially uses more memory. Users can override this for the old behavior if needed.
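Assuming the parameter is exposed as the cardinality agg's `execution_hint`, a request that opts back into the memory-saving behavior would look roughly like this (the field name and hint value are illustrative):
```
{
  "aggs": {
    "unique_hosts": {
      "cardinality": {
        "field": "host.name",
        "execution_hint": "save_memory_heuristic"
      }
    }
  }
}
```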
This replaces the implementation of the categorize_text aggregation
with the new algorithm that was added in #80867. The new algorithm
works in the same way as the ML C++ code used for categorization jobs
(and now includes the fixes of elastic/ml-cpp#2277).
The docs are updated to reflect the workings of the new implementation.
* Soft-deprecation of point/geo_point formats
Since GeoJSON and WKT are now common formats for all three types
(geo_shape, geo_point and point), we decided to soft-deprecate the
other point formats by ordering them as follows:
* GeoJSON (object with keys `type` and `coordinates`)
* WKT `POINT(x y)`
* Object with keys `lat` and `lon` (or `x` and `y` for point)
* Array [lon,lat]
* String `"lat,lon"` (or `"x,y"` in point)
* String with geohash (only in `geo_point`)
The geohash is last because it is only in one field type.
The string version is second to last because it is the most controversial,
being the only format to reverse the coordinate order relative to all other
formats (for geo_point only, since the coordinates are not reversed
in point).
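Concretely, the ordering above corresponds to `geo_point` document values like the following, from most to least preferred (the field name and coordinate values, including the geohash string, are illustrative):
```
{ "location": { "type": "Point", "coordinates": [ -71.34, 41.12 ] } }
{ "location": "POINT (-71.34 41.12)" }
{ "location": { "lat": 41.12, "lon": -71.34 } }
{ "location": [ -71.34, 41.12 ] }
{ "location": "41.12,-71.34" }
{ "location": "drm3btev3e86" }
```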
In addition we replaced many examples in both documentation and tests
to prioritize WKT over the plain string format.
Many remaining examples of array format or object with keys still exist
and could be replaced by, for example, GeoJSON, if we feel the need.
* Incorrect quote position
Users should be able to reference metrics or keys within a specific bucket key.
An example is `agg["bucket_foo"]._count`.
This change now allows that.
closes: https://github.com/elastic/elasticsearch/issues/76320
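A rough sketch of how such a path might be used from a pipeline agg (the agg, field, and bucket-key names are all illustrative, and the single-quote form of the key is one possible spelling):
```
{
  "aggs": {
    "daily": {
      "date_histogram": { "field": "@timestamp", "calendar_interval": "day" },
      "aggs": {
        "agg": {
          "terms": { "field": "category.keyword" }
        },
        "keep_interesting_days": {
          "bucket_selector": {
            "buckets_path": { "foo_count": "agg['bucket_foo']._count" },
            "script": "params.foo_count > 10"
          }
        }
      }
    }
  }
}
```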
When using a multi-field we need to extract data from the document
using the correct field name, that is, the name of the top-level field.
Here we delegate extraction of the correct name to a method in the
SearchContext that is wrapped by the AggregationContext.
Issue: #82918
This adds a new sampling aggregation that performs a background sampling over all documents in an index.
The syntax is as follows:
```
{
  "aggregations": {
    "sampling": {
      "random_sampler": {
        "probability": 0.1
      },
      "aggs": {
        "price_percentiles": {
          "percentiles": {
            "field": "taxful_total_price"
          }
        }
      }
    }
  }
}
```
This aggregation provides fast random sampling over the entire document set in order to speed up costly aggregations.
Testing this over a variety of aggregations and data sets, the median speed-up when sampling at `0.001` over millions of documents is around 70x.
The relative error rate depends on the size of the data and the kind of aggregation. Here are some typically expected numbers when sampling over 10s of millions of documents. `p` is the configured probability and `n` is the number of documents matched by your provided filter query.
Fixes an error and test snippets for the sum aggregation example for histograms.
Closes #84491
Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>
(cherry picked from commit fb45ac9dea)
Co-authored-by: Maja Grubic <maja.grubic@elastic.co>
* Updates the `min` and `max` snippets for histograms. These should now run as docs integration tests.
* Fixes a copy/paste error in the `max` aggregation snippet for histograms.
Relates to https://github.com/elastic/elasticsearch/pull/83384
Parameters accepted by the aggregator include:
* prefix_length (integer, required): defines the network size of the subnet mask;
* is_ipv6 (boolean, optional, default: false): defines whether the prefix applies to IPv6 (true) or IPv4 (false) IP addresses;
* min_doc_count (integer, optional, default: 1): defines the minimum number of documents for a bucket to be returned in the results;
* append_prefix_length (boolean, optional, default: false): defines if the prefix length is appended to the IP address key when returning results;
* keyed (boolean, optional, default: false): defines whether the result is returned keyed or as an array of buckets;
Each bucket returned by the aggregator represents a different subnet. IPv4 subnets also include a netmask field set to the subnet mask value (e.g. "255.255.0.0" for a /16 subnet).
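Assuming the new aggregator is registered as `ip_prefix`, a minimal request could look like this (the field name and prefix length are illustrative):
```
{
  "aggs": {
    "ipv4_subnets": {
      "ip_prefix": {
        "field": "source.ip",
        "prefix_length": 24
      }
    }
  }
}
```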
Related to: #57964 and elastic/kibana#68424
Per issue #60780, the team decided to remove the experimental language from HDR histogram percentiles and ranks. The feature has been in production for quite some time.
closes #60780
The documentation states that if the `weight` field is missing, and no
explicit missing configuration is provided, a default value of 1 is used.
This is incorrect and does not match the implementation of the weighted
average aggregator. In this case, the document is skipped instead.
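To get the behaviour the docs previously described (treat a missing weight as 1), the `missing` value has to be set explicitly; a sketch with illustrative field names:
```
{
  "aggs": {
    "weighted_grade": {
      "weighted_avg": {
        "value": { "field": "grade" },
        "weight": { "field": "weight", "missing": 1 }
      }
    }
  }
}
```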
Removes `testenv` annotations and related code. These annotations originally let you skip x-pack snippet tests in the docs. However, that's no longer possible.
Relates to #79309, #31619
The `terms` agg picks the top `size` terms in a single scatter/gather
pass across all the shards. For the default `order` and if you `order`
by `_key` this works quite well. Some errors creep in, but it's fairly
easy to point to them and understand them. But ordering by doc count
ascending is like inviting the error vampire into your agg. It's super
easy to get inaccurate results. This updates the docs to be more stark
about it. Closes #72684
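The pattern the docs now warn about is the one below (the field name is illustrative); on a multi-shard index the returned buckets and counts can be badly inaccurate:
```
{
  "aggs": {
    "rarest_categories": {
      "terms": {
        "field": "category.keyword",
        "order": { "_count": "asc" }
      }
    }
  }
}
```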
This commit fixes a handful of bugs with categorize_text agg
- The agg now fails on fields that are not text fields
- Limits the number of tokens categorized
- Validates the configuration inputs to disallow settings above static maximums
When running a rate aggregation without setting the field parameter, the result is computed based on the bucket doc_count.
This PR adds support for a custom _doc_count field.
Closes #77734
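A sketch of a field-less rate agg (the date field and interval are illustrative); with no `field` set, the rate is derived from the bucket doc count, which now honours `_doc_count`:
```
{
  "aggs": {
    "by_month": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "monthly_rate": {
          "rate": {
            "unit": "month"
          }
        }
      }
    }
  }
}
```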
This commit adds the new normalize_above parameter to the p_value significant
terms heuristic.
This parameter allows for consistent significance results at various scales. When a total count (in the set or in the background set) is above the `normalize_above` parameter, both the total set count and the count of the set including the term are scaled by `normalize_above/count`, where `count` is the term count in the set or the total set size, respectively.
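A sketch of how the new option might be supplied, assuming the heuristic is configured as `p_value` inside a significant terms agg (the field name, threshold, and the `background_is_superset` flag are illustrative):
```
{
  "aggs": {
    "significant_actions": {
      "significant_terms": {
        "field": "action.keyword",
        "p_value": {
          "background_is_superset": false,
          "normalize_above": 1000000
        }
      }
    }
  }
}
```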