Commit graph

307 commits

Author SHA1 Message Date
Liam Thompson
33a71e3289
[DOCS] Refactor book-scoped variables in docs/reference/index.asciidoc (#107413)
* Remove `es-test-dir` book-scoped variable

* Remove `plugins-examples-dir` book-scoped variable

* Remove `:dependencies-dir:` and `:xes-repo-dir:` book-scoped variables

- In `index.asciidoc`, two variables (`:dependencies-dir:` and `:xes-repo-dir:`) were removed.
- In `sql/index.asciidoc`, the `:sql-tests:` path was updated to the full path
- In `esql/index.asciidoc`, the `:esql-tests:` path was updated in the same way

* Replace `es-repo-dir` with `es-ref-dir`

* Move `:include-xpack: true` to the few files that use it, remove from index.asciidoc
2024-04-17 14:37:07 +02:00
Benjamin Trent
984e793e44
Add note about random sampler consistency (#107479) 2024-04-16 08:28:24 -04:00
shainaraskas
8a1df9be2d
[DOCS] fix time zone logic example (#106962)
* [DOCS] fix time zone logic example

* specify standard time

* goodbye e.g.
2024-04-04 10:44:14 -04:00
Martijn van Groningen
81a49f1567
Restrict usage of certain aggregations when in sort order execution is required (#104665)
A number of aggregations that rely on deferred collection don't work
with the time series index searcher and will produce incorrect results. These
aggregation usages should fail. The documentation has been updated to
describe these limitations.

In the case of the multi terms aggregation, depth-first collection is
forced when a time series aggregation is used. This behaviour is
in line with the terms aggregation.
2024-02-01 07:09:17 -05:00
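For reference, a minimal `multi_terms` request sketch (index and field names are hypothetical); when it runs under a time series aggregation, depth-first collection is forced as described above:
```js
GET /metrics/_search?size=0
{
  "aggs": {
    "hosts_by_region": {
      "multi_terms": {
        "terms": [
          { "field": "host.name" },
          { "field": "region" }
        ]
      }
    }
  }
}
```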
Kostas Krikellas
c7705aa32a
Improve time-series error and documentation (#100018)
* Improve time-series error and documentation

* spotless fix

* Update docs/changelog/100018.yaml

* Fix changelist

* Change exception type
2023-09-29 10:42:01 +03:00
Abdon Pijpelink
e766050edc
[DOCS] Update geohash_grid agg field description (#98494) 2023-08-15 15:59:16 +02:00
Abdon Pijpelink
6993a6d74e
[DOCS] Update range aggregation example (#98059) 2023-08-01 09:35:07 +02:00
Philipp Kahr
4ccc5a9c8c
Update histogram-aggregation docs (#96974)
* Update histogram-aggregation

* Little tweak

* Update docs/reference/aggregations/bucket/histogram-aggregation.asciidoc

Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>

* Update docs/reference/aggregations/bucket/histogram-aggregation.asciidoc

Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>

* Add test

---------

Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>
2023-06-22 11:16:39 +02:00
Craig Taverner
fe9f008755
Correct rare-terms default precision in docs (#96887) 2023-06-18 21:51:31 +02:00
debadair
777598d602
[DOCS] Remove redirect pages (#88738)
* [DOCS] Remove manual redirects

* [DOCS] Removed refs to modules-discovery-hosts-providers

* [DOCS] Fixed broken internal refs

* Fixing bad cross links in ES book, and adding redirects.asciidoc[] back into docs/reference/index.asciidoc.

* Update docs/reference/search/point-in-time-api.asciidoc

Co-authored-by: James Rodewig <james.rodewig@elastic.co>

* Update docs/reference/setup/restart-cluster.asciidoc

Co-authored-by: James Rodewig <james.rodewig@elastic.co>

* Update docs/reference/sql/endpoints/translate.asciidoc

Co-authored-by: James Rodewig <james.rodewig@elastic.co>

* Update docs/reference/snapshot-restore/restore-snapshot.asciidoc

Co-authored-by: James Rodewig <james.rodewig@elastic.co>

* Update repository-azure.asciidoc

* Update node-tool.asciidoc

* Update repository-azure.asciidoc

---------

Co-authored-by: amyjtechwriter <61687663+amyjtechwriter@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
Co-authored-by: Amy Jonsson <amy.jonsson@elastic.co>
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
2023-05-24 12:32:46 +01:00
tmgordeeva
2abbce0e50
Time series docs (#94337)
* Time series docs

Tech preview docs with a very basic example.

---------

Co-authored-by: lcawl <lcawley@elastic.co>
2023-05-03 11:01:07 -07:00
QY
2306f78ca9
Add keyed param to allow named filters agg to return buckets as an array of objects (#89256)
Adds a new `keyed` param so named `filters` aggs can return their buckets as an array of objects, each with its `key` attached, rather than as a single JSON object, so that sorting them is meaningful.
2023-04-10 13:53:26 -04:00
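A minimal sketch of the new param on a named `filters` agg (hypothetical index and field names); with `keyed: false` the named buckets come back as an array of objects, each carrying its `key`:
```js
GET /logs/_search?size=0
{
  "aggs": {
    "messages": {
      "filters": {
        "keyed": false,
        "filters": {
          "errors": { "match": { "body": "error" } },
          "warnings": { "match": { "body": "warning" } }
        }
      }
    }
  }
}
```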
Craig Taverner
f55d70a682
Document datehistogram with long offsets (#93328)
* Document datehistogram with long offsets

When offsets are longer than non-standard calendar_intervals, like months,
which differ in length, the usual rule that all buckets start on the same
day and time no longer applies.

This update attempts to explain this with examples.

* Removed TEST-skip lines

These don't seem to be parsable, even though they match the syntax
described in the README.asciidoc

* Added // TESTRESPONSE[skip:...] lines

* Refined docs description and added more examples

* Update docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc

Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>

* Update docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc

Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>

* Update docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc

Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>

* Update docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc

Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>

---------

Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>
2023-02-06 16:20:40 +01:00
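A minimal `date_histogram` sketch of the kind of request those examples discuss, combining a calendar month interval with a day-based offset (index and field names are hypothetical):
```js
GET /my-index/_search?size=0
{
  "aggs": {
    "monthly": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month",
        "offset": "+20d"
      }
    }
  }
}
```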
Hendrik Muhs
cf5ea0bb1f
[ML] rename frequent_items to frequent_item_sets and make it GA (#93421)
rename frequent_items to frequent_item_sets and remove the experimental badge
2023-02-02 09:25:00 +01:00
Glen Smith
81d9cbe0ca
Update frequent-items-aggregation.asciidoc (#93287)
Fix typo: togeher > together
2023-01-27 09:45:17 -05:00
Craig Taverner
e8b4de9a8a
Documentation for geohex_grid over geo_shape (#92999)
* Documentation for geohex_grid over geo_shape

The feature to add support for geohex_grid aggregations over geo_shape
fields was added in https://github.com/elastic/elasticsearch/pull/91956.
This is the associated documentation for that.

* Update docs/reference/aggregations/bucket/geohexgrid-aggregation.asciidoc

Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>

* Fix explanation for geo_point vs geo_shape proj

When aggregating geohex over geo_shape we use the equirectangular projection because
the underlying Lucene index indexes and searches the polygons in that way.

* Correct spelling

According to Grammarly, "therefor" is not an alternative spelling
of "therefore". We should use the conjunctive form here.

See https://www.grammarly.com/blog/therefore-vs-therefor/

Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>
2023-01-24 16:03:27 +01:00
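For reference, a minimal `geohex_grid` request over a `geo_shape` field (index and field names are hypothetical):
```js
GET /world-shapes/_search?size=0
{
  "aggs": {
    "hex_grid": {
      "geohex_grid": {
        "field": "geometry",
        "precision": 4
      }
    }
  }
}
```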
István Zoltán Szabó
e4721f1dfe
[DOCS] Fine-tunes documentation on exclude/include in frequent items (#92758) 2023-01-10 12:23:27 +01:00
Hendrik Muhs
b9c0315d24
[ML] add the ability to include and exclude values in Frequent items (#92414)
This PR adds include and exclude to frequent items, which allows filtering values out of the analysis.
2022-12-21 12:24:10 +01:00
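A rough sketch of the per-field filtering, shown against the later GA name `frequent_item_sets` and with hypothetical index and field names; the `exclude` entry drops a value from the analysis:
```js
GET /sales/_search?size=0
{
  "aggs": {
    "my_item_sets": {
      "frequent_item_sets": {
        "minimum_set_size": 3,
        "fields": [
          { "field": "category.keyword" },
          { "field": "city.keyword", "exclude": "other" }
        ]
      }
    }
  }
}
```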
Paweł Krześniak
34c30ad7be
[DOCS] typo in date_histogram aggregation example (#91715)
* [DOCS] typo in date_histogram aggregation example

Fixed the field name

* Update docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc

Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>
2022-11-21 13:13:44 +01:00
Lisa Cawley
d7c0b37924
[DOCS] Edits frequent items aggregation (#91564) 2022-11-14 17:20:27 -08:00
David Roberts
3dbaa3ff23
[ML] Make categorize_text aggregation GA (#88600)
Removes the experimental tag from the categorize_text aggregation.
2022-11-09 13:05:35 +00:00
Hendrik Muhs
14b2d2d37e
[ML] frequent items filter (#91137)
add a filter to the frequent items agg that filters documents from the analysis while still calculating support on the full set

A filter is specified top-level in frequent_items:

"frequent_items": {
  "filter": {
    "term": {
      "host.name.keyword": "i-12345"
    }
   },
...

The above filters out documents that don't match but still counts them when calculating support. That's in contrast to
specifying a query at the top level, in which case you find the same item sets but don't know their importance relative to the full
document set.
2022-11-03 13:58:40 +01:00
David Roberts
be006e2eee
[ML] Improve categorize_text docs (#90765)
Adds more detail about the meaning of the results
fields of the `categorize_text` aggregation, and
advice about how to use these fields when searching
for messages that match the categories.

Followup to #90723
2022-10-13 10:46:53 +01:00
David Roberts
bfccd20155
[ML] Add a regex to the output of the categorize_text aggregation (#90723)
The new `regex` field in `categorize_text` output is created in
the same way as the `regex` field that appears in the category
definitions created by anomaly detection jobs that do categorization.

It consists of the terms that occur in the same order for every
message that matches the category, separated with a `.+?` wildcard.
It therefore matches the category messages and enforces the order
of the terms that occurred in the same order for all messages used
to create the category.

It is not recommended to use the regex as the primary mechanism for
searching for the original documents that were categorized. Search
using a regular expression is very slow. Instead the terms of the
category should be used to search for matching documents, as a
terms search can use the inverted index and hence be much faster.
However, there may be situations where it is useful to use the
`regex` field to test whether a small set of messages that have not
been indexed match the category.
2022-10-10 11:41:16 +01:00
István Zoltán Szabó
7602015384
[DOCS] Improves frequent items aggregation docs (#89122) 2022-08-08 15:46:29 +02:00
Benjamin Trent
94f2544998
Adding cardinality support for random_sampler agg (#86838)
This adds support for the `cardinality` aggregation within a random_sampler.

This use case is helpful in determining the ratio of unique values to the total number of documents within the sampled set.
2022-07-21 07:19:35 -04:00
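A minimal sketch of a `cardinality` agg nested inside `random_sampler` (index and field names are hypothetical):
```js
GET /web-logs/_search?size=0
{
  "aggs": {
    "sampling": {
      "random_sampler": {
        "probability": 0.001
      },
      "aggs": {
        "unique_users": {
          "cardinality": { "field": "user.id" }
        }
      }
    }
  }
}
```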
apeltop
71234f7464
[DOCS] Fix typos in docs (#88226) 2022-07-05 11:02:29 +02:00
David Roberts
93bc2e382f
[ML] Replace the implementation of the categorize_text aggregation (#85872)
This replaces the implementation of the categorize_text aggregation
with the new algorithm that was added in #80867. The new algorithm
works in the same way as the ML C++ code used for categorization jobs
(and now includes the fixes of elastic/ml-cpp#2277).

The docs are updated to reflect the workings of the new implementation.
2022-05-23 18:46:13 +01:00
Craig Taverner
5f7ea792ac
Soft-deprecation of point/geo_point formats (#86835)
* Soft-deprecation of point/geo_point formats

Since GeoJSON and WKT are now common formats for all three types:
  geo_shape, geo_point and point
We decided to soft-deprecate the other point formats by ordering:
* GeoJSON (object with keys `type` and `coordinates`)
* WKT `POINT(x y)`
* Object with keys `lat` and `lon` (or `x` and `y` for point)
* Array [lon,lat]
* String `"lat,lon"` (or `"x,y"` in point)
* String with geohash (only in `geo_point`)

The geohash is last because it is only in one field type.
The string version is second to last because it is the most controversial,
being the only format to reverse the coordinate order relative to all other
formats (for geo_point only, since the coordinates are not reversed
in point).

In addition we replaced many examples in both documentation and tests
to prioritize WKT over the plain string format.

Many remaining examples of array format or object with keys still exist
and could be replaced by, for example, GeoJSON, if we feel the need.

* Incorrect quote position
2022-05-17 23:46:43 +02:00
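For reference, the two preferred point formats, indexed as GeoJSON and as WKT into a `geo_point` field (index and field names are hypothetical):
```js
PUT /places/_doc/1
{
  "location": {
    "type": "Point",
    "coordinates": [ -71.34, 41.12 ]
  }
}

PUT /places/_doc/2
{
  "location": "POINT (-71.34 41.12)"
}
```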
Mark Tozzi
54efc59eff
Clarify risks around ordering terms aggregation (#86528)
Add some details as to why some terms orderings are worse than others.


Co-authored-by: Adam Locke <adam.locke@elastic.co>
2022-05-16 11:05:22 -04:00
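One of the risky orderings discussed there is ascending doc count, sketched below with hypothetical index and field names; the updated docs steer readers toward `rare_terms` for this use case:
```js
GET /products/_search?size=0
{
  "aggs": {
    "rarest_tags": {
      "terms": {
        "field": "tags",
        "order": { "_count": "asc" }
      }
    }
  }
}
```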
István Zoltán Szabó
95ef40656f
[DOCS] Adds more details to the frequent items agg documentation (#86661)
Co-authored-by: Mark Tozzi <mark.tozzi@gmail.com>
2022-05-16 10:24:14 +02:00
István Zoltán Szabó
e590e900a4
[DOCS] Adds frequent items agg docs (#86037)
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
2022-05-05 16:07:24 +02:00
Elasticsearch addict
7b2511e22b
Update histogram-aggregation.asciidoc (#85356)
Fix small grammatical mistake.

Closes #85355
2022-03-28 12:27:32 -07:00
Salvatore Campagna
db6c58ed45
fix: use the correct field name when reading data from multi fields (#84752)
When using a multi-field, we need to extract data from the document
using the correct field name, that is, the name of the top-level field.
Here we delegate extraction of the correct name to a method in the
SearchContext that is wrapped by the AggregationContext.

Issue: #82918
2022-03-11 17:11:26 +01:00
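The user-visible scenario is aggregating on a multi field, for example a `keyword` sub-field of a `text` field; a minimal sketch with hypothetical index and field names:
```js
GET /app-logs/_search?size=0
{
  "aggs": {
    "top_messages": {
      "terms": { "field": "message.keyword" }
    }
  }
}
```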
Abele Mălan
9ecb96fcf3
Fix some typos in plugins & reference docs (#84667)
This pull request removes a few instances of duplicate words or
punctuation and erroneous spelling from the docs.
2022-03-07 12:29:58 -05:00
Benjamin Trent
b592d2bf01
New random_sampler aggregation for sampling documents in aggregations (#84363)
This adds a new sampling aggregation that performs a background sampling over all documents in an index. 

The syntax is as follows:
```
{
  "aggregations": {
    "sampling": {
      "random_sampler": {
        "probability": 0.1
      },
      "aggs": {
        "price_percentiles": {
          "percentiles": {
            "field": "taxful_total_price"
          }
        }
      }
    }
  }
}
```

This aggregation provides fast random sampling over the entire document set in order to speed up costly aggregations.

Testing this over a variety of aggregations and data sets, the median speed-up when sampling at `0.001` over millions of documents is around 70x.

The relative error rate depends on the size of the data and the kind of aggregation. Here are some typically expected numbers when sampling over 10s of millions of documents. `p` is the configured probability and `n` is the number of documents matched by your provided filter query.
2022-03-02 14:32:30 -05:00
Salvatore Campagna
9de75c2ac5
Add an aggregator for IPv4 and IPv6 subnets (#82410)
Parameters accepted by the aggregator include:

* prefix_length (integer, required): defines the network size of the subnet mask;
* is_ipv6 (boolean, optional, default: false): defines whether the prefix applies to IPv6 (true) or IPv4 (false) IP addresses;
* min_doc_count (integer, optional, default: 1): defines the minimum number of documents for a bucket to be returned in the results;
* append_prefix_length (boolean, optional, default: false): defines if the prefix length is appended to the IP address key when returning results;
* keyed (boolean, optional, default: false): defines whether the result is returned keyed or as an array of buckets.

Each bucket returned by the aggregator represents a different subnet. IPv4 subnets also include a netmask field set to the subnet mask value (e.g. "255.255.0.0" for a /16 subnet).

Related to: #57964 and elastic/kibana#68424
2022-01-28 11:59:07 +01:00
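A minimal sketch of the IPv4 case, assuming the `ip_prefix` aggregation name it shipped under and hypothetical index and field names:
```js
GET /network-traffic/_search?size=0
{
  "aggs": {
    "ipv4_subnets": {
      "ip_prefix": {
        "field": "source.ip",
        "prefix_length": 24
      }
    }
  }
}
```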
Ignacio Vera
0873893bb7
New GeoHexGrid aggregation (#82924)
This commit introduces a new geogrid aggregation called GeoHexGridAggregation that
is based on Uber's H3 grid. It only supports geo_point fields.
2022-01-27 07:45:51 +01:00
James Rodewig
63f228e24e
[DOCS] Re-add paragraph noting doc_count is approximate (#83154)
This paragraph was accidentally removed as part of #79205. Also fixes a minor heading capitalization error.
2022-01-26 11:07:59 -05:00
James Rodewig
ccac525d90
[DOCS] Fix typo (#82344) (#82379)
(cherry picked from commit 129d0fc91d)

Co-authored-by: Oleks <oleks@users.noreply.github.com>
2022-01-10 13:47:03 -05:00
James Rodewig
04318961b9
[DOCS] Clarify supported parameters for terms value source (#81775)
The composite aggregation's `terms` value source doesn't support the same set of
parameters as the `terms` aggregation.

Closes #81431.
2021-12-15 14:32:16 -05:00
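For reference, a minimal composite `terms` value source (hypothetical index and field names); it accepts a narrower set of parameters than the standalone `terms` aggregation, as the clarified docs note:
```js
GET /sales/_search?size=0
{
  "aggs": {
    "my_buckets": {
      "composite": {
        "sources": [
          { "product": { "terms": { "field": "product" } } }
        ]
      }
    }
  }
}
```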
James Rodewig
f56a0f4b66
[DOCS] Remove testenv annotations from doc snippet tests (#80023)
Removes `testenv` annotations and related code. These annotations originally let you skip x-pack snippet tests in the docs. However, that's no longer possible.

Relates to #79309, #31619
2021-11-05 18:38:50 -04:00
Nik Everett
66de804a9e
Rework docs for the size of terms agg (#79205)
The `terms` agg picks the top `size` terms in a single scatter/gather
pass across all the shards. For the default `order` and if you `order`
by `_key` this works quite well. Some errors creep in, but it's fairly
easy to point to them and understand them. But ordering by doc count
ascending is like inviting the error vampire into your agg. It's super
easy to get inaccurate results. This updates the docs to be more stark
about it. Closes #72684
2021-11-01 17:07:31 -04:00
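When the ordering or sizing is error-prone, one documented mitigation is raising `shard_size` so each shard returns more candidate terms; a sketch with hypothetical index and field names:
```js
GET /sales/_search?size=0
{
  "aggs": {
    "top_products": {
      "terms": {
        "field": "product",
        "size": 10,
        "shard_size": 500
      }
    }
  }
}
```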
Benjamin Trent
f245c477d1
[ML] fail on poor configuration for categorize_text (#79586)
This commit fixes a handful of bugs with the categorize_text agg

 - The agg now fails on fields that are not text fields
 - Limits the number of tokens categorized
 - Validates the configuration inputs to disallow settings above static maximums
2021-10-21 12:14:27 -04:00
Benjamin Trent
843fa42c1e
[ML] add new normalize_above parameter to p_value significant terms heuristic (#78833)
This commit adds the new normalize_above parameter to the p_value significant
terms heuristic.

This parameter allows for consistent significance results at various scales. When a total count (in or out of the background set) is above the normalize_above parameter, both the total set and the set including the term are scaled by normalize_above/count, where count is the term count in the set or the total set size.
2021-10-12 10:38:09 -04:00
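A rough sketch of where the parameter plugs in, assuming the `p_value` heuristic object on `significant_terms`; the index, field names, and the `background_is_superset` flag are assumptions here, only `normalize_above` comes from the commit:
```js
GET /service-logs/_search?size=0
{
  "query": { "term": { "outcome": "failure" } },
  "aggs": {
    "suspect_versions": {
      "significant_terms": {
        "field": "version",
        "p_value": {
          "background_is_superset": false,
          "normalize_above": 1000
        }
      }
    }
  }
}
```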
Stef Nestor
ddc1a0df28
[DOCS] Add prod warning to composite agg (#78723)
The composite aggregation is considered expensive. Users should perform load testing before deploying it in production.

Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>
2021-10-06 13:44:12 -04:00
Benjamin Trent
7a7fffcb5a
[ML] Text/Log categorization multi-bucket aggregation (#71752)
This commit adds a new multi-bucket aggregation: `categorize_text`

The aggregation follows a similar design to significant text in that it reads from `_source`
and re-analyzes the text as it is read.

The key difference is that it does not use the indexed field's analyzer, but instead relies on
the `ml_standard` tokenizer with specialized ML token filters. The tokenizer and filters are the
same ones that machine learning categorization anomaly jobs utilize.

The high level logical flow is as follows:
 - At each shard, read in the text field with a custom analyzer using the `ml_standard` tokenizer
 - Read in the particular tokens from the analyzer
 - Feed these tokens to a token tree algorithm (an adaptation of the drain categorization algorithm)
 - Gather the individual log categories (the leaf nodes), sort them by doc_count, ship those buckets to be merged
 - Merge all buckets that have the EXACT same key
 - Once all buckets are merged, pass those keys + counts to a new token tree for additional merging
 - That tree builds the final buckets and that is returned to the user

Algorithm explanation:

 - Each log is parsed with the ml-standard tokenizer
 - Each token is passed into a token tree
 - For `max_match_token` each token is stored in the tree and at `max_match_token+1` (or `len(tokens)`) a log group is created
 - If another log group exists at that leaf, merge it if they have `similarity_threshold` percentage of tokens in common
     - merging simply replaces tokens that are different in the group with `*`
 - If a layer in the tree already has `max_unique_tokens`, we add a `*` child and route any new tokens through it. The catch is that on the final merge we first attempt to merge together the subtrees with the smallest number of documents, especially when the new subtree has more documents counted.

## Aggregation configuration.

Here is an example on some OpenStack logs:
```js
POST openstack/_search?size=0
{
  "aggs": {
    "categories": {
      "categorize_text": {
        "field": "message", // The field to categorize
        "similarity_threshold": 20, // merge log groups if they are this similar
        "max_unique_tokens": 20, // Max Number of children per token position
        "max_match_token": 4, // Maximum tokens to build prefix trees
        "size": 1
      }
    }
  }
}
```

This will return buckets like
```json
"aggregations" : {
    "categories" : {
      "buckets" : [
        {
          "doc_count" : 806,
          "key" : "nova-api.log.1.2017-05-16_13 INFO nova.osapi_compute.wsgi.server * HTTP/1.1 status len time"
        }
      ]
    }
  }
```
2021-10-04 11:49:16 -04:00
Lukas Wegmann
421b3e80de
Document missing_order param for composite aggregations (#77839)
Documents the missing_order parameter for composite aggregations introduced in #76740
2021-09-27 09:57:45 +02:00
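For reference, a sketch of `missing_order` on a composite source (index and field names are hypothetical); it only takes effect together with `missing_bucket`:
```js
GET /sales/_search?size=0
{
  "aggs": {
    "my_buckets": {
      "composite": {
        "sources": [
          {
            "product": {
              "terms": {
                "field": "product",
                "missing_bucket": true,
                "missing_order": "last"
              }
            }
          }
        ]
      }
    }
  }
}
```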
James Rodewig
15baf4017a
[DOCS] Remove _term and _time agg order keys (#78209)
Adds an 8.0 breaking change for the removal of the `_term` and `_time`
agg `order` keys.

Relates to #39450
2021-09-22 15:54:14 -04:00
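The replacement for ordering by `_term` (and by `_time` on date histograms) is `_key`; a minimal sketch with hypothetical index and field names:
```js
GET /library/_search?size=0
{
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "order": { "_key": "asc" }
      }
    }
  }
}
```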
edh-oss
62a471aefe
Update JSON parser and snippets (#77983)
Related to issue #77823

This does the following:

- Updates several asciidoc files that contained code snippets with
  invalid JSON, most involving unnecessary trailing commas.

- Makes the switch from the Groovy JSON parser to the Jackson parser,
  pursuant to the general goal of eliminating Groovy dependence.

- Makes testing of JSON validity at build time more strict.

Note that this update still allows backslash escaping for any
character. Currently that matters because of the file
"docs/reference/ml/anomaly-detection/apis/get-datafeed-stats.asciidoc",
specifically this part:

    "attributes" : {
      "ml.machine_memory" :
        "$body.datafeeds.0.node.attributes.ml\.machine_memory",
      "ml.max_open_jobs" : "512"
    }

It's not clear to me what change, if any, is appropriate there. So,
I've left in the escaped period and configured the parser to ignore
it for the time being.
2021-09-20 11:08:26 +01:00