[DOCS] fix external links (#124248)

Colleen McGinnis 2025-03-06 10:27:03 -06:00 committed by GitHub
parent 54c826532c
commit 23be51a04f
56 changed files with 102 additions and 102 deletions


@ -29,7 +29,7 @@ A sibling pipeline aggregation which calculates the mean value of a specified me
: (Optional, string) Policy to apply when gaps are found in the data. For valid values, see [Dealing with gaps in the data](/reference/data-analysis/aggregations/pipeline.md#gap-policy). Defaults to `skip`.
`format`
-: (Optional, string) [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.md) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property.
+: (Optional, string) [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property.
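To make the `format` parameter concrete, here is a minimal sketch of an `avg_bucket` request (the `sales_per_month` histogram and `sales` metric are illustrative names, not part of this diff):

```console
POST /_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "calendar_interval": "month" },
      "aggs": { "sales": { "sum": { "field": "price" } } }
    },
    "avg_monthly_sales": {
      "avg_bucket": {
        "buckets_path": "sales_per_month>sales",
        "gap_policy": "skip",
        "format": "#,##0.00"
      }
    }
  }
}
```

With `format` set, the response carries both the numeric `value` and a formatted `value_as_string`.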
## Response body [avg-bucket-agg-response]


@ -35,7 +35,7 @@ $$$bucket-script-params$$$
| `script` | The script to run for this aggregation. The script can be inline, file or indexed. (see [Scripting](docs-content://explore-analyze/scripting.md) for more details) | Required | |
| `buckets_path` | A map of script variables and their associated path to the buckets we wish to use for the variable (see [`buckets_path` Syntax](/reference/data-analysis/aggregations/pipeline.md#buckets-path-syntax) for more details) | Required | |
| `gap_policy` | The policy to apply when gaps are found in the data (see [Dealing with gaps in the data](/reference/data-analysis/aggregations/pipeline.md#gap-policy) for more details) | Optional | `skip` |
-| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.md) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
+| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
The following snippet calculates the ratio percentage of t-shirt sales compared to total sales each month:
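The snippet itself is cut off in this view; a sketch in the same spirit (index, field, and variable names are illustrative) might look like:

```console
POST /sales/_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "calendar_interval": "month" },
      "aggs": {
        "total_sales": { "sum": { "field": "price" } },
        "t-shirts": {
          "filter": { "term": { "type": "t-shirt" } },
          "aggs": { "sales": { "sum": { "field": "price" } } }
        },
        "t-shirt-percentage": {
          "bucket_script": {
            "buckets_path": {
              "tShirtSales": "t-shirts>sales",
              "totalSales": "total_sales"
            },
            "script": "params.tShirtSales / params.totalSales * 100"
          }
        }
      }
    }
  }
}
```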


@ -28,7 +28,7 @@ $$$cumulative-cardinality-params$$$
| Parameter Name | Description | Required | Default Value |
| --- | --- | --- | --- |
| `buckets_path` | The path to the cardinality aggregation we wish to find the cumulative cardinality for (see [`buckets_path` Syntax](/reference/data-analysis/aggregations/pipeline.md#buckets-path-syntax) for more details) | Required | |
-| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.md) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
+| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
The following snippet calculates the cumulative cardinality of the total daily `users`:
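A sketch of such a request (illustrative index and field names; the doc's own snippet is not shown here):

```console
GET /user_hits/_search
{
  "size": 0,
  "aggs": {
    "users_per_day": {
      "date_histogram": { "field": "timestamp", "calendar_interval": "day" },
      "aggs": {
        "distinct_users": { "cardinality": { "field": "user_id" } },
        "total_new_users": {
          "cumulative_cardinality": { "buckets_path": "distinct_users" }
        }
      }
    }
  }
}
```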


@ -26,7 +26,7 @@ $$$cumulative-sum-params$$$
| Parameter Name | Description | Required | Default Value |
| --- | --- | --- | --- |
| `buckets_path` | The path to the buckets we wish to find the cumulative sum for (see [`buckets_path` Syntax](/reference/data-analysis/aggregations/pipeline.md#buckets-path-syntax) for more details) | Required | |
-| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.md) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
+| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
The following snippet calculates the cumulative sum of the total monthly `sales`:


@ -25,7 +25,7 @@ $$$derivative-params$$$
| --- | --- | --- | --- |
| `buckets_path` | The path to the buckets we wish to find the derivative for (see [`buckets_path` Syntax](/reference/data-analysis/aggregations/pipeline.md#buckets-path-syntax) for more details) | Required | |
| `gap_policy` | The policy to apply when gaps are found in the data (see [Dealing with gaps in the data](/reference/data-analysis/aggregations/pipeline.md#gap-policy) for more details) | Optional | `skip` |
-| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.md) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
+| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
## First Order Derivative [_first_order_derivative]


@ -29,7 +29,7 @@ $$$extended-stats-bucket-params$$$
| --- | --- | --- | --- |
| `buckets_path` | The path to the buckets we wish to calculate stats for (see [`buckets_path` Syntax](/reference/data-analysis/aggregations/pipeline.md#buckets-path-syntax) for more details) | Required | |
| `gap_policy` | The policy to apply when gaps are found in the data (see [Dealing with gaps in the data](/reference/data-analysis/aggregations/pipeline.md#gap-policy) for more details) | Optional | `skip` |
-| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.md) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
+| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
| `sigma` | The number of standard deviations above/below the mean to display | Optional | 2 |
The following snippet calculates the extended stats for the monthly `sales` bucket:


@ -27,7 +27,7 @@ $$$max-bucket-params$$$
| --- | --- | --- | --- |
| `buckets_path` | The path to the buckets we wish to find the maximum for (see [`buckets_path` Syntax](/reference/data-analysis/aggregations/pipeline.md#buckets-path-syntax) for more details) | Required | |
| `gap_policy` | The policy to apply when gaps are found in the data (see [Dealing with gaps in the data](/reference/data-analysis/aggregations/pipeline.md#gap-policy) for more details) | Optional | `skip` |
-| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.md) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
+| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
The following snippet calculates the maximum of the total monthly `sales`:


@ -27,7 +27,7 @@ $$$min-bucket-params$$$
| --- | --- | --- | --- |
| `buckets_path` | The path to the buckets we wish to find the minimum for (see [`buckets_path` Syntax](/reference/data-analysis/aggregations/pipeline.md#buckets-path-syntax) for more details) | Required | |
| `gap_policy` | The policy to apply when gaps are found in the data (see [Dealing with gaps in the data](/reference/data-analysis/aggregations/pipeline.md#gap-policy) for more details) | Optional | `skip` |
-| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.md) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
+| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
The following snippet calculates the minimum of the total monthly `sales`:


@ -28,7 +28,7 @@ $$$normalize_pipeline-params$$$
| --- | --- | --- | --- |
| `buckets_path` | The path to the buckets we wish to normalize (see [`buckets_path` syntax](/reference/data-analysis/aggregations/pipeline.md#buckets-path-syntax) for more details) | Required | |
| `method` | The specific [method](#normalize_pipeline-method) to apply | Required | |
-| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.md) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
+| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
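A sketch combining `buckets_path`, `method`, and `format` (index and field names are illustrative):

```console
POST /sales/_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "calendar_interval": "month" },
      "aggs": {
        "sales": { "sum": { "field": "price" } },
        "percent_of_total_sales": {
          "normalize": {
            "buckets_path": "sales",
            "method": "percent_of_sum",
            "format": "00.00%"
          }
        }
      }
    }
  }
}
```

`percent_of_sum` rescales each bucket's value to its share of the total across all buckets.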
## Methods [_methods]


@ -27,7 +27,7 @@ $$$percentiles-bucket-params$$$
| --- | --- | --- | --- |
| `buckets_path` | The path to the buckets we wish to find the percentiles for (see [`buckets_path` Syntax](/reference/data-analysis/aggregations/pipeline.md#buckets-path-syntax) for more details) | Required | |
| `gap_policy` | The policy to apply when gaps are found in the data (see [Dealing with gaps in the data](/reference/data-analysis/aggregations/pipeline.md#gap-policy) for more details) | Optional | `skip` |
-| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.md) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
+| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
| `percents` | The list of percentiles to calculate | Optional | `[ 1, 5, 25, 50, 75, 95, 99 ]` |
| `keyed` | Flag which returns the range as a hash instead of an array of key-value pairs | Optional | `true` |
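A sketch showing `percents` in use (names illustrative):

```console
POST /sales/_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "calendar_interval": "month" },
      "aggs": { "sales": { "sum": { "field": "price" } } }
    },
    "percentiles_monthly_sales": {
      "percentiles_bucket": {
        "buckets_path": "sales_per_month>sales",
        "percents": [ 25.0, 50.0, 75.0 ]
      }
    }
  }
}
```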


@ -51,7 +51,7 @@ $$$serial-diff-params$$$
| `buckets_path` | Path to the metric of interest (see [`buckets_path` Syntax](/reference/data-analysis/aggregations/pipeline.md#buckets-path-syntax) for more details) | Required | |
| `lag` | The historical bucket to subtract from the current value. E.g. a lag of 7 will subtract the value 7 buckets ago from the current value. Must be a positive, non-zero integer | Optional | `1` |
| `gap_policy` | Determines what should happen when a gap in the data is encountered. | Optional | `insert_zeros` |
-| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.md) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
+| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
`serial_diff` aggregations must be embedded inside a `histogram` or `date_histogram` aggregation:
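A sketch of the embedded form (index and field names are illustrative):

```console
POST /_search
{
  "size": 0,
  "aggs": {
    "my_date_histo": {
      "date_histogram": { "field": "timestamp", "calendar_interval": "day" },
      "aggs": {
        "the_sum": { "sum": { "field": "lemmings" } },
        "thirtieth_difference": {
          "serial_diff": { "buckets_path": "the_sum", "lag": 30 }
        }
      }
    }
  }
}
```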


@ -27,7 +27,7 @@ $$$stats-bucket-params$$$
| --- | --- | --- | --- |
| `buckets_path` | The path to the buckets we wish to calculate stats for (see [`buckets_path` Syntax](/reference/data-analysis/aggregations/pipeline.md#buckets-path-syntax) for more details) | Required | |
| `gap_policy` | The policy to apply when gaps are found in the data (see [Dealing with gaps in the data](/reference/data-analysis/aggregations/pipeline.md#gap-policy) for more details) | Optional | `skip` |
-| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.md) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
+| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property | Optional | `null` |
The following snippet calculates the stats for monthly `sales`:


@ -27,7 +27,7 @@ $$$sum-bucket-params$$$
| --- | --- | --- | --- |
| `buckets_path` | The path to the buckets we wish to find the sum for (see [`buckets_path` Syntax](/reference/data-analysis/aggregations/pipeline.md#buckets-path-syntax) for more details) | Required | |
| `gap_policy` | The policy to apply when gaps are found in the data (see [Dealing with gaps in the data](/reference/data-analysis/aggregations/pipeline.md#gap-policy) for more details) | Optional | `skip` |
-| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.md) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property. | Optional | `null` |
+| `format` | [DecimalFormat pattern](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html) for the output value. If specified, the formatted value is returned in the aggregation's `value_as_string` property. | Optional | `null` |
The following snippet calculates the sum of all the total monthly `sales` buckets:


@ -9,7 +9,7 @@ mapped_pages:
Strips all characters after an apostrophe, including the apostrophe itself.
-This filter is included in {{es}}'s built-in [Turkish language analyzer](/reference/data-analysis/text-analysis/analysis-lang-analyzer.md#turkish-analyzer). It uses Lucene's [ApostropheFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/tr/ApostropheFilter.md), which was built for the Turkish language.
+This filter is included in {{es}}'s built-in [Turkish language analyzer](/reference/data-analysis/text-analysis/analysis-lang-analyzer.md#turkish-analyzer). It uses Lucene's [ApostropheFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/tr/ApostropheFilter.html), which was built for the Turkish language.
## Example [analysis-apostrophe-tokenfilter-analyze-ex]


@ -9,7 +9,7 @@ mapped_pages:
Converts alphabetic, numeric, and symbolic characters that are not in the Basic Latin Unicode block (first 128 ASCII characters) to their ASCII equivalent, if one exists. For example, the filter changes `à` to `a`.
-This filter uses Lucene's [ASCIIFoldingFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.md).
+This filter uses Lucene's [ASCIIFoldingFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html).
## Example [analysis-asciifolding-tokenfilter-analyze-ex]
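The example body is not shown in this view; a minimal `_analyze` request of this kind (the sample text is illustrative) might be:

```console
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [ "asciifolding" ],
  "text": "açaí à la carte"
}
```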


@ -16,7 +16,7 @@ The pattern analyzer uses [Java Regular Expressions](https://docs.oracle.com/jav
A badly written regular expression could run very slowly or even throw a StackOverflowError and cause the node it is running on to exit suddenly.
-Read more about [pathological regular expressions and how to avoid them](https://www.regular-expressions.info/catastrophic.md).
+Read more about [pathological regular expressions and how to avoid them](https://www.regular-expressions.info/catastrophic.html).
::::


@ -16,7 +16,7 @@ The pattern capture token filter uses [Java Regular Expressions](https://docs.or
A badly written regular expression could run very slowly or even throw a StackOverflowError and cause the node it is running on to exit suddenly.
-Read more about [pathological regular expressions and how to avoid them](https://www.regular-expressions.info/catastrophic.md).
+Read more about [pathological regular expressions and how to avoid them](https://www.regular-expressions.info/catastrophic.html).
::::


@ -16,7 +16,7 @@ The pattern replace character filter uses [Java Regular Expressions](https://doc
A badly written regular expression could run very slowly or even throw a StackOverflowError and cause the node it is running on to exit suddenly.
-Read more about [pathological regular expressions and how to avoid them](https://www.regular-expressions.info/catastrophic.md).
+Read more about [pathological regular expressions and how to avoid them](https://www.regular-expressions.info/catastrophic.html).
::::


@ -18,7 +18,7 @@ The pattern tokenizer uses [Java Regular Expressions](https://docs.oracle.com/ja
A badly written regular expression could run very slowly or even throw a StackOverflowError and cause the node it is running on to exit suddenly.
-Read more about [pathological regular expressions and how to avoid them](https://www.regular-expressions.info/catastrophic.md).
+Read more about [pathological regular expressions and how to avoid them](https://www.regular-expressions.info/catastrophic.html).
::::


@ -14,12 +14,12 @@ The `pattern_replace` filter uses [Javas regular expression syntax](https://d
::::{warning}
A poorly-written regular expression may run slowly or return a StackOverflowError, causing the node running the expression to exit suddenly.
-Read more about [pathological regular expressions and how to avoid them](https://www.regular-expressions.info/catastrophic.md).
+Read more about [pathological regular expressions and how to avoid them](https://www.regular-expressions.info/catastrophic.html).
::::
-This filter uses Lucene's [PatternReplaceFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/pattern/PatternReplaceFilter.md).
+This filter uses Lucene's [PatternReplaceFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/pattern/PatternReplaceFilter.html).
## Example [analysis-pattern-replace-tokenfilter-analyze-ex]
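The example body is not shown in this view; a request of this kind (pattern, replacement, and text are illustrative) might be:

```console
GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": [
    {
      "type": "pattern_replace",
      "pattern": "(dog)",
      "replacement": "watch$1"
    }
  ],
  "text": "foxes jump lazy dogs"
}
```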


@ -7,13 +7,13 @@ mapped_pages:
# Porter stem token filter [analysis-porterstem-tokenfilter]
-Provides [algorithmic stemming](docs-content://manage-data/data-store/text-analysis/stemming.md#algorithmic-stemmers) for the English language, based on the [Porter stemming algorithm](https://snowballstem.org/algorithms/porter/stemmer.md).
+Provides [algorithmic stemming](docs-content://manage-data/data-store/text-analysis/stemming.md#algorithmic-stemmers) for the English language, based on the [Porter stemming algorithm](https://snowballstem.org/algorithms/porter/stemmer.html).
This filter tends to stem more aggressively than other English stemmer filters, such as the [`kstem`](/reference/data-analysis/text-analysis/analysis-kstem-tokenfilter.md) filter.
The `porter_stem` filter is equivalent to the [`stemmer`](/reference/data-analysis/text-analysis/analysis-stemmer-tokenfilter.md) filter's [`english`](/reference/data-analysis/text-analysis/analysis-stemmer-tokenfilter.md#analysis-stemmer-tokenfilter-language-parm) variant.
-The `porter_stem` filter uses Lucene's [PorterStemFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/en/PorterStemFilter.md).
+The `porter_stem` filter uses Lucene's [PorterStemFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/en/PorterStemFilter.html).
## Example [analysis-porterstem-tokenfilter-analyze-ex]


@ -9,7 +9,7 @@ mapped_pages:
Removes duplicate tokens in the same position.
-The `remove_duplicates` filter uses Lucene's [RemoveDuplicatesTokenFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/miscellaneous/RemoveDuplicatesTokenFilter.md).
+The `remove_duplicates` filter uses Lucene's [RemoveDuplicatesTokenFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/miscellaneous/RemoveDuplicatesTokenFilter.html).
## Example [analysis-remove-duplicates-tokenfilter-analyze-ex]


@ -11,7 +11,7 @@ Reverses each token in a stream. For example, you can use the `reverse` filter t
Reversed tokens are useful for suffix-based searches, such as finding words that end in `-ion` or searching file names by their extension.
-This filter uses Lucene's [ReverseStringFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/reverse/ReverseStringFilter.md).
+This filter uses Lucene's [ReverseStringFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/reverse/ReverseStringFilter.html).
## Example [analysis-reverse-tokenfilter-analyze-ex]


@ -16,7 +16,7 @@ Shingles are often used to help speed up phrase queries, such as [`match_phrase`
::::
-This filter uses Lucene's [ShingleFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/shingle/ShingleFilter.md).
+This filter uses Lucene's [ShingleFilter](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/shingle/ShingleFilter.html).
## Example [analysis-shingle-tokenfilter-analyze-ex]


@ -11,7 +11,7 @@ The `simple_pattern` tokenizer uses a regular expression to capture matching tex
This tokenizer does not support splitting the input on a pattern match, unlike the [`pattern`](/reference/data-analysis/text-analysis/analysis-pattern-tokenizer.md) tokenizer. To split on pattern matches using the same restricted regular expression subset, see the [`simple_pattern_split`](/reference/data-analysis/text-analysis/analysis-simplepatternsplit-tokenizer.md) tokenizer.
-This tokenizer uses [Lucene regular expressions](https://lucene.apache.org/core/10_0_0/core/org/apache/lucene/util/automaton/RegExp.md). For an explanation of the supported features and syntax, see [Regular Expression Syntax](/reference/query-languages/regexp-syntax.md).
+This tokenizer uses [Lucene regular expressions](https://lucene.apache.org/core/10_0_0/core/org/apache/lucene/util/automaton/RegExp.html). For an explanation of the supported features and syntax, see [Regular Expression Syntax](/reference/query-languages/regexp-syntax.md).
The default pattern is the empty string, which produces no terms. This tokenizer should always be configured with a non-default pattern.
@ -21,7 +21,7 @@ The default pattern is the empty string, which produces no terms. This tokenizer
The `simple_pattern` tokenizer accepts the following parameters:
`pattern`
-: [Lucene regular expression](https://lucene.apache.org/core/10_0_0/core/org/apache/lucene/util/automaton/RegExp.md), defaults to the empty string.
+: [Lucene regular expression](https://lucene.apache.org/core/10_0_0/core/org/apache/lucene/util/automaton/RegExp.html), defaults to the empty string.
## Example configuration [_example_configuration_11]
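The configuration body is not shown in this view; a sketch (index name and the three-digit pattern are illustrative) might look like:

```console
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": { "tokenizer": "my_tokenizer" }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "simple_pattern",
          "pattern": "[0123456789]{3}"
        }
      }
    }
  }
}
```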


@ -11,7 +11,7 @@ The `simple_pattern_split` tokenizer uses a regular expression to split the inpu
This tokenizer does not produce terms from the matches themselves. To produce terms from matches using patterns in the same restricted regular expression subset, see the [`simple_pattern`](/reference/data-analysis/text-analysis/analysis-simplepattern-tokenizer.md) tokenizer.
-This tokenizer uses [Lucene regular expressions](https://lucene.apache.org/core/10_0_0/core/org/apache/lucene/util/automaton/RegExp.md). For an explanation of the supported features and syntax, see [Regular Expression Syntax](/reference/query-languages/regexp-syntax.md).
+This tokenizer uses [Lucene regular expressions](https://lucene.apache.org/core/10_0_0/core/org/apache/lucene/util/automaton/RegExp.html). For an explanation of the supported features and syntax, see [Regular Expression Syntax](/reference/query-languages/regexp-syntax.md).
The default pattern is the empty string, which produces one term containing the full input. This tokenizer should always be configured with a non-default pattern.
@ -21,7 +21,7 @@ The default pattern is the empty string, which produces one term containing the
The `simple_pattern_split` tokenizer accepts the following parameters:
`pattern`
-: A [Lucene regular expression](https://lucene.apache.org/core/10_0_0/core/org/apache/lucene/util/automaton/RegExp.md), defaults to the empty string.
+: A [Lucene regular expression](https://lucene.apache.org/core/10_0_0/core/org/apache/lucene/util/automaton/RegExp.html), defaults to the empty string.
## Example configuration [_example_configuration_12]


@ -9,7 +9,7 @@ mapped_pages:
Provides [algorithmic stemming](docs-content://manage-data/data-store/text-analysis/stemming.md#algorithmic-stemmers) for several languages, some with additional variants. For a list of supported languages, see the [`language`](#analysis-stemmer-tokenfilter-language-parm) parameter.
-When not customized, the filter uses the [porter stemming algorithm](https://snowballstem.org/algorithms/porter/stemmer.md) for English.
+When not customized, the filter uses the [porter stemming algorithm](https://snowballstem.org/algorithms/porter/stemmer.html) for English.
## Example [analysis-stemmer-tokenfilter-analyze-ex]
@ -60,7 +60,7 @@ $$$analysis-stemmer-tokenfilter-language-parm$$$
:::{dropdown} Valid values for `language`
-Valid values are sorted by language. Defaults to [**`english`**](https://snowballstem.org/algorithms/porter/stemmer.md). Recommended algorithms are **bolded**.
+Valid values are sorted by language. Defaults to [**`english`**](https://snowballstem.org/algorithms/porter/stemmer.html). Recommended algorithms are **bolded**.
Arabic: [**`arabic`**](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/ar/ArabicStemmer.md)
Armenian: [**`armenian`**](https://snowballstem.org/algorithms/armenian/stemmer.md)
Basque: [**`basque`**](https://snowballstem.org/algorithms/basque/stemmer.md)
@ -71,31 +71,31 @@ Catalan:[**`catalan`**](https://snowballstem.org/algorithms/catalan/stemmer.md)
Czech: [**`czech`**](https://dl.acm.org/doi/10.1016/j.ipm.2009.06.001)
Danish: [**`danish`**](https://snowballstem.org/algorithms/danish/stemmer.md)
Dutch: [**`dutch`**](https://snowballstem.org/algorithms/dutch/stemmer.md), [`dutch_kp`](https://snowballstem.org/algorithms/kraaij_pohlmann/stemmer.md) [8.16.0]
-English: [**`english`**](https://snowballstem.org/algorithms/porter/stemmer.md), [`light_english`](https://ciir.cs.umass.edu/pubfiles/ir-35.pdf), [`lovins`](https://snowballstem.org/algorithms/lovins/stemmer.md) [8.16.0], [`minimal_english`](https://www.researchgate.net/publication/220433848_How_effective_is_suffixing), [`porter2`](https://snowballstem.org/algorithms/english/stemmer.md), [`possessive_english`](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/en/EnglishPossessiveFilter.md)
+English: [**`english`**](https://snowballstem.org/algorithms/porter/stemmer.html), [`light_english`](https://ciir.cs.umass.edu/pubfiles/ir-35.pdf), [`lovins`](https://snowballstem.org/algorithms/lovins/stemmer.md) [8.16.0], [`minimal_english`](https://www.researchgate.net/publication/220433848_How_effective_is_suffixing), [`porter2`](https://snowballstem.org/algorithms/english/stemmer.html), [`possessive_english`](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/en/EnglishPossessiveFilter.html)
Estonian: [**`estonian`**](https://lucene.apache.org/core/10_0_0/analyzers-common/org/tartarus/snowball/ext/EstonianStemmer.md)
-Finnish: [**`finnish`**](https://snowballstem.org/algorithms/finnish/stemmer.md), [`light_finnish`](http://clef.isti.cnr.it/2003/WN_web/22.pdf)
-French: [**`light_french`**](https://dl.acm.org/citation.cfm?id=1141523), [`french`](https://snowballstem.org/algorithms/french/stemmer.md), [`minimal_french`](https://dl.acm.org/citation.cfm?id=318984)
+Finnish: [**`finnish`**](https://snowballstem.org/algorithms/finnish/stemmer.html), [`light_finnish`](http://clef.isti.cnr.it/2003/WN_web/22.pdf)
+French: [**`light_french`**](https://dl.acm.org/citation.cfm?id=1141523), [`french`](https://snowballstem.org/algorithms/french/stemmer.html), [`minimal_french`](https://dl.acm.org/citation.cfm?id=318984)
Galician: [**`galician`**](http://bvg.udc.es/recursos_lingua/stemming.jsp), [`minimal_galician`](http://bvg.udc.es/recursos_lingua/stemming.jsp) (Plural step only)
-German: [**`light_german`**](https://dl.acm.org/citation.cfm?id=1141523), [`german`](https://snowballstem.org/algorithms/german/stemmer.md), [`minimal_german`](http://members.unine.ch/jacques.savoy/clef/morpho.pdf)
+German: [**`light_german`**](https://dl.acm.org/citation.cfm?id=1141523), [`german`](https://snowballstem.org/algorithms/german/stemmer.html), [`minimal_german`](http://members.unine.ch/jacques.savoy/clef/morpho.pdf)
Greek: [**`greek`**](https://sais.se/mthprize/2007/ntais2007.pdf)
Hindi: [**`hindi`**](http://computing.open.ac.uk/Sites/EACLSouthAsia/Papers/p6-Ramanathan.pdf)
-Hungarian: [**`hungarian`**](https://snowballstem.org/algorithms/hungarian/stemmer.md), [`light_hungarian`](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181)
+Hungarian: [**`hungarian`**](https://snowballstem.org/algorithms/hungarian/stemmer.html), [`light_hungarian`](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181)
Indonesian: [**`indonesian`**](http://www.illc.uva.nl/Publications/ResearchReports/MoL-2003-02.text.pdf)
Irish: [**`irish`**](https://snowballstem.org/otherapps/oregan/)
-Italian: [**`light_italian`**](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf), [`italian`](https://snowballstem.org/algorithms/italian/stemmer.md)
+Italian: [**`light_italian`**](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf), [`italian`](https://snowballstem.org/algorithms/italian/stemmer.html)
Kurdish (Sorani): [**`sorani`**](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/ckb/SoraniStemmer.md)
Latvian: [**`latvian`**](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/lv/LatvianStemmer.md)
Lithuanian: [**`lithuanian`**](https://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_5_3/lucene/analysis/common/src/java/org/apache/lucene/analysis/lt/stem_ISO_8859_1.sbl?view=markup)
-Norwegian (Bokmål): [**`norwegian`**](https://snowballstem.org/algorithms/norwegian/stemmer.md), [**`light_norwegian`**](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/no/NorwegianLightStemmer.md), [`minimal_norwegian`](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.md)
+Norwegian (Bokmål): [**`norwegian`**](https://snowballstem.org/algorithms/norwegian/stemmer.html), [**`light_norwegian`**](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/no/NorwegianLightStemmer.md), [`minimal_norwegian`](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.md)
Norwegian (Nynorsk): [**`light_nynorsk`**](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/no/NorwegianLightStemmer.md), [`minimal_nynorsk`](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.md)
Persian: [**`persian`**](https://lucene.apache.org/core/10_0_0/analysis/common/org/apache/lucene/analysis/fa/PersianStemmer.md)
-Portuguese: [**`light_portuguese`**](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181), [`minimal_portuguese`](http://www.inf.ufrgs.br/~buriol/papers/Orengo_CLEF07.pdf), [`portuguese`](https://snowballstem.org/algorithms/portuguese/stemmer.md), [`portuguese_rslp`](https://www.inf.ufrgs.br/~viviane/rslp/index.htm)
-Romanian: [**`romanian`**](https://snowballstem.org/algorithms/romanian/stemmer.md)
-Russian: [**`russian`**](https://snowballstem.org/algorithms/russian/stemmer.md), [`light_russian`](https://doc.rero.ch/lm.php?url=1000%2C43%2C4%2C20091209094227-CA%2FDolamic_Ljiljana_-_Indexing_and_Searching_Strategies_for_the_Russian_20091209.pdf)
-Serbian: [**`serbian`**](https://snowballstem.org/algorithms/serbian/stemmer.md)
-Spanish: [**`light_spanish`**](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf), [`spanish`](https://snowballstem.org/algorithms/spanish/stemmer.md), [`spanish_plural`](https://www.wikilengua.org/index.php/Plural_(formaci%C3%B3n))
-Swedish: [**`swedish`**](https://snowballstem.org/algorithms/swedish/stemmer.md), [`light_swedish`](http://clef.isti.cnr.it/2003/WN_web/22.pdf)
-Turkish: [**`turkish`**](https://snowballstem.org/algorithms/turkish/stemmer.md)
+Portuguese: [**`light_portuguese`**](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181), [`minimal_portuguese`](http://www.inf.ufrgs.br/~buriol/papers/Orengo_CLEF07.pdf), [`portuguese`](https://snowballstem.org/algorithms/portuguese/stemmer.html), [`portuguese_rslp`](https://www.inf.ufrgs.br/~viviane/rslp/index.htm)
+Romanian: [**`romanian`**](https://snowballstem.org/algorithms/romanian/stemmer.html)
+Russian: [**`russian`**](https://snowballstem.org/algorithms/russian/stemmer.html), [`light_russian`](https://doc.rero.ch/lm.php?url=1000%2C43%2C4%2C20091209094227-CA%2FDolamic_Ljiljana_-_Indexing_and_Searching_Strategies_for_the_Russian_20091209.pdf)
+Serbian: [**`serbian`**](https://snowballstem.org/algorithms/serbian/stemmer.html)
+Spanish: [**`light_spanish`**](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf), [`spanish`](https://snowballstem.org/algorithms/spanish/stemmer.html), [`spanish_plural`](https://www.wikilengua.org/index.php/Plural_(formaci%C3%B3n))
+Swedish: [**`swedish`**](https://snowballstem.org/algorithms/swedish/stemmer.html), [`light_swedish`](http://clef.isti.cnr.it/2003/WN_web/22.pdf)
+Turkish: [**`turkish`**](https://snowballstem.org/algorithms/turkish/stemmer.html)
:::
`name`: An alias for the [`language`](#analysis-stemmer-tokenfilter-language-parm) parameter. If both this and the `language` parameter are specified, the `language` parameter argument is used.
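A configuration sketch wiring the `language` parameter into a custom analyzer (index, analyzer, and filter names are illustrative):

```console
PUT /my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_stemmer_analyzer": {
          "tokenizer": "standard",
          "filter": [ "my_stemmer" ]
        }
      },
      "filter": {
        "my_stemmer": {
          "type": "stemmer",
          "language": "light_german"
        }
      }
    }
  }
}
```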


@ -33,7 +33,7 @@ This functionality is marked as experimental in Lucene
You can customize the `icu-tokenizer` behavior by specifying per-script rule files, see the [RBBI rules syntax reference](http://userguide.icu-project.org/boundaryanalysis#TOC-RBBI-Rules) for a more detailed explanation.
-To add icu tokenizer rules, set the `rule_files` setting, which should contain a comma-separated list of `code:rulefile` pairs in the following format: [four-letter ISO 15924 script code](https://unicode.org/iso15924/iso15924-codes.md), followed by a colon, then a rule file name. Rule files are placed in the `ES_HOME/config` directory.
+To add icu tokenizer rules, set the `rule_files` setting, which should contain a comma-separated list of `code:rulefile` pairs in the following format: [four-letter ISO 15924 script code](https://unicode.org/iso15924/iso15924-codes.html), followed by a colon, then a rule file name. Rule files are placed in the `ES_HOME/config` directory.
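For instance, a `rule_files` entry for Latin-script text might be wired up like this (index, analyzer, and file names are illustrative; the rule file itself is the one described next):

```console
PUT icu_sample
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "my_icu_analyzer": { "tokenizer": "icu_user_file_tokenizer" }
        },
        "tokenizer": {
          "icu_user_file_tokenizer": {
            "type": "icu_tokenizer",
            "rule_files": "Latn:KeywordTokenizer.rbbi"
          }
        }
      }
    }
  }
}
```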
As a demonstration of how the rule files can be used, save the following user file to `$ES_HOME/config/KeywordTokenizer.rbbi`:


@ -5,7 +5,7 @@ mapped_pages:
# nori_part_of_speech token filter [analysis-nori-speech]
-The `nori_part_of_speech` token filter removes tokens that match a set of part-of-speech tags. The list of supported tags and their meanings can be found here: [Part of speech tags](https://lucene.apache.org/core/10_1_0/core/../analysis/nori/org/apache/lucene/analysis/ko/POS.Tag.md)
+The `nori_part_of_speech` token filter removes tokens that match a set of part-of-speech tags. The list of supported tags and their meanings can be found here: [Part of speech tags](https://lucene.apache.org/core/10_1_0/core/../analysis/nori/org/apache/lucene/analysis/ko/POS.Tag.html)
It accepts the following setting:
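A configuration sketch, assuming the filter's `stoptags` list setting (the index name and tag values are illustrative):

```console
PUT nori_sample
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "my_analyzer": {
            "type": "custom",
            "tokenizer": "nori_tokenizer",
            "filter": [ "my_posfilter" ]
          }
        },
        "filter": {
          "my_posfilter": {
            "type": "nori_part_of_speech",
            "stoptags": [ "NR" ]
          }
        }
      }
    }
  }
}
```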


@ -11,7 +11,7 @@ This section contains some other information about designing and managing an {{e
EC2 instances offer a number of different kinds of storage. Please be aware of the following when selecting the storage for your cluster:
-* [Instance Store](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.md) is recommended for {{es}} clusters as it offers excellent performance and is cheaper than EBS-based storage. {{es}} is designed to work well with this kind of ephemeral storage because it replicates each shard across multiple nodes. If a node fails and its Instance Store is lost then {{es}} will rebuild any lost shards from other copies.
+* [Instance Store](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html) is recommended for {{es}} clusters as it offers excellent performance and is cheaper than EBS-based storage. {{es}} is designed to work well with this kind of ephemeral storage because it replicates each shard across multiple nodes. If a node fails and its Instance Store is lost then {{es}} will rebuild any lost shards from other copies.
* [EBS-based storage](https://aws.amazon.com/ebs/) may be acceptable for smaller clusters (1-2 nodes). Be sure to use provisioned IOPS to ensure your cluster has satisfactory performance.
* [EFS-based storage](https://aws.amazon.com/efs/) is not recommended or supported as it does not offer satisfactory performance. Historically, shared network filesystems such as EFS have not always offered precisely the behaviour that {{es}} requires of its filesystem, and this has been known to lead to index corruption. Although EFS offers durability, shared storage, and the ability to grow and shrink filesystems dynamically, you can achieve the same benefits using {{es}} directly.
@ -24,12 +24,12 @@ Prefer the [Amazon Linux 2 AMIs](https://aws.amazon.com/amazon-linux-2/) as thes
## Networking [_networking]
* Smaller instance types have limited network performance, in terms of both [bandwidth and number of connections](https://lab.getbase.com/how-we-discovered-limitations-on-the-aws-tcp-stack/). If networking is a bottleneck, avoid [instance types](https://aws.amazon.com/ec2/instance-types/) with networking labelled as `Moderate` or `Low`.
-* It is a good idea to distribute your nodes across multiple [availability zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.md) and use [shard allocation awareness](docs-content://deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md) to ensure that each shard has copies in more than one availability zone.
+* It is a good idea to distribute your nodes across multiple [availability zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) and use [shard allocation awareness](docs-content://deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md) to ensure that each shard has copies in more than one availability zone.
* Do not span a cluster across regions. {{es}} expects that node-to-node connections within a cluster are reasonably reliable and offer high bandwidth and low latency, and these properties do not hold for connections between regions. Although an {{es}} cluster will behave correctly when node-to-node connections are unreliable or slow, it is not optimised for this case and its performance may suffer. If you wish to geographically distribute your data, you should provision multiple clusters and use features such as [cross-cluster search](docs-content://solutions/search/cross-cluster-search.md) and [cross-cluster replication](docs-content://deploy-manage/tools/cross-cluster-replication.md).
## Other recommendations [_other_recommendations]
-* If you have split your nodes into roles, consider [tagging the EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.md) by role to make it easier to filter and view your EC2 instances in the AWS console.
-* Consider [enabling termination protection](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.md#Using_ChangingDisableAPITermination) for all of your data and master-eligible nodes. This will help to prevent accidental termination of these nodes which could temporarily reduce the resilience of the cluster and which could cause a potentially disruptive reallocation of shards.
-* If running your cluster using one or more [auto-scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.md), consider protecting your data and master-eligible nodes [against termination during scale-in](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.md#instance-protection-instance). This will help to prevent automatic termination of these nodes which could temporarily reduce the resilience of the cluster and which could cause a potentially disruptive reallocation of shards. If these instances are protected against termination during scale-in then you can use shard allocation filtering to gracefully migrate any data off these nodes before terminating them manually. Refer to [](/reference/elasticsearch/index-settings/shard-allocation.md).
+* If you have split your nodes into roles, consider [tagging the EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) by role to make it easier to filter and view your EC2 instances in the AWS console.
+* Consider [enabling termination protection](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingDisableAPITermination) for all of your data and master-eligible nodes. This will help to prevent accidental termination of these nodes which could temporarily reduce the resilience of the cluster and which could cause a potentially disruptive reallocation of shards.
+* If running your cluster using one or more [auto-scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html), consider protecting your data and master-eligible nodes [against termination during scale-in](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html#instance-protection-instance). This will help to prevent automatic termination of these nodes which could temporarily reduce the resilience of the cluster and which could cause a potentially disruptive reallocation of shards. If these instances are protected against termination during scale-in then you can use shard allocation filtering to gracefully migrate any data off these nodes before terminating them manually. Refer to [](/reference/elasticsearch/index-settings/shard-allocation.md).


@ -9,7 +9,7 @@ The `discovery-ec2` plugin allows {{es}} to find the master-eligible nodes in a
It is normally a good idea to restrict the discovery process just to the master-eligible nodes in the cluster. This plugin allows you to identify these nodes by certain criteria including their tags, their membership of security groups, and their placement within availability zones. The discovery process will work correctly even if it finds master-ineligible nodes, but master elections will be more efficient if this can be avoided.
-The interaction with the AWS API can be authenticated using the [instance role](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.md), or else custom credentials can be supplied.
+The interaction with the AWS API can be authenticated using the [instance role](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html), or else custom credentials can be supplied.
## Enabling EC2 discovery [_enabling_ec2_discovery]
@ -41,7 +41,7 @@ The available settings for the EC2 discovery plugin are as follows.
: An EC2 session token. If set, you must also set `discovery.ec2.access_key` and `discovery.ec2.secret_key`. This setting is sensitive and must be stored in the {{es}} keystore.
`discovery.ec2.endpoint`
-: The EC2 service endpoint to which to connect. See [https://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region](https://docs.aws.amazon.com/general/latest/gr/rande.md#ec2_region) to find the appropriate endpoint for the region. This setting defaults to `ec2.us-east-1.amazonaws.com` which is appropriate for clusters running in the `us-east-1` region.
+: The EC2 service endpoint to which to connect. See [https://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region](https://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region) to find the appropriate endpoint for the region. This setting defaults to `ec2.us-east-1.amazonaws.com` which is appropriate for clusters running in the `us-east-1` region.
`discovery.ec2.protocol`
: The protocol to use to connect to the EC2 service endpoint, which may be either `http` or `https`. Defaults to `https`.
@ -75,11 +75,11 @@ The available settings for the EC2 discovery plugin are as follows.
If you set `discovery.ec2.host_type` to a value of the form `tag:TAGNAME` then the value of the tag `TAGNAME` attached to each instance will be used as that instance's address for discovery. Instances which do not have this tag set will be ignored by the discovery process.
-For example if you tag some EC2 instances with a tag named `elasticsearch-host-name` and set `host_type: tag:elasticsearch-host-name` then the `discovery-ec2` plugin will read each instance's host name from the value of the `elasticsearch-host-name` tag. [Read more about EC2 Tags](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.md).
+For example if you tag some EC2 instances with a tag named `elasticsearch-host-name` and set `host_type: tag:elasticsearch-host-name` then the `discovery-ec2` plugin will read each instance's host name from the value of the `elasticsearch-host-name` tag. [Read more about EC2 Tags](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html).
`discovery.ec2.availability_zones`
-: A list of the names of the availability zones to use for discovery. The name of an availability zone is the [region code followed by a letter](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.md), such as `us-east-1a`. Only instances placed in one of the given availability zones will be used for discovery.
+: A list of the names of the availability zones to use for discovery. The name of an availability zone is the [region code followed by a letter](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html), such as `us-east-1a`. Only instances placed in one of the given availability zones will be used for discovery.
$$$discovery-ec2-filtering$$$


@ -60,7 +60,7 @@ Integrations are not plugins, but are external tools or modules that make it eas
* [Spring Elasticsearch](https://github.com/dadoonet/spring-elasticsearch): Spring Factory for Elasticsearch
* [Zeebe](https://zeebe.io): An Elasticsearch exporter acts as a bridge between Zeebe and Elasticsearch
* [Apache Pulsar](https://pulsar.apache.org/docs/en/io-elasticsearch): The Elasticsearch Sink Connector is used to pull messages from Pulsar topics and persist the messages to an index.
-* [Micronaut Elasticsearch Integration](https://micronaut-projects.github.io/micronaut-elasticsearch/latest/guide/index.md): Integration of Micronaut with Elasticsearch
+* [Micronaut Elasticsearch Integration](https://micronaut-projects.github.io/micronaut-elasticsearch/latest/guide/index.html): Integration of Micronaut with Elasticsearch
* [Apache StreamPipes](https://streampipes.apache.org): StreamPipes is a framework that enables users to work with IoT data sources.
* [Apache MetaModel](https://metamodel.apache.org/): Providing a common interface for discovery, exploration of metadata and querying of different types of data sources.
* [Micrometer](https://micrometer.io): Vendor-neutral application metrics facade. Think SLF4j, but for metrics.
@ -84,7 +84,7 @@ Integrations are not plugins, but are external tools or modules that make it eas
### Supported by the community: [_supported_by_the_community_6]
-* [SPM for Elasticsearch](https://sematext.com/spm/index.md): Performance monitoring with live charts showing cluster and node stats, integrated alerts, email reports, etc.
+* [SPM for Elasticsearch](https://sematext.com/spm/index.html): Performance monitoring with live charts showing cluster and node stats, integrated alerts, email reports, etc.
* [Zabbix monitoring template](https://www.zabbix.com/integrations/elasticsearch): Monitor the performance and status of your {{es}} nodes and cluster with Zabbix and receive events information.


@ -145,7 +145,7 @@ By default, {{es}} enables garbage collection (GC) logs. These are configured in
You can reconfigure JVM logging using the command line options described in [JEP 158: Unified JVM Logging](https://openjdk.java.net/jeps/158). Unless you change the default `jvm.options` file directly, the {{es}} default configuration is applied in addition to your own settings. To disable the default configuration, first disable logging by supplying the `-Xlog:disable` option, then supply your own command line options. This disables *all* JVM logging, so be sure to review the available options and enable everything that you require.
-To see further options not contained in the original JEP, see [Enable Logging with the JVM Unified Logging Framework](https://docs.oracle.com/en/java/javase/13/docs/specs/man/java.md#enable-logging-with-the-jvm-unified-logging-framework).
+To see further options not contained in the original JEP, see [Enable Logging with the JVM Unified Logging Framework](https://docs.oracle.com/en/java/javase/13/docs/specs/man/java.html#enable-logging-with-the-jvm-unified-logging-framework).
### Examples [_examples_2]


@ -16,7 +16,7 @@ Fields of type `geo_point` accept latitude-longitude pairs, which can be used:
* to integrate distance into a document's [relevance score](/reference/query-languages/query-dsl-function-score-query.md).
* to [sort](/reference/elasticsearch/rest-apis/sort-search-results.md#geo-sorting) documents by distance.
-As with [geo_shape](/reference/elasticsearch/mapping-reference/geo-shape.md) and [point](/reference/elasticsearch/mapping-reference/point.md), `geo_point` can be specified in [GeoJSON](http://geojson.org) and [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.md) formats. However, there are a number of additional formats that are supported for convenience and historical reasons. In total there are six ways that a geopoint may be specified, as demonstrated below:
+As with [geo_shape](/reference/elasticsearch/mapping-reference/geo-shape.md) and [point](/reference/elasticsearch/mapping-reference/point.md), `geo_point` can be specified in [GeoJSON](http://geojson.org) and [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.html) formats. However, there are a number of additional formats that are supported for convenience and historical reasons. In total there are six ways that a geopoint may be specified, as demonstrated below:
```console
PUT my-index-000001
@ -92,7 +92,7 @@ GET my-index-000001/_search
```
1. Geopoint expressed as an object, in [GeoJSON](https://geojson.org/) format, with `type` and `coordinates` keys.
-2. Geopoint expressed as a [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.md) POINT with the format: `"POINT(lon lat)"`
+2. Geopoint expressed as a [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.html) POINT with the format: `"POINT(lon lat)"`
3. Geopoint expressed as an object, with `lat` and `lon` keys.
4. Geopoint expressed as an array with the format: [ `lon`, `lat`]
5. Geopoint expressed as a string with the format: `"lat,lon"`.
@ -105,7 +105,7 @@ GET my-index-000001/_search
Please note that string geopoints are ordered as `lat,lon`, while array geopoints, GeoJSON and WKT are ordered as the reverse: `lon,lat`.
-The reasons for this are historical. Geographers traditionally write `latitude` before `longitude`, while recent formats specified for geographic data like [GeoJSON](https://geojson.org/) and [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.md) order `longitude` before `latitude` (easting before northing) in order to match the mathematical convention of ordering `x` before `y`.
+The reasons for this are historical. Geographers traditionally write `latitude` before `longitude`, while recent formats specified for geographic data like [GeoJSON](https://geojson.org/) and [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.html) order `longitude` before `latitude` (easting before northing) in order to match the mathematical convention of ordering `x` before `y`.
::::


@ -68,7 +68,7 @@ PUT /example
### Input Structure [input-structure]
-Shapes can be represented using either the [GeoJSON](http://geojson.org) or [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.md) (WKT) format. The following table provides a mapping of GeoJSON and WKT to Elasticsearch types:
+Shapes can be represented using either the [GeoJSON](http://geojson.org) or [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.html) (WKT) format. The following table provides a mapping of GeoJSON and WKT to Elasticsearch types:
| GeoJSON Type | WKT Type | Elasticsearch Type | Description |
| --- | --- | --- | --- |
@ -88,7 +88,7 @@ For all types, both the inner `type` and `coordinates` fields are required.
-#### [Point](http://geojson.org/geojson-spec.md#id2) [geo-point-type]
+#### [Point](http://geojson.org/geojson-spec.html#id2) [geo-point-type]
A point is a single geographic coordinate, such as the location of a building or the current position given by a smartphone's Geolocation API. The following is an example of a point in GeoJSON.
@ -112,7 +112,7 @@ POST /example/_doc
```
-#### [LineString](http://geojson.org/geojson-spec.md#id3) [geo-linestring]
+#### [LineString](http://geojson.org/geojson-spec.html#id3) [geo-linestring]
A linestring defined by an array of two or more positions. By specifying only two points, the linestring will represent a straight line. Specifying more than two points creates an arbitrary path. The following is an example of a linestring in GeoJSON.
@ -138,7 +138,7 @@ POST /example/_doc
The above linestring would draw a straight line starting at the White House to the US Capitol Building.
-#### [Polygon](http://geojson.org/geojson-spec.md#id4) [geo-polygon]
+#### [Polygon](http://geojson.org/geojson-spec.html#id4) [geo-polygon]
A polygon is defined by a list of lists of points. The first and last points in each (outer) list must be the same (the polygon must be closed). The following is an example of a polygon in GeoJSON.
@ -216,7 +216,7 @@ POST /example/_doc
If the difference between a polygon's minimum longitude and the maximum longitude is 180° or greater, {{es}} checks whether the polygon's document-level `orientation` differs from the default orientation. If the orientation differs, {{es}} considers the polygon to cross the international dateline and splits the polygon at the dateline.
-#### [MultiPoint](http://geojson.org/geojson-spec.md#id5) [geo-multipoint]
+#### [MultiPoint](http://geojson.org/geojson-spec.html#id5) [geo-multipoint]
The following is an example of a list of GeoJSON points:
@ -242,7 +242,7 @@ POST /example/_doc
```
-#### [MultiLineString](http://geojson.org/geojson-spec.md#id6) [geo-multilinestring]
+#### [MultiLineString](http://geojson.org/geojson-spec.html#id6) [geo-multilinestring]
The following is an example of a list of GeoJSON linestrings:
@ -270,7 +270,7 @@ POST /example/_doc
```
-#### [MultiPolygon](http://geojson.org/geojson-spec.md#id7) [geo-multipolygon]
+#### [MultiPolygon](http://geojson.org/geojson-spec.html#id7) [geo-multipolygon]
The following is an example of a list of GeoJSON polygons (second polygon contains a hole):


@ -11,7 +11,7 @@ The `point` data type facilitates the indexing of and searching arbitrary `x, y`
You can query documents using this type using [shape Query](/reference/query-languages/query-dsl-shape-query.md).
As with [geo_shape](/reference/elasticsearch/mapping-reference/geo-shape.md) and [geo_point](/reference/elasticsearch/mapping-reference/geo-point.md), `point` can be specified in [GeoJSON](http://geojson.org) and [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.md) formats. However, there are a number of additional formats that are supported for convenience and historical reasons. In total there are five ways that a cartesian point may be specified, as demonstrated below:
As with [geo_shape](/reference/elasticsearch/mapping-reference/geo-shape.md) and [geo_point](/reference/elasticsearch/mapping-reference/geo-point.md), `point` can be specified in [GeoJSON](http://geojson.org) and [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.html) formats. However, there are a number of additional formats that are supported for convenience and historical reasons. In total there are five ways that a cartesian point may be specified, as demonstrated below:
```console
PUT my-index-000001
@ -63,7 +63,7 @@ PUT my-index-000001/_doc/5
```
1. Point expressed as an object, in [GeoJSON](https://geojson.org/) format, with `type` and `coordinates` keys.
2. Point expressed as a [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.md) POINT with the format: `"POINT(x y)"`
2. Point expressed as a [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.html) POINT with the format: `"POINT(x y)"`
3. Point expressed as an object, with `x` and `y` keys.
4. Point expressed as an array with the format: [`x`, `y`]
5. Point expressed as a string with the format: `"x,y"`.
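Two of the five formats from the list above, as a minimal sketch (assuming `location` is mapped as `point` in `my-index-000001`):

```console
PUT my-index-000001/_doc/1
{
  "location": {
    "x": 41.12,
    "y": -71.34
  }
}

PUT my-index-000001/_doc/2
{
  "location": "POINT (41.12 -71.34)"
}
```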

View file

@ -14,7 +14,7 @@ You can query documents using this type using [shape Query](/reference/query-lan
## Mapping Options [shape-mapping-options]
Like the [`geo_shape`](/reference/elasticsearch/mapping-reference/geo-shape.md) field type, the `shape` field mapping maps [GeoJSON](http://geojson.org) or [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.md) (WKT) geometry objects to the shape type. To enable it, users must explicitly map fields to the shape type.
Like the [`geo_shape`](/reference/elasticsearch/mapping-reference/geo-shape.md) field type, the `shape` field mapping maps [GeoJSON](http://geojson.org) or [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.html) (WKT) geometry objects to the shape type. To enable it, users must explicitly map fields to the shape type.
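A minimal mapping sketch (the index and field names are illustrative):

```console
PUT /example
{
  "mappings": {
    "properties": {
      "geometry": {
        "type": "shape"
      }
    }
  }
}
```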
| Option | Description | Default |
| --- | --- | --- |
@ -53,7 +53,7 @@ This mapping definition maps the geometry field to the shape type. The indexer u
## Input Structure [shape-input-structure]
Shapes can be represented using either the [GeoJSON](http://geojson.org) or [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.md) (WKT) format. The following table provides a mapping of GeoJSON and WKT to Elasticsearch types:
Shapes can be represented using either the [GeoJSON](http://geojson.org) or [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.html) (WKT) format. The following table provides a mapping of GeoJSON and WKT to Elasticsearch types:
| GeoJSON Type | WKT Type | Elasticsearch Type | Description |
| --- | --- | --- | --- |
@ -75,7 +75,7 @@ In GeoJSON and WKT, and therefore Elasticsearch, the correct **coordinate order
### [Point](http://geojson.org/geojson-spec.md#id2) [point-shape]
### [Point](http://geojson.org/geojson-spec.html#id2) [point-shape]
A point is a single coordinate in cartesian `x, y` space. It may represent the location of an item of interest in a virtual world or projected space. The following is an example of a point in GeoJSON.
@ -99,7 +99,7 @@ POST /example/_doc
```
### [LineString](http://geojson.org/geojson-spec.md#id3) [linestring]
### [LineString](http://geojson.org/geojson-spec.html#id3) [linestring]
A `linestring` is defined by an array of two or more positions. By specifying only two points, the `linestring` will represent a straight line. Specifying more than two points creates an arbitrary path. The following is an example of a LineString in GeoJSON.
@ -123,7 +123,7 @@ POST /example/_doc
```
### [Polygon](http://geojson.org/geojson-spec.md#id4) [polygon]
### [Polygon](http://geojson.org/geojson-spec.html#id4) [polygon]
A polygon is defined by a list of lists of points. The first and last points in each (outer) list must be the same (the polygon must be closed). The following is an example of a Polygon in GeoJSON.
@ -192,7 +192,7 @@ POST /example/_doc
```
### [MultiPoint](http://geojson.org/geojson-spec.md#id5) [multipoint]
### [MultiPoint](http://geojson.org/geojson-spec.html#id5) [multipoint]
The following is an example of a list of GeoJSON points:
@ -218,7 +218,7 @@ POST /example/_doc
```
### [MultiLineString](http://geojson.org/geojson-spec.md#id6) [multilinestring]
### [MultiLineString](http://geojson.org/geojson-spec.html#id6) [multilinestring]
The following is an example of a list of GeoJSON linestrings:
@ -246,7 +246,7 @@ POST /example/_doc
```
### [MultiPolygon](http://geojson.org/geojson-spec.md#id7) [multipolygon]
### [MultiPolygon](http://geojson.org/geojson-spec.html#id7) [multipolygon]
The following is an example of a list of GeoJSON polygons (second polygon contains a hole):

View file

@ -65,7 +65,7 @@ A cron expression is a string of the following form:
<seconds> <minutes> <hours> <day_of_month> <month> <day_of_week> [year]
```
{{es}} uses the cron parser from the [Quartz Job Scheduler](https://quartz-scheduler.org). For more information about writing Quartz cron expressions, see the [Quartz CronTrigger Tutorial](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.md).
{{es}} uses the cron parser from the [Quartz Job Scheduler](https://quartz-scheduler.org). For more information about writing Quartz cron expressions, see the [Quartz CronTrigger Tutorial](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html).
All schedule times are in Coordinated Universal Time (UTC); other time zones are not supported.
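For example, the following snapshot lifecycle policy sketch runs every day at 01:30 UTC (the repository name `my_repository` is an assumption; any registered snapshot repository works):

```console
PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": [ "*" ]
  }
}
```

Reading the expression left to right: second `0`, minute `30`, hour `1`, every day of the month, every month; the `?` means no specific day of the week.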

View file

@ -287,7 +287,7 @@ Perl
: See [Search::Elasticsearch::Client::5_0::Bulk](https://metacpan.org/pod/Search::Elasticsearch::Client::5_0::Bulk) and [Search::Elasticsearch::Client::5_0::Scroll](https://metacpan.org/pod/Search::Elasticsearch::Client::5_0::Scroll)
Python
: See [elasticsearch.helpers.*](https://elasticsearch-py.readthedocs.io/en/stable/helpers.md)
: See [elasticsearch.helpers.*](https://elasticsearch-py.readthedocs.io/en/stable/helpers.html)
JavaScript
: See [client.helpers.*](elasticsearch-js://reference/client-helpers.md)
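All of these helpers wrap the same newline-delimited `_bulk` endpoint. A raw sketch of what they send on the wire (the index name is illustrative):

```console
POST /_bulk
{ "index": { "_index": "my-index-000001" } }
{ "message": "first document" }
{ "index": { "_index": "my-index-000001" } }
{ "message": "second document" }
```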

View file

@ -428,7 +428,7 @@ GET /_search
### Lat lon as WKT string [_lat_lon_as_wkt_string]
Format in [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.md).
Format in [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.html).
```console
GET /_search

View file

@ -24,7 +24,7 @@ $$$attachment-options$$$
| `properties` | no | all properties | Array of properties to store. Can be `content`, `title`, `name`, `author`, `keywords`, `date`, `content_type`, `content_length`, `language` |
| `ignore_missing` | no | `false` | If `true` and `field` does not exist, the processor quietly exits without modifying the document |
| `remove_binary` | encouraged | `false` | If `true`, the binary `field` will be removed from the document. This option is not required, but setting it explicitly is encouraged, and omitting it will result in a warning. |
| `resource_name` | no | | Field containing the name of the resource to decode. If specified, the processor passes this resource name to the underlying Tika library to enable [Resource Name Based Detection](https://tika.apache.org/1.24.1/detection.md#Resource_Name_Based_Detection). |
| `resource_name` | no | | Field containing the name of the resource to decode. If specified, the processor passes this resource name to the underlying Tika library to enable [Resource Name Based Detection](https://tika.apache.org/1.24.1/detection.html#Resource_Name_Based_Detection). |
### Example [attachment-json-ex]
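A minimal pipeline sketch, assuming the base64-encoded source lives in a field named `data` and following the `remove_binary` recommendation above:

```console
PUT _ingest/pipeline/attachment
{
  "description": "Extract attachment information",
  "processors": [
    {
      "attachment": {
        "field": "data",
        "remove_binary": true
      }
    }
  ]
}
```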

View file

@ -70,7 +70,7 @@ PUT _ingest/pipeline/geohex2shape
}
```
These two pipelines can be used to index documents into the `geocells` index. The `geocell` field will be the string version of either a rectangular tile with format `z/x/y` or an H3 cell address, depending on which ingest processor we use when indexing the document. The resulting geometry will be represented and indexed as a [`geo_shape`](/reference/elasticsearch/mapping-reference/geo-shape.md) field in either [GeoJSON](http://geojson.org) or the [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.md) format.
These two pipelines can be used to index documents into the `geocells` index. The `geocell` field will be the string version of either a rectangular tile with format `z/x/y` or an H3 cell address, depending on which ingest processor we use when indexing the document. The resulting geometry will be represented and indexed as a [`geo_shape`](/reference/elasticsearch/mapping-reference/geo-shape.md) field in either [GeoJSON](http://geojson.org) or the [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.html) format.
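For instance, assuming the `geotile2shape` pipeline described above exists, indexing a `z/x/y` tile address might look like this sketch:

```console
PUT geocells/_doc/1?pipeline=geotile2shape
{
  "geocell": "4/8/5"
}
```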
## Example: Rectangular geotile with envelope in GeoJSON [_example_rectangular_geotile_with_envelope_in_geojson]
@ -112,7 +112,7 @@ The response shows how the ingest-processor has replaced the `geocell` field wit
## Example: Hexagonal geohex with polygon in WKT format [_example_hexagonal_geohex_with_polygon_in_wkt_format]
In this example a `geocell` field with an H3 string address is indexed as a [WKT Polygon](https://docs.opengeospatial.org/is/12-063r5/12-063r5.md), since this ingest processor explicitly defined the `target_format`.
In this example a `geocell` field with an H3 string address is indexed as a [WKT Polygon](https://docs.opengeospatial.org/is/12-063r5/12-063r5.html), since this ingest processor explicitly defined the `target_format`.
```console
PUT geocells/_doc/1?pipeline=geohex2shape

View file

@ -149,7 +149,7 @@ The following configuration fields are required to set up the connector:
`spaces`
: Comma-separated list of [Space Keys](https://confluence.atlassian.com/doc/space-keys-829076188.md) to fetch data from Confluence. If the value is `*`, the connector will fetch data from all spaces present in the configured `spaces`. Default value is `*`. Examples:
: Comma-separated list of [Space Keys](https://confluence.atlassian.com/doc/space-keys-829076188.html) to fetch data from Confluence. If the value is `*`, the connector fetches data from all available spaces. Default value is `*`. Examples:
* `EC`, `TP`
* `*`

View file

@ -124,7 +124,7 @@ Follow these steps:
```
For more information, refer to this [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.SSL.md) about Oracle SSL. Oracle docs: [https://docs.oracle.com/database/121/DBSEG/asossl.htm#DBSEG070](https://docs.oracle.com/database/121/DBSEG/asossl.htm#DBSEG070).
For more information, refer to this [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.SSL.html) about Oracle SSL. Oracle docs: [https://docs.oracle.com/database/121/DBSEG/asossl.htm#DBSEG070](https://docs.oracle.com/database/121/DBSEG/asossl.htm#DBSEG070).
For additional operations, see [*Connectors UI in {{kib}}*](/reference/ingestion-tools/search-connectors/connectors-ui-in-kibana.md).

View file

@ -103,9 +103,9 @@ S3 users will also need to [Create an IAM identity](#es-connectors-s3-client-usa
#### Create an IAM identity [es-connectors-s3-client-usage-create-iam]
Users need to create an IAM identity to use this connector as a **self-managed connector**. Refer to [the AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-set-up.md).
Users need to create an IAM identity to use this connector as a **self-managed connector**. Refer to [the AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-set-up.html).
The [policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.md) associated with the IAM identity must have the following **AWS permissions**:
The [policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) associated with the IAM identity must have the following **AWS permissions**:
* `ListAllMyBuckets`
* `ListBucket`

View file

@ -201,7 +201,7 @@ FROM employees
**Parameters**
`datePattern`
: The date format. Refer to the [`DateTimeFormatter` documentation](https://docs.oracle.com/en/java/javase/14/docs/api/java.base/java/time/format/DateTimeFormatter.md) for the syntax. If `null`, the function returns `null`.
: The date format. Refer to the [`DateTimeFormatter` documentation](https://docs.oracle.com/en/java/javase/14/docs/api/java.base/java/time/format/DateTimeFormatter.html) for the syntax. If `null`, the function returns `null`.
`dateString`
: Date expression as a string. If `null` or an empty string, the function returns `null`.
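A sketch exercising the function through the ES|QL query endpoint (the date values are illustrative):

```console
POST /_query
{
  "query": "ROW date_string = \"2022-05-06\" | EVAL date = DATE_PARSE(\"yyyy-MM-dd\", date_string)"
}
```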

View file

@ -17,7 +17,7 @@
Round a number up to the nearest integer.
::::{note}
This is a noop for `long` (including unsigned) and `integer`. For `double` this picks the closest `double` value to the integer similar to [Math.ceil](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Math.md#ceil(double)).
This is a noop for `long` (including unsigned) and `integer`. For `double` this picks the closest `double` value to the integer similar to [Math.ceil](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Math.html#ceil(double)).
::::
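A quick sketch through the ES|QL query endpoint showing the `double` behavior:

```console
POST /_query
{
  "query": "ROW d = 5.4 | EVAL c = CEIL(d)"
}
```

Here `c` comes back as `6.0`, still a `double`, per the note above.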

View file

@ -10,7 +10,7 @@
**Parameters**
`datePattern`
: The date format. Refer to the [`DateTimeFormatter` documentation](https://docs.oracle.com/en/java/javase/14/docs/api/java.base/java/time/format/DateTimeFormatter.md) for the syntax. If `null`, the function returns `null`.
: The date format. Refer to the [`DateTimeFormatter` documentation](https://docs.oracle.com/en/java/javase/14/docs/api/java.base/java/time/format/DateTimeFormatter.html) for the syntax. If `null`, the function returns `null`.
`dateString`
: Date expression as a string. If `null` or an empty string, the function returns `null`.

View file

@ -5,7 +5,7 @@
Round a number up to the nearest integer.
::::{note}
This is a noop for `long` (including unsigned) and `integer`. For `double` this picks the closest `double` value to the integer similar to [Math.ceil](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Math.md#ceil(double)).
This is a noop for `long` (including unsigned) and `integer`. For `double` this picks the closest `double` value to the integer similar to [Math.ceil](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Math.html#ceil(double)).
::::

View file

@ -5,7 +5,7 @@
Round a number down to the nearest integer.
::::{note}
This is a noop for `long` (including unsigned) and `integer`. For `double` this picks the closest `double` value to the integer similar to [Math.floor](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Math.md#floor(double)).
This is a noop for `long` (including unsigned) and `integer`. For `double` this picks the closest `double` value to the integer similar to [Math.floor](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Math.html#floor(double)).
::::
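The negative case is where intuition most often slips; a sketch:

```console
POST /_query
{
  "query": "ROW d = -5.4 | EVAL f = FLOOR(d)"
}
```

`f` is `-6.0`: flooring moves toward negative infinity, not toward zero.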

View file

@ -17,7 +17,7 @@
Round a number down to the nearest integer.
::::{note}
This is a noop for `long` (including unsigned) and `integer`. For `double` this picks the closest `double` value to the integer similar to [Math.floor](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Math.md#floor(double)).
This is a noop for `long` (including unsigned) and `integer`. For `double` this picks the closest `double` value to the integer similar to [Math.floor](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Math.html#floor(double)).
::::

View file

@ -3,7 +3,7 @@
**Parameters**
`datePattern`
: The date format. Refer to the [`DateTimeFormatter` documentation](https://docs.oracle.com/en/java/javase/14/docs/api/java.base/java/time/format/DateTimeFormatter.md) for the syntax. If `null`, the function returns `null`.
: The date format. Refer to the [`DateTimeFormatter` documentation](https://docs.oracle.com/en/java/javase/14/docs/api/java.base/java/time/format/DateTimeFormatter.html) for the syntax. If `null`, the function returns `null`.
`dateString`
: Date expression as a string. If `null` or an empty string, the function returns `null`.

View file

@ -286,7 +286,7 @@ ROW d = 1000.0
Round a number up to the nearest integer.
::::{note}
This is a noop for `long` (including unsigned) and `integer`. For `double` this picks the closest `double` value to the integer similar to [Math.ceil](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Math.md#ceil(double)).
This is a noop for `long` (including unsigned) and `integer`. For `double` this picks the closest `double` value to the integer similar to [Math.ceil](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Math.html#ceil(double)).
::::
@ -479,7 +479,7 @@ ROW d = 5.0
Round a number down to the nearest integer.
::::{note}
This is a noop for `long` (including unsigned) and `integer`. For `double` this picks the closest `double` value to the integer similar to [Math.floor](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Math.md#floor(double)).
This is a noop for `long` (including unsigned) and `integer`. For `double` this picks the closest `double` value to the integer similar to [Math.floor](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Math.html#floor(double)).
::::

View file

@ -195,7 +195,7 @@ GET /my_locations/_search
### Lat lon as WKT string [_lat_lon_as_wkt_string_2]
Format in [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.md).
Format in [Well-Known Text](https://docs.opengeospatial.org/is/12-063r5/12-063r5.html).
```console
GET /my_locations/_search

View file

@ -10,7 +10,7 @@ mapped_pages:
::::{admonition} Deprecated in 7.12.
:class: warning
Use [Geoshape](/reference/query-languages/query-dsl-geo-shape-query.md) instead where polygons are defined in GeoJSON or [Well-Known Text (WKT)](http://docs.opengeospatial.org/is/18-010r7/18-010r7.md).
Use [Geoshape](/reference/query-languages/query-dsl-geo-shape-query.md) instead where polygons are defined in GeoJSON or [Well-Known Text (WKT)](http://docs.opengeospatial.org/is/18-010r7/18-010r7.html).
::::
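A replacement sketch, assuming documents carry a `geo_shape` field named `location`; the inline shape below is written in WKT:

```console
GET /my_locations/_search
{
  "query": {
    "geo_shape": {
      "location": {
        "shape": "POLYGON ((-74.1 40.0, -71.0 40.0, -71.0 42.0, -74.1 42.0, -74.1 40.0))",
        "relation": "intersects"
      }
    }
  }
}
```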

View file

@ -36,7 +36,7 @@ Switching between different representations of datetimes is often necessary to a
Datetime parsing is a switch from a string datetime to a complex datetime, and datetime formatting is a switch from a complex datetime to a string datetime.
A [DateTimeFormatter](https://www.elastic.co/guide/en/elasticsearch/painless/current/painless-api-reference-shared-java-time-format.html#painless-api-reference-shared-DateTimeFormatter) is a complex type ([object](/reference/scripting-languages/painless/painless-types.md#reference-types)) that defines the allowed sequence of characters for a string datetime. Datetime parsing and formatting often require a DateTimeFormatter. For more information about how to use a DateTimeFormatter see the [Java documentation](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/time/format/DateTimeFormatter.md).
A [DateTimeFormatter](https://www.elastic.co/guide/en/elasticsearch/painless/current/painless-api-reference-shared-java-time-format.html#painless-api-reference-shared-DateTimeFormatter) is a complex type ([object](/reference/scripting-languages/painless/painless-types.md#reference-types)) that defines the allowed sequence of characters for a string datetime. Datetime parsing and formatting often require a DateTimeFormatter. For more information about how to use a DateTimeFormatter, see the [Java documentation](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/time/format/DateTimeFormatter.html).
### Datetime Parsing Examples [_datetime_parsing_examples]
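As a first, hedged sketch, the Painless execute API can demonstrate a parse (the pattern and input string are illustrative):

```console
POST /_scripts/painless/_execute
{
  "script": {
    "source": "ZonedDateTime zdt = ZonedDateTime.parse(params.input, DateTimeFormatter.ISO_OFFSET_DATE_TIME); return zdt.getYear();",
    "params": {
      "input": "2025-03-06T10:15:30+01:00"
    }
  }
}
```

The script parses the string datetime into a complex `ZonedDateTime` and returns its year, `2025`.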