Mirror of https://github.com/elastic/elasticsearch.git, synced 2025-06-29 01:44:36 -04:00

Merge pull request ESQL-1173 from elastic/main

🤖 ESQL: Merge upstream

Commit ae12d77f32: 199 changed files with 1950 additions and 1288 deletions
docs/changelog/96035.yaml (new file, 7 lines added)

@@ -0,0 +1,7 @@
+pr: 96035
+summary: Expand start and end time to nanoseconds during coordinator rewrite when
+  needed
+area: TSDB
+type: bug
+issues:
+ - 96030
docs/changelog/96265.yaml (new file, 6 lines added)

@@ -0,0 +1,6 @@
+pr: 96265
+summary: Reduce nesting of same bool queries
+area: Query Languages
+type: enhancement
+issues:
+ - 96236
docs/changelog/96293.yaml (new file, 6 lines added)

@@ -0,0 +1,6 @@
+pr: 96293
+summary: Report version conflict on concurrent updates
+area: Transform
+type: bug
+issues:
+ - 96311
docs/changelog/96317.yaml (new file, 5 lines added)

@@ -0,0 +1,5 @@
+pr: 96317
+summary: API rest compatibility for type parameter in `geo_bounding_box` query
+area: Geo
+type: bug
+issues: []
@@ -28,7 +28,7 @@ Each document in the `seat` data contains the following fields:
 The date and time of the play as a date object.
 
 ==== Prerequisites
-Start an {ref}/getting-started-install.html[{es} instance], and then access the
+Start an {ref}/configuring-stack-security.html[{es} instance], and then access the
 {kibana-ref}/console-kibana.html[Console] in {kib}.
 
 ==== Configure the `seat` sample data
@@ -47,9 +47,6 @@ Use `bin/elasticsearch-plugin install file:///path/to/your/plugin`
 to install your plugin for testing. The Java plugin is auto-loaded only if it's in the
 `plugins/` directory.
 
-You may also load your plugin within the test framework for integration tests.
-Check {ref}/integration-tests.html#changing-node-configuration[Changing Node Configuration] for more information.
-
 [discrete]
 [[plugin-authors-jsm]]
 ==== Java Security permissions
@@ -2,7 +2,7 @@
 === EC2 Discovery plugin
 
 The EC2 discovery plugin provides a list of seed addresses to the
-{ref}/modules-discovery-hosts-providers.html[discovery process] by querying the
+{ref}/discovery-hosts-providers.html[discovery process] by querying the
 https://github.com/aws/aws-sdk-java[AWS API] for a list of EC2 instances
 matching certain criteria determined by the <<discovery-ec2-usage,plugin
 settings>>.
@@ -2,7 +2,7 @@
 === Hadoop HDFS repository plugin
 
 The HDFS repository plugin adds support for using HDFS File System as a repository for
-{ref}/modules-snapshots.html[Snapshot/Restore].
+{ref}/snapshot-restore.html[Snapshot/Restore].
 
 :plugin_name: repository-hdfs
 include::install_remove.asciidoc[]
@@ -23,7 +23,7 @@ plugin folder and point `HADOOP_HOME` variable to it; this should minimize the a
 ==== Configuration properties
 
 Once installed, define the configuration for the `hdfs` repository through the
-{ref}/modules-snapshots.html[REST API]:
+{ref}/snapshot-restore.html[REST API]:
 
 [source,console]
 ----
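The console snippet that follows in the source file is not reproduced in this hunk. As an illustration only, a registration request for an `hdfs` repository looks roughly like the sketch below; the repository name, `uri`, and `path` values are placeholders, not taken from the diff.

[source,console]
----
# Register an HDFS snapshot repository (names and paths are placeholders)
PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository"
  }
}
----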
@@ -1,7 +1,7 @@
 [[repository]]
 == Snapshot/restore repository plugins
 
-Repository plugins extend the {ref}/modules-snapshots.html[Snapshot/Restore]
+Repository plugins extend the {ref}/snapshot-restore.html[Snapshot/Restore]
 functionality in Elasticsearch by adding repositories backed by the cloud or
 by distributed file systems:
 
@@ -107,7 +107,7 @@ or <<binary, `binary`>>.
 
 NOTE: By default, you cannot run a `terms` aggregation on a `text` field. Use a
 `keyword` <<multi-fields,sub-field>> instead. Alternatively, you can enable
-<<fielddata,`fielddata`>> on the `text` field to create buckets for the field's
+<<fielddata-mapping-param,`fielddata`>> on the `text` field to create buckets for the field's
 <<analysis,analyzed>> terms. Enabling `fielddata` can significantly increase
 memory usage.
 
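For context, the note above is about aggregating on a `keyword` sub-field instead of the analyzed `text` field. A minimal sketch of such a request follows; the index and field names are illustrative.

[source,console]
----
# Aggregate on the keyword sub-field instead of the analyzed text field
GET my-index/_search
{
  "size": 0,
  "aggs": {
    "top_tags": {
      "terms": { "field": "tags.keyword" }
    }
  }
}
----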
@@ -81,7 +81,7 @@ hard-linked files.
 
 `disk.avail`::
 Free disk space available to {es}. {es} retrieves this metric from the node's
-OS. <<disk-allocator,Disk-based shard allocation>> uses this metric to assign
+OS. <<disk-based-shard-allocation,Disk-based shard allocation>> uses this metric to assign
 shards to nodes based on available disk space.
 
 `disk.total`::
@@ -135,7 +135,7 @@ measurements.
 [[cat-recovery-api-ex-snapshot]]
 ===== Example with a snapshot recovery
 
-You can restore backups of an index using the <<modules-snapshots,snapshot and
+You can restore backups of an index using the <<snapshot-restore,snapshot and
 restore>> API. You can use the cat recovery API retrieve information about a
 snapshot recovery.
 
@@ -11,7 +11,7 @@ console. They are _not_ intended for use by applications. For application
 consumption, use the <<get-snapshot-repo-api,get snapshot repository API>>.
 ====
 
-Returns the <<snapshots-repositories,snapshot repositories>> for a cluster.
+Returns the <<snapshots-register-repository,snapshot repositories>> for a cluster.
 
 
 [[cat-repositories-api-request]]
@@ -11,7 +11,7 @@ console. They are _not_ intended for use by applications. For application
 consumption, use the <<get-snapshot-api,get snapshot API>>.
 ====
 
-Returns information about the <<modules-snapshots,snapshots>> stored in one or
+Returns information about the <<snapshot-restore,snapshots>> stored in one or
 more repositories. A snapshot is a backup of an index or running {es} cluster.
 
 
@@ -31,7 +31,7 @@ When the {es} keystore is password protected and not simply obfuscated, you must
 provide the password for the keystore when you reload the secure settings.
 Reloading the settings for the whole cluster assumes that all nodes' keystores
 are protected with the same password; this method is allowed only when
-<<tls-transport,inter-node communications are encrypted>>. Alternatively, you can
+<<encrypt-internode-communication,inter-node communications are encrypted>>. Alternatively, you can
 reload the secure settings on each node by locally accessing the API and passing
 the node-specific {es} keystore password.
 
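As a reminder of the API being documented in this hunk, reloading secure settings with a password-protected keystore takes a request of roughly this shape; the password value is a placeholder, and this is a sketch rather than the snippet from the file.

[source,console]
----
# Reload secure settings on all nodes, supplying the shared keystore password
POST _nodes/reload_secure_settings
{
  "secure_settings_password": "keystore-password"
}
----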
@@ -1294,7 +1294,7 @@ Number of selected nodes using the HTTP type.
 
 `discovery_types`::
 (object)
-Contains statistics about the <<modules-discovery-hosts-providers,discovery
+Contains statistics about the <<discovery-hosts-providers,discovery
 types>> used by selected nodes.
 +
 .Properties of `discovery_types`
@@ -1302,7 +1302,7 @@ types>> used by selected nodes.
 =====
 `<discovery_type>`::
 (integer)
-Number of selected nodes using the <<modules-discovery-hosts-providers,discovery
+Number of selected nodes using the <<discovery-hosts-providers,discovery
 type>> to find other nodes.
 =====
 
@@ -1,7 +1,7 @@
 [[elasticsearch-croneval]]
 == elasticsearch-croneval
 
-Validates and evaluates a <<cron-expressions,cron expression>>.
+Validates and evaluates a <<api-cron-expressions,cron expression>>.
 
 [discrete]
 === Synopsis
@@ -284,12 +284,11 @@ unsafely-bootstrapped cluster.
 Unsafe cluster bootstrapping is only possible if there is at least one
 surviving master-eligible node. If there are no remaining master-eligible nodes
 then the cluster metadata is completely lost. However, the individual data
-nodes also contain a copy of the index metadata corresponding with their
-shards. It is therefore sometimes possible to manually import these shards as
-<<dangling-indices,dangling indices>>. For example you can sometimes recover some
-indices after the loss of all master-eligible nodes in a cluster by creating a new
-cluster and then using the `elasticsearch-node detach-cluster` command to move any
-surviving nodes into this new cluster. Once the new cluster is fully formed,
+nodes also contain a copy of the index metadata corresponding with their shards. This sometimes allows a new cluster to import these shards as
+<<dangling-indices,dangling indices>>. You can sometimes
+recover some indices after the loss of all main-eligible nodes in a cluster
+by creating a new cluster and then using the `elasticsearch-node
+detach-cluster` command to move any surviving nodes into this new cluster. Once the new cluster is fully formed,
 use the <<dangling-indices-api,Dangling indices API>> to list, import or delete
 any dangling indices.
 
@@ -317,7 +316,7 @@ cluster formed as described above.
 below. Verify that the tool reported `Node was successfully detached from the
 cluster`.
 5. If necessary, configure each data node to
-<<modules-discovery-hosts-providers,discover the new cluster>>.
+<<discovery-hosts-providers,discover the new cluster>>.
 6. Start each data node and verify that it has joined the new cluster.
 7. Wait for all recoveries to have completed, and investigate the data in the
 cluster to discover if any was lost during this process. Use the
@@ -231,7 +231,7 @@ participate in the `_bulk` request at all.
 [[bulk-security]]
 ===== Security
 
-See <<url-access-control>>.
+See <<api-url-access-control>>.
 
 [[docs-bulk-api-path-params]]
 ==== {api-path-parms-title}
@@ -46,7 +46,7 @@ If you specify an index in the request URI, you only need to specify the documen
 [[mget-security]]
 ===== Security
 
-See <<url-access-control>>.
+See <<api-url-access-control>>.
 
 [[multi-get-partial-responses]]
 ===== Partial responses
@@ -73,7 +73,7 @@ See <<run-eql-search-across-clusters>>.
 (Optional, Boolean)
 +
 NOTE: This parameter's behavior differs from the `allow_no_indices` parameter
-used in other <<multi-index,multi-target APIs>>.
+used in other <<api-multi-index,multi-target APIs>>.
 +
 If `false`, the request returns an error if any wildcard pattern, alias, or
 `_all` value targets only missing or closed indices. This behavior applies even
@@ -69,7 +69,7 @@ cluster can report a `green` status, override the default by setting
 <<dynamic-index-settings,`index.number_of_replicas`>> to `0` on every index.
 
 If the node fails, you may need to restore an older copy of any lost indices
-from a <<modules-snapshots,snapshot>>.
+from a <<snapshot-restore,snapshot>>.
 
 Because they are not resilient to any failures, we do not recommend using
 one-node clusters in production.
@@ -281,7 +281,7 @@ cluster when handling such a failure.
 
 For resilience against whole-zone failures, it is important that there is a copy
 of each shard in more than one zone, which can be achieved by placing data
-nodes in multiple zones and configuring <<allocation-awareness,shard allocation
+nodes in multiple zones and configuring <<shard-allocation-awareness,shard allocation
 awareness>>. You should also ensure that client requests are sent to nodes in
 more than one zone.
 
@@ -334,7 +334,7 @@ tiebreaker need not be as powerful as the other two nodes since it has no other
 roles and will not perform any searches nor coordinate any client requests nor
 be elected as the master of the cluster.
 
-You should use <<allocation-awareness,shard allocation awareness>> to ensure
+You should use <<shard-allocation-awareness,shard allocation awareness>> to ensure
 that there is a copy of each shard in each zone. This means either zone remains
 fully available if the other zone fails.
 
@@ -359,7 +359,7 @@ mean that the cluster can still elect a master even if one of the zones fails.
 
 As always, your indices should have at least one replica in case a node fails,
 unless they are <<searchable-snapshots,searchable snapshot indices>>. You
-should also use <<allocation-awareness,shard allocation awareness>> to limit
+should also use <<shard-allocation-awareness,shard allocation awareness>> to limit
 the number of copies of each shard in each zone. For instance, if you have an
 index with one or two replicas configured then allocation awareness will ensure
 that the replicas of the shard are in a different zone from the primary. This
@@ -181,7 +181,7 @@ For high-cardinality `text` fields, fielddata can use a large amount of JVM
 memory. To avoid this, {es} disables fielddata on `text` fields by default. If
 you've enabled fielddata and triggered the <<fielddata-circuit-breaker,fielddata
 circuit breaker>>, consider disabling it and using a `keyword` field instead.
-See <<fielddata>>.
+See <<fielddata-mapping-param>>.
 
 **Clear the fieldata cache**
 
@@ -107,7 +107,7 @@ that it will increase the risk of failure since the failure of any one SSD
 destroys the index. However this is typically the right tradeoff to make:
 optimize single shards for maximum performance, and then add replicas across
 different nodes so there's redundancy for any node failures. You can also use
-<<modules-snapshots,snapshot and restore>> to backup the index for further
+<<snapshot-restore,snapshot and restore>> to backup the index for further
 insurance.
 
 Directly-attached (local) storage generally performs better than remote storage
@@ -93,7 +93,7 @@ Use {kib}'s **Dashboard** feature to visualize your data in a chart, table, map,
 and more. See {kib}'s {kibana-ref}/dashboard.html[Dashboard documentation].
 
 You can also search and aggregate your data using the <<search-search,search
-API>>. Use <<runtime-search-request,runtime fields>> and <<grok-basics,grok
+API>>. Use <<runtime-search-request,runtime fields>> and <<grok,grok
 patterns>> to dynamically extract data from log messages and other unstructured
 content at search time.
 
@@ -47,7 +47,7 @@ to use {ilm-init} for new data.
 [[ilm-existing-indices-reindex]]
 === Reindex into a managed index
 
-An alternative to <<ilm-with-existing-periodic-indices,applying policies to existing indices>> is to
+An alternative to <<ilm-existing-indices-apply,applying policies to existing indices>> is to
 reindex your data into an {ilm-init}-managed index.
 You might want to do this if creating periodic indices with very small amounts of data
 has led to excessive shard counts, or if continually indexing into the same index has led to large shards
@@ -12,7 +12,7 @@ These actions are intended to protect the cluster against data loss by
 ensuring that every shard is fully replicated as soon as possible.
 
 Even though we throttle concurrent recoveries both at the
-<<recovery,node level>> and at the <<shards-allocation,cluster level>>, this
+<<recovery,node level>> and at the <<cluster-shard-allocation-settings,cluster level>>, this
 ``shard-shuffle'' can still put a lot of extra load on the cluster which
 may not be necessary if the missing node is likely to return soon. Imagine
 this scenario:
@@ -36,7 +36,7 @@ or indices.
 
 `<alias>`::
 (Required, string) Alias to update. If the alias doesn't exist, the request
-creates it. Index alias names support <<date-math-index-names,date math>>.
+creates it. Index alias names support <<api-date-math-index-names,date math>>.
 
 `<target>`::
 (Required, string) Comma-separated list of data streams or indices to add.
@@ -79,14 +79,14 @@ The object body contains options for the alias. Supports an empty object.
 =====
 `alias`::
 (Required*, string) Alias for the action. Index alias names support
-<<date-math-index-names,date math>>. If `aliases` is not specified, the `add`
+<<api-date-math-index-names,date math>>. If `aliases` is not specified, the `add`
 and `remove` actions require this parameter. For the `remove` action, this
 parameter supports wildcards (`*`). The `remove_index` action doesn't support
 this parameter.
 
 `aliases`::
 (Required*, array of strings) Aliases for the action. Index alias names support
-<<date-math-index-names,date math>>. If `alias` is not specified, the `add` and
+<<api-date-math-index-names,date math>>. If `alias` is not specified, the `add` and
 `remove` actions require this parameter. For the `remove` action, this parameter
 supports wildcards (`*`). The `remove_index` action doesn't support this
 parameter.
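To illustrate the `alias` parameter of an `add` action described in the hunk above, a minimal `_aliases` request might look like the sketch below; the index and alias names are invented for the example.

[source,console]
----
# Add one alias to one index with the aliases API
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "my-index-000001",
        "alias": "my-alias"
      }
    }
  ]
}
----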
@@ -122,7 +122,7 @@ Only the `add` action supports this parameter.
 
 // tag::alias-options[]
 `is_hidden`::
-(Optional, Boolean) If `true`, the alias is <<hidden,hidden>>. Defaults to
+(Optional, Boolean) If `true`, the alias is <<multi-hidden,hidden>>. Defaults to
 `false`. All data streams or indices for the alias must have the same
 `is_hidden` value.
 // end::alias-options[]
@@ -78,7 +78,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
 =======
 `<alias>`::
 (Required, object) The key is the alias name. Index alias names support
-<<date-math-index-names,date math>>.
+<<api-date-math-index-names,date math>>.
 +
 The object body contains options for the alias. Supports an empty object.
 +
@@ -94,7 +94,7 @@ alias can access.
 If specified, this overwrites the `routing` value for indexing operations.
 
 `is_hidden`::
-(Optional, Boolean) If `true`, the alias is <<hidden,hidden>>. Defaults to
+(Optional, Boolean) If `true`, the alias is <<multi-hidden,hidden>>. Defaults to
 `false`. All indices for the alias must have the same `is_hidden` value.
 
 `is_write_index`::
@@ -204,7 +204,7 @@ PUT /test
 }
 --------------------------------------------------
 
-Index alias names also support <<date-math-index-names,date math>>.
+Index alias names also support <<api-date-math-index-names,date math>>.
 
 [source,console]
 ----
@@ -205,7 +205,7 @@ policies. To retrieve the lifecycle policy for individual backing indices,
 use the <<indices-get-settings,get index settings API>>.
 
 `hidden`::
-(Boolean) If `true`, the data stream is <<hidden,hidden>>.
+(Boolean) If `true`, the data stream is <<multi-hidden,hidden>>.
 
 `system`::
 (Boolean)
@@ -108,7 +108,7 @@ See <<create-index-template,create an index template>>.
 <<mapping-routing-field,custom routing>>. Defaults to `false`.
 
 `hidden`::
-(Optional, Boolean) If `true`, the data stream is <<hidden,hidden>>. Defaults to
+(Optional, Boolean) If `true`, the data stream is <<multi-hidden,hidden>>. Defaults to
 `false`.
 
 `index_mode`::
@@ -75,7 +75,7 @@ index's name.
 .Use date math with index alias rollovers
 ****
 If you use an index alias for time series data, you can use
-<<date-math-index-names,date math>> in the index name to track the rollover
+<<api-date-math-index-names,date math>> in the index name to track the rollover
 date. For example, you can create an alias that points to an index named
 `<my-index-{now/d}-000001>`. If you create the index on May 6, 2099, the index's
 name is `my-index-2099.05.06-000001`. If you roll over the alias on May 7, 2099,
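A sketch of the pattern described in this sidebar, creating the first date-math index with a write alias; the index and alias names follow the example in the text, and the date-math name must be URI-encoded in the request path.

[source,console]
----
# PUT <my-index-{now/d}-000001> with the index name URI-encoded
PUT /%3Cmy-index-%7Bnow%2Fd%7D-000001%3E
{
  "aliases": {
    "my-alias": {
      "is_write_index": true
    }
  }
}
----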
@@ -98,7 +98,7 @@ Name of the data stream or index alias to roll over.
 
 `<target-index>`::
 (Optional, string)
-Name of the index to create. Supports <<date-math-index-names,date math>>. Data
+Name of the index to create. Supports <<api-date-math-index-names,date math>>. Data
 streams do not support this parameter.
 +
 If the name of the alias's current write index does not end with `-` and a
@@ -112,7 +112,7 @@ access.
 overwrites the `routing` value for indexing operations.
 
 `is_hidden`::
-(Boolean) If `true`, the alias is <<hidden,hidden>>.
+(Boolean) If `true`, the alias is <<multi-hidden,hidden>>.
 
 `is_write_index`::
 (Boolean) If `true`, the index is the <<write-index,write index>> for the alias.
@@ -861,7 +861,7 @@ PUT _ingest/pipeline/my-pipeline
 }
 ----
 
-You can also specify a <<modules-scripting-stored-scripts,stored script>> as the
+You can also specify a <<script-stored-scripts,stored script>> as the
 `if` condition.
 
 [source,console]
@@ -34,7 +34,7 @@ pipeline.
 
 .. Click **Add a processor** and select the **Grok** processor type.
 .. Set **Field** to `message` and **Patterns** to the following
-<<grok-basics,grok pattern>>:
+<<grok,grok pattern>>:
 +
 [source,grok]
 ----
@@ -5,7 +5,7 @@
 ++++
 
 The purpose of this processor is to point documents to the right time based index based
-on a date or timestamp field in a document by using the <<date-math-index-names, date math index name support>>.
+on a date or timestamp field in a document by using the <<api-date-math-index-names, date math index name support>>.
 
 The processor sets the `_index` metadata field with a date math index name expression based on the provided index name
 prefix, a date or timestamp field in the documents being processed and the provided date rounding.
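For orientation, a minimal pipeline using the `date_index_name` processor described above could look like the following sketch; the pipeline name, timestamp field, index prefix, and monthly rounding are illustrative values rather than content from the diff.

[source,console]
----
# Route documents to a monthly index derived from their timestamp field
PUT _ingest/pipeline/monthly-index
{
  "processors": [
    {
      "date_index_name": {
        "field": "@timestamp",
        "index_name_prefix": "my-index-",
        "date_rounding": "M"
      }
    }
  ]
}
----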
@@ -126,7 +126,7 @@ and the result:
 // TESTRESPONSE[s/2016-11-08T19:43:03.850\+0000/$body.docs.0.doc._ingest.timestamp/]
 
 The above example shows that `_index` was set to `<my-index-{2016-04-25||/M{yyyy-MM-dd|UTC}}>`. Elasticsearch
-understands this to mean `2016-04-01` as is explained in the <<date-math-index-names, date math index name documentation>>
+understands this to mean `2016-04-01` as is explained in the <<api-date-math-index-names, date math index name documentation>>
 
 [[date-index-name-options]]
 .Date index name options
@@ -14,7 +14,7 @@ The following mapping parameters are common to some or all field data types:
 * <<dynamic,`dynamic`>>
 * <<eager-global-ordinals,`eager_global_ordinals`>>
 * <<enabled,`enabled`>>
-* <<fielddata,`fielddata`>>
+* <<fielddata-mapping-param,`fielddata`>>
 * <<multi-fields,`fields`>>
 * <<mapping-date-format,`format`>>
 * <<ignore-above,`ignore_above`>>
@@ -29,7 +29,7 @@ Global ordinals are used if a search contains any of the following components:
 * Certain bucket aggregations on `keyword`, `ip`, and `flattened` fields. This
 includes `terms` aggregations as mentioned above, as well as `composite`,
 `diversified_sampler`, and `significant_terms`.
-* Bucket aggregations on `text` fields that require <<fielddata, `fielddata`>>
+* Bucket aggregations on `text` fields that require <<fielddata-mapping-param, `fielddata`>>
 to be enabled.
 * Operations on parent and child documents from a `join` field, including
 `has_child` queries and `parent` aggregations.
@@ -77,7 +77,7 @@ The following parameters are accepted by `text` fields:
 (default). Enabling this is a good idea on fields that are frequently used for
 (significant) terms aggregations.
 
-<<fielddata,`fielddata`>>::
+<<fielddata-mapping-param,`fielddata`>>::
 
 Can the field use in-memory fielddata for sorting, aggregations,
 or scripting? Accepts `true` or `false` (default).
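Enabling the `fielddata` parameter discussed above on an existing `text` field is a mapping update along these lines; the index and field names are placeholders.

[source,console]
----
# Enable in-memory fielddata on a text field (can use significant heap)
PUT my-index/_mapping
{
  "properties": {
    "my_text_field": {
      "type": "text",
      "fielddata": true
    }
  }
}
----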
@@ -403,7 +403,7 @@ If you do not want to enable SSL and are currently using other
 * Discontinue use of other `xpack.security.http.ssl` settings
 
 If you want to enable SSL, follow the instructions in
-{ref}/configuring-tls.html#tls-http[Encrypting HTTP client communications]. As part
+{ref}/security-basic-setup-https.html#encrypt-http-communication[Encrypting HTTP client communications]. As part
 of this configuration, explicitly specify `xpack.security.http.ssl.enabled`
 as `true`.
 
@@ -891,7 +891,7 @@ For example:
 "ignore_throttled": true
 }
 ```
-For more information about these options, see <<multi-index>>.
+For more information about these options, see <<api-multi-index>>.
 --
 end::indices-options[]
 
@@ -8,7 +8,7 @@ each time it changes.
 The following processes and settings are part of discovery and cluster
 formation:
 
-<<modules-discovery-hosts-providers>>::
+<<discovery-hosts-providers>>::
 
 Discovery is the process where nodes find each other when the master is
 unknown, such as when a node has just started up or when the previous
@@ -34,7 +34,7 @@ formation:
 <<dev-vs-prod-mode,production mode>> requires bootstrapping to be
 <<modules-discovery-bootstrap-cluster,explicitly configured>>.
 
-<<modules-discovery-adding-removing-nodes,Adding and removing master-eligible nodes>>::
+<<add-elasticsearch-nodes,Adding and removing master-eligible nodes>>::
 
 It is recommended to have a small and fixed number of master-eligible nodes
 in a cluster, and to scale the cluster up and down by adding and removing
@@ -4,11 +4,9 @@
 Starting an Elasticsearch cluster for the very first time requires the initial
 set of <<master-node,master-eligible nodes>> to be explicitly defined on one or
 more of the master-eligible nodes in the cluster. This is known as _cluster
-bootstrapping_. This is only required the first time a cluster starts up: nodes
-that have already joined a cluster store this information in their data folder
-for use in a <<restart-upgrade,full cluster restart>>, and freshly-started nodes
-that are joining a running cluster obtain this information from the cluster's
-elected master.
+bootstrapping_. This is only required the first time a cluster starts up.
+Freshly-started nodes that are joining a running cluster obtain this
+information from the cluster's elected master.
 
 The initial set of master-eligible nodes is defined in the
 <<initial_master_nodes,`cluster.initial_master_nodes` setting>>. This should be
@@ -187,7 +187,7 @@ considered to have failed and is removed from the cluster. See
 `cluster.max_voting_config_exclusions`::
 (<<dynamic-cluster-setting,Dynamic>>)
 Sets a limit on the number of voting configuration exclusions at any one time.
-The default value is `10`. See <<modules-discovery-adding-removing-nodes>>.
+The default value is `10`. See <<add-elasticsearch-nodes>>.
 
 `cluster.publish.info_timeout`::
 (<<static-cluster-setting,Static>>)
@@ -15,7 +15,7 @@ those of the other piece.
 
 Elasticsearch allows you to add and remove master-eligible nodes to a running
 cluster. In many cases you can do this simply by starting or stopping the nodes
-as required. See <<modules-discovery-adding-removing-nodes>>.
+as required. See <<add-elasticsearch-nodes>>.
 
 As nodes are added or removed Elasticsearch maintains an optimal level of fault
 tolerance by updating the cluster's <<modules-discovery-voting,voting
@@ -22,7 +22,7 @@ After a node joins or leaves the cluster, {es} reacts by automatically making
 corresponding changes to the voting configuration in order to ensure that the
 cluster is as resilient as possible. It is important to wait for this adjustment
 to complete before you remove more nodes from the cluster. For more information,
-see <<modules-discovery-adding-removing-nodes>>.
+see <<add-elasticsearch-nodes>>.
 
 The current voting configuration is stored in the cluster state so you can
 inspect its current contents as follows:
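The request that follows "as follows:" is not included in this hunk. A sketch of the kind of cluster state call used to inspect the voting configuration, assuming the standard filter path, is:

[source,console]
----
# Show the last committed voting configuration from the cluster state
GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config
----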
@@ -411,7 +411,7 @@ Similarly, each master-eligible node maintains the following data on disk:
 
 Each node checks the contents of its data path at startup. If it discovers
 unexpected data then it will refuse to start. This is to avoid importing
-unwanted <<modules-gateway-dangling-indices,dangling indices>> which can lead
+unwanted <<dangling-indices,dangling indices>> which can lead
 to a red cluster health. To be more precise, nodes without the `data` role will
 refuse to start if they find any shard data on disk at startup, and nodes
 without both the `master` and `data` roles will refuse to start if they have any
@@ -424,13 +424,13 @@ must perform some extra steps to prepare a node for repurposing when starting
 the node without the `data` or `master` roles.
 
 * If you want to repurpose a data node by removing the `data` role then you
-should first use an <<allocation-filtering,allocation filter>> to safely
+should first use an <<cluster-shard-allocation-filtering,allocation filter>> to safely
 migrate all the shard data onto other nodes in the cluster.
 
 * If you want to repurpose a node to have neither the `data` nor `master` roles
 then it is simplest to start a brand-new node with an empty data path and the
 desired roles. You may find it safest to use an
-<<allocation-filtering,allocation filter>> to migrate the shard data elsewhere
+<<cluster-shard-allocation-filtering,allocation filter>> to migrate the shard data elsewhere
 in the cluster first.
 
 If it is not possible to follow these extra steps then you may be able to use
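An allocation filter of the kind referenced above can be set with a cluster settings update; a minimal sketch, where the node name is a placeholder:

[source,console]
----
# Move shards off the node before repurposing it
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.exclude._name": "node-to-repurpose"
  }
}
----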
@@ -186,7 +186,7 @@ The `transport.compress` setting always configures local cluster request
 compression and is the fallback setting for remote cluster request compression.
 If you want to configure remote request compression differently than local
 request compression, you can set it on a per-remote cluster basis using the
-<<remote-cluster-settings,`cluster.remote.${cluster_alias}.transport.compress` setting>>.
+<<remote-clusters-settings,`cluster.remote.${cluster_alias}.transport.compress` setting>>.
 
 
 [[response-compression]]
@@ -222,4 +222,4 @@ document's field value.
 Unlike the <<query-dsl-function-score-query,`function_score`>> query or other
 ways to change <<relevance-scores,relevance scores>>, the
 `distance_feature` query efficiently skips non-competitive hits when the
-<<search-uri-request,`track_total_hits`>> parameter is **not** `true`.
+<<search-search,`track_total_hits`>> parameter is **not** `true`.
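A compact sketch of a `distance_feature` query that boosts documents with recent dates; the index name, field name, and values are illustrative.

[source,console]
----
# Boost hits whose production_date is close to now
GET items/_search
{
  "query": {
    "distance_feature": {
      "field": "production_date",
      "pivot": "7d",
      "origin": "now"
    }
  }
}
----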
@@ -9,7 +9,7 @@ By default, Elasticsearch sorts matching search results by **relevance
 score**, which measures how well each document matches a query.
 
 The relevance score is a positive floating point number, returned in the
-`_score` metadata field of the <<search-request-body,search>> API. The higher the
+`_score` metadata field of the <<search-search,search>> API. The higher the
 `_score`, the more relevant the document. While each query type can calculate
 relevance scores differently, score calculation also depends on whether the
 query clause is run in a **query** or **filter** context.
@@ -569,7 +569,7 @@ instead. A regular flush has the same effect as a synced flush in 7.6 and later.
 
 [role="exclude",id="_repositories"]
 === Snapshot repositories
-See <<snapshots-repositories>>.
+See <<snapshots-register-repository>>.
 
 [role="exclude",id="_snapshot"]
 === Snapshot
@@ -274,7 +274,7 @@ Type of data stream that wildcard patterns can match. Supports
 comma-separated values, such as `open,hidden`. Valid values are:
 
 `all`, `hidden`::
-Match any data stream, including <<hidden,hidden>> ones.
+Match any data stream, including <<multi-hidden,hidden>> ones.
 
 `open`, `closed`::
 Matches any non-hidden data stream. Data streams cannot be closed.
@@ -295,7 +295,7 @@ streams. Supports comma-separated values, such as `open,hidden`. Valid values
 are:
 
 `all`::
-Match any data stream or index, including <<hidden,hidden>> ones.
+Match any data stream or index, including <<multi-hidden,hidden>> ones.
 
 `open`::
 Match open, non-hidden indices. Also matches any non-hidden data stream.
@@ -510,7 +510,7 @@ Number of documents and deleted docs, which have not yet merged out.
 <<indices-refresh,Index refreshes>> can affect this statistic.
 
 `fielddata`::
-<<fielddata,Fielddata>> statistics.
+<<fielddata-mapping-param,Fielddata>> statistics.
 
 `flush`::
 <<indices-flush,Flush>> statistics.
@@ -554,9 +554,6 @@ Size of the index in <<byte-units, byte units>>.
 
 `translog`::
 <<index-modules-translog,Translog>> statistics.
-
-`warmer`::
-<<indices-warmers,Warmer>> statistics.
 --
 end::index-metric[]
 
@@ -133,7 +133,7 @@ existence of the field in mappings in an `expression` script.
 ===================================================
 
 The `doc['field']` syntax can also be used for <<text,analyzed `text` fields>>
-if <<fielddata,`fielddata`>> is enabled, but *BEWARE*: enabling fielddata on a
+if <<fielddata-mapping-param,`fielddata`>> is enabled, but *BEWARE*: enabling fielddata on a
 `text` field requires loading all of the terms into the JVM heap, which can be
 very expensive both in terms of memory and CPU. It seldom makes sense to
 access `text` fields from scripts.
@@ -334,7 +334,7 @@ all search requests.
 [[msearch-security]]
 ==== Security
 
-See <<url-access-control>>
+See <<api-url-access-control>>
 
 
 [[multi-search-partial-responses]]
@@ -57,7 +57,7 @@ POST /_search <1>
 // TEST[catch:unavailable]
 
 <1> A search request with the `pit` parameter must not specify `index`, `routing`,
-and {ref}/search-request-body.html#request-body-search-preference[`preference`]
+or <<search-preference,`preference`>>
 as these parameters are copied from the point in time.
 <2> Just like regular searches, you can <<paginate-search-results,use `from` and
 `size` to page through search results>>, up to the first 10,000 hits. If you
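For reference, a follow-up search against an open point in time has the general shape sketched below; the `id` value is a placeholder for the ID returned by the open-point-in-time request and is not part of the diff.

[source,console]
----
# Page through results of a point in time without index, routing, or preference
POST /_search
{
  "size": 100,
  "query": { "match_all": {} },
  "pit": {
    "id": "<pit-id-returned-by-open-pit>",
    "keep_alive": "1m"
  }
}
----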
@@ -50,7 +50,7 @@ https://github.com/mapbox/vector-tile-spec[Mapbox vector tile specification].
 
 * If the {es} {security-features} are enabled, you must have the `read`
 <<privileges-list-indices,index privilege>> for the target data stream, index,
-or alias. For cross-cluster search, see <<cross-cluster-configuring>>.
+or alias. For cross-cluster search, see <<remote-clusters-security>>.
 
 [[search-vector-tile-api-path-params]]
 ==== {api-path-parms-title}
@@ -74,7 +74,7 @@ Inner hits also supports the following per document features:
 * <<highlighting,Highlighting>>
 * <<request-body-search-explain,Explain>>
 * <<search-fields-param,Search fields>>
-* <<request-body-search-source-filtering,Source filtering>>
+* <<source-filtering,Source filtering>>
 * <<script-fields,Script fields>>
 * <<docvalue-fields,Doc value fields>>
 * <<request-body-search-version,Include versions>>
@@ -588,7 +588,7 @@ for loading fields:
 parameter to get values for selected fields. This can be a good
 choice when returning a fairly small number of fields that support doc values,
 such as keywords and dates.
-* Use the <<request-body-search-stored-fields, `stored_fields`>> parameter to
+* Use the <<stored-fields, `stored_fields`>> parameter to
 get the values for specific stored fields (fields that use the
 <<mapping-store,`store`>> mapping option).
 
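The two retrieval options listed in this hunk can be combined in a single search request; a brief sketch with invented index and field names:

[source,console]
----
# Retrieve values from doc values and from explicitly stored fields
GET my-index/_search
{
  "query": { "match_all": {} },
  "docvalue_fields": [ "created_date", "user.keyword" ],
  "stored_fields": [ "title" ]
}
----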
@@ -158,7 +158,7 @@ the request hits. However, hitting a large number of shards can significantly
 increase CPU and memory usage.
 
 TIP: For tips on preventing indices with large numbers of shards, see
-<<avoid-oversharding>>.
+<<size-your-shards>>.
 
 You can use the `max_concurrent_shard_requests` query parameter to control
 maximum number of concurrent shards a search request can hit per node. This
@@ -38,7 +38,7 @@ must have the `read` index privilege for the alias's data streams or indices.
 
 Allows you to execute a search query and get back search hits that match the
 query. You can provide search queries using the <<search-api-query-params-q,`q`
-query string parameter>> or <<search-request-body,request body>>.
+query string parameter>> or <<search-search,request body>>.
 
 [[search-search-api-path-params]]
 ==== {api-path-parms-title}
@@ -2342,7 +2342,7 @@ Contents of a JSON Web Key Set (JWKS), including the secret key that the JWT
 realm uses to verify token signatures. This format supports multiple keys and
 optional attributes, and is preferred over the `hmac_key` setting. Cannot be
 used in conjunction with the `hmac_key` setting. Refer to
-<<jwt-realm-configuration,Configure {es} to use a JWT realm>>.
+<<jwt-auth-realm,Configure {es} to use a JWT realm>>.
 // end::jwt-hmac-jwkset-tag[]
 
 // tag::jwt-hmac-key-tag[]
@ -2354,7 +2354,7 @@ without attributes, and cannot be used with the `hmac_jwkset` setting. This
|
||||||
format is compatible with OIDC. The HMAC key must be a UNICODE string, where
|
format is compatible with OIDC. The HMAC key must be a UNICODE string, where
|
||||||
the key bytes are the UTF-8 encoding of the UNICODE string.
|
the key bytes are the UTF-8 encoding of the UNICODE string.
|
||||||
The `hmac_jwkset` setting is preferred. Refer to
|
The `hmac_jwkset` setting is preferred. Refer to
|
||||||
<<jwt-realm-configuration,Configure {es} to use a JWT realm>>.
|
<<jwt-auth-realm,Configure {es} to use a JWT realm>>.
|
||||||
|
|
||||||
// end::jwt-hmac-key-tag[]
|
// end::jwt-hmac-key-tag[]
|
||||||
|
|
||||||
|
|
|
@ -19,7 +19,7 @@ When you want to form a cluster with nodes on other hosts, use the
|
||||||
<<static-cluster-setting, static>> `discovery.seed_hosts` setting. This setting
|
<<static-cluster-setting, static>> `discovery.seed_hosts` setting. This setting
|
||||||
provides a list of other nodes in the cluster
|
provides a list of other nodes in the cluster
|
||||||
that are master-eligible and likely to be live and contactable to seed
|
that are master-eligible and likely to be live and contactable to seed
|
||||||
the <<modules-discovery-hosts-providers,discovery process>>. This setting
|
the <<discovery-hosts-providers,discovery process>>. This setting
|
||||||
accepts a YAML sequence or array of the addresses of all the master-eligible
|
accepts a YAML sequence or array of the addresses of all the master-eligible
|
||||||
nodes in the cluster. Each address can be either an IP address or a hostname
|
nodes in the cluster. Each address can be either an IP address or a hostname
|
||||||
that resolves to one or more IP addresses via DNS.
|
that resolves to one or more IP addresses via DNS.
|
||||||
|
|
|
@ -143,7 +143,7 @@ documentation].
|
||||||
Each Java package in the {es-repo}[{es} source code] has a related logger. For
|
Each Java package in the {es-repo}[{es} source code] has a related logger. For
|
||||||
example, the `org.elasticsearch.discovery` package has
|
example, the `org.elasticsearch.discovery` package has
|
||||||
`logger.org.elasticsearch.discovery` for logs related to the
|
`logger.org.elasticsearch.discovery` for logs related to the
|
||||||
<<modules-discovery-hosts-providers,discovery>> process.
|
<<discovery-hosts-providers,discovery>> process.
|
||||||
|
|
||||||
To get more or less verbose logs, use the <<cluster-update-settings,cluster
|
To get more or less verbose logs, use the <<cluster-update-settings,cluster
|
||||||
update settings API>> to change the related logger's log level. Each logger
|
update settings API>> to change the related logger's log level. Each logger
|
||||||
|
|
|
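For example, the discovery logger mentioned above could be made more verbose with a request along these lines (`DEBUG` is just one possible level):

[source,console]
----
PUT /_cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.discovery": "DEBUG"
  }
}
----

Setting the value back to `null` restores the logger's default level.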
@ -1,8 +1,8 @@
|
||||||
[[restart-cluster]]
|
[[restart-cluster]]
|
||||||
== Full-cluster restart and rolling restart
|
== Full-cluster restart and rolling restart
|
||||||
|
|
||||||
There may be {ref}/configuring-tls.html#tls-transport[situations where you want
|
There may be <<security-basic-setup,situations where you want
|
||||||
to perform a full-cluster restart] or a rolling restart. In the case of
|
to perform a full-cluster restart>> or a rolling restart. In the case of
|
||||||
<<restart-cluster-full,full-cluster restart>>, you shut down and restart all the
|
<<restart-cluster-full,full-cluster restart>>, you shut down and restart all the
|
||||||
nodes in the cluster while in the case of
|
nodes in the cluster while in the case of
|
||||||
<<restart-cluster-rolling,rolling restart>>, you shut down only one node at a
|
<<restart-cluster-rolling,rolling restart>>, you shut down only one node at a
|
||||||
|
|
|
@ -41,7 +41,7 @@ include::install/systemd.asciidoc[]
|
||||||
|
|
||||||
If you installed a Docker image, you can start {es} from the command line. There
|
If you installed a Docker image, you can start {es} from the command line. There
|
||||||
are different methods depending on whether you're using development mode or
|
are different methods depending on whether you're using development mode or
|
||||||
production mode. See <<docker-cli-run>>.
|
production mode. See <<docker-cli-run-dev-mode>>.
|
||||||
|
|
||||||
[discrete]
|
[discrete]
|
||||||
[[start-rpm]]
|
[[start-rpm]]
|
||||||
|
|
|
@ -62,7 +62,7 @@ include::{es-repo-dir}/snapshot-restore/apis/create-snapshot-api.asciidoc[tag=sn
|
||||||
`name`::
|
`name`::
|
||||||
(Required, string)
|
(Required, string)
|
||||||
Name automatically assigned to each snapshot created by the policy.
|
Name automatically assigned to each snapshot created by the policy.
|
||||||
<<date-math-index-names,Date math>> is supported.
|
<<api-date-math-index-names,Date math>> is supported.
|
||||||
To prevent conflicting snapshot names, a UUID is automatically appended to each
|
To prevent conflicting snapshot names, a UUID is automatically appended to each
|
||||||
snapshot name.
|
snapshot name.
|
||||||
|
|
||||||
|
@ -70,7 +70,7 @@ snapshot name.
|
||||||
(Required, string)
|
(Required, string)
|
||||||
Repository used to store snapshots created by this policy. This repository must
|
Repository used to store snapshots created by this policy. This repository must
|
||||||
exist prior to the policy's creation. You can create a repository using the
|
exist prior to the policy's creation. You can create a repository using the
|
||||||
<<modules-snapshots,snapshot repository API>>.
|
<<snapshot-restore,snapshot repository API>>.
|
||||||
|
|
||||||
[[slm-api-put-retention]]
|
[[slm-api-put-retention]]
|
||||||
`retention`::
|
`retention`::
|
||||||
|
@ -100,7 +100,7 @@ Minimum number of snapshots to retain, even if the snapshots have expired.
|
||||||
====
|
====
|
||||||
|
|
||||||
`schedule`::
|
`schedule`::
|
||||||
(Required, <<cron-expressions,Cron syntax>>)
|
(Required, <<api-cron-expressions,Cron syntax>>)
|
||||||
Periodic or absolute schedule at which the policy creates snapshots. {slm-init}
|
Periodic or absolute schedule at which the policy creates snapshots. {slm-init}
|
||||||
applies `schedule` changes immediately.
|
applies `schedule` changes immediately.
|
||||||
|
|
||||||
|
|
|
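Putting the `name`, `repository`, and `schedule` parameters described above together, a policy request might look like the following sketch. The policy id, repository name, indices pattern, and retention period are placeholders, and the repository is assumed to already exist:

[source,console]
----
PUT /_slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?", <1>
  "name": "<nightly-snap-{now/d}>", <2>
  "repository": "my_repository", <3>
  "config": {
    "indices": "*"
  },
  "retention": {
    "expire_after": "30d"
  }
}
----
<1> Cron expression: take a snapshot every day at 1:30 a.m.
<2> Date math in the snapshot name; a UUID is still appended to avoid conflicts.
<3> Must reference an existing snapshot repository.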
@ -55,4 +55,4 @@ fails and returns an error. Defaults to `30s`.
|
||||||
`indices`::
|
`indices`::
|
||||||
(Required, string)
|
(Required, string)
|
||||||
A comma-separated list of indices to include in the snapshot.
|
A comma-separated list of indices to include in the snapshot.
|
||||||
<<multi-index,multi-target syntax>> is supported.
|
<<api-multi-index,multi-target syntax>> is supported.
|
|
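A brief sketch of the `indices` parameter using multi-target syntax; the repository, snapshot, and index names are placeholders:

[source,console]
----
PUT /_snapshot/my_repository/my_snapshot?wait_for_completion=true
{
  "indices": "my-index-*,logs-2023*" <1>
}
----
<1> Comma-separated names and wildcard patterns are both accepted.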
@ -78,7 +78,7 @@ match data streams and indices. Supports comma-separated values, such as
|
||||||
`open,hidden`. Defaults to `all`. Valid values are:
|
`open,hidden`. Defaults to `all`. Valid values are:
|
||||||
|
|
||||||
`all`:::
|
`all`:::
|
||||||
Match any data stream or index, including <<hidden,hidden>> ones.
|
Match any data stream or index, including <<multi-hidden,hidden>> ones.
|
||||||
|
|
||||||
`open`:::
|
`open`:::
|
||||||
Match open indices and data streams.
|
Match open indices and data streams.
|
||||||
|
|
|
@ -1,8 +1,7 @@
|
||||||
[[repository-azure]]
|
[[repository-azure]]
|
||||||
=== Azure repository
|
=== Azure repository
|
||||||
|
|
||||||
You can use https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction[Azure Blob storage] as a repository for
|
You can use https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction[Azure Blob storage] as a repository for <<snapshot-restore,Snapshot/Restore>>.
|
||||||
{ref}/modules-snapshots.html[Snapshot/Restore].
|
|
||||||
|
|
||||||
[[repository-azure-usage]]
|
[[repository-azure-usage]]
|
||||||
==== Setup
|
==== Setup
|
||||||
|
|
|
@ -2,7 +2,7 @@
|
||||||
=== Google Cloud Storage repository
|
=== Google Cloud Storage repository
|
||||||
|
|
||||||
You can use the https://cloud.google.com/storage/[Google Cloud Storage]
|
You can use the https://cloud.google.com/storage/[Google Cloud Storage]
|
||||||
service as a repository for {ref}/modules-snapshots.html[Snapshot/Restore].
|
service as a repository for {ref}/snapshot-restore.html[Snapshot/Restore].
|
||||||
|
|
||||||
[[repository-gcs-usage]]
|
[[repository-gcs-usage]]
|
||||||
==== Getting started
|
==== Getting started
|
||||||
|
|
|
@ -1,7 +1,7 @@
|
||||||
[[repository-s3]]
|
[[repository-s3]]
|
||||||
=== S3 repository
|
=== S3 repository
|
||||||
|
|
||||||
You can use AWS S3 as a repository for {ref}/modules-snapshots.html[Snapshot/Restore].
|
You can use AWS S3 as a repository for {ref}/snapshot-restore.html[Snapshot/Restore].
|
||||||
|
|
||||||
*If you are looking for a hosted solution of Elasticsearch on AWS, please visit
|
*If you are looking for a hosted solution of Elasticsearch on AWS, please visit
|
||||||
https://www.elastic.co/cloud/.*
|
https://www.elastic.co/cloud/.*
|
||||||
|
|
|
@ -276,7 +276,7 @@ before you start.
|
||||||
|
|
||||||
. If you <<back-up-config-files,backed up the cluster's configuration
|
. If you <<back-up-config-files,backed up the cluster's configuration
|
||||||
files>>, you can restore them to each node. This step is optional and requires a
|
files>>, you can restore them to each node. This step is optional and requires a
|
||||||
<<restart-upgrade,full cluster restart>>.
|
<<restart-cluster, full cluster restart>>.
|
||||||
+
|
+
|
||||||
After you shut down a node, copy the backed-up configuration files over to the
|
After you shut down a node, copy the backed-up configuration files over to the
|
||||||
node's `$ES_PATH_CONF` directory. Before restarting the node, ensure
|
node's `$ES_PATH_CONF` directory. Before restarting the node, ensure
|
||||||
|
|
|
@ -53,7 +53,7 @@ Which returns:
|
||||||
Which is the request that SQL will run to provide the results.
|
Which is the request that SQL will run to provide the results.
|
||||||
In this case, SQL will use the <<scroll-search-results,scroll>>
|
In this case, SQL will use the <<scroll-search-results,scroll>>
|
||||||
API. If the result contained an aggregation then SQL would use
|
API. If the result contained an aggregation then SQL would use
|
||||||
the normal <<search-request-body,search>> API.
|
the normal <<search-search,search API>>.
|
||||||
|
|
||||||
The request body accepts the same <<sql-search-api-request-body,parameters>> as
|
The request body accepts the same <<sql-search-api-request-body,parameters>> as
|
||||||
the <<sql-search-api,SQL search API>>, excluding `cursor`.
|
the <<sql-search-api,SQL search API>>, excluding `cursor`.
|
||||||
|
|
|
@ -10,7 +10,7 @@
|
||||||
A common requirement when dealing with date/time in general revolves around
|
A common requirement when dealing with date/time in general revolves around
|
||||||
the notion of `interval`, a topic that is worth exploring in the context of {es} and {es-sql}.
|
the notion of `interval`, a topic that is worth exploring in the context of {es} and {es-sql}.
|
||||||
|
|
||||||
{es} has comprehensive support for <<date-math, date math>> both inside <<date-math-index-names, index names>> and <<mapping-date-format, queries>>.
|
{es} has comprehensive support for <<date-math, date math>> both inside <<api-date-math-index-names, index names>> and <<mapping-date-format, queries>>.
|
||||||
Inside {es-sql} the former is supported as is by passing the expression in the table name, while the latter is supported through the standard SQL `INTERVAL`.
|
Inside {es-sql} the former is supported as is by passing the expression in the table name, while the latter is supported through the standard SQL `INTERVAL`.
|
||||||
|
|
||||||
The table below shows the mapping between {es} and {es-sql}:
|
The table below shows the mapping between {es} and {es-sql}:
|
||||||
|
|
|
@ -116,7 +116,7 @@ If the table name contains special SQL characters (such as `.`,`-`,`*`,etc...) u
|
||||||
include-tagged::{sql-specs}/docs/docs.csv-spec[fromTableQuoted]
|
include-tagged::{sql-specs}/docs/docs.csv-spec[fromTableQuoted]
|
||||||
----
|
----
|
||||||
|
|
||||||
The name can be a <<multi-index, pattern>> pointing to multiple indices (likely requiring quoting as mentioned above) with the restriction that *all* resolved concrete tables have **exact mapping**.
|
The name can be a <<api-multi-index, pattern>> pointing to multiple indices (likely requiring quoting as mentioned above) with the restriction that *all* resolved concrete tables have **exact mapping**.
|
||||||
|
|
||||||
[source, sql]
|
[source, sql]
|
||||||
----
|
----
|
||||||
|
|
|
@ -42,7 +42,7 @@ Identifiers can be of two types: __quoted__ and __unquoted__:
|
||||||
SELECT ip_address FROM "hosts-*"
|
SELECT ip_address FROM "hosts-*"
|
||||||
----
|
----
|
||||||
|
|
||||||
This query has two identifiers, `ip_address` and `hosts-*` (an <<multi-index,index pattern>>). As `ip_address` does not clash with any key words, it can be used verbatim; `hosts-*`, on the other hand, cannot, as it clashes with `-` (the minus operation) and `*`, hence the double quotes.
|
This query has two identifiers, `ip_address` and `hosts-*` (an <<api-multi-index,index pattern>>). As `ip_address` does not clash with any key words, it can be used verbatim; `hosts-*`, on the other hand, cannot, as it clashes with `-` (the minus operation) and `*`, hence the double quotes.
|
||||||
|
|
||||||
Another example:
|
Another example:
|
||||||
|
|
||||||
|
@ -51,7 +51,7 @@ Another example:
|
||||||
SELECT "from" FROM "<logstash-{now/d}>"
|
SELECT "from" FROM "<logstash-{now/d}>"
|
||||||
----
|
----
|
||||||
|
|
||||||
The first identifier, `from`, needs to be quoted as it otherwise clashes with the `FROM` key word (which is case insensitive and can thus be written as `from`), while the second identifier, which uses {es} <<date-math-index-names>>, would otherwise confuse the parser.
|
The first identifier, `from`, needs to be quoted as it otherwise clashes with the `FROM` key word (which is case insensitive and can thus be written as `from`), while the second identifier, which uses {es} <<api-date-math-index-names>>, would otherwise confuse the parser.
|
||||||
|
|
||||||
This is why, in general and *especially* when dealing with user input, it is *highly* recommended to quote identifiers. Doing so adds minimal overhead to your queries and in return offers clarity and disambiguation.
|
This is why, in general and *especially* when dealing with user input, it is *highly* recommended to quote identifiers. Doing so adds minimal overhead to your queries and in return offers clarity and disambiguation.
|
||||||
|
|
||||||
|
|
|
@ -80,7 +80,7 @@ For high-cardinality `text` fields, fielddata can use a large amount of JVM
|
||||||
memory. To avoid this, {es} disables fielddata on `text` fields by default. If
|
memory. To avoid this, {es} disables fielddata on `text` fields by default. If
|
||||||
you've enabled fielddata and triggered the <<fielddata-circuit-breaker,fielddata
|
you've enabled fielddata and triggered the <<fielddata-circuit-breaker,fielddata
|
||||||
circuit breaker>>, consider disabling it and using a `keyword` field instead.
|
circuit breaker>>, consider disabling it and using a `keyword` field instead.
|
||||||
See <<fielddata>>.
|
See <<fielddata-mapping-param>>.
|
||||||
|
|
||||||
**Clear the fielddata cache**
|
**Clear the fielddata cache**
|
||||||
|
|
||||||
|
|
|
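A hedged sketch of the recommended alternative: map the field as `text` with a `keyword` sub-field, then aggregate or sort on the sub-field instead of enabling fielddata. The index and field names are invented for illustration:

[source,console]
----
PUT /my-index-000001
{
  "mappings": {
    "properties": {
      "message": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword" } <1>
        }
      }
    }
  }
}
----
<1> Aggregations and sorts use `message.keyword`, which relies on doc values rather than fielddata.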
@ -45,6 +45,9 @@ public class TSDBIndexingIT extends ESSingleNodeTestCase {
|
||||||
{
|
{
|
||||||
"_doc":{
|
"_doc":{
|
||||||
"properties": {
|
"properties": {
|
||||||
|
"@timestamp" : {
|
||||||
|
"type": "date"
|
||||||
|
},
|
||||||
"metricset": {
|
"metricset": {
|
||||||
"type": "keyword",
|
"type": "keyword",
|
||||||
"time_series_dimension": true
|
"time_series_dimension": true
|
||||||
|
@ -86,28 +89,18 @@ public class TSDBIndexingIT extends ESSingleNodeTestCase {
|
||||||
}
|
}
|
||||||
|
|
||||||
public void testTimeRanges() throws Exception {
|
public void testTimeRanges() throws Exception {
|
||||||
var mappingTemplate = """
|
|
||||||
{
|
|
||||||
"_doc":{
|
|
||||||
"properties": {
|
|
||||||
"metricset": {
|
|
||||||
"type": "keyword",
|
|
||||||
"time_series_dimension": true
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}""";
|
|
||||||
var templateSettings = Settings.builder().put("index.mode", "time_series");
|
var templateSettings = Settings.builder().put("index.mode", "time_series");
|
||||||
if (randomBoolean()) {
|
if (randomBoolean()) {
|
||||||
templateSettings.put("index.routing_path", "metricset");
|
templateSettings.put("index.routing_path", "metricset");
|
||||||
}
|
}
|
||||||
|
var mapping = new CompressedXContent(randomBoolean() ? MAPPING_TEMPLATE : MAPPING_TEMPLATE.replace("date", "date_nanos"));
|
||||||
|
|
||||||
if (randomBoolean()) {
|
if (randomBoolean()) {
|
||||||
var request = new PutComposableIndexTemplateAction.Request("id");
|
var request = new PutComposableIndexTemplateAction.Request("id");
|
||||||
request.indexTemplate(
|
request.indexTemplate(
|
||||||
new ComposableIndexTemplate(
|
new ComposableIndexTemplate(
|
||||||
List.of("k8s*"),
|
List.of("k8s*"),
|
||||||
new Template(templateSettings.build(), new CompressedXContent(mappingTemplate), null),
|
new Template(templateSettings.build(), mapping, null),
|
||||||
null,
|
null,
|
||||||
null,
|
null,
|
||||||
null,
|
null,
|
||||||
|
@ -119,9 +112,7 @@ public class TSDBIndexingIT extends ESSingleNodeTestCase {
|
||||||
client().execute(PutComposableIndexTemplateAction.INSTANCE, request).actionGet();
|
client().execute(PutComposableIndexTemplateAction.INSTANCE, request).actionGet();
|
||||||
} else {
|
} else {
|
||||||
var putComponentTemplateRequest = new PutComponentTemplateAction.Request("1");
|
var putComponentTemplateRequest = new PutComponentTemplateAction.Request("1");
|
||||||
putComponentTemplateRequest.componentTemplate(
|
putComponentTemplateRequest.componentTemplate(new ComponentTemplate(new Template(null, mapping, null), null, null));
|
||||||
new ComponentTemplate(new Template(null, new CompressedXContent(mappingTemplate), null), null, null)
|
|
||||||
);
|
|
||||||
client().execute(PutComponentTemplateAction.INSTANCE, putComponentTemplateRequest).actionGet();
|
client().execute(PutComponentTemplateAction.INSTANCE, putComponentTemplateRequest).actionGet();
|
||||||
|
|
||||||
var putTemplateRequest = new PutComposableIndexTemplateAction.Request("id");
|
var putTemplateRequest = new PutComposableIndexTemplateAction.Request("id");
|
||||||
|
@ -376,13 +367,14 @@ public class TSDBIndexingIT extends ESSingleNodeTestCase {
|
||||||
|
|
||||||
public void testSkippingShards() throws Exception {
|
public void testSkippingShards() throws Exception {
|
||||||
Instant time = Instant.now();
|
Instant time = Instant.now();
|
||||||
|
var mapping = new CompressedXContent(randomBoolean() ? MAPPING_TEMPLATE : MAPPING_TEMPLATE.replace("date", "date_nanos"));
|
||||||
{
|
{
|
||||||
var templateSettings = Settings.builder().put("index.mode", "time_series").put("index.routing_path", "metricset").build();
|
var templateSettings = Settings.builder().put("index.mode", "time_series").put("index.routing_path", "metricset").build();
|
||||||
var request = new PutComposableIndexTemplateAction.Request("id1");
|
var request = new PutComposableIndexTemplateAction.Request("id1");
|
||||||
request.indexTemplate(
|
request.indexTemplate(
|
||||||
new ComposableIndexTemplate(
|
new ComposableIndexTemplate(
|
||||||
List.of("pattern-1"),
|
List.of("pattern-1"),
|
||||||
new Template(templateSettings, new CompressedXContent(MAPPING_TEMPLATE), null),
|
new Template(templateSettings, mapping, null),
|
||||||
null,
|
null,
|
||||||
null,
|
null,
|
||||||
null,
|
null,
|
||||||
|
@ -401,7 +393,7 @@ public class TSDBIndexingIT extends ESSingleNodeTestCase {
|
||||||
request.indexTemplate(
|
request.indexTemplate(
|
||||||
new ComposableIndexTemplate(
|
new ComposableIndexTemplate(
|
||||||
List.of("pattern-2"),
|
List.of("pattern-2"),
|
||||||
new Template(null, new CompressedXContent(MAPPING_TEMPLATE), null),
|
new Template(null, mapping, null),
|
||||||
null,
|
null,
|
||||||
null,
|
null,
|
||||||
null,
|
null,
|
||||||
|
|
|
@ -19,7 +19,6 @@ import java.io.IOException;
|
||||||
import java.time.Instant;
|
import java.time.Instant;
|
||||||
import java.time.temporal.ChronoUnit;
|
import java.time.temporal.ChronoUnit;
|
||||||
import java.util.HashSet;
|
import java.util.HashSet;
|
||||||
import java.util.List;
|
|
||||||
import java.util.Map;
|
import java.util.Map;
|
||||||
import java.util.Set;
|
import java.util.Set;
|
||||||
|
|
||||||
|
@ -213,77 +212,19 @@ public class TsdbDataStreamRestIT extends ESRestTestCase {
|
||||||
}
|
}
|
||||||
|
|
||||||
public void testTsdbDataStreams() throws Exception {
|
public void testTsdbDataStreams() throws Exception {
|
||||||
var bulkRequest = new Request("POST", "/k8s/_bulk");
|
assertTsdbDataStream();
|
||||||
bulkRequest.setJsonEntity(BULK.replace("$now", formatInstant(Instant.now())));
|
|
||||||
bulkRequest.addParameter("refresh", "true");
|
|
||||||
var response = client().performRequest(bulkRequest);
|
|
||||||
assertOK(response);
|
|
||||||
var responseBody = entityAsMap(response);
|
|
||||||
assertThat("errors in response:\n " + responseBody, responseBody.get("errors"), equalTo(false));
|
|
||||||
|
|
||||||
var getDataStreamsRequest = new Request("GET", "/_data_stream");
|
|
||||||
response = client().performRequest(getDataStreamsRequest);
|
|
||||||
assertOK(response);
|
|
||||||
var dataStreams = entityAsMap(response);
|
|
||||||
assertThat(ObjectPath.evaluate(dataStreams, "data_streams"), hasSize(1));
|
|
||||||
assertThat(ObjectPath.evaluate(dataStreams, "data_streams.0.name"), equalTo("k8s"));
|
|
||||||
assertThat(ObjectPath.evaluate(dataStreams, "data_streams.0.generation"), equalTo(1));
|
|
||||||
assertThat(ObjectPath.evaluate(dataStreams, "data_streams.0.template"), equalTo("1"));
|
|
||||||
assertThat(ObjectPath.evaluate(dataStreams, "data_streams.0.indices"), hasSize(1));
|
|
||||||
String firstBackingIndex = ObjectPath.evaluate(dataStreams, "data_streams.0.indices.0.index_name");
|
|
||||||
assertThat(firstBackingIndex, backingIndexEqualTo("k8s", 1));
|
|
||||||
|
|
||||||
var indices = getIndex(firstBackingIndex);
|
|
||||||
var escapedBackingIndex = firstBackingIndex.replace(".", "\\.");
|
|
||||||
assertThat(ObjectPath.evaluate(indices, escapedBackingIndex + ".data_stream"), equalTo("k8s"));
|
|
||||||
assertThat(ObjectPath.evaluate(indices, escapedBackingIndex + ".settings.index.mode"), equalTo("time_series"));
|
|
||||||
String startTimeFirstBackingIndex = ObjectPath.evaluate(indices, escapedBackingIndex + ".settings.index.time_series.start_time");
|
|
||||||
assertThat(startTimeFirstBackingIndex, notNullValue());
|
|
||||||
String endTimeFirstBackingIndex = ObjectPath.evaluate(indices, escapedBackingIndex + ".settings.index.time_series.end_time");
|
|
||||||
assertThat(endTimeFirstBackingIndex, notNullValue());
|
|
||||||
List<?> routingPaths = ObjectPath.evaluate(indices, escapedBackingIndex + ".settings.index.routing_path");
|
|
||||||
assertThat(routingPaths, containsInAnyOrder("metricset", "k8s.pod.uid", "pod.labels.*"));
|
|
||||||
|
|
||||||
var rolloverRequest = new Request("POST", "/k8s/_rollover");
|
|
||||||
assertOK(client().performRequest(rolloverRequest));
|
|
||||||
|
|
||||||
response = client().performRequest(getDataStreamsRequest);
|
|
||||||
assertOK(response);
|
|
||||||
dataStreams = entityAsMap(response);
|
|
||||||
assertThat(ObjectPath.evaluate(dataStreams, "data_streams.0.name"), equalTo("k8s"));
|
|
||||||
assertThat(ObjectPath.evaluate(dataStreams, "data_streams.0.generation"), equalTo(2));
|
|
||||||
String secondBackingIndex = ObjectPath.evaluate(dataStreams, "data_streams.0.indices.1.index_name");
|
|
||||||
assertThat(secondBackingIndex, backingIndexEqualTo("k8s", 2));
|
|
||||||
|
|
||||||
indices = getIndex(secondBackingIndex);
|
|
||||||
escapedBackingIndex = secondBackingIndex.replace(".", "\\.");
|
|
||||||
assertThat(ObjectPath.evaluate(indices, escapedBackingIndex + ".data_stream"), equalTo("k8s"));
|
|
||||||
String startTimeSecondBackingIndex = ObjectPath.evaluate(indices, escapedBackingIndex + ".settings.index.time_series.start_time");
|
|
||||||
assertThat(startTimeSecondBackingIndex, equalTo(endTimeFirstBackingIndex));
|
|
||||||
String endTimeSecondBackingIndex = ObjectPath.evaluate(indices, escapedBackingIndex + ".settings.index.time_series.end_time");
|
|
||||||
assertThat(endTimeSecondBackingIndex, notNullValue());
|
|
||||||
|
|
||||||
var indexRequest = new Request("POST", "/k8s/_doc");
|
|
||||||
Instant time = parseInstant(startTimeFirstBackingIndex);
|
|
||||||
indexRequest.setJsonEntity(DOC.replace("$time", formatInstant(time)));
|
|
||||||
response = client().performRequest(indexRequest);
|
|
||||||
assertOK(response);
|
|
||||||
assertThat(entityAsMap(response).get("_index"), equalTo(firstBackingIndex));
|
|
||||||
|
|
||||||
indexRequest = new Request("POST", "/k8s/_doc");
|
|
||||||
time = parseInstant(endTimeSecondBackingIndex).minusMillis(1);
|
|
||||||
indexRequest.setJsonEntity(DOC.replace("$time", formatInstant(time)));
|
|
||||||
response = client().performRequest(indexRequest);
|
|
||||||
assertOK(response);
|
|
||||||
assertThat(entityAsMap(response).get("_index"), equalTo(secondBackingIndex));
|
|
||||||
}
|
}
|
||||||
|
|
||||||
public void testTsdbDataStreamsNanos() throws Exception {
|
public void testTsdbDataStreamsNanos() throws Exception {
|
||||||
// Create a template
|
// Overwrite template to use date_nanos field type:
|
||||||
var putComposableIndexTemplateRequest = new Request("POST", "/_index_template/1");
|
var putComposableIndexTemplateRequest = new Request("POST", "/_index_template/1");
|
||||||
putComposableIndexTemplateRequest.setJsonEntity(TEMPLATE.replace("date", "date_nanos"));
|
putComposableIndexTemplateRequest.setJsonEntity(TEMPLATE.replace("date", "date_nanos"));
|
||||||
assertOK(client().performRequest(putComposableIndexTemplateRequest));
|
assertOK(client().performRequest(putComposableIndexTemplateRequest));
|
||||||
|
|
||||||
|
assertTsdbDataStream();
|
||||||
|
}
|
||||||
|
|
||||||
|
private void assertTsdbDataStream() throws IOException {
|
||||||
var bulkRequest = new Request("POST", "/k8s/_bulk");
|
var bulkRequest = new Request("POST", "/k8s/_bulk");
|
||||||
bulkRequest.setJsonEntity(BULK.replace("$now", formatInstantNanos(Instant.now())));
|
bulkRequest.setJsonEntity(BULK.replace("$now", formatInstantNanos(Instant.now())));
|
||||||
bulkRequest.addParameter("refresh", "true");
|
bulkRequest.addParameter("refresh", "true");
|
||||||
|
@ -333,6 +274,7 @@ public class TsdbDataStreamRestIT extends ESRestTestCase {
|
||||||
assertThat(endTimeSecondBackingIndex, notNullValue());
|
assertThat(endTimeSecondBackingIndex, notNullValue());
|
||||||
|
|
||||||
var indexRequest = new Request("POST", "/k8s/_doc");
|
var indexRequest = new Request("POST", "/k8s/_doc");
|
||||||
|
indexRequest.addParameter("refresh", "true");
|
||||||
Instant time = parseInstant(startTimeFirstBackingIndex);
|
Instant time = parseInstant(startTimeFirstBackingIndex);
|
||||||
indexRequest.setJsonEntity(DOC.replace("$time", formatInstantNanos(time)));
|
indexRequest.setJsonEntity(DOC.replace("$time", formatInstantNanos(time)));
|
||||||
response = client().performRequest(indexRequest);
|
response = client().performRequest(indexRequest);
|
||||||
|
@ -340,11 +282,45 @@ public class TsdbDataStreamRestIT extends ESRestTestCase {
|
||||||
assertThat(entityAsMap(response).get("_index"), equalTo(firstBackingIndex));
|
assertThat(entityAsMap(response).get("_index"), equalTo(firstBackingIndex));
|
||||||
|
|
||||||
indexRequest = new Request("POST", "/k8s/_doc");
|
indexRequest = new Request("POST", "/k8s/_doc");
|
||||||
|
indexRequest.addParameter("refresh", "true");
|
||||||
time = parseInstant(endTimeSecondBackingIndex).minusMillis(1);
|
time = parseInstant(endTimeSecondBackingIndex).minusMillis(1);
|
||||||
indexRequest.setJsonEntity(DOC.replace("$time", formatInstantNanos(time)));
|
indexRequest.setJsonEntity(DOC.replace("$time", formatInstantNanos(time)));
|
||||||
response = client().performRequest(indexRequest);
|
response = client().performRequest(indexRequest);
|
||||||
assertOK(response);
|
assertOK(response);
|
||||||
assertThat(entityAsMap(response).get("_index"), equalTo(secondBackingIndex));
|
assertThat(entityAsMap(response).get("_index"), equalTo(secondBackingIndex));
|
||||||
|
|
||||||
|
var searchRequest = new Request("GET", "k8s/_search");
|
||||||
|
searchRequest.setJsonEntity("""
|
||||||
|
{
|
||||||
|
"query": {
|
||||||
|
"range":{
|
||||||
|
"@timestamp":{
|
||||||
|
"gte": "now-7d",
|
||||||
|
"lte": "now+7d"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"sort": [
|
||||||
|
{
|
||||||
|
"@timestamp": {
|
||||||
|
"order": "desc"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
""");
|
||||||
|
response = client().performRequest(searchRequest);
|
||||||
|
assertOK(response);
|
||||||
|
responseBody = entityAsMap(response);
|
||||||
|
try {
|
||||||
|
assertThat(ObjectPath.evaluate(responseBody, "hits.total.value"), equalTo(10));
|
||||||
|
assertThat(ObjectPath.evaluate(responseBody, "hits.total.relation"), equalTo("eq"));
|
||||||
|
assertThat(ObjectPath.evaluate(responseBody, "hits.hits.0._index"), equalTo(secondBackingIndex));
|
||||||
|
assertThat(ObjectPath.evaluate(responseBody, "hits.hits.1._index"), equalTo(firstBackingIndex));
|
||||||
|
} catch (Exception | AssertionError e) {
|
||||||
|
logger.error("search response body causing assertion error [" + responseBody + "]", e);
|
||||||
|
throw e;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
public void testSimulateTsdbDataStreamTemplate() throws Exception {
|
public void testSimulateTsdbDataStreamTemplate() throws Exception {
|
||||||
|
|
|
@ -0,0 +1,80 @@
|
||||||
|
---
|
||||||
|
setup:
|
||||||
|
- skip:
|
||||||
|
version: "9.0.0 - "
|
||||||
|
reason: "compatible from 8.x to 7.x"
|
||||||
|
features:
|
||||||
|
- "headers"
|
||||||
|
- "warnings"
|
||||||
|
- do:
|
||||||
|
indices.create:
|
||||||
|
index: locations
|
||||||
|
body:
|
||||||
|
settings:
|
||||||
|
number_of_shards: 1
|
||||||
|
number_of_replicas: 0
|
||||||
|
mappings:
|
||||||
|
|
||||||
|
properties:
|
||||||
|
location:
|
||||||
|
type: geo_point
|
||||||
|
- do:
|
||||||
|
bulk:
|
||||||
|
index: locations
|
||||||
|
refresh: true
|
||||||
|
body: |
|
||||||
|
{"index":{}}
|
||||||
|
{"location" : {"lat": 13.5, "lon" : 34.89}}
|
||||||
|
{"index":{}}
|
||||||
|
{"location" : {"lat": -7.9, "lon" : 120.78}}
|
||||||
|
{"index":{}}
|
||||||
|
{"location" : {"lat": 45.78, "lon" : -173.45}}
|
||||||
|
{"index":{}}
|
||||||
|
{"location" : {"lat": 32.45, "lon" : 45.6}}
|
||||||
|
{"index":{}}
|
||||||
|
{"location" : {"lat": -63.24, "lon" : 31.0}}
|
||||||
|
{"index":{}}
|
||||||
|
{"location" : {"lat": 0.0, "lon" : 0.0}}
|
||||||
|
|
||||||
|
|
||||||
|
---
|
||||||
|
"geo bounding box query not compatible":
|
||||||
|
- do:
|
||||||
|
catch: /failed to parse \[geo_bounding_box\] query. unexpected field \[type\]/
|
||||||
|
search:
|
||||||
|
index: locations
|
||||||
|
body:
|
||||||
|
query:
|
||||||
|
geo_bounding_box:
|
||||||
|
type : indexed
|
||||||
|
location:
|
||||||
|
top_left:
|
||||||
|
lat: 10
|
||||||
|
lon: -10
|
||||||
|
bottom_right:
|
||||||
|
lat: -10
|
||||||
|
lon: 10
|
||||||
|
|
||||||
|
---
|
||||||
|
"geo bounding box query compatible":
|
||||||
|
- do:
|
||||||
|
headers:
|
||||||
|
Content-Type: "application/vnd.elasticsearch+json;compatible-with=7"
|
||||||
|
Accept: "application/vnd.elasticsearch+json;compatible-with=7"
|
||||||
|
warnings:
|
||||||
|
- "Deprecated parameter [type] used, it should no longer be specified."
|
||||||
|
search:
|
||||||
|
index: locations
|
||||||
|
body:
|
||||||
|
query:
|
||||||
|
geo_bounding_box:
|
||||||
|
type : indexed
|
||||||
|
location:
|
||||||
|
top_left:
|
||||||
|
lat: 10
|
||||||
|
lon: -10
|
||||||
|
bottom_right:
|
||||||
|
lat: -10
|
||||||
|
lon: 10
|
||||||
|
- match: {hits.total.value: 1}
|
||||||
|
|
|
@ -43,6 +43,7 @@ import org.elasticsearch.gateway.MetadataStateFormat;
|
||||||
import org.elasticsearch.index.Index;
|
import org.elasticsearch.index.Index;
|
||||||
import org.elasticsearch.index.IndexMode;
|
import org.elasticsearch.index.IndexMode;
|
||||||
import org.elasticsearch.index.IndexSettings;
|
import org.elasticsearch.index.IndexSettings;
|
||||||
|
import org.elasticsearch.index.mapper.DateFieldMapper;
|
||||||
import org.elasticsearch.index.mapper.MapperService;
|
import org.elasticsearch.index.mapper.MapperService;
|
||||||
import org.elasticsearch.index.seqno.SequenceNumbers;
|
import org.elasticsearch.index.seqno.SequenceNumbers;
|
||||||
import org.elasticsearch.index.shard.IndexLongFieldRange;
|
import org.elasticsearch.index.shard.IndexLongFieldRange;
|
||||||
|
@ -1297,14 +1298,27 @@ public class IndexMetadata implements Diffable<IndexMetadata>, ToXContentFragmen
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
|
* @return whether this index has a time series timestamp range
|
||||||
|
*/
|
||||||
|
public boolean hasTimeSeriesTimestampRange() {
|
||||||
|
return indexMode != null && indexMode.getTimestampBound(this) != null;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* @param dateFieldType the date field type of '@timestamp' field which is
|
||||||
|
* used to convert the start and end times recorded in index metadata
|
||||||
|
* to the right format that is being used by '@timestamp' field.
|
||||||
|
* For example, the '@timestamp' can be configured with nanosecond precision.
|
||||||
* @return the time range this index represents if this index is in time series mode.
|
* @return the time range this index represents if this index is in time series mode.
|
||||||
* Otherwise <code>null</code> is returned.
|
* Otherwise <code>null</code> is returned.
|
||||||
*/
|
*/
|
||||||
@Nullable
|
@Nullable
|
||||||
public IndexLongFieldRange getTimeSeriesTimestampRange() {
|
public IndexLongFieldRange getTimeSeriesTimestampRange(DateFieldMapper.DateFieldType dateFieldType) {
|
||||||
var bounds = indexMode != null ? indexMode.getTimestampBound(this) : null;
|
var bounds = indexMode != null ? indexMode.getTimestampBound(this) : null;
|
||||||
if (bounds != null) {
|
if (bounds != null) {
|
||||||
return IndexLongFieldRange.NO_SHARDS.extendWithShardRange(0, 1, ShardLongFieldRange.of(bounds.startTime(), bounds.endTime()));
|
long start = dateFieldType.resolution().convert(Instant.ofEpochMilli(bounds.startTime()));
|
||||||
|
long end = dateFieldType.resolution().convert(Instant.ofEpochMilli(bounds.endTime()));
|
||||||
|
return IndexLongFieldRange.NO_SHARDS.extendWithShardRange(0, 1, ShardLongFieldRange.of(start, end));
|
||||||
} else {
|
} else {
|
||||||
return null;
|
return null;
|
||||||
}
|
}
|
||||||
|
|
|
@ -31,7 +31,7 @@ public enum ShardRoutingState {
|
||||||
*/
|
*/
|
||||||
RELOCATING((byte) 4);
|
RELOCATING((byte) 4);
|
||||||
|
|
||||||
private byte value;
|
private final byte value;
|
||||||
|
|
||||||
ShardRoutingState(byte value) {
|
ShardRoutingState(byte value) {
|
||||||
this.value = value;
|
this.value = value;
|
||||||
|
|
|
@ -90,6 +90,7 @@ public class AllocationDeciders {
|
||||||
}
|
}
|
||||||
|
|
||||||
public Decision canRebalance(ShardRouting shardRouting, RoutingAllocation allocation) {
|
public Decision canRebalance(ShardRouting shardRouting, RoutingAllocation allocation) {
|
||||||
|
assert shardRouting.started() : "Only started shard could be rebalanced: " + shardRouting;
|
||||||
return withDeciders(
|
return withDeciders(
|
||||||
allocation,
|
allocation,
|
||||||
decider -> decider.canRebalance(shardRouting, allocation),
|
decider -> decider.canRebalance(shardRouting, allocation),
|
||||||
|
|
|
@ -91,6 +91,7 @@ public abstract class AbstractHttpServerTransport extends AbstractLifecycleCompo
|
||||||
|
|
||||||
private final HttpTracer httpLogger;
|
private final HttpTracer httpLogger;
|
||||||
private final Tracer tracer;
|
private final Tracer tracer;
|
||||||
|
private volatile boolean gracefullyCloseConnections;
|
||||||
|
|
||||||
private volatile long slowLogThresholdMs;
|
private volatile long slowLogThresholdMs;
|
||||||
|
|
||||||
|
@ -454,7 +455,8 @@ public abstract class AbstractHttpServerTransport extends AbstractLifecycleCompo
|
||||||
threadContext,
|
threadContext,
|
||||||
corsHandler,
|
corsHandler,
|
||||||
maybeHttpLogger,
|
maybeHttpLogger,
|
||||||
tracer
|
tracer,
|
||||||
|
gracefullyCloseConnections
|
||||||
);
|
);
|
||||||
} catch (final IllegalArgumentException e) {
|
} catch (final IllegalArgumentException e) {
|
||||||
badRequestCause = ExceptionsHelper.useOrSuppress(badRequestCause, e);
|
badRequestCause = ExceptionsHelper.useOrSuppress(badRequestCause, e);
|
||||||
|
@ -468,7 +470,8 @@ public abstract class AbstractHttpServerTransport extends AbstractLifecycleCompo
|
||||||
threadContext,
|
threadContext,
|
||||||
corsHandler,
|
corsHandler,
|
||||||
httpLogger,
|
httpLogger,
|
||||||
tracer
|
tracer,
|
||||||
|
gracefullyCloseConnections
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
channel = innerChannel;
|
channel = innerChannel;
|
||||||
|
@ -510,4 +513,8 @@ public abstract class AbstractHttpServerTransport extends AbstractLifecycleCompo
|
||||||
public ThreadPool getThreadPool() {
|
public ThreadPool getThreadPool() {
|
||||||
return threadPool;
|
return threadPool;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
public void gracefullyCloseConnections() {
|
||||||
|
gracefullyCloseConnections = true;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
|
@ -56,6 +56,7 @@ public class DefaultRestChannel extends AbstractRestChannel implements RestChann
|
||||||
private final HttpChannel httpChannel;
|
private final HttpChannel httpChannel;
|
||||||
private final CorsHandler corsHandler;
|
private final CorsHandler corsHandler;
|
||||||
private final Tracer tracer;
|
private final Tracer tracer;
|
||||||
|
private final boolean closeConnection;
|
||||||
|
|
||||||
@Nullable
|
@Nullable
|
||||||
private final HttpTracer httpLogger;
|
private final HttpTracer httpLogger;
|
||||||
|
@ -69,7 +70,8 @@ public class DefaultRestChannel extends AbstractRestChannel implements RestChann
|
||||||
ThreadContext threadContext,
|
ThreadContext threadContext,
|
||||||
CorsHandler corsHandler,
|
CorsHandler corsHandler,
|
||||||
@Nullable HttpTracer httpLogger,
|
@Nullable HttpTracer httpLogger,
|
||||||
Tracer tracer
|
Tracer tracer,
|
||||||
|
boolean closeConnection
|
||||||
) {
|
) {
|
||||||
super(request, settings.detailedErrorsEnabled());
|
super(request, settings.detailedErrorsEnabled());
|
||||||
this.httpChannel = httpChannel;
|
this.httpChannel = httpChannel;
|
||||||
|
@ -80,6 +82,7 @@ public class DefaultRestChannel extends AbstractRestChannel implements RestChann
|
||||||
this.corsHandler = corsHandler;
|
this.corsHandler = corsHandler;
|
||||||
this.httpLogger = httpLogger;
|
this.httpLogger = httpLogger;
|
||||||
this.tracer = tracer;
|
this.tracer = tracer;
|
||||||
|
this.closeConnection = closeConnection;
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
|
@ -95,7 +98,7 @@ public class DefaultRestChannel extends AbstractRestChannel implements RestChann
|
||||||
final SpanId spanId = SpanId.forRestRequest(request);
|
final SpanId spanId = SpanId.forRestRequest(request);
|
||||||
|
|
||||||
final ArrayList<Releasable> toClose = new ArrayList<>(4);
|
final ArrayList<Releasable> toClose = new ArrayList<>(4);
|
||||||
if (HttpUtils.shouldCloseConnection(httpRequest)) {
|
if (HttpUtils.shouldCloseConnection(httpRequest) || closeConnection) {
|
||||||
toClose.add(() -> CloseableChannel.closeChannel(httpChannel));
|
toClose.add(() -> CloseableChannel.closeChannel(httpChannel));
|
||||||
}
|
}
|
||||||
toClose.add(() -> tracer.stopTrace(request));
|
toClose.add(() -> tracer.stopTrace(request));
|
||||||
|
@ -159,6 +162,9 @@ public class DefaultRestChannel extends AbstractRestChannel implements RestChann
|
||||||
// Add all custom headers
|
// Add all custom headers
|
||||||
addCustomHeaders(httpResponse, restResponse.getHeaders());
|
addCustomHeaders(httpResponse, restResponse.getHeaders());
|
||||||
addCustomHeaders(httpResponse, restResponse.filterHeaders(threadContext.getResponseHeaders()));
|
addCustomHeaders(httpResponse, restResponse.filterHeaders(threadContext.getResponseHeaders()));
|
||||||
|
if (closeConnection) {
|
||||||
|
setHeaderField(httpResponse, CONNECTION, CLOSE);
|
||||||
|
}
|
||||||
|
|
||||||
// If our response doesn't specify a content-type header, set one
|
// If our response doesn't specify a content-type header, set one
|
||||||
setHeaderField(httpResponse, CONTENT_TYPE, restResponse.contentType(), false);
|
setHeaderField(httpResponse, CONTENT_TYPE, restResponse.contentType(), false);
|
||||||
|
|
|
@ -49,20 +49,18 @@ public class CoordinatorRewriteContextProvider {
|
||||||
if (indexMetadata == null) {
|
if (indexMetadata == null) {
|
||||||
return null;
|
return null;
|
||||||
}
|
}
|
||||||
|
DateFieldMapper.DateFieldType dateFieldType = mappingSupplier.apply(index);
|
||||||
|
if (dateFieldType == null) {
|
||||||
|
return null;
|
||||||
|
}
|
||||||
IndexLongFieldRange timestampRange = indexMetadata.getTimestampRange();
|
IndexLongFieldRange timestampRange = indexMetadata.getTimestampRange();
|
||||||
if (timestampRange.containsAllShardRanges() == false) {
|
if (timestampRange.containsAllShardRanges() == false) {
|
||||||
timestampRange = indexMetadata.getTimeSeriesTimestampRange();
|
timestampRange = indexMetadata.getTimeSeriesTimestampRange(dateFieldType);
|
||||||
if (timestampRange == null) {
|
if (timestampRange == null) {
|
||||||
return null;
|
return null;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
DateFieldMapper.DateFieldType dateFieldType = mappingSupplier.apply(index);
|
|
||||||
|
|
||||||
if (dateFieldType == null) {
|
|
||||||
return null;
|
|
||||||
}
|
|
||||||
|
|
||||||
return new CoordinatorRewriteContext(parserConfig, client, nowInMillis, timestampRange, dateFieldType);
|
return new CoordinatorRewriteContext(parserConfig, client, nowInMillis, timestampRange, dateFieldType);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
|
@ -21,6 +21,8 @@ import org.elasticsearch.common.geo.ShapeRelation;
|
||||||
import org.elasticsearch.common.geo.SpatialStrategy;
|
import org.elasticsearch.common.geo.SpatialStrategy;
|
||||||
import org.elasticsearch.common.io.stream.StreamInput;
|
import org.elasticsearch.common.io.stream.StreamInput;
|
||||||
import org.elasticsearch.common.io.stream.StreamOutput;
|
import org.elasticsearch.common.io.stream.StreamOutput;
|
||||||
|
import org.elasticsearch.common.logging.DeprecationLogger;
|
||||||
|
import org.elasticsearch.core.RestApiVersion;
|
||||||
import org.elasticsearch.geometry.Rectangle;
|
import org.elasticsearch.geometry.Rectangle;
|
||||||
import org.elasticsearch.geometry.utils.Geohash;
|
import org.elasticsearch.geometry.utils.Geohash;
|
||||||
import org.elasticsearch.index.mapper.GeoShapeQueryable;
|
import org.elasticsearch.index.mapper.GeoShapeQueryable;
|
||||||
|
@ -42,11 +44,16 @@ import java.util.Objects;
|
||||||
public class GeoBoundingBoxQueryBuilder extends AbstractQueryBuilder<GeoBoundingBoxQueryBuilder> {
|
public class GeoBoundingBoxQueryBuilder extends AbstractQueryBuilder<GeoBoundingBoxQueryBuilder> {
|
||||||
public static final String NAME = "geo_bounding_box";
|
public static final String NAME = "geo_bounding_box";
|
||||||
|
|
||||||
|
private static final DeprecationLogger deprecationLogger = DeprecationLogger.getLogger(GeoBoundingBoxQueryBuilder.class);
|
||||||
|
|
||||||
|
private static final String TYPE_PARAMETER_DEPRECATION_MESSAGE = "Deprecated parameter [type] used, it should no longer be specified.";
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* The default value for ignore_unmapped.
|
* The default value for ignore_unmapped.
|
||||||
*/
|
*/
|
||||||
public static final boolean DEFAULT_IGNORE_UNMAPPED = false;
|
public static final boolean DEFAULT_IGNORE_UNMAPPED = false;
|
||||||
|
|
||||||
|
private static final ParseField TYPE_FIELD = new ParseField("type").forRestApiVersion(RestApiVersion.equalTo(RestApiVersion.V_7));
|
||||||
private static final ParseField VALIDATION_METHOD_FIELD = new ParseField("validation_method");
|
private static final ParseField VALIDATION_METHOD_FIELD = new ParseField("validation_method");
|
||||||
private static final ParseField IGNORE_UNMAPPED_FIELD = new ParseField("ignore_unmapped");
|
private static final ParseField IGNORE_UNMAPPED_FIELD = new ParseField("ignore_unmapped");
|
||||||
|
|
||||||
|
@ -349,6 +356,10 @@ public class GeoBoundingBoxQueryBuilder extends AbstractQueryBuilder<GeoBounding
|
||||||
validationMethod = GeoValidationMethod.fromString(parser.text());
|
validationMethod = GeoValidationMethod.fromString(parser.text());
|
||||||
} else if (IGNORE_UNMAPPED_FIELD.match(currentFieldName, parser.getDeprecationHandler())) {
|
} else if (IGNORE_UNMAPPED_FIELD.match(currentFieldName, parser.getDeprecationHandler())) {
|
||||||
ignoreUnmapped = parser.booleanValue();
|
ignoreUnmapped = parser.booleanValue();
|
||||||
|
} else if (parser.getRestApiVersion() == RestApiVersion.V_7
|
||||||
|
&& TYPE_FIELD.match(currentFieldName, parser.getDeprecationHandler())) {
|
||||||
|
deprecationLogger.compatibleCritical("geo_bounding_box_type", TYPE_PARAMETER_DEPRECATION_MESSAGE);
|
||||||
|
parser.text(); // ignore value
|
||||||
} else {
|
} else {
|
||||||
throw new ParsingException(
|
throw new ParsingException(
|
||||||
parser.getTokenLocation(),
|
parser.getTokenLocation(),
|
||||||
|
|
|
@ -141,7 +141,7 @@ public class TimestampFieldMapperService extends AbstractLifecycleComponent impl
|
||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (indexMetadata.getTimeSeriesTimestampRange() != null) {
|
if (indexMetadata.hasTimeSeriesTimestampRange()) {
|
||||||
// Tsdb indices have @timestamp field and index.time_series.start_time / index.time_series.end_time range
|
// Tsdb indices have @timestamp field and index.time_series.start_time / index.time_series.end_time range
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
|
@ -11,20 +11,20 @@ package org.elasticsearch.cluster.routing.allocation.decider;
|
||||||
import org.elasticsearch.Version;
|
import org.elasticsearch.Version;
|
||||||
import org.elasticsearch.cluster.ClusterName;
|
import org.elasticsearch.cluster.ClusterName;
|
||||||
import org.elasticsearch.cluster.ClusterState;
|
import org.elasticsearch.cluster.ClusterState;
|
||||||
|
import org.elasticsearch.cluster.ESAllocationTestCase;
|
||||||
import org.elasticsearch.cluster.metadata.IndexMetadata;
|
import org.elasticsearch.cluster.metadata.IndexMetadata;
|
||||||
import org.elasticsearch.cluster.metadata.Metadata;
|
import org.elasticsearch.cluster.metadata.Metadata;
|
||||||
import org.elasticsearch.cluster.node.DiscoveryNode;
|
import org.elasticsearch.cluster.node.DiscoveryNode;
|
||||||
import org.elasticsearch.cluster.node.TestDiscoveryNode;
|
|
||||||
import org.elasticsearch.cluster.routing.RecoverySource;
|
import org.elasticsearch.cluster.routing.RecoverySource;
|
||||||
import org.elasticsearch.cluster.routing.RoutingNode;
|
import org.elasticsearch.cluster.routing.RoutingNode;
|
||||||
import org.elasticsearch.cluster.routing.RoutingNodesHelper;
|
import org.elasticsearch.cluster.routing.RoutingNodesHelper;
|
||||||
import org.elasticsearch.cluster.routing.ShardRouting;
|
import org.elasticsearch.cluster.routing.ShardRouting;
|
||||||
|
import org.elasticsearch.cluster.routing.ShardRoutingState;
|
||||||
|
import org.elasticsearch.cluster.routing.TestShardRouting;
|
||||||
import org.elasticsearch.cluster.routing.UnassignedInfo;
|
import org.elasticsearch.cluster.routing.UnassignedInfo;
|
||||||
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
|
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
|
||||||
import org.elasticsearch.common.transport.TransportAddress;
|
|
||||||
import org.elasticsearch.index.Index;
|
import org.elasticsearch.index.Index;
|
||||||
import org.elasticsearch.index.shard.ShardId;
|
import org.elasticsearch.index.shard.ShardId;
|
||||||
import org.elasticsearch.test.ESTestCase;
|
|
||||||
|
|
||||||
import java.util.ArrayList;
|
import java.util.ArrayList;
|
||||||
import java.util.List;
|
import java.util.List;
|
||||||
|
@ -37,7 +37,7 @@ import java.util.stream.Collector;
|
||||||
|
|
||||||
import static org.hamcrest.Matchers.equalTo;
|
import static org.hamcrest.Matchers.equalTo;
|
||||||
|
|
||||||
public class AllocationDecidersTests extends ESTestCase {
|
public class AllocationDecidersTests extends ESAllocationTestCase {
|
||||||
|
|
||||||
public void testCheckAllDecidersBeforeReturningYes() {
|
public void testCheckAllDecidersBeforeReturningYes() {
|
||||||
var allDecisions = generateDecisions(() -> Decision.YES);
|
var allDecisions = generateDecisions(() -> Decision.YES);
|
||||||
|
@ -128,29 +128,29 @@ public class AllocationDecidersTests extends ESTestCase {
|
||||||
int expectedAllocationDecidersCalls,
|
int expectedAllocationDecidersCalls,
|
||||||
Decision expectedDecision
|
Decision expectedDecision
|
||||||
) {
|
) {
|
||||||
IndexMetadata index = IndexMetadata.builder("index")
|
IndexMetadata index = IndexMetadata.builder("index").settings(indexSettings(Version.CURRENT, 1, 0)).build();
|
||||||
.settings(settings(Version.CURRENT))
|
ShardId shardId = new ShardId(index.getIndex(), 0);
|
||||||
.numberOfShards(1)
|
|
||||||
.numberOfReplicas(0)
|
|
||||||
.build();
|
|
||||||
ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT)
|
ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT)
|
||||||
.metadata(Metadata.builder().put(index, false).build())
|
.metadata(Metadata.builder().put(index, false).build())
|
||||||
.build();
|
.build();
|
||||||
ShardRouting shardRouting = createShardRouting(index.getIndex());
|
|
||||||
|
ShardRouting startedShard = TestShardRouting.newShardRouting(shardId, "node", true, ShardRoutingState.STARTED);
|
||||||
|
ShardRouting unassignedShard = createUnassignedShard(index.getIndex());
|
||||||
|
|
||||||
RoutingNode routingNode = RoutingNodesHelper.routingNode("node", null);
|
RoutingNode routingNode = RoutingNodesHelper.routingNode("node", null);
|
||||||
DiscoveryNode discoveryNode = TestDiscoveryNode.create("node", new TransportAddress(TransportAddress.META_ADDRESS, 0));
|
DiscoveryNode discoveryNode = newNode("node");
|
||||||
|
|
||||||
List.<BiFunction<RoutingAllocation, AllocationDeciders, Decision>>of(
|
List.<BiFunction<RoutingAllocation, AllocationDeciders, Decision>>of(
|
||||||
(allocation, deciders) -> deciders.canAllocate(shardRouting, allocation),
|
(allocation, deciders) -> deciders.canAllocate(unassignedShard, allocation),
|
||||||
(allocation, deciders) -> deciders.canAllocate(shardRouting, routingNode, allocation),
|
(allocation, deciders) -> deciders.canAllocate(unassignedShard, routingNode, allocation),
|
||||||
(allocation, deciders) -> deciders.canAllocate(index, routingNode, allocation),
|
(allocation, deciders) -> deciders.canAllocate(index, routingNode, allocation),
|
||||||
(allocation, deciders) -> deciders.canRebalance(allocation),
|
(allocation, deciders) -> deciders.canRebalance(allocation),
|
||||||
(allocation, deciders) -> deciders.canRebalance(shardRouting, allocation),
|
(allocation, deciders) -> deciders.canRebalance(startedShard, allocation),
|
||||||
(allocation, deciders) -> deciders.canRemain(shardRouting, routingNode, allocation),
|
(allocation, deciders) -> deciders.canRemain(unassignedShard, routingNode, allocation),
|
||||||
(allocation, deciders) -> deciders.shouldAutoExpandToNode(index, discoveryNode, allocation),
|
(allocation, deciders) -> deciders.shouldAutoExpandToNode(index, discoveryNode, allocation),
|
||||||
(allocation, deciders) -> deciders.canForceAllocatePrimary(shardRouting, routingNode, allocation),
|
(allocation, deciders) -> deciders.canForceAllocatePrimary(unassignedShard, routingNode, allocation),
|
||||||
(allocation, deciders) -> deciders.canForceAllocateDuringReplace(shardRouting, routingNode, allocation),
|
(allocation, deciders) -> deciders.canForceAllocateDuringReplace(unassignedShard, routingNode, allocation),
|
||||||
(allocation, deciders) -> deciders.canAllocateReplicaWhenThereIsRetentionLease(shardRouting, routingNode, allocation)
|
(allocation, deciders) -> deciders.canAllocateReplicaWhenThereIsRetentionLease(unassignedShard, routingNode, allocation)
|
||||||
).forEach(operation -> {
|
).forEach(operation -> {
|
||||||
var decidersCalled = new int[] { 0 };
|
var decidersCalled = new int[] { 0 };
|
||||||
var deciders = new AllocationDeciders(decisions.stream().map(decision -> new TestAllocationDecider(() -> {
|
var deciders = new AllocationDeciders(decisions.stream().map(decision -> new TestAllocationDecider(() -> {
|
||||||
|
@ -180,7 +180,7 @@ public class AllocationDecidersTests extends ESTestCase {
|
||||||
);
|
);
|
||||||
|
|
||||||
assertThat(
|
assertThat(
|
||||||
deciders.getForcedInitialShardAllocationToNodes(createShardRouting(), createRoutingAllocation(deciders)),
|
deciders.getForcedInitialShardAllocationToNodes(createUnassignedShard(), createRoutingAllocation(deciders)),
|
||||||
equalTo(Optional.empty())
|
equalTo(Optional.empty())
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
@ -197,7 +197,7 @@ public class AllocationDecidersTests extends ESTestCase {
|
||||||
);
|
);
|
||||||
|
|
||||||
assertThat(
|
assertThat(
|
||||||
deciders.getForcedInitialShardAllocationToNodes(createShardRouting(), createRoutingAllocation(deciders)),
|
deciders.getForcedInitialShardAllocationToNodes(createUnassignedShard(), createRoutingAllocation(deciders)),
|
||||||
equalTo(Optional.of(Set.of("node-1", "node-2")))
|
equalTo(Optional.of(Set.of("node-1", "node-2")))
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
@ -215,12 +215,12 @@ public class AllocationDecidersTests extends ESTestCase {
|
||||||
);
|
);
|
||||||
|
|
||||||
assertThat(
|
assertThat(
|
||||||
deciders.getForcedInitialShardAllocationToNodes(createShardRouting(), createRoutingAllocation(deciders)),
|
deciders.getForcedInitialShardAllocationToNodes(createUnassignedShard(), createRoutingAllocation(deciders)),
|
||||||
equalTo(Optional.of(Set.of("node-2")))
|
equalTo(Optional.of(Set.of("node-2")))
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|
||||||
private static ShardRouting createShardRouting(Index index) {
|
private static ShardRouting createUnassignedShard(Index index) {
|
||||||
return ShardRouting.newUnassigned(
|
return ShardRouting.newUnassigned(
|
||||||
new ShardId(index, 0),
|
new ShardId(index, 0),
|
||||||
true,
|
true,
|
||||||
|
@ -230,8 +230,8 @@ public class AllocationDecidersTests extends ESTestCase {
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|
||||||
private static ShardRouting createShardRouting() {
|
private static ShardRouting createUnassignedShard() {
|
||||||
return createShardRouting(new Index("test", "testUUID"));
|
return createUnassignedShard(new Index("test", "testUUID"));
|
||||||
}
|
}
|
||||||
|
|
||||||
private static RoutingAllocation createRoutingAllocation(AllocationDeciders deciders) {
|
private static RoutingAllocation createRoutingAllocation(AllocationDeciders deciders) {
|
||||||
|
|
|
@@ -12,6 +12,7 @@ import org.apache.logging.log4j.Level;
 import org.apache.logging.log4j.LogManager;
 import org.apache.logging.log4j.Logger;
 import org.apache.lucene.util.BytesRef;
+import org.elasticsearch.action.ActionListener;
 import org.elasticsearch.action.ActionModule;
 import org.elasticsearch.cluster.service.ClusterService;
 import org.elasticsearch.common.UUIDs;
@@ -51,6 +52,7 @@ import org.elasticsearch.xcontent.NamedXContentRegistry;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
+import org.mockito.ArgumentCaptor;
 
 import java.net.InetSocketAddress;
 import java.net.UnknownHostException;
@@ -69,12 +71,16 @@ import java.util.concurrent.TimeUnit;
 import static java.net.InetAddress.getByName;
 import static java.util.Arrays.asList;
 import static org.elasticsearch.http.AbstractHttpServerTransport.resolvePublishPort;
+import static org.hamcrest.Matchers.containsInAnyOrder;
 import static org.hamcrest.Matchers.containsString;
 import static org.hamcrest.Matchers.equalTo;
 import static org.hamcrest.Matchers.instanceOf;
+import static org.hamcrest.Matchers.is;
 import static org.hamcrest.Matchers.notNullValue;
 import static org.hamcrest.Matchers.nullValue;
 import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.verify;
 
 public class AbstractHttpServerTransportTests extends ESTestCase {
 
@@ -883,6 +889,66 @@ public class AbstractHttpServerTransportTests extends ESTestCase {
         }
     }
 
+    @SuppressWarnings("unchecked")
+    public void testSetGracefulClose() {
+        try (
+            AbstractHttpServerTransport transport = new AbstractHttpServerTransport(
+                Settings.EMPTY,
+                networkService,
+                recycler,
+                threadPool,
+                xContentRegistry(),
+                new HttpServerTransport.Dispatcher() {
+                    @Override
+                    public void dispatchRequest(RestRequest request, RestChannel channel, ThreadContext threadContext) {
+                        channel.sendResponse(emptyResponse(RestStatus.OK));
+                    }
+
+                    @Override
+                    public void dispatchBadRequest(RestChannel channel, ThreadContext threadContext, Throwable cause) {
+                        channel.sendResponse(emptyResponse(RestStatus.BAD_REQUEST));
+                    }
+                },
+                new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS),
+                Tracer.NOOP
+            ) {
+
+                @Override
+                protected HttpServerChannel bind(InetSocketAddress hostAddress) {
+                    return null;
+                }
+
+                @Override
+                protected void doStart() {}
+
+                @Override
+                protected void stopInternal() {}
+            }
+        ) {
+            final TestHttpRequest httpRequest = new TestHttpRequest(HttpRequest.HttpVersion.HTTP_1_1, RestRequest.Method.GET, "/");
+
+            HttpChannel httpChannel = mock(HttpChannel.class);
+            transport.incomingRequest(httpRequest, httpChannel);
+
+            var response = ArgumentCaptor.forClass(TestHttpResponse.class);
+            var listener = ArgumentCaptor.forClass(ActionListener.class);
+            verify(httpChannel).sendResponse(response.capture(), listener.capture());
+
+            listener.getValue().onResponse(null);
+            assertThat(response.getValue().containsHeader(DefaultRestChannel.CONNECTION), is(false));
+            verify(httpChannel, never()).close();
+
+            httpChannel = mock(HttpChannel.class);
+            transport.gracefullyCloseConnections();
+            transport.incomingRequest(httpRequest, httpChannel);
+            verify(httpChannel).sendResponse(response.capture(), listener.capture());
+
+            listener.getValue().onResponse(null);
+            assertThat(response.getValue().headers().get(DefaultRestChannel.CONNECTION), containsInAnyOrder(DefaultRestChannel.CLOSE));
+            verify(httpChannel).close();
+        }
+    }
+
     private static RestResponse emptyResponse(RestStatus status) {
         return new RestResponse(status, RestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY);
     }

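The new `testSetGracefulClose` above exercises draining of HTTP connections: before `gracefullyCloseConnections()` is called, responses carry no `Connection` header and the channel stays open; afterwards every response carries `Connection: close` and the channel is closed once the response listener completes. A minimal, self-contained sketch of that behaviour follows; the class below is invented for illustration and is not the Elasticsearch implementation.

import java.util.List;
import java.util.Map;

// Self-contained sketch of the behaviour asserted by testSetGracefulClose above.
// Invented for illustration; not the real AbstractHttpServerTransport.
public class GracefulCloseSketch {

    record Reply(Map<String, List<String>> headers, boolean closeChannelAfterResponse) {}

    private volatile boolean draining = false;

    void gracefullyCloseConnections() {
        draining = true; // from now on, tell clients to stop reusing connections
    }

    Reply handleRequest() {
        return draining
            ? new Reply(Map.of("connection", List.of("close")), true) // draining: close after replying
            : new Reply(Map.of(), false);                             // normal: keep the connection open
    }

    public static void main(String[] args) {
        GracefulCloseSketch transport = new GracefulCloseSketch();

        Reply normal = transport.handleRequest();
        assert normal.headers().get("connection") == null;  // no Connection header normally
        assert !normal.closeChannelAfterResponse();         // channel stays open

        transport.gracefullyCloseConnections();

        Reply whileDraining = transport.handleRequest();
        assert whileDraining.headers().get("connection").contains("close");
        assert whileDraining.closeChannelAfterResponse();   // channel is closed after the response
    }
}
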
@@ -172,7 +172,8 @@ public class DefaultRestChannelTests extends ESTestCase {
             threadPool.getThreadContext(),
             CorsHandler.fromSettings(settings),
             httpTracer,
-            tracer
+            tracer,
+            false
         );
         RestResponse resp = testRestResponse();
         final String customHeader = "custom-header";
@@ -192,6 +193,65 @@ public class DefaultRestChannelTests extends ESTestCase {
         assertEquals(resp.contentType(), headers.get(DefaultRestChannel.CONTENT_TYPE).get(0));
     }
 
+    public void testCloseConnection() {
+        Settings settings = Settings.builder().build();
+        final TestHttpRequest httpRequest = new TestHttpRequest(HttpRequest.HttpVersion.HTTP_1_1, RestRequest.Method.GET, "/");
+        final RestRequest request = RestRequest.request(parserConfig(), httpRequest, httpChannel);
+        HttpHandlingSettings handlingSettings = HttpHandlingSettings.fromSettings(settings);
+        // send a response
+        DefaultRestChannel channel = new DefaultRestChannel(
+            httpChannel,
+            httpRequest,
+            request,
+            bigArrays,
+            handlingSettings,
+            threadPool.getThreadContext(),
+            CorsHandler.fromSettings(settings),
+            httpTracer,
+            tracer,
+            true
+        );
+
+        RestResponse resp = testRestResponse();
+        channel.sendResponse(resp);
+        // inspect what was written
+        ArgumentCaptor<TestHttpResponse> responseCaptor = ArgumentCaptor.forClass(TestHttpResponse.class);
+        verify(httpChannel).sendResponse(responseCaptor.capture(), any());
+        TestHttpResponse httpResponse = responseCaptor.getValue();
+        Map<String, List<String>> headers = httpResponse.headers();
+        assertThat(headers.get(DefaultRestChannel.CONNECTION), containsInAnyOrder(DefaultRestChannel.CLOSE));
+    }
+
+    public void testNormallyNoConnectionClose() {
+        Settings settings = Settings.builder().build();
+        final TestHttpRequest httpRequest = new TestHttpRequest(HttpRequest.HttpVersion.HTTP_1_1, RestRequest.Method.GET, "/");
+        final RestRequest request = RestRequest.request(parserConfig(), httpRequest, httpChannel);
+        HttpHandlingSettings handlingSettings = HttpHandlingSettings.fromSettings(settings);
+        // send a response
+        DefaultRestChannel channel = new DefaultRestChannel(
+            httpChannel,
+            httpRequest,
+            request,
+            bigArrays,
+            handlingSettings,
+            threadPool.getThreadContext(),
+            CorsHandler.fromSettings(settings),
+            httpTracer,
+            tracer,
+            false
+        );
+
+        RestResponse resp = testRestResponse();
+        channel.sendResponse(resp);
+
+        ArgumentCaptor<TestHttpResponse> responseCaptor = ArgumentCaptor.forClass(TestHttpResponse.class);
+        verify(httpChannel).sendResponse(responseCaptor.capture(), any());
+
+        TestHttpResponse httpResponse = responseCaptor.getValue();
+        Map<String, List<String>> headers = httpResponse.headers();
+        assertNull(headers.get(DefaultRestChannel.CONNECTION));
+    }
+
     public void testCookiesSet() {
         Settings settings = Settings.builder().put(HttpTransportSettings.SETTING_HTTP_RESET_COOKIES.getKey(), true).build();
         final TestHttpRequest httpRequest = new TestHttpRequest(HttpRequest.HttpVersion.HTTP_1_1, RestRequest.Method.GET, "/");
@@ -209,7 +269,8 @@ public class DefaultRestChannelTests extends ESTestCase {
             threadPool.getThreadContext(),
             CorsHandler.fromSettings(settings),
             httpTracer,
-            tracer
+            tracer,
+            false
         );
         channel.sendResponse(testRestResponse());
 
@@ -238,7 +299,8 @@ public class DefaultRestChannelTests extends ESTestCase {
             threadPool.getThreadContext(),
             CorsHandler.fromSettings(settings),
             httpTracer,
-            tracer
+            tracer,
+            false
         );
         final RestResponse response = new RestResponse(
             RestStatus.INTERNAL_SERVER_ERROR,
@@ -306,7 +368,8 @@ public class DefaultRestChannelTests extends ESTestCase {
             threadPool.getThreadContext(),
             CorsHandler.fromSettings(settings),
             httpTracer,
-            tracer
+            tracer,
+            false
         );
         channel.sendResponse(testRestResponse());
         Class<ActionListener<Void>> listenerClass = (Class<ActionListener<Void>>) (Class) ActionListener.class;
@@ -338,7 +401,8 @@ public class DefaultRestChannelTests extends ESTestCase {
             threadPool.getThreadContext(),
             CorsHandler.fromSettings(Settings.EMPTY),
             httpTracer,
-            tracer
+            tracer,
+            false
         );
         doAnswer(invocationOnMock -> {
             ActionListener<?> listener = invocationOnMock.getArgument(1);
@@ -385,7 +449,8 @@ public class DefaultRestChannelTests extends ESTestCase {
             threadPool.getThreadContext(),
             CorsHandler.fromSettings(Settings.EMPTY),
             httpTracer,
-            tracer
+            tracer,
+            false
         );
 
         // ESTestCase#after will invoke ensureAllArraysAreReleased which will fail if the response content was not released
@@ -432,7 +497,8 @@ public class DefaultRestChannelTests extends ESTestCase {
             threadPool.getThreadContext(),
             CorsHandler.fromSettings(Settings.EMPTY),
             httpTracer,
-            tracer
+            tracer,
+            false
         );
 
         // ESTestCase#after will invoke ensureAllArraysAreReleased which will fail if the response content was not released
@@ -481,7 +547,8 @@ public class DefaultRestChannelTests extends ESTestCase {
             threadPool.getThreadContext(),
             CorsHandler.fromSettings(Settings.EMPTY),
             httpTracer,
-            tracer
+            tracer,
+            false
         );
         ArgumentCaptor<HttpResponse> requestCaptor = ArgumentCaptor.forClass(HttpResponse.class);
         {
@@ -541,7 +608,8 @@ public class DefaultRestChannelTests extends ESTestCase {
             threadPool.getThreadContext(),
             new CorsHandler(CorsHandler.buildConfig(Settings.EMPTY)),
             new HttpTracer(),
-            tracer
+            tracer,
+            false
         );
 
         final MockLogAppender sendingResponseMockLog = new MockLogAppender();
@@ -603,7 +671,8 @@ public class DefaultRestChannelTests extends ESTestCase {
             threadPool.getThreadContext(),
             new CorsHandler(CorsHandler.buildConfig(Settings.EMPTY)),
             new HttpTracer(),
-            tracer
+            tracer,
+            false
         );
 
         MockLogAppender mockLogAppender = new MockLogAppender();
@@ -659,7 +728,8 @@ public class DefaultRestChannelTests extends ESTestCase {
             threadPool.getThreadContext(),
             CorsHandler.fromSettings(Settings.EMPTY),
             new HttpTracer(),
-            tracer
+            tracer,
+            false
        );
 
         var responseBody = new BytesArray(randomUnicodeOfLengthBetween(1, 100).getBytes(StandardCharsets.UTF_8));
@@ -729,7 +799,8 @@ public class DefaultRestChannelTests extends ESTestCase {
             threadPool.getThreadContext(),
             new CorsHandler(CorsHandler.buildConfig(settings)),
             httpTracer,
-            tracer
+            tracer,
+            false
         );
         channel.sendResponse(testRestResponse());
 

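Apart from the two new tests, every hunk in this file is mechanical: each pre-existing `DefaultRestChannel` construction gains a trailing `false`, so current behaviour is unchanged. Judging from `testCloseConnection` and `testNormallyNoConnectionClose`, the extra boolean decides whether the channel appends `Connection: close` to the responses it sends. A hedged, self-contained sketch of that flag; the class below is invented for illustration and is not the real `DefaultRestChannel`.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Invented illustration of the new constructor flag; not the real DefaultRestChannel.
public class CloseFlagChannelSketch {

    private final boolean closeConnection;

    CloseFlagChannelSketch(boolean closeConnection) {
        this.closeConnection = closeConnection;
    }

    Map<String, List<String>> responseHeaders() {
        Map<String, List<String>> headers = new HashMap<>();
        if (closeConnection) {
            // mirrors the assertion in testCloseConnection above
            headers.put("connection", List.of("close"));
        }
        return headers;
    }

    public static void main(String[] args) {
        assert new CloseFlagChannelSketch(true).responseHeaders().get("connection").contains("close");
        assert new CloseFlagChannelSketch(false).responseHeaders().get("connection") == null;
    }
}
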
@@ -207,7 +207,7 @@ public abstract class NumberFieldMapperTests extends MapperTestCase {
         }
     }
 
-    protected void testNullValue() throws IOException {
+    public void testNullValue() throws IOException {
         DocumentMapper mapper = createDocumentMapper(fieldMapping(this::minimalMapping));
         SourceToParse source = source(b -> b.nullField("field"));
         ParsedDocument doc = mapper.parse(source);
@@ -220,14 +220,15 @@ public abstract class NumberFieldMapperTests extends MapperTestCase {
         }));
         doc = mapper.parse(source);
         List<IndexableField> fields = doc.rootDoc().getFields("field");
-        assertEquals(2, fields.size());
-        IndexableField pointField = fields.get(0);
-        assertEquals(1, pointField.fieldType().pointIndexDimensionCount());
-        assertFalse(pointField.fieldType().stored());
-        assertEquals(123, pointField.numericValue().doubleValue(), 0d);
-        IndexableField dvField = fields.get(1);
-        assertEquals(DocValuesType.SORTED_NUMERIC, dvField.fieldType().docValuesType());
-        assertFalse(dvField.fieldType().stored());
+        List<IndexableField> pointFields = fields.stream().filter(f -> f.fieldType().pointIndexDimensionCount() != 0).toList();
+        assertEquals(1, pointFields.size());
+        assertEquals(1, pointFields.get(0).fieldType().pointIndexDimensionCount());
+        assertFalse(pointFields.get(0).fieldType().stored());
+
+        List<IndexableField> dvFields = fields.stream().filter(f -> f.fieldType().docValuesType() != DocValuesType.NONE).toList();
+        assertEquals(1, dvFields.size());
+        assertEquals(DocValuesType.SORTED_NUMERIC, dvFields.get(0).fieldType().docValuesType());
+        assertFalse(dvFields.get(0).fieldType().stored());
     }
 
     public void testOutOfRangeValues() throws IOException {

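The reworked assertions above no longer depend on the order in which the mapper emits Lucene fields; they select the point field and the doc-values field by type. The same pattern is shown standalone below with plain Lucene field classes; this is illustrative only and is not the mapper's actual output.

import java.util.List;

import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.index.DocValuesType;
import org.apache.lucene.index.IndexableField;

// Standalone demo of selecting indexed fields by type instead of by position in the list.
public class FieldFilterSketch {
    public static void main(String[] args) {
        List<IndexableField> fields = List.of(
            new LongPoint("field", 123L),                   // indexed as a point
            new SortedNumericDocValuesField("field", 123L)  // indexed as doc values
        );

        List<IndexableField> pointFields = fields.stream()
            .filter(f -> f.fieldType().pointIndexDimensionCount() != 0)
            .toList();
        List<IndexableField> dvFields = fields.stream()
            .filter(f -> f.fieldType().docValuesType() != DocValuesType.NONE)
            .toList();

        assert pointFields.size() == 1;
        assert dvFields.size() == 1;
        assert dvFields.get(0).fieldType().docValuesType() == DocValuesType.SORTED_NUMERIC;
    }
}
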
@@ -302,7 +302,7 @@ public class RemoteClusterConnectionTests extends ESTestCase {
         service.acceptIncomingRequests();
         String clusterAlias = "test-cluster";
         Settings settings = buildRandomSettings(clusterAlias, seedNodes);
-        try (RemoteClusterConnection connection = new RemoteClusterConnection(settings, clusterAlias, service, randomBoolean())) {
+        try (RemoteClusterConnection connection = new RemoteClusterConnection(settings, clusterAlias, service, false)) {
             int numThreads = randomIntBetween(4, 10);
             Thread[] threads = new Thread[numThreads];
             CyclicBarrier barrier = new CyclicBarrier(numThreads + 1);

@@ -8,7 +8,7 @@
 Submits a SAML `Response` message to {es} for consumption.
 
 NOTE: This API is intended for use by custom web applications other than {kib}.
-If you are using {kib}, see the <<saml-guide>>.
+If you are using {kib}, see the <<saml-guide-stack>>.
 
 [[security-api-saml-authenticate-request]]
 ==== {api-request-title}
@@ -8,7 +8,7 @@
 Verifies the logout response sent from the SAML IdP.
 
 NOTE: This API is intended for use by custom web applications other than {kib}.
-If you are using {kib}, see the <<saml-guide>>.
+If you are using {kib}, see the <<saml-guide-stack>>.
 
 [[security-api-saml-complete-logout-request]]
 ==== {api-request-title}
@@ -8,7 +8,7 @@
 Submits a SAML LogoutRequest message to {es} for consumption.
 
 NOTE: This API is intended for use by custom web applications other than {kib}.
-If you are using {kib}, see the <<saml-guide>>.
+If you are using {kib}, see the <<saml-guide-stack>>.
 
 [[security-api-saml-invalidate-request]]
 ==== {api-request-title}
@@ -8,7 +8,7 @@
 Submits a request to invalidate an access token and refresh token.
 
 NOTE: This API is intended for use by custom web applications other than {kib}.
-If you are using {kib}, see the <<saml-guide>>.
+If you are using {kib}, see the <<saml-guide-stack>>.
 
 [[security-api-saml-logout-request]]
 ==== {api-request-title}
@@ -8,7 +8,7 @@
 Creates a SAML authentication request (`<AuthnRequest>`) as a URL string, based on the configuration of the respective SAML realm in {es}.
 
 NOTE: This API is intended for use by custom web applications other than {kib}.
-If you are using {kib}, see the <<saml-guide>>.
+If you are using {kib}, see the <<saml-guide-stack>>.
 
 [[security-api-saml-prepare-authentication-request]]
 ==== {api-request-title}

@@ -39,9 +39,9 @@ This API supports the following fields:
 
 | `query` | no | null | Optional, <<query-dsl,query>> filter watches to be returned.
 
-| `sort` | no | null | Optional <<search-request-sort,sort definition>>.
+| `sort` | no | null | Optional <<sort-search-results,sort definition>>.
 
-| `search_after` | no | null | Optional <<search-request-search-after,search After>> to do pagination
+| `search_after` | no | null | Optional <<search-after,search After>> to do pagination
 using last hit's sort values.
 |======
 

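The two corrected links point at the standard search `sort` and `search_after` mechanics, which is also how this API pages through watches: sort the results, then feed the sort values of the last hit back as `search_after` in the next request. A hedged sketch of such a pair of request bodies, built as plain Java maps; the sort key and the sort value are made-up examples, not values prescribed by the API.

import java.util.List;
import java.util.Map;

// Illustrative pagination bodies for the query-watches API documented above.
// Field names (sort, search_after, size) come from the table; the values are examples.
public class QueryWatchesPagingSketch {
    public static void main(String[] args) {
        Map<String, Object> firstPage = Map.of(
            "size", 10,
            "sort", List.of(Map.of("_id", "asc"))
        );

        // sort values of the last hit returned by the previous page (example value)
        List<Object> lastHitSortValues = List.of("my_watch_42");

        Map<String, Object> nextPage = Map.of(
            "size", 10,
            "sort", List.of(Map.of("_id", "asc")),
            "search_after", lastHitSortValues
        );

        System.out.println(firstPage);
        System.out.println(nextPage);
    }
}
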
Some files were not shown because too many files have changed in this diff.