[DOCS] Remove redirect pages (#88738)

* [DOCS] Remove manual redirects

* [DOCS] Removed refs to modules-discovery-hosts-providers

* [DOCS] Fixed broken internal refs

* Fixing bad cross links in ES book, and adding redirects.asciidoc[] back into docs/reference/index.asciidoc.

* Update docs/reference/search/point-in-time-api.asciidoc

Co-authored-by: James Rodewig <james.rodewig@elastic.co>

* Update docs/reference/setup/restart-cluster.asciidoc

Co-authored-by: James Rodewig <james.rodewig@elastic.co>

* Update docs/reference/sql/endpoints/translate.asciidoc

Co-authored-by: James Rodewig <james.rodewig@elastic.co>

* Update docs/reference/snapshot-restore/restore-snapshot.asciidoc

Co-authored-by: James Rodewig <james.rodewig@elastic.co>

* Update repository-azure.asciidoc

* Update node-tool.asciidoc

* Update repository-azure.asciidoc

---------

Co-authored-by: amyjtechwriter <61687663+amyjtechwriter@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
Co-authored-by: Amy Jonsson <amy.jonsson@elastic.co>
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
debadair 2023-05-24 04:32:46 -07:00 committed by GitHub
parent 52ed03c975
commit 777598d602
GPG key ID: 4AEE18F83AFDEB23
92 changed files with 135 additions and 143 deletions

View file

@@ -28,7 +28,7 @@ Each document in the `seat` data contains the following fields:
 The date and time of the play as a date object.
 ==== Prerequisites
-Start an {ref}/getting-started-install.html[{es} instance], and then access the
+Start an {ref}/configuring-stack-security.html[{es} instance], and then access the
 {kibana-ref}/console-kibana.html[Console] in {kib}.
 ==== Configure the `seat` sample data

View file

@@ -47,9 +47,6 @@ Use `bin/elasticsearch-plugin install file:///path/to/your/plugin`
 to install your plugin for testing. The Java plugin is auto-loaded only if it's in the
 `plugins/` directory.
-You may also load your plugin within the test framework for integration tests.
-Check {ref}/integration-tests.html#changing-node-configuration[Changing Node Configuration] for more information.
 [discrete]
 [[plugin-authors-jsm]]
 ==== Java Security permissions

View file

@@ -2,7 +2,7 @@
 === EC2 Discovery plugin
 The EC2 discovery plugin provides a list of seed addresses to the
-{ref}/modules-discovery-hosts-providers.html[discovery process] by querying the
+{ref}/discovery-hosts-providers.html[discovery process] by querying the
 https://github.com/aws/aws-sdk-java[AWS API] for a list of EC2 instances
 matching certain criteria determined by the <<discovery-ec2-usage,plugin
 settings>>.

View file

@@ -2,7 +2,7 @@
 === Hadoop HDFS repository plugin
 The HDFS repository plugin adds support for using HDFS File System as a repository for
-{ref}/modules-snapshots.html[Snapshot/Restore].
+{ref}/snapshot-restore.html[Snapshot/Restore].
 :plugin_name: repository-hdfs
 include::install_remove.asciidoc[]
@@ -23,7 +23,7 @@ plugin folder and point `HADOOP_HOME` variable to it; this should minimize the a
 ==== Configuration properties
 Once installed, define the configuration for the `hdfs` repository through the
-{ref}/modules-snapshots.html[REST API]:
+{ref}/snapshot-restore.html[REST API]:
 [source,console]
 ----

View file

@@ -1,7 +1,7 @@
 [[repository]]
 == Snapshot/restore repository plugins
-Repository plugins extend the {ref}/modules-snapshots.html[Snapshot/Restore]
+Repository plugins extend the {ref}/snapshot-restore.html[Snapshot/Restore]
 functionality in Elasticsearch by adding repositories backed by the cloud or
 by distributed file systems:

View file

@@ -107,7 +107,7 @@ or <<binary, `binary`>>.
 NOTE: By default, you cannot run a `terms` aggregation on a `text` field. Use a
 `keyword` <<multi-fields,sub-field>> instead. Alternatively, you can enable
-<<fielddata,`fielddata`>> on the `text` field to create buckets for the field's
+<<fielddata-mapping-param,`fielddata`>> on the `text` field to create buckets for the field's
 <<analysis,analyzed>> terms. Enabling `fielddata` can significantly increase
 memory usage.

View file

@@ -81,7 +81,7 @@ hard-linked files.
 `disk.avail`::
 Free disk space available to {es}. {es} retrieves this metric from the node's
-OS. <<disk-allocator,Disk-based shard allocation>> uses this metric to assign
+OS. <<disk-based-shard-allocation,Disk-based shard allocation>> uses this metric to assign
 shards to nodes based on available disk space.
 `disk.total`::

View file

@@ -135,7 +135,7 @@ measurements.
 [[cat-recovery-api-ex-snapshot]]
 ===== Example with a snapshot recovery
-You can restore backups of an index using the <<modules-snapshots,snapshot and
+You can restore backups of an index using the <<snapshot-restore,snapshot and
 restore>> API. You can use the cat recovery API retrieve information about a
 snapshot recovery.

View file

@@ -11,7 +11,7 @@ console. They are _not_ intended for use by applications. For application
 consumption, use the <<get-snapshot-repo-api,get snapshot repository API>>.
 ====
-Returns the <<snapshots-repositories,snapshot repositories>> for a cluster.
+Returns the <<snapshots-register-repository,snapshot repositories>> for a cluster.
 [[cat-repositories-api-request]]

View file

@@ -11,7 +11,7 @@ console. They are _not_ intended for use by applications. For application
 consumption, use the <<get-snapshot-api,get snapshot API>>.
 ====
-Returns information about the <<modules-snapshots,snapshots>> stored in one or
+Returns information about the <<snapshot-restore,snapshots>> stored in one or
 more repositories. A snapshot is a backup of an index or running {es} cluster.

View file

@@ -31,7 +31,7 @@ When the {es} keystore is password protected and not simply obfuscated, you must
 provide the password for the keystore when you reload the secure settings.
 Reloading the settings for the whole cluster assumes that all nodes' keystores
 are protected with the same password; this method is allowed only when
-<<tls-transport,inter-node communications are encrypted>>. Alternatively, you can
+<<encrypt-internode-communication,inter-node communications are encrypted>>. Alternatively, you can
 reload the secure settings on each node by locally accessing the API and passing
 the node-specific {es} keystore password.
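
For reference, reloading secure settings on a password-protected keystore is a single API call (illustrative sketch; the password value is a placeholder):

[source,console]
----
POST _nodes/reload_secure_settings
{
  "secure_settings_password": "keystore-password"
}
----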

View file

@@ -1294,7 +1294,7 @@ Number of selected nodes using the HTTP type.
 `discovery_types`::
 (object)
-Contains statistics about the <<modules-discovery-hosts-providers,discovery
+Contains statistics about the <<discovery-hosts-providers,discovery
 types>> used by selected nodes.
 +
 .Properties of `discovery_types`
@@ -1302,7 +1302,7 @@ types>> used by selected nodes.
 =====
 `<discovery_type>`::
 (integer)
-Number of selected nodes using the <<modules-discovery-hosts-providers,discovery
+Number of selected nodes using the <<discovery-hosts-providers,discovery
 type>> to find other nodes.
 =====

View file

@@ -1,7 +1,7 @@
 [[elasticsearch-croneval]]
 == elasticsearch-croneval
-Validates and evaluates a <<cron-expressions,cron expression>>.
+Validates and evaluates a <<api-cron-expressions,cron expression>>.
 [discrete]
 === Synopsis
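
For context, a typical invocation validates an expression and prints its next execution times (illustrative; the expression and count are arbitrary examples):

[source,shell]
----
bin/elasticsearch-croneval "0 0/1 * * * ?" -c 5
----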

View file

@@ -284,12 +284,11 @@ unsafely-bootstrapped cluster.
 Unsafe cluster bootstrapping is only possible if there is at least one
 surviving master-eligible node. If there are no remaining master-eligible nodes
 then the cluster metadata is completely lost. However, the individual data
-nodes also contain a copy of the index metadata corresponding with their
-shards. It is therefore sometimes possible to manually import these shards as
-<<dangling-indices,dangling indices>>. For example you can sometimes recover some
-indices after the loss of all master-eligible nodes in a cluster by creating a new
-cluster and then using the `elasticsearch-node detach-cluster` command to move any
-surviving nodes into this new cluster. Once the new cluster is fully formed,
+nodes also contain a copy of the index metadata corresponding with their
+shards. This sometimes allows a new cluster to import these shards as
+<<dangling-indices,dangling indices>>. You can sometimes recover some
+indices after the loss of all master-eligible nodes in a cluster by creating a
+new cluster and then using the `elasticsearch-node detach-cluster` command to
+move any surviving nodes into this new cluster. Once the new cluster is fully formed,
 use the <<dangling-indices-api,Dangling indices API>> to list, import or delete
 any dangling indices.
@@ -317,7 +316,7 @@ cluster formed as described above.
 below. Verify that the tool reported `Node was successfully detached from the
 cluster`.
 5. If necessary, configure each data node to
-<<modules-discovery-hosts-providers,discover the new cluster>>.
+<<discovery-hosts-providers,discover the new cluster>>.
 6. Start each data node and verify that it has joined the new cluster.
 7. Wait for all recoveries to have completed, and investigate the data in the
 cluster to discover if any was lost during this process. Use the

View file

@@ -231,7 +231,7 @@ participate in the `_bulk` request at all.
 [[bulk-security]]
 ===== Security
-See <<url-access-control>>.
+See <<api-url-access-control>>.
 [[docs-bulk-api-path-params]]
 ==== {api-path-parms-title}

View file

@@ -46,7 +46,7 @@ If you specify an index in the request URI, you only need to specify the documen
 [[mget-security]]
 ===== Security
-See <<url-access-control>>.
+See <<api-url-access-control>>.
 [[multi-get-partial-responses]]
 ===== Partial responses

View file

@@ -73,7 +73,7 @@ See <<run-eql-search-across-clusters>>.
 (Optional, Boolean)
 +
 NOTE: This parameter's behavior differs from the `allow_no_indices` parameter
-used in other <<multi-index,multi-target APIs>>.
+used in other <<api-multi-index,multi-target APIs>>.
 +
 If `false`, the request returns an error if any wildcard pattern, alias, or
 `_all` value targets only missing or closed indices. This behavior applies even

View file

@@ -69,7 +69,7 @@ cluster can report a `green` status, override the default by setting
 <<dynamic-index-settings,`index.number_of_replicas`>> to `0` on every index.
 If the node fails, you may need to restore an older copy of any lost indices
-from a <<modules-snapshots,snapshot>>.
+from a <<snapshot-restore,snapshot>>.
 Because they are not resilient to any failures, we do not recommend using
 one-node clusters in production.
@@ -281,7 +281,7 @@ cluster when handling such a failure.
 For resilience against whole-zone failures, it is important that there is a copy
 of each shard in more than one zone, which can be achieved by placing data
-nodes in multiple zones and configuring <<allocation-awareness,shard allocation
+nodes in multiple zones and configuring <<shard-allocation-awareness,shard allocation
 awareness>>. You should also ensure that client requests are sent to nodes in
 more than one zone.
@@ -334,7 +334,7 @@ tiebreaker need not be as powerful as the other two nodes since it has no other
 roles and will not perform any searches nor coordinate any client requests nor
 be elected as the master of the cluster.
-You should use <<allocation-awareness,shard allocation awareness>> to ensure
+You should use <<shard-allocation-awareness,shard allocation awareness>> to ensure
 that there is a copy of each shard in each zone. This means either zone remains
 fully available if the other zone fails.
@@ -359,7 +359,7 @@ mean that the cluster can still elect a master even if one of the zones fails.
 As always, your indices should have at least one replica in case a node fails,
 unless they are <<searchable-snapshots,searchable snapshot indices>>. You
-should also use <<allocation-awareness,shard allocation awareness>> to limit
+should also use <<shard-allocation-awareness,shard allocation awareness>> to limit
 the number of copies of each shard in each zone. For instance, if you have an
 index with one or two replicas configured then allocation awareness will ensure
 that the replicas of the shard are in a different zone from the primary. This

View file

@@ -181,7 +181,7 @@ For high-cardinality `text` fields, fielddata can use a large amount of JVM
 memory. To avoid this, {es} disables fielddata on `text` fields by default. If
 you've enabled fielddata and triggered the <<fielddata-circuit-breaker,fielddata
 circuit breaker>>, consider disabling it and using a `keyword` field instead.
-See <<fielddata>>.
+See <<fielddata-mapping-param>>.
 **Clear the fieldata cache**

View file

@@ -107,7 +107,7 @@ that it will increase the risk of failure since the failure of any one SSD
 destroys the index. However this is typically the right tradeoff to make:
 optimize single shards for maximum performance, and then add replicas across
 different nodes so there's redundancy for any node failures. You can also use
-<<modules-snapshots,snapshot and restore>> to backup the index for further
+<<snapshot-restore,snapshot and restore>> to backup the index for further
 insurance.
 Directly-attached (local) storage generally performs better than remote storage

View file

@@ -93,7 +93,7 @@ Use {kib}'s **Dashboard** feature to visualize your data in a chart, table, map,
 and more. See {kib}'s {kibana-ref}/dashboard.html[Dashboard documentation].
 You can also search and aggregate your data using the <<search-search,search
-API>>. Use <<runtime-search-request,runtime fields>> and <<grok-basics,grok
+API>>. Use <<runtime-search-request,runtime fields>> and <<grok,grok
 patterns>> to dynamically extract data from log messages and other unstructured
 content at search time.

View file

@@ -47,7 +47,7 @@ to use {ilm-init} for new data.
 [[ilm-existing-indices-reindex]]
 === Reindex into a managed index
-An alternative to <<ilm-with-existing-periodic-indices,applying policies to existing indices>> is to
+An alternative to <<ilm-existing-indices-apply,applying policies to existing indices>> is to
 reindex your data into an {ilm-init}-managed index.
 You might want to do this if creating periodic indices with very small amounts of data
 has led to excessive shard counts, or if continually indexing into the same index has led to large shards

View file

@@ -12,7 +12,7 @@ These actions are intended to protect the cluster against data loss by
 ensuring that every shard is fully replicated as soon as possible.
 Even though we throttle concurrent recoveries both at the
-<<recovery,node level>> and at the <<shards-allocation,cluster level>>, this
+<<recovery,node level>> and at the <<cluster-shard-allocation-settings,cluster level>>, this
 ``shard-shuffle'' can still put a lot of extra load on the cluster which
 may not be necessary if the missing node is likely to return soon. Imagine
 this scenario:

View file

@@ -77,4 +77,4 @@ include::release-notes.asciidoc[]
 include::dependencies-versions.asciidoc[]
 include::redirects.asciidoc[]

View file

@@ -36,7 +36,7 @@ or indices.
 `<alias>`::
 (Required, string) Alias to update. If the alias doesn't exist, the request
-creates it. Index alias names support <<date-math-index-names,date math>>.
+creates it. Index alias names support <<api-date-math-index-names,date math>>.
 `<target>`::
 (Required, string) Comma-separated list of data streams or indices to add.

View file

@@ -79,14 +79,14 @@ The object body contains options for the alias. Supports an empty object.
 =====
 `alias`::
 (Required*, string) Alias for the action. Index alias names support
-<<date-math-index-names,date math>>. If `aliases` is not specified, the `add`
+<<api-date-math-index-names,date math>>. If `aliases` is not specified, the `add`
 and `remove` actions require this parameter. For the `remove` action, this
 parameter supports wildcards (`*`). The `remove_index` action doesn't support
 this parameter.
 `aliases`::
 (Required*, array of strings) Aliases for the action. Index alias names support
-<<date-math-index-names,date math>>. If `alias` is not specified, the `add` and
+<<api-date-math-index-names,date math>>. If `alias` is not specified, the `add` and
 `remove` actions require this parameter. For the `remove` action, this parameter
 supports wildcards (`*`). The `remove_index` action doesn't support this
 parameter.
@@ -122,7 +122,7 @@ Only the `add` action supports this parameter.
 // tag::alias-options[]
 `is_hidden`::
-(Optional, Boolean) If `true`, the alias is <<hidden,hidden>>. Defaults to
+(Optional, Boolean) If `true`, the alias is <<multi-hidden,hidden>>. Defaults to
 `false`. All data streams or indices for the alias must have the same
 `is_hidden` value.
 // end::alias-options[]
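
For reference, a minimal aliases request that sets this option might look like the following (illustrative; the index and alias names are invented):

[source,console]
----
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "my-index-2099.05.06-000001",
        "alias": "my-alias",
        "is_hidden": true
      }
    }
  ]
}
----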

View file

@@ -78,7 +78,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
 =======
 `<alias>`::
 (Required, object) The key is the alias name. Index alias names support
-<<date-math-index-names,date math>>.
+<<api-date-math-index-names,date math>>.
 +
 The object body contains options for the alias. Supports an empty object.
 +
@@ -94,7 +94,7 @@ alias can access.
 If specified, this overwrites the `routing` value for indexing operations.
 `is_hidden`::
-(Optional, Boolean) If `true`, the alias is <<hidden,hidden>>. Defaults to
+(Optional, Boolean) If `true`, the alias is <<multi-hidden,hidden>>. Defaults to
 `false`. All indices for the alias must have the same `is_hidden` value.
 `is_write_index`::
@@ -204,7 +204,7 @@ PUT /test
 }
 --------------------------------------------------
-Index alias names also support <<date-math-index-names,date math>>.
+Index alias names also support <<api-date-math-index-names,date math>>.
 [source,console]
 ----

View file

@@ -205,7 +205,7 @@ policies. To retrieve the lifecycle policy for individual backing indices,
 use the <<indices-get-settings,get index settings API>>.
 `hidden`::
-(Boolean) If `true`, the data stream is <<hidden,hidden>>.
+(Boolean) If `true`, the data stream is <<multi-hidden,hidden>>.
 `system`::
 (Boolean)

View file

@@ -108,7 +108,7 @@ See <<create-index-template,create an index template>>.
 <<mapping-routing-field,custom routing>>. Defaults to `false`.
 `hidden`::
-(Optional, Boolean) If `true`, the data stream is <<hidden,hidden>>. Defaults to
+(Optional, Boolean) If `true`, the data stream is <<multi-hidden,hidden>>. Defaults to
 `false`.
 `index_mode`::

View file

@@ -75,7 +75,7 @@ index's name.
 .Use date math with index alias rollovers
 ****
 If you use an index alias for time series data, you can use
-<<date-math-index-names,date math>> in the index name to track the rollover
+<<api-date-math-index-names,date math>> in the index name to track the rollover
 date. For example, you can create an alias that points to an index named
 `<my-index-{now/d}-000001>`. If you create the index on May 6, 2099, the index's
 name is `my-index-2099.05.06-000001`. If you roll over the alias on May 7, 2099,
@@ -98,7 +98,7 @@ Name of the data stream or index alias to roll over.
 `<target-index>`::
 (Optional, string)
-Name of the index to create. Supports <<date-math-index-names,date math>>. Data
+Name of the index to create. Supports <<api-date-math-index-names,date math>>. Data
 streams do not support this parameter.
 +
 If the name of the alias's current write index does not end with `-` and a

View file

@@ -112,7 +112,7 @@ access.
 overwrites the `routing` value for indexing operations.
 `is_hidden`::
-(Boolean) If `true`, the alias is <<hidden,hidden>>.
+(Boolean) If `true`, the alias is <<multi-hidden,hidden>>.
 `is_write_index`::
 (Boolean) If `true`, the index is the <<write-index,write index>> for the alias.

View file

@@ -861,7 +861,7 @@ PUT _ingest/pipeline/my-pipeline
 }
 ----
-You can also specify a <<modules-scripting-stored-scripts,stored script>> as the
+You can also specify a <<script-stored-scripts,stored script>> as the
 `if` condition.
 [source,console]

View file

@@ -34,7 +34,7 @@ pipeline.
 .. Click **Add a processor** and select the **Grok** processor type.
 .. Set **Field** to `message` and **Patterns** to the following
-<<grok-basics,grok pattern>>:
+<<grok,grok pattern>>:
 +
 [source,grok]
 ----

View file

@@ -5,7 +5,7 @@
 ++++
 The purpose of this processor is to point documents to the right time based index based
-on a date or timestamp field in a document by using the <<date-math-index-names, date math index name support>>.
+on a date or timestamp field in a document by using the <<api-date-math-index-names, date math index name support>>.
 The processor sets the `_index` metadata field with a date math index name expression based on the provided index name
 prefix, a date or timestamp field in the documents being processed and the provided date rounding.
@@ -126,7 +126,7 @@ and the result:
 // TESTRESPONSE[s/2016-11-08T19:43:03.850\+0000/$body.docs.0.doc._ingest.timestamp/]
 The above example shows that `_index` was set to `<my-index-{2016-04-25||/M{yyyy-MM-dd|UTC}}>`. Elasticsearch
-understands this to mean `2016-04-01` as is explained in the <<date-math-index-names, date math index name documentation>>
+understands this to mean `2016-04-01` as is explained in the <<api-date-math-index-names, date math index name documentation>>
 [[date-index-name-options]]
 .Date index name options
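
For context, a `date_index_name` processor that routes documents into monthly indices might be defined like this (illustrative sketch; the pipeline and field names are invented):

[source,console]
----
PUT _ingest/pipeline/monthly-index
{
  "processors": [
    {
      "date_index_name": {
        "field": "date1",
        "index_name_prefix": "my-index-",
        "date_rounding": "M"
      }
    }
  ]
}
----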

View file

@@ -14,7 +14,7 @@ The following mapping parameters are common to some or all field data types:
 * <<dynamic,`dynamic`>>
 * <<eager-global-ordinals,`eager_global_ordinals`>>
 * <<enabled,`enabled`>>
-* <<fielddata,`fielddata`>>
+* <<fielddata-mapping-param,`fielddata`>>
 * <<multi-fields,`fields`>>
 * <<mapping-date-format,`format`>>
 * <<ignore-above,`ignore_above`>>

View file

@@ -29,7 +29,7 @@ Global ordinals are used if a search contains any of the following components:
 * Certain bucket aggregations on `keyword`, `ip`, and `flattened` fields. This
 includes `terms` aggregations as mentioned above, as well as `composite`,
 `diversified_sampler`, and `significant_terms`.
-* Bucket aggregations on `text` fields that require <<fielddata, `fielddata`>>
+* Bucket aggregations on `text` fields that require <<fielddata-mapping-param, `fielddata`>>
 to be enabled.
 * Operations on parent and child documents from a `join` field, including
 `has_child` queries and `parent` aggregations.

View file

@@ -77,7 +77,7 @@ The following parameters are accepted by `text` fields:
 (default). Enabling this is a good idea on fields that are frequently used for
 (significant) terms aggregations.
-<<fielddata,`fielddata`>>::
+<<fielddata-mapping-param,`fielddata`>>::
 Can the field use in-memory fielddata for sorting, aggregations,
 or scripting? Accepts `true` or `false` (default).

View file

@@ -403,7 +403,7 @@ If you do not want to enable SSL and are currently using other
 * Discontinue use of other `xpack.security.http.ssl` settings
 If you want to enable SSL, follow the instructions in
-{ref}/configuring-tls.html#tls-http[Encrypting HTTP client communications]. As part
+{ref}/security-basic-setup-https.html#encrypt-http-communication[Encrypting HTTP client communications]. As part
 of this configuration, explicitly specify `xpack.security.http.ssl.enabled`
 as `true`.

View file

@@ -891,7 +891,7 @@ For example:
 "ignore_throttled": true
 }
 ```
-For more information about these options, see <<multi-index>>.
+For more information about these options, see <<api-multi-index>>.
 --
 end::indices-options[]

View file

@@ -8,7 +8,7 @@ each time it changes.
 The following processes and settings are part of discovery and cluster
 formation:
-<<modules-discovery-hosts-providers>>::
+<<discovery-hosts-providers>>::
 Discovery is the process where nodes find each other when the master is
 unknown, such as when a node has just started up or when the previous
@@ -34,7 +34,7 @@ formation:
 <<dev-vs-prod-mode,production mode>> requires bootstrapping to be
 <<modules-discovery-bootstrap-cluster,explicitly configured>>.
-<<modules-discovery-adding-removing-nodes,Adding and removing master-eligible nodes>>::
+<<add-elasticsearch-nodes,Adding and removing master-eligible nodes>>::
 It is recommended to have a small and fixed number of master-eligible nodes
 in a cluster, and to scale the cluster up and down by adding and removing

View file

@@ -4,11 +4,9 @@
 Starting an Elasticsearch cluster for the very first time requires the initial
 set of <<master-node,master-eligible nodes>> to be explicitly defined on one or
 more of the master-eligible nodes in the cluster. This is known as _cluster
-bootstrapping_. This is only required the first time a cluster starts up: nodes
-that have already joined a cluster store this information in their data folder
-for use in a <<restart-upgrade,full cluster restart>>, and freshly-started nodes
-that are joining a running cluster obtain this information from the cluster's
-elected master.
+bootstrapping_. This is only required the first time a cluster starts up.
+Freshly-started nodes that are joining a running cluster obtain this
+information from the cluster's elected master.
 The initial set of master-eligible nodes is defined in the
 <<initial_master_nodes,`cluster.initial_master_nodes` setting>>. This should be

View file

@@ -187,7 +187,7 @@ considered to have failed and is removed from the cluster. See
 `cluster.max_voting_config_exclusions`::
 (<<dynamic-cluster-setting,Dynamic>>)
 Sets a limit on the number of voting configuration exclusions at any one time.
-The default value is `10`. See <<modules-discovery-adding-removing-nodes>>.
+The default value is `10`. See <<add-elasticsearch-nodes>>.
 `cluster.publish.info_timeout`::
 (<<static-cluster-setting,Static>>)

View file

@@ -15,7 +15,7 @@ those of the other piece.
 Elasticsearch allows you to add and remove master-eligible nodes to a running
 cluster. In many cases you can do this simply by starting or stopping the nodes
-as required. See <<modules-discovery-adding-removing-nodes>>.
+as required. See <<add-elasticsearch-nodes>>.
 As nodes are added or removed Elasticsearch maintains an optimal level of fault
 tolerance by updating the cluster's <<modules-discovery-voting,voting

View file

@@ -22,7 +22,7 @@ After a node joins or leaves the cluster, {es} reacts by automatically making
 corresponding changes to the voting configuration in order to ensure that the
 cluster is as resilient as possible. It is important to wait for this adjustment
 to complete before you remove more nodes from the cluster. For more information,
-see <<modules-discovery-adding-removing-nodes>>.
+see <<add-elasticsearch-nodes>>.
 The current voting configuration is stored in the cluster state so you can
 inspect its current contents as follows:

View file

@@ -411,7 +411,7 @@ Similarly, each master-eligible node maintains the following data on disk:
 Each node checks the contents of its data path at startup. If it discovers
 unexpected data then it will refuse to start. This is to avoid importing
-unwanted <<modules-gateway-dangling-indices,dangling indices>> which can lead
+unwanted <<dangling-indices,dangling indices>> which can lead
 to a red cluster health. To be more precise, nodes without the `data` role will
 refuse to start if they find any shard data on disk at startup, and nodes
 without both the `master` and `data` roles will refuse to start if they have any
@@ -424,13 +424,13 @@ must perform some extra steps to prepare a node for repurposing when starting
 the node without the `data` or `master` roles.
 * If you want to repurpose a data node by removing the `data` role then you
-should first use an <<allocation-filtering,allocation filter>> to safely
+should first use an <<cluster-shard-allocation-filtering,allocation filter>> to safely
 migrate all the shard data onto other nodes in the cluster.
 * If you want to repurpose a node to have neither the `data` nor `master` roles
 then it is simplest to start a brand-new node with an empty data path and the
 desired roles. You may find it safest to use an
-<<allocation-filtering,allocation filter>> to migrate the shard data elsewhere
+<<cluster-shard-allocation-filtering,allocation filter>> to migrate the shard data elsewhere
 in the cluster first.
 If it is not possible to follow these extra steps then you may be able to use

View file

@@ -186,7 +186,7 @@ The `transport.compress` setting always configures local cluster request
 compression and is the fallback setting for remote cluster request compression.
 If you want to configure remote request compression differently than local
 request compression, you can set it on a per-remote cluster basis using the
-<<remote-cluster-settings,`cluster.remote.${cluster_alias}.transport.compress` setting>>.
+<<remote-clusters-settings,`cluster.remote.${cluster_alias}.transport.compress` setting>>.
 [[response-compression]]

View file

@@ -222,4 +222,4 @@ document's field value.
 Unlike the <<query-dsl-function-score-query,`function_score`>> query or other
 ways to change <<relevance-scores,relevance scores>>, the
 `distance_feature` query efficiently skips non-competitive hits when the
-<<search-uri-request,`track_total_hits`>> parameter is **not** `true`.
+<<search-search,`track_total_hits`>> parameter is **not** `true`.
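
For reference, a `distance_feature` query that boosts documents near a point looks roughly like this (illustrative; the index and field names are invented):

[source,console]
----
GET /my-index/_search
{
  "query": {
    "distance_feature": {
      "field": "location",
      "pivot": "1000m",
      "origin": [ -71.3, 41.15 ]
    }
  }
}
----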

View file

@@ -9,7 +9,7 @@ By default, Elasticsearch sorts matching search results by **relevance
 score**, which measures how well each document matches a query.
 The relevance score is a positive floating point number, returned in the
-`_score` metadata field of the <<search-request-body,search>> API. The higher the
+`_score` metadata field of the <<search-search,search>> API. The higher the
 `_score`, the more relevant the document. While each query type can calculate
 relevance scores differently, score calculation also depends on whether the
 query clause is run in a **query** or **filter** context.

View file

@@ -569,7 +569,7 @@ instead. A regular flush has the same effect as a synced flush in 7.6 and later.
 [role="exclude",id="_repositories"]
 === Snapshot repositories
-See <<snapshots-repositories>>.
+See <<snapshots-register-repository>>.
 [role="exclude",id="_snapshot"]
 === Snapshot

View file

@@ -274,7 +274,7 @@ Type of data stream that wildcard patterns can match. Supports
 comma-separated values, such as `open,hidden`. Valid values are:
 `all`, `hidden`::
-Match any data stream, including <<hidden,hidden>> ones.
+Match any data stream, including <<multi-hidden,hidden>> ones.
 `open`, `closed`::
 Matches any non-hidden data stream. Data streams cannot be closed.
@@ -295,7 +295,7 @@ streams. Supports comma-separated values, such as `open,hidden`. Valid values
 are:
 `all`::
-Match any data stream or index, including <<hidden,hidden>> ones.
+Match any data stream or index, including <<multi-hidden,hidden>> ones.
 `open`::
 Match open, non-hidden indices. Also matches any non-hidden data stream.
@@ -510,7 +510,7 @@ Number of documents and deleted docs, which have not yet merged out.
 <<indices-refresh,Index refreshes>> can affect this statistic.
 `fielddata`::
-<<fielddata,Fielddata>> statistics.
+<<fielddata-mapping-param,Fielddata>> statistics.
 `flush`::
 <<indices-flush,Flush>> statistics.
@@ -554,9 +554,6 @@ Size of the index in <<byte-units, byte units>>.
 `translog`::
 <<index-modules-translog,Translog>> statistics.
-`warmer`::
-<<indices-warmers,Warmer>> statistics.
 --
 end::index-metric[]

View file

@@ -133,7 +133,7 @@ existence of the field in mappings in an `expression` script.
 ===================================================
 The `doc['field']` syntax can also be used for <<text,analyzed `text` fields>>
-if <<fielddata,`fielddata`>> is enabled, but *BEWARE*: enabling fielddata on a
+if <<fielddata-mapping-param,`fielddata`>> is enabled, but *BEWARE*: enabling fielddata on a
 `text` field requires loading all of the terms into the JVM heap, which can be
 very expensive both in terms of memory and CPU. It seldom makes sense to
 access `text` fields from scripts.

View file

@@ -334,7 +334,7 @@ all search requests.
 [[msearch-security]]
 ==== Security
-See <<url-access-control>>
+See <<api-url-access-control>>
 [[multi-search-partial-responses]]

View file

@@ -57,7 +57,7 @@ POST /_search <1>
 // TEST[catch:unavailable]
 <1> A search request with the `pit` parameter must not specify `index`, `routing`,
-and {ref}/search-request-body.html#request-body-search-preference[`preference`]
+or <<search-preference,`preference`>>
 as these parameters are copied from the point in time.
 <2> Just like regular searches, you can <<paginate-search-results,use `from` and
 `size` to page through search results>>, up to the first 10,000 hits. If you

View file

@@ -50,7 +50,7 @@ https://github.com/mapbox/vector-tile-spec[Mapbox vector tile specification].
 * If the {es} {security-features} are enabled, you must have the `read`
 <<privileges-list-indices,index privilege>> for the target data stream, index,
-or alias. For cross-cluster search, see <<cross-cluster-configuring>>.
+or alias. For cross-cluster search, see <<remote-clusters-security>>.
 [[search-vector-tile-api-path-params]]
 ==== {api-path-parms-title}

View file

@@ -74,7 +74,7 @@ Inner hits also supports the following per document features:
 * <<highlighting,Highlighting>>
 * <<request-body-search-explain,Explain>>
 * <<search-fields-param,Search fields>>
-* <<request-body-search-source-filtering,Source filtering>>
+* <<source-filtering,Source filtering>>
 * <<script-fields,Script fields>>
 * <<docvalue-fields,Doc value fields>>
 * <<request-body-search-version,Include versions>>

View file

@@ -588,7 +588,7 @@ for loading fields:
 parameter to get values for selected fields. This can be a good
 choice when returning a fairly small number of fields that support doc values,
 such as keywords and dates.
-* Use the <<request-body-search-stored-fields, `stored_fields`>> parameter to
+* Use the <<stored-fields, `stored_fields`>> parameter to
 get the values for specific stored fields (fields that use the
 <<mapping-store,`store`>> mapping option).

View file

@@ -158,7 +158,7 @@ the request hits. However, hitting a large number of shards can significantly
 increase CPU and memory usage.
 TIP: For tips on preventing indices with large numbers of shards, see
-<<avoid-oversharding>>.
+<<size-your-shards>>.
 You can use the `max_concurrent_shard_requests` query parameter to control
 maximum number of concurrent shards a search request can hit per node. This

View file

@@ -38,7 +38,7 @@ must have the `read` index privilege for the alias's data streams or indices.
 Allows you to execute a search query and get back search hits that match the
 query. You can provide search queries using the <<search-api-query-params-q,`q`
-query string parameter>> or <<search-request-body,request body>>.
+query string parameter>> or <<search-search,request body>>.
 [[search-search-api-path-params]]
 ==== {api-path-parms-title}

View file

@@ -2342,7 +2342,7 @@ Contents of a JSON Web Key Set (JWKS), including the secret key that the JWT
 realm uses to verify token signatures. This format supports multiple keys and
 optional attributes, and is preferred over the `hmac_key` setting. Cannot be
 used in conjunction with the `hmac_key` setting. Refer to
-<<jwt-realm-configuration,Configure {es} to use a JWT realm>>.
+<<jwt-auth-realm,Configure {es} to use a JWT realm>>.
 // end::jwt-hmac-jwkset-tag[]
 // tag::jwt-hmac-key-tag[]
@@ -2354,7 +2354,7 @@ without attributes, and cannot be used with the `hmac_jwkset` setting. This
 format is compatible with OIDC. The HMAC key must be a UNICODE string, where
 the key bytes are the UTF-8 encoding of the UNICODE string.
 The `hmac_jwkset` setting is preferred. Refer to
-<<jwt-realm-configuration,Configure {es} to use a JWT realm>>.
+<<jwt-auth-realm,Configure {es} to use a JWT realm>>.
 // end::jwt-hmac-key-tag[]

View file

@@ -19,7 +19,7 @@ When you want to form a cluster with nodes on other hosts, use the
 <<static-cluster-setting, static>> `discovery.seed_hosts` setting. This setting
 provides a list of other nodes in the cluster
 that are master-eligible and likely to be live and contactable to seed
-the <<modules-discovery-hosts-providers,discovery process>>. This setting
+the <<discovery-hosts-providers,discovery process>>. This setting
 accepts a YAML sequence or array of the addresses of all the master-eligible
 nodes in the cluster. Each address can be either an IP address or a hostname
 that resolves to one or more IP addresses via DNS.
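
For context, a minimal `elasticsearch.yml` sketch (the addresses are placeholders):

[source,yaml]
----
discovery.seed_hosts:
   - 192.168.1.10:9300
   - 192.168.1.11
   - seeds.mydomain.com
----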

View file

@@ -143,7 +143,7 @@ documentation].
 Each Java package in the {es-repo}[{es} source code] has a related logger. For
 example, the `org.elasticsearch.discovery` package has
 `logger.org.elasticsearch.discovery` for logs related to the
-<<modules-discovery-hosts-providers,discovery>> process.
+<<discovery-hosts-providers,discovery>> process.
 To get more or less verbose logs, use the <<cluster-update-settings,cluster
 update settings API>> to change the related logger's log level. Each logger
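
For reference, raising this logger's verbosity is a single settings update (illustrative):

[source,console]
----
PUT /_cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.discovery": "DEBUG"
  }
}
----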

View file

@@ -1,8 +1,8 @@
 [[restart-cluster]]
 == Full-cluster restart and rolling restart
-There may be {ref}/configuring-tls.html#tls-transport[situations where you want
-to perform a full-cluster restart] or a rolling restart. In the case of
+There may be <<security-basic-setup,situations where you want
+to perform a full-cluster restart>> or a rolling restart. In the case of
 <<restart-cluster-full,full-cluster restart>>, you shut down and restart all the
 nodes in the cluster while in the case of
 <<restart-cluster-rolling,rolling restart>>, you shut down only one node at a

View file

@@ -41,7 +41,7 @@ include::install/systemd.asciidoc[]
 If you installed a Docker image, you can start {es} from the command line. There
 are different methods depending on whether you're using development mode or
-production mode. See <<docker-cli-run>>.
+production mode. See <<docker-cli-run-dev-mode>>.
 [discrete]
 [[start-rpm]]

View file

@@ -62,7 +62,7 @@ include::{es-repo-dir}/snapshot-restore/apis/create-snapshot-api.asciidoc[tag=sn
 `name`::
 (Required, string)
 Name automatically assigned to each snapshot created by the policy.
-<<date-math-index-names,Date math>> is supported.
+<<api-date-math-index-names,Date math>> is supported.
 To prevent conflicting snapshot names, a UUID is automatically appended to each
 snapshot name.
@@ -70,7 +70,7 @@ snapshot name.
 (Required, string)
 Repository used to store snapshots created by this policy. This repository must
 exist prior to the policy's creation. You can create a repository using the
-<<modules-snapshots,snapshot repository API>>.
+<<snapshot-restore,snapshot repository API>>.
 [[slm-api-put-retention]]
 `retention`::
@@ -100,7 +100,7 @@ Minimum number of snapshots to retain, even if the snapshots have expired.
 ====
 `schedule`::
-(Required, <<cron-expressions,Cron syntax>>)
+(Required, <<api-cron-expressions,Cron syntax>>)
 Periodic or absolute schedule at which the policy creates snapshots. {slm-init}
 applies `schedule` changes immediately.
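
Putting these parameters together, a policy definition might look like this (illustrative sketch; the policy, repository, and index names are invented):

[source,console]
----
PUT /_slm/policy/daily-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<daily-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": ["data-*"]
  },
  "retention": {
    "expire_after": "30d"
  }
}
----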

View file

@@ -55,4 +55,4 @@ fails and returns an error. Defaults to `30s`.
 `indices`::
 (Required, string)
 A comma-separated list of indices to include in the snapshot.
-<<multi-index,multi-target syntax>> is supported.
+<<api-multi-index,multi-target syntax>> is supported.

View file

@@ -78,7 +78,7 @@ match data streams and indices. Supports comma-separated values, such as
`open,hidden`. Defaults to `all`. Valid values are:
`all`:::
-Match any data stream or index, including <<hidden,hidden>> ones.
+Match any data stream or index, including <<multi-hidden,hidden>> ones.
`open`:::
Match open indices and data streams.
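For reference, the same `expand_wildcards` values are accepted as a query parameter on search-style requests, for example (the index pattern is illustrative):

[source,console]
----
GET /my-index-*/_search?expand_wildcards=open,hidden
----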

View file

@@ -1,8 +1,7 @@
[[repository-azure]]
=== Azure repository
-You can use https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction[Azure Blob storage] as a repository for
-{ref}/modules-snapshots.html[Snapshot/Restore].
+You can use https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction[Azure Blob storage] as a repository for <<snapshot-restore,Snapshot/Restore>>.
[[repository-azure-usage]]
==== Setup
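As a reminder of how such a repository is registered (a sketch; the repository and container names are illustrative, and an Azure client must already be configured in the keystore):

[source,console]
----
PUT /_snapshot/my_azure_repository
{
  "type": "azure",
  "settings": {
    "container": "my-container",
    "base_path": "backups"
  }
}
----

The `gcs` and `s3` repositories touched in the following hunks are registered the same way, with a different `type` and type-specific settings.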

View file

@@ -2,7 +2,7 @@
=== Google Cloud Storage repository
You can use the https://cloud.google.com/storage/[Google Cloud Storage]
-service as a repository for {ref}/modules-snapshots.html[Snapshot/Restore].
+service as a repository for {ref}/snapshot-restore.html[Snapshot/Restore].
[[repository-gcs-usage]]
==== Getting started

View file

@@ -1,7 +1,7 @@
[[repository-s3]]
=== S3 repository
-You can use AWS S3 as a repository for {ref}/modules-snapshots.html[Snapshot/Restore].
+You can use AWS S3 as a repository for {ref}/snapshot-restore.html[Snapshot/Restore].
*If you are looking for a hosted solution of Elasticsearch on AWS, please visit
https://www.elastic.co/cloud/.*

View file

@@ -276,7 +276,7 @@ before you start.
. If you <<back-up-config-files,backed up the cluster's configuration
files>>, you can restore them to each node. This step is optional and requires a
-<<restart-upgrade,full cluster restart>>.
+<<restart-cluster, full cluster restart>>.
+
After you shut down a node, copy the backed-up configuration files over to the
node's `$ES_PATH_CONF` directory. Before restarting the node, ensure

View file

@@ -53,7 +53,7 @@ Which returns:
Which is the request that SQL will run to provide the results.
In this case, SQL will use the <<scroll-search-results,scroll>>
API. If the result contained an aggregation then SQL would use
-the normal <<search-request-body,search>> API.
+the normal <<search-search,search API>>.
The request body accepts the same <<sql-search-api-request-body,parameters>> as
the <<sql-search-api,SQL search API>>, excluding `cursor`.
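For context, the translate endpoint that this page documents is called like this (a sketch; the index and column names are illustrative):

[source,console]
----
POST /_sql/translate
{
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "fetch_size": 10
}
----

The response is the Query DSL that the SQL engine would execute for the same statement.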

View file

@@ -10,7 +10,7 @@
A common requirement when dealing with date/time in general revolves around
the notion of `interval`, a topic that is worth exploring in the context of {es} and {es-sql}.
-{es} has comprehensive support for <<date-math, date math>> both inside <<date-math-index-names, index names>> and <<mapping-date-format, queries>>.
+{es} has comprehensive support for <<date-math, date math>> both inside <<api-date-math-index-names, index names>> and <<mapping-date-format, queries>>.
Inside {es-sql} the former is supported as is by passing the expression in the table name, while the latter is supported through the standard SQL `INTERVAL`.
The table below shows the mapping between {es} and {es-sql}:
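As a quick illustration of the two mechanisms side by side (a sketch; the index pattern and timestamp field are assumptions, and exact interval syntax may vary by {es-sql} version):

[source,console]
----
POST /_sql?format=txt
{
  "query": "SELECT * FROM \"<logs-{now/d}>\" WHERE \"@timestamp\" > NOW() - INTERVAL 1 DAY"
}
----

The quoted table name uses {es} date math, while the `WHERE` clause uses the standard SQL `INTERVAL`.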

View file

@@ -116,7 +116,7 @@ If the table name contains special SQL characters (such as `.`,`-`,`*`,etc...) u
include-tagged::{sql-specs}/docs/docs.csv-spec[fromTableQuoted]
----
-The name can be a <<multi-index, pattern>> pointing to multiple indices (likely requiring quoting as mentioned above) with the restriction that *all* resolved concrete tables have **exact mapping**.
+The name can be a <<api-multi-index, pattern>> pointing to multiple indices (likely requiring quoting as mentioned above) with the restriction that *all* resolved concrete tables have **exact mapping**.
[source, sql]
----

View file

@@ -42,7 +42,7 @@ Identifiers can be of two types: __quoted__ and __unquoted__:
SELECT ip_address FROM "hosts-*"
----
-This query has two identifiers, `ip_address` and `hosts-*` (an <<multi-index,index pattern>>). As `ip_address` does not clash with any key words it can be used verbatim, `hosts-*` on the other hand cannot as it clashes with `-` (minus operation) and `*` hence the double quotes.
+This query has two identifiers, `ip_address` and `hosts-*` (an <<api-multi-index,index pattern>>). As `ip_address` does not clash with any key words it can be used verbatim, `hosts-*` on the other hand cannot as it clashes with `-` (minus operation) and `*` hence the double quotes.
Another example:
@@ -51,7 +51,7 @@ Another example:
SELECT "from" FROM "<logstash-{now/d}>"
----
-The first identifier from needs to quoted as otherwise it clashes with the `FROM` key word (which is case insensitive as thus can be written as `from`) while the second identifier using {es} <<date-math-index-names>> would have otherwise confuse the parser.
+The first identifier from needs to quoted as otherwise it clashes with the `FROM` key word (which is case insensitive as thus can be written as `from`) while the second identifier using {es} <<api-date-math-index-names>> would have otherwise confuse the parser.
Hence why in general, *especially* when dealing with user input it is *highly* recommended to use quotes for identifiers. It adds minimal increase to your queries and in return offers clarity and disambiguation.

View file

@@ -80,7 +80,7 @@ For high-cardinality `text` fields, fielddata can use a large amount of JVM
memory. To avoid this, {es} disables fielddata on `text` fields by default. If
you've enabled fielddata and triggered the <<fielddata-circuit-breaker,fielddata
circuit breaker>>, consider disabling it and using a `keyword` field instead.
-See <<fielddata>>.
+See <<fielddata-mapping-param>>.
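For reference, the usual alternative is a `keyword` sub-field in the mapping rather than fielddata (a sketch; the index and field names are illustrative):

[source,console]
----
PUT /my-index
{
  "mappings": {
    "properties": {
      "my_field": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      }
    }
  }
}
----

Aggregations and sorting then target `my_field.keyword` instead of the fielddata-backed `text` field.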
**Clear the fielddata cache**

View file

@@ -8,7 +8,7 @@
Submits a SAML `Response` message to {es} for consumption.
NOTE: This API is intended for use by custom web applications other than {kib}.
-If you are using {kib}, see the <<saml-guide>>.
+If you are using {kib}, see the <<saml-guide-stack>>.
[[security-api-saml-authenticate-request]]
==== {api-request-title}

View file

@@ -8,7 +8,7 @@
Verifies the logout response sent from the SAML IdP.
NOTE: This API is intended for use by custom web applications other than {kib}.
-If you are using {kib}, see the <<saml-guide>>.
+If you are using {kib}, see the <<saml-guide-stack>>.
[[security-api-saml-complete-logout-request]]
==== {api-request-title}

View file

@@ -8,7 +8,7 @@
Submits a SAML LogoutRequest message to {es} for consumption.
NOTE: This API is intended for use by custom web applications other than {kib}.
-If you are using {kib}, see the <<saml-guide>>.
+If you are using {kib}, see the <<saml-guide-stack>>.
[[security-api-saml-invalidate-request]]
==== {api-request-title}

View file

@@ -8,7 +8,7 @@
Submits a request to invalidate an access token and refresh token.
NOTE: This API is intended for use by custom web applications other than {kib}.
-If you are using {kib}, see the <<saml-guide>>.
+If you are using {kib}, see the <<saml-guide-stack>>.
[[security-api-saml-logout-request]]
==== {api-request-title}

View file

@@ -8,7 +8,7 @@
Creates a SAML authentication request (`<AuthnRequest>`) as a URL string, based on the configuration of the respective SAML realm in {es}.
NOTE: This API is intended for use by custom web applications other than {kib}.
-If you are using {kib}, see the <<saml-guide>>.
+If you are using {kib}, see the <<saml-guide-stack>>.
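For context, the API this page documents is invoked along these lines (a sketch; the realm name is illustrative):

[source,console]
----
POST /_security/saml/prepare
{
  "realm": "saml1"
}
----

The response contains the URL, including the encoded `<AuthnRequest>`, to which the user should be redirected.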
[[security-api-saml-prepare-authentication-request]]
==== {api-request-title}

View file

@@ -39,9 +39,9 @@ This API supports the following fields:
| `query` | no | null | Optional, <<query-dsl,query>> filter watches to be returned.
-| `sort` | no | null | Optional <<search-request-sort,sort definition>>.
-| `search_after` | no | null | Optional <<search-request-search-after,search After>> to do pagination
+| `sort` | no | null | Optional <<sort-search-results,sort definition>>.
+| `search_after` | no | null | Optional <<search-after,search After>> to do pagination
using last hit's sort values.
|======
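For context, these fields go in the body of a query watches request, roughly as follows (a sketch; the query shown is only an illustration):

[source,console]
----
GET /_watcher/_query/watches
{
  "size": 10,
  "query": {
    "match_all": {}
  }
}
----

`sort` and `search_after` take the same shape here as in a regular `_search` request body.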

View file

@@ -89,7 +89,7 @@ default realm, the Key Distribution Center (KDC), and other configuration detail
required for Kerberos authentication. When the JVM needs some configuration
properties, it tries to find those values by locating and loading this file. The
JVM system property to configure the file path is `java.security.krb5.conf`. To
-configure JVM system properties see <<jvm-options>>.
+configure JVM system properties see <<set-jvm-options>>.
If this system property is not specified, Java tries to locate the file based on
the conventions.

View file

@@ -12,7 +12,7 @@ Elastic Stack Relying Party will be registered.
NOTE: The OpenID Connect realm support in {kib} is designed with the expectation that it
will be the primary authentication method for the users of that {kib} instance. The
-<<oidc-kibana>> section describes what this entails and how you can set it up to support
+<<oidc-configure-kibana>> section describes what this entails and how you can set it up to support
other realms if necessary.

@@ -591,7 +591,7 @@ client with the OpenID Connect Provider. Note that when registering the
==== OpenID Connect Realm
An OpenID Connect realm needs to be created and configured accordingly
-in {es}. See <<oidc-guide-authentication>>
+in {es}. See <<oidc-elasticsearch-authentication>>
==== Service Account user for accessing the APIs

View file

@@ -51,7 +51,7 @@ A realm that facilitates authentication using OpenID Connect. It enables {es} to
_jwt_::
A realm that facilitates using JWT identity tokens as authentication bearer tokens.
Compatible tokens are OpenID Connect ID Tokens, or custom JWTs containing the same claims.
-See <<jwt-realm>>.
+See <<jwt-auth-realm>>.
The {security-features} also support custom realms. If you need to integrate
with another authentication system, you can build a custom realm plugin. For

View file

@@ -231,7 +231,7 @@ The recommended steps for configuring these SAML attributes are as follows:
This varies greatly between providers, but you should be able to obtain a list
from the documentation, or from your local admin.
-. Read through the list of <<saml-user-properties, user properties>> that {es}
+. Read through the list of <<saml-es-user-properties, user properties>> that {es}
supports, and decide which of them are useful to you, and can be provided by
your IdP. At a _minimum_, the `principal` attribute is required.
@@ -244,7 +244,7 @@ The recommended steps for configuring these SAML attributes are as follows:
URIs are used.
. Configure the SAML realm in {es} to associate the {es} user properties (see
-<<saml-user-properties, the listing>> below), to the URIs that you configured
+<<saml-es-user-properties, the listing>> below), to the URIs that you configured
in your IdP. In the example above, we have configured the `principal` and
`groups` attributes.
@@ -281,7 +281,7 @@ NOTE: Identity Providers can be either statically configured to release a `NameI
with a specific format, or they can be configured to try to conform with the
requirements of the SP. The SP declares its requirements as part of the
Authentication Request, using an element which is called the `NameIDPolicy`. If
-this is needed, you can set the relevant <<saml-settings, settings>> named
+this is needed, you can set the relevant <<ref-saml-settings, settings>> named
`nameid_format` in order to request that the IdP releases a `NameID` with a
specific format.
@@ -925,7 +925,7 @@ access tokens after the current one expires.
==== SAML realm
You must create a SAML realm and configure it accordingly
-in {es}. See <<saml-guide-authentication>>
+in {es}. See <<saml-elasticsearch-authentication>>
[[saml-no-kibana-user]]
==== Service Account user for accessing the APIs

View file

@@ -81,6 +81,6 @@ you to invalidate the tokens. See
<<security-api-invalidate-api-key,invalidate API key API>>.
IMPORTANT: Authentication support for JWT bearer tokens was introduced in {es}
-8.2 through the <<jwt-realm>>, which cannot be enabled through
+8.2 through the <<jwt-auth-realm>>, which cannot be enabled through
token-authentication services. Realms offer flexible order and configurations of
zero, one, or multiple JWT realms.

View file

@@ -67,7 +67,7 @@ GET .ds-my-data-stream-2099.03.09-000003/_doc/2
Use <<privileges-list-indices,index privileges>> to control access to an
<<aliases,alias>>. Privileges on an index or data stream do not grant privileges
-on its aliases. For information about managing aliases, see <<alias>>.
+on its aliases. For information about managing aliases, see <<aliases>>.
IMPORTANT: Don't use <<filter-alias,filtered aliases>> in place of
<<document-level-security,document level security>>. {es} doesn't always apply
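For reference, granting a privilege on an alias looks the same as granting it on a concrete index (a sketch; the role and alias names are illustrative):

[source,console]
----
POST /_security/role/logs_reader
{
  "indices": [
    {
      "names": [ "my-alias" ],
      "privileges": [ "read" ]
    }
  ]
}
----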

View file

@@ -200,7 +200,7 @@ on all {es} API keys.
`transport_client`::
All privileges necessary for a transport client to connect. Required by the remote
-cluster to enable <<cross-cluster-configuring,{ccs}>>.
+cluster to enable <<remote-clusters-security,{ccs}>>.
[[privileges-list-indices]]
==== Indices privileges
@@ -318,7 +318,7 @@ more like this, multi percolate/search/termvector, percolate, scroll,
clear_scroll, search, suggest, tv).
`read_cross_cluster`::
-Read-only access to the search action from a <<cross-cluster-configuring,remote cluster>>.
+Read-only access to the search action from a <<remote-clusters-security,remote cluster>>.
`view_index_metadata`::
Read-only access to index and data stream metadata (aliases, exists,

View file

@@ -444,7 +444,7 @@ Kerberos/SPNEGO debug logging on JVM, add following JVM system properties:
`-Dsun.security.spnego.debug=true`
-For more information about JVM system properties, see <<jvm-options>>.
+For more information about JVM system properties, see <<set-jvm-options>>.
[[trb-security-saml]]
=== Common SAML issues
@@ -589,7 +589,7 @@ Identity Provider sent. In this example, {es} is configured as follows:
xpack.security.authc.realms.saml.<saml-realm-name>.attributes.principal: AttributeName0
....
This configuration means that {es} expects to find a SAML Attribute with the name `AttributeName0` or a `NameID` with the appropriate format in the SAML
-response so that <<saml-attribute-mapping,it can map it>> to the `principal` user property. The `principal` user property is a
+response so that <<saml-attributes-mapping,it can map it>> to the `principal` user property. The `principal` user property is a
mandatory one, so if this mapping can't happen, the authentication fails.
If you are attempting to map a `NameID`, make sure that the expected `NameID` format matches the one that is sent.

View file

@@ -173,7 +173,7 @@ accurately.
| `request.indices` | no | - | The indices to search. If omitted, all indices are searched, which is the
default behaviour in Elasticsearch.
-| `request.body` | no | - | The body of the request. The <<search-request-body,request body>>
+| `request.body` | no | - | The body of the request. The <<search-search,request body>>
follows the same structure you normally send in the body of a REST `_search`
request. The body can be static text or include `mustache` <<templates,templates>>.
@@ -181,13 +181,13 @@ accurately.
for more information.
| `request.indices_options.expand_wildcards` | no | `open` | How to expand wildcards. Valid values are: `all`, `open`, `closed`, and `none`
-See <<multi-index,`expand_wildcards`>> for more information.
+See <<api-multi-index,`expand_wildcards`>> for more information.
| `request.indices_options.ignore_unavailable` | no | `true` | Whether the search should ignore unavailable indices. See
-<<multi-index,`ignore_unavailable`>> for more information.
+<<api-multi-index,`ignore_unavailable`>> for more information.
| `request.indices_options.allow_no_indices` | no | `true` | Whether to allow a search where a wildcard indices expression results in no
-concrete indices. See <<multi-index,allow_no_indices>>
+concrete indices. See <<api-multi-index,allow_no_indices>>
for more information.
| `extract` | no | - | A array of JSON keys to extract from the search response and load as the payload.
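For context, these `request.*` settings map onto a watch's search input roughly as follows (a sketch; the watch ID, index pattern, query, and action are illustrative):

[source,console]
----
PUT /_watcher/watch/error-monitor
{
  "trigger": {
    "schedule": { "interval": "10m" }
  },
  "input": {
    "search": {
      "request": {
        "indices": [ "logs-*" ],
        "body": {
          "query": { "match": { "level": "error" } }
        }
      }
    }
  },
  "actions": {
    "log_hits": {
      "logging": {
        "text": "Found {{ctx.payload.hits.total}} matching documents"
      }
    }
  }
}
----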

View file

@@ -63,7 +63,7 @@ The following table lists all available settings for the search
| `request.indices` | no | all indices | One or more indices to search on.
| `request.body` | no | `match_all` query | The body of the request. The
-<<search-request-body,request body>> follows
+<<search-search,request body>> follows
the same structure you normally send in the body of
a REST `_search` request. The body can be static text
or include `mustache` <<templates,templates>>.
@@ -71,15 +71,15 @@ The following table lists all available settings for the search
| `request.indices_options.expand_wildcards` | no | `open` | Determines how to expand indices wildcards. An array
consisting of a combination of `open`, `closed`,
and `hidden`. Alternatively a value of `none` or `all`.
-(see <<multi-index,multi-target syntax>>)
+(see <<api-multi-index,multi-target syntax>>)
| `request.indices_options.ignore_unavailable` | no | `true` | A boolean value that determines whether the search
should leniently ignore unavailable indices
-(see <<multi-index,multi-target syntax>>)
+(see <<api-multi-index,multi-target syntax>>)
| `request.indices_options.allow_no_indices` | no | `true` | A boolean value that determines whether the search
should leniently return no results when no indices
-are resolved (see <<multi-index,multi-target syntax>>)
+are resolved (see <<api-multi-index,multi-target syntax>>)
| `request.template` | no | - | The body of the search template. See
<<templates,configure templates>> for more information.

View file

@@ -4,8 +4,10 @@
<titleabbrev>Cron schedule</titleabbrev>
++++
-Defines a <<trigger-schedule, `schedule`>> using a <<cron-expressions, cron expression>>
-that specifies when to execute a watch.
+Defines a <<trigger-schedule, `schedule`>> using a <<api-cron-expressions, cron expression>>
+that specifies when to execute a watch.
TIP: While cron expressions are powerful, a regularly occurring schedule
is easier to configure with the other schedule types.
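For context, such a trigger appears in a watch definition as follows (a sketch; this expression fires every day at noon):

[source,js]
----
{
  "trigger": {
    "schedule": {
      "cron": "0 0 12 * * ?"
    }
  }
}
----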