[DOCS] Update remote cluster docs (#77043)

* [DOCS] Update remote cluster docs

* Add files, rename files, write new stuff

* Plethora of changes

* Add test and update snippets

* Redirects, moved files, and test updates

* Moved file to x-pack for tests

* Remove older CCS page and add redirects

* Cleanup, link updates, and some rewrites

* Update image

* Incorporating user feedback and rewriting much of the remote clusters page

* More changes from review feedback

* Numerous updates, including request examples for CCS and Kibana

* More changes from review feedback

* Minor clarifications on security for remote clusters

* Incorporate review feedback

Co-authored-by: Yang Wang <ywangd@gmail.com>

* Some review feedback and some editorial changes

Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
Co-authored-by: Yang Wang <ywangd@gmail.com>
This commit is contained in:
Adam Locke 2021-09-22 16:02:33 -04:00 committed by GitHub
parent 15baf4017a
commit 6940673e8a
34 changed files with 830 additions and 628 deletions


@ -73,7 +73,7 @@ the new patterns.
==== {api-request-body-title}
`remote_cluster`::
(Required, string) The <<modules-remote-clusters,remote cluster>> containing
(Required, string) The <<remote-clusters,remote cluster>> containing
the leader indices to match against.
`leader_index_patterns`::


@ -122,7 +122,7 @@ to read from the leader again.
//End parameters
`remote_cluster`::
(string) The <<modules-remote-clusters,remote cluster>> that contains the
(string) The <<remote-clusters,remote cluster>> that contains the
leader index.
`status`::


@ -162,7 +162,7 @@ task.
//End read_exceptions
`remote_cluster`::
(string) The <<modules-remote-clusters,remote cluster>> containing the leader
(string) The <<remote-clusters,remote cluster>> containing the leader
index.
`shard_id`::


@ -115,7 +115,7 @@ the <<ccr-post-unfollow,unfollow API>> is invoked.
`leader_remote_cluster`::
(Required, string) The alias (from the perspective of the cluster containing
the follower index) of the <<modules-remote-clusters,remote cluster>>
the follower index) of the <<remote-clusters,remote cluster>>
containing the leader index.
[[ccr-post-forget-follower-examples]]


@ -74,7 +74,7 @@ referenced leader index. When this API returns, the follower index exists, and
(Required, string) The name of the index in the leader cluster to follow.
`remote_cluster`::
(Required, string) The <<modules-remote-clusters,remote cluster>> containing
(Required, string) The <<remote-clusters,remote cluster>> containing
the leader index.
include::../follow-request-body.asciidoc[]


@ -29,9 +29,9 @@ Auto-follow patterns are especially useful with
new indices on the cluster containing the leader index.
[[ccr-access-ccr-auto-follow]]
To start using {ccr} auto-follow patterns, access {kib} and go to
*Management > Stack Management*. In the side navigation, select
*Cross-Cluster Replication* and choose the *Auto-follow patterns* tab
To start using {ccr} auto-follow patterns from Stack Management in {kib}, select
*Cross-Cluster Replication* from the side navigation and choose the
*Auto-follow patterns* tab.
[[ccr-auto-follow-create]]
==== Create auto-follow patterns
@ -41,12 +41,8 @@ When an index is created in the remote cluster with a name that matches one of
the patterns in the collection, a follower index is configured in the local
cluster. The follower index uses the new index as its leader index.
[%collapsible]
.Use the API
====
Use the <<ccr-put-auto-follow-pattern,create auto-follow pattern API>> to add a
new auto-follow pattern configuration.
====
[[ccr-auto-follow-retrieve]]
==== Retrieve auto-follow patterns
@ -57,12 +53,8 @@ Select the auto-follow pattern that you want to view details about. From there,
you can make changes to the auto-follow pattern. You can also view your
follower indices included in the auto-follow pattern.
[%collapsible]
.Use the API
====
Use the <<ccr-get-auto-follow-pattern,get auto-follow pattern API>> to inspect
all configured auto-follow pattern collections.
====
[[ccr-auto-follow-pause]]
==== Pause and resume auto-follow patterns
@ -73,14 +65,10 @@ and pause replication.
To resume replication, select the pattern and choose
*Manage pattern > Resume replication*.
[%collapsible]
.Use the API
====
Use the <<ccr-pause-auto-follow-pattern,pause auto-follow pattern API>> to
pause auto-follow patterns.
Use the <<ccr-resume-auto-follow-pattern,resume auto-follow pattern API>> to
resume auto-follow patterns.
====
[[ccr-auto-follow-delete]]
==== Delete auto-follow patterns
@ -91,9 +79,5 @@ and pause replication.
When the pattern status changes to Paused, choose
*Manage pattern > Delete pattern*.
[%collapsible]
.Use the API
====
Use the <<ccr-delete-auto-follow-pattern,delete auto-follow pattern API>> to
delete a configured auto-follow pattern collection.
====


@ -1,6 +1,6 @@
[role="xpack"]
[testenv="platinum"]
[[ccr-getting-started]]
[[ccr-getting-started-tutorial]]
=== Tutorial: Set up {ccr}
++++
<titleabbrev>Set up {ccr}</titleabbrev>
@ -56,7 +56,7 @@ response time
In this guide, you'll learn how to:
* Configure a <<modules-remote-clusters,remote cluster>> with a leader index
* Configure a <<remote-clusters,remote cluster>> with a leader index
* Create a follower index on a local cluster
* Create an auto-follow pattern to automatically follow time series indices
that are periodically created in a remote cluster
@ -72,23 +72,18 @@ can <<modules-cross-cluster-search,search across clusters>> and set up {ccr}.
==== Prerequisites
To complete this tutorial, you need:
* The `manage` cluster privilege on the local cluster.
* A license on both clusters that includes {ccr}. {kibana-ref}/managing-licenses.html[Activate a free 30-day trial].
* The `read_ccr` cluster privilege and `monitor` and `read` privileges
for the leader index on the remote cluster. <<stack-management-ccr-remote,Configure remote cluster privileges>>.
* The `manage_ccr` cluster privilege and `monitor`, `read`, `write` and
`manage_follow_index` privileges to configure remote clusters and follower
indices on the local cluster. <<stack-management-ccr-local,Configure local cluster privileges>>.
* An index on the remote cluster that contains the data you want to replicate.
This tutorial uses the sample eCommerce orders data set.
{kibana-ref}/get-started.html#gs-get-data-into-kibana[Load sample data].
* In the local cluster, all nodes with the `master` <<node-roles,node role>> must
also have the <<remote-node,`remote_cluster_client`>> role. The local cluster
must also have at least one node with both a data role and the
<<remote-node,`remote_cluster_client`>> role. Individual tasks for coordinating
replication scale based on the number of data nodes with the
`remote_cluster_client` role in the local cluster.
also have the <<remote-node,`remote_cluster_client`>> role. The local cluster
must also have at least one node with both a data role and the
<<remote-node,`remote_cluster_client`>> role. Individual tasks for coordinating
replication scale based on the number of data nodes with the
`remote_cluster_client` role in the local cluster.
[[ccr-getting-started-remote-cluster]]
==== Connect to a remote cluster
To replicate an index on a remote cluster (Cluster A) to a local cluster (Cluster B), you configure Cluster A as a remote on Cluster B.
@ -102,13 +97,14 @@ cluster (`ClusterA`) followed by the transport port (defaults to `9300`). For
example, `cluster.es.eastus2.staging.azure.foundit.no:9400` or
`192.168.1.1:9300`.
[%collapsible]
[%collapsible%open]
.API example
====
Use the <<cluster-update-settings,cluster update settings API>> to add a remote cluster:
You can also use the <<cluster-update-settings,cluster update settings API>> to
add a remote cluster:
[source,console]
--------------------------------------------------
----
PUT /_cluster/settings
{
"persistent" : {
@ -123,7 +119,7 @@ PUT /_cluster/settings
}
}
}
--------------------------------------------------
----
// TEST[setup:host]
// TEST[s/127.0.0.1:9300/\${transport_host}/]
<1> Specifies the hostname and transport port of a seed node in the remote
@ -133,36 +129,34 @@ You can verify that the local cluster is successfully connected to the remote
cluster.
[source,console]
--------------------------------------------------
----
GET /_remote/info
--------------------------------------------------
----
// TEST[continued]
The API will respond by showing that the local cluster is connected to the
remote cluster.
The API response indicates that the local cluster is connected to the remote
cluster with cluster alias `leader`.
[source,console-result]
--------------------------------------------------
----
{
"leader" : {
"seeds" : [
"127.0.0.1:9300"
],
"connected" : true, <1>
"num_nodes_connected" : 1, <2>
"connected" : true,
"num_nodes_connected" : 1, <1>
"max_connections_per_cluster" : 3,
"initial_connect_timeout" : "30s",
"skip_unavailable" : false,
"mode" : "sniff"
}
}
--------------------------------------------------
----
// TESTRESPONSE[s/127.0.0.1:9300/$body.leader.seeds.0/]
// TEST[s/"connected" : true/"connected" : $body.leader.connected/]
// TEST[s/"num_nodes_connected" : 1/"num_nodes_connected" : $body.leader.num_nodes_connected/]
<1> This shows the local cluster is connected to the remote cluster with cluster
alias `leader`
<2> This shows the number of nodes in the remote cluster the local cluster is
<1> The number of nodes in the remote cluster the local cluster is
connected to.
====
@ -174,6 +168,8 @@ soft deletes enabled, you must reindex it and use the new index as the leader
index. Soft deletes are enabled by default on new indices
created with {es} 7.0.0 and later.
include::../../../x-pack/docs/en/security/authentication/remote-clusters-privileges.asciidoc[tag=configure-ccr-privileges]
[[ccr-getting-started-follower-index]]
==== Create a follower index to replicate a specific index
When you create a follower index, you reference the remote cluster and the
@ -202,26 +198,25 @@ in the follower index.
[role="screenshot"]
image::images/ccr-follower-index.png["The Cross-Cluster Replication page in {kib}"]
[%collapsible]
[%collapsible%open]
.API example
====
Use the <<ccr-put-follow,create follower API>> to create follower indices.
When you create a follower index, you must reference the remote cluster and the
leader index that you created in the
remote cluster.
You can also use the <<ccr-put-follow,create follower API>> to create follower
indices. When you create a follower index, you must reference the remote cluster
and the leader index that you created in the remote cluster.
When initiating the follower request, the response returns before the
<<ccr-remote-recovery, remote recovery>> process completes. To wait for the process
to complete, add the `wait_for_active_shards` parameter to your request.
[source,console]
--------------------------------------------------
----
PUT /server-metrics-follower/_ccr/follow?wait_for_active_shards=1
{
"remote_cluster" : "leader",
"leader_index" : "server-metrics"
}
--------------------------------------------------
----
// TEST[continued]
//////////////////////////
@ -239,7 +234,7 @@ PUT /server-metrics-follower/_ccr/follow?wait_for_active_shards=1
Use the
<<ccr-get-follow-stats,get follower stats API>> to inspect the status of
replication
replication.
//////////////////////////
@ -289,14 +284,14 @@ image::images/auto-follow-patterns.png["The Auto-follow patterns page in {kib}"]
// end::ccr-create-auto-follow-pattern-tag[]
[%collapsible]
[%collapsible%open]
.API example
====
Use the <<ccr-put-auto-follow-pattern,create auto-follow pattern API>> to
configure auto-follow patterns.
[source,console]
--------------------------------------------------
----
PUT /_ccr/auto_follow/beats
{
"remote_cluster" : "leader",
@ -307,7 +302,7 @@ PUT /_ccr/auto_follow/beats
],
"follow_index_pattern" : "{{leader_index}}-copy" <3>
}
--------------------------------------------------
----
// TEST[continued]
<1> Automatically follow new {metricbeat} indices.
<2> Automatically follow new {packetbeat} indices.
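As a rough sketch of how `follow_index_pattern` derives follower index names (illustrative only, not the actual {es} implementation; the `{{leader_index}}-copy` pattern and the `metricbeat-*`/`packetbeat-*` patterns are taken from the request above):

```python
import fnmatch

def follower_name(leader_index, leader_index_patterns, follow_index_pattern):
    """Return the follower index name if the leader index matches any
    auto-follow pattern, mimicking the {{leader_index}} substitution."""
    if any(fnmatch.fnmatch(leader_index, p) for p in leader_index_patterns):
        return follow_index_pattern.replace("{{leader_index}}", leader_index)
    return None  # the index is not auto-followed

# A new metricbeat index matches "metricbeat-*" and gets a "-copy" suffix.
print(follower_name("metricbeat-7.15.0",
                    ["metricbeat-*", "packetbeat-*"],
                    "{{leader_index}}-copy"))  # metricbeat-7.15.0-copy
```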


@ -52,7 +52,7 @@ secondary cluster
Watch the
https://www.elastic.co/webinars/replicate-elasticsearch-data-with-cross-cluster-replication-ccr[{ccr} webinar] to learn more about the following use cases.
Then, <<ccr-getting-started,set up {ccr}>> on your local machine and work
Then, <<ccr-getting-started-tutorial,set up {ccr}>> on your local machine and work
through the demo from the webinar.
[discrete]
@ -163,7 +163,7 @@ image::images/ccr-arch-central-reporting.png[Three clusters in different regions
[discrete]
[[ccr-replication-mechanics]]
=== Replication mechanics
Although you <<ccr-getting-started,set up {ccr}>> at the index level, {es}
Although you <<ccr-getting-started-tutorial,set up {ccr}>> at the index level, {es}
achieves replication at the shard level. When a follower index is created,
each shard in that index pulls changes from its corresponding shard in the
leader index, which means that a follower index has the same number of
@ -306,7 +306,7 @@ enabled.
The following sections provide more information about how to configure
and use {ccr}:
* <<ccr-getting-started>>
* <<ccr-getting-started-tutorial>>
* <<ccr-managing>>
* <<ccr-auto-follow>>
* <<ccr-upgrading>>


@ -49,7 +49,7 @@ or alias.
* See <<eql-required-fields>>.
* experimental:[] For cross-cluster search, the local and remote clusters must
use the same {es} version. For security, see <<cross-cluster-configuring>>.
use the same {es} version. For security, see <<remote-clusters-security>>.
[[eql-search-api-limitations]]
===== Limitations


@ -799,7 +799,7 @@ You can also manually delete saved synchronous searches using the
experimental::[]
The EQL search API supports <<modules-cross-cluster-search,cross-cluster
search>>. However, the local and <<modules-remote-clusters,remote clusters>>
search>>. However, the local and <<remote-clusters,remote clusters>>
must use the same {es} version.
The following <<cluster-update-settings,update cluster settings>> request


@ -71,7 +71,7 @@ for the target data stream, index, or index alias.
--
(Required, string) Comma-separated name(s) or index pattern(s) of the
indices, aliases, and data streams to resolve. Resources on
<<modules-remote-clusters,remote clusters>> can be specified using the
<<remote-clusters,remote clusters>> can be specified using the
`<cluster>:<name>` syntax.
--


@ -0,0 +1,225 @@
[[remote-clusters-connect]]
=== Connect to remote clusters
Your local cluster uses the <<modules-network,transport interface>> to establish
communication with remote clusters. The coordinating nodes in the local cluster
establish <<long-lived-connections,long-lived>> TCP connections with specific
nodes in the remote cluster. {es} requires these connections to remain open,
even if the connections are idle for an extended period.
NOTE: You must have the `manage` cluster privilege to connect remote clusters.
To add a remote cluster from Stack Management in {kib}:
. Select *Remote Clusters* from the side navigation.
. Specify the {es} endpoint URL, or the IP address or host name of the remote
cluster followed by the transport port (defaults to `9300`). For example,
`cluster.es.eastus2.staging.azure.foundit.no:9400` or `192.168.1.1:9300`.
Alternatively, use the <<cluster-update-settings,cluster update settings API>>
to add a remote cluster. You can also use this API to
<<configure-remote-clusters-dynamic,dynamically configure>> remote clusters for
_every_ node in the local cluster. To configure remote clusters on individual
nodes in the local cluster, define
<<configure-remote-clusters-static,static settings>> in `elasticsearch.yml` for
each node.
After connecting remote clusters,
<<remote-clusters-privileges,configure roles and users for remote clusters>>.
The following request adds a remote cluster with an alias of `cluster_one`. This
_cluster alias_ is a unique identifier that represents the connection to the
remote cluster and is used to distinguish between local and remote indices.
[source,console]
----
PUT /_cluster/settings
{
"persistent" : {
"cluster" : {
"remote" : {
"cluster_one" : { <1>
"seeds" : [
"127.0.0.1:9300" <2>
]
}
}
}
}
}
----
// TEST[setup:host]
// TEST[s/127.0.0.1:9300/\${transport_host}/]
<1> The cluster alias of this remote cluster is `cluster_one`.
<2> Specifies the hostname and transport port of a seed node in the remote
cluster.
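For scripted setups, the same request body can be assembled in any HTTP client. A minimal Python sketch (the `cluster_one` alias and seed address are the placeholders from the example above; adapt them to your cluster):

```python
import json

def remote_cluster_settings(alias, seeds):
    """Build the persistent-settings payload for PUT /_cluster/settings,
    mirroring the request body shown above."""
    return {
        "persistent": {
            "cluster": {
                "remote": {
                    alias: {"seeds": list(seeds)}
                }
            }
        }
    }

payload = remote_cluster_settings("cluster_one", ["127.0.0.1:9300"])
print(json.dumps(payload, indent=2))
```

The resulting JSON can then be sent with any HTTP library as the body of a `PUT` to `/_cluster/settings`.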
You can use the <<cluster-remote-info,remote cluster info API>> to verify that
the local cluster is successfully connected to the remote cluster:
[source,console]
----
GET /_remote/info
----
// TEST[continued]
The API response indicates that the local cluster is connected to the remote
cluster with the cluster alias `cluster_one`:
[source,console-result]
----
{
"cluster_one" : {
"seeds" : [
"127.0.0.1:9300"
],
"connected" : true,
"num_nodes_connected" : 1, <1>
"max_connections_per_cluster" : 3,
"initial_connect_timeout" : "30s",
"skip_unavailable" : false, <2>
"mode" : "sniff"
}
}
----
// TESTRESPONSE[s/127.0.0.1:9300/$body.cluster_one.seeds.0/]
// TEST[s/"connected" : true/"connected" : $body.cluster_one.connected/]
// TEST[s/"num_nodes_connected" : 1/"num_nodes_connected" : $body.cluster_one.num_nodes_connected/]
<1> The number of nodes in the remote cluster the local cluster is
connected to.
<2> Indicates whether to skip the remote cluster if searched through {ccs} but
no nodes are available.
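When automating this check, the response can also be inspected programmatically. A hedged sketch using the sample response above (the field names come from the API response shown; treating zero connected nodes as unhealthy is an illustrative choice):

```python
info = {  # sample GET /_remote/info response from above
    "cluster_one": {
        "seeds": ["127.0.0.1:9300"],
        "connected": True,
        "num_nodes_connected": 1,
        "max_connections_per_cluster": 3,
        "initial_connect_timeout": "30s",
        "skip_unavailable": False,
        "mode": "sniff",
    }
}

def unhealthy_remotes(remote_info):
    """Return aliases of remote clusters that are not connected."""
    return [alias for alias, data in remote_info.items()
            if not data.get("connected")
            or data.get("num_nodes_connected", 0) == 0]

print(unhealthy_remotes(info))  # [] -> all remotes connected
```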
[[configure-remote-clusters-dynamic]]
==== Dynamically configure remote clusters
Use the <<cluster-update-settings,cluster update settings API>> to dynamically
configure remote settings on every node in the cluster. The following request
adds three remote clusters: `cluster_one`, `cluster_two`, and `cluster_three`.
The `seeds` parameter specifies the hostname and
<<transport-settings,transport port>> (default `9300`) of a seed node in the
remote cluster.
The `mode` parameter determines the configured connection mode, which defaults
to <<sniff-mode,`sniff`>>. Because `cluster_one` doesn't specify a `mode`, it
uses the default. Both `cluster_two` and `cluster_three` explicitly use
different modes.
[source,console]
----
PUT _cluster/settings
{
"persistent": {
"cluster": {
"remote": {
"cluster_one": {
"seeds": [
"127.0.0.1:9300"
]
},
"cluster_two": {
"mode": "sniff",
"seeds": [
"127.0.0.1:9301"
],
"transport.compress": true,
"skip_unavailable": true
},
"cluster_three": {
"mode": "proxy",
"proxy_address": "127.0.0.1:9302"
}
}
}
}
}
----
// TEST[setup:host]
// TEST[s/127.0.0.1:9300/\${transport_host}/]
You can dynamically update settings for a remote cluster after the initial configuration. The following request updates the
compression settings for `cluster_two`, and the compression and ping schedule
settings for `cluster_three`.
NOTE: When the compression or ping schedule settings change, all existing
node connections must close and re-open, which can cause in-flight requests to
fail.
[source,console]
----
PUT _cluster/settings
{
"persistent": {
"cluster": {
"remote": {
"cluster_two": {
"transport.compress": false
},
"cluster_three": {
"transport.compress": true,
"transport.ping_schedule": "60s"
}
}
}
}
}
----
// TEST[continued]
You can delete a remote cluster from the cluster settings by passing `null`
values for each remote cluster setting. The following request removes
`cluster_two` from the cluster settings, leaving `cluster_one` and
`cluster_three` intact:
[source,console]
----
PUT _cluster/settings
{
"persistent": {
"cluster": {
"remote": {
"cluster_two": {
"mode": null,
"seeds": null,
"skip_unavailable": null,
"transport.compress": null
}
}
}
}
}
----
// TEST[continued]
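To script removal of a remote cluster, each previously-set key must be nulled explicitly, as the request above shows. A sketch that generates such a payload (the default list of keys is an assumption; null whichever settings were actually configured for that alias):

```python
import json

def remove_remote_cluster(alias,
                          settings=("mode", "seeds", "skip_unavailable",
                                    "transport.compress")):
    """Build a payload that nulls every listed setting for the alias,
    which removes the remote cluster from the cluster settings."""
    return {
        "persistent": {
            "cluster": {
                "remote": {
                    alias: {key: None for key in settings}
                }
            }
        }
    }

print(json.dumps(remove_remote_cluster("cluster_two"), indent=2))
```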
[[configure-remote-clusters-static]]
==== Statically configure remote clusters
If you specify settings in `elasticsearch.yml`, only the nodes with
those settings can connect to the remote cluster and serve remote cluster
requests.
NOTE: Remote cluster settings that are specified using the
<<cluster-update-settings,cluster update settings API>> take precedence over
settings that you specify in `elasticsearch.yml` for individual nodes.
In the following example, `cluster_one`, `cluster_two`, and `cluster_three` are
arbitrary cluster aliases representing the connection to each cluster. These
names are subsequently used to distinguish between local and remote indices.
[source,yaml]
----
cluster:
remote:
cluster_one:
seeds: 127.0.0.1:9300
cluster_two:
mode: sniff
seeds: 127.0.0.1:9301
transport.compress: true <1>
skip_unavailable: true <2>
cluster_three:
mode: proxy
proxy_address: 127.0.0.1:9302 <3>
----
<1> Compression is explicitly enabled for requests to `cluster_two`.
<2> Disconnected remote clusters are optional for `cluster_two`.
<3> The address for the proxy endpoint used to connect to `cluster_three`.
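The YAML keys above are just the dotted setting names used in the dynamic examples, such as `cluster.remote.cluster_three.mode`. A sketch of that correspondence (illustrative only, not how {es} parses its configuration):

```python
def flatten(settings, prefix=""):
    """Flatten nested settings into dotted keys, e.g.
    {"cluster": {"remote": {"cluster_three": {"mode": "proxy"}}}}
    -> {"cluster.remote.cluster_three.mode": "proxy"}."""
    flat = {}
    for key, value in settings.items():
        dotted = prefix + key
        if isinstance(value, dict):
            flat.update(flatten(value, dotted + "."))
        else:
            flat[dotted] = value
    return flat

nested = {"cluster": {"remote": {"cluster_three": {
    "mode": "proxy", "proxy_address": "127.0.0.1:9302"}}}}
print(flatten(nested))
```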


@ -0,0 +1,50 @@
[[remote-clusters-security]]
=== Configure remote clusters with security
To use {ccr} or {ccs} safely with remote clusters, enable security on all
connected clusters and configure Transport Layer Security (TLS) on every node.
Configuring TLS on the transport interface is the minimum requirement for
remote clusters. For additional security, configure TLS on the
<<security-basic-setup-https,HTTP interface>> as well.
All connected clusters must trust one another and be mutually authenticated
with TLS on the transport interface. This means that the local cluster
trusts the certificate authority (CA) of the remote cluster, and the remote
cluster trusts the CA of the local cluster. When establishing a connection, all
nodes will verify certificates from nodes on the other side. This mutual trust
is required to securely connect a remote cluster, because all connected nodes
effectively form a single security domain.
User authentication is performed on the local cluster, and the user and the
user's role names are passed to the remote clusters. A remote cluster checks
the user's role names against its local role definitions to determine which
indices the user is allowed to access.
Before using {ccr} or {ccs} with secured {es} clusters, complete the following
configuration tasks:
. Enable the {es} {security-features} on every node in each connected cluster by
setting `xpack.security.enabled` to `true` in `elasticsearch.yml`. Refer to the
<<general-security-settings,{es} security settings>>.
. Configure Transport Layer Security (TLS) on every node to encrypt internode
traffic and authenticate nodes in the local cluster with nodes in all remote
clusters. Refer to
<<security-basic-setup,set up basic security for the {stack}>> for the required
steps to configure security.
+
NOTE: This procedure uses the same CA to generate certificates for all nodes.
Alternatively, you can add the certificates from the local cluster as a
trusted CA in each remote cluster. You must also add the certificates from
remote clusters as a trusted CA on the local cluster. Using the same CA to
generate certificates for all nodes simplifies this task.
After enabling and configuring security, you can
<<remote-clusters-connect,connect remote clusters>> from a local cluster.
With your clusters connected, you'll need to
<<remote-clusters-privileges,configure users and privileges>> on both the local
and remote clusters.
If you're configuring a remote cluster for {ccr}, you need to
<<ccr-getting-started-follower-index,configure a follower index>> on your local
cluster to replicate the leader index on a remote cluster.


@ -0,0 +1,105 @@
[[remote-clusters-settings]]
=== Remote cluster settings
The following settings apply to both <<sniff-mode,sniff mode>> and
<<proxy-mode,proxy mode>>. Settings that are specific to sniff mode and proxy
mode are described separately.
`cluster.remote.<cluster_alias>.mode`::
The mode used for a remote cluster connection. The only supported modes are
`sniff` and `proxy`.
`cluster.remote.initial_connect_timeout`::
The time to wait for remote connections to be established when the node
starts. The default is `30s`.
`remote_cluster_client` <<node-roles,role>>::
By default, any node in the cluster can act as a cross-cluster client and
connect to remote clusters. To prevent a node from connecting to remote
clusters, specify the <<node-roles,node.roles>> setting in `elasticsearch.yml`
and exclude `remote_cluster_client` from the listed roles. Search requests
targeting remote clusters must be sent to a node that is allowed to act as a
cross-cluster client. Other features such as {ml} <<general-ml-settings,data
feeds>>, <<general-transform-settings,transforms>>, and
<<ccr-getting-started-tutorial,{ccr}>> require the `remote_cluster_client` role.
`cluster.remote.<cluster_alias>.skip_unavailable`::
Per-cluster boolean setting that allows skipping a specific cluster when it
is the target of a remote cluster request but no nodes belonging to it are
available. Defaults to `false`, meaning that all clusters are mandatory, but
clusters can selectively be made optional by setting this setting to `true`.
`cluster.remote.<cluster_alias>.transport.ping_schedule`::
Sets the time interval between regular application-level ping messages that
are sent to try and keep remote cluster connections alive. If set to `-1`,
application-level ping messages to this remote cluster are not sent. If
unset, application-level ping messages are sent according to the global
`transport.ping_schedule` setting, which defaults to `-1` meaning that pings
are not sent. It is preferable to correctly configure TCP keep-alives instead
of configuring a `ping_schedule`, because TCP keep-alives are handled by the
operating system and not by {es}. By default {es} enables TCP keep-alives on
remote cluster connections. Remote cluster connections are transport
connections so the `transport.tcp.*` <<transport-settings,advanced settings>>
regarding TCP keep-alives apply to them.
`cluster.remote.<cluster_alias>.transport.compress`::
Per-cluster setting that enables you to configure compression for requests
to a specific remote cluster. This setting impacts only requests
sent to the remote cluster. If an inbound request is compressed, {es}
compresses the response. The setting options are `true`,
`indexing_data`, and `false`. If unset, the global `transport.compress` is
used as the fallback setting.
`cluster.remote.<cluster_alias>.transport.compression_scheme`::
Per-cluster setting that enables you to configure the compression scheme for
requests to a specific remote cluster. This setting impacts only requests
sent to the remote cluster. If an inbound request is compressed, {es}
compresses the response using the same compression scheme. The setting options
are `deflate` and `lz4`. If unset, the global `transport.compression_scheme`
is used as the fallback setting.
[[remote-cluster-sniff-settings]]
==== Sniff mode remote cluster settings
`cluster.remote.<cluster_alias>.seeds`::
The list of seed nodes used to sniff the remote cluster state.
`cluster.remote.<cluster_alias>.node_connections`::
The number of gateway nodes to connect to for this remote cluster. The default
is `3`.
`cluster.remote.node.attr`::
A node attribute used to filter out nodes that are eligible as gateway nodes
in the remote cluster. For instance, a node can have the attribute
`node.attr.gateway: true` so that only nodes with this attribute are
connected to if `cluster.remote.node.attr` is set to `gateway`.
[[remote-cluster-proxy-settings]]
==== Proxy mode remote cluster settings
`cluster.remote.<cluster_alias>.proxy_address`::
The address used for all remote connections.
`cluster.remote.<cluster_alias>.proxy_socket_connections`::
The number of socket connections to open per remote cluster. The default is
`18`.
[role="xpack"]
`cluster.remote.<cluster_alias>.server_name`::
An optional hostname string which is sent in the `server_name` field of
the TLS Server Name Indication extension if
<<encrypt-internode-communication,TLS is enabled>>. The TLS transport will fail to open
remote connections if this field is not a valid hostname as defined by the
TLS SNI specification.
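The mode-specific settings above can be sanity-checked before applying them. A hedged sketch whose rules simply restate the constraints described in this section (sniff mode needs `seeds`, proxy mode needs `proxy_address`):

```python
def validate_remote_cluster(alias, config):
    """Check that mode-specific settings match the configured mode."""
    errors = []
    mode = config.get("mode", "sniff")  # sniff is the default mode
    if mode not in ("sniff", "proxy"):
        errors.append(f"{alias}: unsupported mode {mode!r}")
    if mode == "sniff" and not config.get("seeds"):
        errors.append(f"{alias}: sniff mode requires seeds")
    if mode == "proxy" and not config.get("proxy_address"):
        errors.append(f"{alias}: proxy mode requires proxy_address")
    return errors

print(validate_remote_cluster("cluster_three", {"mode": "proxy"}))
# ['cluster_three: proxy mode requires proxy_address']
```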


@ -5,7 +5,7 @@ Each {es} node has two different network interfaces. Clients send requests to
{es}'s REST APIs using its <<http-settings,HTTP interface>>, but nodes
communicate with other nodes using the <<transport-settings,transport
interface>>. The transport interface is also used for communication with
<<modules-remote-clusters,remote clusters>>.
<<remote-clusters,remote clusters>>.
You can configure both of these interfaces at the same time using the
`network.*` settings. If you have a more complicated network, you might need to


@ -346,7 +346,7 @@ node.roles: [ ]
==== Remote-eligible node
A remote-eligible node acts as a cross-cluster client and connects to
<<modules-remote-clusters,remote clusters>>. Once connected, you can search
<<remote-clusters,remote clusters>>. Once connected, you can search
remote clusters using <<modules-cross-cluster-search,{ccs}>>. You can also sync
data between clusters using <<xpack-ccr,{ccr}>>.


@ -1,29 +1,42 @@
[[modules-remote-clusters]]
[[remote-clusters]]
== Remote clusters
You can connect a local cluster to other {es} clusters, known as _remote
clusters_. Once connected, you can search remote clusters using
<<modules-cross-cluster-search,{ccs}>>. You can also sync data between clusters
using <<xpack-ccr,{ccr}>>.
clusters_. Remote clusters can be located in different datacenters or
geographic regions, and contain indices or data streams that can be replicated
with {ccr} or searched by a local cluster using {ccs}.
To register a remote cluster, connect the local cluster to nodes in the
remote cluster using one of two connection modes:
With <<xpack-ccr,{ccr}>>, you ingest data to an index on a remote cluster. This
_leader_ index is replicated to one or more read-only _follower_ indices on your local cluster. Creating a multi-cluster architecture with {ccr} enables you to
configure disaster recovery, bring data closer to your users, or establish a
centralized reporting cluster to process reports locally.
* <<sniff-mode,Sniff mode>>
* <<proxy-mode,Proxy mode>>
<<modules-cross-cluster-search,{ccs-cap}>> enables you to run a search request
against one or more remote clusters. This capability provides each region
with a global view of all clusters, allowing you to send a search request from
a local cluster and return results from all connected remote clusters.
Your local cluster uses the <<modules-network,transport layer>> to establish
communication with remote clusters. The coordinating nodes in the local cluster
establish <<long-lived-connections,long-lived>> TCP connections with specific
nodes in the remote cluster. {es} requires these connections to remain open,
even if the connections are idle for an extended period.
Enabling and configuring security is important on both local and remote
clusters. When connecting a local cluster to remote clusters, an {es} superuser
(such as the `elastic` user) on the local cluster gains total read access to the
remote clusters. To use {ccr} and {ccs} safely,
<<remote-clusters-security,enable security>> on all connected clusters
and configure Transport Layer Security (TLS) on at least the transport level on
every node.
You can use the <<cluster-remote-info, remote cluster info API>> to get
information about registered remote clusters.
Furthermore, a local administrator at the operating system level
with sufficient access to {es} configuration files and private keys can
potentially take over a remote cluster. Ensure that your security strategy
includes securing local _and_ remote clusters at the operating system level.
To register a remote cluster,
<<remote-clusters-connect,connect the local cluster>> to nodes in the
remote cluster using sniff mode (default) or proxy mode. After registering
remote clusters, <<remote-clusters-privileges,configure privileges>> for {ccr}
and {ccs}.
[[sniff-mode]]
[discrete]
=== Sniff mode
In sniff mode, a cluster is created using a name and a list of seed nodes. When
a remote cluster is registered, its cluster state is retrieved from one of the
The _gateway nodes_ selection depends on the following criteria:
* *version*: Remote nodes must be compatible with the cluster they are
registered to, similar to the rules for
<<rolling-upgrades,rolling upgrades>>:
** Any node can communicate with another node on the same
major version. For example, 7.0 can talk to any 7.x node.
** Only nodes on the last minor version of a certain major version can
symmetric, meaning that if 6.7 can communicate with 7.0, 7.0 can also
communicate with 6.7. The following table depicts version compatibility between
local and remote nodes.
+
[%collapsible%open]
.Version compatibility table
====
// tag::remote-cluster-compatibility-matrix[]
h| Remote cluster | 5.0->5.5 | 5.6 | 6.0->6.6 | 6.7 | 6.8 | 7.0 | 7.1->7.x
* *role*: Dedicated master nodes are never selected as gateway nodes.
* *attributes*: You can tag which nodes should be selected
(see <<remote-clusters-settings,remote cluster settings>>), though such tagged nodes still have
to satisfy the two above requirements.
[[proxy-mode]]
[discrete]
=== Proxy mode
In proxy mode, a cluster is created using a name and a single proxy address.
When you register a remote cluster, a configurable number of socket connections
are opened to the proxy address. The proxy is required to route those
nodes to have accessible publish addresses.
The proxy mode is not the default connection mode and must be configured. Similar
to the sniff <<gateway-nodes-selection,gateway nodes>>, the remote
connections are subject to the same version compatibility rules as
<<rolling-upgrades,rolling upgrades>>.
[discrete]
[[configuring-remote-clusters]]
=== Configuring remote clusters
You can configure remote clusters settings <<configure-remote-clusters-dynamic,globally>>, or configure
settings <<configure-remote-clusters-static,on individual nodes>> in the
`elasticsearch.yml` file.
[discrete]
[[configure-remote-clusters-dynamic]]
===== Dynamically configure remote clusters
Use the <<cluster-update-settings,cluster update settings API>> to dynamically
configure remote settings on every node in the cluster. For example:
[source,console]
--------------------------------
PUT _cluster/settings
{
"persistent": {
"cluster": {
"remote": {
"cluster_one": {
"seeds": [
"127.0.0.1:9300"
]
},
"cluster_two": {
"mode": "sniff",
"seeds": [
"127.0.0.1:9301"
],
"transport.compress": true,
"skip_unavailable": true
},
"cluster_three": {
"mode": "proxy",
"proxy_address": "127.0.0.1:9302"
}
}
}
}
}
--------------------------------
// TEST[setup:host]
// TEST[s/127.0.0.1:9300/\${transport_host}/]
You can dynamically update the compression and ping schedule settings. However,
you must include the `seeds` or `proxy_address` in the settings update request.
For example:
[source,console]
--------------------------------
PUT _cluster/settings
{
"persistent": {
"cluster": {
"remote": {
"cluster_one": {
"seeds": [
"127.0.0.1:9300"
]
},
"cluster_two": {
"mode": "sniff",
"seeds": [
"127.0.0.1:9301"
],
"transport.compress": false
},
"cluster_three": {
"mode": "proxy",
"proxy_address": "127.0.0.1:9302",
"transport.compress": true,
"transport.ping_schedule": "60s"
}
}
}
}
}
--------------------------------
// TEST[continued]
NOTE: When the compression or ping schedule settings change, all the existing
node connections must close and re-open, which can cause in-flight requests to
fail.
You can delete a remote cluster from the cluster settings by passing `null`
values for each remote cluster setting:
[source,console]
--------------------------------
PUT _cluster/settings
{
"persistent": {
"cluster": {
"remote": {
"cluster_two": { <1>
"mode": null,
"seeds": null,
"skip_unavailable": null,
"transport.compress": null
}
}
}
}
}
--------------------------------
// TEST[continued]
<1> `cluster_two` would be removed from the cluster settings, leaving
`cluster_one` and `cluster_three` intact.
[discrete]
[[configure-remote-clusters-static]]
===== Statically configure remote clusters
If you specify settings in `elasticsearch.yml` files, only the nodes with
those settings can connect to the remote cluster and serve remote cluster requests. For example:
[source,yaml]
--------------------------------
cluster:
remote:
cluster_one: <1>
seeds: 127.0.0.1:9300 <2>
cluster_two: <1>
mode: sniff <3>
seeds: 127.0.0.1:9301 <2>
transport.compress: true <4>
skip_unavailable: true <5>
cluster_three: <1>
mode: proxy <3>
proxy_address: 127.0.0.1:9302 <6>
--------------------------------
<1> `cluster_one`, `cluster_two`, and `cluster_three` are arbitrary _cluster aliases_
representing the connection to each cluster. These names are subsequently used to
distinguish between local and remote indices.
<2> The hostname and <<transport-settings,transport port>> (default: 9300) of a
seed node in the remote cluster.
<3> The configured connection mode. By default, this is <<sniff-mode,`sniff`>>, so
the mode is implicit for `cluster_one`. However, it can be explicitly configured
as demonstrated by `cluster_two` and must be explicitly configured for
<<proxy-mode,proxy mode>> as demonstrated by `cluster_three`.
<4> Compression is explicitly enabled for requests to `cluster_two`.
<5> Disconnected remote clusters are optional for `cluster_two`.
<6> The address for the proxy endpoint used to connect to `cluster_three`.
[discrete]
[[remote-cluster-settings]]
=== Global remote cluster settings
These settings apply to both <<sniff-mode,sniff mode>> and
<<proxy-mode,proxy mode>>. <<remote-cluster-sniff-settings,Sniff mode settings>>
and <<remote-cluster-proxy-settings,proxy mode settings>> are described
separately.
`cluster.remote.<cluster_alias>.mode`::
The mode used for a remote cluster connection. The only supported modes are
`sniff` and `proxy`.
`cluster.remote.initial_connect_timeout`::
The time to wait for remote connections to be established when the node
starts. The default is `30s`.
`remote_cluster_client` <<node-roles,role>>::
By default, any node in the cluster can act as a cross-cluster client and
connect to remote clusters. To prevent a node from connecting to remote
clusters, specify the <<node-roles,node.roles>> setting in `elasticsearch.yml`
and exclude `remote_cluster_client` from the listed roles. Search requests
targeting remote clusters must be sent to a node that is allowed to act as a
cross-cluster client. Other features such as {ml} <<general-ml-settings,data
feeds>>, <<general-transform-settings,transforms>>, and
<<ccr-getting-started,{ccr}>> require the `remote_cluster_client` role.
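For example, a sketch of `elasticsearch.yml` for a node that should not act as a
cross-cluster client lists its roles explicitly and omits
`remote_cluster_client` (the other roles shown are illustrative):

[source,yaml]
----
# This node cannot connect to remote clusters because
# remote_cluster_client is not among its roles
node.roles: [ master, data, ingest ]
----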
`cluster.remote.<cluster_alias>.skip_unavailable`::
  Per-cluster boolean setting that allows you to skip specific clusters when no
  nodes belonging to them are available and they are the target of a remote
  cluster request. The default is `false`, meaning that all clusters are
  mandatory, but you can selectively make individual clusters optional by
  setting this setting to `true`.
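For example, this sketch dynamically marks the `cluster_two` connection from
the earlier configuration examples as optional:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.remote.cluster_two.skip_unavailable": true
  }
}
----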
`cluster.remote.<cluster_alias>.transport.ping_schedule`::
Sets the time interval between regular application-level ping messages that
are sent to try and keep remote cluster connections alive. If set to `-1`,
application-level ping messages to this remote cluster are not sent. If
unset, application-level ping messages are sent according to the global
`transport.ping_schedule` setting, which defaults to `-1` meaning that pings
are not sent. It is preferable to correctly configure TCP keep-alives instead
of configuring a `ping_schedule`, because TCP keep-alives are handled by the
operating system and not by {es}. By default {es} enables TCP keep-alives on
remote cluster connections. Remote cluster connections are transport
connections so the `transport.tcp.*` <<transport-settings,advanced settings>>
regarding TCP keep-alives apply to them.
`cluster.remote.<cluster_alias>.transport.compress`::
Per cluster setting that enables you to configure compression for requests
to a specific remote cluster. This setting impacts only requests
sent to the remote cluster. If the inbound request is compressed,
  {es} compresses the response. The setting options are `true`,
`indexing_data`, and `false`. If unset, the global `transport.compress` is
used as the fallback setting.
`cluster.remote.<cluster_alias>.transport.compression_scheme`::
Per cluster setting that enables you to configure compression scheme for
requests to a specific remote cluster. This setting impacts only requests
sent to the remote cluster. If an inbound request is compressed, {es}
compresses the response using the same compression scheme. The setting options
are `deflate` and `lz4`. If unset, the global `transport.compression_scheme`
is used as the fallback setting.
[discrete]
[[remote-cluster-sniff-settings]]
=== Sniff mode remote cluster settings
`cluster.remote.<cluster_alias>.seeds`::
The list of seed nodes used to sniff the remote cluster state.
`cluster.remote.<cluster_alias>.node_connections`::
The number of gateway nodes to connect to for this remote cluster. The default
is `3`.
`cluster.remote.node.attr`::
A node attribute to filter out nodes that are eligible as a gateway node in
the remote cluster. For instance a node can have a node attribute
`node.attr.gateway: true` such that only nodes with this attribute will be
connected to if `cluster.remote.node.attr` is set to `gateway`.
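As a sketch of that example, the attribute is set on eligible gateway nodes in
the remote cluster and filtered on from the local cluster (both snippets go in
`elasticsearch.yml` on the respective clusters):

[source,yaml]
----
# On eligible gateway nodes in the remote cluster:
node.attr.gateway: true

# On nodes in the local cluster:
cluster.remote.node.attr: gateway
----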
[discrete]
[[remote-cluster-proxy-settings]]
=== Proxy mode remote cluster settings
`cluster.remote.<cluster_alias>.proxy_address`::
The address used for all remote connections.
`cluster.remote.<cluster_alias>.proxy_socket_connections`::
The number of socket connections to open per remote cluster. The default is
`18`.
[role="xpack"]
`cluster.remote.<cluster_alias>.server_name`::
An optional hostname string which is sent in the `server_name` field of
the TLS Server Name Indication extension if
<<encrypt-internode-communication,TLS is enabled>>. The TLS transport will fail to open
remote connections if this field is not a valid hostname as defined by the
TLS SNI specification.
include::cluster/remote-clusters-security.asciidoc[]
include::cluster/remote-clusters-connect.asciidoc[]
include::../../../x-pack/docs/en/security/authentication/remote-clusters-privileges.asciidoc[]
include::cluster/remote-clusters-settings.asciidoc[]


@ -3,10 +3,55 @@
The following pages have moved or been deleted.
// [START] Remote clusters
[role="exclude",id="ccs-clients-integrations"]
== {ccs-cap}, clients, and integrations
Refer to <<security-clients-integrations,Securing clients and integrations>>.
[role="exclude",id="cross-cluster-configuring"]
=== {ccs-cap} and security
Refer to <<remote-clusters-security,configure remote clusters with security>>.
[role="exclude",id="cross-cluster-kibana"]
==== {ccs-cap} and {kib}
Refer to <<clusters-privileges-ccs-kibana,Configure privileges for {ccs} and {kib}>>.
[role="exclude",id="ccr-getting-started"]
=== Configure {ccr}
Refer to <<ccr-getting-started-tutorial,Set up {ccr}>>.
[role="exclude",id="ccr-getting-started-remote-cluster"]
==== Connect a remote cluster
Refer to <<remote-clusters-connect,Connecting remote clusters>>.
[role="exclude",id="modules-remote-clusters"]
=== Remote clusters
Refer to <<remote-clusters,Remote clusters>>.
[role="exclude",id="configuring-remote-clusters"]
==== Configuring remote clusters
Refer to <<remote-clusters-connect,Connect remote clusters>>.
[role="exclude",id="remote-cluster-settings"]
==== Remote cluster settings
Refer to <<remote-clusters-settings,Remote clusters settings>>.
// [END] Remote clusters
[role="exclude",id="restore-cluster-data"]
=== Restore a cluster's data
See <<restore-entire-cluster>>.
[role="exclude",id="alias"]
=== Aliases
For field aliases, refer to the <<field-alias,alias field type>>.
For index and data stream aliases, refer to <<aliases>>.
[role="exclude",id="modules-scripting-other-layers"]
=== Other security layers
Refer to <<modules-scripting-security>>.
For other information, see:
* <<modules-threadpool>>
* <<modules-node>>
* <<modules-plugins>>
* <<remote-clusters>>
[role="exclude",id="modules-http"]
=== HTTP


GET my-index/_msearch/template
* If the {es} {security-features} are enabled, you must have the `read`
<<privileges-list-indices,index privilege>> for the target data stream, index,
or alias. For cross-cluster search, see <<remote-clusters-security>>.
[[multi-search-template-api-path-params]]
==== {api-path-parms-title}


GET my-index-000001/_msearch
* If the {es} {security-features} are enabled, you must have the `read`
<<privileges-list-indices,index privilege>> for the target data stream, index,
or alias. For cross-cluster search, see <<remote-clusters-security>>.
[[search-multi-search-api-desc]]
==== {api-description-title}


GET my-index/_search/template
* If the {es} {security-features} are enabled, you must have the `read`
<<privileges-list-indices,index privilege>> for the target data stream, index,
or alias. For cross-cluster search, see <<remote-clusters-security>>.
[[search-template-api-path-params]]
==== {api-path-parms-title}


{es} generally allows you to quickly search across large amounts of data. There are
situations where a search executes on many shards, possibly against
<<freeze-index-api,frozen indices>> and spanning multiple
<<remote-clusters,remote clusters>>, for which
results are not expected to be returned in milliseconds. When you need to
execute long-running searches, synchronously waiting for their results to be
returned is not ideal. Instead, async search lets


[[modules-cross-cluster-search]]
== Search across clusters
*{ccs-cap}* lets you run a single search request against one or more remote
clusters. For example, you can use a {ccs} to filter and analyze log data stored
on clusters in different data centers.
IMPORTANT: {ccs-cap} requires <<remote-clusters,remote clusters>>.
[discrete]
[[ccs-supported-apis]]
The following APIs support {ccs}:
[[ccs-remote-cluster-setup]]
==== Remote cluster setup
To perform a {ccs}, you must have at least one
<<remote-clusters-connect,remote cluster configured>>.
TIP: If you want to search across clusters in the cloud, you can
link:{cloud}/ec-enable-ccs.html[configure remote clusters on {ess}]. Then, you
can search across clusters and <<ccr-getting-started-tutorial,set up {ccr}>>.
The following <<cluster-update-settings,cluster update settings>> API request
adds three remote clusters: `cluster_one`, `cluster_two`, and `cluster_three`.


GET /my-index-000001/_search
* If the {es} {security-features} are enabled, you must have the `read`
<<privileges-list-indices,index privilege>> for the target data stream, index,
or alias. For cross-cluster search, see <<remote-clusters-privileges-ccs>>.
+
To search a <<point-in-time-api,point in time (PIT)>> for an alias, you
must have the `read` index privilege for the alias's data streams or indices.


detects a node failure it reacts by reallocating lost shards, rerouting
searches, and maybe electing a new master node. Highly available clusters must
be able to detect node failures promptly, which can be achieved by reducing the
permitted number of retransmissions. Connections to
<<remote-clusters,remote clusters>> should also prefer to detect
failures much more quickly than the Linux default allows. Linux users should
therefore reduce the maximum number of TCP retransmissions.


[[remote-clusters-privileges]]
=== Configure roles and users for remote clusters
After <<remote-clusters-connect,connecting remote clusters>>, you create a
user role on both the local and remote clusters and assign necessary privileges.
These roles are required to use {ccr} and {ccs}.
IMPORTANT: You must use the same role names on both the local
and remote clusters. For example, the following configuration for {ccr} uses the
`remote-replication` role name on both the local and remote clusters. However,
you can specify different role definitions on each cluster.
You can manage users and roles from Stack Management in {kib} by selecting
*Security > Roles* from the side navigation. You can also use the
<<security-role-mapping-apis,role management APIs>> to add, update, remove, and
retrieve roles dynamically. When you use the APIs to manage roles in the
`native` realm, the roles are stored in an internal {es} index.
The following requests use the
<<security-api-put-role,create or update roles API>>. You must have at least the
`manage_security` cluster privilege to use this API.
[[remote-clusters-privileges-ccr]]
//tag::configure-ccr-privileges[]
==== Configure privileges for {ccr}
The {ccr} user requires different cluster and index privileges on the remote
cluster and local cluster. Use the following requests to create separate roles
on the local and remote clusters, and then create a user with the required
roles.
[discrete]
===== Remote cluster
On the remote cluster that contains the leader index, the {ccr} role requires
the `read_ccr` cluster privilege, and `monitor` and `read` privileges on the
leader index.
NOTE: If requests will be issued <<run-as-privilege,on behalf of other users>>,
then the authenticating user must have the `run_as` privilege on the remote
cluster.
The following request creates a `remote-replication` role on the remote cluster:
[source,console]
----
POST /_security/role/remote-replication
{
"cluster": [
"read_ccr"
],
"indices": [
{
"names": [
"leader-index-name"
],
"privileges": [
"monitor",
"read"
]
}
]
}
----
////
[source,console]
----
DELETE /_security/role/remote-replication
----
// TEST[continued]
////
[discrete]
===== Local cluster
On the local cluster that contains the follower index, the `remote-replication`
role requires the `manage_ccr` cluster privilege, and `monitor`, `read`, `write`,
and `manage_follow_index` privileges on the follower index.
The following request creates a `remote-replication` role on the local cluster:
[source,console]
----
POST /_security/role/remote-replication
{
"cluster": [
"manage_ccr"
],
"indices": [
{
"names": [
"follower-index-name"
],
"privileges": [
"monitor",
"read",
"write",
"manage_follow_index"
]
}
]
}
----
After creating the `remote-replication` role on each cluster, use the
<<security-api-put-user,create or update users API>> to create a user on
the local cluster and assign the `remote-replication` role. For
example, the following request assigns the `remote-replication` role to a user
named `cross-cluster-user`:
[source,console]
----
POST /_security/user/cross-cluster-user
{
"password" : "l0ng-r4nd0m-p@ssw0rd",
"roles" : [ "remote-replication" ]
}
----
// TEST[continued]
NOTE: You only need to create this user on the *local* cluster.
//end::configure-ccr-privileges[]
You can then <<ccr-getting-started-tutorial,configure {ccr}>> to replicate your
data across datacenters.
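For example, a sketch of a follow request that the `cross-cluster-user` might
then issue on the local cluster, assuming the remote cluster was registered
under the hypothetical alias `leader`:

[source,console]
----
PUT /follower-index-name/_ccr/follow
{
  "remote_cluster": "leader",
  "leader_index": "leader-index-name"
}
----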
[[remote-clusters-privileges-ccs]]
==== Configure privileges for {ccs}
The {ccs} user requires different cluster and index privileges on the remote
cluster and local cluster. The following requests create separate roles on the
local and remote clusters, and then create a user with the required roles.
[discrete]
===== Remote cluster
On the remote cluster, the {ccs} role requires the `read` and
`read_cross_cluster` privileges for the target indices.
NOTE: If requests will be issued <<run-as-privilege,on behalf of other users>>,
then the authenticating user must have the `run_as` privilege on the remote
cluster.
The following request creates a `remote-search` role on the remote cluster:
[source,console]
----
POST /_security/role/remote-search
{
"indices": [
{
"names": [
"target-indices"
],
"privileges": [
"read",
"read_cross_cluster"
]
}
]
}
----
////
[source,console]
----
DELETE /_security/role/remote-search
----
// TEST[continued]
////
[discrete]
===== Local cluster
On the local cluster, which is the cluster used to initiate cross-cluster
search, a user only needs the `remote-search` role. The role privileges can be
empty.
The following request creates a `remote-search` role on the local cluster:
[source,console]
----
POST /_security/role/remote-search
{}
----
After creating the `remote-search` role on each cluster, use the
<<security-api-put-user,create or update users API>> to create a user on the
local cluster and assign the `remote-search` role. For example, the following
request assigns the `remote-search` role to a user named `cross-search-user`:
[source,console]
----
POST /_security/user/cross-search-user
{
"password" : "l0ng-r4nd0m-p@ssw0rd",
"roles" : [ "remote-search" ]
}
----
// TEST[continued]
NOTE: You only need to create this user on the *local* cluster.
Users with the `remote-search` role can then
<<modules-cross-cluster-search,search across clusters>>.
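For example, assuming the remote cluster was registered under the hypothetical
alias `cluster_one`, a user with the `remote-search` role could run:

[source,console]
----
GET /cluster_one:target-indices/_search
{
  "query": {
    "match_all": {}
  }
}
----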
[[clusters-privileges-ccs-kibana]]
==== Configure privileges for {ccs} and {kib}
When using {kib} to search across multiple clusters, a two-step authorization
process determines whether or not the user can access data streams and indices
on a remote cluster:
* First, the local cluster determines if the user is authorized to access remote
clusters. The local cluster is the cluster that {kib} is connected to.
* If the user is authorized, the remote cluster then determines if the user has
access to the specified data streams and indices.
To grant {kib} users access to remote clusters, assign them a local role
with read privileges to indices on the remote clusters. You specify data streams
and indices in a remote cluster as `<remote_cluster_name>:<target>`.
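For example, a sketch of a local role granting read access to `logstash-*`
indices on a remote cluster registered under the hypothetical alias `archive`
(the role name is also illustrative):

[source,console]
----
POST /_security/role/remote-logstash-reader
{
  "indices": [
    {
      "names": [ "archive:logstash-*" ],
      "privileges": [ "read" ]
    }
  ]
}
----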
To grant users read access to the remote data streams and indices, you must
create a matching role on the remote clusters that grants the
`read_cross_cluster` privilege with access to the appropriate data streams and
indices.
For example, you might be actively indexing {ls} data on a local cluster and
periodically offloading older time-based indices to an archive on your remote
cluster. To search across both clusters, you must enable {kib}
users on both clusters.
[discrete]
===== Local cluster
On the local cluster, create a `logstash-reader` role that grants
`read` and `view_index_metadata` privileges on the local `logstash-*` indices.
NOTE: If you configure the local cluster as another remote in {es}, the
`logstash-reader` role on your local cluster also needs to grant the
`read_cross_cluster` privilege.
[source,console]
----
POST /_security/role/logstash-reader
{
"indices": [
{
"names": [
"logstash-*"
],
"privileges": [
"read",
"view_index_metadata"
]
}
]
}
----
Assign your {kib} users a role that grants
{kibana-ref}/xpack-security-authorization.html[access to {kib}], as well as your
`logstash-reader` role. For example, the following request creates the
`cross-cluster-kibana` user and assigns the `kibana-access` and
`logstash-reader` roles.
[source,console]
----
PUT /_security/user/cross-cluster-kibana
{
"password" : "l0ng-r4nd0m-p@ssw0rd",
"roles" : [
"logstash-reader",
"kibana-access"
]
}
----
[discrete]
===== Remote cluster
On the remote cluster, create a `logstash-reader` role that grants the
`read_cross_cluster` privilege and `read` and `view_index_metadata` privileges
for the `logstash-*` indices.
[source,console]
----
POST /_security/role/logstash-reader
{
"indices": [
{
"names": [
"logstash-*"
],
"privileges": [
"read_cross_cluster",
"read",
"view_index_metadata"
]
}
]
}
----


include::built-in-roles.asciidoc[]
include::managing-roles.asciidoc[]
include::stack-management.asciidoc[]
include::privileges.asciidoc[]
include::document-level-security.asciidoc[]


[role="xpack"]
[[stack-management]]
=== Granting access to Stack Management features
You <<defining-roles,define roles>> and set user privileges at different levels
to grant access to each of the Elastic Stack features.
[[stack-management-ccr]]
==== {ccr-cap}
The {ccr} user requires different cluster and index privileges on the remote
cluster and local cluster.
[[stack-management-ccr-remote]]
On the remote cluster that contains the leader index, the {ccr} user requires
`read_ccr` cluster privilege and `monitor` and `read` privileges on the
leader index.
[source,yml]
--------------------------------------------------
ccr_user:
cluster:
- read_ccr
indices:
- names: [ 'leader-index' ]
privileges:
- monitor
- read
--------------------------------------------------
[[stack-management-ccr-local]]
On the local cluster that contains the follower index, the {ccr} user requires the `manage_ccr` cluster privilege and `monitor`, `read`, `write` and
`manage_follow_index` privileges on the follower index.
[source,yml]
--------------------------------------------------
ccr_user:
cluster:
- manage_ccr
indices:
- names: [ 'follower-index' ]
privileges:
- monitor
- read
- write
- manage_follow_index
--------------------------------------------------
If you are managing
<<ccr-getting-started-remote-cluster,connecting to the remote cluster>> using
the cluster update settings API, you will also need a user with the `all`
cluster privilege.


[[cross-cluster-kibana]]
==== {ccs-cap} and {kib}
When {kib} is used to search across multiple clusters, a two-step authorization
process determines whether or not the user can access data streams and indices on a remote
cluster:
* First, the local cluster determines if the user is authorized to access remote
clusters. (The local cluster is the cluster {kib} is connected to.)
* If they are, the remote cluster then determines if the user has access
to the specified data streams and indices.
To grant {kib} users access to remote clusters, assign them a local role
with read privileges to indices on the remote clusters. You specify data streams and indices in a remote cluster as `<remote_cluster_name>:<target>`.
To enable users to actually read the remote data streams and indices, you must create a matching
role on the remote clusters that grants the `read_cross_cluster` privilege
and access to the appropriate data streams and indices.
For example, if {kib} is connected to the cluster where you're actively
indexing {ls} data (your _local cluster_) and you're periodically
offloading older time-based indices to an archive cluster
(your _remote cluster_) and you want to enable {kib} users to search both
clusters:
. On the local cluster, create a `logstash_reader` role that grants
`read` and `view_index_metadata` privileges on the local `logstash-*` indices.
+
NOTE: If you configure the local cluster as another remote in {es}, the
`logstash_reader` role on your local cluster also needs to grant the
`read_cross_cluster` privilege.
. Assign your {kib} users a role that grants
{kibana-ref}/xpack-security-authorization.html[access to {kib}]
as well as your `logstash_reader` role.
. On the remote cluster, create a `logstash_reader` role that grants the
`read_cross_cluster` privilege and `read` and `view_index_metadata` privileges
for the `logstash-*` indices.


[[cross-cluster-configuring]]
=== {ccs-cap} and security
<<modules-cross-cluster-search,{ccs-cap}>> enables
federated search across multiple clusters. When using cross cluster search
with secured clusters, all clusters must have the {es} {security-features}
enabled.
The local cluster (the cluster used to initiate cross cluster search) must be
allowed to connect to the remote clusters, which means that the CA used to
sign the SSL/TLS key of the local cluster must be trusted by the remote
clusters.
User authentication is performed on the local cluster and the user and user's
roles are passed to the remote clusters. A remote cluster checks the user's
roles against its local role definitions to determine which indices the user
is allowed to access.
[WARNING]
This feature was added as Beta in {es} `v5.3` with further improvements made in
5.4 and 5.5. It requires gateway eligible nodes to be on `v5.5` onwards.
To use cross cluster search with secured clusters:
* Enable the {es} {security-features} on every node in each connected cluster.
For more information about the `xpack.security.enabled` setting, see
<<security-settings>>.
* Enable encryption globally. To encrypt communications, you must enable
<<encrypt-internode-communication,enable SSL/TLS>> on every node.
* Enable a trust relationship between the cluster used for performing cross
cluster search (the local cluster) and all remote clusters. This can be done
either by:
+
** Using the same certificate authority to generate certificates for all
connected clusters, or
** Adding the CA certificate from the local cluster as a trusted CA in
each remote cluster (see <<transport-tls-ssl-settings>>).
* On the local cluster, ensure that users are assigned to (at least) one role
that exists on the remote clusters. On the remote clusters, use that role
to define which indices the user may access. (See <<authorization>>).
* Configure the local cluster to connect to remote clusters as described
in <<configuring-remote-clusters>>.
For example, the following configuration adds two remote clusters
to the local cluster:
+
--
[source,console]
-----------------------------------------------------------
PUT _cluster/settings
{
"persistent": {
"cluster": {
"remote": {
"one": {
"seeds": [ "10.0.1.1:9300" ]
},
"two": {
"seeds": [ "10.0.2.1:9300" ]
}
}
}
}
}
-----------------------------------------------------------
--
[[cross-cluster-configuring-example]]
==== Example configuration of cross cluster search
In the following example, we will configure the user `alice` to have permissions
to search any data stream or index starting with `logs-` in cluster `two` from
cluster `one`.
First, enable cluster `one` to perform cross cluster search on remote cluster
`two` by running the following request as the superuser on cluster `one`:
[source,console]
-----------------------------------------------------------
PUT _cluster/settings
{
"persistent": {
"cluster.remote.two.seeds": [ "10.0.2.1:9300" ]
}
}
-----------------------------------------------------------
Next, set up a role called `cluster_two_logs` on both cluster `one` and
cluster `two`.
On cluster `one`, this role does not need any special privileges:
[source,console]
-----------------------------------------------------------
POST /_security/role/cluster_two_logs
{
}
-----------------------------------------------------------
On cluster `two`, this role allows the user to query local indices whose names
begin with `logs-` from a remote cluster:
[source,console]
-----------------------------------------------------------
POST /_security/role/cluster_two_logs
{
"cluster": [],
"indices": [
{
"names": [
"logs-*"
],
"privileges": [
"read",
"read_cross_cluster"
]
}
]
}
-----------------------------------------------------------
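As a quick check, you can retrieve the role definition on either cluster with
the get roles API and confirm that, on cluster `two`, the `indices` section
includes the `read_cross_cluster` privilege:

[source,console]
-----------------------------------------------------------
GET /_security/role/cluster_two_logs
-----------------------------------------------------------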
Finally, create a user on cluster `one` and apply the `cluster_two_logs` role:
[source,console]
-----------------------------------------------------------
POST /_security/user/alice
{
"password" : "somepasswordhere",
"roles" : [ "cluster_two_logs" ],
"full_name" : "Alice",
"email" : "alice@example.com",
"enabled": true
}
-----------------------------------------------------------
With this configuration in place, the user `alice` can search indices in
cluster `two` as follows:
[source,console]
-----------------------------------------------------------
GET two:logs-2017.04/_search
{
"query": {
"match_all": {}
}
}
-----------------------------------------------------------
// TEST[skip:todo]
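The index expression in a search request can mix local and remote targets. For
example, assuming `alice` is also granted `read` access to matching indices on
cluster `one` (the role above grants no local index privileges), a single
request could search `logs-` indices on both clusters:

[source,console]
-----------------------------------------------------------
GET logs-*,two:logs-*/_search
{
  "query": {
    "match_all": {}
  }
}
-----------------------------------------------------------
// TEST[skip:todo]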
include::cross-cluster-kibana.asciidoc[]

View file

@ -1,22 +1,13 @@
[role="xpack"]
[[ccs-clients-integrations]]
== {ccs-cap}, clients, and integrations
When using <<modules-cross-cluster-search,{ccs}>>
you need to take extra steps to secure communications with the connected
clusters.
* <<cross-cluster-configuring>>
You will need to update the configuration for several clients to work with a
secured cluster:
* <<http-clients>>
[[security-clients-integrations]]
== Securing clients and integrations
You will need to update the configuration for several <<http-clients,clients>>
to work with a secured {es} cluster.
The {es} {security-features} enable you to secure your {es} cluster. But
{es} itself is only one product within the {stack}. It is often the case that
other products in the {stack} are connected to the cluster and therefore need to
be secured as well, or at least communicate with the cluster in a secured way:
* <<hadoop, Apache Hadoop>>
@ -31,8 +22,6 @@ be secured as well, or at least communicate with the cluster in a secured way:
* {kibana-ref}/secure-reporting.html[Reporting]
* {winlogbeat-ref}/securing-winlogbeat.html[Winlogbeat]
include::cross-cluster.asciidoc[]
include::http.asciidoc[]
include::hadoop.asciidoc[]