[DOCS] New docs for remote clusters using API key authentication (#98330)

* New docs structure for remote clusters

* Fix broken cross-book link errors

* More broken cross-book link errors

* Remove redirects for new pages

* Link to generic remote cluster docs instead

* Drop 'API' from the abbreviated title

* Add 'Establish trust with a remote cluster' section

* Restructure 'Establish trust' section into Prerequisite/local/remote instructions

* Add 'Configure roles and users' section

* Add 'Connect to a remote cluster' section

* Move version compatibility to prerequisites

* Fix test errors

* Incorporate review feedback

* Mention version 8.10 or later in the intro for API keys

* Add license prerequisite
This commit is contained in:
Abdon Pijpelink 2023-08-24 12:30:03 +02:00 committed by GitHub
parent b9b818e28e
commit 1955bd8ad4
18 changed files with 513 additions and 177 deletions

View file

@ -159,7 +159,7 @@ cluster with cluster alias `leader`.
connected to.
====
include::../../../x-pack/docs/en/security/authentication/remote-clusters-privileges.asciidoc[tag=configure-ccr-privileges]
include::../../../x-pack/docs/en/security/authentication/remote-clusters-privileges-cert.asciidoc[tag=configure-ccr-privileges]
[[ccr-getting-started-follower-index]]
==== Create a follower index to replicate a specific index

View file

@ -48,7 +48,7 @@ or alias.
* See <<eql-required-fields>>.
* experimental:[] For cross-cluster search, the local and remote clusters must
use the same {es} version if they have versions prior to 7.17.7 (included) or prior to 8.5.1 (included). For security, see <<remote-clusters-security>>.
use the same {es} version if they have versions prior to 7.17.7 (included) or prior to 8.5.1 (included). For security, see <<remote-clusters>>.
[[eql-search-api-limitations]]
===== Limitations

View file

@ -0,0 +1,184 @@
[[remote-clusters-api-key]]
=== Add remote clusters using API key authentication
coming::[8.10]
beta::[]
API key authentication enables a local cluster to authenticate itself with a
remote cluster via a <<security-api-create-cross-cluster-api-key,cross-cluster
API key>>. The API key needs to be created by an administrator of the remote
cluster. The local cluster is configured to provide this API key on each request
to the remote cluster. The remote cluster verifies the API key and grants
access, based on the API key's privileges.
All cross-cluster requests from the local cluster are bound by the API key's
privileges, regardless of local users associated with the requests. For example,
if the API key only allows read access to `my-index` on the remote cluster, even
a superuser from the local cluster is limited by this constraint. This mechanism
enables the remote cluster's administrator to have full control over who can
access what data with cross-cluster search and/or cross-cluster replication. The
remote cluster's administrator can be confident that no access is possible
beyond what is explicitly assigned to the API key.
On the local cluster side, not every local user needs to access every piece of
data allowed by the API key. An administrator of the local cluster can further
configure additional permission constraints on local users so each user only
gets access to the necessary remote data. Note it is only possible to further
reduce the permissions allowed by the API key for individual local users. It is
impossible to increase the permissions to go beyond what is allowed by the API
key.
In this model, cross-cluster operations use <<remote_cluster.port,a dedicated
server port>> (remote cluster interface) for communication between clusters. A
remote cluster must enable this port for local clusters to connect. Configure
Transport Layer Security (TLS) for this port to maximize security (as explained
in <<remote-clusters-security-api-key>>).
The local cluster must trust the remote cluster on the remote cluster interface.
This means that the local cluster trusts the remote cluster's certificate
authority (CA) that signs the server certificate used by the remote cluster
interface. When establishing a connection, all nodes from the local cluster that
participate in cross-cluster communication verify certificates from nodes on the
other side, based on the TLS trust configuration.
To add a remote cluster using API key authentication:
. <<remote-clusters-prerequisites-api-key,Review the prerequisites>>
. <<remote-clusters-security-api-key>>
. <<remote-clusters-connect-api-key>>
. <<remote-clusters-privileges-api-key>>
[[remote-clusters-prerequisites-api-key]]
==== Prerequisites
* The {es} security features need to be enabled on both clusters, on every node.
Security is enabled by default. If it's disabled, set `xpack.security.enabled`
to `true` in `elasticsearch.yml`. Refer to <<general-security-settings>>.
* The nodes of the local and remote clusters must be on version 8.10 or later.
* The local and remote clusters must have an appropriate license. For more
information, refer to https://www.elastic.co/subscriptions.
[[remote-clusters-security-api-key]]
==== Establish trust with a remote cluster
===== On the remote cluster
. Enable the remote cluster server port on every node of the remote cluster by
setting `remote_cluster_server.enabled` to `true` in `elasticsearch.yml`. The
port number defaults to `9443` and can be configured with the
`remote_cluster.port` setting. Refer to <<remote-cluster-network-settings>>.
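+
As a minimal sketch, the corresponding `elasticsearch.yml` settings look like
the following (the explicit `remote_cluster.port` line only restates the default
and can be omitted):
+
[source,yaml]
----
remote_cluster_server.enabled: true
remote_cluster.port: 9443
----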
. Next, generate a CA and a server certificate/key pair. On one of the nodes
of the remote cluster, from the directory where {es} has been installed:
.. Create a CA, if you don't have a CA already:
+
[source,sh]
----
./bin/elasticsearch-certutil ca --pem --out=cross-cluster-ca.zip --pass CA_PASSWORD
----
+
Replace `CA_PASSWORD` with the password you want to use for the CA. You can
remove the `--pass` option and its argument if you are not deploying to a
production environment.
.. Unzip the generated `cross-cluster-ca.zip` file. This compressed file
contains the following content:
+
[source,txt]
----
/ca
|_ ca.crt
|_ ca.key
----
.. Generate a certificate and private key pair for the nodes in the remote
cluster:
+
[source,sh]
----
./bin/elasticsearch-certutil cert --out=cross-cluster.p12 --pass=CERT_PASSWORD --ca-cert=ca/ca.crt --ca-key=ca/ca.key --ca-pass=CA_PASSWORD --dns=example.com --ip=127.0.0.1
----
+
* Replace `CA_PASSWORD` with the CA password from the previous step.
* Replace `CERT_PASSWORD` with the password you want to use for the generated
private key.
* Use the `--dns` option to specify the relevant DNS name for the certificate.
You can specify it multiple times for multiple DNS names.
* Use the `--ip` option to specify the relevant IP address for the certificate.
You can specify it multiple times for multiple IP addresses.
.. If the remote cluster has multiple nodes, you can either:
+
* create a single wildcard certificate for all nodes;
* or, create separate certificates for each node either manually or in batch
with the <<certutil-silent,silent mode>>.
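+
For example, a batch run in silent mode could use an `instances.yml` file along
these lines (node names, DNS names, and IP addresses are illustrative):
+
[source,yaml]
----
instances:
  - name: "node1"
    dns: ["node1.example.com"]
    ip: ["192.0.2.1"]
  - name: "node2"
    dns: ["node2.example.com"]
    ip: ["192.0.2.2"]
----
+
[source,sh]
----
./bin/elasticsearch-certutil cert --silent --in instances.yml --out cross-cluster-certs.zip --pass=CERT_PASSWORD --ca-cert=ca/ca.crt --ca-key=ca/ca.key --ca-pass=CA_PASSWORD
----
+
The generated zip archive contains one certificate and key pair per listed node.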
. On every node of the remote cluster:
.. Copy the `cross-cluster.p12` file from the earlier step to the `config`
directory. If you didn't create a wildcard certificate, make sure you copy the
correct node-specific p12 file.
.. Add the following configuration to `elasticsearch.yml`:
+
[source,yaml]
----
xpack.security.remote_cluster_server.ssl.enabled: true
xpack.security.remote_cluster_server.ssl.keystore.path: cross-cluster.p12
----
.. Add the SSL keystore password to the {es} keystore:
+
[source,sh]
----
./bin/elasticsearch-keystore add xpack.security.remote_cluster_server.ssl.keystore.secure_password
----
+
When prompted, enter the `CERT_PASSWORD` from the earlier step.
. Restart the remote cluster.
. On the remote cluster, generate a cross-cluster API key using the
<<security-api-create-cross-cluster-api-key>> API or
{kibana-ref}/api-keys.html[Kibana]. Grant the key the required access for {ccs}
or {ccr}.
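+
For example, a request along these lines creates a key that grants search access
to a single remote index pattern (the key name and index pattern are
illustrative):
+
[source,console]
----
POST /_security/cross_cluster/api_key
{
  "name": "my-cross-cluster-key",
  "access": {
    "search": [
      {
        "names": [ "logs-*" ]
      }
    ]
  }
}
----
// TEST[skip:TODO]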
. Copy the encoded key (`encoded` in the response) to a safe location. You will
need it to connect to the remote cluster later.
===== On the local cluster
. On every node of the local cluster:
.. Copy the `ca.crt` file generated on the remote cluster earlier into the
`config` directory, renaming the file `remote-cluster-ca.crt`.
.. Add the following configuration to `elasticsearch.yml`:
+
[source,yaml]
----
xpack.security.remote_cluster_client.ssl.enabled: true
xpack.security.remote_cluster_client.ssl.certificate_authorities: [ "remote-cluster-ca.crt" ]
----
.. Add the cross-cluster API key, created on the remote cluster earlier, to the
keystore:
+
[source,sh]
----
./bin/elasticsearch-keystore add cluster.remote.ALIAS.credentials
----
+
Replace `ALIAS` with the alias you will use to connect to the remote cluster
later. When prompted, enter the encoded cross-cluster API key created on the
remote cluster earlier.
. Restart the local cluster to load the keystore change.
[[remote-clusters-connect-api-key]]
==== Connect to a remote cluster
:trust-mechanism: api-key
include::remote-clusters-connect.asciidoc[]
:!trust-mechanism:
include::../../../../x-pack/docs/en/security/authentication/remote-clusters-privileges-api-key.asciidoc[leveloffset=+1]

View file

@ -0,0 +1,81 @@
[[remote-clusters-cert]]
=== Add remote clusters using TLS certificate authentication
To add a remote cluster using TLS certificate authentication:
. <<remote-clusters-prerequisites-cert,Review the prerequisites>>
. <<remote-clusters-security-cert>>
. <<remote-clusters-connect-cert>>
. <<remote-clusters-privileges-cert>>
[[remote-clusters-prerequisites-cert]]
==== Prerequisites
. The {es} security features need to be enabled on both clusters, on every node.
Security is enabled by default. If it's disabled, set `xpack.security.enabled`
to `true` in `elasticsearch.yml`. Refer to <<general-security-settings>>.
. The local and remote cluster versions must be compatible.
** Any node can communicate with another node on the same
major version. For example, 7.0 can talk to any 7.x node.
** Only nodes on the last minor version of a certain major version can
communicate with nodes on the following major version. In the 6.x series, 6.8
can communicate with any 7.x node, while 6.7 can only communicate with 7.0.
** Version compatibility is
symmetric, meaning that if 6.7 can communicate with 7.0, 7.0 can also
communicate with 6.7. The following table depicts version compatibility between
local and remote nodes.
+
[%collapsible%open]
.Version compatibility table
====
include::../remote-clusters-shared.asciidoc[tag=remote-cluster-compatibility-matrix]
====
+
IMPORTANT: Elastic only supports {ccs} on a subset of these configurations. See
<<ccs-supported-configurations>>.
[[remote-clusters-security-cert]]
==== Establish trust with a remote cluster
To use {ccr} or {ccs} safely with remote clusters, enable security on all
connected clusters and configure Transport Layer Security (TLS) on every node.
Configuring TLS security on the transport interface is minimally required for
remote clusters. For additional security, configure TLS on the
<<security-basic-setup-https,HTTP interface>> as well.
All connected clusters must trust one another and be mutually authenticated
with TLS on the transport interface. This means that the local cluster
trusts the certificate authority (CA) of the remote cluster, and the remote
cluster trusts the CA of the local cluster. When establishing a connection, all
nodes will verify certificates from nodes on the other side. This mutual trust
is required to securely connect a remote cluster, because all connected nodes
effectively form a single security domain.
User authentication is performed on the local cluster and the user and the
user's role names are passed to the remote clusters. A remote cluster checks the
user's role names against its local role definitions to determine which indices
the user is allowed to access.
Before using {ccr} or {ccs} with secured {es} clusters, complete the following
configuration task:
. Configure Transport Layer Security (TLS) on every node to encrypt internode
traffic and authenticate nodes in the local cluster with nodes in all remote
clusters. Refer to
<<security-basic-setup,set up basic security for the {stack}>> for the required
steps to configure security.
+
NOTE: This procedure uses the same CA to generate certificates for all nodes.
Alternatively, you can add the certificates from the local cluster as a
trusted CA in each remote cluster. You must also add the certificates from
remote clusters as a trusted CA on the local cluster. Using the same CA to
generate certificates for all nodes simplifies this task.
[[remote-clusters-connect-cert]]
==== Connect to a remote cluster
:trust-mechanism: cert
include::remote-clusters-connect.asciidoc[]
:!trust-mechanism:
include::../../../../x-pack/docs/en/security/authentication/remote-clusters-privileges-cert.asciidoc[leveloffset=+1]

View file

@ -1,36 +1,46 @@
[[remote-clusters-connect]]
=== Connect to remote clusters
Your local cluster uses the <<modules-network,transport interface>> to establish
communication with remote clusters. The coordinating nodes in the local cluster
establish <<long-lived-connections,long-lived>> TCP connections with specific
nodes in the remote cluster. {es} requires these connections to remain open,
even if the connections are idle for an extended period.
ifeval::["{trust-mechanism}"=="cert"]
:remote-interface: transport
:remote-interface-default-port: 9300
:remote-interface-default-port-plus1: 9301
:remote-interface-default-port-plus2: 9302
endif::[]
ifeval::["{trust-mechanism}"=="api-key"]
:remote-interface: remote cluster
:remote-interface-default-port: 9443
:remote-interface-default-port-plus1: 9444
:remote-interface-default-port-plus2: 9445
endif::[]
NOTE: You must have the `manage` cluster privilege to connect remote clusters.
The local cluster uses the <<modules-network,{remote-interface} interface>> to
establish communication with remote clusters. The coordinating nodes in the
local cluster establish <<long-lived-connections,long-lived>> TCP connections
with specific nodes in the remote cluster. {es} requires these connections to
remain open, even if the connections are idle for an extended period.
To add a remote cluster from Stack Management in {kib}:
. Select *Remote Clusters* from the side navigation.
. Enter a name (_cluster alias_) for the remote cluster.
. Specify the {es} endpoint URL, or the IP address or host name of the remote
cluster followed by the transport port (defaults to `9300`). For example,
`cluster.es.eastus2.staging.azure.foundit.no:9400` or `192.168.1.1:9300`.
cluster followed by the {remote-interface} port (defaults to
+{remote-interface-default-port}+). For example,
+cluster.es.eastus2.staging.azure.foundit.no:{remote-interface-default-port}+ or
+192.168.1.1:{remote-interface-default-port}+.
Alternatively, use the <<cluster-update-settings,cluster update settings API>>
to add a remote cluster. You can also use this API to
<<configure-remote-clusters-dynamic,dynamically configure>> remote clusters for
_every_ node in the local cluster. To configure remote clusters on individual
nodes in the local cluster, define
<<configure-remote-clusters-static,static settings>> in `elasticsearch.yml` for
each node.
After connecting remote clusters,
<<remote-clusters-privileges,configure roles and users for remote clusters>>.
to add a remote cluster. You can also use this API to dynamically configure
remote clusters for _every_ node in the local cluster. To configure remote
clusters on individual nodes in the local cluster, define static settings in
`elasticsearch.yml` for each node.
The following request adds a remote cluster with an alias of `cluster_one`. This
_cluster alias_ is a unique identifier that represents the connection to the
remote cluster and is used to distinguish between local and remote indices.
[source,console]
[source,console,subs=attributes+]
----
PUT /_cluster/settings
{
@ -39,7 +49,7 @@ PUT /_cluster/settings
"remote" : {
"cluster_one" : { <1>
"seeds" : [
"127.0.0.1:9300" <2>
"127.0.0.1:{remote-interface-default-port}" <2>
]
}
}
@ -48,10 +58,10 @@ PUT /_cluster/settings
}
----
// TEST[setup:host]
// TEST[s/127.0.0.1:9300/\${transport_host}/]
// TEST[s/127.0.0.1:\{remote-interface-default-port\}/\${transport_host}/]
<1> The cluster alias of this remote cluster is `cluster_one`.
<2> Specifies the hostname and transport port of a seed node in the remote
cluster.
<2> Specifies the hostname and {remote-interface} port of a seed node in the
remote cluster.
You can use the <<cluster-remote-info,remote cluster info API>> to verify that
the local cluster is successfully connected to the remote cluster:
@ -65,46 +75,53 @@ GET /_remote/info
The API response indicates that the local cluster is connected to the remote
cluster with the cluster alias `cluster_one`:
[source,console-result]
[source,console-result,subs=attributes+]
----
{
"cluster_one" : {
"seeds" : [
"127.0.0.1:9300"
"127.0.0.1:{remote-interface-default-port}"
],
"connected" : true,
"num_nodes_connected" : 1, <1>
"max_connections_per_cluster" : 3,
"initial_connect_timeout" : "30s",
"skip_unavailable" : false, <2>
ifeval::["{trust-mechanism}"=="api-key"]
"cluster_credentials": "::es_redacted::", <3>
endif::[]
"mode" : "sniff"
}
}
----
// TESTRESPONSE[s/127.0.0.1:9300/$body.cluster_one.seeds.0/]
// TESTRESPONSE[s/127.0.0.1:\{remote-interface-default-port\}/$body.cluster_one.seeds.0/]
// TESTRESPONSE[s/ifeval::(.|\n)*endif::\[\]//]
// TEST[s/"connected" : true/"connected" : $body.cluster_one.connected/]
// TEST[s/"num_nodes_connected" : 1/"num_nodes_connected" : $body.cluster_one.num_nodes_connected/]
<1> The number of nodes in the remote cluster the local cluster is
connected to.
<2> Indicates whether to skip the remote cluster if searched through {ccs} but
no nodes are available.
ifeval::["{trust-mechanism}"=="api-key"]
<3> If present, indicates the remote cluster has connected using API key
authentication.
endif::[]
[[configure-remote-clusters-dynamic]]
==== Dynamically configure remote clusters
===== Dynamically configure remote clusters
Use the <<cluster-update-settings,cluster update settings API>> to dynamically
configure remote settings on every node in the cluster. The following request
adds three remote clusters: `cluster_one`, `cluster_two`, and `cluster_three`.
The `seeds` parameter specifies the hostname and
<<transport-settings,transport port>> (default `9300`) of a seed node in the
remote cluster.
<<modules-network,{remote-interface} port>> (default
+{remote-interface-default-port}+) of a seed node in the remote cluster.
The `mode` parameter determines the configured connection mode, which defaults
to <<sniff-mode,`sniff`>>. Because `cluster_one` doesn't specify a `mode`, it
uses the default. Both `cluster_two` and `cluster_three` explicitly use
different modes.
[source,console]
[source,console,subs=attributes+]
----
PUT _cluster/settings
{
@ -113,20 +130,20 @@ PUT _cluster/settings
"remote": {
"cluster_one": {
"seeds": [
"127.0.0.1:9300"
"127.0.0.1:{remote-interface-default-port}"
]
},
"cluster_two": {
"mode": "sniff",
"seeds": [
"127.0.0.1:9301"
"127.0.0.1:{remote-interface-default-port-plus1}"
],
"transport.compress": true,
"skip_unavailable": true
},
"cluster_three": {
"mode": "proxy",
"proxy_address": "127.0.0.1:9302"
"proxy_address": "127.0.0.1:{remote-interface-default-port-plus2}"
}
}
}
@ -134,11 +151,14 @@ PUT _cluster/settings
}
----
// TEST[setup:host]
// TEST[s/127.0.0.1:9300/\${transport_host}/]
// TEST[s/127.0.0.1:\{remote-interface-default-port\}/\${transport_host}/]
// TEST[s/\{remote-interface-default-port-plus1\}/9301/]
// TEST[s/\{remote-interface-default-port-plus2\}/9302/]
You can dynamically update settings for a remote cluster after the initial configuration. The following request updates the
compression settings for `cluster_two`, and the compression and ping schedule
settings for `cluster_three`.
You can dynamically update settings for a remote cluster after the initial
configuration. The following request updates the compression settings for
`cluster_two`, and the compression and ping schedule settings for
`cluster_three`.
NOTE: When the compression or ping schedule settings change, all existing
node connections must close and re-open, which can cause in-flight requests to
@ -190,8 +210,7 @@ PUT _cluster/settings
----
// TEST[continued]
[[configure-remote-clusters-static]]
==== Statically configure remote clusters
===== Statically configure remote clusters
If you specify settings in `elasticsearch.yml`, only the nodes with
those settings can connect to the remote cluster and serve remote cluster
requests.
@ -204,22 +223,27 @@ In the following example, `cluster_one`, `cluster_two`, and `cluster_three` are
arbitrary cluster aliases representing the connection to each cluster. These
names are subsequently used to distinguish between local and remote indices.
[source,yaml]
[source,yaml,subs=attributes+]
----
cluster:
remote:
cluster_one:
seeds: 127.0.0.1:9300
seeds: 127.0.0.1:{remote-interface-default-port}
cluster_two:
mode: sniff
seeds: 127.0.0.1:9301
seeds: 127.0.0.1:{remote-interface-default-port-plus1}
transport.compress: true <1>
skip_unavailable: true <2>
cluster_three:
mode: proxy
proxy_address: 127.0.0.1:9302 <3>
proxy_address: 127.0.0.1:{remote-interface-default-port-plus2} <3>
----
<1> Compression is explicitly enabled for requests to `cluster_two`.
<2> Disconnected remote clusters are optional for `cluster_two`.
<3> The address for the proxy endpoint used to connect to `cluster_three`.
<3> The address for the proxy endpoint used to connect to `cluster_three`.
:!remote-interface:
:!remote-interface-default-port:
:!remote-interface-default-port-plus1:
:!remote-interface-default-port-plus2:

View file

@ -1,50 +0,0 @@
[[remote-clusters-security]]
=== Configure remote clusters with security
To use {ccr} or {ccs} safely with remote clusters, enable security on all
connected clusters and configure Transport Layer Security (TLS) on every node.
Configuring TLS security on the transport interface is minimally required for
remote clusters. For additional security, configure TLS on the
<<security-basic-setup-https,HTTP interface>> as well.
All connected clusters must trust one another and be mutually authenticated
with TLS on the transport interface. This means that the local cluster
trusts the certificate authority (CA) of the remote cluster, and the remote
cluster trusts the CA of the local cluster. When establishing a connection, all
nodes will verify certificates from nodes on the other side. This mutual trust
is required to securely connect a remote cluster, because all connected nodes
effectively form a single security domain.
User authentication is performed on the local cluster and the user and the
user's role names are passed to the remote clusters. A remote cluster checks the
user's role names against its local role definitions to determine which indices
the user is allowed to access.
Before using {ccr} or {ccs} with secured {es} clusters, complete the following
configuration tasks:
. Enable the {es} {security-features} on every node in each connected cluster by
setting `xpack.security.enabled` to `true` in `elasticsearch.yml`. Refer to the
<<general-security-settings,{es} security settings>>.
. Configure Transport Layer Security (TLS) on every node to encrypt internode
traffic and authenticate nodes in the local cluster with nodes in all remote
clusters. Refer to
<<security-basic-setup,set up basic security for the {stack}>> for the required
steps to configure security.
+
NOTE: This procedure uses the same CA to generate certificates for all nodes.
Alternatively, you can add the certificates from the local cluster as a
trusted CA in each remote cluster. You must also add the certificates from
remote clusters as a trusted CA on the local cluster. Using the same CA to
generate certificates for all nodes simplifies this task.
After enabling and configuring security, you can
<<remote-clusters-connect,connect remote clusters>> from a local cluster.
With your clusters connected, you'll need to
<<remote-clusters-privileges,configure users and privileges>> on both the local
and remote clusters.
If you're configuring a remote cluster for {ccr}, you need to
<<ccr-getting-started-follower-index,configure a follower index>> on your local
cluster to replicate the leader index on a remote cluster.

View file

@ -62,6 +62,7 @@ master-eligible node.
+
Defaults to `9300-9400`.
[[remote_cluster.port]]
`remote_cluster.port`::
(<<static-cluster-setting,Static>>, integer)
beta:[]

View file

@ -5,11 +5,20 @@ clusters_. Remote clusters can be located in different datacenters or
geographic regions, and contain indices or data streams that can be replicated
with {ccr} or searched by a local cluster using {ccs}.
With <<xpack-ccr,{ccr}>>, you ingest data to an index on a remote cluster. This
_leader_ index is replicated to one or more read-only _follower_ indices on your local cluster. Creating a multi-cluster architecture with {ccr} enables you to
configure disaster recovery, bring data closer to your users, or establish a
[[remote-clusters-ccr]]
[discrete]
=== {ccr-cap}
With <<xpack-ccr,{ccr}>>, you ingest data to an index on a remote cluster. This
_leader_ index is replicated to one or more read-only _follower_ indices on your
local cluster. Creating a multi-cluster architecture with {ccr} enables you to
configure disaster recovery, bring data closer to your users, or establish a
centralized reporting cluster to process reports locally.
[[remote-clusters-ccs]]
[discrete]
=== {ccs-cap}
<<modules-cross-cluster-search,{ccs-cap}>> enables you to run a search request
against one or more remote clusters. This capability provides each region with a
global view of all clusters, allowing you to send a search request from a local
@ -17,92 +26,72 @@ cluster and return results from all connected remote clusters. For full {ccs}
capabilities, the local and remote cluster must be on the same
{subscriptions}[subscription level].
Enabling and configuring security is important on both local and remote
clusters. When connecting a local cluster to remote clusters, an {es} superuser
(such as the `elastic` user) on the local cluster gains total read access to the
remote clusters. To use {ccr} and {ccs} safely,
<<remote-clusters-security,enable security>> on all connected clusters
and configure Transport Layer Security (TLS) on at least the transport level on
every node.
[[add-remote-clusters]]
[discrete]
=== Add remote clusters
Furthermore, a local administrator at the operating system level
with sufficient access to {es} configuration files and private keys can
potentially take over a remote cluster. Ensure that your security strategy
includes securing local _and_ remote clusters at the operating system level.
To add remote clusters, you can choose between
<<remote-clusters-security-models,two security models>> and
<<sniff-proxy-modes,two connection modes>>. Both security models are compatible
with either of the connection modes.
To register a remote cluster,
<<remote-clusters-connect,connect the local cluster>> to nodes in the
remote cluster using sniff mode (default) or proxy mode. After registering
remote clusters, <<remote-clusters-privileges,configure privileges>> for {ccr}
and {ccs}.
[[remote-clusters-security-models]]
[discrete]
==== Security models
API key based security model::
beta:[]
For clusters on version 8.10 or later, you can use an API key to authenticate
and authorize cross-cluster operations to a remote cluster. This model offers
administrators of both the local and the remote cluster fine-grained access
controls. <<remote-clusters-api-key>>.
Certificate based security model::
Uses mutual TLS authentication for cross-cluster operations. User authentication
is performed on the local cluster and a user's role names are passed to the
remote cluster. In this model, a superuser on the local cluster gains total read
access to the remote cluster, so it is only suitable for clusters that are in
the same security domain. <<remote-clusters-cert>>.
[[sniff-proxy-modes]]
[discrete]
==== Connection modes
[[sniff-mode]]
[discrete]
=== Sniff mode
Sniff mode::
In sniff mode, a cluster is created using a name and a list of seed nodes. When
a remote cluster is registered, its cluster state is retrieved from one of the
seed nodes and up to three _gateway nodes_ are selected as part of remote
cluster requests. This mode requires that the gateway node's publish addresses
are accessible by the local cluster.
+
Sniff mode is the default connection mode.
+
[[gateway-nodes-selection]]
The _gateway nodes_ selection depends on the following criteria:
* *version*: Remote nodes must be compatible with the cluster they are
registered to:
** Any node can communicate with another node on the same
major version. For example, 7.0 can talk to any 7.x node.
** Only nodes on the last minor version of a certain major version can
communicate with nodes on the following major version. In the 6.x series, 6.8
can communicate with any 7.x node, while 6.7 can only communicate with 7.0.
** Version compatibility is
symmetric, meaning that if 6.7 can communicate with 7.0, 7.0 can also
communicate with 6.7. The following table depicts version compatibility between
local and remote nodes.
+
[%collapsible%open]
.Version compatibility table
====
include::remote-clusters-shared.asciidoc[tag=remote-cluster-compatibility-matrix]
====
IMPORTANT: Elastic only supports {ccs} on a subset of these configurations. See
<<ccs-supported-configurations>>.
* *version*: Remote nodes must be compatible with the cluster they are
registered to.
* *role*: By default, any non-<<master-node,master-eligible>> node can act as a
gateway node. Dedicated master nodes are never selected as gateway nodes.
* *attributes*: You can define the gateway nodes for a cluster by setting
<<cluster-remote-node-attr,`cluster.remote.node.attr.gateway`>> to `true`.
However, such nodes still have to satisfy the two above requirements.
[[proxy-mode]]
[discrete]
=== Proxy mode
Proxy mode::
In proxy mode, a cluster is created using a name and a single proxy address.
When you register a remote cluster, a configurable number of socket connections
are opened to the proxy address. The proxy is required to route those
connections to the remote cluster. Proxy mode does not require remote cluster
nodes to have accessible publish addresses.
+
The proxy mode is not the default connection mode and must be configured.
Proxy mode has the same <<gateway-nodes-selection, version compatibility
requirements>> as sniff mode.
requirements>> as sniff mode.
[%collapsible]
[[proxy-mode-version-compatibility]]
.Version compatibility matrix
====
include::remote-clusters-shared.asciidoc[tag=remote-cluster-compatibility-matrix]
====
include::cluster/remote-clusters-api-key.asciidoc[]
IMPORTANT: Elastic only supports {ccs} on a subset of these configurations. See
<<ccs-supported-configurations>>.
include::cluster/remote-clusters-cert.asciidoc[]
include::cluster/remote-clusters-security.asciidoc[]
include::cluster/remote-clusters-connect.asciidoc[]
include::../../../x-pack/docs/en/security/authentication/remote-clusters-privileges.asciidoc[]
include::cluster/remote-clusters-settings.asciidoc[]

View file

@ -111,12 +111,12 @@ Refer to <<security-clients-integrations,Securing clients and integrations>>.
[role="exclude",id="cross-cluster-configuring"]
=== {ccs-cap} and security
Refer to <<remote-clusters-security,configure remote clusters with security>>.
Refer to <<remote-clusters>>.
[role="exclude",id="cross-cluster-kibana"]
==== {ccs-cap} and {kib}
Refer to <<clusters-privileges-ccs-kibana,Configure privileges for {ccs} and {kib}>>.
Refer to <<remote-clusters>>.
[role="exclude",id="ccr-getting-started"]
=== Configure {ccr}
@ -1917,12 +1917,12 @@ See <<delete-synonyms-set>>
See <<put-synonyms-set>>
[role="exclude",id="remote-clusters-api-key"]
=== Add remote clusters using API key authentication
[role="exclude",id="remote-clusters-connect"]
=== Remote clusters
coming::[8.10]
Refer to <<remote-clusters>>
[role="exclude",id="remote-clusters-cert"]
=== Add remote clusters using TLS certificate authentication
[role="exclude",id="remote-clusters-privileges"]
=== Configure roles and users for remote clusters
coming::[8.10]
Refer to <<remote-clusters>>

View file

@ -63,7 +63,7 @@ GET my-index/_msearch/template
* If the {es} {security-features} are enabled, you must have the `read`
<<privileges-list-indices,index privilege>> for the target data stream, index,
or alias. For cross-cluster search, see <<remote-clusters-security>>.
or alias. For cross-cluster search, see <<remote-clusters>>.
[[multi-search-template-api-path-params]]
==== {api-path-parms-title}

View file

@ -26,7 +26,7 @@ GET my-index-000001/_msearch
* If the {es} {security-features} are enabled, you must have the `read`
<<privileges-list-indices,index privilege>> for the target data stream, index,
or alias. For cross-cluster search, see <<remote-clusters-security>>.
or alias. For cross-cluster search, see <<remote-clusters>>.
[[search-multi-search-api-desc]]
==== {api-description-title}

View file

@ -65,7 +65,7 @@ GET my-index/_search/template
* If the {es} {security-features} are enabled, you must have the `read`
<<privileges-list-indices,index privilege>> for the target data stream, index,
or alias. For cross-cluster search, see <<remote-clusters-security>>.
or alias. For cross-cluster search, see <<remote-clusters>>.
[[search-template-api-path-params]]
==== {api-path-parms-title}

View file

@ -50,7 +50,7 @@ https://github.com/mapbox/vector-tile-spec[Mapbox vector tile specification].
* If the {es} {security-features} are enabled, you must have the `read`
<<privileges-list-indices,index privilege>> for the target data stream, index,
or alias. For cross-cluster search, see <<remote-clusters-security>>.
or alias. For cross-cluster search, see <<remote-clusters>>.
[[search-vector-tile-api-path-params]]
==== {api-path-parms-title}

View file

@ -54,7 +54,7 @@ cluster.
* {ccs-cap} requires different security privileges on the local cluster and
remote cluster. See <<remote-clusters-privileges-ccs>> and
<<clusters-privileges-ccs-kibana>>.
<<remote-clusters>>.
[discrete]
[[ccs-example]]

View file

@ -5,7 +5,7 @@
beta::[]
++++
<titleabbrev>Create Cross-Cluster API key API</titleabbrev>
<titleabbrev>Create Cross-Cluster API key</titleabbrev>
++++
Creates an API key of the `cross_cluster` type for the API key based remote cluster access.

View file

@ -0,0 +1,107 @@
[[remote-clusters-privileges-api-key]]
=== Configure roles and users
To use a remote cluster for {ccr} or {ccs}, you need to create user roles with
<<roles-remote-indices-priv,remote indices privileges>> on the local cluster.
You can manage users and roles from Stack Management in {kib} by selecting
*Security > Roles* from the side navigation. You can also use the
<<security-role-apis,role management APIs>> to add, update, remove, and retrieve
roles dynamically.
The following examples use the <<security-api-put-role>> API. You must have at
least the `manage_security` cluster privilege to use this API.
NOTE: The cross-cluster API key used by the local cluster to connect to the
remote cluster must have sufficient privileges to cover all remote indices
privileges required by individual users.
==== Configure privileges for {ccr}
Assuming the remote cluster is connected under the name of `my_remote_cluster`,
the following request creates a role called `remote-replication` on the local
cluster that allows replicating the remote `leader-index` index:
[source,console]
----
POST /_security/role/remote-replication
{
"cluster": [
"manage_ccr"
],
"remote_indices": [
{
"clusters": [ "my_remote_cluster" ],
"names": [
"leader-index"
],
"privileges": [
"cross_cluster_replication"
]
}
]
}
----
// TEST[skip:TODO]
After creating the local `remote-replication` role, use the
<<security-api-put-user>> API to create a user on the local cluster and
assign the `remote-replication` role. For example, the following request assigns
the `remote-replication` role to a user named `cross-cluster-user`:
[source,console]
----
POST /_security/user/cross-cluster-user
{
"password" : "l0ng-r4nd0m-p@ssw0rd",
"roles" : [ "remote-replication" ]
}
----
// TEST[skip:TODO]
Note that you only need to create this user on the local cluster.
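With the role and user in place, the user can start replication by creating a
follower index on the local cluster. A minimal sketch using the create follower
API, reusing the names from the example above (the follower index name is
illustrative):
[source,console]
----
PUT /follower-index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster": "my_remote_cluster",
  "leader_index": "leader-index"
}
----
// TEST[skip:TODO]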
==== Configure privileges for {ccs}
Assuming the remote cluster is connected under the name of `my_remote_cluster`,
the following request creates a `remote-search` role on the local cluster that
allows searching the remote `target-index` index:
[source,console]
----
POST /_security/role/remote-search
{
"remote_indices": [
{
"clusters": [ "my_remote_cluster" ],
"names": [
"target-index"
],
"privileges": [
"read",
"read_cross_cluster",
"view_index_metadata"
]
}
]
}
----
// TEST[skip:TODO]
After creating the `remote-search` role, use the <<security-api-put-user>> API
to create a user on the local cluster and assign the `remote-search` role. For
example, the following request assigns the `remote-search` role to a user named
`cross-search-user`:
[source,console]
----
POST /_security/user/cross-search-user
{
"password" : "l0ng-r4nd0m-p@ssw0rd",
"roles" : [ "remote-search" ]
}
----
// TEST[skip:TODO]
Note that you only need to create this user on the local cluster.
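Users with the `remote-search` role can then run {ccs} requests by prefixing the
remote index with the cluster alias. A minimal sketch, reusing the names from
the example above:
[source,console]
----
GET /my_remote_cluster:target-index/_search
{
  "query": {
    "match_all": {}
  }
}
----
// TEST[skip:TODO]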

View file

@ -1,4 +1,4 @@
[[remote-clusters-privileges]]
[[remote-clusters-privileges-cert]]
=== Configure roles and users for remote clusters
After <<remote-clusters-connect,connecting remote clusters>>, you create a
user role on both the local and remote clusters and assign necessary privileges.
@ -206,7 +206,7 @@ NOTE: You only need to create this user on the *local* cluster.
Users with the `remote-search` role can then
<<modules-cross-cluster-search,search across clusters>>.
[[clusters-privileges-ccs-kibana]]
[[clusters-privileges-ccs-kibana-cert]]
==== Configure privileges for {ccs} and {kib}
When using {kib} to search across multiple clusters, a two-step authorization
process determines whether or not the user can access data streams and indices
@ -299,4 +299,4 @@ POST /_security/role/logstash-reader
}
]
}
----
----

View file

@ -231,7 +231,7 @@ on all {es} API keys.
`transport_client`::
All privileges necessary for a transport client to connect. Required by the remote
cluster to enable <<remote-clusters-security,{ccs}>>.
cluster to enable <<remote-clusters,{ccs}>>.
[[privileges-list-indices]]
==== Indices privileges
@ -371,7 +371,7 @@ more like this, multi percolate/search/termvector, percolate, scroll,
clear_scroll, search, suggest, tv).
`read_cross_cluster`::
Read-only access to the search action from a <<remote-clusters-security,remote cluster>>.
Read-only access to the search action from a <<remote-clusters,remote cluster>>.
`view_index_metadata`::
Read-only access to index and data stream metadata (aliases, exists,