[[allocation-awareness]]
=== Shard allocation awareness

You can use custom node attributes as _awareness attributes_ to enable {es}
to take your physical hardware configuration into account when allocating shards.
If {es} knows which nodes are on the same physical server, in the same rack, or
in the same zone, it can distribute the primary shard and its replica shards to
minimise the risk of losing all shard copies in the event of a failure.

When shard allocation awareness is enabled with the
`cluster.routing.allocation.awareness.attributes` setting, shards are only
allocated to nodes that have values set for the specified awareness
attributes. If you use multiple awareness attributes, {es} considers
each attribute separately when allocating shards.

The allocation awareness settings can be configured in
`elasticsearch.yml` and updated dynamically with the
<<cluster-update-settings,cluster-update-settings>> API.
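
For example, you can inspect the awareness settings currently in effect with
the cluster get settings API (a quick check using `curl`; the
`localhost:9200` endpoint is an assumption about your setup):

[source,sh]
--------------------------------------------------------
curl -X GET "localhost:9200/_cluster/settings?include_defaults=true&pretty"
--------------------------------------------------------
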
NOTE: The number of attribute values determines how many shard copies are
allocated in each location. If the number of nodes in each location is
unbalanced and there are a lot of replicas, replica shards might be left
unassigned.

[float]
[[enabling-awareness]]
==== Enabling shard allocation awareness

To enable shard allocation awareness:

. Specify the location of each node with a custom node attribute. For example,
if you want Elasticsearch to distribute shards across different racks, you might
set an awareness attribute called `rack_id` in each node's `elasticsearch.yml`
config file.
+
[source,yaml]
--------------------------------------------------------
node.attr.rack_id: rack_one
--------------------------------------------------------
+
You can also set custom attributes when you start a node:
+
[source,sh]
--------------------------------------------------------
./bin/elasticsearch -Enode.attr.rack_id=rack_one
--------------------------------------------------------
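+
Once the node is running, you can verify that the attribute was picked up with
the cat nodeattrs API (a quick check using `curl`, assuming the node listens on
`localhost:9200`):
+
[source,sh]
--------------------------------------------------------
curl -X GET "localhost:9200/_cat/nodeattrs?v"
--------------------------------------------------------
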
. Tell {es} to take one or more awareness attributes into account when
allocating shards by setting
`cluster.routing.allocation.awareness.attributes` in *every* master-eligible
node's `elasticsearch.yml` config file.
+
--
[source,yaml]
--------------------------------------------------------
cluster.routing.allocation.awareness.attributes: rack_id <1>
--------------------------------------------------------
<1> Specify multiple attributes as a comma-separated list.
--
+
You can also use the
<<cluster-update-settings,cluster-update-settings>> API to set or update
a cluster's awareness attributes.
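+
For example, a dynamic update with `curl` might look like this (a sketch; the
choice of the `persistent` scope and the `localhost:9200` endpoint are
assumptions about your setup):
+
[source,sh]
--------------------------------------------------------
curl -X PUT "localhost:9200/_cluster/settings?pretty" \
  -H 'Content-Type: application/json' \
  -d '{
    "persistent": {
      "cluster.routing.allocation.awareness.attributes": "rack_id"
    }
  }'
--------------------------------------------------------
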
With this example configuration, if you start two nodes with
`node.attr.rack_id` set to `rack_one` and create an index with 5 primary
shards and 1 replica of each primary, all primaries and replicas are
allocated across the two nodes.
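
Such an index could be created like this (a sketch using `curl`; the index
name `my_index` is a placeholder):

[source,sh]
--------------------------------------------------------
curl -X PUT "localhost:9200/my_index?pretty" \
  -H 'Content-Type: application/json' \
  -d '{
    "settings": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    }
  }'
--------------------------------------------------------
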
If you add two nodes with `node.attr.rack_id` set to `rack_two`,
{es} moves shards to the new nodes, ensuring (if possible)
that no two copies of the same shard are in the same rack.
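
You can watch where the copies end up with the cat shards API (again assuming
the placeholder `my_index` and a local cluster):

[source,sh]
--------------------------------------------------------
curl -X GET "localhost:9200/_cat/shards/my_index?v"
--------------------------------------------------------
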
If `rack_two` fails and takes down both its nodes, by default {es}
allocates the lost shard copies to nodes in `rack_one`. To prevent multiple
copies of a particular shard from being allocated in the same location, you can
enable forced awareness.

[float]
[[forced-awareness]]
==== Forced awareness

By default, if one location fails, Elasticsearch assigns all of the missing
replica shards to the remaining locations. While you might have sufficient
resources across all locations to host your primary and replica shards, a single
location might be unable to host *ALL* of the shards.

To prevent a single location from being overloaded in the event of a failure,
you can set `cluster.routing.allocation.awareness.force` so no replicas are
allocated until nodes are available in another location.

For example, if you have an awareness attribute called `zone` and configure nodes
in `zone1` and `zone2`, you can use forced awareness to prevent Elasticsearch
from allocating replicas if only one zone is available:

[source,yaml]
-------------------------------------------------------------------
cluster.routing.allocation.awareness.attributes: zone
cluster.routing.allocation.awareness.force.zone.values: zone1,zone2 <1>
-------------------------------------------------------------------
<1> Specify all possible values for the awareness attribute.

With this example configuration, if you start two nodes with `node.attr.zone` set
to `zone1` and create an index with 5 shards and 1 replica, Elasticsearch creates
the index and allocates the 5 primary shards but no replicas. Replicas are
only allocated once nodes with `node.attr.zone` set to `zone2` are available.
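
While only `zone1` nodes are present, the cluster health API reports the
replicas as unassigned and the cluster status as `yellow` (a quick check,
assuming a local cluster):

[source,sh]
-------------------------------------------------------------------
curl -X GET "localhost:9200/_cluster/health?pretty"
-------------------------------------------------------------------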