[[red-yellow-cluster-status]]
=== Red or yellow cluster health status

A red or yellow cluster health status indicates one or more shards are not assigned to
a node.

* **Red health status**: The cluster has some unassigned primary shards, which
means that some operations such as searches and indexing may fail.
* **Yellow health status**: The cluster has no unassigned primary shards but some
unassigned replica shards. This increases your risk of data loss and can degrade
cluster performance.

When your cluster has a red or yellow health status, it will continue to process
searches and indexing where possible, but may delay certain management and
cleanup activities until the cluster returns to green health status. For instance,
some <<index-lifecycle-management,{ilm-init}>> actions require the index on which they
operate to have a green health status.

In many cases, your cluster will recover to green health status automatically.
If the cluster doesn't automatically recover, then you must <<fix-red-yellow-cluster-status,manually address>>
the remaining problems so management and cleanup activities can proceed.

See https://www.youtube.com/watch?v=v2mbeSd1vTQ[this video]
for a walkthrough of monitoring allocation health.

****
If you're using Elastic Cloud Hosted, then you can use AutoOps to monitor your cluster. AutoOps significantly simplifies cluster management with performance recommendations, resource utilization visibility, real-time issue detection and resolution paths. For more information, refer to https://www.elastic.co/guide/en/cloud/current/ec-autoops.html[Monitor with AutoOps].
****

[discrete]
[[diagnose-cluster-status]]
==== Diagnose your cluster status

**Check your cluster status**

Use the <<cluster-health,cluster health API>>.

[source,console]
----
GET _cluster/health?filter_path=status,*_shards
----

A healthy cluster has a green `status` and zero `unassigned_shards`. A yellow
status means only replicas are unassigned. A red status means one or
more primary shards are unassigned.
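
For example, with the `filter_path` shown above, a yellow cluster might return
a response similar to the following (the counts are illustrative):

[source,console-result]
----
{
  "status": "yellow",
  "active_primary_shards": 10,
  "active_shards": 10,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 5,
  "delayed_unassigned_shards": 0
}
----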

**View unassigned shards**

To view unassigned shards, use the <<cat-shards,cat shards API>>.

[source,console]
----
GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state
----

Unassigned shards have a `state` of `UNASSIGNED`. The `prirep` value is `p` for
primary shards and `r` for replicas.
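
For example, the output for an index with an unassigned replica might look
similar to this (the index and node names are illustrative):

[source,txt]
----
index    shard prirep state      node    unassigned.reason
my-index 0     p      STARTED    node-1
my-index 0     r      UNASSIGNED         NODE_LEFT
----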

To understand why an unassigned shard is not being assigned and what action
you must take to allow {es} to assign it, use the
<<cluster-allocation-explain,cluster allocation explanation API>>.

[source,console]
----
GET _cluster/allocation/explain?filter_path=index,node_allocation_decisions.node_name,node_allocation_decisions.deciders.*
{
  "index": "my-index",
  "shard": 0,
  "primary": false
}
----
// TEST[s/^/PUT my-index\n/]
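
The trimmed response shows which deciders prevented allocation on each node.
For example, a replica that can't be placed on the same node as its primary
might produce a decision like this (illustrative):

[source,console-result]
----
{
  "index": "my-index",
  "node_allocation_decisions": [
    {
      "node_name": "node-1",
      "deciders": [
        {
          "decider": "same_shard",
          "decision": "NO",
          "explanation": "a copy of this shard is already allocated to this node"
        }
      ]
    }
  ]
}
----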

[discrete]
[[fix-red-yellow-cluster-status]]
==== Fix a red or yellow cluster status

A shard can become unassigned for several reasons. The following tips outline the
most common causes and their solutions.

[discrete]
[[fix-cluster-status-reenable-allocation]]
===== Re-enable shard allocation

You typically disable allocation during a <<restart-cluster,restart>> or other
cluster maintenance. If you forgot to re-enable allocation afterward, {es} will
be unable to assign shards. To re-enable allocation, reset the
`cluster.routing.allocation.enable` cluster setting.

[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.enable" : null
  }
}
----
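
After the reset, the override should no longer appear in the cluster settings.
You can confirm with a quick check:

[source,console]
----
GET _cluster/settings?flat_settings=true
----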

See https://www.youtube.com/watch?v=MiKKUdZvwnI[this video] for a walkthrough of
troubleshooting "no allocations are allowed".

[discrete]
[[fix-cluster-status-recover-nodes]]
===== Recover lost nodes

Shards often become unassigned when a data node leaves the cluster. This can
occur for several reasons, ranging from connectivity issues to hardware failure.
After you resolve the issue and recover the node, it will rejoin the cluster.
{es} will then automatically allocate any unassigned shards.

To avoid wasting resources on temporary issues, {es} <<delayed-allocation,delays
allocation>> by one minute by default. If you've recovered a node and don't want
to wait for the delay period, you can call the <<cluster-reroute,cluster reroute
API>> with no arguments to start the allocation process. The process runs
asynchronously in the background.

[source,console]
----
POST _cluster/reroute
----
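
Alternatively, if you expect the node to stay offline for more than the default
one-minute delay, you can lengthen the timeout so {es} waits before recovering
shards elsewhere. The `5m` value below is only an example:

[source,console]
----
PUT _all/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "5m"
  }
}
----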

[discrete]
[[fix-cluster-status-allocation-settings]]
===== Fix allocation settings

Misconfigured allocation settings can result in an unassigned primary shard.
These settings include:

* <<shard-allocation-filtering,Shard allocation>> index settings
* <<cluster-shard-allocation-filtering,Allocation filtering>> cluster settings
* <<shard-allocation-awareness,Allocation awareness>> cluster settings

To review your allocation settings, use the <<indices-get-settings,get index
settings>> and <<cluster-get-settings,cluster get settings>> APIs.

[source,console]
----
GET my-index/_settings?flat_settings=true&include_defaults=true
GET _cluster/settings?flat_settings=true&include_defaults=true
----
// TEST[s/^/PUT my-index\n/]

You can change the settings using the <<indices-update-settings,update index
settings>> and <<cluster-update-settings,cluster update settings>> APIs.
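
For example, if an index-level filter requires a node attribute that no node in
the cluster has, the primary can't be assigned anywhere. Clearing the filter
(the `_name` requirement below is a hypothetical example) lets allocation
proceed:

[source,console]
----
PUT my-index/_settings
{
  "index.routing.allocation.require._name": null
}
----
// TEST[s/^/PUT my-index\n/]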

[discrete]
[[fix-cluster-status-allocation-replicas]]
===== Allocate or reduce replicas

To protect against hardware failure, {es} will not assign a replica to the same
node as its primary shard. If no other data nodes are available to host the
replica, it remains unassigned. To fix this, you can:

* Add a data node to the same tier to host the replica.
* Change the `index.number_of_replicas` index setting to reduce the number of
replicas for each primary shard. We recommend keeping at least one replica per
primary.
+
[source,console]
----
PUT _settings
{
  "index.number_of_replicas": 1
}
----
// TEST[s/^/PUT my-index\n/]
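
Note that `PUT _settings` applies the new replica count to every index in the
cluster. To scope the change to a single index, use `PUT my-index/_settings`
instead.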

[discrete]
[[fix-cluster-status-disk-space]]
===== Free up or increase disk space

{es} uses a <<disk-based-shard-allocation,low disk watermark>> to ensure data
nodes have enough disk space for incoming shards. By default, {es} does not
allocate shards to nodes using more than 85% of disk space.

To check the current disk space of your nodes, use the <<cat-allocation,cat
allocation API>>.

[source,console]
----
GET _cat/allocation?v=true&h=node,shards,disk.*
----
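
Output similar to the following would show a node hovering at the default 85%
low watermark (the numbers are illustrative):

[source,txt]
----
node    shards disk.indices disk.used disk.avail disk.total disk.percent
node-1     110       41.2gb    42.8gb      7.2gb       50gb           85
----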

If your nodes are running low on disk space, you have a few options:

* Upgrade your nodes to increase disk space.
* Delete unneeded indices to free up space. If you use {ilm-init}, you can
update your lifecycle policy to use <<ilm-searchable-snapshot,searchable
snapshots>> or add a delete phase. If you no longer need to search the data, you
can use a <<snapshot-restore,snapshot>> to store it off-cluster.
* If you no longer write to an index, use the <<indices-forcemerge,force merge
API>> or {ilm-init}'s <<ilm-forcemerge,force merge action>> to merge its
segments into larger ones.
+
[source,console]
----
POST my-index/_forcemerge
----
// TEST[s/^/PUT my-index\n/]

* If an index is read-only, use the <<indices-shrink-index,shrink index API>> or
{ilm-init}'s <<ilm-shrink,shrink action>> to reduce its primary shard count.
+
[source,console]
----
POST my-index/_shrink/my-shrunken-index
----
// TEST[s/^/PUT my-index\n{"settings":{"index.number_of_shards":2,"blocks.write":true}}\n/]

* If your node has a large disk capacity, you can increase the low disk
watermark or set it to an explicit byte value.
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "30gb"
  }
}
----
// TEST[s/"30gb"/null/]

[discrete]
[[fix-cluster-status-jvm]]
===== Reduce JVM memory pressure

Shard allocation requires JVM heap memory. High JVM memory pressure can trigger
<<circuit-breaker,circuit breakers>> that stop allocation and leave shards
unassigned. See <<high-jvm-memory-pressure>>.
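
To spot-check heap usage across nodes, you can pull the heap usage percentage
from the node stats (one quick starting point):

[source,console]
----
GET _nodes/stats?filter_path=nodes.*.jvm.mem.heap_used_percent
----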

[discrete]
[[fix-cluster-status-restore]]
===== Recover data for a lost primary shard

If a node containing a primary shard is lost, {es} can typically replace it
using a replica on another node. If you can't recover the node and replicas
don't exist or are irrecoverable, <<cluster-allocation-explain,Allocation
Explain>> will report `no_valid_shard_copy` and you'll need to do one of the
following:

* Restore the missing data from a <<snapshot-restore,snapshot>>.
* Index the missing data from its original data source.
* Accept data loss at the index level by running <<indices-delete-index,Delete Index>>.
* Accept data loss at the shard level by executing the <<cluster-reroute,Cluster Reroute>>
`allocate_stale_primary` or `allocate_empty_primary` command with `accept_data_loss: true`.
+
WARNING: Only use this option if node recovery is no longer possible. This
process allocates an empty primary shard. If the node later rejoins the cluster,
{es} will overwrite its primary shard with data from this newer empty shard,
resulting in data loss.
+
[source,console]
----
POST _cluster/reroute
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "my-index",
        "shard": 0,
        "node": "my-node",
        "accept_data_loss": "true"
      }
    }
  ]
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[catch:bad_request]

See https://www.youtube.com/watch?v=6OAg9IyXFO4[this video] for a walkthrough of
troubleshooting `no_valid_shard_copy`.