[[red-yellow-cluster-status]]
=== Red or yellow cluster status

A red or yellow cluster status indicates one or more shards are missing or
unallocated. These unassigned shards increase your risk of data loss and can
degrade cluster performance.

[discrete]
[[diagnose-cluster-status]]
==== Diagnose your cluster status

**Check your cluster status**

Use the <>.

[source,console]
----
GET _cluster/health?filter_path=status,*_shards
----

A healthy cluster has a green `status` and zero `unassigned_shards`. A yellow
status means only replicas are unassigned. A red status means one or more
primary shards are unassigned.

**View unassigned shards**

To view unassigned shards, use the <>.

[source,console]
----
GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state
----

Unassigned shards have a `state` of `UNASSIGNED`. The `prirep` value is `p` for
primary shards and `r` for replicas.

To understand why an unassigned shard is not being assigned and what action you
must take to allow {es} to assign it, use the <>.

[source,console]
----
GET _cluster/allocation/explain?filter_path=index,node_allocation_decisions.node_name,node_allocation_decisions.deciders.*
{
  "index": "my-index",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[s/"primary": false,/"primary": false/]
// TEST[s/"current_node": "my-node"//]

[discrete]
[[fix-red-yellow-cluster-status]]
==== Fix a red or yellow cluster status

A shard can become unassigned for several reasons. The following tips outline
the most common causes and their solutions.

**Re-enable shard allocation**

You typically disable allocation during a <> or other cluster maintenance. If
you forgot to re-enable allocation afterward, {es} will be unable to assign
shards. To re-enable allocation, reset the `cluster.routing.allocation.enable`
cluster setting.

[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.enable" : null
  }
}
----

**Recover lost nodes**

Shards often become unassigned when a data node leaves the cluster. This can
occur for several reasons, ranging from connectivity issues to hardware
failure. After you resolve the issue and recover the node, it will rejoin the
cluster. {es} will then automatically allocate any unassigned shards.

To avoid wasting resources on temporary issues, {es} <> by one minute by
default.

If you've recovered a node and don't want to wait for the delay period, you can
call the <> with no arguments to start the allocation process. The process runs
asynchronously in the background.

[source,console]
----
POST _cluster/reroute
----

**Fix allocation settings**

Misconfigured allocation settings can result in an unassigned primary shard.
These settings include:

* <> index settings
* <> cluster settings
* <> cluster settings

To review your allocation settings, use the <> and <> APIs.

[source,console]
----
GET my-index/_settings?flat_settings=true&include_defaults=true

GET _cluster/settings?flat_settings=true&include_defaults=true
----
// TEST[s/^/PUT my-index\n/]

You can change the settings using the <> and <> APIs.

**Allocate or reduce replicas**

To protect against hardware failure, {es} will not assign a replica to the same
node as its primary shard. If no other data nodes are available to host the
replica, it remains unassigned. To fix this, you can:

* Add a data node to the same tier to host the replica.

* Change the `index.number_of_replicas` index setting to reduce the number of
replicas for each primary shard. We recommend keeping at least one replica per
primary.

[source,console]
----
PUT _settings
{
  "index.number_of_replicas": 1
}
----
// TEST[s/^/PUT my-index\n/]

**Free up or increase disk space**

{es} uses a <> to ensure data nodes have enough disk space for incoming shards.
By default, {es} does not allocate shards to nodes using more than 85% of disk
space.

To check the current disk space of your nodes, use the <>.

[source,console]
----
GET _cat/allocation?v=true&h=node,shards,disk.*
----

If your nodes are running low on disk space, you have a few options:

* Upgrade your nodes to increase disk space.

* Delete unneeded indices to free up space. If you use {ilm-init}, you can
update your lifecycle policy to use <> or add a delete phase. If you no longer
need to search the data, you can use a <> to store it off-cluster.

* If you no longer write to an index, use the <> or {ilm-init}'s <> to merge
its segments into larger ones.
+
[source,console]
----
POST my-index/_forcemerge
----
// TEST[s/^/PUT my-index\n/]

* If an index is read-only, use the <> or {ilm-init}'s <> to reduce its primary
shard count.
+
[source,console]
----
POST my-index/_shrink/my-shrunken-index
----
// TEST[s/^/PUT my-index\n{"settings":{"index.number_of_shards":2,"blocks.write":true}}\n/]

* If your node has a large disk capacity, you can increase the low disk
watermark or set it to an explicit byte value.
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "30gb"
  }
}
----
// TEST[s/"30gb"/null/]

**Reduce JVM memory pressure**

Shard allocation requires JVM heap memory. High JVM memory pressure can trigger
<> that stop allocation and leave shards unassigned. See <>.

**Recover data for a lost primary shard**

If a node containing a primary shard is lost, {es} can typically replace it
using a replica on another node. If you can't recover the node and replicas
don't exist or are irrecoverable, you'll need to re-add the missing data from a
<> or the original data source.

WARNING: Only use this option if node recovery is no longer possible. This
process allocates an empty primary shard.
If the node later rejoins the cluster, {es} will overwrite its primary shard
with data from this newer empty shard, resulting in data loss.

Use the <> to manually allocate the unassigned primary shard to another data
node in the same tier. Set `accept_data_loss` to `true`.

[source,console]
----
POST _cluster/reroute
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "my-index",
        "shard": 0,
        "node": "my-node",
        "accept_data_loss": "true"
      }
    }
  ]
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[catch:bad_request]

If you backed up the missing index data to a snapshot, use the <> to restore
the individual index. Alternatively, you can index the missing data from the
original data source.
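As a sketch of that restore, assuming a registered snapshot repository named
`my_repository` containing a snapshot named `my_snapshot` (both placeholder
names), you can restore just the affected index. Delete or close the existing
index first, because the restore fails if an open index with the same name
already exists in the cluster.

[source,console]
----
DELETE my-index

POST _snapshot/my_repository/my_snapshot/_restore
{
  "indices": "my-index"
}
----

The `indices` parameter limits the restore to the named index rather than
restoring everything in the snapshot.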