[[fix-watermark-errors]]
=== Fix watermark errors
++++
<titleabbrev>Watermark errors</titleabbrev>
++++

:keywords: {es}, high watermark, low watermark, full disk, flood stage watermark

When a data node is critically low on disk space and has reached the <>, the following error is logged: `Error: disk usage exceeded flood-stage watermark, index has read-only-allow-delete block`.

To prevent a full disk, when a node reaches this watermark, {es} <> to any index with a shard on the node. If the block affects related system indices, {kib} and other {stack} features may become unavailable. For example, this can trigger {kib}'s `Kibana Server is not Ready yet` {kibana-ref}/access.html#not-ready[error message].

{es} will automatically remove the write block when the affected node's disk usage falls below the <>. To achieve this, {es} attempts to rebalance some of the affected node's shards to other nodes in the same data tier.
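To gauge how close each node is to the watermarks, you can check per-node disk usage with the cat allocation API. This is a quick diagnostic sketch; the sort and column parameters are optional and can be adjusted:

[source,console]
----
GET _cat/allocation?v=true&s=disk.avail&h=node,shards,disk.percent,disk.avail,disk.total
----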
****
If you're using Elastic Cloud Hosted, then you can use AutoOps to monitor your cluster. AutoOps significantly simplifies cluster management with performance recommendations, resource utilization visibility, real-time issue detection and resolution paths. For more information, refer to https://www.elastic.co/guide/en/cloud/current/ec-autoops.html[Monitor with AutoOps].
****

[[fix-watermark-errors-rebalance]]
==== Monitor rebalancing

To verify that shards are moving off the affected node until it falls below the high watermark, use the <> and <>:

[source,console]
----
GET _cat/shards?v=true

GET _cat/recovery?v=true&active_only=true
----

If shards remain on the node, keeping it above the high watermark, use the <> to get an explanation for their allocation status.

[source,console]
----
GET _cluster/allocation/explain
{
  "index": "my-index",
  "shard": 0,
  "primary": false
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[s/"primary": false,/"primary": false/]

[[fix-watermark-errors-temporary]]
==== Temporary Relief

To immediately restore write operations, you can temporarily increase the <> and remove the <>.

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.low.max_headroom": "100GB",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.high.max_headroom": "20GB",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%",
    "cluster.routing.allocation.disk.watermark.flood_stage.max_headroom": "5GB",
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen": "97%",
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen.max_headroom": "5GB"
  }
}

PUT */_settings?expand_wildcards=all
{
  "index.blocks.read_only_allow_delete": null
}
----
// TEST[s/^/PUT my-index\n/]

When a long-term solution is in place, reset or reconfigure the disk watermarks:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": null,
    "cluster.routing.allocation.disk.watermark.low.max_headroom": null,
    "cluster.routing.allocation.disk.watermark.high": null,
    "cluster.routing.allocation.disk.watermark.high.max_headroom": null,
    "cluster.routing.allocation.disk.watermark.flood_stage": null,
    "cluster.routing.allocation.disk.watermark.flood_stage.max_headroom": null,
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen": null,
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen.max_headroom": null
  }
}
----

[[fix-watermark-errors-resolve]]
==== Resolve

As a long-term solution, we recommend one of the following options, whichever best suits your use case:

* add nodes to the affected <>
+
TIP: You should enable <> for clusters deployed using our {ess}, {ece}, and {eck} platforms.

* upgrade existing nodes to increase disk space
+
TIP: On {ess}, https://support.elastic.co[Elastic Support] intervention may become necessary if <> reaches `status:red`.

* delete unneeded indices using the <> (see the first sketch after this list)

* update related <> to push indices through to later <> (see the second sketch after this list)
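As a sketch of the third option, deleting an unneeded index with the delete index API; the index name here is a hypothetical placeholder:

[source,console]
----
DELETE my-index-000001 <1>
----
<1> `my-index-000001` is a placeholder; replace it with the name of an index you no longer need.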
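And as a sketch of the fourth option, an ILM policy that pushes indices through to later data tiers; the policy name and ages are hypothetical values to adapt. An empty `actions` object in the `warm` phase still moves indices to the warm tier, because data tier migration happens by default:

[source,console]
----
PUT _ilm/policy/my-lifecycle-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb"
          }
        }
      },
      "warm": {
        "min_age": "7d", <1>
        "actions": {}
      },
      "delete": {
        "min_age": "30d", <2>
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
----
<1> Hypothetical age: indices move to the warm tier seven days after rollover.
<2> Hypothetical age: indices are deleted thirty days after rollover.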