[[fix-watermark-errors]]
=== Fix watermark errors
++++
<titleabbrev>Watermark errors</titleabbrev>
++++

:keywords: {es}, high watermark, low watermark, full disk, flood stage watermark

When a data node is critically low on disk space and has reached the
<<cluster-routing-flood-stage,flood-stage disk usage watermark>>, the following
error is logged: `Error: disk usage exceeded flood-stage watermark, index has read-only-allow-delete block`.

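To see how close each data node is to the watermarks, you can check per-node
disk usage with the <<cat-allocation,cat allocation API>> (the `s` and `h`
parameters below only sort the rows and trim the columns):

[source,console]
----
GET _cat/allocation?v=true&s=disk.percent:desc&h=node,shards,disk.percent,disk.used,disk.avail,disk.total
----
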
To prevent a full disk, when a node reaches this watermark, {es} <<index-block-settings,blocks writes>>
to any index with a shard on the node. If the block affects related system
indices, {kib} and other {stack} features may become unavailable. For example,
this can trigger {kib}'s `Kibana Server is not Ready yet`
{kibana-ref}/access.html#not-ready[error message].

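To identify which indices currently carry the write block, one option is to
filter the index settings for the block flag (`filter_path` here just trims the
response to the relevant setting):

[source,console]
----
GET */_settings?expand_wildcards=all&filter_path=**.read_only_allow_delete
----
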
{es} automatically removes the write block when the affected node's disk
usage falls below the <<cluster-routing-watermark-high,high disk watermark>>.
To achieve this, {es} attempts to rebalance some of the affected node's shards
to other nodes in the same data tier.

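To review the watermark thresholds currently in effect, including the defaults,
you can retrieve the cluster settings and look for the
`cluster.routing.allocation.disk.watermark` keys:

[source,console]
----
GET _cluster/settings?include_defaults=true&flat_settings=true
----
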
****
If you're using Elastic Cloud Hosted, then you can use AutoOps to monitor your cluster. AutoOps significantly simplifies cluster management with performance recommendations, resource utilization visibility, real-time issue detection and resolution paths. For more information, refer to https://www.elastic.co/guide/en/cloud/current/ec-autoops.html[Monitor with AutoOps].
****

[[fix-watermark-errors-rebalance]]
==== Monitor rebalancing

To verify that shards are moving off the affected node until it falls below the
high watermark, use the <<cat-shards,cat shards API>> and
<<cat-recovery,cat recovery API>>:

[source,console]
----
GET _cat/shards?v=true

GET _cat/recovery?v=true&active_only=true
----

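If the output is long, you can narrow the cat shards response to the relevant
columns and check whether shards are still assigned to the affected node (all
of the column names below are standard cat shards columns):

[source,console]
----
GET _cat/shards?v=true&h=index,shard,prirep,state,node
----
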
If shards remain on the node, keeping it above the high watermark, use the
<<cluster-allocation-explain,cluster allocation explanation API>> to get an
explanation for their allocation status.

[source,console]
----
GET _cluster/allocation/explain
{
  "index": "my-index",
  "shard": 0,
  "primary": false
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[s/"primary": false,/"primary": false/]

[[fix-watermark-errors-temporary]]
==== Temporary relief

To immediately restore write operations, you can temporarily increase the
<<disk-based-shard-allocation,disk watermarks>> and remove the
<<index-block-settings,write block>>.

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.low.max_headroom": "100GB",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.high.max_headroom": "20GB",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%",
    "cluster.routing.allocation.disk.watermark.flood_stage.max_headroom": "5GB",
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen": "97%",
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen.max_headroom": "5GB"
  }
}

PUT */_settings?expand_wildcards=all
{
  "index.blocks.read_only_allow_delete": null
}
----
// TEST[s/^/PUT my-index\n/]

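To confirm that the temporary overrides took effect, you can read back the
persistent cluster settings:

[source,console]
----
GET _cluster/settings?flat_settings=true&filter_path=persistent
----
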
When a long-term solution is in place, reset or reconfigure the disk watermarks:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": null,
    "cluster.routing.allocation.disk.watermark.low.max_headroom": null,
    "cluster.routing.allocation.disk.watermark.high": null,
    "cluster.routing.allocation.disk.watermark.high.max_headroom": null,
    "cluster.routing.allocation.disk.watermark.flood_stage": null,
    "cluster.routing.allocation.disk.watermark.flood_stage.max_headroom": null,
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen": null,
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen.max_headroom": null
  }
}
----

[[fix-watermark-errors-resolve]]
==== Resolve

As a long-term solution, we recommend that you take whichever of the following
actions is best suited to your use case:

* add nodes to the affected <<data-tiers,data tiers>>
+
TIP: You should enable <<xpack-autoscaling,autoscaling>> for clusters deployed using our {ess}, {ece}, and {eck} platforms.

* upgrade existing nodes to increase disk space
+
TIP: On {ess}, https://support.elastic.co[Elastic Support] intervention may
become necessary if <<cluster-health,cluster health>> reaches `status:red`.

* delete unneeded indices using the <<indices-delete-index,delete index API>>, as sketched below

* update the related <<index-lifecycle-management,ILM policy>> to push indices
through to later <<data-tiers,data tiers>>, also sketched below
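
For the delete option, a minimal sketch, assuming an unneeded index named
`my-index-000001`:

[source,console]
----
DELETE my-index-000001
----

For the ILM option, a hypothetical sketch of a policy named `my-policy` that
moves indices to the cold tier after seven days (phases in later tiers migrate
data to those tiers automatically):

[source,console]
----
PUT _ilm/policy/my-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {}
      },
      "cold": {
        "min_age": "7d",
        "actions": {}
      }
    }
  }
}
----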