Commit graph

42 commits

shainaraskas
ae3db6042a
[8.x] [DOCS] Concept cleanup 2 - ES settings (#119373) (#119642) 2025-01-10 10:31:16 -05:00
David Turner
1c11249c05
Fix docs about uneven disk usage (#104541)
There's a note in the docs saying we only consider shard count and not
disk usage, which is no longer true. This commit fixes the note to
reflect today's implementation.
2024-01-18 16:02:37 +00:00
Abdon Pijpelink
2808512397
[DOCS] Improve watermark troubleshooting documentation (#94222) 2023-03-01 14:34:14 +01:00
Iraklis Psaroudakis
0f4374f4fb
Explain disk headroom settings more in docs (#90763)
Relates to #81406
2022-10-20 18:45:23 +03:00
Iraklis Psaroudakis
34471b1cd2
Introduce max headroom for disk watermark stages (#88639)
Introduce max headroom settings for the low, high, and flood disk watermark stages, similar to the existing max headroom setting for the flood stage of the frozen tier. Introduce the new max headrooms in HealthMetadata and in ReactiveStorageDeciderService. Add multiple tests in DiskThresholdDeciderUnitTests, DiskThresholdDeciderTests and DiskThresholdMonitorTests. Moreover, add addition and subtraction operations for ByteSizeValue, plus a min helper.
2022-09-19 14:59:18 +03:00
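A minimal sketch of the settings introduced by the commit above, via the cluster settings API. The setting names come from this change; the values are illustrative (they match the shipped defaults as far as I know), and the max headrooms only take effect when the corresponding watermark is expressed as a percentage or ratio:

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.disk.watermark.low.max_headroom": "200GB",
        "cluster.routing.allocation.disk.watermark.high.max_headroom": "150GB",
        "cluster.routing.allocation.disk.watermark.flood_stage.max_headroom": "100GB"
      }
    }

On a 10TB disk, a 90% high watermark would otherwise demand 1TB of free space; a 150GB max headroom caps the required free space at 150GB.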
Pooya Salehi
806d2976aa
Remove Blocks when disk threshold monitoring is disabled (#87841)
This change ensures that existing read_only_allow_delete blocks, which
are placed on indices when the flood_stage watermark threshold is
exceeded, are removed when disk threshold monitoring is disabled.

This is done by changing how InternalClusterInfoService behaves when
disabled. With this change, it will keep calling the registered
listeners periodically, but with an empty ClusterInfo.

Closes #86383
2022-07-26 14:26:43 +02:00
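For reference, disk threshold monitoring is toggled with the setting below; after this change, turning it off also clears any lingering read_only_allow_delete blocks (a sketch):

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.disk.threshold_enabled": false
      }
    }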
Iraklis Psaroudakis
f284cc16f4
Convert disk watermarks to RelativeByteSizeValues (#88719)
* Convert disk watermarks to RelativeByteSizeValues

Similar to the existing watermark setting for the frozen tier.

Pre-requisite for PR 88639 that plans to introduce max headroom
settings for the disk watermarks, similar to the frozen tier max
headroom setting.

* Add changelog

* Revert 20gb to 20GB

* Make formatNoTrailingZerosPercent non static

* ByteSizeValue.MINUS_ONE

* Remove getMinimumTotalSizeForBelowWatermark

* Remove comment

* Fix minor stuff

* Make parsing of RelativeByteSizeValue faster

Mimics the older definitelyNotPercentage function

* Remove Locale from Strings.format

* More MINUS_ONE
2022-07-22 18:39:07 +03:00
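A RelativeByteSizeValue accepts either a ratio/percentage or an absolute byte size, so after this change each watermark can be expressed in both forms. A sketch with illustrative values (note that Elasticsearch has historically required the watermarks to use a consistent style, all percentages or all byte values):

    # Percentage form: flood stage trips at 95% disk usage
    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
      }
    }

    # Absolute form: flood stage trips when less than 20GB is free
    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.disk.watermark.flood_stage": "20GB"
      }
    }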
Nikola Grcevski
055c770083
Deprecation of transient cluster settings (#78794)
This PR changes uses of transient cluster settings to
persistent cluster settings. 

It also deprecates usage of transient settings.

Relates to #49540
2021-10-15 13:00:52 -04:00
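The same update in both styles, for contrast; after this change the transient form logs a deprecation warning (a sketch, setting value illustrative):

    # Deprecated: transient settings are lost on full-cluster restart
    PUT _cluster/settings
    {
      "transient": {
        "cluster.routing.allocation.disk.watermark.high": "90%"
      }
    }

    # Preferred: persistent settings survive restarts
    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.disk.watermark.high": "90%"
      }
    }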
Henning Andersen
57e503ca78
[DOCS] disk.threshold_enabled not cloud (#79225)
Mark `cluster.routing.allocation.disk.threshold_enabled` as not for cloud
and add it to the list of operator-only settings.

Relates #78822
2021-10-15 16:19:04 +02:00
Henning Andersen
a11e6f5c6e
Breaking change for single data node setting (#73737)
In #55805, we added a setting to allow single data node clusters to
respect the high watermark. In #73733 we added the related deprecations.
This commit ensures the only valid value for the setting is true and
adds deprecations if the setting is set. The setting will be removed
in a future release.

Co-authored-by: David Turner <david.turner@elastic.co>
2021-06-07 13:12:04 +02:00
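After this change, `true` is the only accepted value, and setting it at all is deprecated; a sketch of the (now redundant) explicit form:

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.disk.watermark.enable_for_single_data_node": true
      }
    }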
Henning Andersen
794869cfbb
Add separate flood stage limit for frozen (#71855)
Dedicated frozen nodes can survive with less headroom than other data nodes.
This commit introduces a separate flood stage threshold for frozen as
well as an accompanying max_headroom setting that caps the amount of
free space necessary on frozen.

Relates #71844
2021-04-20 15:51:52 +02:00
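A sketch of the two frozen-tier settings this commit adds; the values shown are, to my knowledge, the defaults (95% flood stage, with at most 20GB of free space required on large disks):

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.disk.watermark.flood_stage.frozen": "95%",
        "cluster.routing.allocation.disk.watermark.flood_stage.frozen.max_headroom": "20GB"
      }
    }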
James Rodewig
693807a6d3
[DOCS] Fix double spaces (#71082) 2021-03-31 09:57:47 -04:00
David Turner
aa4ab0bc26
Expand docs on disk-based shard allocation (#65668)
Today we document the settings used to control rebalancing and
disk-based shard allocation, but there isn't really any discussion of
what these processes do, so it's hard to know what adjustments, if any,
to make.

This commit adds some words to help folk understand this area better.
2020-12-07 14:51:26 +00:00
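For orientation, the three watermarks the expanded docs discuss, shown with their long-standing defaults (a sketch; adjust only with the trade-offs described in those docs in mind):

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.disk.watermark.low": "85%",
        "cluster.routing.allocation.disk.watermark.high": "90%",
        "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
      }
    }

Once a node exceeds the low watermark, no new shards are allocated to it; above the high watermark, shards are actively moved away; at the flood stage, indices with shards on the node get a write block.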
Howard
e50799bc7e
[DOCS] Remove duplicate disk.threshold_enabled setting. (#62924) 2020-09-29 08:58:46 -04:00
James Rodewig
617652b969
[DOCS] Document dynamic cluster-lvl shard alloc settings (#61338) 2020-08-31 11:04:11 -04:00
James Rodewig
ae01606785
[DOCS] Replace twitter dataset in docs (#60604) 2020-08-03 12:49:56 -04:00
Adam Locke
3a1258fe97
[DOCS] Add supported ESS settings to ES docs (#57953)
* Adding ESS icons to supported ES settings.

* Adding new file for supported ESS settings.

* Adding supported ESS settings for HTTP and disk-based shard allocation.

* Adding more supported settings for ESS.

* Adding descriptions for each Cloud section, plus additional settings.

* Adding new warehouse file for Cloud, plus additional settings.

* Adding node settings for Cloud.

* Adding audit settings for Cloud.

* Resolving merge conflict.

* Adding SAML settings (part 1).

* Adding SAML realm encryption and signing settings.

* Adding SAML SSL settings.

* Adding Kerberos realm settings.

* Adding OpenID Connect Realm settings.

* Adding OpenID Connect SSL settings.

* Resolving leftover Git merge markers.

* Removing Cloud settings page and link to it.

* Add link to mapping source

* Update docs/reference/docs/reindex.asciidoc

* Incorporate edit of HTTP settings

* Remove "cloud" from tag and ID

* Remove "cloud" from tag and update description

* Remove "cloud" from tag and ID

* Change "whitelists" to "specifies"

* Remove "cloud" from end tag

* Removing cloud from IDs and tags.

* Changing link reference to fix build issue.

* Adding index management page for missing settings.

* Removing warehouse file for Cloud and moving settings elsewhere.

* Clarifying true/false usage of http.detailed_errors.enabled.

* Changing underscore to dash in link to fix ci build.
2020-07-02 14:13:06 -04:00
David Turner
acf031cdb5
Forbid read-only-allow-delete block in blocks API (#58727)
* Forbid read-only-allow-delete block in blocks API

The read-only-allow-delete block is not really under the user's control
since Elasticsearch adds/removes it automatically. This commit removes
support for it from the new API for adding blocks to indices that was
introduced in #58094.

* Missing xref

* Reword paragraph on read-only-allow-delete block
2020-07-01 12:57:34 +01:00
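The add-blocks API from #58094 therefore accepts only the user-manageable blocks (`metadata`, `read`, `read_only`, `write`). A sketch, with a hypothetical index name:

    PUT /my-index-000001/_block/write

Requesting a `read_only_allow_delete` block through this API is rejected, since Elasticsearch manages that block itself.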
David Turner
83d6589b2a
Account for remaining recovery in disk allocator (#58029)
Today the disk-based shard allocator accounts for incoming shards by
subtracting the estimated size of the incoming shard from the free space on the
node. This is an overly conservative estimate if the incoming shard has almost
finished its recovery since in that case it is already consuming most of the
disk space it needs.

This change adds to the shard stats a measure of how much larger each store is
expected to grow, computed from the ongoing recovery, and uses this to account
for the disk usage of incoming shards more accurately.
2020-07-01 08:04:45 +01:00
Adam Locke
7dd731b9a2
[DOCS] Explain flood stage watermark. (#57184)
* Changes for issue #36114.

* Adding stronger wording to the new note.

* Removing statement about typically not needting to set the read-only allow delete block.

* Replacing Elasticsearch with {es} variable.
2020-05-28 10:57:40 -04:00
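If the block was applied and disk space has since been freed, it can also be cleared manually by resetting the index setting (a sketch; index name hypothetical):

    PUT /my-index-000001/_settings
    {
      "index.blocks.read_only_allow_delete": null
    }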
James Rodewig
7c449319a1
[DOCS] Relocate shard allocation module content (#56535) 2020-05-12 08:55:57 -04:00
Henning Andersen
0bd28aed4e
Disk decider respect watermarks for single data node (#55805)
The disk decider had special handling for the single data node case,
allowing any allocation (skipping watermark checks) for such clusters.
This special handling can now be avoided via a setting.
2020-04-28 11:55:42 +02:00
François-Clément Brossard
0b107a0a09 Clarify low watermark documentation (#48112)
Today the docs say that the low watermark has no effect on any shards that have
never been allocated, but this is confusing. Here "shard" means "replication
group" not "shard copy" but this conflicts with the "never been allocated"
qualifier since one allocates shard copies and not replication groups.

This commit removes the misleading words. A newly-created replication group
remains newly-created until one of its copies is assigned, which might be quite
some time later, but it seems better to leave this implicit.
2019-10-16 12:27:39 +01:00
David Turner
7b652adfbf
Remove include_relocations setting (#47717)
Setting `cluster.routing.allocation.disk.include_relocations` to `false` is a
bad idea since it will lead to the kinds of overshoot that were otherwise fixed
in #46079. This setting was deprecated in #47443. This commit removes it.
2019-10-08 13:33:49 +02:00
David Turner
9d67a02a56
Deprecate include_relocations setting (#47443)
Setting `cluster.routing.allocation.disk.include_relocations` to `false` is a
bad idea since it will lead to the kinds of overshoot that were otherwise fixed
in #46079. This commit deprecates this setting so it can be removed in the next
major release.
2019-10-08 09:15:13 +02:00
James Rodewig
5c78f606c2
[DOCS] Change // CONSOLE comments to [source,console] (#46440) 2019-09-09 10:45:37 -04:00
David Turner
bc31ea752e
Always auto-release the flood-stage block (#45274)
Removes support for using a system property to disable the automatic release of
the write block applied when a node exceeds the flood-stage watermark.

Relates #42559
2019-08-08 11:47:14 +01:00
Bukhtawar
c592d24300 Auto-release flood-stage write block (#42559)
If a node exceeds the flood-stage disk watermark then we add a block to all of
its indices to prevent further writes as a last-ditch attempt to prevent the
node completely exhausting its disk space. However today this block remains in
place until manually removed, and this block is a source of confusion for users
who currently have ample disk space and did not even realise they nearly ran out
at some point in the past.

This commit changes our behaviour to automatically remove this block when a
node drops below the high watermark again. The expectation is that the high
watermark is some distance below the flood-stage watermark and therefore the
disk space problem is truly resolved.

Fixes #39334
2019-08-07 10:53:17 +01:00
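To see how close each node is to its watermarks, before or after the block is released, the cat allocation API shows per-node disk usage (a sketch):

    GET _cat/allocation?v=true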
debadair
c9e03e6ead
[DOCS] Reworked the shard allocation filtering info. (#36456)
* [DOCS] Reworked the shard allocation filtering info. Closes #36079

* Added multiple index allocation settings example back.

* Removed extraneous space
2018-12-11 07:44:57 -08:00
David Turner
d553a8be2f
Improve docs for disk watermarks (#30249)
* Clarify that the low watermark does not affect brand-new shards.
* Replace ES -> Elasticsearch.
* Format to 80 columns.

Resolves #25163
2018-04-30 17:31:11 +01:00
Nik Everett
66ff1b2a59
Tests: Wipe cluster settings after every test (#28410)
Cluster settings shouldn't leak into the next test.

I played with failing the test if it left over any settings but that
felt like it added more ceremony than it was worth. The advantage is
that any test that intentionally wants to leave settings in place after
the test would fail and require a closer look but, so far as I can tell, we
don't have any such tests.
2018-01-29 11:47:04 -05:00
Nik Everett
3d19006cfa
Docs: Clear watermarks after setting them (#28402)
Clear the disk watermark after the snippet showing users how to set it.
Without this our tests will fail if the disks have less than 10GB free.

Closes #28325
2018-01-26 15:42:53 -05:00
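The same clean-up pattern works anywhere: setting a cluster setting to `null` restores its default. A sketch of resetting the watermarks (shown here as persistent settings):

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.disk.watermark.low": null,
        "cluster.routing.allocation.disk.watermark.high": null,
        "cluster.routing.allocation.disk.watermark.flood_stage": null
      }
    }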
Clinton Gormley
3e568f52c1 Fixed asciidoc formatting 2017-07-27 15:55:52 +02:00
Jason Tedor
e165c405ac Add an underscore to flood stage setting
This is a minor, nitpicky bikeshedding change that renames the suffix of the
disk flood stage setting to "flood_stage" from "floodstage".

Relates #25659
2017-07-11 22:02:00 -04:00
Jason Tedor
8148e25087 Fix disk allocator docs
This commit fixes the disk allocator docs which were broken due to the
inadvertent removal of some docs snippet markup.
2017-07-07 22:11:09 -04:00
Jason Tedor
bc22c1c286 Add disk threshold settings validation
This commit adds cross-settings validation for the low/high/flood stage
disk watermark settings. This validation was enabled by the introduction
of multiple settings validation.

Relates #25600
2017-07-07 19:54:36 -04:00
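With this validation in place, an update that inverts the ordering (low above high, or high above flood stage) is rejected with an illegal_argument_exception instead of being silently accepted. A sketch of a request that now fails (exact error wording may differ):

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.disk.watermark.low": "95%",
        "cluster.routing.allocation.disk.watermark.high": "90%"
      }
    }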
Clinton Gormley
ca12b1f2a6 Tidied up the disk allocator docs 2017-07-06 12:16:53 +02:00
Simon Willnauer
6e5cc424a8 Switch indices read-only if a node runs out of disk space (#25541)
Today, when a node runs out of disk, all kinds of crazy things can happen,
and the node becomes hard to maintain once it is out of disk space.
While we try to move shards away when we hit watermarks, this might not
be possible in many situations. Based on the discussion in #24299,
this change monitors disk utilization and adds a flood-stage watermark
that causes all indices allocated on a node hitting the flood-stage
mark to be switched read-only (with the option to be deleted). This allows
users to react to the low-disk situation while subsequent write requests
are rejected. Users can switch individual indices back to read-write once
the situation is sorted out. There is no automatic read-write switch once
the node has enough space again; this requires user interaction.

The flood-stage watermark is set to `95%` utilization by default.

Closes #24299
2017-07-05 22:18:23 +02:00
Clinton Gormley
3f594089c2 Renamed all AUTOSENSE snippets to CONSOLE (#18210) 2016-05-09 15:42:23 +02:00
Nik Everett
4b1c116461 Generate and run tests from the docs
Adds infrastructure so `gradle :docs:check` will extract tests from
snippets in the documentation and execute the tests. This is included
in `gradle check` so it should happen on CI and during a normal build.

By default each `// AUTOSENSE` snippet creates a unique REST test. These
tests are executed in a random order and the cluster is wiped between
each one. If multiple snippets chain together into a test you can annotate
all snippets after the first with `// TEST[continued]` to have the
generated tests for both snippets joined.

Snippets marked as `// TESTRESPONSE` are checked against the response
of the last action.

See docs/README.asciidoc for lots more.

Closes #12583. That issue is about catching bugs in the docs during build.
This catches *some* bugs in the docs during build which is a good start.
2016-05-05 13:58:03 -04:00
Simon Willnauer
66b78341e4 Add note about multi data path and disk threshold deciders
Prior to 2.0 we summed up the available space on all disks on a node,
due to the RAID-0-like behavior. Now we no longer do this and instead use
the min and max disk space to make decisions.

Closes #13106
2015-08-31 16:23:54 +02:00
Clinton Gormley
f123a53d72 Docs: Refactored modules and index modules sections 2015-06-22 23:49:45 +02:00