Mirror of https://github.com/elastic/elasticsearch.git
* Correct way of getting node heap size

  In [[shard-count-recommendation]] we explain that the number of shards should be at most 20 per GB of heap, but the command used to get the relevant heap size should be `_cat/nodes?v=true&h=heap.max`, not `_cat/nodes?v=true&h=heap.current`. The latter reports current heap consumption, which is always moving; here we need the maximum allocated heap size (-Xmx).

* Adds heap.max to valid columns

Co-authored-by: Adam Locke <adam.locke@elastic.co>
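A minimal sketch (not part of the commit or the Elasticsearch docs) of how the corrected guidance can be applied: query `heap.max` from the cat nodes API and derive the 20-shards-per-GB ceiling per node. The cluster URL, the absence of authentication, and the crude unit parsing are assumptions made for illustration only.

```python
# Sketch: read each node's maximum allocated heap (-Xmx) via _cat/nodes
# and apply the "at most 20 shards per GB of heap" guideline.
import requests

CLUSTER = "http://localhost:9200"  # assumed local cluster, no auth

# heap.max reflects the maximum allocated heap, unlike heap.current,
# which is the live consumption and changes constantly.
resp = requests.get(
    f"{CLUSTER}/_cat/nodes",
    params={"h": "name,heap.max", "format": "json"},
)
resp.raise_for_status()

for node in resp.json():
    heap_max = node["heap.max"]  # e.g. "4gb", "512mb"
    # Crude unit parsing for illustration; real values may use other suffixes.
    if heap_max.endswith("gb"):
        heap_gb = float(heap_max[:-2])
    elif heap_max.endswith("mb"):
        heap_gb = float(heap_max[:-2]) / 1024
    else:
        continue
    print(f"{node['name']}: heap.max={heap_max}, "
          f"recommended shard ceiling ~= {int(heap_gb * 20)}")
```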
alias.asciidoc
allocation.asciidoc
anomaly-detectors.asciidoc
count.asciidoc
datafeeds.asciidoc
dataframeanalytics.asciidoc
fielddata.asciidoc
health.asciidoc
indices.asciidoc
master.asciidoc
nodeattrs.asciidoc
nodes.asciidoc
pending_tasks.asciidoc
plugins.asciidoc
recovery.asciidoc
repositories.asciidoc
segments.asciidoc
shards.asciidoc
snapshots.asciidoc
tasks.asciidoc
templates.asciidoc
thread_pool.asciidoc
trainedmodel.asciidoc
transforms.asciidoc