[[high-jvm-memory-pressure]]
=== High JVM memory pressure

High JVM memory usage can degrade cluster performance and trigger <>. To
prevent this, we recommend taking steps to reduce memory pressure if a node's
JVM memory usage consistently exceeds 85%.

****
If you're using Elastic Cloud Hosted, then you can use AutoOps to monitor your
cluster. AutoOps significantly simplifies cluster management with performance
recommendations, resource utilization visibility, real-time issue detection,
and resolution paths.

For more information, refer to
https://www.elastic.co/guide/en/cloud/current/ec-autoops.html[Monitor with AutoOps].
****

[discrete]
[[diagnose-high-jvm-memory-pressure]]
==== Diagnose high JVM memory pressure

**Check JVM memory pressure**

include::{es-ref-dir}/tab-widgets/jvm-memory-pressure-widget.asciidoc[]

**Check garbage collection logs**

As memory usage increases, garbage collection becomes more frequent and takes
longer. You can track the frequency and length of garbage collection events in
<>. For example, the following event states that {es} spent more than 50%
(21 seconds) of the last 40 seconds performing garbage collection.

[source,log]
----
[timestamp_short_interval_from_last][INFO ][o.e.m.j.JvmGcMonitorService] [node_id] [gc][number] overhead, spent [21s] collecting in the last [40s]
----

**Capture a JVM heap dump**

To determine the exact reason for the high JVM memory pressure, capture a heap
dump of the JVM while its memory usage is high, and also capture the <>
covering the same time period.

[discrete]
[[reduce-jvm-memory-pressure]]
==== Reduce JVM memory pressure

This section contains some common suggestions for reducing JVM memory pressure.

**Reduce your shard count**

Every shard uses memory. In most cases, a small set of large shards uses fewer
resources than many small shards. For tips on reducing your shard count, see
<>.

[[avoid-expensive-searches]]
**Avoid expensive searches**

Expensive searches can use large amounts of memory.
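For example, a request that pages deep into a result set is expensive because
the node must collect and sort `from + size` hits on the heap before returning
a page; this is why the `index.max_result_window` index setting caps
`from + size` at 10,000 by default. The following request is an illustrative
sketch (`my-index` is a hypothetical index name):

[source,console]
----
GET my-index/_search
{
  "from": 9000,
  "size": 1000
}
----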
To better track expensive searches on your cluster, enable <>.

Expensive searches may have a large <>, use aggregations with a large number
of buckets, or include <>. To prevent expensive searches, consider the
following setting changes:

* Lower the `size` limit using the <> index setting.

* Decrease the maximum number of allowed aggregation buckets using the <>
cluster setting.

* Disable expensive queries using the <> cluster setting.

* Set a default search timeout using the <> cluster setting.

[source,console]
----
PUT _settings
{
  "index.max_result_window": 5000
}

PUT _cluster/settings
{
  "persistent": {
    "search.max_buckets": 20000,
    "search.allow_expensive_queries": false
  }
}
----
// TEST[s/^/PUT my-index\n/]

**Prevent mapping explosions**

Defining too many fields or nesting fields too deeply can lead to <> that use
large amounts of memory. To prevent mapping explosions, use the <> to limit
the number of field mappings.

**Spread out bulk requests**

While more efficient than individual requests, large <> or <> requests can
still create high JVM memory pressure. If possible, submit smaller requests
and allow more time between them.

**Upgrade node memory**

Heavy indexing and search loads can cause high JVM memory pressure. To better
handle heavy workloads, upgrade your nodes to increase their memory capacity.
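After changing a node's memory capacity, one way to confirm the new heap limit
and watch current usage is the cat nodes API, requesting the heap-related
columns:

[source,console]
----
GET _cat/nodes?v=true&h=name,heap.current,heap.percent,heap.max
----

If `heap.percent` still sits consistently above 85% after the upgrade, revisit
the other suggestions in this section.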