From 7c5b6483a1994fbacbeeb98a47a67e95d288c440 Mon Sep 17 00:00:00 2001
From: Luca Belluccini
Date: Mon, 28 Nov 2022 12:55:47 +0000
Subject: [PATCH] [DOCS] Typo in Search speed (#91934)

* [DOCS] Typo in Search speed

The PR https://github.com/elastic/elasticsearch/pull/89782 caused some
broken tags to leak into the text

* Fix tags

* Make all headings discrete

Co-authored-by: Abdon Pijpelink
---
 docs/reference/how-to/search-speed.asciidoc | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/docs/reference/how-to/search-speed.asciidoc b/docs/reference/how-to/search-speed.asciidoc
index 04b670c1a724..0db3ca04e99a 100644
--- a/docs/reference/how-to/search-speed.asciidoc
+++ b/docs/reference/how-to/search-speed.asciidoc
@@ -10,7 +10,7 @@ goes to the filesystem cache so that Elasticsearch can keep hot regions of
 the index in physical memory.
 
 [discrete]
-tag::readahead[]
+// tag::readahead[]
 === Avoid page cache thrashing by using modest readahead values on Linux
 
 Search can cause a lot of randomized read I/O. When the underlying block
@@ -35,7 +35,7 @@ as a transient setting). We recommend a value of `128KiB` for readahead.
 WARNING: `blockdev` expects values in 512 byte sectors whereas `lsblk` reports
 values in `KiB`. As an example, to temporarily set readahead to `128KiB`
 for `/dev/nvme0n1`, specify `blockdev --setra 256 /dev/nvme0n1`.
-end::readahead[]
+// end::readahead[]
 
 [discrete]
 === Use faster hardware
@@ -358,7 +358,7 @@ PUT index
 }
 --------------------------------------------------
 
-tag::warm-fs-cache[]
+// tag::warm-fs-cache[]
 [discrete]
 === Warm up the filesystem cache
 
@@ -372,7 +372,7 @@ depending on the file extension using the
 WARNING: Loading data into the filesystem cache eagerly on too many indices or
 too many files will make search _slower_ if the filesystem cache is not large
 enough to hold all the data. Use with caution.
-end::warm-fs-cache[]
+// end::warm-fs-cache[]
 
 [discrete]
 === Use index sorting to speed up conjunctions
@@ -424,6 +424,7 @@ be able to cope with `max_failures` node failures at once at most, then the
 right number of replicas for you is
 `max(max_failures, ceil(num_nodes / num_primaries) - 1)`.
 
+[discrete]
 === Tune your queries with the Search Profiler
 
 The {ref}/search-profile.html[Profile API] provides detailed information about
@@ -438,6 +439,7 @@ Because the Profile API itself adds significant overhead to the query, this
 information is best used to understand the relative cost of the various query
 components. It does not provide a reliable measure of actual processing time.
 
+[discrete]
 [[faster-phrase-queries]]
 === Faster phrase queries with `index_phrases`
 
@@ -446,6 +448,7 @@ indexes 2-shingles and is automatically leveraged by query parsers to run
 phrase queries that don't have a slop. If your use-case involves running lots
 of phrase queries, this can speed up queries significantly.
 
+[discrete]
 [[faster-prefix-queries]]
 === Faster prefix queries with `index_prefixes`
 
@@ -454,6 +457,7 @@ indexes prefixes of all terms and is automatically leveraged by query parsers
 to run prefix queries. If your use-case involves running lots of prefix
 queries, this can speed up queries significantly.
 
+[discrete]
 [[faster-filtering-with-constant-keyword]]
 === Use `constant_keyword` to speed up filtering