diff --git a/docs/painless/painless-guide/painless-walkthrough.asciidoc b/docs/painless/painless-guide/painless-walkthrough.asciidoc index 8648e30b07ee..771330ad9a5b 100644 --- a/docs/painless/painless-guide/painless-walkthrough.asciidoc +++ b/docs/painless/painless-guide/painless-walkthrough.asciidoc @@ -31,7 +31,7 @@ PUT hockey/_bulk?refresh ---------------------------------------------------------------- // TESTSETUP -[float] +[discrete] ==== Accessing Doc Values from Painless Document values can be accessed from a `Map` named `doc`. @@ -111,7 +111,7 @@ GET hockey/_search ---------------------------------------------------------------- -[float] +[discrete] ==== Missing values `doc['field'].value` throws an exception if @@ -121,7 +121,7 @@ To check if a document is missing a value, you can call `doc['field'].size() == 0`. -[float] +[discrete] ==== Updating Fields with Painless You can also easily update fields. You access the original source for a field as `ctx._source.`. @@ -177,7 +177,7 @@ POST hockey/_update/1 } ---------------------------------------------------------------- -[float] +[discrete] [[modules-scripting-painless-dates]] ==== Dates @@ -202,7 +202,7 @@ GET hockey/_search } ---------------------------------------------------------------- -[float] +[discrete] [[modules-scripting-painless-regex]] ==== Regular expressions diff --git a/docs/plugins/alerting.asciidoc b/docs/plugins/alerting.asciidoc index 1e365306a84e..a440b6b83675 100644 --- a/docs/plugins/alerting.asciidoc +++ b/docs/plugins/alerting.asciidoc @@ -3,7 +3,7 @@ Alerting plugins allow Elasticsearch to monitor indices and to trigger alerts when thresholds are breached. 
-[float] +[discrete] === Core alerting plugins The core alerting plugins are: diff --git a/docs/plugins/analysis-phonetic.asciidoc b/docs/plugins/analysis-phonetic.asciidoc index 6e81e8efdcaf..1f43862bac82 100644 --- a/docs/plugins/analysis-phonetic.asciidoc +++ b/docs/plugins/analysis-phonetic.asciidoc @@ -73,7 +73,7 @@ is often beneficial to use separate fields for analysis with and without phoneti That way searches can be run against both fields with differing boosts and trade-offs (e.g. only run a fuzzy `match` query on the original text field, but not on the phonetic version). -[float] +[discrete] ===== Double metaphone settings If the `double_metaphone` encoder is used, then this additional setting is @@ -83,7 +83,7 @@ supported: The maximum length of the emitted metaphone token. Defaults to `4`. -[float] +[discrete] ===== Beider Morse settings If the `beider_morse` encoder is used, then these additional settings are diff --git a/docs/plugins/analysis-smartcn.asciidoc b/docs/plugins/analysis-smartcn.asciidoc index 2d9d363760ab..704c15b56e67 100644 --- a/docs/plugins/analysis-smartcn.asciidoc +++ b/docs/plugins/analysis-smartcn.asciidoc @@ -14,7 +14,7 @@ include::install_remove.asciidoc[] [[analysis-smartcn-tokenizer]] -[float] +[discrete] ==== `smartcn` tokenizer and token filter The plugin provides the `smartcn` analyzer, `smartcn_tokenizer` tokenizer, and diff --git a/docs/plugins/analysis-stempel.asciidoc b/docs/plugins/analysis-stempel.asciidoc index 6afa88013c12..18d4f73af3be 100644 --- a/docs/plugins/analysis-stempel.asciidoc +++ b/docs/plugins/analysis-stempel.asciidoc @@ -11,7 +11,7 @@ http://www.egothor.org/[Egothor project]. 
include::install_remove.asciidoc[] [[analysis-stempel-tokenizer]] -[float] +[discrete] ==== `stempel` tokenizer and token filters The plugin provides the `polish` analyzer and the `polish_stem` and `polish_stop` token filters, diff --git a/docs/plugins/analysis-ukrainian.asciidoc b/docs/plugins/analysis-ukrainian.asciidoc index a86c1d18eea9..178fc6d507c6 100644 --- a/docs/plugins/analysis-ukrainian.asciidoc +++ b/docs/plugins/analysis-ukrainian.asciidoc @@ -9,7 +9,7 @@ It provides stemming for Ukrainian using the http://github.com/morfologik/morfol include::install_remove.asciidoc[] [[analysis-ukrainian-analyzer]] -[float] +[discrete] ==== `ukrainian` analyzer The plugin provides the `ukrainian` analyzer. diff --git a/docs/plugins/analysis.asciidoc b/docs/plugins/analysis.asciidoc index 68ba99f7a423..82f3f15ab9d9 100644 --- a/docs/plugins/analysis.asciidoc +++ b/docs/plugins/analysis.asciidoc @@ -4,7 +4,7 @@ Analysis plugins extend Elasticsearch by adding new analyzers, tokenizers, token filters, or character filters to Elasticsearch. -[float] +[discrete] ==== Core analysis plugins The core analysis plugins are: @@ -44,7 +44,7 @@ Provides high quality stemming for Polish. Provides stemming for Ukrainian. -[float] +[discrete] ==== Community contributed analysis plugins A number of analysis plugins have been contributed by our community: diff --git a/docs/plugins/api.asciidoc b/docs/plugins/api.asciidoc index 7eeba28b2226..96d54f591aac 100644 --- a/docs/plugins/api.asciidoc +++ b/docs/plugins/api.asciidoc @@ -3,7 +3,7 @@ API extension plugins add new functionality to Elasticsearch by adding new APIs or features, usually to do with search or mapping. 
-[float] +[discrete] === Community contributed API extension plugins A number of plugins have been contributed by our community: diff --git a/docs/plugins/authors.asciidoc b/docs/plugins/authors.asciidoc index d0dc227df0b6..531aea142a08 100644 --- a/docs/plugins/authors.asciidoc +++ b/docs/plugins/authors.asciidoc @@ -18,7 +18,7 @@ These examples provide the bare bones needed to get started. For more information about how to write a plugin, we recommend looking at the plugins listed in this documentation for inspiration. -[float] +[discrete] === Plugin descriptor file All plugins must contain a file called `plugin-descriptor.properties`. @@ -32,7 +32,7 @@ include::{plugin-properties-files}/plugin-descriptor.properties[] Either fill in this template yourself or, if you are using Elasticsearch's Gradle build system, you can fill in the necessary values in the `build.gradle` file for your plugin. -[float] +[discrete] ==== Mandatory elements for plugins @@ -70,7 +70,7 @@ in the presence of plugins with the incorrect `elasticsearch.version`. ============================================== -[float] +[discrete] === Testing your plugin When testing a Java plugin, it will only be auto-loaded if it is in the @@ -81,7 +81,7 @@ You may also load your plugin within the test framework for integration tests. Read more in {ref}/integration-tests.html#changing-node-configuration[Changing Node Configuration]. -[float] +[discrete] [[plugin-authors-jsm]] === Java Security permissions diff --git a/docs/plugins/discovery.asciidoc b/docs/plugins/discovery.asciidoc index 2e021eb7657e..b3090616add2 100644 --- a/docs/plugins/discovery.asciidoc +++ b/docs/plugins/discovery.asciidoc @@ -5,7 +5,7 @@ Discovery plugins extend Elasticsearch by adding new seed hosts providers that can be used to extend the {ref}/modules-discovery.html[cluster formation module]. -[float] +[discrete] ==== Core discovery plugins The core discovery plugins are: @@ -25,7 +25,7 @@ addresses of seed hosts. 
The Google Compute Engine discovery plugin uses the GCE API to identify the addresses of seed hosts. -[float] +[discrete] ==== Community contributed discovery plugins The following discovery plugins have been contributed by our community: diff --git a/docs/plugins/ingest.asciidoc b/docs/plugins/ingest.asciidoc index ef5cfb72f254..89075c32ab95 100644 --- a/docs/plugins/ingest.asciidoc +++ b/docs/plugins/ingest.asciidoc @@ -3,7 +3,7 @@ The ingest plugins extend Elasticsearch by providing additional ingest node capabilities. -[float] +[discrete] === Core Ingest Plugins The core ingest plugins are: @@ -29,7 +29,7 @@ A processor that extracts details from the User-Agent header value. The distributed by default with Elasticsearch. See {ref}/user-agent-processor.html[User Agent processor] for more details. -[float] +[discrete] === Community contributed ingest plugins The following plugin has been contributed by our community: diff --git a/docs/plugins/install_remove.asciidoc b/docs/plugins/install_remove.asciidoc index f41bea39bbc4..98652ffb357a 100644 --- a/docs/plugins/install_remove.asciidoc +++ b/docs/plugins/install_remove.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [id="{plugin_name}-install"] ==== Installation @@ -25,7 +25,7 @@ This plugin can be downloaded for < --------------------- -[float] +[discrete] === Proxy settings To install a plugin via a proxy, you can add the proxy details to the diff --git a/docs/plugins/repository-hdfs.asciidoc b/docs/plugins/repository-hdfs.asciidoc index 95dd89e5947b..dcb2255d5b42 100644 --- a/docs/plugins/repository-hdfs.asciidoc +++ b/docs/plugins/repository-hdfs.asciidoc @@ -78,7 +78,7 @@ include::repository-shared-settings.asciidoc[] link:repository-hdfs-security-runtime[Creating the Secure Repository]). [[repository-hdfs-availability]] -[float] +[discrete] ===== A Note on HDFS Availability When you initialize a repository, its settings are persisted in the cluster state. 
When a node comes online, it will attempt to initialize all repositories for which it has settings. If your cluster has an HDFS repository configured, then @@ -106,7 +106,7 @@ methods are supported by the plugin: <> for more info) [[repository-hdfs-security-keytabs]] -[float] +[discrete] ===== Principals and Keytabs Before attempting to connect to a secured HDFS cluster, provision the Kerberos principals and keytabs that the Elasticsearch nodes will use for authenticating to Kerberos. For maximum security and to avoid tripping up the Kerberos @@ -137,7 +137,7 @@ host! // Setup at runtime (principal name) [[repository-hdfs-security-runtime]] -[float] +[discrete] ===== Creating the Secure Repository Once your keytab files are in place and your cluster is started, creating a secured HDFS repository is simple. Just add the name of the principal that you will be authenticating as in the repository settings under the @@ -175,7 +175,7 @@ PUT _snapshot/my_hdfs_repository // TEST[skip:we don't have hdfs set up while testing this] [[repository-hdfs-security-authorization]] -[float] +[discrete] ===== Authorization Once Elasticsearch is connected and authenticated to HDFS, HDFS will infer a username to use for authorizing file access for the client. By default, it picks this username from the primary part of diff --git a/docs/plugins/repository-s3.asciidoc b/docs/plugins/repository-s3.asciidoc index ac8a2cc1328c..3c3600aea8b1 100644 --- a/docs/plugins/repository-s3.asciidoc +++ b/docs/plugins/repository-s3.asciidoc @@ -200,7 +200,7 @@ pattern then you should set this setting to `true` when upgrading. https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html#setSignerOverride-java.lang.String-[AWS Java SDK documentation] for details. Defaults to empty string which means that no signing algorithm override will be used. 
-[float] +[discrete] [[repository-s3-compatible-services]] ===== S3-compatible services @@ -435,7 +435,7 @@ The bucket needs to exist to register a repository for snapshots. If you did not create the bucket then the repository registration will fail. [[repository-s3-aws-vpc]] -[float] +[discrete] ==== AWS VPC Bandwidth Settings AWS instances resolve S3 endpoints to a public IP. If the Elasticsearch diff --git a/docs/plugins/repository.asciidoc b/docs/plugins/repository.asciidoc index d44417080183..58da220862bb 100644 --- a/docs/plugins/repository.asciidoc +++ b/docs/plugins/repository.asciidoc @@ -5,7 +5,7 @@ Repository plugins extend the {ref}/modules-snapshots.html[Snapshot/Restore] functionality in Elasticsearch by adding repositories backed by the cloud or by distributed file systems: -[float] +[discrete] ==== Core repository plugins The core repository plugins are: @@ -27,7 +27,7 @@ The Hadoop HDFS Repository plugin adds support for using HDFS as a repository. The GCS repository plugin adds support for using Google Cloud Storage service as a repository. -[float] +[discrete] === Community contributed repository plugins The following plugin has been contributed by our community: diff --git a/docs/plugins/security.asciidoc b/docs/plugins/security.asciidoc index 1b6262211303..89927a3d6da3 100644 --- a/docs/plugins/security.asciidoc +++ b/docs/plugins/security.asciidoc @@ -3,7 +3,7 @@ Security plugins add a security layer to Elasticsearch. -[float] +[discrete] === Core security plugins The core security plugins are: @@ -15,7 +15,7 @@ enterprise-grade security to their Elastic Stack. Designed to address the growing security needs of thousands of enterprises using the Elastic Stack today, X-Pack provides peace of mind when it comes to protecting your data. 
-[float] +[discrete] === Community contributed security plugins The following plugins have been contributed by our community: diff --git a/docs/plugins/store.asciidoc b/docs/plugins/store.asciidoc index 8a4a520443a9..b3d732217a5a 100644 --- a/docs/plugins/store.asciidoc +++ b/docs/plugins/store.asciidoc @@ -3,7 +3,7 @@ Store plugins offer alternatives to default Lucene stores. -[float] +[discrete] === Core store plugins The core store plugins are: diff --git a/docs/reference/aggregations.asciidoc b/docs/reference/aggregations.asciidoc index 472b87b72fe6..fe311e8dc167 100644 --- a/docs/reference/aggregations.asciidoc +++ b/docs/reference/aggregations.asciidoc @@ -44,7 +44,7 @@ NOTE: Aggregations operate on the `double` representation of the data. As a consequence, the result may be approximate when running on longs whose absolute value is greater than `2^53`. -[float] +[discrete] == Structuring Aggregations The following snippet captures the basic structure of aggregations: @@ -76,7 +76,7 @@ sub-aggregations you define on the bucketing aggregation level will be computed bucketing aggregation. For example, if you define a set of aggregations under the `range` aggregation, the sub-aggregations will be computed for the range buckets that are defined. -[float] +[discrete] === Values Source Some aggregations work on values extracted from the aggregated documents. Typically, the values will be extracted from diff --git a/docs/reference/aggregations/pipeline.asciidoc b/docs/reference/aggregations/pipeline.asciidoc index 4d6c7724cf12..406bd457c6d0 100644 --- a/docs/reference/aggregations/pipeline.asciidoc +++ b/docs/reference/aggregations/pipeline.asciidoc @@ -26,7 +26,7 @@ NOTE: Because pipeline aggregations only add to the output, when chaining pipeli will be included in the final output. [[buckets-path-syntax]] -[float] +[discrete] === `buckets_path` Syntax Most pipeline aggregations require another aggregation as their input. 
The input aggregation is defined via the `buckets_path` @@ -157,7 +157,7 @@ POST /_search <1> `buckets_path` selects the hats and bags buckets (via `['hat']`/`['bag']``) to use in the script specifically, instead of fetching all the buckets from `sale_type` aggregation -[float] +[discrete] === Special Paths Instead of pathing to a metric, `buckets_path` can use a special `"_count"` path. This instructs @@ -228,7 +228,7 @@ POST /sales/_search for the `categories` aggregation [[dots-in-agg-names]] -[float] +[discrete] === Dealing with dots in agg names An alternate syntax is supported to cope with aggregations or metrics which @@ -243,7 +243,7 @@ may be referred to as: // NOTCONSOLE [[gap-policy]] -[float] +[discrete] === Dealing with gaps in the data Data in the real world is often noisy and sometimes contains *gaps* -- places where data simply doesn't exist. This can diff --git a/docs/reference/analysis.asciidoc b/docs/reference/analysis.asciidoc index 460335a20eb4..15684f60a882 100644 --- a/docs/reference/analysis.asciidoc +++ b/docs/reference/analysis.asciidoc @@ -11,7 +11,7 @@ _Text analysis_ is the process of converting unstructured text, like the body of an email or a product description, into a structured format that's optimized for search. 
-[float] +[discrete] [[when-to-configure-analysis]] === When to configure text analysis @@ -29,7 +29,7 @@ analysis configuration if you're using {es} to: * Fine-tune search for a specific language * Perform lexicographic or linguistic research -[float] +[discrete] [[analysis-toc]] === In this section diff --git a/docs/reference/analysis/analyzers.asciidoc b/docs/reference/analysis/analyzers.asciidoc index fe527b56bb69..15e8fb435f24 100644 --- a/docs/reference/analysis/analyzers.asciidoc +++ b/docs/reference/analysis/analyzers.asciidoc @@ -45,7 +45,7 @@ Elasticsearch provides many language-specific analyzers like `english` or The `fingerprint` analyzer is a specialist analyzer which creates a fingerprint which can be used for duplicate detection. -[float] +[discrete] === Custom analyzers If you do not find an analyzer suitable for your needs, you can create a diff --git a/docs/reference/analysis/analyzers/custom-analyzer.asciidoc b/docs/reference/analysis/analyzers/custom-analyzer.asciidoc index e7907f7e4bb7..f44e9daa56b5 100644 --- a/docs/reference/analysis/analyzers/custom-analyzer.asciidoc +++ b/docs/reference/analysis/analyzers/custom-analyzer.asciidoc @@ -8,7 +8,7 @@ When the built-in analyzers do not fulfill your needs, you can create a * a <> * zero or more <>. -[float] +[discrete] === Configuration The `custom` analyzer accepts the following parameters: @@ -36,7 +36,7 @@ The `custom` analyzer accepts the following parameters: ensure that a phrase query doesn't match two terms from different array elements. Defaults to `100`. See <> for more. 
-[float] +[discrete] === Example configuration Here is an example that combines the following: diff --git a/docs/reference/analysis/analyzers/fingerprint-analyzer.asciidoc b/docs/reference/analysis/analyzers/fingerprint-analyzer.asciidoc index a04141395951..34d7aa85ea2c 100644 --- a/docs/reference/analysis/analyzers/fingerprint-analyzer.asciidoc +++ b/docs/reference/analysis/analyzers/fingerprint-analyzer.asciidoc @@ -12,7 +12,7 @@ Input text is lowercased, normalized to remove extended characters, sorted, deduplicated and concatenated into a single token. If a stopword list is configured, stop words will also be removed. -[float] +[discrete] === Example output [source,console] @@ -51,7 +51,7 @@ The above sentence would produce the following single term: [ and consistent godel is said sentence this yes ] --------------------------- -[float] +[discrete] === Configuration The `fingerprint` analyzer accepts the following parameters: @@ -79,7 +79,7 @@ See the <> for more information about stop word configuration. -[float] +[discrete] === Example configuration In this example, we configure the `fingerprint` analyzer to use the @@ -135,7 +135,7 @@ The above example produces the following term: [ consistent godel said sentence yes ] --------------------------- -[float] +[discrete] === Definition The `fingerprint` tokenizer consists of: diff --git a/docs/reference/analysis/analyzers/keyword-analyzer.asciidoc b/docs/reference/analysis/analyzers/keyword-analyzer.asciidoc index aacfd047650d..888376bc46fa 100644 --- a/docs/reference/analysis/analyzers/keyword-analyzer.asciidoc +++ b/docs/reference/analysis/analyzers/keyword-analyzer.asciidoc @@ -7,7 +7,7 @@ The `keyword` analyzer is a ``noop'' analyzer which returns the entire input string as a single token. -[float] +[discrete] === Example output [source,console] @@ -46,12 +46,12 @@ The above sentence would produce the following single term: [ The 2 QUICK Brown-Foxes jumped over the lazy dog's bone. 
] --------------------------- -[float] +[discrete] === Configuration The `keyword` analyzer is not configurable. -[float] +[discrete] === Definition The `keyword` analyzer consists of: diff --git a/docs/reference/analysis/analyzers/pattern-analyzer.asciidoc b/docs/reference/analysis/analyzers/pattern-analyzer.asciidoc index 0c96ceaced41..77c4ec3bd954 100644 --- a/docs/reference/analysis/analyzers/pattern-analyzer.asciidoc +++ b/docs/reference/analysis/analyzers/pattern-analyzer.asciidoc @@ -22,7 +22,7 @@ Read more about http://www.regular-expressions.info/catastrophic.html[pathologic ======================================== -[float] +[discrete] === Example output [source,console] @@ -138,7 +138,7 @@ The above sentence would produce the following terms: [ the, 2, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ] --------------------------- -[float] +[discrete] === Configuration The `pattern` analyzer accepts the following parameters: @@ -170,7 +170,7 @@ See the <> for more information about stop word configuration. 
-[float] +[discrete] === Example configuration In this example, we configure the `pattern` analyzer to split email addresses @@ -258,7 +258,7 @@ The above example produces the following terms: [ john, smith, foo, bar, com ] --------------------------- -[float] +[discrete] ==== CamelCase tokenizer The following more complicated example splits CamelCase text into tokens: @@ -363,7 +363,7 @@ The regex above is easier to understand as: ) -------------------------------------------------- -[float] +[discrete] === Definition The `pattern` anlayzer consists of: diff --git a/docs/reference/analysis/analyzers/standard-analyzer.asciidoc b/docs/reference/analysis/analyzers/standard-analyzer.asciidoc index b74270df6f13..93f4e44ea188 100644 --- a/docs/reference/analysis/analyzers/standard-analyzer.asciidoc +++ b/docs/reference/analysis/analyzers/standard-analyzer.asciidoc @@ -10,7 +10,7 @@ Segmentation algorithm, as specified in http://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well for most languages. -[float] +[discrete] === Example output [source,console] @@ -119,7 +119,7 @@ The above sentence would produce the following terms: [ the, 2, quick, brown, foxes, jumped, over, the, lazy, dog's, bone ] --------------------------- -[float] +[discrete] === Configuration The `standard` analyzer accepts the following parameters: @@ -143,7 +143,7 @@ See the <> for more information about stop word configuration. 
-[float] +[discrete] === Example configuration In this example, we configure the `standard` analyzer to have a @@ -263,7 +263,7 @@ The above example produces the following terms: [ 2, quick, brown, foxes, jumpe, d, over, lazy, dog's, bone ] --------------------------- -[float] +[discrete] === Definition The `standard` analyzer consists of: diff --git a/docs/reference/analysis/analyzers/stop-analyzer.asciidoc b/docs/reference/analysis/analyzers/stop-analyzer.asciidoc index 40b1eed49c27..af0ca61f2189 100644 --- a/docs/reference/analysis/analyzers/stop-analyzer.asciidoc +++ b/docs/reference/analysis/analyzers/stop-analyzer.asciidoc @@ -8,7 +8,7 @@ The `stop` analyzer is the same as the <> for more information about stop word configuration. -[float] +[discrete] === Example configuration In this example, we configure the `stop` analyzer to use a specified list of @@ -228,7 +228,7 @@ The above example produces the following terms: [ quick, brown, foxes, jumped, lazy, dog, s, bone ] --------------------------- -[float] +[discrete] === Definition It consists of: diff --git a/docs/reference/analysis/analyzers/whitespace-analyzer.asciidoc b/docs/reference/analysis/analyzers/whitespace-analyzer.asciidoc index 937680332d25..3af4f140b586 100644 --- a/docs/reference/analysis/analyzers/whitespace-analyzer.asciidoc +++ b/docs/reference/analysis/analyzers/whitespace-analyzer.asciidoc @@ -7,7 +7,7 @@ The `whitespace` analyzer breaks text into terms whenever it encounters a whitespace character. -[float] +[discrete] === Example output [source,console] @@ -109,12 +109,12 @@ The above sentence would produce the following terms: [ The, 2, QUICK, Brown-Foxes, jumped, over, the, lazy, dog's, bone. ] --------------------------- -[float] +[discrete] === Configuration The `whitespace` analyzer is not configurable. 
-[float] +[discrete] === Definition It consists of: diff --git a/docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc b/docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc index 2da5e1938991..caa5f819ef01 100644 --- a/docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc +++ b/docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc @@ -22,7 +22,7 @@ Read more about http://www.regular-expressions.info/catastrophic.html[pathologic ======================================== -[float] +[discrete] === Configuration The `pattern_replace` character filter accepts the following parameters: @@ -43,7 +43,7 @@ The `pattern_replace` character filter accepts the following parameters: Java regular expression http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags]. Flags should be pipe-separated, eg `"CASE_INSENSITIVE|COMMENTS"`. -[float] +[discrete] === Example configuration In this example, we configure the `pattern_replace` character filter to diff --git a/docs/reference/analysis/normalizers.asciidoc b/docs/reference/analysis/normalizers.asciidoc index eff9295857da..6646ffb2bdd3 100644 --- a/docs/reference/analysis/normalizers.asciidoc +++ b/docs/reference/analysis/normalizers.asciidoc @@ -16,7 +16,7 @@ following: `arabic_normalization`, `asciifolding`, `bengali_normalization`, Elasticsearch ships with a `lowercase` built-in normalizer. For other forms of normalization a custom configuration is required. 
-[float] +[discrete] === Custom normalizers Custom normalizers take a list of diff --git a/docs/reference/analysis/tokenfilters/multiplexer-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/multiplexer-tokenfilter.asciidoc index 41e757e5f91e..bcd04e865233 100644 --- a/docs/reference/analysis/tokenfilters/multiplexer-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/multiplexer-tokenfilter.asciidoc @@ -11,7 +11,7 @@ output tokens at the same position will be removed. WARNING: If the incoming token stream has duplicate tokens, then these will also be removed by the multiplexer -[float] +[discrete] === Options [horizontal] filters:: a list of token filters to apply to incoming tokens. These can be any @@ -27,7 +27,7 @@ preserve_original:: if `true` (the default) then emit the original token in addition to the filtered tokens -[float] +[discrete] === Settings example You can set it up like: diff --git a/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc index 778769a3233a..c0c3799cdb9e 100644 --- a/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc @@ -94,7 +94,7 @@ set to `false` no mapping would get added as when `expand=false` the target mapp `expand=true` then the mappings added would be equivalent to `foo, baz => foo, baz` i.e, all mappings other than the stop word. -[float] +[discrete] [[synonym-graph-tokenizer-ignore_case-deprecated]] ==== `tokenizer` and `ignore_case` are deprecated @@ -104,7 +104,7 @@ The `ignore_case` parameter works with `tokenizer` parameter only. Two synonym formats are supported: Solr, WordNet. 
-[float] +[discrete] ==== Solr synonyms The following is a sample format of the file: @@ -142,7 +142,7 @@ PUT /test_index However, it is recommended to define large synonyms set in a file using `synonyms_path`, because specifying them inline increases cluster size unnecessarily. -[float] +[discrete] ==== WordNet synonyms Synonyms based on http://wordnet.princeton.edu/[WordNet] format can be @@ -175,7 +175,7 @@ PUT /test_index Using `synonyms_path` to define WordNet synonyms in a file is supported as well. -[float] +[discrete] ==== Parsing synonym files Elasticsearch will use the token filters preceding the synonym filter diff --git a/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc index f42ae89daabd..c803bae05526 100644 --- a/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc @@ -85,7 +85,7 @@ set to `false` no mapping would get added as when `expand=false` the target mapp stop word. -[float] +[discrete] [[synonym-tokenizer-ignore_case-deprecated]] ==== `tokenizer` and `ignore_case` are deprecated @@ -95,7 +95,7 @@ The `ignore_case` parameter works with `tokenizer` parameter only. Two synonym formats are supported: Solr, WordNet. -[float] +[discrete] ==== Solr synonyms The following is a sample format of the file: @@ -133,7 +133,7 @@ PUT /test_index However, it is recommended to define large synonyms set in a file using `synonyms_path`, because specifying them inline increases cluster size unnecessarily. 
-[float] +[discrete] ==== WordNet synonyms Synonyms based on http://wordnet.princeton.edu/[WordNet] format can be diff --git a/docs/reference/analysis/tokenizers.asciidoc b/docs/reference/analysis/tokenizers.asciidoc index 04f7b673940b..fa47c05e3a00 100644 --- a/docs/reference/analysis/tokenizers.asciidoc +++ b/docs/reference/analysis/tokenizers.asciidoc @@ -18,7 +18,7 @@ represents (used for highlighting search snippets). Elasticsearch has a number of built in tokenizers which can be used to build <>. -[float] +[discrete] === Word Oriented Tokenizers The following tokenizers are usually used for tokenizing full text into @@ -59,7 +59,7 @@ The `classic` tokenizer is a grammar based tokenizer for the English Language. The `thai` tokenizer segments Thai text into words. -[float] +[discrete] === Partial Word Tokenizers These tokenizers break up text or words into small fragments, for partial word @@ -80,7 +80,7 @@ n-grams of each word which are anchored to the start of the word, e.g. `quick` - `[q, qu, qui, quic, quick]`. -[float] +[discrete] === Structured Text Tokenizers The following tokenizers are usually used with structured text like diff --git a/docs/reference/analysis/tokenizers/chargroup-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/chargroup-tokenizer.asciidoc index f9668866f29f..84a29dc5718e 100644 --- a/docs/reference/analysis/tokenizers/chargroup-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/chargroup-tokenizer.asciidoc @@ -9,7 +9,7 @@ character which is in a defined set. It is mostly useful for cases where a simpl custom tokenization is desired, and the overhead of use of the <> is not acceptable. -[float] +[discrete] === Configuration The `char_group` tokenizer accepts one parameter: @@ -26,7 +26,7 @@ The `char_group` tokenizer accepts one parameter: it is split at `max_token_length` intervals. Defaults to `255`. 
-[float] +[discrete] === Example output [source,console] diff --git a/docs/reference/analysis/tokenizers/classic-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/classic-tokenizer.asciidoc index dd083a8ab7af..fc14f47f9c80 100644 --- a/docs/reference/analysis/tokenizers/classic-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/classic-tokenizer.asciidoc @@ -18,7 +18,7 @@ languages other than English: * It recognizes email addresses and internet hostnames as one token. -[float] +[discrete] === Example output [source,console] @@ -127,7 +127,7 @@ The above sentence would produce the following terms: [ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ] --------------------------- -[float] +[discrete] === Configuration The `classic` tokenizer accepts the following parameters: @@ -138,7 +138,7 @@ The `classic` tokenizer accepts the following parameters: The maximum token length. If a token is seen that exceeds this length then it is split at `max_token_length` intervals. Defaults to `255`. -[float] +[discrete] === Example configuration In this example, we configure the `classic` tokenizer to have a diff --git a/docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc index 4a34e11a0649..a076bbf7d90a 100644 --- a/docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc @@ -17,7 +17,7 @@ order, such as movie or song titles, the choice than edge N-grams. Edge N-grams have the advantage when trying to autocomplete words that can appear in any order. -[float] +[discrete] === Example output With the default settings, the `edge_ngram` tokenizer treats the initial text as a @@ -70,7 +70,7 @@ The above sentence would produce the following terms: NOTE: These default gram lengths are almost entirely useless. You need to configure the `edge_ngram` before using it. 
-[float] +[discrete] === Configuration The `edge_ngram` tokenizer accepts the following parameters: @@ -108,7 +108,7 @@ Character classes may be any of the following: setting this to `+-_` will make the tokenizer treat the plus, minus and underscore sign as part of a token. -[float] +[discrete] [[max-gram-limits]] === Limitations of the `max_gram` parameter @@ -133,7 +133,7 @@ and `apple`. We recommend testing both approaches to see which best fits your use case and desired search experience. -[float] +[discrete] === Example configuration In this example, we configure the `edge_ngram` tokenizer to treat letters and diff --git a/docs/reference/analysis/tokenizers/keyword-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/keyword-tokenizer.asciidoc index 8b5605653bb1..c4ee77458d83 100644 --- a/docs/reference/analysis/tokenizers/keyword-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/keyword-tokenizer.asciidoc @@ -8,7 +8,7 @@ The `keyword` tokenizer is a ``noop'' tokenizer that accepts whatever text it is given and outputs the exact same text as a single term. It can be combined with token filters to normalise output, e.g. lower-casing email addresses. -[float] +[discrete] === Example output [source,console] @@ -95,7 +95,7 @@ The request produces the following token: --------------------------- -[float] +[discrete] === Configuration The `keyword` tokenizer accepts the following parameters: diff --git a/docs/reference/analysis/tokenizers/letter-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/letter-tokenizer.asciidoc index ebec0afd38d4..c5b809fac1c2 100644 --- a/docs/reference/analysis/tokenizers/letter-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/letter-tokenizer.asciidoc @@ -9,7 +9,7 @@ character which is not a letter. It does a reasonable job for most European languages, but does a terrible job for some Asian languages, where words are not separated by spaces. 
-[float] +[discrete] === Example output [source,console] @@ -118,7 +118,7 @@ The above sentence would produce the following terms: [ The, QUICK, Brown, Foxes, jumped, over, the, lazy, dog, s, bone ] --------------------------- -[float] +[discrete] === Configuration The `letter` tokenizer is not configurable. diff --git a/docs/reference/analysis/tokenizers/lowercase-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/lowercase-tokenizer.asciidoc index 88bbb77fcac6..ffe44292c52b 100644 --- a/docs/reference/analysis/tokenizers/lowercase-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/lowercase-tokenizer.asciidoc @@ -13,7 +13,7 @@ lowercases all terms. It is functionally equivalent to the efficient as it performs both steps in a single pass. -[float] +[discrete] === Example output [source,console] @@ -122,7 +122,7 @@ The above sentence would produce the following terms: [ the, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ] --------------------------- -[float] +[discrete] === Configuration The `lowercase` tokenizer is not configurable. diff --git a/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc index 1abc5ebc6a03..ecf5c5d851d4 100644 --- a/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc @@ -13,7 +13,7 @@ N-grams are like a sliding window that moves across the word - a continuous sequence of characters of the specified length. They are useful for querying languages that don't use spaces or that have long compound words, like German. 
-[float] +[discrete] === Example output With the default settings, the `ngram` tokenizer treats the initial text as a @@ -168,7 +168,7 @@ The above sentence would produce the following terms: [ Q, Qu, u, ui, i, ic, c, ck, k, "k ", " ", " F", F, Fo, o, ox, x ] --------------------------- -[float] +[discrete] === Configuration The `ngram` tokenizer accepts the following parameters: @@ -210,7 +210,7 @@ matches. A tri-gram (length `3`) is a good place to start. The index level setting `index.max_ngram_diff` controls the maximum allowed difference between `max_gram` and `min_gram`. -[float] +[discrete] === Example configuration In this example, we configure the `ngram` tokenizer to treat letters and diff --git a/docs/reference/analysis/tokenizers/pathhierarchy-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/pathhierarchy-tokenizer.asciidoc index 2081fdda4000..9aa2175f2123 100644 --- a/docs/reference/analysis/tokenizers/pathhierarchy-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/pathhierarchy-tokenizer.asciidoc @@ -8,7 +8,7 @@ The `path_hierarchy` tokenizer takes a hierarchical value like a filesystem path, splits on the path separator, and emits a term for each component in the tree. -[float] +[discrete] === Example output [source,console] @@ -62,7 +62,7 @@ The above text would produce the following terms: [ /one, /one/two, /one/two/three ] --------------------------- -[float] +[discrete] === Configuration The `path_hierarchy` tokenizer accepts the following parameters: @@ -86,7 +86,7 @@ The `path_hierarchy` tokenizer accepts the following parameters: `skip`:: The number of initial tokens to skip. Defaults to `0`. 
-[float] +[discrete] === Example configuration In this example, we configure the `path_hierarchy` tokenizer to split on `-` diff --git a/docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc index 13eb38f8c4c6..18b9f3acf23f 100644 --- a/docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc @@ -25,7 +25,7 @@ Read more about http://www.regular-expressions.info/catastrophic.html[pathologic ======================================== -[float] +[discrete] === Example output [source,console] @@ -99,7 +99,7 @@ The above sentence would produce the following terms: [ The, foo_bar_size, s, default, is, 5 ] --------------------------- -[float] +[discrete] === Configuration The `pattern` tokenizer accepts the following parameters: @@ -118,7 +118,7 @@ The `pattern` tokenizer accepts the following parameters: Which capture group to extract as tokens. Defaults to `-1` (split). -[float] +[discrete] === Example configuration In this example, we configure the `pattern` tokenizer to break text into diff --git a/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc index d7048986870d..bc4e131b5fcc 100644 --- a/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc @@ -20,7 +20,7 @@ For an explanation of the supported features and syntax, see <> except that it recognises URLs and email addresses as single tokens. 
-[float] +[discrete] === Example output [source,console] @@ -74,7 +74,7 @@ while the `standard` tokenizer would produce: [ Email, me, at, john.smith, global, international.com ] --------------------------- -[float] +[discrete] === Configuration The `uax_url_email` tokenizer accepts the following parameters: @@ -85,7 +85,7 @@ The `uax_url_email` tokenizer accepts the following parameters: The maximum token length. If a token is seen that exceeds this length then it is split at `max_token_length` intervals. Defaults to `255`. -[float] +[discrete] === Example configuration In this example, we configure the `uax_url_email` tokenizer to have a diff --git a/docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc index c7e49ba16ea2..525c4bda4fa9 100644 --- a/docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc @@ -7,7 +7,7 @@ The `whitespace` tokenizer breaks text into terms whenever it encounters a whitespace character. -[float] +[discrete] === Example output [source,console] @@ -109,7 +109,7 @@ The above sentence would produce the following terms: [ The, 2, QUICK, Brown-Foxes, jumped, over, the, lazy, dog's, bone. ] --------------------------- -[float] +[discrete] === Configuration The `whitespace` tokenizer accepts the following parameters: diff --git a/docs/reference/api-conventions.asciidoc b/docs/reference/api-conventions.asciidoc index 554f529d0dde..3a8fa3961154 100644 --- a/docs/reference/api-conventions.asciidoc +++ b/docs/reference/api-conventions.asciidoc @@ -160,7 +160,7 @@ include::rest-api/cron-expressions.asciidoc[] The following options can be applied to all of the REST APIs. 
-[float] +[discrete] ==== Pretty Results When appending `?pretty=true` to any request made, the JSON returned @@ -169,7 +169,7 @@ to set `?format=yaml` which will cause the result to be returned in the (sometimes) more readable yaml format. -[float] +[discrete] ==== Human readable output Statistics are returned in a format suitable for humans @@ -182,7 +182,7 @@ consumption. The default for the `human` flag is `false`. [[date-math]] -[float] +[discrete] ==== Date Math Most parameters which accept a formatted date value -- such as `gt` and `lt` @@ -219,7 +219,7 @@ Assuming `now` is `2001-01-01 12:00:00`, some examples are: `now-1h/d`:: `now` in milliseconds minus one hour, rounded down to UTC 00:00. Resolves to: `2001-01-01 00:00:00` `2001.02.01\|\|+1M/d`:: `2001-02-01` in milliseconds plus one month. Resolves to: `2001-03-01 00:00:00` -[float] +[discrete] [[common-options-response-filtering]] ==== Response Filtering @@ -376,7 +376,7 @@ GET /_search?filter_path=hits.hits._source&_source=title&sort=rating:desc -------------------------------------------------- -[float] +[discrete] ==== Flat Settings The `flat_settings` flag affects rendering of the lists of settings. When the @@ -445,27 +445,27 @@ Returns: By default `flat_settings` is set to `false`. -[float] +[discrete] ==== Parameters Rest parameters (when using HTTP, map to HTTP URL parameters) follow the convention of using underscore casing. -[float] +[discrete] ==== Boolean Values All REST API parameters (both request parameters and JSON body) support providing boolean "false" as the value `false` and boolean "true" as the value `true`. All other values will raise an error. -[float] +[discrete] ==== Number Values All REST APIs support providing numbered parameters as `string` on top of supporting the native JSON number types. [[time-units]] -[float] +[discrete] ==== Time units Whenever durations need to be specified, e.g. 
for a `timeout` parameter, the duration must specify @@ -481,7 +481,7 @@ the unit, like `2d` for 2 days. The supported units are: `nanos`:: Nanoseconds [[byte-units]] -[float] +[discrete] ==== Byte size units Whenever the byte size of data needs to be specified, e.g. when setting a buffer size @@ -497,7 +497,7 @@ these units use powers of 1024, so `1kb` means 1024 bytes. The supported units a `pb`:: Petabytes [[size-units]] -[float] +[discrete] ==== Unit-less quantities Unit-less quantities means that they don't have a "unit" like "bytes" or "Hertz" or "meter" or "long tonne". @@ -513,7 +513,7 @@ when we mean 87 though. These are the supported multipliers: `p`:: Peta [[distance-units]] -[float] +[discrete] ==== Distance Units Wherever distances need to be specified, such as the `distance` parameter in @@ -535,7 +535,7 @@ Millimeter:: `mm` or `millimeters` Nautical mile:: `NM`, `nmi`, or `nauticalmiles` [[fuzziness]] -[float] +[discrete] ==== Fuzziness Some queries and APIs support parameters to allow inexact _fuzzy_ matching, @@ -567,7 +567,7 @@ the default values are 3 and 6, equivalent to `AUTO:3,6` that make for lengths: `AUTO` should generally be the preferred value for `fuzziness`. -- -[float] +[discrete] [[common-options-error-options]] ==== Enabling stack traces @@ -643,7 +643,7 @@ The response looks like: // TESTRESPONSE[s/"stack_trace": "java.lang.IllegalArgum.+\.\.\."/"stack_trace": $body.error.stack_trace/] // TESTRESPONSE[s/"stack_trace": "java.lang.Number.+\.\.\."/"stack_trace": $body.error.caused_by.stack_trace/] -[float] +[discrete] ==== Request body in query string For libraries that don't accept a request body for non-POST requests, @@ -652,7 +652,7 @@ instead. When using this method, the `source_content_type` parameter should also be passed with a media type value that indicates the format of the source, such as `application/json`. 
-[float] +[discrete] ==== Content-Type Requirements The type of the content sent in a request body must be specified using diff --git a/docs/reference/autoscaling/apis/autoscaling-apis.asciidoc b/docs/reference/autoscaling/apis/autoscaling-apis.asciidoc index f6637ccd6875..d8f97a00771a 100644 --- a/docs/reference/autoscaling/apis/autoscaling-apis.asciidoc +++ b/docs/reference/autoscaling/apis/autoscaling-apis.asciidoc @@ -5,7 +5,7 @@ You can use the following APIs to perform autoscaling operations. -[float] +[discrete] [[autoscaling-api-top-level]] === Top-Level diff --git a/docs/reference/cat.asciidoc b/docs/reference/cat.asciidoc index 2b303e84ab95..983be092247c 100644 --- a/docs/reference/cat.asciidoc +++ b/docs/reference/cat.asciidoc @@ -21,11 +21,11 @@ All the cat commands accept a query string parameter `help` to see all the headers and info they provide, and the `/_cat` command alone lists all the available commands. -[float] +[discrete] [[common-parameters]] === Common parameters -[float] +[discrete] [[verbose]] ==== Verbose @@ -46,7 +46,7 @@ u_n93zwxThWHi1PDBJAGAg 127.0.0.1 127.0.0.1 u_n93zw -------------------------------------------------- // TESTRESPONSE[s/u_n93zw(xThWHi1PDBJAGAg)?/.+/ non_json] -[float] +[discrete] [[help]] ==== Help @@ -74,7 +74,7 @@ For example `GET _cat/shards/twitter?help` or `GET _cat/indices/twi*?help` results in an error. Use `GET _cat/shards?help` or `GET _cat/indices?help` instead. -[float] +[discrete] [[headers]] ==== Headers @@ -98,7 +98,7 @@ You can also request multiple columns using simple wildcards like `/_cat/thread_pool?h=ip,queue*` to get all headers (or aliases) starting with `queue`. -[float] +[discrete] [[numeric-formats]] ==== Numeric formats @@ -141,7 +141,7 @@ If you want to change the <>, use `size` parameter. If you want to change the <>, use `bytes` parameter. 
-[float] +[discrete] ==== Response as text, json, smile, yaml or cbor [source,sh] @@ -193,7 +193,7 @@ For example: -------------------------------------------------- // NOTCONSOLE -[float] +[discrete] [[sort]] ==== Sort diff --git a/docs/reference/ccr/apis/ccr-apis.asciidoc b/docs/reference/ccr/apis/ccr-apis.asciidoc index 7a0fd55a7b44..dea1f1603e4d 100644 --- a/docs/reference/ccr/apis/ccr-apis.asciidoc +++ b/docs/reference/ccr/apis/ccr-apis.asciidoc @@ -5,13 +5,13 @@ You can use the following APIs to perform {ccr} operations. -[float] +[discrete] [[ccr-api-top-level]] === Top-Level * <> -[float] +[discrete] [[ccr-api-follow]] === Follow @@ -23,7 +23,7 @@ You can use the following APIs to perform {ccr} operations. * <> * <> -[float] +[discrete] [[ccr-api-auto-follow]] === Auto-follow diff --git a/docs/reference/commands/certgen.asciidoc b/docs/reference/commands/certgen.asciidoc index 6087fe8440a0..2114e7a4e1a2 100644 --- a/docs/reference/commands/certgen.asciidoc +++ b/docs/reference/commands/certgen.asciidoc @@ -10,7 +10,7 @@ authorities (CA), certificate signing requests (CSR), and signed certificates for use with the Elastic Stack. Though this command is deprecated, you do not need to replace CAs, CSRs, or certificates that it created. -[float] +[discrete] === Synopsis [source,shell] @@ -23,7 +23,7 @@ bin/elasticsearch-certgen ([-s, --silent] | [-v, --verbose]) -------------------------------------------------- -[float] +[discrete] === Description By default, the command runs in interactive mode and you are prompted for @@ -54,7 +54,7 @@ organization-specific certificate authority to obtain signed certificates. The signed certificates must be in PEM format to work with the {stack} {security-features}. -[float] +[discrete] === Parameters `--cert `:: Specifies to generate new instance certificates and keys @@ -103,10 +103,10 @@ which can be blank. This parameter cannot be used with the `-csr` parameter. `-v, --verbose`:: Shows verbose output. 
-[float] +[discrete] === Examples -[float] +[discrete] [[certgen-silent]] ==== Using `elasticsearch-certgen` in Silent Mode diff --git a/docs/reference/commands/certutil.asciidoc b/docs/reference/commands/certutil.asciidoc index f69cb71364c0..8eae648ec00e 100644 --- a/docs/reference/commands/certutil.asciidoc +++ b/docs/reference/commands/certutil.asciidoc @@ -6,7 +6,7 @@ The `elasticsearch-certutil` command simplifies the creation of certificates for use with Transport Layer Security (TLS) in the {stack}. -[float] +[discrete] === Synopsis [source,shell] @@ -32,14 +32,14 @@ bin/elasticsearch-certutil [-h, --help] ([-s, --silent] | [-v, --verbose]) -------------------------------------------------- -[float] +[discrete] === Description You can specify one of the following modes: `ca`, `cert`, `csr`, `http`. The `elasticsearch-certutil` command also supports a silent mode of operation to enable easier batch operations. -[float] +[discrete] [[certutil-ca]] ==== CA mode @@ -51,7 +51,7 @@ format. You can subsequently use these files as input for the `cert` mode of the command. -[float] +[discrete] [[certutil-cert]] ==== CERT mode @@ -90,7 +90,7 @@ certificates and keys and packages them into a zip file. If you specify the `--keep-ca-key`, `--multiple` or `--in` parameters, the command produces a zip file containing the generated certificates and keys. -[float] +[discrete] [[certutil-csr]] ==== CSR mode @@ -111,7 +111,7 @@ private keys for each instance. Each CSR is provided as a standard PEM encoding of a PKCS#10 CSR. Each key is provided as a PEM encoding of an RSA private key. -[float] +[discrete] [[certutil-http]] ==== HTTP mode @@ -123,7 +123,7 @@ authority (CA), a certificate signing request (CSR), or certificates and keys for use in {es} and {kib}. Each folder in the zip file contains a readme that explains how to use the files. -[float] +[discrete] === Parameters `ca`:: Specifies to generate a new local certificate authority (CA). 
This @@ -214,7 +214,7 @@ parameter cannot be used with the `csr` parameter. `-v, --verbose`:: Shows verbose output. -[float] +[discrete] === Examples The following command generates a CA certificate and private key in PKCS#12 @@ -244,7 +244,7 @@ which you can copy to the relevant configuration directory for each Elastic product that you want to configure. For more information, see <>. -[float] +[discrete] [[certutil-silent]] ==== Using `elasticsearch-certutil` in Silent Mode diff --git a/docs/reference/commands/node-tool.asciidoc b/docs/reference/commands/node-tool.asciidoc index 7ec51908b9df..062173292cf8 100644 --- a/docs/reference/commands/node-tool.asciidoc +++ b/docs/reference/commands/node-tool.asciidoc @@ -7,7 +7,7 @@ allows you to adjust the <> of a node, unsafely edit cluster settings and may be able to recover some data after a disaster or start a node even if it is incompatible with the data on disk. -[float] +[discrete] === Synopsis [source,shell] @@ -17,7 +17,7 @@ bin/elasticsearch-node repurpose|unsafe-bootstrap|detach-cluster|override-versio [-h, --help] ([-s, --silent] | [-v, --verbose]) -------------------------------------------------- -[float] +[discrete] === Description This tool has a number of modes: @@ -51,7 +51,7 @@ This tool has a number of modes: {es}. [[node-tool-repurpose]] -[float] +[discrete] ==== Changing the role of a node There may be situations where you want to repurpose a node without following @@ -83,7 +83,7 @@ The tool provides a summary of the data to be deleted and asks for confirmation before making any changes. You can get detailed information about the affected indices and shards by passing the verbose (`-v`) option. 
-[float] +[discrete] ==== Removing persistent cluster settings There may be situations where a node contains persistent cluster @@ -103,7 +103,7 @@ The intended use is: * Repeat for all other master-eligible nodes * Start the nodes -[float] +[discrete] ==== Removing custom metadata from the cluster state There may be situations where a node contains custom metadata, typically @@ -121,7 +121,7 @@ The intended use is: * Repeat for all other master-eligible nodes * Start the nodes -[float] +[discrete] ==== Recovering data after a disaster Sometimes {es} nodes are temporarily stopped, perhaps because of the need to @@ -161,7 +161,7 @@ way forward that does not risk data loss, but it may be possible to use the data from the failed cluster. [[node-tool-override-version]] -[float] +[discrete] ==== Bypassing version checks The data that {es} writes to disk is designed to be read by the current version @@ -181,7 +181,7 @@ tool to overwrite the version number stored in the data path with the current version, causing {es} to believe that it is compatible with the on-disk data. [[node-tool-unsafe-bootstrap]] -[float] +[discrete] ===== Unsafe cluster bootstrapping If there is at least one remaining master-eligible node, but it is not possible @@ -256,7 +256,7 @@ there has been no data loss, it just means that tool was able to complete its job. [[node-tool-detach-cluster]] -[float] +[discrete] ===== Detaching nodes from their cluster It is unsafe for nodes to move between clusters, because different clusters @@ -321,7 +321,7 @@ that there has been no data loss, it just means that tool was able to complete its job. -[float] +[discrete] === Parameters `repurpose`:: Delete excess data when a node's roles are changed. @@ -346,10 +346,10 @@ from the on-disk cluster state. `-v, --verbose`:: Shows verbose output. 
-[float] +[discrete] === Examples -[float] +[discrete] ==== Repurposing a node as a dedicated master node In this example, a former data node is repurposed as a dedicated master node. @@ -371,7 +371,7 @@ Confirm [y/N] y Node successfully repurposed to master and no-data. ---- -[float] +[discrete] ==== Repurposing a node as a coordinating-only node In this example, a node that previously held data is repurposed as a @@ -394,7 +394,7 @@ Confirm [y/N] y Node successfully repurposed to no-master and no-data. ---- -[float] +[discrete] ==== Removing persistent cluster settings If your nodes contain persistent cluster settings that prevent the cluster @@ -428,7 +428,7 @@ You can also use wildcards to remove multiple settings, for example using node$ ./bin/elasticsearch-node remove-settings xpack.monitoring.* ---- -[float] +[discrete] ==== Removing custom metadata from the cluster state If the on-disk cluster state contains custom metadata that prevents the node @@ -455,7 +455,7 @@ Confirm [y/N] y Customs were successfully removed from the cluster state ---- -[float] +[discrete] ==== Unsafe cluster bootstrapping Suppose your cluster had five master-eligible nodes and you have permanently @@ -531,7 +531,7 @@ Confirm [y/N] y Master node was successfully bootstrapped ---- -[float] +[discrete] ==== Detaching nodes from their cluster After unsafely bootstrapping a new cluster, run the `elasticsearch-node @@ -557,7 +557,7 @@ Confirm [y/N] y Node was successfully detached from the cluster ---- -[float] +[discrete] ==== Bypassing version checks Run the `elasticsearch-node override-version` command to overwrite the version diff --git a/docs/reference/commands/saml-metadata.asciidoc b/docs/reference/commands/saml-metadata.asciidoc index 78db77ea4661..793e7e22c142 100644 --- a/docs/reference/commands/saml-metadata.asciidoc +++ b/docs/reference/commands/saml-metadata.asciidoc @@ -6,7 +6,7 @@ The `elasticsearch-saml-metadata` command can be used to generate a SAML 2.0 Service Provider 
Metadata file. -[float] +[discrete] === Synopsis [source,shell] @@ -23,7 +23,7 @@ bin/elasticsearch-saml-metadata [-h, --help] ([-s, --silent] | [-v, --verbose]) -------------------------------------------------- -[float] +[discrete] === Description The SAML 2.0 specification provides a mechanism for Service Providers to @@ -44,7 +44,7 @@ If your {es} keystore is password protected, you are prompted to enter the password when you run the `elasticsearch-saml-metadata` command. -[float] +[discrete] === Parameters `--attribute `:: Specifies a SAML attribute that should be @@ -107,7 +107,7 @@ realm in your {es} configuration. `-v, --verbose`:: Shows verbose output. -[float] +[discrete] === Examples The following command generates a default metadata file for the `saml1` realm: diff --git a/docs/reference/commands/setup-passwords.asciidoc b/docs/reference/commands/setup-passwords.asciidoc index db13dc535020..9d67038db13f 100644 --- a/docs/reference/commands/setup-passwords.asciidoc +++ b/docs/reference/commands/setup-passwords.asciidoc @@ -6,7 +6,7 @@ The `elasticsearch-setup-passwords` command sets the passwords for the <>. -[float] +[discrete] === Synopsis [source,shell] @@ -16,7 +16,7 @@ bin/elasticsearch-setup-passwords auto|interactive [-s, --silent] [-u, --url ""] [-v, --verbose] -------------------------------------------------- -[float] +[discrete] === Description This command is intended for use only during the initial configuration of the @@ -40,7 +40,7 @@ override settings in your `elasticsearch.yml` file by using the `-E` command option. For more information about debugging connection failures, see <>. -[float] +[discrete] === Parameters `auto`:: Outputs randomly-generated passwords to the console. @@ -63,7 +63,7 @@ you must specify an HTTPS URL. `-v, --verbose`:: Shows verbose output. 
-[float] +[discrete] === Examples The following example uses the `-u` parameter to tell the tool where to submit diff --git a/docs/reference/commands/shard-tool.asciidoc b/docs/reference/commands/shard-tool.asciidoc index 99f33c2f5d34..e2d623cb9b19 100644 --- a/docs/reference/commands/shard-tool.asciidoc +++ b/docs/reference/commands/shard-tool.asciidoc @@ -11,7 +11,7 @@ You will lose the corrupted data when you run `elasticsearch-shard`. This tool should only be used as a last resort if there is no way to recover from another copy of the shard or restore a snapshot. -[float] +[discrete] === Synopsis [source,shell] @@ -23,7 +23,7 @@ bin/elasticsearch-shard remove-corrupted-data [-h, --help] ([-s, --silent] | [-v, --verbose]) -------------------------------------------------- -[float] +[discrete] === Description When {es} detects that a shard's data is corrupted, it fails that shard copy and @@ -44,7 +44,7 @@ There are two ways to specify the path: * Use the `--dir` option to specify the full path to the corrupted index or translog files. -[float] +[discrete] ==== Removing corrupted data `elasticsearch-shard` analyses the shard copy and provides an overview of the diff --git a/docs/reference/commands/syskeygen.asciidoc b/docs/reference/commands/syskeygen.asciidoc index 06d8330a1222..5418d898e849 100644 --- a/docs/reference/commands/syskeygen.asciidoc +++ b/docs/reference/commands/syskeygen.asciidoc @@ -6,7 +6,7 @@ The `elasticsearch-syskeygen` command creates a system key file in the elasticsearch config directory. -[float] +[discrete] === Synopsis [source,shell] @@ -16,7 +16,7 @@ bin/elasticsearch-syskeygen ([-s, --silent] | [-v, --verbose]) -------------------------------------------------- -[float] +[discrete] === Description The command generates a `system_key` file, which you can use to symmetrically @@ -27,7 +27,7 @@ from returning and storing information that contains clear text credentials. 
See IMPORTANT: The system key is a symmetric key, so the same key must be used on every node in the cluster. -[float] +[discrete] === Parameters `-E `:: Configures a setting. For example, if you have a custom @@ -41,7 +41,7 @@ environment variable. `-v, --verbose`:: Shows verbose output. -[float] +[discrete] === Examples The following command generates a `system_key` file in the diff --git a/docs/reference/commands/users-command.asciidoc b/docs/reference/commands/users-command.asciidoc index d359d3b9b4db..2f668e07e0a9 100644 --- a/docs/reference/commands/users-command.asciidoc +++ b/docs/reference/commands/users-command.asciidoc @@ -6,7 +6,7 @@ If you use file-based user authentication, the `elasticsearch-users` command enables you to add and remove users, assign user roles, and manage passwords. -[float] +[discrete] === Synopsis [source,shell] @@ -19,7 +19,7 @@ bin/elasticsearch-users ([userdel ]) -------------------------------------------------- -[float] +[discrete] === Description If you use the built-in `file` internal realm, users are defined in local files @@ -40,7 +40,7 @@ TIP: To ensure that {es} can read the user and role information at startup, run command as root or some other user updates the permissions for the `users` and `users_roles` files and prevents {es} from accessing them. -[float] +[discrete] === Parameters `-a `:: If used with the `roles` parameter, adds a comma-separated list @@ -81,10 +81,10 @@ removing roles within the same command to change a user's roles. //`-v, --verbose`:: Shows verbose output. -//[float] +//[discrete] //=== Authorization -[float] +[discrete] === Examples The following example adds a new user named `jacknich` to the `file` realm. 
The diff --git a/docs/reference/docs/bulk.asciidoc b/docs/reference/docs/bulk.asciidoc index 3aa78abc33e1..510ba19cb33d 100644 --- a/docs/reference/docs/bulk.asciidoc +++ b/docs/reference/docs/bulk.asciidoc @@ -89,7 +89,7 @@ Experiment with different settings to find the optimal size for your particular When using the HTTP API, make sure that the client does not send HTTP chunks, as this will slow things down. -[float] +[discrete] [[bulk-clients]] ===== Client support for bulk requests @@ -116,7 +116,7 @@ JavaScript:: .NET:: See https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/indexing-documents.html#bulkall-observable[`BulkAllObservable`] -[float] +[discrete] [[bulk-curl]] ===== Submitting bulk requests with cURL @@ -135,7 +135,7 @@ $ curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk -- // NOTCONSOLE // Not converting to console because this shows how curl works -[float] +[discrete] [[bulk-optimistic-concurrency-control]] ===== Optimistic Concurrency Control @@ -146,7 +146,7 @@ how operations are executed, based on the last modification to existing documents. See <> for more details. -[float] +[discrete] [[bulk-versioning]] ===== Versioning @@ -155,7 +155,7 @@ Each bulk item can include the version value using the index / delete operation based on the `_version` mapping. It also support the `version_type` (see <>). -[float] +[discrete] [[bulk-routing]] ===== Routing @@ -166,7 +166,7 @@ index / delete operation based on the `_routing` mapping. NOTE: Data streams do not support custom routing. Instead, target the appropriate backing index for the stream. -[float] +[discrete] [[bulk-wait-for-active-shards]] ===== Wait For Active Shards @@ -176,7 +176,7 @@ before starting to process the bulk request. See <> for further details and a usage example. -[float] +[discrete] [[bulk-refresh]] ===== Refresh @@ -190,7 +190,7 @@ with five shards. The request will only wait for those three shards to refresh. 
The other two shards that make up the index do not participate in the `_bulk` request at all. -[float] +[discrete] [[bulk-security]] ===== Security @@ -528,7 +528,7 @@ The API returns the following result: // TESTRESPONSE[s/"_seq_no" : 3/"_seq_no" : $body.items.3.update._seq_no/] // TESTRESPONSE[s/"_primary_term" : 4/"_primary_term" : $body.items.3.update._primary_term/] -[float] +[discrete] [[bulk-update]] ===== Bulk update example diff --git a/docs/reference/docs/data-replication.asciidoc b/docs/reference/docs/data-replication.asciidoc index b4bf8c85cad1..a4ca94763b9a 100644 --- a/docs/reference/docs/data-replication.asciidoc +++ b/docs/reference/docs/data-replication.asciidoc @@ -2,7 +2,7 @@ [[docs-replication]] === Reading and Writing documents -[float] +[discrete] ==== Introduction Each index in Elasticsearch is <> @@ -53,7 +53,7 @@ encompasses the lifetime of each subsequent stage. For example, the coordinating stage, which may be spread out across different primary shards, has completed. Each primary stage will not complete until the in-sync replicas have finished indexing the docs locally and responded to the replica requests. -[float] +[discrete] ===== Failure handling Many things can go wrong during indexing -- disks can get corrupted, nodes can be disconnected from each other, or some @@ -94,7 +94,7 @@ into the primary will not be lost. Of course, since at that point we are running issues can cause data loss. See <> for some mitigation options. ************ -[float] +[discrete] ==== Basic read model Reads in Elasticsearch can be very lightweight lookups by ID or a heavy search request with complex aggregations that @@ -112,7 +112,7 @@ is as follows: . Send shard level read requests to the selected copies. . Combine the results and respond. Note that in the case of get by ID look up, only one shard is relevant and this step can be skipped. 
-[float] +[discrete] [[shard-failures]] ===== Shard failures @@ -132,7 +132,7 @@ Responses containing partial results still provide a `200 OK` HTTP status code. Shard failures are indicated by the `timed_out` and `_shards` fields of the response header. -[float] +[discrete] ==== A few simple implications Each of these basic flows determines how Elasticsearch behaves as a system for both reads and writes. Furthermore, since read @@ -147,7 +147,7 @@ Read unacknowledged:: Since the primary first indexes locally and then replicate Two copies by default:: This model can be fault tolerant while maintaining only two copies of the data. This is in contrast to quorum-based system where the minimum number of copies for fault tolerance is 3. -[float] +[discrete] ==== Failures Under failures, the following is possible: @@ -161,7 +161,7 @@ Dirty reads:: An isolated primary can expose writes that will not be acknowledge At that point the operation is already indexed into the primary and can be read by a concurrent read. Elasticsearch mitigates this risk by pinging the master every second (by default) and rejecting indexing operations if no master is known. -[float] +[discrete] ==== The Tip of the Iceberg This document provides a high level overview of how Elasticsearch deals with data. 
Of course, there is much much more diff --git a/docs/reference/docs/delete-by-query.asciidoc b/docs/reference/docs/delete-by-query.asciidoc index 36b7d313b866..b09a351f502b 100644 --- a/docs/reference/docs/delete-by-query.asciidoc +++ b/docs/reference/docs/delete-by-query.asciidoc @@ -410,7 +410,7 @@ POST twitter/_delete_by_query?scroll_size=5000 -------------------------------------------------- // TEST[setup:twitter] -[float] +[discrete] [[docs-delete-by-query-manual-slice]] ===== Slice manually @@ -482,7 +482,7 @@ Which results in a sensible `total` like this one: } ---------------------------------------------------------------- -[float] +[discrete] [[docs-delete-by-query-automatic-slice]] ===== Use automatic slicing @@ -565,7 +565,7 @@ being deleted. * Each sub-request gets a slightly different snapshot of the source data stream or index though these are all taken at approximately the same time. -[float] +[discrete] [[docs-delete-by-query-rethrottle]] ===== Change throttling for a request @@ -657,7 +657,7 @@ and `wait_for_completion=false` was set on it then it'll come back with you to delete that document. -[float] +[discrete] [[docs-delete-by-query-cancel-task-api]] ===== Cancel a delete by query operation diff --git a/docs/reference/docs/delete.asciidoc b/docs/reference/docs/delete.asciidoc index 12f3059b9ee9..7c410e915448 100644 --- a/docs/reference/docs/delete.asciidoc +++ b/docs/reference/docs/delete.asciidoc @@ -21,7 +21,7 @@ NOTE: You cannot send deletion requests directly to a data stream. To delete a document in a data stream, you must target the backing index containing the document. See <>. -[float] +[discrete] [[optimistic-concurrency-control-delete]] ===== Optimistic concurrency control @@ -31,7 +31,7 @@ term specified by the `if_seq_no` and `if_primary_term` parameters. If a mismatch is detected, the operation will result in a `VersionConflictException` and a status code of 409. See <> for more details. 
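The optimistic concurrency control described in the delete hunk above only proceeds when the caller's `if_seq_no` and `if_primary_term` match the document's current values; otherwise the request fails with a 409. A small simulation of that check (the exception class and document shape are stand-ins, not Elasticsearch APIs):

```python
class VersionConflictError(Exception):
    """Stands in for Elasticsearch's VersionConflictException (HTTP 409)."""

def conditional_delete(doc, if_seq_no, if_primary_term):
    """Sketch of the if_seq_no / if_primary_term guard on a delete.

    The operation only succeeds when the document's current sequence
    number and primary term match the values the caller last observed.
    """
    if doc["_seq_no"] != if_seq_no or doc["_primary_term"] != if_primary_term:
        raise VersionConflictError("409: seq_no/primary_term mismatch")
    return {"result": "deleted"}

doc = {"_seq_no": 5, "_primary_term": 1}
```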
-[float] +[discrete] [[delete-versioning]] ===== Versioning @@ -44,7 +44,7 @@ short time after deletion to allow for control of concurrent operations. The length of time for which a deleted document's version remains available is determined by the `index.gc_deletes` index setting and defaults to 60 seconds. -[float] +[discrete] [[delete-routing]] ===== Routing @@ -80,7 +80,7 @@ DELETE /twitter/_doc/1?routing=kimchy This request deletes the tweet with id `1`, but it is routed based on the user. The document is not deleted if the correct routing is not specified. -[float] +[discrete] [[delete-index-creation]] ===== Automatic index creation @@ -89,7 +89,7 @@ the delete operation automatically creates the specified index if it does not exist. For information about manually creating indices, see <>. -[float] +[discrete] [[delete-distributed]] ===== Distributed @@ -97,7 +97,7 @@ The delete operation gets hashed into a specific shard id. It then gets redirected into the primary shard within that id group, and replicated (if needed) to shard replicas within that id group. -[float] +[discrete] [[delete-wait-for-active-shards]] ===== Wait for active shards @@ -107,14 +107,14 @@ before starting to process the delete request. See <> for further details and a usage example. -[float] +[discrete] [[delete-refresh]] ===== Refresh Control when the changes made by this request are visible to search. See <>. -[float] +[discrete] [[delete-timeout]] ===== Timeout diff --git a/docs/reference/docs/get.asciidoc b/docs/reference/docs/get.asciidoc index b0c813c5c01f..bc6028e222c6 100644 --- a/docs/reference/docs/get.asciidoc +++ b/docs/reference/docs/get.asciidoc @@ -30,7 +30,7 @@ particular index. Use HEAD to verify that a document exists. You can use the `_source` resource retrieve just the document source or verify that it exists. 
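The "Distributed" hunks above say a delete or index operation is hashed into a specific shard id from its routing value (the document ID by default). A rough illustration of the hash-then-modulo shape of that computation, using crc32 as a stand-in for the Murmur3 hash Elasticsearch actually uses:

```python
import zlib

def pick_shard(routing: str, num_primary_shards: int) -> int:
    """Map a routing value to a primary shard number.

    Elasticsearch hashes the routing value with Murmur3; crc32 is used
    here only to show that the same routing value deterministically
    lands on the same shard.
    """
    return zlib.crc32(routing.encode("utf-8")) % num_primary_shards
```

This is why a document deleted with `?routing=kimchy` must also be fetched with the same routing value: a different routing hashes to a different shard.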
-[float] +[discrete] [[realtime]] ===== Realtime @@ -41,7 +41,7 @@ has been updated but is not yet refreshed, the get API will have to parse and analyze the source to extract the stored fields. In order to disable realtime GET, the `realtime` parameter can be set to `false`. -[float] +[discrete] [[get-source-filtering]] ===== Source filtering @@ -75,7 +75,7 @@ GET twitter/_doc/0?_source=*.id,retweeted -------------------------------------------------- // TEST[setup:twitter] -[float] +[discrete] [[get-routing]] ===== Routing @@ -91,7 +91,7 @@ GET twitter/_doc/2?routing=user1 This request gets the tweet with id `2`, but it is routed based on the user. The document is not fetched if the correct routing is not specified. -[float] +[discrete] [[preference]] ===== Preference @@ -112,7 +112,7 @@ Custom (string) value:: states. A sample value can be something like the web session id, or the user name. -[float] +[discrete] [[get-refresh]] ===== Refresh @@ -122,7 +122,7 @@ it to `true` should be done after careful thought and verification that this does not cause a heavy load on the system (and slows down indexing). -[float] +[discrete] [[get-distributed]] ===== Distributed @@ -132,7 +132,7 @@ result. The replicas are the primary shard and its replicas within that shard id group. This means that the more replicas we have, the better GET scaling we will have. -[float] +[discrete] [[get-versioning]] ===== Versioning support @@ -258,7 +258,7 @@ HEAD twitter/_doc/0 {es} returns a status code of `200 - OK` if the document exists, or `404 - Not Found` if it doesn't. 
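The source-filtering hunk above shows `GET twitter/_doc/0?_source=*.id,retweeted`, which returns only the fields matching those patterns. A rough emulation of that include-filtering over flattened dotted field names (a simplification; the real filter preserves the nested object structure):

```python
from fnmatch import fnmatch

def filter_source(source: dict, includes: list) -> dict:
    """Keep only the fields whose dotted path matches an include
    pattern, e.g. `_source=*.id,retweeted`. Returns flattened paths
    for brevity, unlike the real API which keeps nesting.
    """
    def flatten(obj, prefix=""):
        for key, value in obj.items():
            path = f"{prefix}{key}"
            if isinstance(value, dict):
                yield from flatten(value, path + ".")
            else:
                yield path, value
    return {path: value for path, value in flatten(source)
            if any(fnmatch(path, pattern) for pattern in includes)}

doc = {"user": {"id": 7}, "retweeted": False, "message": "hi"}
```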
-[float] +[discrete] [[_source]] ===== Get the source field only @@ -290,7 +290,7 @@ HEAD twitter/_source/1 -------------------------------------------------- // TEST[continued] -[float] +[discrete] [[get-stored-fields]] ===== Get stored fields diff --git a/docs/reference/docs/index_.asciidoc b/docs/reference/docs/index_.asciidoc index 4eaf94b790d7..5deac0d3ad2c 100644 --- a/docs/reference/docs/index_.asciidoc +++ b/docs/reference/docs/index_.asciidoc @@ -219,7 +219,7 @@ the order specified. <3> Allow automatic creation of any index. This is the default. -[float] +[discrete] [[operation-type]] ===== Put if absent @@ -228,7 +228,7 @@ setting the `op_type` parameter to _create_. In this case, the index operation fails if a document with the specified ID already exists in the index. -[float] +[discrete] ===== Create document IDs automatically When using the `POST //_doc/` request format, the `op_type` is @@ -265,7 +265,7 @@ The API returns the following result: -------------------------------------------------- // TESTRESPONSE[s/W0tpsmIBdwcYyG50zbta/$body._id/ s/"successful": 2/"successful": 1/] -[float] +[discrete] [[optimistic-concurrency-control-index]] ===== Optimistic concurrency control @@ -275,7 +275,7 @@ term specified by the `if_seq_no` and `if_primary_term` parameters. If a mismatch is detected, the operation will result in a `VersionConflictException` and a status code of 409. See <> for more details. -[float] +[discrete] [[index-routing]] ===== Routing @@ -307,7 +307,7 @@ value is provided or extracted. NOTE: Data streams do not support custom routing. Instead, target the appropriate backing index for the stream. -[float] +[discrete] [[index-distributed]] ===== Distributed @@ -316,7 +316,7 @@ The index operation is directed to the primary shard based on its route containing this shard. After the primary shard completes the operation, if needed, the update is distributed to applicable replicas. 
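The "Put if absent" hunk above explains that `op_type=create` fails when a document with the given ID already exists, while the default `op_type=index` overwrites. A dictionary-backed sketch of those semantics (the store and exception class are illustrative stand-ins):

```python
class DocumentExistsError(Exception):
    """Stands in for the 409 conflict returned with op_type=create."""

def index_document(store, doc_id, source, op_type="index"):
    """Sketch of op_type semantics: "index" creates or overwrites,
    "create" is put-if-absent and refuses to touch an existing ID.
    """
    if op_type == "create" and doc_id in store:
        raise DocumentExistsError(f"document {doc_id} already exists (409)")
    result = "updated" if doc_id in store else "created"
    store[doc_id] = source
    return {"result": result}
```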
-[float] +[discrete] [[index-wait-for-active-shards]] ===== Active shards @@ -374,14 +374,14 @@ replication succeeded/failed. -------------------------------------------------- // NOTCONSOLE -[float] +[discrete] [[index-refresh]] ===== Refresh Control when the changes made by this request are visible to search. See <>. -[float] +[discrete] [[index-noop]] ===== Noop updates @@ -396,7 +396,7 @@ It's a combination of lots of factors like how frequently your data source sends updates that are actually noops and how many queries per second Elasticsearch runs on the shard receiving the updates. -[float] +[discrete] [[timeout]] ===== Timeout @@ -419,7 +419,7 @@ PUT twitter/_doc/1?timeout=5m } -------------------------------------------------- -[float] +[discrete] [[index-versioning]] ===== Versioning @@ -465,7 +465,7 @@ a database is simplified if external versioning is used, as only the latest version will be used if the index operations arrive out of order for whatever reason. -[float] +[discrete] [[index-version-types]] ===== Version types diff --git a/docs/reference/docs/multi-termvectors.asciidoc b/docs/reference/docs/multi-termvectors.asciidoc index 5d98c3552678..9903cfa777eb 100644 --- a/docs/reference/docs/multi-termvectors.asciidoc +++ b/docs/reference/docs/multi-termvectors.asciidoc @@ -80,7 +80,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=version] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=version_type] -[float] +[discrete] [[docs-multi-termvectors-api-example]] ==== {api-examples-title} diff --git a/docs/reference/docs/refresh.asciidoc b/docs/reference/docs/refresh.asciidoc index 479e6e8cf26e..2bbac2c0b1cc 100644 --- a/docs/reference/docs/refresh.asciidoc +++ b/docs/reference/docs/refresh.asciidoc @@ -29,7 +29,7 @@ to return. Take no refresh related actions. The changes made by this request will be made visible at some point after the request returns. 
-[float] +[discrete] ==== Choosing which setting to use // tag::refresh-default[] Unless you have a good reason to wait for the change to become visible, always @@ -62,7 +62,7 @@ refresh immediately, `refresh=true` will affect other ongoing request. In general, if you have a running system you don't wish to disturb then `refresh=wait_for` is a smaller modification. -[float] +[discrete] [[refresh_wait_for-force-refresh]] ==== `refresh=wait_for` Can Force a Refresh @@ -78,7 +78,7 @@ contain `"forced_refresh": true`. Bulk requests only take up one slot on each shard that they touch no matter how many times they modify the shard. -[float] +[discrete] ==== Examples These will create a document and immediately refresh the index so it is visible: diff --git a/docs/reference/docs/update-by-query.asciidoc b/docs/reference/docs/update-by-query.asciidoc index 80589c16df9b..1bdfa5874035 100644 --- a/docs/reference/docs/update-by-query.asciidoc +++ b/docs/reference/docs/update-by-query.asciidoc @@ -409,7 +409,7 @@ POST twitter/_update_by_query?pipeline=set-foo // TEST[setup:twitter] -[float] +[discrete] [[docs-update-by-query-fetch-tasks]] ===== Get the status of update by query operations @@ -488,7 +488,7 @@ and `wait_for_completion=false` was set on it, then it'll come back with a you to delete that document. -[float] +[discrete] [[docs-update-by-query-cancel-task-api]] ===== Cancel an update by query operation @@ -506,7 +506,7 @@ API above will continue to list the update by query task until this task checks that it has been cancelled and terminates itself. -[float] +[discrete] [[docs-update-by-query-rethrottle]] ===== Change throttling for a request @@ -527,7 +527,7 @@ query takes effect immediately, but rethrotting that slows down the query will take effect after completing the current batch. This prevents scroll timeouts. 
-[float] +[discrete] [[docs-update-by-query-manual-slice]] ===== Slice manually Slice an update by query manually by providing a slice id and total number of @@ -581,7 +581,7 @@ Which results in a sensible `total` like this one: } ---------------------------------------------------------------- -[float] +[discrete] [[docs-update-by-query-automatic-slice]] ===== Use automatic slicing @@ -651,7 +651,7 @@ being updated. * Each sub-request gets a slightly different snapshot of the source data stream or index though these are all taken at approximately the same time. -[float] +[discrete] [[picking-up-a-new-property]] ===== Pick up a new property diff --git a/docs/reference/docs/update.asciidoc b/docs/reference/docs/update.asciidoc index 8f3a8ff0955a..c5f876b3fde3 100644 --- a/docs/reference/docs/update.asciidoc +++ b/docs/reference/docs/update.asciidoc @@ -190,7 +190,7 @@ POST test/_update/1 -------------------------------------------------- // TEST[continued] -[float] +[discrete] ===== Update part of a document The following partial update adds a new field to the @@ -210,7 +210,7 @@ POST test/_update/1 If both `doc` and `script` are specified, then `doc` is ignored. If you specify a scripted update, include the fields you want to update in the script. 
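The partial-update hunk above merges the fields in `doc` into the existing `_source`. A simplified sketch of that merge, assuming nested objects are merged key by key while scalars and arrays are replaced (the real merge has additional rules, e.g. around nulls):

```python
def merge_partial(source, doc):
    """Recursively merge a partial `doc` into `_source`: nested objects
    are merged field by field, while scalar and array values replace
    whatever was there before.
    """
    for key, value in doc.items():
        if isinstance(value, dict) and isinstance(source.get(key), dict):
            merge_partial(source[key], value)
        else:
            source[key] = value
    return source
```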
-[float] +[discrete] ===== Detect noop updates By default updates that don't change anything detect that they don't change @@ -262,7 +262,7 @@ POST test/_update/1 // TEST[continued] [[upserts]] -[float] +[discrete] ===== Upsert If the document does not already exist, the contents of the `upsert` element @@ -287,7 +287,7 @@ POST test/_update/1 -------------------------------------------------- // TEST[continued] -[float] +[discrete] [[scripted_upsert]] ===== Scripted upsert @@ -315,7 +315,7 @@ POST sessions/_update/dh3sgudg8gsrgl // TEST[s/"id": "my_web_session_summariser"/"source": "ctx._source.page_view_event = params.pageViewEvent"/] // TEST[continued] -[float] +[discrete] [[doc_as_upsert]] ===== Doc as upsert diff --git a/docs/reference/eql/index.asciidoc b/docs/reference/eql/index.asciidoc index 0ac43b8caf05..427ac856af7c 100644 --- a/docs/reference/eql/index.asciidoc +++ b/docs/reference/eql/index.asciidoc @@ -15,7 +15,7 @@ You can use EQL in {es} to easily express relationships between events and quickly match events with shared properties. You can use EQL and query DSL together to better filter your searches. -[float] +[discrete] [[eql-advantages]] === Advantages of EQL @@ -32,7 +32,7 @@ While you can use EQL for any event-based data, we created EQL for threat hunting. EQL not only supports indicator of compromise (IOC) searching but makes it easy to describe activity that goes beyond IOCs. 
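The upsert hunks above distinguish three cases: a plain update fails if the document is missing, `upsert` supplies a body to index when it is missing, and `doc_as_upsert` reuses `doc` itself as that body. A dictionary-backed sketch of the behaviour, under those simplifying assumptions:

```python
def update_with_upsert(store, doc_id, doc, upsert=None, doc_as_upsert=False):
    """Sketch of upsert behaviour: when the document is missing, index
    the `upsert` body (or `doc` itself when doc_as_upsert is set);
    otherwise apply `doc` as a partial update.
    """
    if doc_id not in store:
        if doc_as_upsert:
            store[doc_id] = dict(doc)
        elif upsert is not None:
            store[doc_id] = dict(upsert)
        else:
            raise KeyError(f"document {doc_id} does not exist (404)")
        return {"result": "created"}
    store[doc_id].update(doc)
    return {"result": "updated"}
```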
-[float] +[discrete] [[when-to-use-eql]] === When to use EQL @@ -42,7 +42,7 @@ Consider using EQL if you: * Search time-series data or logs, such as network or system logs * Want an easy way to explore relationships between events -[float] +[discrete] [[eql-toc]] === In this section diff --git a/docs/reference/getting-started.asciidoc b/docs/reference/getting-started.asciidoc index 0054c09f45b4..10f055942750 100755 --- a/docs/reference/getting-started.asciidoc +++ b/docs/reference/getting-started.asciidoc @@ -37,7 +37,7 @@ To take {es} for a test drive, you can create a the {ess} or set up a multi-node {es} cluster on your own Linux, macOS, or Windows machine. -[float] +[discrete] [[run-elasticsearch-hosted]] === Run {es} on Elastic Cloud @@ -53,7 +53,7 @@ and verify your email address. Once you've created a deployment, you're ready to <>. -[float] +[discrete] [[run-elasticsearch-local]] === Run {es} locally on Linux, macOS, or Windows @@ -226,7 +226,7 @@ privileges are required to run each API, see <>. {es} responds to each API request with an HTTP status code like `200 OK`. With the exception of `HEAD` requests, it also returns a JSON-encoded response body. -[float] +[discrete] [[gs-other-install]] === Other installation options @@ -312,7 +312,7 @@ and shows the original source fields that were indexed. // TESTRESPONSE[s/"_seq_no" : \d+/"_seq_no" : $body._seq_no/ ] // TESTRESPONSE[s/"_primary_term" : \d+/"_primary_term" : $body._primary_term/] -[float] +[discrete] [[getting-started-batch-processing]] === Indexing documents in bulk diff --git a/docs/reference/graph/explore.asciidoc b/docs/reference/graph/explore.asciidoc index 74661acff94a..8ab9cba15b6e 100644 --- a/docs/reference/graph/explore.asciidoc +++ b/docs/reference/graph/explore.asciidoc @@ -18,12 +18,12 @@ For additional information about working with the explore API, see the Graph {kibana-ref}/graph-troubleshooting.html[Troubleshooting] and {kibana-ref}/graph-limitations.html[Limitations] topics. 
-[float] +[discrete] === Request `POST /_graph/explore` -[float] +[discrete] === Description An initial request to the `_explore` API contains a seed query that identifies @@ -32,7 +32,7 @@ and connections you want to include in the graph. Subsequent `_explore` requests enable you to _spider out_ from one more vertices of interest. You can exclude vertices that have already been returned. -[float] +[discrete] === Request Body [role="child_attributes"] @@ -188,13 +188,13 @@ a maximum number of documents per value for that field. For example: ====== ==== -// [float] +// [discrete] // === Authorization -[float] +[discrete] === Examples -[float] +[discrete] [[basic-search]] ==== Basic exploration @@ -292,7 +292,7 @@ to the other as part of exploration. The `doc_count` value indicates how many documents in the sample set contain this pairing of terms (this is not a global count for all documents in the data stream or index). -[float] +[discrete] [[optional-controls]] ==== Optional controls @@ -372,7 +372,7 @@ the connection is returned for global consideration. <8> Restrict which document are considered as you explore connected terms. 
-[float] +[discrete] [[spider-search]] ==== Spidering operations diff --git a/docs/reference/how-to/disk-usage.asciidoc b/docs/reference/how-to/disk-usage.asciidoc index 5b4a388a860f..be7265692718 100644 --- a/docs/reference/how-to/disk-usage.asciidoc +++ b/docs/reference/how-to/disk-usage.asciidoc @@ -1,7 +1,7 @@ [[tune-for-disk-usage]] == Tune for disk usage -[float] +[discrete] === Disable the features you do not need By default Elasticsearch indexes and adds doc values to most fields so that they @@ -86,7 +86,7 @@ PUT index } -------------------------------------------------- -[float] +[discrete] [[default-dynamic-string-mapping]] === Don't use default dynamic string mappings @@ -121,20 +121,20 @@ PUT index } -------------------------------------------------- -[float] +[discrete] === Watch your shard size Larger shards are going to be more efficient at storing data. To increase the size of your shards, you can decrease the number of primary shards in an index by <> with fewer primary shards, creating fewer indices (e.g. by leveraging the <>), or modifying an existing index using the <>. Keep in mind that large shard sizes come with drawbacks, such as long full recovery times. -[float] +[discrete] [[disable-source]] === Disable `_source` The <> field stores the original JSON body of the document. If you don’t need access to it you can disable it. However, APIs that needs access to `_source` such as update and reindex won’t work. -[float] +[discrete] [[best-compression]] === Use `best_compression` @@ -142,19 +142,19 @@ The `_source` and stored fields can easily take a non negligible amount of disk space. They can be compressed more aggressively by using the `best_compression` <>. -[float] +[discrete] === Force Merge Indices in Elasticsearch are stored in one or more shards. Each shard is a Lucene index and made up of one or more segments - the actual files on disk. Larger segments are more efficient for storing data. 
The <> can be used to reduce the number of segments per shard. In many cases, the number of segments can be reduced to one per shard by setting `max_num_segments=1`. -[float] +[discrete] === Shrink Index The <> allows you to reduce the number of shards in an index. Together with the Force Merge API above, this can significantly reduce the number of shards and segments of an index. -[float] +[discrete] === Use the smallest numeric type that is sufficient The type that you pick for <> can have a significant impact @@ -164,7 +164,7 @@ stored in a `scaled_float` if appropriate or in the smallest type that fits the use-case: using `float` over `double`, or `half_float` over `float` will help save storage. -[float] +[discrete] === Use index sorting to colocate similar documents When Elasticsearch stores `_source`, it compresses multiple documents at once @@ -178,7 +178,7 @@ to the index. If you enabled <> then instead they are compressed in sorted order. Sorting documents with similar structure, fields, and values together should improve the compression ratio. -[float] +[discrete] === Put fields in the same order in documents Due to the fact that multiple documents are compressed together into blocks, diff --git a/docs/reference/how-to/general.asciidoc b/docs/reference/how-to/general.asciidoc index 4921d7dcb270..ac6760ee8d2a 100644 --- a/docs/reference/how-to/general.asciidoc +++ b/docs/reference/how-to/general.asciidoc @@ -1,7 +1,7 @@ [[general-recommendations]] == General recommendations -[float] +[discrete] [[large-size]] === Don't return large result sets @@ -11,7 +11,7 @@ for workloads that fall into the database domain, such as retrieving all documents that match a particular query. If you need to do this, make sure to use the <> API. 
-[float] +[discrete] [[maximum-document-size]] === Avoid large documents diff --git a/docs/reference/how-to/indexing-speed.asciidoc b/docs/reference/how-to/indexing-speed.asciidoc index 8fac808eb76f..8da7bb199fd9 100644 --- a/docs/reference/how-to/indexing-speed.asciidoc +++ b/docs/reference/how-to/indexing-speed.asciidoc @@ -1,7 +1,7 @@ [[tune-for-indexing-speed]] == Tune for indexing speed -[float] +[discrete] === Use bulk requests Bulk requests will yield much better performance than single-document index @@ -16,7 +16,7 @@ cluster under memory pressure when many of them are sent concurrently, so it is advisable to avoid going beyond a couple tens of megabytes per request even if larger requests seem to perform better. -[float] +[discrete] [[multiple-workers-threads]] === Use multiple workers/threads to send data to Elasticsearch @@ -36,7 +36,7 @@ Similarly to sizing bulk requests, only testing can tell what the optimal number of workers is. This can be tested by progressively increasing the number of workers until either I/O or CPU is saturated on the cluster. -[float] +[discrete] === Unset or increase the refresh interval The operation that consists of making changes visible to search - called a @@ -57,7 +57,7 @@ gets indexed and when it becomes visible, increasing the <> to a larger value, e.g. `30s`, might help improve indexing speed. -[float] +[discrete] === Disable replicas for initial loads If you have a large amount of data that you want to load all at once into @@ -71,20 +71,20 @@ If `index.refresh_interval` is configured in the index settings, it may further help to unset it during this initial load and setting it back to its original value once the initial load is finished. -[float] +[discrete] === Disable swapping You should make sure that the operating system is not swapping out the java process by <>. -[float] +[discrete] === Give memory to the filesystem cache The filesystem cache will be used in order to buffer I/O operations. 
You should make sure to give at least half the memory of the machine running Elasticsearch to the filesystem cache. -[float] +[discrete] === Use auto-generated ids When indexing a document that has an explicit id, Elasticsearch needs to check @@ -93,7 +93,7 @@ is a costly operation and gets even more costly as the index grows. By using auto-generated ids, Elasticsearch can skip this check, which makes indexing faster. -[float] +[discrete] === Use faster hardware If indexing is I/O bound, you should investigate giving more memory to the @@ -115,7 +115,7 @@ different nodes so there's redundancy for any node failures. You can also use <> to backup the index for further insurance. -[float] +[discrete] === Indexing buffer size If your node is doing only heavy indexing, be sure @@ -131,7 +131,7 @@ The default is `10%` which is often plenty: for example, if you give the JVM 10GB of memory, it will give 1GB to the index buffer, which is enough to host two shards that are heavily indexing. -[float] +[discrete] === Use {ccr} to prevent searching from stealing resources from indexing Within a single cluster, indexing and searching can compete for resources. By @@ -140,7 +140,7 @@ one cluster to the other one, and routing all searches to the cluster that has the follower indices, search activity will no longer steal resources from indexing on the cluster that hosts the leader indices. -[float] +[discrete] === Additional optimizations Many of the strategies outlined in <> also diff --git a/docs/reference/how-to/recipes/scoring.asciidoc b/docs/reference/how-to/recipes/scoring.asciidoc index 9996a5b193f7..d7657879444c 100644 --- a/docs/reference/how-to/recipes/scoring.asciidoc +++ b/docs/reference/how-to/recipes/scoring.asciidoc @@ -4,7 +4,7 @@ The fact that Elasticsearch operates with shards and replicas adds challenges when it comes to having good scoring. 
-[float] +[discrete] ==== Scores are not reproducible Say the same user runs the same request twice in a row and documents do not come @@ -39,7 +39,7 @@ they will be sorted by their internal Lucene doc id (which is unrelated to the the same shard. So by always hitting the same shard, we would get more consistent ordering of documents that have the same scores. -[float] +[discrete] ==== Relevancy looks wrong If you notice that two documents with the same content get different scores or diff --git a/docs/reference/how-to/search-speed.asciidoc b/docs/reference/how-to/search-speed.asciidoc index 708c323e881c..d37869047bea 100644 --- a/docs/reference/how-to/search-speed.asciidoc +++ b/docs/reference/how-to/search-speed.asciidoc @@ -1,7 +1,7 @@ [[tune-for-search-speed]] == Tune for search speed -[float] +[discrete] === Give memory to the filesystem cache Elasticsearch heavily relies on the filesystem cache in order to make search @@ -9,7 +9,7 @@ fast. In general, you should make sure that at least half the available memory goes to the filesystem cache so that Elasticsearch can keep hot regions of the index in physical memory. -[float] +[discrete] === Use faster hardware If your search is I/O bound, you should investigate giving more memory to the @@ -25,7 +25,7 @@ throttled. If your search is CPU-bound, you should investigate buying faster CPUs. -[float] +[discrete] === Document modeling Documents should be modeled so that search-time operations are as cheap as possible. @@ -35,7 +35,7 @@ several times slower and <> relations can make queries hundreds of times slower. So if the same questions can be answered without joins by denormalizing documents, significant speedups can be expected. -[float] +[discrete] === Search as few fields as possible The more fields a <> or @@ -70,7 +70,7 @@ PUT movies } -------------------------------------------------- -[float] +[discrete] === Pre-index data You should leverage patterns in your queries to optimize the way data is indexed. 
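The "Pre-index data" advice above amounts to computing search-time values at index time. For example, instead of running a `range` aggregation over a numeric `price` field on every search, a coarse bucket can be stored in a `keyword` field and aggregated with a cheap `terms` aggregation. A sketch of that index-time enrichment (the bucket edges are illustrative):

```python
def price_range(price: float) -> str:
    """Precompute a coarse price bucket at index time so searches can
    use a cheap terms aggregation on a keyword field instead of a
    range aggregation. Bucket edges here are illustrative only.
    """
    if price < 10:
        return "0-10"
    if price < 100:
        return "10-100"
    return "100+"

# Enrich the document before indexing it.
doc = {"designation": "spoon", "price": 13, "price_range": price_range(13)}
```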
@@ -155,13 +155,13 @@ GET index/_search -------------------------------------------------- // TEST[continued] -[float] +[discrete] [[map-ids-as-keyword]] === Consider mapping identifiers as `keyword` include::../mapping/types/numeric.asciidoc[tag=map-ids-as-keyword] -[float] +[discrete] === Avoid scripts If possible, avoid using <> or @@ -169,7 +169,7 @@ If possible, avoid using <> or <>. -[float] +[discrete] === Search rounded dates Queries on date fields that use `now` are typically not cacheable since the @@ -284,7 +284,7 @@ However such practice might make the query run slower in some cases since the overhead introduced by the `bool` query may defeat the savings from better leveraging the query cache. -[float] +[discrete] === Force-merge read-only indices Indices that are read-only may benefit from being <> can be useful in order to make conjunctions faster at the cost of slightly slower indexing. Read more about it in the <>. -[float] +[discrete] [[preference-cache-optimization]] === Use `preference` to optimize cache utilization @@ -364,7 +364,7 @@ one after another, for instance in order to analyze a narrower subset of the index, using a preference value that identifies the current user or session could help optimize usage of the caches. -[float] +[discrete] === Replicas might help with throughput, but not always In addition to improving resiliency, replicas can help improve throughput. For diff --git a/docs/reference/index-modules.asciidoc b/docs/reference/index-modules.asciidoc index 6373706d6b87..73a70122d087 100644 --- a/docs/reference/index-modules.asciidoc +++ b/docs/reference/index-modules.asciidoc @@ -8,7 +8,7 @@ Index Modules are modules created per index and control all aspects related to an index. 
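The "Search rounded dates" advice above exists because queries containing a raw `now` are never identical from one request to the next, so the query cache cannot reuse them; rounding (`now/m` in date math) makes requests arriving within the same minute byte-identical. A sketch of the rounding, on epoch milliseconds:

```python
def round_to_minute(epoch_millis: int) -> int:
    """Round a timestamp down to the minute, mirroring `now/m` date
    math: requests that share the rounded value produce identical
    query bodies and can therefore hit the query cache.
    """
    return epoch_millis - epoch_millis % 60_000
```

Two requests a few seconds apart round to the same value, at the cost of minute-level precision on the range boundary.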
-[float] +[discrete] [[index-modules-settings]] == Index Settings @@ -31,7 +31,7 @@ WARNING: Changing static or dynamic index settings on a closed index could result in incorrect settings that are impossible to rectify without deleting and recreating the index. -[float] +[discrete] === Static index settings Below is a list of all _static_ index settings that are not associated with any @@ -88,7 +88,7 @@ indices. per request through the use of the `expand_wildcards` parameter. Possible values are `true` and `false` (default). -[float] +[discrete] [[dynamic-index-settings]] === Dynamic index settings @@ -238,7 +238,7 @@ specific index module: the default pipeline (if it exists). The special pipeline name `_none` indicates no ingest pipeline will run. -[float] +[discrete] === Settings in other index modules Other index settings are available in index modules: @@ -285,7 +285,7 @@ Other index settings are available in index modules: Configure indexing back pressure limits. -[float] +[discrete] [[x-pack-index-settings]] === [xpack]#{xpack} index settings# diff --git a/docs/reference/index-modules/allocation/filtering.asciidoc b/docs/reference/index-modules/allocation/filtering.asciidoc index 12ae0e64ebaa..02103b7cc5fb 100644 --- a/docs/reference/index-modules/allocation/filtering.asciidoc +++ b/docs/reference/index-modules/allocation/filtering.asciidoc @@ -21,7 +21,7 @@ For example, you could use a custom node attribute to indicate a node's performance characteristics and use shard allocation filtering to route shards for a particular index to the most appropriate class of hardware. 
-[float] +[discrete] [[index-allocation-filters]] ==== Enabling index-level shard allocation filtering @@ -74,7 +74,7 @@ PUT test/_settings // TEST[s/^/PUT test\n/] -- -[float] +[discrete] [[index-allocation-settings]] ==== Index allocation filter settings diff --git a/docs/reference/index-modules/history-retention.asciidoc b/docs/reference/index-modules/history-retention.asciidoc index fb4aa26ab9b0..ecee0bd2dff9 100644 --- a/docs/reference/index-modules/history-retention.asciidoc +++ b/docs/reference/index-modules/history-retention.asciidoc @@ -49,7 +49,7 @@ index since it can no longer simply replay the missing history. The expiry time of a retention lease defaults to `12h` which should be long enough for most reasonable recovery scenarios. -[float] +[discrete] === History retention settings `index.soft_deletes.enabled`:: diff --git a/docs/reference/index-modules/index-sorting.asciidoc b/docs/reference/index-modules/index-sorting.asciidoc index ea1bc787b83a..6cedcbee4f01 100644 --- a/docs/reference/index-modules/index-sorting.asciidoc +++ b/docs/reference/index-modules/index-sorting.asciidoc @@ -100,7 +100,7 @@ a sort on an existing index. Index sorting also has a cost in terms of indexing documents must be sorted at flush and merge time. You should test the impact on your application before activating this feature. -[float] +[discrete] [[early-terminate]] === Early termination of search request diff --git a/docs/reference/index-modules/merge.asciidoc b/docs/reference/index-modules/merge.asciidoc index 38f40853db38..3a262b0678e4 100644 --- a/docs/reference/index-modules/merge.asciidoc +++ b/docs/reference/index-modules/merge.asciidoc @@ -10,7 +10,7 @@ deletes. The merge process uses auto-throttling to balance the use of hardware resources between merging and other activities like search. 
-[float] +[discrete] [[merge-scheduling]] === Merge scheduling diff --git a/docs/reference/index-modules/similarity.asciidoc b/docs/reference/index-modules/similarity.asciidoc index eb0de4ead20a..c3f0b83adf54 100644 --- a/docs/reference/index-modules/similarity.asciidoc +++ b/docs/reference/index-modules/similarity.asciidoc @@ -9,7 +9,7 @@ Configuring a custom similarity is considered an expert feature and the builtin similarities are most likely sufficient as is described in <>. -[float] +[discrete] [[configuration]] === Configuring a similarity @@ -52,10 +52,10 @@ PUT /index/_mapping -------------------------------------------------- // TEST[continued] -[float] +[discrete] === Available similarities -[float] +[discrete] [[bm25]] ==== BM25 similarity (*default*) @@ -80,7 +80,7 @@ This similarity has the following options: Type name: `BM25` -[float] +[discrete] [[dfr]] ==== DFR similarity @@ -110,7 +110,7 @@ All options but the first option need a normalization value. Type name: `DFR` -[float] +[discrete] [[dfi]] ==== DFI similarity @@ -130,7 +130,7 @@ frequency will get a score equal to 0. Type name: `DFI` -[float] +[discrete] [[ib]] ==== IB similarity. @@ -151,7 +151,7 @@ This similarity has the following options: Type name: `IB` -[float] +[discrete] [[lm_dirichlet]] ==== LM Dirichlet similarity. @@ -167,7 +167,7 @@ Lucene, so such terms get a score of 0. Type name: `LMDirichlet` -[float] +[discrete] [[lm_jelinek_mercer]] ==== LM Jelinek Mercer similarity. @@ -180,7 +180,7 @@ for title queries and `0.7` for long queries. Default to `0.1`. 
When value appro Type name: `LMJelinekMercer` -[float] +[discrete] [[scripted_similarity]] ==== Scripted similarity @@ -506,7 +506,7 @@ GET /index/_search?explain=true Type name: `scripted` -[float] +[discrete] [[default-base]] ==== Default Similarity diff --git a/docs/reference/index-modules/slowlog.asciidoc b/docs/reference/index-modules/slowlog.asciidoc index 5fe197ccdcec..4efd23e59f5d 100644 --- a/docs/reference/index-modules/slowlog.asciidoc +++ b/docs/reference/index-modules/slowlog.asciidoc @@ -1,7 +1,7 @@ [[index-modules-slowlog]] == Slow Log -[float] +[discrete] [[search-slow-log]] === Search Slow Log @@ -55,7 +55,7 @@ level. The search slow log file is configured in the `log4j2.properties` file. -[float] +[discrete] ==== Identifying search slow log origin It is often useful to identify what triggered a slow running query. If a call was initiated with an `X-Opaque-ID` header, then the user ID @@ -85,7 +85,7 @@ is included in Search Slow logs as an additional **id** field --------------------------- // NOTCONSOLE -[float] +[discrete] [[index-slow-log]] === Index Slow log diff --git a/docs/reference/index-modules/store.asciidoc b/docs/reference/index-modules/store.asciidoc index 2f028a5b381c..75fd183927e1 100644 --- a/docs/reference/index-modules/store.asciidoc +++ b/docs/reference/index-modules/store.asciidoc @@ -7,7 +7,7 @@ NOTE: This is a low-level setting. Some store implementations have poor concurrency or disable optimizations for heap memory usage. We recommend sticking to the defaults. -[float] +[discrete] [[file-system]] === File system storage types diff --git a/docs/reference/index-modules/translog.asciidoc b/docs/reference/index-modules/translog.asciidoc index 73a309647185..52631bc0956b 100644 --- a/docs/reference/index-modules/translog.asciidoc +++ b/docs/reference/index-modules/translog.asciidoc @@ -22,7 +22,7 @@ would make replaying its operations take a considerable amount of time during recovery. 
The ability to perform a flush manually is also exposed through an API, although this is rarely needed. -[float] +[discrete] === Translog settings The data in the translog is only persisted to disk when the translog is diff --git a/docs/reference/indices.asciidoc b/docs/reference/indices.asciidoc index bd04049d97a4..20d0b1b756d4 100644 --- a/docs/reference/indices.asciidoc +++ b/docs/reference/indices.asciidoc @@ -4,7 +4,7 @@ Index APIs are used to manage individual indices, index settings, aliases, mappings, and index templates. -[float] +[discrete] [[index-management]] === Index management: @@ -23,7 +23,7 @@ index settings, aliases, mappings, and index templates. * <> -[float] +[discrete] [[mapping-management]] === Mapping management: @@ -31,7 +31,7 @@ index settings, aliases, mappings, and index templates. * <> * <> -[float] +[discrete] [[alias-management]] === Alias management: * <> @@ -40,14 +40,14 @@ index settings, aliases, mappings, and index templates. * <> * <> -[float] +[discrete] [[index-settings]] === Index settings: * <> * <> * <> -[float] +[discrete] [[index-templates]] === Index templates: * <> @@ -60,7 +60,7 @@ index settings, aliases, mappings, and index templates. * <> * <> -[float] +[discrete] [[monitoring]] === Monitoring: * <> @@ -68,7 +68,7 @@ index settings, aliases, mappings, and index templates. * <> * <> -[float] +[discrete] [[status-management]] === Status management: * <> @@ -76,7 +76,7 @@ index settings, aliases, mappings, and index templates. * <> * <> -[float] +[discrete] [[dangling-indices-api]] === Dangling indices: * <> diff --git a/docs/reference/ingest/enrich.asciidoc b/docs/reference/ingest/enrich.asciidoc index 2d655ba1052f..8d7381a59cb3 100644 --- a/docs/reference/ingest/enrich.asciidoc +++ b/docs/reference/ingest/enrich.asciidoc @@ -118,7 +118,7 @@ The enrich processor works best with reference data that doesn't change frequently. 
==== -[float] +[discrete] [[enrich-prereqs]] ==== Prerequisites diff --git a/docs/reference/ingest/ingest-node.asciidoc b/docs/reference/ingest/ingest-node.asciidoc index 92e1ea4fb849..283a62cd47f3 100644 --- a/docs/reference/ingest/ingest-node.asciidoc +++ b/docs/reference/ingest/ingest-node.asciidoc @@ -26,7 +26,7 @@ order. The processors in a pipeline have read and write access to documents that pass through the pipeline. The processors can access fields in the source of a document and the document's metadata fields. -[float] +[discrete] [[accessing-source-fields]] === Accessing Fields in the Source Accessing a field in the source is straightforward. You simply refer to fields by @@ -56,7 +56,7 @@ On top of this, fields from the source are always accessible via the `_source` p -------------------------------------------------- // NOTCONSOLE -[float] +[discrete] [[accessing-metadata-fields]] === Accessing Metadata Fields You can access metadata fields in the same way that you access fields in the source. This @@ -78,7 +78,7 @@ The following example sets the `_id` metadata field of a document to `1`: The following metadata fields are accessible by a processor: `_index`, `_id`, `_routing`. -[float] +[discrete] [[accessing-ingest-metadata]] === Accessing Ingest Metadata Fields Beyond metadata fields and source fields, ingest also adds ingest metadata to the documents that it processes. @@ -106,7 +106,7 @@ Unlike Elasticsearch metadata fields, the ingest metadata field name `_ingest` c in the source of a document. Use `_source._ingest` to refer to the field in the source document. Otherwise, `_ingest` will be interpreted as an ingest metadata field. -[float] +[discrete] [[accessing-template-fields]] === Accessing Fields and Metafields in Templates A number of processor settings also support templating. Settings that support templating can have zero or more @@ -747,7 +747,7 @@ continues to execute, which in this case means that the pipeline does nothing. 
The `ignore_failure` can be set on any processor and defaults to `false`. -[float] +[discrete] [[accessing-error-metadata]] === Accessing Error Metadata From Processors Handling Exceptions @@ -849,7 +849,7 @@ A node will not start if this plugin is not available. The <> can be used to fetch ingest usage statistics, globally and on a per pipeline basis. Useful to find out which pipelines are used the most or spent the most time on preprocessing. -[float] +[discrete] === Ingest Processor Plugins Additional ingest processors can be implemented and installed as Elasticsearch {plugins}/intro.html[plugins]. diff --git a/docs/reference/intro.asciidoc b/docs/reference/intro.asciidoc index 5ce480a330ae..0e0ee67379d1 100644 --- a/docs/reference/intro.asciidoc +++ b/docs/reference/intro.asciidoc @@ -102,7 +102,7 @@ https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} client] for your language of choice: Java, JavaScript, Go, .NET, PHP, Perl, Python or Ruby. -[float] +[discrete] [[search-data]] ==== Searching your data @@ -127,7 +127,7 @@ construct <> to search and aggregate data natively inside {es}, and JDBC and ODBC drivers enable a broad range of third-party applications to interact with {es} via SQL. -[float] +[discrete] [[analyze-data]] ==== Analyzing your data @@ -159,7 +159,7 @@ size 70 needles, you’re displaying a count of the size 70 needles that match your users' search criteria--for example, all size 70 _non-stick embroidery_ needles. -[float] +[discrete] [[more-features]] ===== But wait, there’s more @@ -206,7 +206,7 @@ The number of primary shards in an index is fixed at the time that an index is created, but the number of replica shards can be changed at any time, without interrupting indexing or query operations. -[float] +[discrete] [[it-depends]] ==== It depends... 
@@ -234,7 +234,7 @@ The best way to determine the optimal configuration for your use case is through https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[ testing with your own data and queries]. -[float] +[discrete] [[disaster-ccr]] ==== In case of disaster @@ -254,7 +254,7 @@ create secondary clusters to serve read requests in geo-proximity to your users. the active leader index and handles all write requests. Indices replicated to secondary clusters are read-only followers. -[float] +[discrete] [[admin]] ==== Care and feeding diff --git a/docs/reference/licensing/delete-license.asciidoc b/docs/reference/licensing/delete-license.asciidoc index 447f79d9574d..04c095110cc6 100644 --- a/docs/reference/licensing/delete-license.asciidoc +++ b/docs/reference/licensing/delete-license.asciidoc @@ -8,26 +8,26 @@ This API enables you to delete licensing information. -[float] +[discrete] ==== Request `DELETE /_license` -[float] +[discrete] ==== Description When your license expires, {xpack} operates in a degraded mode. For more information, see {kibana-ref}/managing-licenses.html#license-expiration[License expiration]. -[float] +[discrete] ==== Authorization You must have `manage` cluster privileges to use this API. For more information, see <>. -[float] +[discrete] ==== Examples The following example queries the info API: diff --git a/docs/reference/licensing/get-basic-status.asciidoc b/docs/reference/licensing/get-basic-status.asciidoc index 7ad601023ba7..ea5e3466845e 100644 --- a/docs/reference/licensing/get-basic-status.asciidoc +++ b/docs/reference/licensing/get-basic-status.asciidoc @@ -8,12 +8,12 @@ This API enables you to check the status of your basic license. -[float] +[discrete] ==== Request `GET /_license/basic_status` -[float] +[discrete] ==== Description In order to initiate a basic license, you must not currently have a basic @@ -27,7 +27,7 @@ https://www.elastic.co/subscriptions. You must have `monitor` cluster privileges to use this API. 
For more information, see <>. -[float] +[discrete] ==== Examples The following example checks whether you are eligible to start a basic: diff --git a/docs/reference/licensing/get-license.asciidoc b/docs/reference/licensing/get-license.asciidoc index 34f12ef51440..96c27edbfe2f 100644 --- a/docs/reference/licensing/get-license.asciidoc +++ b/docs/reference/licensing/get-license.asciidoc @@ -8,12 +8,12 @@ This API enables you to retrieve licensing information. -[float] +[discrete] ==== Request `GET /_license` -[float] +[discrete] ==== Description This API returns information about the type of license, when it was issued, and @@ -23,7 +23,7 @@ For more information about the different types of licenses, see https://www.elastic.co/subscriptions. -[float] +[discrete] ==== Query Parameters `local`:: @@ -31,14 +31,14 @@ https://www.elastic.co/subscriptions. is `false`, which means the information is retrieved from the master node. -[float] +[discrete] ==== Authorization You must have `monitor` cluster privileges to use this API. For more information, see <>. -[float] +[discrete] ==== Examples The following example provides information about a trial license: diff --git a/docs/reference/licensing/get-trial-status.asciidoc b/docs/reference/licensing/get-trial-status.asciidoc index 8517c313a92e..d68365762bb5 100644 --- a/docs/reference/licensing/get-trial-status.asciidoc +++ b/docs/reference/licensing/get-trial-status.asciidoc @@ -8,12 +8,12 @@ Enables you to check the status of your trial. -[float] +[discrete] ==== Request `GET /_license/trial_status` -[float] +[discrete] ==== Description If you want to try all the subscription features, you can start a 30-day trial. @@ -32,7 +32,7 @@ You must have `monitor` cluster privileges to use this API. For more information, see <>. 
-[float] +[discrete] ==== Examples The following example checks whether you are eligible to start a trial: diff --git a/docs/reference/licensing/start-basic.asciidoc b/docs/reference/licensing/start-basic.asciidoc index 8dbc1425b0b2..199e917a2921 100644 --- a/docs/reference/licensing/start-basic.asciidoc +++ b/docs/reference/licensing/start-basic.asciidoc @@ -8,12 +8,12 @@ This API starts an indefinite basic license. -[float] +[discrete] ==== Request `POST /_license/start_basic` -[float] +[discrete] ==== Description The `start basic` API enables you to initiate an indefinite basic license, which @@ -34,7 +34,7 @@ You must have `manage` cluster privileges to use this API. For more information, see <>. -[float] +[discrete] ==== Examples The following example starts a basic license if you do not currently have a license: diff --git a/docs/reference/licensing/start-trial.asciidoc b/docs/reference/licensing/start-trial.asciidoc index 4401feb3062b..ef3bd93410b3 100644 --- a/docs/reference/licensing/start-trial.asciidoc +++ b/docs/reference/licensing/start-trial.asciidoc @@ -8,12 +8,12 @@ Starts a 30-day trial. -[float] +[discrete] ==== Request `POST /_license/start_trial` -[float] +[discrete] ==== Description The `start trial` API enables you to start a 30-day trial, which gives access to @@ -35,7 +35,7 @@ You must have `manage` cluster privileges to use this API. For more information, see <>. -[float] +[discrete] ==== Examples The following example starts a 30-day trial. The acknowledge parameter is diff --git a/docs/reference/mapping.asciidoc b/docs/reference/mapping.asciidoc index 5d4116396839..0d6fef94384d 100644 --- a/docs/reference/mapping.asciidoc +++ b/docs/reference/mapping.asciidoc @@ -30,7 +30,7 @@ document. NOTE: Before 7.0.0, the 'mappings' definition used to include a type name. For more details, please see <>. -[float] +[discrete] [[field-datatypes]] == Field data types @@ -55,7 +55,7 @@ This is the purpose of _multi-fields_. 
Most data types support multi-fields via the <> parameter. [[mapping-limit-settings]] -[float] +[discrete] === Settings to prevent mappings explosion Defining too many fields in an index can lead to a @@ -115,7 +115,7 @@ If your field mappings contain a large, arbitrary set of keys, consider using th unless a user starts to add a huge number of fields with really long names. Default is `Long.MAX_VALUE` (no limit). -[float] +[discrete] == Dynamic mapping Fields and mapping types do not need to be defined before being used. Thanks @@ -126,7 +126,7 @@ type, and to inner <> and <> fields. The <> rules can be configured to customise the mapping that is used for new fields. -[float] +[discrete] == Explicit mappings You know more about your data than Elasticsearch can guess, so while dynamic @@ -136,7 +136,7 @@ your own explicit mappings. You can create field mappings when you <> and <>. -[float] +[discrete] [[create-mapping]] == Create an index with an explicit mapping @@ -161,7 +161,7 @@ PUT /my-index <2> Creates `email`, a <> field <3> Creates `name`, a <> field -[float] +[discrete] [[add-field-mapping]] == Add a field to an existing mapping @@ -186,7 +186,7 @@ PUT /my-index/_mapping ---- // TEST[continued] -[float] +[discrete] [[update-mapping]] === Update the mapping of a field @@ -194,7 +194,7 @@ include::{es-repo-dir}/indices/put-mapping.asciidoc[tag=change-field-mapping] include::{es-repo-dir}/indices/put-mapping.asciidoc[tag=rename-field] -[float] +[discrete] [[view-mapping]] == View the mapping of an index @@ -235,7 +235,7 @@ The API returns the following response: ---- -[float] +[discrete] [[view-field-mapping]] == View the mapping of specific fields diff --git a/docs/reference/mapping/fields.asciidoc b/docs/reference/mapping/fields.asciidoc index 0ea4b77441c4..ee48f7720f80 100644 --- a/docs/reference/mapping/fields.asciidoc +++ b/docs/reference/mapping/fields.asciidoc @@ -5,7 +5,7 @@ Each document has metadata associated with it, such as the `_index`, mapping 
<>, and `_id` meta-fields. The behaviour of some of these meta-fields can be customised when a mapping type is created. -[float] +[discrete] === Identity meta-fields [horizontal] @@ -21,7 +21,7 @@ can be customised when a mapping type is created. The document's ID. -[float] +[discrete] === Document source meta-fields <>:: @@ -33,7 +33,7 @@ can be customised when a mapping type is created. The size of the `_source` field in bytes, provided by the {plugins}/mapper-size.html[`mapper-size` plugin]. -[float] +[discrete] === Indexing meta-fields <>:: @@ -45,14 +45,14 @@ can be customised when a mapping type is created. All fields in the document that have been ignored at index time because of <>. -[float] +[discrete] === Routing meta-field <>:: A custom routing value which routes a document to a particular shard. -[float] +[discrete] === Other meta-field <>:: diff --git a/docs/reference/mapping/types.asciidoc b/docs/reference/mapping/types.asciidoc index 5c1671cb30dd..7745bb9131ad 100644 --- a/docs/reference/mapping/types.asciidoc +++ b/docs/reference/mapping/types.asciidoc @@ -4,7 +4,7 @@ Elasticsearch supports a number of different data types for the fields in a document: -[float] +[discrete] [[_core_datatypes]] === Core data types @@ -16,12 +16,12 @@ string:: <>, <> and <>:: `binary` <>:: `integer_range`, `float_range`, `long_range`, `double_range`, `date_range`, `ip_range` -[float] +[discrete] === Complex data types <>:: `object` for single JSON objects <>:: `nested` for arrays of JSON objects -[float] +[discrete] === Spatial data types <>:: `geo_point` for lat/lon points @@ -29,7 +29,7 @@ string:: <>, <> and <>:: `point` for arbitrary cartesian points. <>:: `shape` for arbitrary cartesian geometries. -[float] +[discrete] === Specialised data types <>:: `ip` for IPv4 and IPv6 addresses @@ -60,14 +60,14 @@ string:: <>, <> and <>:: Specialization of `keyword` for the case when all documents have the same value. 
-[float] +[discrete] [[types-array-handling]] === Arrays In {es}, arrays do not require a dedicated field data type. Any field can contain zero or more values by default, however, all values in the array must be of the same data type. See <>. -[float] +[discrete] === Multi-fields It is often useful to index the same field in different ways for different diff --git a/docs/reference/mapping/types/geo-shape.asciidoc b/docs/reference/mapping/types/geo-shape.asciidoc index 6eab4efa9444..b7b49ae2aabc 100644 --- a/docs/reference/mapping/types/geo-shape.asciidoc +++ b/docs/reference/mapping/types/geo-shape.asciidoc @@ -13,7 +13,7 @@ You can query documents using this type using <>. [[geo-shape-mapping-options]] -[float] +[discrete] ==== Mapping Options The geo_shape mapping maps geo_json geometry objects to the geo_shape @@ -118,7 +118,7 @@ and reject the whole document. [[geoshape-indexing-approach]] -[float] +[discrete] ==== Indexing approach GeoShape types are indexed by decomposing the shape into a triangular mesh and indexing each triangle as a 7 dimension point in a BKD tree. This provides @@ -140,7 +140,7 @@ ElasticSearch 7.5.0 or higher. [[prefix-trees]] -[float] +[discrete] ==== Prefix trees deprecated[6.6, PrefixTrees no longer used] To efficiently represent shapes in @@ -171,7 +171,7 @@ represents 2 bits in this bit set, one for each coordinate. The maximum number of levels for the quad trees in Elasticsearch is 29; the default is 21. [[spatial-strategy]] -[float] +[discrete] ===== Spatial strategies deprecated[6.6, PrefixTrees no longer used] The indexing implementation selected relies on a SpatialStrategy for choosing how to decompose the shapes @@ -194,7 +194,7 @@ are provided: |======================================================================= -[float] +[discrete] ===== Accuracy `Recursive` and `Term` strategies do not provide 100% accuracy and depending on @@ -205,7 +205,7 @@ parameter and to adjust expectations accordingly. 
For example, a point may be ne the border of a particular grid cell and may thus not match a query that only matches the cell right next to it -- even though the shape is very close to the point. -[float] +[discrete] ===== Example [source,console] @@ -227,7 +227,7 @@ This mapping definition maps the location field to the geo_shape type using the default vector implementation. It provides approximately 1e-7 decimal degree precision. -[float] +[discrete] ===== Performance considerations with Prefix Trees deprecated[6.6, PrefixTrees no longer used] With prefix trees, @@ -251,7 +251,7 @@ Geo-shape queries on geo-shapes implemented with PrefixTrees will not be execute <> is set to false. [[input-structure]] -[float] +[discrete] ==== Input Structure Shapes can be represented using either the http://www.geojson.org[GeoJSON] @@ -293,7 +293,7 @@ use the colloquial latitude, longitude (Y, X). ============================================= [[geo-point-type]] -[float] +[discrete] ===== http://geojson.org/geojson-spec.html#id2[Point] A point is a single geographic coordinate, such as the location of a @@ -321,7 +321,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] [[geo-linestring]] ===== http://geojson.org/geojson-spec.html#id3[LineString] @@ -354,7 +354,7 @@ POST /example/_doc The above `linestring` would draw a straight line starting at the White House to the US Capitol Building. 
-[float] +[discrete] [[geo-polygon]] ===== http://www.geojson.org/geojson-spec.html#id4[Polygon] @@ -465,7 +465,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] [[geo-multipoint]] ===== http://www.geojson.org/geojson-spec.html#id5[MultiPoint] @@ -494,7 +494,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] [[geo-multilinestring]] ===== http://www.geojson.org/geojson-spec.html#id6[MultiLineString] @@ -525,7 +525,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] [[geo-multipolygon]] ===== http://www.geojson.org/geojson-spec.html#id7[MultiPolygon] @@ -556,7 +556,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] [[geo-geometry_collection]] ===== http://geojson.org/geojson-spec.html#geometrycollection[Geometry Collection] @@ -593,7 +593,7 @@ POST /example/_doc -------------------------------------------------- -[float] +[discrete] ===== Envelope Elasticsearch supports an `envelope` type, which consists of coordinates @@ -623,7 +623,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] ===== Circle Elasticsearch supports a `circle` type, which consists of a center @@ -650,7 +650,7 @@ the units of the `radius` will default to `METERS`. *NOTE:* Neither GeoJSON or WKT support a point-radius circle type. -[float] +[discrete] ==== Sorting and Retrieving index Shapes Due to the complex input structure and index representation of shapes, diff --git a/docs/reference/mapping/types/nested.asciidoc b/docs/reference/mapping/types/nested.asciidoc index c6d55d2ce7ed..2f4ca64f754b 100644 --- a/docs/reference/mapping/types/nested.asciidoc +++ b/docs/reference/mapping/types/nested.asciidoc @@ -211,7 +211,7 @@ as standard (flat) fields. Defaults to `false`. 
If `true`, all fields in the nested object are also added to the root document as standard (flat) fields. Defaults to `false`. -[float] +[discrete] === Limits on `nested` mappings and objects As described earlier, each nested object is indexed as a separate Lucene document. diff --git a/docs/reference/mapping/types/percolator.asciidoc b/docs/reference/mapping/types/percolator.asciidoc index e25c4d29ae4a..a9dabadac717 100644 --- a/docs/reference/mapping/types/percolator.asciidoc +++ b/docs/reference/mapping/types/percolator.asciidoc @@ -57,7 +57,7 @@ add or update a mapping via the <> or < Percolator query hit is now being presented from the new index. -[float] +[discrete] ==== Optimizing query time text analysis When the percolator verifies a percolator candidate match it is going to parse, perform query time text analysis and actually run @@ -411,7 +411,7 @@ This results in a response like this: -------------------------------------------------- // TESTRESPONSE[s/"took": 6,/"took": "$body.took",/] -[float] +[discrete] ==== Optimizing wildcard queries. Wildcard queries are more expensive than other queries for the percolator, @@ -680,7 +680,7 @@ GET /my_queries2/_search -------------------------------------------------- // TEST[continued] -[float] +[discrete] ==== Dedicated Percolator Index Percolate queries can be added to any index. Instead of adding percolate queries to the index the data resides in, @@ -689,7 +689,7 @@ can have its own index settings (For example the number of primary and replica s percolate index, you need to make sure that the mappings from the normal index are also available on the percolate index. Otherwise percolate queries can be parsed incorrectly. -[float] +[discrete] ==== Forcing Unmapped Fields to be Handled as Strings In certain cases it is unknown what kind of percolator queries do get registered, and if no field mapping exists for fields @@ -700,17 +700,17 @@ if all unmapped fields are handled as if these were default text fields. 
In thos a percolator query does not exist, it will be handled as a default text field so that adding the percolator query doesn't fail. -[float] +[discrete] ==== Limitations -[float] +[discrete] [[parent-child]] ===== Parent/child Because the `percolate` query is processing one document at a time, it doesn't support queries and filters that run against child documents such as `has_child` and `has_parent`. -[float] +[discrete] ===== Fetching queries There are a number of queries that fetch data via a get call during query parsing. For example the `terms` query when @@ -721,7 +721,7 @@ is that fetching of terms that these queries do, happens both each time the perc and replica shards, so the terms that are actually indexed can be different between shard copies, if the source index changed while indexing. -[float] +[discrete] ===== Script query The script inside a `script` query can only access doc values fields. The `percolate` query indexes the provided document @@ -729,7 +729,7 @@ into an in-memory index. This in-memory index doesn't support stored fields and other stored fields are not stored. This is the reason why in the `script` query the `_source` and other stored fields aren't available. -[float] +[discrete] ===== Field aliases Percolator queries that contain <> may not always behave as expected. In particular, if a diff --git a/docs/reference/mapping/types/shape.asciidoc b/docs/reference/mapping/types/shape.asciidoc index dd034e3d98bd..475b3917dff3 100644 --- a/docs/reference/mapping/types/shape.asciidoc +++ b/docs/reference/mapping/types/shape.asciidoc @@ -15,7 +15,7 @@ You can query documents using this type using <>. [[shape-mapping-options]] -[float] +[discrete] ==== Mapping Options Like the <> field type, the `shape` field mapping maps @@ -56,7 +56,7 @@ and reject the whole document. 
|======================================================================= [[shape-indexing-approach]] -[float] +[discrete] ==== Indexing approach Like `geo_shape`, the `shape` field type is indexed by decomposing geometries into a triangular mesh and indexing each triangle as a 7 dimension point in a BKD tree. @@ -70,7 +70,7 @@ depends on the number of vertices that define the geometry. `CONTAINS` relation query - `shape` queries with `relation` defined as `contains` are supported for indices created with ElasticSearch 7.5.0 or higher. -[float] +[discrete] ===== Example [source,console] @@ -93,7 +93,7 @@ precision floats for the vertex values so accuracy is guaranteed to the same pre `float` values provided by the java virtual machine approximately (typically 1E-38). [[shape-input-structure]] -[float] +[discrete] ==== Input Structure Shapes can be represented using either the http://www.geojson.org[GeoJSON] @@ -130,7 +130,7 @@ typically use the colloquial latitude, longitude (Y, X) ordering. ============================================= [[point-shape]] -[float] +[discrete] ===== http://geojson.org/geojson-spec.html#id2[Point] A point is a single coordinate in cartesian `x, y` space. 
It may represent the @@ -158,7 +158,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] [[linestring]] ===== http://geojson.org/geojson-spec.html#id3[LineString] @@ -188,7 +188,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] [[polygon]] ===== http://www.geojson.org/geojson-spec.html#id4[Polygon] @@ -275,7 +275,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] [[multipoint]] ===== http://www.geojson.org/geojson-spec.html#id5[MultiPoint] @@ -304,7 +304,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] [[multilinestring]] ===== http://www.geojson.org/geojson-spec.html#id6[MultiLineString] @@ -335,7 +335,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] [[multipolygon]] ===== http://www.geojson.org/geojson-spec.html#id7[MultiPolygon] @@ -366,7 +366,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] [[geometry_collection]] ===== http://geojson.org/geojson-spec.html#geometrycollection[Geometry Collection] @@ -402,7 +402,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] ===== Envelope Elasticsearch supports an `envelope` type, which consists of coordinates @@ -432,7 +432,7 @@ POST /example/_doc } -------------------------------------------------- -[float] +[discrete] ==== Sorting and Retrieving index Shapes Due to the complex input structure and index representation of shapes, diff --git a/docs/reference/migration/migrate_8_0/aggregations.asciidoc b/docs/reference/migration/migrate_8_0/aggregations.asciidoc index 6f3a21777d86..8cac942c95ab 100644 --- a/docs/reference/migration/migrate_8_0/aggregations.asciidoc +++ b/docs/reference/migration/migrate_8_0/aggregations.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] 
[[breaking_80_aggregations_changes]] === Aggregations changes diff --git a/docs/reference/migration/migrate_8_0/allocation.asciidoc b/docs/reference/migration/migrate_8_0/allocation.asciidoc index 317ae3c4d112..b673813eba85 100644 --- a/docs/reference/migration/migrate_8_0/allocation.asciidoc +++ b/docs/reference/migration/migrate_8_0/allocation.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_allocation_changes]] === Allocation changes diff --git a/docs/reference/migration/migrate_8_0/analysis.asciidoc b/docs/reference/migration/migrate_8_0/analysis.asciidoc index 2c166d7f0cff..cef024d19052 100644 --- a/docs/reference/migration/migrate_8_0/analysis.asciidoc +++ b/docs/reference/migration/migrate_8_0/analysis.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_analysis_changes]] === Analysis changes diff --git a/docs/reference/migration/migrate_8_0/api.asciidoc b/docs/reference/migration/migrate_8_0/api.asciidoc index a037affed0d0..d8dc474ac873 100644 --- a/docs/reference/migration/migrate_8_0/api.asciidoc +++ b/docs/reference/migration/migrate_8_0/api.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_api_changes]] === REST API changes diff --git a/docs/reference/migration/migrate_8_0/breaker.asciidoc b/docs/reference/migration/migrate_8_0/breaker.asciidoc index 0e416978a58d..a28fb3952175 100644 --- a/docs/reference/migration/migrate_8_0/breaker.asciidoc +++ b/docs/reference/migration/migrate_8_0/breaker.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_breaker_changes]] === Circuit breaker changes diff --git a/docs/reference/migration/migrate_8_0/cluster.asciidoc b/docs/reference/migration/migrate_8_0/cluster.asciidoc index 1e1bb4de0f76..b71e65ca04f1 100644 --- a/docs/reference/migration/migrate_8_0/cluster.asciidoc +++ b/docs/reference/migration/migrate_8_0/cluster.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_cluster_changes]] === Cluster changes diff --git 
a/docs/reference/migration/migrate_8_0/discovery.asciidoc b/docs/reference/migration/migrate_8_0/discovery.asciidoc index af982a868b3f..360b0f044294 100644 --- a/docs/reference/migration/migrate_8_0/discovery.asciidoc +++ b/docs/reference/migration/migrate_8_0/discovery.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_discovery_changes]] === Discovery changes diff --git a/docs/reference/migration/migrate_8_0/http.asciidoc b/docs/reference/migration/migrate_8_0/http.asciidoc index b25ca262b60f..db8562a44806 100644 --- a/docs/reference/migration/migrate_8_0/http.asciidoc +++ b/docs/reference/migration/migrate_8_0/http.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_http_changes]] === HTTP changes diff --git a/docs/reference/migration/migrate_8_0/ilm.asciidoc b/docs/reference/migration/migrate_8_0/ilm.asciidoc index 3f740960a093..6ce26986c36a 100644 --- a/docs/reference/migration/migrate_8_0/ilm.asciidoc +++ b/docs/reference/migration/migrate_8_0/ilm.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_ilm_changes]] === {ilm-cap} changes diff --git a/docs/reference/migration/migrate_8_0/indices.asciidoc b/docs/reference/migration/migrate_8_0/indices.asciidoc index 3dd8943a3462..bc102ac22074 100644 --- a/docs/reference/migration/migrate_8_0/indices.asciidoc +++ b/docs/reference/migration/migrate_8_0/indices.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_indices_changes]] === Indices changes diff --git a/docs/reference/migration/migrate_8_0/java.asciidoc b/docs/reference/migration/migrate_8_0/java.asciidoc index 417fc2f07d57..19c0f2f6ce70 100644 --- a/docs/reference/migration/migrate_8_0/java.asciidoc +++ b/docs/reference/migration/migrate_8_0/java.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_java_changes]] === Java API changes diff --git a/docs/reference/migration/migrate_8_0/mappings.asciidoc b/docs/reference/migration/migrate_8_0/mappings.asciidoc index 7d738b4e85e8..9adecfe6c413 100644 --- 
a/docs/reference/migration/migrate_8_0/mappings.asciidoc +++ b/docs/reference/migration/migrate_8_0/mappings.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_mappings_changes]] === Mapping changes diff --git a/docs/reference/migration/migrate_8_0/network.asciidoc b/docs/reference/migration/migrate_8_0/network.asciidoc index 2ba2d7a75e29..37bd58e3f783 100644 --- a/docs/reference/migration/migrate_8_0/network.asciidoc +++ b/docs/reference/migration/migrate_8_0/network.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_network_changes]] === Network changes diff --git a/docs/reference/migration/migrate_8_0/node.asciidoc b/docs/reference/migration/migrate_8_0/node.asciidoc index bd9421060064..da6c56ab4be8 100644 --- a/docs/reference/migration/migrate_8_0/node.asciidoc +++ b/docs/reference/migration/migrate_8_0/node.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_node_changes]] === Node changes diff --git a/docs/reference/migration/migrate_8_0/packaging.asciidoc b/docs/reference/migration/migrate_8_0/packaging.asciidoc index 3c0676090a76..a3f7e6444678 100644 --- a/docs/reference/migration/migrate_8_0/packaging.asciidoc +++ b/docs/reference/migration/migrate_8_0/packaging.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_packaging_changes]] === Packaging changes diff --git a/docs/reference/migration/migrate_8_0/reindex.asciidoc b/docs/reference/migration/migrate_8_0/reindex.asciidoc index 8c5d54fcf6de..b424c8465e20 100644 --- a/docs/reference/migration/migrate_8_0/reindex.asciidoc +++ b/docs/reference/migration/migrate_8_0/reindex.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_reindex_changes]] === Reindex changes diff --git a/docs/reference/migration/migrate_8_0/rollup.asciidoc b/docs/reference/migration/migrate_8_0/rollup.asciidoc index 91e3d3029e66..0a2ea0875b2e 100644 --- a/docs/reference/migration/migrate_8_0/rollup.asciidoc +++ b/docs/reference/migration/migrate_8_0/rollup.asciidoc @@ -1,4 +1,4 @@ -[float] 
+[discrete] [[breaking_80_rollup_changes]] === Rollup changes diff --git a/docs/reference/migration/migrate_8_0/search.asciidoc b/docs/reference/migration/migrate_8_0/search.asciidoc index 7a7865575f6c..ce58ca6d245e 100644 --- a/docs/reference/migration/migrate_8_0/search.asciidoc +++ b/docs/reference/migration/migrate_8_0/search.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_search_changes]] === Search Changes diff --git a/docs/reference/migration/migrate_8_0/security.asciidoc b/docs/reference/migration/migrate_8_0/security.asciidoc index 2686793b0ec8..e81f0dab5ded 100644 --- a/docs/reference/migration/migrate_8_0/security.asciidoc +++ b/docs/reference/migration/migrate_8_0/security.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_security_changes]] === Security changes diff --git a/docs/reference/migration/migrate_8_0/settings.asciidoc b/docs/reference/migration/migrate_8_0/settings.asciidoc index 802b39daa174..737c3f3eccba 100644 --- a/docs/reference/migration/migrate_8_0/settings.asciidoc +++ b/docs/reference/migration/migrate_8_0/settings.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_settings_changes]] === Settings changes diff --git a/docs/reference/migration/migrate_8_0/snapshots.asciidoc b/docs/reference/migration/migrate_8_0/snapshots.asciidoc index 33af7d471336..13563be91ef2 100644 --- a/docs/reference/migration/migrate_8_0/snapshots.asciidoc +++ b/docs/reference/migration/migrate_8_0/snapshots.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_snapshots_changes]] === Snapshot and restore changes diff --git a/docs/reference/migration/migrate_8_0/threadpool.asciidoc b/docs/reference/migration/migrate_8_0/threadpool.asciidoc index 675315fd3be6..4dc59911396f 100644 --- a/docs/reference/migration/migrate_8_0/threadpool.asciidoc +++ b/docs/reference/migration/migrate_8_0/threadpool.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_threadpool_changes]] === Thread pool changes diff --git 
a/docs/reference/migration/migrate_8_0/transport.asciidoc b/docs/reference/migration/migrate_8_0/transport.asciidoc index 9f2a19243ae8..2ef1c3527977 100644 --- a/docs/reference/migration/migrate_8_0/transport.asciidoc +++ b/docs/reference/migration/migrate_8_0/transport.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[breaking_80_transport_changes]] === Transport changes diff --git a/docs/reference/ml/anomaly-detection/functions/ml-count-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-count-functions.asciidoc index 97602f5a1f53..f06eb9146695 100644 --- a/docs/reference/ml/anomaly-detection/functions/ml-count-functions.asciidoc +++ b/docs/reference/ml/anomaly-detection/functions/ml-count-functions.asciidoc @@ -20,7 +20,7 @@ The {ml-features} include the following count functions: * xref:ml-nonzero-count[`non_zero_count`, `high_non_zero_count`, `low_non_zero_count`] * xref:ml-distinct-count[`distinct_count`, `high_distinct_count`, `low_distinct_count`] -[float] +[discrete] [[ml-count]] == Count, high_count, low_count @@ -143,7 +143,7 @@ function (for example, `sum(events_per_min)`). Instead, use the count function and the `summary_count_field_name` property. For more information, see <>. -[float] +[discrete] [[ml-nonzero-count]] == Non_zero_count, high_non_zero_count, low_non_zero_count @@ -213,7 +213,7 @@ supported for the `non_zero_count`, `high_non_zero_count`, and data is sparse, use the `count` functions, which are optimized for that scenario. 
-[float] +[discrete] [[ml-distinct-count]] == Distinct_count, high_distinct_count, low_distinct_count diff --git a/docs/reference/ml/anomaly-detection/functions/ml-geo-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-geo-functions.asciidoc index b0a77a5f04d7..31ba8121302c 100644 --- a/docs/reference/ml/anomaly-detection/functions/ml-geo-functions.asciidoc +++ b/docs/reference/ml/anomaly-detection/functions/ml-geo-functions.asciidoc @@ -11,7 +11,7 @@ NOTE: You cannot create forecasts for {anomaly-jobs} that contain geographic functions. You also cannot add rules with conditions to detectors that use geographic functions. -[float] +[discrete] [[ml-lat-long]] == Lat_long diff --git a/docs/reference/ml/anomaly-detection/functions/ml-info-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-info-functions.asciidoc index cf365525b39c..ea1014288555 100644 --- a/docs/reference/ml/anomaly-detection/functions/ml-info-functions.asciidoc +++ b/docs/reference/ml/anomaly-detection/functions/ml-info-functions.asciidoc @@ -10,7 +10,7 @@ The {ml-features} include the following information content functions: * `info_content`, `high_info_content`, `low_info_content` -[float] +[discrete] [[ml-info-content]] == Info_content, High_info_content, Low_info_content diff --git a/docs/reference/ml/anomaly-detection/functions/ml-metric-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-metric-functions.asciidoc index e21d480e395d..5091db15173e 100644 --- a/docs/reference/ml/anomaly-detection/functions/ml-metric-functions.asciidoc +++ b/docs/reference/ml/anomaly-detection/functions/ml-metric-functions.asciidoc @@ -18,7 +18,7 @@ The {ml-features} include the following metric functions: NOTE: You cannot add rules with conditions to detectors that use the `metric` function. -[float] +[discrete] [[ml-metric-min]] == Min @@ -53,7 +53,7 @@ where the smallest transaction is lower than previously observed. 
You can use this function to detect items for sale at unintentionally low prices due to data entry mistakes. It models the minimum amount for each product over time. -[float] +[discrete] [[ml-metric-max]] == Max @@ -111,7 +111,7 @@ functions by application. By combining detectors and using the same influencer this job can detect both unusually long individual response times and average response times for each bucket. -[float] +[discrete] [[ml-metric-median]] == Median, high_median, low_median @@ -149,7 +149,7 @@ If you use this `median` function in a detector in your {anomaly-job}, it models the median `responsetime` for each application over time. It detects when the median `responsetime` is unusual compared to previous `responsetime` values. -[float] +[discrete] [[ml-metric-mean]] == Mean, high_mean, low_mean @@ -219,7 +219,7 @@ models the mean `responsetime` for each application over time. It detects when the mean `responsetime` is unusually low compared to previous `responsetime` values. -[float] +[discrete] [[ml-metric-metric]] == Metric @@ -256,7 +256,7 @@ the mean, min, and max `responsetime` for each application over time. It detects when the mean, min, or max `responsetime` is unusual compared to previous `responsetime` values. -[float] +[discrete] [[ml-metric-varp]] == Varp, high_varp, low_varp diff --git a/docs/reference/ml/anomaly-detection/functions/ml-rare-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-rare-functions.asciidoc index e7ed9a69dc9d..f0a788698a3e 100644 --- a/docs/reference/ml/anomaly-detection/functions/ml-rare-functions.asciidoc +++ b/docs/reference/ml/anomaly-detection/functions/ml-rare-functions.asciidoc @@ -33,7 +33,7 @@ The {ml-features} include the following rare functions: * <> -[float] +[discrete] [[ml-rare]] == Rare @@ -91,7 +91,7 @@ of distinct status codes that occur, not the number of times the status code occurs. 
If a single client IP experiences a single unique status code, this is rare, even if it occurs for that client IP in every bucket. -[float] +[discrete] [[ml-freq-rare]] == Freq_rare diff --git a/docs/reference/ml/anomaly-detection/functions/ml-sum-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-sum-functions.asciidoc index e84116aa2005..ec0d30365d66 100644 --- a/docs/reference/ml/anomaly-detection/functions/ml-sum-functions.asciidoc +++ b/docs/reference/ml/anomaly-detection/functions/ml-sum-functions.asciidoc @@ -17,7 +17,7 @@ The {ml-features} include the following sum functions: * xref:ml-sum[`sum`, `high_sum`, `low_sum`] * xref:ml-nonnull-sum[`non_null_sum`, `high_non_null_sum`, `low_non_null_sum`] -[float] +[discrete] [[ml-sum]] == Sum, high_sum, low_sum @@ -73,7 +73,7 @@ transferred from a client to a server on the internet that are unusual compared to other clients. This scenario could be useful to detect data exfiltration or to find users that are abusing internet privileges. -[float] +[discrete] [[ml-nonnull-sum]] == Non_null_sum, high_non_null_sum, low_non_null_sum diff --git a/docs/reference/ml/anomaly-detection/functions/ml-time-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-time-functions.asciidoc index deeddaefa049..997566e4856d 100644 --- a/docs/reference/ml/anomaly-detection/functions/ml-time-functions.asciidoc +++ b/docs/reference/ml/anomaly-detection/functions/ml-time-functions.asciidoc @@ -35,7 +35,7 @@ measured against a UTC baseline) has changed. This situation is treated as a step change in behavior and the new times will be learned quickly. ==== -[float] +[discrete] [[ml-time-of-day]] == Time_of_day @@ -71,7 +71,7 @@ models when events occur throughout a day for each process. It detects when an event occurs for a process that is at an unusual time in the day compared to its past behavior. 
-[float] +[discrete] [[ml-time-of-week]] == Time_of_week diff --git a/docs/reference/ml/anomaly-detection/ml-configuring-url.asciidoc b/docs/reference/ml/anomaly-detection/ml-configuring-url.asciidoc index abd8ba80498c..e5bf4a400c1a 100644 --- a/docs/reference/ml/anomaly-detection/ml-configuring-url.asciidoc +++ b/docs/reference/ml/anomaly-detection/ml-configuring-url.asciidoc @@ -47,7 +47,7 @@ results to the time period two hours before and after the anomaly. You can also specify these custom URL settings when you create or update {anomaly-jobs} by using the APIs. -[float] +[discrete] [[ml-configuring-url-strings]] == String substitution in custom URLs diff --git a/docs/reference/modules/cross-cluster-search.asciidoc b/docs/reference/modules/cross-cluster-search.asciidoc index f79a62a4fcf1..8806a0588417 100644 --- a/docs/reference/modules/cross-cluster-search.asciidoc +++ b/docs/reference/modules/cross-cluster-search.asciidoc @@ -7,7 +7,7 @@ filter and analyze log data stored on clusters in different data centers. IMPORTANT: {ccs-cap} requires <>. -[float] +[discrete] [[ccs-supported-apis]] === Supported APIs @@ -18,11 +18,11 @@ The following APIs support {ccs}: * <> * <> -[float] +[discrete] [[ccs-example]] === {ccs-cap} examples -[float] +[discrete] [[ccs-remote-cluster-setup]] ==== Remote cluster setup @@ -61,7 +61,7 @@ PUT _cluster/settings // TEST[setup:host] // TEST[s/127.0.0.1:930\d+/\${transport_host}/] -[float] +[discrete] [[ccs-search-remote-cluster]] ==== Search a single remote cluster @@ -129,7 +129,7 @@ The API returns the following response: <1> The search response body includes the name of the remote cluster in the `_index` parameter. -[float] +[discrete] [[ccs-search-multi-remote-cluster]] ==== Search multiple remote clusters @@ -225,7 +225,7 @@ means the document came from the local cluster. <2> This document came from `cluster_one`. <3> This document came from `cluster_two`. 
-[float] +[discrete] [[skip-unavailable-clusters]] === Skip unavailable clusters diff --git a/docs/reference/modules/discovery/bootstrapping.asciidoc b/docs/reference/modules/discovery/bootstrapping.asciidoc index 7d1244de2568..15d637564c12 100644 --- a/docs/reference/modules/discovery/bootstrapping.asciidoc +++ b/docs/reference/modules/discovery/bootstrapping.asciidoc @@ -94,7 +94,7 @@ match exactly. ================================================== -[float] +[discrete] ==== Choosing a cluster name The <> setting enables you to create multiple @@ -104,7 +104,7 @@ will only form a cluster from nodes that all have the same cluster name. The default value for the cluster name is `elasticsearch`, but it is recommended to change this to reflect the logical name of the cluster. -[float] +[discrete] ==== Auto-bootstrapping in development mode If the cluster is running with a completely default configuration then it will diff --git a/docs/reference/modules/discovery/discovery-settings.asciidoc b/docs/reference/modules/discovery/discovery-settings.asciidoc index c0e56c348356..159bd7019804 100644 --- a/docs/reference/modules/discovery/discovery-settings.asciidoc +++ b/docs/reference/modules/discovery/discovery-settings.asciidoc @@ -46,7 +46,7 @@ for `discovery.seed_hosts` is `["127.0.0.1", "[::1]"]`. See <>. default this list is empty, meaning that this node expects to join a cluster that has already been bootstrapped. See <>. -[float] +[discrete] ==== Expert settings Discovery and cluster formation are also affected by the following diff --git a/docs/reference/modules/discovery/discovery.asciidoc b/docs/reference/modules/discovery/discovery.asciidoc index 9b303d7f06d6..0c5057486def 100644 --- a/docs/reference/modules/discovery/discovery.asciidoc +++ b/docs/reference/modules/discovery/discovery.asciidoc @@ -52,7 +52,7 @@ the timeout for each lookup is controlled by `discovery.seed_resolver.timeout` which defaults to `5s`. Note that DNS lookups are subject to <>. 
-[float] +[discrete] [[settings-based-hosts-provider]] ===== Settings-based seed hosts provider @@ -76,7 +76,7 @@ discovery.seed_hosts: <2> If a hostname resolves to multiple IP addresses, {es} will attempt to connect to every resolved address. -[float] +[discrete] [[file-based-hosts-provider]] ===== File-based seed hosts provider @@ -132,7 +132,7 @@ needed, coming after the brackets. You can also add comments to this file. All comments must appear on their lines starting with `#` (i.e. comments cannot start in the middle of a line). -[float] +[discrete] [[ec2-hosts-provider]] ===== EC2 hosts provider @@ -140,14 +140,14 @@ The {plugins}/discovery-ec2.html[EC2 discovery plugin] adds a hosts provider that uses the https://github.com/aws/aws-sdk-java[AWS API] to find a list of seed nodes. -[float] +[discrete] [[azure-classic-hosts-provider]] ===== Azure Classic hosts provider The {plugins}/discovery-azure-classic.html[Azure Classic discovery plugin] adds a hosts provider that uses the Azure Classic API to find a list of seed nodes. -[float] +[discrete] [[gce-hosts-provider]] ===== Google Compute Engine hosts provider diff --git a/docs/reference/modules/discovery/quorums.asciidoc b/docs/reference/modules/discovery/quorums.asciidoc index 1a1954454268..5cf9438544c6 100644 --- a/docs/reference/modules/discovery/quorums.asciidoc +++ b/docs/reference/modules/discovery/quorums.asciidoc @@ -38,7 +38,7 @@ cluster-state update that adjusts the voting configuration to match, and this can take a short time to complete. It is important to wait for this adjustment to complete before removing more nodes from the cluster. -[float] +[discrete] ==== Master elections Elasticsearch uses an election process to agree on an elected master node, both @@ -52,7 +52,7 @@ election will succeed (with arbitrarily high probability). The scheduling of master elections is controlled by the <>.
-[float] +[discrete] ==== Cluster maintenance, rolling restarts and migrations Many cluster maintenance tasks involve temporarily shutting down one or more diff --git a/docs/reference/modules/discovery/voting.asciidoc b/docs/reference/modules/discovery/voting.asciidoc index 888620e331d6..8a318591ce09 100644 --- a/docs/reference/modules/discovery/voting.asciidoc +++ b/docs/reference/modules/discovery/voting.asciidoc @@ -73,7 +73,7 @@ setting affects only its availability in the event of the failure of some of its nodes and the administrative tasks that must be performed as nodes join and leave the cluster. -[float] +[discrete] ==== Even numbers of master-eligible nodes There should normally be an odd number of master-eligible nodes in a cluster. @@ -98,7 +98,7 @@ node, but quorum-based decisions require votes from two of the three voting nodes. In the event of an even split, one half will contain two of the three voting nodes so that half will remain available. -[float] +[discrete] ==== Setting the initial voting configuration When a brand-new cluster starts up for the first time, it must elect its first diff --git a/docs/reference/modules/indices/circuit_breaker.asciidoc b/docs/reference/modules/indices/circuit_breaker.asciidoc index 406c193f2a64..92ad22b38d3a 100644 --- a/docs/reference/modules/indices/circuit_breaker.asciidoc +++ b/docs/reference/modules/indices/circuit_breaker.asciidoc @@ -9,7 +9,7 @@ live cluster with the <> API. // end::circuit-breaker-description-tag[] [[parent-circuit-breaker]] -[float] +[discrete] ==== Parent circuit breaker The parent-level breaker can be configured with the following settings: @@ -30,7 +30,7 @@ The parent-level breaker can be configured with the following settings: // end::indices-breaker-total-limit-tag[] [[fielddata-circuit-breaker]] -[float] +[discrete] ==== Field data circuit breaker The field data circuit breaker allows Elasticsearch to estimate the amount of memory a field will require to be loaded into memory. 
It can then prevent the @@ -54,7 +54,7 @@ parameters: // end::fielddata-circuit-breaker-overhead-tag[] [[request-circuit-breaker]] -[float] +[discrete] ==== Request circuit breaker The request circuit breaker allows Elasticsearch to prevent per-request data @@ -77,7 +77,7 @@ request) from exceeding a certain amount of memory. // end::request-breaker-overhead-tag[] [[in-flight-circuit-breaker]] -[float] +[discrete] ==== In flight requests circuit breaker The in flight requests circuit breaker allows Elasticsearch to limit the memory usage of all @@ -97,7 +97,7 @@ also as a structured object which is reflected by default overhead. final estimation. Defaults to 2. [[accounting-circuit-breaker]] -[float] +[discrete] ==== Accounting requests circuit breaker The accounting circuit breaker allows Elasticsearch to limit the memory @@ -115,7 +115,7 @@ completed. This includes things like the Lucene segment memory. final estimation. Defaults to 1 [[script-compilation-circuit-breaker]] -[float] +[discrete] ==== Script compilation circuit breaker Slightly different than the previous memory-based circuit breaker, the script diff --git a/docs/reference/modules/indices/fielddata.asciidoc b/docs/reference/modules/indices/fielddata.asciidoc index 1d5ad5689394..c69c7d371788 100644 --- a/docs/reference/modules/indices/fielddata.asciidoc +++ b/docs/reference/modules/indices/fielddata.asciidoc @@ -21,7 +21,7 @@ and perform poorly. NOTE: These are static settings which must be configured on every data node in the cluster. -[float] +[discrete] [[fielddata-monitoring]] ==== Monitoring field data diff --git a/docs/reference/modules/indices/recovery.asciidoc b/docs/reference/modules/indices/recovery.asciidoc index 01a6eaa8ed0f..b598a9b57076 100644 --- a/docs/reference/modules/indices/recovery.asciidoc +++ b/docs/reference/modules/indices/recovery.asciidoc @@ -12,7 +12,7 @@ Peer recovery automatically occurs when {es}: You can view a list of in-progress and completed recoveries using the <>. 
-[float] +[discrete] ==== Recovery settings `indices.recovery.max_bytes_per_sec`:: @@ -37,7 +37,7 @@ you are using <> then you may be able to give your hot nodes a higher recovery bandwidth limit than your warm nodes. -[float] +[discrete] ==== Expert peer recovery settings You can use the following _expert_ setting to manage resources for peer recoveries. diff --git a/docs/reference/modules/indices/request_cache.asciidoc b/docs/reference/modules/indices/request_cache.asciidoc index e2551e188ee3..910586601576 100644 --- a/docs/reference/modules/indices/request_cache.asciidoc +++ b/docs/reference/modules/indices/request_cache.asciidoc @@ -26,7 +26,7 @@ Scripted queries that use the API calls which are non-deterministic, such as `Math.random()` or `new Date()` are not cached. =================================== -[float] +[discrete] ==== Cache invalidation The cache is smart -- it keeps the same _near real-time_ promise as uncached @@ -49,7 +49,7 @@ POST /kimchy,elasticsearch/_cache/clear?request=true ------------------------ // TEST[s/^/PUT kimchy\nPUT elasticsearch\n/] -[float] +[discrete] ==== Enabling and disabling caching The cache is enabled by default, but can be disabled when creating a new @@ -76,7 +76,7 @@ PUT /my_index/_settings // TEST[continued] -[float] +[discrete] ==== Enabling and disabling caching per request The `request_cache` query-string parameter can be used to enable or disable @@ -102,7 +102,7 @@ Requests where `size` is greater than 0 will not be cached even if the request c enabled in the index settings. To cache these requests you will need to use the query-string parameter detailed here. -[float] +[discrete] ==== Cache key The whole JSON body is used as the cache key. This means that if the JSON @@ -113,7 +113,7 @@ TIP: Most JSON libraries support a _canonical_ mode which ensures that JSON keys are always emitted in the same order. 
This canonical mode can be used in the application to ensure that a request is always serialized in the same way. -[float] +[discrete] ==== Cache settings The cache is managed at the node level, and has a default maximum size of `1%` @@ -129,7 +129,7 @@ for cached results, but there should be no reason to do so. Remember that stale results are automatically invalidated when the index is refreshed. This setting is provided for completeness' sake only. -[float] +[discrete] ==== Monitoring cache usage The size of the cache (in bytes) and the number of evictions can be viewed diff --git a/docs/reference/modules/remote-clusters.asciidoc b/docs/reference/modules/remote-clusters.asciidoc index 91b163accbc6..eec5ae2cfe3f 100644 --- a/docs/reference/modules/remote-clusters.asciidoc +++ b/docs/reference/modules/remote-clusters.asciidoc @@ -15,7 +15,7 @@ goes through the <>. Remote cluster connections consist of uni-directional connections from the coordinating node to the remote cluster. -[float] +[discrete] [[sniff-mode]] === Sniff mode @@ -27,7 +27,7 @@ are accessible by the local cluster. Sniff mode is the default connection mode. -[float] +[discrete] [[gateway-nodes-selection]] ==== Gateway nodes selection @@ -62,7 +62,7 @@ communicate with 6.7. The matrix below summarizes compatibility as described above (see <>), though such tagged nodes still have to satisfy the two above requirements. -[float] +[discrete] [[proxy-mode]] === Proxy mode @@ -77,7 +77,7 @@ to the sniff <>, the remote connections are subject to the same version compatibility rules as <>. -[float] +[discrete] [[configuring-remote-clusters]] ==== Configuring remote clusters @@ -229,7 +229,7 @@ PUT _cluster/settings <1> `cluster_two` would be removed from the cluster settings, leaving `cluster_one` and `cluster_three` intact. -[float] +[discrete] [[remote-cluster-settings]] === Remote cluster settings for all modes @@ -279,7 +279,7 @@ and <> are described below.
Elasticsearch compresses the response. If unset, the global `transport.compress` is used as the fallback setting. -[float] +[discrete] [[remote-cluster-sniff-settings]] === Remote cluster settings for sniff mode @@ -299,7 +299,7 @@ and <> are described below. `node.attr.gateway: true` such that only nodes with this attribute will be connected to if `cluster.remote.node.attr` is set to `gateway`. -[float] +[discrete] [[remote-cluster-proxy-settings]] === Remote cluster settings for proxy mode @@ -321,7 +321,7 @@ and <> are described below. remote connections if this field is not a valid hostname as defined by the TLS SNI specification. -[float] +[discrete] [[retrieve-remote-clusters-info]] === Retrieving remote clusters info diff --git a/docs/reference/monitoring/collectors.asciidoc b/docs/reference/monitoring/collectors.asciidoc index 32cd07ade56a..c64915ce94ef 100644 --- a/docs/reference/monitoring/collectors.asciidoc +++ b/docs/reference/monitoring/collectors.asciidoc @@ -120,7 +120,7 @@ NOTE: Collection is currently done serially, rather than in parallel, to avoid For more information about the configuration options for the collectors, see <>. -[float] +[discrete] [[es-monitoring-stack]] ==== Collecting data from across the Elastic Stack diff --git a/docs/reference/monitoring/exporters.asciidoc b/docs/reference/monitoring/exporters.asciidoc index 67947319129c..76bf30d0b614 100644 --- a/docs/reference/monitoring/exporters.asciidoc +++ b/docs/reference/monitoring/exporters.asciidoc @@ -76,7 +76,7 @@ again. While an active monitoring index is read-only, it will naturally fail to write (index) new data and will continuously log errors that indicate the write failure. For more information, see <>. -[float] +[discrete] [[es-monitoring-default-exporter]] === Default exporters @@ -96,7 +96,7 @@ If another exporter is already defined, the default exporter is _not_ created. When you define a new exporter, if the default exporter exists, it is automatically removed. 
-[float] +[discrete] [[es-monitoring-templates]] === Exporter templates and ingest pipelines diff --git a/docs/reference/query-dsl/geo-bounding-box-query.asciidoc b/docs/reference/query-dsl/geo-bounding-box-query.asciidoc index c7784268f9be..ca355413b2e5 100644 --- a/docs/reference/query-dsl/geo-bounding-box-query.asciidoc +++ b/docs/reference/query-dsl/geo-bounding-box-query.asciidoc @@ -67,7 +67,7 @@ GET my_locations/_search } -------------------------------------------------- -[float] +[discrete] ==== Query Options [cols="<,<",options="header",] @@ -85,13 +85,13 @@ Default is `memory`. |======================================================================= [[query-dsl-geo-bounding-box-query-accepted-formats]] -[float] +[discrete] ==== Accepted Formats In much the same way the geo_point type can accept different representations of the geo point, the filter can accept it as well: -[float] +[discrete] ===== Lat Lon As Properties [source,console] @@ -122,7 +122,7 @@ GET my_locations/_search } -------------------------------------------------- -[float] +[discrete] ===== Lat Lon As Array Format in `[lon, lat]`, note, the order of lon/lat here in order to @@ -150,7 +150,7 @@ GET my_locations/_search } -------------------------------------------------- -[float] +[discrete] ===== Lat Lon As String Format in `lat,lon`. @@ -177,7 +177,7 @@ GET my_locations/_search } -------------------------------------------------- -[float] +[discrete] ===== Bounding Box as Well-Known Text (WKT) [source,console] @@ -201,7 +201,7 @@ GET my_locations/_search } -------------------------------------------------- -[float] +[discrete] ===== Geohash [source,console] @@ -257,7 +257,7 @@ In this example, the geohash `dr` will produce the bounding box query with the top left corner at `45.0,-78.75` and the bottom right corner at `39.375,-67.5`. 
-[float] +[discrete] ==== Vertices The vertices of the bounding box can either be set by `top_left` and @@ -292,20 +292,20 @@ GET my_locations/_search -------------------------------------------------- -[float] +[discrete] ==== geo_point Type The filter *requires* the `geo_point` type to be set on the relevant field. -[float] +[discrete] ==== Multi Location Per Document The filter can work with multiple locations / points per document. Once a single location / point matches the filter, the document will be included in the filter -[float] +[discrete] [[geo-bbox-type]] ==== Type @@ -345,7 +345,7 @@ GET my_locations/_search } -------------------------------------------------- -[float] +[discrete] ==== Ignore Unmapped When set to `true` the `ignore_unmapped` option will ignore an unmapped field @@ -354,7 +354,7 @@ querying multiple indexes which might have different mappings. When set to `false` (the default value) the query will throw an exception if the field is not mapped. -[float] +[discrete] ==== Notes on Precision Geopoints have limited precision and are always rounded down during index time. 
diff --git a/docs/reference/query-dsl/geo-distance-query.asciidoc b/docs/reference/query-dsl/geo-distance-query.asciidoc index 9544862d5456..cfb2779659e2 100644 --- a/docs/reference/query-dsl/geo-distance-query.asciidoc +++ b/docs/reference/query-dsl/geo-distance-query.asciidoc @@ -64,13 +64,13 @@ GET /my_locations/_search } -------------------------------------------------- -[float] +[discrete] ==== Accepted Formats In much the same way the `geo_point` type can accept different representations of the geo point, the filter can accept it as well: -[float] +[discrete] ===== Lat Lon As Properties [source,console] @@ -96,7 +96,7 @@ GET /my_locations/_search } -------------------------------------------------- -[float] +[discrete] ===== Lat Lon As Array Format in `[lon, lat]`, note, the order of lon/lat here in order to @@ -123,7 +123,7 @@ GET /my_locations/_search -------------------------------------------------- -[float] +[discrete] ===== Lat Lon As String Format in `lat,lon`. @@ -148,7 +148,7 @@ GET /my_locations/_search } -------------------------------------------------- -[float] +[discrete] ===== Geohash [source,console] @@ -171,7 +171,7 @@ GET /my_locations/_search } -------------------------------------------------- -[float] +[discrete] ==== Options The following are options allowed on the filter: @@ -198,20 +198,20 @@ The following are options allowed on the filter: longitude, set to `COERCE` to additionally try and infer correct coordinates (default is `STRICT`). -[float] +[discrete] ==== geo_point Type The filter *requires* the `geo_point` type to be set on the relevant field. -[float] +[discrete] ==== Multi Location Per Document The `geo_distance` filter can work with multiple locations / points per document. Once a single location / point matches the filter, the document will be included in the filter. 
-[float] +[discrete] ==== Ignore Unmapped When set to `true` the `ignore_unmapped` option will ignore an unmapped field diff --git a/docs/reference/query-dsl/geo-polygon-query.asciidoc b/docs/reference/query-dsl/geo-polygon-query.asciidoc index 7767e8f1ee5d..c35881332113 100644 --- a/docs/reference/query-dsl/geo-polygon-query.asciidoc +++ b/docs/reference/query-dsl/geo-polygon-query.asciidoc @@ -32,7 +32,7 @@ GET /_search } -------------------------------------------------- -[float] +[discrete] ==== Query Options [cols="<,<",options="header",] @@ -45,10 +45,10 @@ invalid latitude or longitude, `COERCE` to try and infer correct latitude or longitude, or `STRICT` (default is `STRICT`). |======================================================================= -[float] +[discrete] ==== Allowed Formats -[float] +[discrete] ===== Lat Long as Array Format as `[lon, lat]` @@ -81,7 +81,7 @@ GET /_search } -------------------------------------------------- -[float] +[discrete] ===== Lat Lon as String Format in `lat,lon`. @@ -111,7 +111,7 @@ GET /_search } -------------------------------------------------- -[float] +[discrete] ===== Geohash [source,console] @@ -139,13 +139,13 @@ GET /_search } -------------------------------------------------- -[float] +[discrete] ==== geo_point Type The query *requires* the <> type to be set on the relevant field. -[float] +[discrete] ==== Ignore Unmapped When set to `true` the `ignore_unmapped` option will ignore an unmapped field diff --git a/docs/reference/query-dsl/geo-shape-query.asciidoc b/docs/reference/query-dsl/geo-shape-query.asciidoc index 8b349770769b..1046a35dc8e1 100644 --- a/docs/reference/query-dsl/geo-shape-query.asciidoc +++ b/docs/reference/query-dsl/geo-shape-query.asciidoc @@ -252,7 +252,7 @@ relation operator: intersects the query geometry. 
-[float] +[discrete] ==== Ignore Unmapped When set to `true` the `ignore_unmapped` option will ignore an unmapped field diff --git a/docs/reference/query-dsl/match-all-query.asciidoc b/docs/reference/query-dsl/match-all-query.asciidoc index 4c8ea5d3e8e5..6c77e4b83fd4 100644 --- a/docs/reference/query-dsl/match-all-query.asciidoc +++ b/docs/reference/query-dsl/match-all-query.asciidoc @@ -30,7 +30,7 @@ GET /_search -------------------------------------------------- [[query-dsl-match-none-query]] -[float] +[discrete] == Match None Query This is the inverse of the `match_all` query, which matches no documents. diff --git a/docs/reference/query-dsl/mlt-query.asciidoc b/docs/reference/query-dsl/mlt-query.asciidoc index 65f95cda6ac5..f54a6d03b127 100644 --- a/docs/reference/query-dsl/mlt-query.asciidoc +++ b/docs/reference/query-dsl/mlt-query.asciidoc @@ -151,7 +151,7 @@ The only required parameter is `like`, all other parameters have sensible defaults. There are three types of parameters: one to specify the document input, the other one for term selection and for query formation. -[float] +[discrete] ==== Document Input Parameters [horizontal] @@ -177,7 +177,7 @@ is the same as `like`. `fields`:: A list of fields to fetch and analyze the text from. -[float] +[discrete] [[mlt-query-term-selection]] ==== Term Selection Parameters @@ -219,7 +219,7 @@ reasonable to assume that "a stop word is never interesting". The analyzer that is used to analyze the free form text. Defaults to the analyzer associated with the first field in `fields`. -[float] +[discrete] ==== Query Formation Parameters [horizontal] diff --git a/docs/reference/query-dsl/multi-match-query.asciidoc b/docs/reference/query-dsl/multi-match-query.asciidoc index 0faa61111e6e..df0acd5523a3 100644 --- a/docs/reference/query-dsl/multi-match-query.asciidoc +++ b/docs/reference/query-dsl/multi-match-query.asciidoc @@ -23,7 +23,7 @@ GET /_search <1> The query string. <2> The fields to be queried. 
-[float] +[discrete] [[field-boost]] ==== `fields` and per-field boosting @@ -71,7 +71,7 @@ at once. It is defined by the `indices.query.bool.max_clause_count` <> setting. -[float] +[discrete] [[rewrite-param-perf-considerations]] === Performance considerations for the `rewrite` parameter For most uses, we recommend using the `constant_score`, diff --git a/docs/reference/query-dsl/percolate-query.asciidoc b/docs/reference/query-dsl/percolate-query.asciidoc index 8ad53bffd5b6..621ca044a29f 100644 --- a/docs/reference/query-dsl/percolate-query.asciidoc +++ b/docs/reference/query-dsl/percolate-query.asciidoc @@ -9,7 +9,7 @@ stored in an index. The `percolate` query itself contains the document that will be used as query to match with the stored queries. -[float] +[discrete] === Sample Usage Create an index with two fields: @@ -122,7 +122,7 @@ TIP: To provide a simple example, this documentation uses one index `my-index` f This set-up can work well when there are just a few percolate queries registered. However, with heavier usage it is recommended to store queries and documents in separate indices. Please see <> for more details. -[float] +[discrete] ==== Parameters The following parameters are required when percolating a document: @@ -148,7 +148,7 @@ In that case the `document` parameter can be substituted with the following para `preference`:: Optionally, preference to be used to fetch document to percolate. `version`:: Optionally, the expected version of the document to be fetched. -[float] +[discrete] ==== Percolating in a filter context In case you are not interested in the score, better performance can be expected by wrapping @@ -183,7 +183,7 @@ should be wrapped in a `constant_score` query or a `bool` query's filter clause. Note that the `percolate` query never gets cached by the query cache. -[float] +[discrete] ==== Percolating multiple documents The `percolate` query can match multiple documents simultaneously with the indexed percolator queries. 
@@ -265,14 +265,14 @@ GET /my-index/_search <1> The `_percolator_document_slot` indicates that the first, second and last documents specified in the `percolate` query are matching with this query. -[float] +[discrete] ==== Percolating an Existing Document In order to percolate a newly indexed document, the `percolate` query can be used. Based on the response from an index request, the `_id` and other meta information can be used to immediately percolate the newly added document. -[float] +[discrete] ===== Example Based on the previous example. @@ -330,14 +330,14 @@ case the search request would fail with a version conflict error. The search response returned is identical as in the previous example. -[float] +[discrete] ==== Percolate query and highlighting The `percolate` query is handled in a special way when it comes to highlighting. The queries hits are used to highlight the document that is provided in the `percolate` query. Whereas with regular highlighting the query in the search request is used to highlight the hits. -[float] +[discrete] ===== Example This example is based on the mapping of the first example. @@ -555,7 +555,7 @@ The slightly different response: <1> The highlight fields have been prefixed with the document slot they belong to, in order to know which highlight field belongs to what document. -[float] +[discrete] ==== Specifying multiple percolate queries It is possible to specify multiple `percolate` queries in a single search request: @@ -641,7 +641,7 @@ The above search request returns a response similar to this: <1> The `_percolator_document_slot_query1` percolator slot field indicates that these matched slots are from the `percolate` query with `_name` parameter set to `query1`. 
-[float] +[discrete] [[how-it-works]] ==== How it Works Under the Hood diff --git a/docs/reference/query-dsl/query_filter_context.asciidoc b/docs/reference/query-dsl/query_filter_context.asciidoc index 9a6b728ea741..75290290c07d 100644 --- a/docs/reference/query-dsl/query_filter_context.asciidoc +++ b/docs/reference/query-dsl/query_filter_context.asciidoc @@ -1,7 +1,7 @@ [[query-filter-context]] == Query and filter context -[float] +[discrete] [[relevance-scores]] === Relevance scores @@ -14,7 +14,7 @@ The relevance score is a positive floating point number, returned in the relevance scores differently, score calculation also depends on whether the query clause is run in a **query** or **filter** context. -[float] +[discrete] [[query-context]] === Query context In the query context, a query clause answers the question ``__How well does this @@ -26,7 +26,7 @@ Query context is in effect whenever a query clause is passed to a `query` parameter, such as the `query` parameter in the <> API. -[float] +[discrete] [[filter-context]] === Filter context In a filter context, a query clause answers the question ``__Does this @@ -46,7 +46,7 @@ parameter, such as the `filter` or `must_not` parameters in the <> query, or the <> aggregation. -[float] +[discrete] [[query-filter-context-ex]] === Example of query and filter contexts Below is an example of query clauses being used in query and filter context diff --git a/docs/reference/query-dsl/regexp-syntax.asciidoc b/docs/reference/query-dsl/regexp-syntax.asciidoc index 11b52e360c3d..2ff5fa4373fa 100644 --- a/docs/reference/query-dsl/regexp-syntax.asciidoc +++ b/docs/reference/query-dsl/regexp-syntax.asciidoc @@ -12,7 +12,7 @@ match patterns in data using placeholder characters, called operators. {es} uses https://lucene.apache.org/core/[Apache Lucene]'s regular expression engine to parse these queries. 
-[float] +[discrete] [[regexp-reserved-characters]] === Reserved characters Lucene's regular expression engine supports all Unicode characters. However, the @@ -39,7 +39,7 @@ backslash or surround it with double quotes. For example: .... -[float] +[discrete] [[regexp-standard-operators]] === Standard operators @@ -152,7 +152,7 @@ example: .... -- -[float] +[discrete] [[regexp-optional-operators]] === Optional operators @@ -162,7 +162,7 @@ Lucene's regular expression engine. To enable multiple operators, use a `|` separator. For example, a `flags` value of `COMPLEMENT|INTERVAL` enables the `COMPLEMENT` and `INTERVAL` operators. -[float] +[discrete] ==== Valid values `ALL` (Default):: @@ -216,7 +216,7 @@ You can combine the `@` operator with `&` and `~` operators to create an .... -- -[float] +[discrete] [[regexp-unsupported-operators]] === Unsupported operators Lucene's regular expression engine does not support anchor operators, such as diff --git a/docs/reference/query-dsl/shape-query.asciidoc b/docs/reference/query-dsl/shape-query.asciidoc index b3f5e87c6283..919993a4eb29 100644 --- a/docs/reference/query-dsl/shape-query.asciidoc +++ b/docs/reference/query-dsl/shape-query.asciidoc @@ -179,7 +179,7 @@ is within the query geometry. * `CONTAINS` - Return all documents whose `shape` field contains the query geometry. -[float] +[discrete] ==== Ignore Unmapped When set to `true` the `ignore_unmapped` option will ignore an unmapped field diff --git a/docs/reference/query-dsl/term-level-queries.asciidoc b/docs/reference/query-dsl/term-level-queries.asciidoc index fd3f57091627..440f436b49a3 100644 --- a/docs/reference/query-dsl/term-level-queries.asciidoc +++ b/docs/reference/query-dsl/term-level-queries.asciidoc @@ -16,7 +16,7 @@ Term-level queries still normalize search terms for `keyword` fields with the `normalizer` property. For more details, see <>. 
==== -[float] +[discrete] [[term-level-query-types]] === Types of term-level queries diff --git a/docs/reference/release-notes/8.0.0-alpha1.asciidoc b/docs/reference/release-notes/8.0.0-alpha1.asciidoc index 3cbaee5fb3c5..9851c32c709e 100644 --- a/docs/reference/release-notes/8.0.0-alpha1.asciidoc +++ b/docs/reference/release-notes/8.0.0-alpha1.asciidoc @@ -7,7 +7,7 @@ The changes listed below have been released for the first time in {es} 8.0.0-alpha1. [[breaking-8.0.0-alpha1]] -[float] +[discrete] === Breaking changes Aggregations:: diff --git a/docs/reference/release-notes/highlights.asciidoc b/docs/reference/release-notes/highlights.asciidoc index 6e19586a63d1..8689c391925d 100644 --- a/docs/reference/release-notes/highlights.asciidoc +++ b/docs/reference/release-notes/highlights.asciidoc @@ -25,7 +25,7 @@ endif::[] // end::notable-highlights[] // Omit the notable highlights tag for entries that only need to appear in the ES ref: -// [float] +// [discrete] // === Heading // // Description. 
diff --git a/docs/reference/rollup/api-quickref.asciidoc b/docs/reference/rollup/api-quickref.asciidoc index 8a64d9df17f3..89c29b98b596 100644 --- a/docs/reference/rollup/api-quickref.asciidoc +++ b/docs/reference/rollup/api-quickref.asciidoc @@ -16,7 +16,7 @@ Most rollup endpoints have the following base: ---- // NOTCONSOLE -[float] +[discrete] [[rollup-api-jobs]] ==== /job/ @@ -27,14 +27,14 @@ Most rollup endpoints have the following base: * {ref}/rollup-stop-job.html[POST /_rollup/job//_stop]: Stop a {rollup-job} * {ref}/rollup-delete-job.html[DELETE /_rollup/job/+++]: Delete a {rollup-job} -[float] +[discrete] [[rollup-api-data]] ==== /data/ * {ref}/rollup-get-rollup-caps.html[GET /_rollup/data//_rollup_caps+++]: Get Rollup Capabilities * {ref}/rollup-get-rollup-index-caps.html[GET //_rollup/data/+++]: Get Rollup Index Capabilities -[float] +[discrete] [[rollup-api-index]] ==== // diff --git a/docs/reference/rollup/overview.asciidoc b/docs/reference/rollup/overview.asciidoc index 843cd5c05849..1d56b56f0bd3 100644 --- a/docs/reference/rollup/overview.asciidoc +++ b/docs/reference/rollup/overview.asciidoc @@ -25,7 +25,7 @@ So while the cost of storing a millisecond of sensor data from ten years ago is reading often diminishes with time. It's not useless -- it could easily contribute to a useful analysis -- but its reduced value often leads to deletion rather than paying the fixed storage cost. -[float] +[discrete] ==== Rollup stores historical data at reduced granularity That's where Rollup comes into play. The Rollup functionality summarizes old, high-granularity data into a reduced @@ -41,7 +41,7 @@ automates this process of summarizing historical data. Details about setting up and configuring Rollup are covered in <> -[float] +[discrete] ==== Rollup uses standard query DSL The Rollup feature exposes a new search endpoint (`/_rollup_search` vs the standard `/_search`) which knows how to search @@ -55,7 +55,7 @@ are covered more in <>. 
But if your queries, aggregations and dashboards only use the available functionality, redirecting them to historical data is trivial. -[float] +[discrete] ==== Rollup merges "live" and "rolled" data A useful feature of Rollup is the ability to query both "live", realtime data in addition to historical "rolled" data @@ -69,7 +69,7 @@ would only see data older than a month. The RollupSearch endpoint, however, sup It will take the results from both data sources and merge them together. If there is overlap between the "live" and "rolled" data, live data is preferred to increase accuracy. -[float] +[discrete] ==== Rollup is multi-interval aware Finally, Rollup is capable of intelligently utilizing the best interval available. If you've worked with summarizing diff --git a/docs/reference/rollup/rollup-agg-limitations.asciidoc b/docs/reference/rollup/rollup-agg-limitations.asciidoc index 6f9f949bf8b6..8390c5b80a5a 100644 --- a/docs/reference/rollup/rollup-agg-limitations.asciidoc +++ b/docs/reference/rollup/rollup-agg-limitations.asciidoc @@ -8,7 +8,7 @@ experimental[] There are some limitations to how fields can be rolled up / aggregated. This page highlights the major limitations so that you are aware of them. 
-[float] +[discrete] ==== Limited aggregation components The Rollup functionality allows fields to be grouped with the following aggregations: diff --git a/docs/reference/rollup/rollup-api.asciidoc b/docs/reference/rollup/rollup-api.asciidoc index 9e56c5f15847..a24b85513db8 100644 --- a/docs/reference/rollup/rollup-api.asciidoc +++ b/docs/reference/rollup/rollup-api.asciidoc @@ -3,7 +3,7 @@ [[rollup-apis]] == Rollup APIs -[float] +[discrete] [[rollup-jobs-endpoint]] === Jobs @@ -11,14 +11,14 @@ * <> or <> * <> -[float] +[discrete] [[rollup-data-endpoint]] === Data * <> * <> -[float] +[discrete] [[rollup-search-endpoint]] === Search diff --git a/docs/reference/rollup/rollup-getting-started.asciidoc b/docs/reference/rollup/rollup-getting-started.asciidoc index 7eafd04682bc..5ef9b090a618 100644 --- a/docs/reference/rollup/rollup-getting-started.asciidoc +++ b/docs/reference/rollup/rollup-getting-started.asciidoc @@ -25,7 +25,7 @@ look like this: -------------------------------------------------- // NOTCONSOLE -[float] +[discrete] ==== Creating a rollup job We'd like to rollup these documents into hourly summaries, which will allow us to generate reports and dashboards with any time interval @@ -127,7 +127,7 @@ After you execute the above command and create the job, you'll receive the follo } ---- -[float] +[discrete] ==== Starting the job After the job is created, it will be sitting in an inactive state. Jobs need to be started before they begin processing data (this allows @@ -141,7 +141,7 @@ POST _rollup/job/sensor/_start -------------------------------------------------- // TEST[setup:sensor_rollup_job] -[float] +[discrete] ==== Searching the rolled results After the job has run and processed some data, we can use the <> endpoint to do some searching. 
The Rollup feature is designed @@ -316,7 +316,7 @@ Which returns a corresponding response: In addition to being more complicated (date histogram and a terms aggregation, plus an additional average metric), you'll notice the date_histogram uses a `7d` interval instead of `60m`. -[float] +[discrete] ==== Conclusion This quickstart should have provided a concise overview of the core functionality that Rollup exposes. There are more tips and things diff --git a/docs/reference/rollup/rollup-search-limitations.asciidoc b/docs/reference/rollup/rollup-search-limitations.asciidoc index 9e5315043ed2..adc597d02e9c 100644 --- a/docs/reference/rollup/rollup-search-limitations.asciidoc +++ b/docs/reference/rollup/rollup-search-limitations.asciidoc @@ -10,7 +10,7 @@ live data is thrown away, you will always lose some flexibility. This page highlights the major limitations so that you are aware of them. -[float] +[discrete] ==== Only one {rollup} index per search When using the <> endpoint, the `index` parameter accepts one or more indices. These can be a mix of regular, non-rollup @@ -31,7 +31,7 @@ Needless to say, this is a technically challenging piece of code. To help simplify the problem, we have limited search to just one rollup index at a time (which may contain multiple jobs). In the future we may be able to open this up to multiple rollup jobs. -[float] +[discrete] [[aggregate-stored-only]] ==== Can only aggregate what's been stored @@ -80,7 +80,7 @@ The response will tell you that the field and aggregation were not possible, bec ---- // TESTRESPONSE[s/"stack_trace": \.\.\./"stack_trace": $body.$_path/] -[float] +[discrete] ==== Interval granularity Rollups are stored at a certain granularity, as defined by the `date_histogram` group in the configuration. This means you @@ -110,7 +110,7 @@ as needed. 
That said, if multiple jobs are present in a single rollup index with varying intervals, the search endpoint will identify and use the job(s) with the largest interval to satisfy the search request. -[float] +[discrete] ==== Limited querying components The Rollup functionality allows `query`'s in the search request, but with a limited subset of components. The queries currently allowed are: @@ -127,7 +127,7 @@ If you wish to filter on a keyword `hostname` field, that field must have been c If you attempt to use an unsupported query, or the query references a field that wasn't configured in the rollup job, an exception will be thrown. We expect the list of supported queries to grow over time as more are implemented. -[float] +[discrete] ==== Timezones Rollup documents are stored in the timezone of the `date_histogram` group configuration in the job. If no timezone is specified, the default diff --git a/docs/reference/scripting.asciidoc b/docs/reference/scripting.asciidoc index 33b8795a5811..b081bd81c99e 100644 --- a/docs/reference/scripting.asciidoc +++ b/docs/reference/scripting.asciidoc @@ -12,7 +12,7 @@ Additional `lang` plugins enable you to run scripts written in other languages. Everywhere a script can be used, you can include a `lang` parameter to specify the language of the script. -[float] +[discrete] == General-purpose languages These languages can be used for any purpose in the scripting APIs, @@ -30,7 +30,7 @@ and give the most flexibility. 
|======================================================================= -[float] +[discrete] == Special-purpose languages These languages are less flexible, but typically have higher performance for diff --git a/docs/reference/scripting/expression.asciidoc b/docs/reference/scripting/expression.asciidoc index fe58cbbdf131..3bf4f8f8445f 100644 --- a/docs/reference/scripting/expression.asciidoc +++ b/docs/reference/scripting/expression.asciidoc @@ -5,7 +5,7 @@ Lucene's expressions compile a `javascript` expression to bytecode. They are designed for high-performance custom ranking and sorting functions and are enabled for `inline` and `stored` scripting by default. -[float] +[discrete] === Performance Expressions were designed to have competitive performance with custom Lucene code. @@ -14,7 +14,7 @@ scripting engines: expressions do more "up-front". This allows for very fast execution, even faster than if you had written a `native` script. -[float] +[discrete] === Syntax Expressions support a subset of javascript syntax: a single expression. @@ -32,7 +32,7 @@ Variables in `expression` scripts are available to access: You can use Expressions scripts for `script_score`, `script_fields`, sort scripts, and numeric aggregation scripts, simply set the `lang` parameter to `expression`. -[float] +[discrete] === Numeric field API [cols="<,<",options="header",] |======================================================================= @@ -66,7 +66,7 @@ When a document is missing the field completely, by default the value will be tr Boolean fields are exposed as numerics, with `true` mapped to `1` and `false` mapped to `0`. For example: `doc['on_sale'].value ? 
doc['price'].value * 0.5 : doc['price'].value` -[float] +[discrete] === Date field API Date fields are treated as the number of milliseconds since January 1, 1970 and support the Numeric Fields API above, plus access to some date-specific fields: @@ -111,7 +111,7 @@ The following example shows the difference in years between the `date` fields da `doc['date1'].date.year - doc['date0'].date.year` -[float] +[discrete] [[geo-point-field-api]] === `geo_point` field API [cols="<,<",options="header",] @@ -132,7 +132,7 @@ The following example computes distance in kilometers from Washington, DC: In this example the coordinates could have been passed as parameters to the script, e.g. based on geolocation of the user. -[float] +[discrete] === Limitations There are a few limitations relative to other script languages: diff --git a/docs/reference/scripting/fields.asciidoc b/docs/reference/scripting/fields.asciidoc index 47adda9337a6..1292e84f7c66 100644 --- a/docs/reference/scripting/fields.asciidoc +++ b/docs/reference/scripting/fields.asciidoc @@ -4,7 +4,7 @@ Depending on where a script is used, it will have access to certain special variables and document fields. -[float] +[discrete] == Update scripts A script used in the <>, @@ -16,7 +16,7 @@ API will have access to the `ctx` variable which exposes: `ctx.op`:: The operation that should be applied to the document: `index` or `delete`. `ctx._index` etc:: Access to <>, some of which may be read-only. -[float] +[discrete] == Search and aggregation scripts With the exception of <> which are @@ -32,7 +32,7 @@ Field values can be accessed from a script using each of which is explained below. [[scripting-score]] -[float] +[discrete] === Accessing the score of a document within a script Scripts used in the <>, @@ -79,7 +79,7 @@ GET my_index/_search ------------------------------------- -[float] +[discrete] [[modules-scripting-doc-vals]] === Doc values @@ -138,7 +138,7 @@ access `text` fields from scripts. 
=================================================== -[float] +[discrete] [[modules-scripting-source]] === The document `_source` @@ -200,7 +200,7 @@ GET my_index/_search } ------------------------------- -[float] +[discrete] [[modules-scripting-stored]] === Stored fields diff --git a/docs/reference/scripting/security.asciidoc b/docs/reference/scripting/security.asciidoc index 88d8471062c2..b0072be4fd3c 100644 --- a/docs/reference/scripting/security.asciidoc +++ b/docs/reference/scripting/security.asciidoc @@ -8,7 +8,7 @@ all software has bugs and it is important to minimize the risk of failure in any security layer. Find below rules of thumb for how to keep Elasticsearch from being a vulnerability. -[float] +[discrete] === Do not run as root First and foremost, never run Elasticsearch as the `root` user as this would allow any successful effort to circumvent the other security layers to do @@ -16,7 +16,7 @@ allow any successful effort to circumvent the other security layers to do that it is running as `root` but this is so important that it is worth double and triple checking. -[float] +[discrete] === Do not expose Elasticsearch directly to users Do not expose Elasticsearch directly to users, instead have an application make requests on behalf of users. If this is not possible, have an application @@ -26,7 +26,7 @@ to write a <> that overwhelms Elasticsearch and brings down the cluster. All such searches should be considered bugs and the Elasticsearch contributors make an effort to prevent this but they are still possible. -[float] +[discrete] === Do not expose Elasticsearch directly to the Internet Do not expose Elasticsearch to the Internet, instead have an application make requests on behalf of the Internet. Do not entertain the thought of having @@ -49,7 +49,7 @@ Bad: * Users can write arbitrary scripts, queries, `_search` requests. * User actions make documents with structure defined by users. 
-[float] +[discrete] [[modules-scripting-other-layers]] === Other security layers In addition to user privileges and script sandboxing Elasticsearch uses the @@ -75,7 +75,7 @@ when allowing more than the defaults. Any extra permissions weakens the total security of the Elasticsearch deployment. [[allowed-script-types-setting]] -[float] +[discrete] === Allowed script types setting Elasticsearch supports two script types: `inline` and `stored` (<>). @@ -91,7 +91,7 @@ script.allowed_types: inline <1> (or any other types). [[allowed-script-contexts-setting]] -[float] +[discrete] === Allowed script contexts setting By default all script contexts are allowed to be executed. This can be modified using the diff --git a/docs/reference/scripting/using.asciidoc b/docs/reference/scripting/using.asciidoc index 2aaf0f4ab089..7631812fe717 100644 --- a/docs/reference/scripting/using.asciidoc +++ b/docs/reference/scripting/using.asciidoc @@ -43,7 +43,7 @@ GET my_index/_search } ------------------------------------- -[float] +[discrete] === Script parameters `lang`:: @@ -106,7 +106,7 @@ for ingest contexts. You can change these settings dynamically by setting ======================================== -[float] +[discrete] [[modules-scripting-short-script-form]] === Short script form A short script form can be used for brevity. In the short form, `script` is represented @@ -130,7 +130,7 @@ The same script in the normal form: ---------------------- // NOTCONSOLE -[float] +[discrete] [[modules-scripting-stored-scripts]] === Stored scripts @@ -145,7 +145,7 @@ privileges to create, retrieve, and delete stored scripts: For more information, see <>. 
-[float] +[discrete] ==== Request examples The following are examples of using a stored script that lives at @@ -222,7 +222,7 @@ DELETE _scripts/calculate-score ----------------------------------- // TEST[continued] -[float] +[discrete] [[modules-scripting-search-templates]] === Search templates You can also use the `_scripts` API to store **search templates**. Search @@ -237,7 +237,7 @@ mistakes. Search templates use the http://mustache.github.io/mustache.5.html[mustache templating language]. See <> for more information and examples. -[float] +[discrete] [[modules-scripting-using-caching]] === Script caching @@ -412,7 +412,7 @@ DELETE /_ingest/pipeline/my_test_scores_pipeline We recommend testing and benchmarking any indexing changes before deploying them in production. -[float] +[discrete] [[modules-scripting-errors]] === Script errors Elasticsearch returns error details when there is a compilation or runtime error diff --git a/docs/reference/search.asciidoc b/docs/reference/search.asciidoc index 56b1d7d8c93c..97e5eeed8dcc 100644 --- a/docs/reference/search.asciidoc +++ b/docs/reference/search.asciidoc @@ -4,7 +4,7 @@ Most search APIs support <>, with the exception of the <> endpoints. -[float] +[discrete] [[search-routing]] === Routing @@ -52,7 +52,7 @@ The routing parameter can be multi valued represented as a comma separated string. This will result in hitting the relevant shards where the routing values match to. -[float] +[discrete] [[search-adaptive-replica]] === Adaptive Replica Selection @@ -82,7 +82,7 @@ If adaptive replica selection is turned off, searches are sent to the index/indices shards in a round robin fashion between all copies of the data (primaries and replicas). 
-[float] +[discrete] [[stats-groups]] === Stats Groups @@ -104,7 +104,7 @@ POST /_search -------------------------------------------------- // TEST[setup:twitter] -[float] +[discrete] [[global-search-timeout]] === Global Search Timeout @@ -121,7 +121,7 @@ The setting key is `search.default_search_timeout` and can be set using the <> endpoints. The default value is no global timeout. Setting this value to `-1` resets the global search timeout to no timeout. -[float] +[discrete] [[global-search-cancellation]] === Search Cancellation @@ -131,7 +131,7 @@ perform the request is closed by the client. It is fundamental that the http client sending requests closes connections whenever requests time out or are aborted. -[float] +[discrete] [[search-concurrency-and-parallelism]] === Search concurrency and parallelism diff --git a/docs/reference/search/profile.asciidoc b/docs/reference/search/profile.asciidoc index 63486c87c56c..672b938067d3 100644 --- a/docs/reference/search/profile.asciidoc +++ b/docs/reference/search/profile.asciidoc @@ -356,7 +356,7 @@ times. The meaning of the stats are as follows: -[float] +[discrete] ===== All parameters: [horizontal] @@ -890,7 +890,7 @@ overall time, the breakdown is inclusive of all children times. The meaning of the stats are as follows: -[float] +[discrete] ===== All parameters: [horizontal] diff --git a/docs/reference/search/rank-eval.asciidoc b/docs/reference/search/rank-eval.asciidoc index 362bbaf6eaa5..aade337c5a04 100644 --- a/docs/reference/search/rank-eval.asciidoc +++ b/docs/reference/search/rank-eval.asciidoc @@ -208,7 +208,7 @@ GET /my_index/_rank_eval The `metric` section determines which of the available evaluation metrics will be used. 
The following metrics are supported: -[float] +[discrete] [[k-precision]] ===== Precision at K (P@k) @@ -263,7 +263,7 @@ If set to 'true', unlabeled documents are ignored and neither count as relevant |======================================================================= -[float] +[discrete] [[k-recall]] ===== Recall at K (R@k) @@ -315,7 +315,7 @@ in the query. Defaults to 10. |======================================================================= -[float] +[discrete] ===== Mean reciprocal rank For every query in the test suite, this metric calculates the reciprocal of the @@ -356,7 +356,7 @@ in the query. Defaults to 10. |======================================================================= -[float] +[discrete] ===== Discounted cumulative gain (DCG) In contrast to the two metrics above, @@ -399,7 +399,7 @@ in the query. Defaults to 10. |======================================================================= -[float] +[discrete] ===== Expected Reciprocal Rank (ERR) Expected Reciprocal Rank (ERR) is an extension of the classical reciprocal rank diff --git a/docs/reference/search/request/highlighting.asciidoc b/docs/reference/search/request/highlighting.asciidoc index c30ca129d3ff..b15e6c85fc70 100644 --- a/docs/reference/search/request/highlighting.asciidoc +++ b/docs/reference/search/request/highlighting.asciidoc @@ -269,7 +269,7 @@ type:: The highlighter to use: `unified`, `plain`, or `fvh`. 
Defaults to * <> [[override-global-settings]] -[float] +[discrete] === Override global settings You can specify highlighter settings globally and selectively override them for @@ -296,7 +296,7 @@ GET /_search -------------------------------------------------- // TEST[setup:twitter] -[float] +[discrete] [[specify-highlight-query]] === Specify a highlight query @@ -365,7 +365,7 @@ GET /_search -------------------------------------------------- // TEST[setup:twitter] -[float] +[discrete] [[set-highlighter-type]] === Set highlighter type @@ -390,7 +390,7 @@ GET /_search // TEST[setup:twitter] [[configure-tags]] -[float] +[discrete] === Configure highlighting tags By default, the highlighting will wrap highlighted text in `` and @@ -455,7 +455,7 @@ GET /_search -------------------------------------------------- // TEST[setup:twitter] -[float] +[discrete] [[highlight-source]] === Highlight on source @@ -480,7 +480,7 @@ GET /_search [[highlight-all]] -[float] +[discrete] === Highlight in all fields By default, only fields that contains a query match are highlighted. Set @@ -504,7 +504,7 @@ GET /_search // TEST[setup:twitter] [[matched-fields]] -[float] +[discrete] === Combine matches on multiple fields WARNING: This is only supported by the `fvh` highlighter @@ -638,7 +638,7 @@ to [[explicit-field-order]] -[float] +[discrete] === Explicitly order highlighted fields Elasticsearch highlights the fields in the order that they are sent, but per the JSON spec, objects are unordered. If you need to be explicit about the order @@ -664,7 +664,7 @@ fields are highlighted but a plugin might. 
-[float] +[discrete] [[control-highlighted-frags]] === Control highlighted fragments @@ -761,7 +761,7 @@ GET /_search -------------------------------------------------- // TEST[setup:twitter] -[float] +[discrete] [[highlight-postings-list]] === Highlight using the postings list @@ -801,7 +801,7 @@ PUT /example } -------------------------------------------------- -[float] +[discrete] [[specify-fragmenter]] === Specify a fragmenter for the plain highlighter diff --git a/docs/reference/search/suggesters.asciidoc b/docs/reference/search/suggesters.asciidoc index bf1c35988d8a..8b074d2c6416 100644 --- a/docs/reference/search/suggesters.asciidoc +++ b/docs/reference/search/suggesters.asciidoc @@ -115,7 +115,7 @@ suggested text, its document frequency and score compared to the suggest entry text. The meaning of the score depends on the used suggester. The term suggester's score is based on the edit distance. -[float] +[discrete] [[global-suggest]] ===== Global suggest text diff --git a/docs/reference/search/suggesters/context-suggest.asciidoc b/docs/reference/search/suggesters/context-suggest.asciidoc index be2aaf5653cb..a9d2841cc186 100644 --- a/docs/reference/search/suggesters/context-suggest.asciidoc +++ b/docs/reference/search/suggesters/context-suggest.asciidoc @@ -86,7 +86,7 @@ NOTE: Adding context mappings increases the index size for completion field. The is entirely heap resident, you can monitor the completion field index size using <>. [[suggester-context-category]] -[float] +[discrete] ===== Category Context The `category` context allows you to associate one or more categories with suggestions at index @@ -130,7 +130,7 @@ are explicitly indexed, the suggestions are indexed with both set of categories. -[float] +[discrete] ====== Category Query Suggestions can be filtered by one or more categories. The following @@ -218,7 +218,7 @@ NOTE: If a suggestion entry matches multiple contexts the final score is compute maximum score produced by any matching contexts. 
[[suggester-context-geo]] -[float] +[discrete] ===== Geo location Context A `geo` context allows you to associate one or more geo points or geohashes with suggestions @@ -227,7 +227,7 @@ a certain distance of a specified geo location. Internally, geo points are encoded as geohashes with the specified precision. -[float] +[discrete] ====== Geo Mapping In addition to the `path` setting, `geo` context mapping accepts the following settings: @@ -241,7 +241,7 @@ In addition to the `path` setting, `geo` context mapping accepts the following s NOTE: The index time `precision` setting sets the maximum geohash precision that can be used at query time. -[float] +[discrete] ====== Indexing geo contexts `geo` contexts can be explicitly set with suggestions or be indexed from a geo point field in the @@ -271,7 +271,7 @@ PUT place/_doc/1 } -------------------------------------------------- -[float] +[discrete] ====== Geo location Query Suggestions can be filtered and boosted with respect to how close they are to one or diff --git a/docs/reference/settings/ccr-settings.asciidoc b/docs/reference/settings/ccr-settings.asciidoc index 54d9c390d9aa..5622af8f522e 100644 --- a/docs/reference/settings/ccr-settings.asciidoc +++ b/docs/reference/settings/ccr-settings.asciidoc @@ -5,7 +5,7 @@ These {ccr} settings can be dynamically updated on a live cluster with the <>. -[float] +[discrete] [[ccr-recovery-settings]] ==== Remote recovery settings @@ -23,7 +23,7 @@ leader and follower clusters. For example if it is set to `20mb` on a leader, the leader will only send `20mb/s` to the follower even if the follower is requesting and can accept `60mb/s`. Defaults to `40mb`. 
-[float] +[discrete] [[ccr-advanced-recovery-settings]] ==== Advanced remote recovery settings diff --git a/docs/reference/settings/ml-settings.asciidoc b/docs/reference/settings/ml-settings.asciidoc index 47f32fb3978b..9814c8cb6a7f 100644 --- a/docs/reference/settings/ml-settings.asciidoc +++ b/docs/reference/settings/ml-settings.asciidoc @@ -24,7 +24,7 @@ file. // end::ml-settings-description-tag[] -[float] +[discrete] [[general-ml-settings]] ==== General machine learning settings @@ -110,7 +110,7 @@ each node. Typically, jobs spend a small amount of time in this state before they move to `open` state. Jobs that must restore large models when they are opening spend more time in the `opening` state. Defaults to `2`. -[float] +[discrete] [[advanced-ml-settings]] ==== Advanced machine learning settings @@ -147,7 +147,7 @@ JVM. If such a process does not connect within the time period specified by this setting then the process is assumed to have failed. Defaults to `10s`. The minimum value for this setting is `5s`. -[float] +[discrete] [[model-inference-circuit-breaker]] ==== {ml-cap} circuit breaker settings diff --git a/docs/reference/settings/monitoring-settings.asciidoc b/docs/reference/settings/monitoring-settings.asciidoc index eb76f74f7e78..c97aa368f5fc 100644 --- a/docs/reference/settings/monitoring-settings.asciidoc +++ b/docs/reference/settings/monitoring-settings.asciidoc @@ -22,7 +22,7 @@ configure monitoring settings in `logstash.yml`. For more information, see <>. -[float] +[discrete] [[general-monitoring-settings]] ==== General Monitoring Settings @@ -30,7 +30,7 @@ For more information, see <>. deprecated:[7.8.0,Basic License features should always be enabled] + This deprecated setting has no effect. -[float] +[discrete] [[monitoring-collection-settings]] ==== Monitoring Collection Settings @@ -124,7 +124,7 @@ information, see <>, <>, and <>. 
-[float] +[discrete] [[local-exporter-settings]] ==== Local Exporter Settings @@ -163,7 +163,7 @@ cluster alerts are not displayed. (<>) Time to wait for the master node to setup `local` exporter for monitoring. After that, the non-master nodes will warn the user for possible missing X-Pack configuration. Defaults to `30s`. -[float] +[discrete] [[http-exporter-settings]] ==== HTTP Exporter Settings diff --git a/docs/reference/settings/notification-settings.asciidoc b/docs/reference/settings/notification-settings.asciidoc index 5661e5ed31f9..201e1d61e2bf 100644 --- a/docs/reference/settings/notification-settings.asciidoc +++ b/docs/reference/settings/notification-settings.asciidoc @@ -21,7 +21,7 @@ For more information about creating and updating the {es} keystore, see <>. // end::notification-settings-description-tag[] -[float] +[discrete] [[general-notification-settings]] ==== General Watcher Settings `xpack.watcher.enabled`:: @@ -89,7 +89,7 @@ corresponding endpoints are explicitly allowed as well. include::ssl-settings.asciidoc[] -[float] +[discrete] [[email-notification-settings]] ==== Email Notification Settings You can configure the following email notification settings in @@ -231,7 +231,7 @@ Defaults to `true`. include::ssl-settings.asciidoc[] -[float] +[discrete] [[slack-notification-settings]] ==== Slack Notification Settings You can configure the following Slack notification settings in @@ -272,7 +272,7 @@ via Slack. You can specify the following Slack account attributes: Slack attachments documentation]. -- -[float] +[discrete] [[jira-notification-settings]] ==== Jira Notification Settings You can configure the following Jira notification settings in @@ -301,7 +301,7 @@ issues in Jira. You can specify the following Jira account attributes: Optional. 
-- -[float] +[discrete] [[pagerduty-notification-settings]] ==== PagerDuty Notification Settings You can configure the following PagerDuty notification settings in diff --git a/docs/reference/settings/security-hash-settings.asciidoc b/docs/reference/settings/security-hash-settings.asciidoc index 061ca38d545c..1f6ca2a25e33 100644 --- a/docs/reference/settings/security-hash-settings.asciidoc +++ b/docs/reference/settings/security-hash-settings.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[hashing-settings]] ==== User cache and password hash algorithms diff --git a/docs/reference/settings/security-settings.asciidoc b/docs/reference/settings/security-settings.asciidoc index e61e9117186e..49eeceaed48f 100644 --- a/docs/reference/settings/security-settings.asciidoc +++ b/docs/reference/settings/security-settings.asciidoc @@ -22,7 +22,7 @@ with the exception of the secure settings, which you add to the {es} keystore. For more information about creating and updating the {es} keystore, see <>. -[float] +[discrete] [[general-security-settings]] ==== General security settings `xpack.security.enabled`:: @@ -53,14 +53,14 @@ the sensitive nature of the information. `xpack.security.fips_mode.enabled`:: Enables fips mode of operation. Set this to `true` if you run this {es} instance in a FIPS 140-2 enabled JVM. For more information, see <>. Defaults to `false`. -[float] +[discrete] [[password-hashing-settings]] ==== Password hashing settings `xpack.security.authc.password_hashing.algorithm`:: Specifies the hashing algorithm that is used for secure user credential storage. See <>. Defaults to `bcrypt`. -[float] +[discrete] [[anonymous-access-settings]] ==== Anonymous access settings You can configure the following anonymous access settings in @@ -80,7 +80,7 @@ resource. When set to `false`, an HTTP 401 response is returned and the user can provide credentials with the appropriate permissions to gain access. Defaults to `true`. 
-[float] +[discrete] [[security-automata-settings]] ==== Automata Settings In places where the {security-features} accept wildcard patterns (e.g. index @@ -108,7 +108,7 @@ The length of time to retain in an item in the automata cache (based on most recent usage). Defaults to `48h` (48 hours). -[float] +[discrete] [[field-document-security-settings]] ==== Document and field level security settings @@ -132,7 +132,7 @@ Document level security queries may depend on Lucene BitSet objects, and these a automatically cached to improve performance. Defaults to `50mb`, after which least-recently-used entries will be evicted. -[float] +[discrete] [[token-service-settings]] ==== Token service settings @@ -148,7 +148,7 @@ Set to `false` to disable the built-in token service. Defaults to `true` unless The length of time that a token is valid for. By default this value is `20m` or 20 minutes. The maximum value is 1 hour. -[float] +[discrete] [[api-key-service-settings]] ==== API key service settings @@ -178,7 +178,7 @@ The hashing algorithm that is used for the in-memory cached API key credentials. For possible values, see <>. Defaults to `ssha256`. -[float] +[discrete] [[realm-settings]] ==== Realm settings // tag::realm-settings-description-tag[] @@ -214,7 +214,7 @@ The valid settings vary depending on the realm type. For more information, see <>. // end::realm-settings-description-tag[] -[float] +[discrete] [[ref-realm-settings]] ===== Settings valid for all realms // tag::realm-order-tag[] @@ -228,7 +228,7 @@ Indicates whether a realm is enabled. You can use this setting to disable a realm without removing its configuration information. Defaults to `true`. [[ref-native-settings]] -[float] +[discrete] ===== Native realm settings For a native realm, the `type` must be set to `native`. In addition to the <>, you can specify @@ -253,7 +253,7 @@ Defaults to `true`. [[ref-users-settings]] -[float] +[discrete] ===== File realm settings The `type` setting must be set to `file`. 
In addition to the @@ -281,7 +281,7 @@ this realm, so that it only supports user lookups. Defaults to `true`. [[ref-ldap-settings]] -[float] +[discrete] ===== LDAP realm settings The `type` setting must be set to `ldap`. In addition to the @@ -552,7 +552,7 @@ this realm, so that it only supports user lookups. Defaults to `true`. [[ref-ad-settings]] -[float] +[discrete] ===== Active Directory realm settings The `type` setting must be set to `active_directory`. In addition to the @@ -811,7 +811,7 @@ Referrals are URLs returned by the server that are to be used to continue the LDAP operation (such as `search`). Defaults to `true`. [[ref-pki-settings]] -[float] +[discrete] ===== PKI realm settings The `type` setting must be set to `pki`. In addition to the @@ -875,7 +875,7 @@ must be defined. For more details, see <>. [[ref-saml-settings]] -[float] +[discrete] ===== SAML realm settings // tag::saml-description-tag[] The `type` setting must be set to `saml`. In addition to the @@ -1051,7 +1051,7 @@ For more information, see <>. // end::saml-req-authn-context-tag[] -[float] +[discrete] [[ref-saml-signing-settings]] ===== SAML realm signing settings // tag::saml-signing-description-tag[] @@ -1115,7 +1115,7 @@ The password to the keystore in `signing.keystore.path`. The password for the key in the keystore (`signing.keystore.path`). Defaults to the keystore password. -[float] +[discrete] [[ref-saml-encryption-settings]] ===== SAML realm encryption settings // tag::saml-encryption-description-tag[] @@ -1173,7 +1173,7 @@ The password for the key in the keystore (`encryption.keystore.path`). Only a single password is supported. If you are using multiple decryption keys, they cannot have individual passwords. 
-[float] +[discrete] [[ref-saml-ssl-settings]] ===== SAML realm SSL settings // tag::saml-ssl-description-tag[] @@ -1277,7 +1277,7 @@ include::{es-repo-dir}/settings/common-defs.asciidoc[tag=ssl-supported-protocols include::{es-repo-dir}/settings/common-defs.asciidoc[tag=ssl-cipher-suites-values] // end::saml-ssl-cipher-suites-tag[] -[float] +[discrete] [[ref-kerberos-settings]] ===== Kerberos realm settings // tag::kerberos-description-tag[] @@ -1322,7 +1322,7 @@ See <>. // end::kerberos-authorization-realms-tag[] [[ref-oidc-settings]] -[float] +[discrete] ===== OpenID Connect realm settings // tag::oidc-description-tag[] In addition to the <>, you @@ -1545,7 +1545,7 @@ the OpenID Connect Provider endpoints. Specifies the maximum number of connections allowed per endpoint. // end::oidc-http-max-endpoint-connections-tag[] -[float] +[discrete] [[ref-oidc-ssl-settings]] ===== OpenID Connect realm SSL settings // tag::oidc-ssl-description-tag[] @@ -1649,7 +1649,7 @@ include::{es-repo-dir}/settings/common-defs.asciidoc[tag=ssl-supported-protocols include::{es-repo-dir}/settings/common-defs.asciidoc[tag=ssl-cipher-suites-values] // end::oidc-ssl-cipher-suites-tag[] -[float] +[discrete] [[load-balancing]] ===== Load balancing and failover @@ -1675,7 +1675,7 @@ IP addresses that correspond to this DNS name. Connections will continuously iterate through the list of addresses. If a server is unavailable, iterating through the list of URLs will continue until a successful connection is made. -[float] +[discrete] [[ssl-tls-settings]] ==== General TLS settings `xpack.security.ssl.diagnose.trust`:: @@ -1687,7 +1687,7 @@ This diagnostic message contains information that can be used to determine the cause of the failure and assist with resolving the problem. Set to `false` to disable these messages. 
-[float] +[discrete] [[tls-ssl-key-settings]] ===== TLS/SSL key and trusted certificate settings @@ -1733,7 +1733,7 @@ include::ssl-settings.asciidoc[] include::ssl-settings.asciidoc[] [[ssl-tls-profile-settings]] -[float] +[discrete] ===== Transport profile TLS/SSL settings The same settings that are available for the <> are also available for each transport profile. By default, the settings for a @@ -1746,7 +1746,7 @@ transport profile, use the prefix `transport.profiles.$PROFILE.xpack.security.` append the portion of the setting after `xpack.security.transport.`. For the key setting, this would be `transport.profiles.$PROFILE.xpack.security.ssl.key`. -[float] +[discrete] [[ip-filtering-settings]] ==== IP filtering settings You can configure the following settings for <>. diff --git a/docs/reference/settings/transform-settings.asciidoc b/docs/reference/settings/transform-settings.asciidoc index d3ccc2d9fe62..d753f68d1c31 100644 --- a/docs/reference/settings/transform-settings.asciidoc +++ b/docs/reference/settings/transform-settings.asciidoc @@ -17,7 +17,7 @@ The dynamic settings can also be updated across a cluster with the TIP: Dynamic settings take precedence over settings in the `elasticsearch.yml` file. -[float] +[discrete] [[general-transform-settings]] ==== General {transforms} settings diff --git a/docs/reference/setup.asciidoc b/docs/reference/setup.asciidoc index 6de5f25c8bbb..c8f9e3418d4f 100644 --- a/docs/reference/setup.asciidoc +++ b/docs/reference/setup.asciidoc @@ -12,14 +12,14 @@ running, including: * Configuring [[supported-platforms]] -[float] +[discrete] == Supported platforms The matrix of officially supported operating systems and JVMs is available here: link:/support/matrix[Support Matrix]. Elasticsearch is tested on the listed platforms, but it is possible that it will work on other platforms too. 
-[float] +[discrete] [[jvm-version]] == Java (JVM) Version diff --git a/docs/reference/setup/bootstrap-checks-xes.asciidoc b/docs/reference/setup/bootstrap-checks-xes.asciidoc index a81d9bbe4c89..99cc8fbfbb2e 100644 --- a/docs/reference/setup/bootstrap-checks-xes.asciidoc +++ b/docs/reference/setup/bootstrap-checks-xes.asciidoc @@ -5,7 +5,7 @@ In addition to the <>, there are checks that are specific to {xpack} features. -[float] +[discrete] === Encrypt sensitive data check //See EncryptSensitiveDAtaBootstrapCheck.java @@ -16,7 +16,7 @@ the secure settings store. To pass this bootstrap check, you must set the `xpack.watcher.encryption_key` on each node in the cluster. For more information, see <>. -[float] +[discrete] === PKI realm check //See PkiRealmBootstrapCheckTests.java @@ -28,7 +28,7 @@ information, see <> and <>. To pass this bootstrap check, if a PKI realm is enabled, you must configure TLS and enable client authentication on at least one network communication layer. -[float] +[discrete] === Role mappings check If you authenticate users with realms other than `native` or `file` realms, you @@ -46,7 +46,7 @@ To pass this bootstrap check, the role mapping files must exist and must be valid. The Distinguished Names (DNs) that are listed in the role mappings files must also be valid. -[float] +[discrete] [[bootstrap-checks-tls]] === SSL/TLS check //See TLSLicenseBootstrapCheck.java @@ -62,7 +62,7 @@ To pass this bootstrap check, you must <>. -[float] +[discrete] === Token SSL check //See TokenSSLBootstrapCheckTests.java diff --git a/docs/reference/setup/bootstrap-checks.asciidoc b/docs/reference/setup/bootstrap-checks.asciidoc index 231de843549f..cf544b733295 100644 --- a/docs/reference/setup/bootstrap-checks.asciidoc +++ b/docs/reference/setup/bootstrap-checks.asciidoc @@ -20,7 +20,7 @@ There are some bootstrap checks that are always enforced to prevent Elasticsearch from running with incompatible settings. These checks are documented individually. 
-[float] +[discrete] [[dev-vs-prod-mode]] === Development vs. production mode @@ -41,7 +41,7 @@ can be useful for configuring a single node to be reachable via HTTP for testing purposes without triggering production mode. [[single-node-discovery]] -[float] +[discrete] === Single-node discovery We recognize that some users need to bind transport to an external interface for testing their usage of the transport client. For this situation, we provide the @@ -50,7 +50,7 @@ discovery type `single-node` (configure it by setting `discovery.type` to join a cluster with any other node. -[float] +[discrete] === Forcing the bootstrap checks If you are running a single node in production, it is possible to evade the bootstrap checks (either by not binding transport to an external interface, or diff --git a/docs/reference/setup/configuration.asciidoc b/docs/reference/setup/configuration.asciidoc index 5337522ade83..dae86043352f 100644 --- a/docs/reference/setup/configuration.asciidoc +++ b/docs/reference/setup/configuration.asciidoc @@ -10,7 +10,7 @@ as `node.name` and paths), or settings which a node requires in order to be able to join a cluster, such as `cluster.name` and `network.host`. [[config-files-location]] -[float] +[discrete] === Config files location Elasticsearch has three configuration files: @@ -45,7 +45,7 @@ shell is not sufficient. Instead, this variable is sourced from change the config directory location. -[float] +[discrete] === Config file format The configuration format is http://www.yaml.org/[YAML]. 
Here is an @@ -66,7 +66,7 @@ path.data: /var/lib/elasticsearch path.logs: /var/log/elasticsearch -------------------------------------------------- -[float] +[discrete] === Environment variable substitution Environment variables referenced with the `${...}` notation within the diff --git a/docs/reference/setup/important-settings/discovery-settings.asciidoc b/docs/reference/setup/important-settings/discovery-settings.asciidoc index de6ad8ab1716..d583f522da0e 100644 --- a/docs/reference/setup/important-settings/discovery-settings.asciidoc +++ b/docs/reference/setup/important-settings/discovery-settings.asciidoc @@ -8,7 +8,7 @@ There are two important discovery and cluster formation settings that should be configured before going to production so that nodes in the cluster can discover each other and elect a master node. -[float] +[discrete] [[unicast.hosts]] ==== `discovery.seed_hosts` @@ -29,7 +29,7 @@ If your master-eligible nodes do not have fixed names or addresses, use an <> to find their addresses dynamically. -[float] +[discrete] [[initial_master_nodes]] ==== `cluster.initial_master_nodes` diff --git a/docs/reference/setup/install.asciidoc b/docs/reference/setup/install.asciidoc index d1f3e79b0f1a..1fbce2987587 100644 --- a/docs/reference/setup/install.asciidoc +++ b/docs/reference/setup/install.asciidoc @@ -1,7 +1,7 @@ [[install-elasticsearch]] == Installing Elasticsearch -[float] +[discrete] === Hosted Elasticsearch You can run Elasticsearch on your own hardware, or use our @@ -10,7 +10,7 @@ on Elastic Cloud. The Elasticsearch Service is available on both AWS and GCP. {ess-trial}[Try out the Elasticsearch Service for free]. 
-[float] +[discrete] === Installing Elasticsearch Yourself Elasticsearch is provided in the following package formats: @@ -66,7 +66,7 @@ Formulae are available from the Elastic Homebrew tap for installing + {ref}/brew.html[Install {es} on macOS with Homebrew] -[float] +[discrete] [[config-mgmt-tools]] === Configuration Management Tools diff --git a/docs/reference/setup/install/zip-windows.asciidoc b/docs/reference/setup/install/zip-windows.asciidoc index be37b8cb9e18..f4f8cdd1a839 100644 --- a/docs/reference/setup/install/zip-windows.asciidoc +++ b/docs/reference/setup/install/zip-windows.asciidoc @@ -144,7 +144,7 @@ installation. However, upgrading across JVM types (e.g. JRE versus SE) is not supported, and does require the service to be reinstalled. [[windows-service-settings]] -[float] +[discrete] === Customizing service settings The Elasticsearch service can be configured prior to installation by setting the following environment variables (either using the https://technet.microsoft.com/en-us/library/cc754250(v=ws.10).aspx[set command] from the command line, or through the `System Properties->Environment Variables` GUI). diff --git a/docs/reference/setup/logging-config.asciidoc b/docs/reference/setup/logging-config.asciidoc index e9a85c83f398..249f82ea7afe 100644 --- a/docs/reference/setup/logging-config.asciidoc +++ b/docs/reference/setup/logging-config.asciidoc @@ -120,7 +120,7 @@ appenders can be found on the http://logging.apache.org/log4j/2.x/manual/configuration.html[Log4j documentation]. -[float] +[discrete] [[configuring-logging-levels]] === Configuring logging levels @@ -187,7 +187,7 @@ example, you want to send the logger to another file, or manage the logger differently; this is a rare use-case). -- -[float] +[discrete] [[deprecation-logging]] === Deprecation logging @@ -239,7 +239,7 @@ The user ID is included in the `X-Opaque-ID` field in deprecation JSON logs. 
--------------------------- // NOTCONSOLE -[float] +[discrete] [[json-logging]] === JSON log format diff --git a/docs/reference/setup/restart-cluster.asciidoc b/docs/reference/setup/restart-cluster.asciidoc index 87734a86fd46..8d3df2be91b1 100644 --- a/docs/reference/setup/restart-cluster.asciidoc +++ b/docs/reference/setup/restart-cluster.asciidoc @@ -9,7 +9,7 @@ nodes in the cluster while in the case of time, so the service remains uninterrupted. -[float] +[discrete] [[restart-cluster-full]] === Full-cluster restart @@ -166,7 +166,7 @@ the datafeeds from {kib} or with the <> and // end::restart_ml[] -[float] +[discrete] [[restart-cluster-rolling]] === Rolling restart diff --git a/docs/reference/setup/starting.asciidoc b/docs/reference/setup/starting.asciidoc index 0614c74d9b0f..999e599820bd 100644 --- a/docs/reference/setup/starting.asciidoc +++ b/docs/reference/setup/starting.asciidoc @@ -3,20 +3,20 @@ The method for starting {es} varies depending on how you installed it. -[float] +[discrete] [[start-targz]] === Archive packages (`.tar.gz`) If you installed {es} with a `.tar.gz` package, you can start {es} from the command line. -[float] +[discrete] include::install/targz-start.asciidoc[] -[float] +[discrete] include::install/targz-daemon.asciidoc[] -[float] +[discrete] [[start-zip]] === Archive packages (`.zip`) @@ -24,18 +24,18 @@ If you installed {es} on Windows with a `.zip` package, you can start {es} from the command line. If you want {es} to start automatically at boot time without any user interaction, <>. -[float] +[discrete] include::install/zip-windows-start.asciidoc[] -[float] +[discrete] [[start-deb]] === Debian packages -[float] +[discrete] [[start-es-deb-systemd]] include::install/systemd.asciidoc[] -[float] +[discrete] [[start-docker]] === Docker images @@ -43,7 +43,7 @@ If you installed a Docker image, you can start {es} from the command line. There are different methods depending on whether you're using development mode or production mode. 
See <>. -[float] +[discrete] [[start-msi]] === MSI packages @@ -52,13 +52,13 @@ from the command line. If you want it to start automatically at boot time without any user interaction, <>. -[float] +[discrete] include::install/msi-windows-start.asciidoc[] -[float] +[discrete] [[start-rpm]] === RPM packages -[float] +[discrete] [[start-es-rpm-systemd]] include::install/systemd.asciidoc[] diff --git a/docs/reference/setup/stopping.asciidoc b/docs/reference/setup/stopping.asciidoc index c9f718aa088c..8c3a8d40fa1d 100644 --- a/docs/reference/setup/stopping.asciidoc +++ b/docs/reference/setup/stopping.asciidoc @@ -36,7 +36,7 @@ $ cat /tmp/elasticsearch-pid && echo $ kill -SIGTERM 15516 -------------------------------------------------- -[float] +[discrete] [[fatal-errors]] === Stopping on Fatal Errors diff --git a/docs/reference/setup/sysconfig.asciidoc b/docs/reference/setup/sysconfig.asciidoc index 8b548202c80a..1862c2c74a61 100644 --- a/docs/reference/setup/sysconfig.asciidoc +++ b/docs/reference/setup/sysconfig.asciidoc @@ -16,7 +16,7 @@ The following settings *must* be considered before going to production: * <> [[dev-vs-prod]] -[float] +[discrete] === Development mode vs production mode By default, Elasticsearch assumes that you are working in development mode. diff --git a/docs/reference/slm/apis/index.asciidoc b/docs/reference/slm/apis/index.asciidoc index f3359ffaad19..ce03c58e4faa 100644 --- a/docs/reference/slm/apis/index.asciidoc +++ b/docs/reference/slm/apis/index.asciidoc @@ -14,7 +14,7 @@ view and manage snapshots, and restore data streams or indices. You can stop and restart SLM to temporarily pause automatic backups while performing upgrades or other maintenance. 
-[float] +[discrete] [[slm-and-security]] === Security and SLM diff --git a/docs/reference/slm/apis/slm-api.asciidoc b/docs/reference/slm/apis/slm-api.asciidoc index 295356fbe81d..7d60f02fe4e0 100644 --- a/docs/reference/slm/apis/slm-api.asciidoc +++ b/docs/reference/slm/apis/slm-api.asciidoc @@ -7,7 +7,7 @@ You use the following APIs to set up policies to automatically take snapshots an control how long they are retained. For more information about {slm} ({slm-init}), see <>. -[float] +[discrete] [[slm-api-policy-endpoint]] === Policy management APIs @@ -15,14 +15,14 @@ For more information about {slm} ({slm-init}), see <> * <> -[float] +[discrete] [[slm-api-index-endpoint]] === Snapshot management APIs * <> (take snapshots) * <> (delete expired snapshots) -[float] +[discrete] [[slm-api-management-endpoint]] === Operation management APIs diff --git a/docs/reference/snapshot-restore/index.asciidoc b/docs/reference/snapshot-restore/index.asciidoc index daeb2f8f5ed6..82f3097082bb 100644 --- a/docs/reference/snapshot-restore/index.asciidoc +++ b/docs/reference/snapshot-restore/index.asciidoc @@ -47,7 +47,7 @@ cluster is by using the snapshot and restore functionality. // end::backup-warning[] -[float] +[discrete] [[snapshot-restore-version-compatibility]] === Version compatibility diff --git a/docs/reference/snapshot-restore/register-repository.asciidoc b/docs/reference/snapshot-restore/register-repository.asciidoc index 03e0b3445119..58df21ba3469 100644 --- a/docs/reference/snapshot-restore/register-repository.asciidoc +++ b/docs/reference/snapshot-restore/register-repository.asciidoc @@ -96,7 +96,7 @@ When a repository is unregistered, {es} only removes the reference to the location where the repository is storing the snapshots. The snapshots themselves are left untouched and in place. -[float] +[discrete] [[snapshots-filesystem-repository]] === Shared file system repository @@ -163,7 +163,7 @@ unit, for example: `1GB`, `10MB`, `5KB`, `500B`. 
Defaults to `null` (unlimited c `max_snapshot_bytes_per_sec`:: Throttles per node snapshot rate. Defaults to `40mb` per second. `readonly`:: Makes repository read-only. Defaults to `false`. -[float] +[discrete] [[snapshots-read-only-repository]] === Read-only URL repository @@ -217,7 +217,7 @@ repositories.url.allowed_urls: ["http://www.example.org/root/*", "https://*.mydo NOTE: URLs using the `ftp`, `http`, `https`, or `jar` protocols do not need to be registered in the `path.repo` setting. -[float] +[discrete] [role="xpack"] [testenv="basic"] [[snapshots-source-only-repository]] @@ -261,7 +261,7 @@ PUT _snapshot/my_src_only_repository ----------------------------------- // TEST[continued] -[float] +[discrete] [[snapshots-repository-plugins]] === Repository plugins @@ -272,7 +272,7 @@ Other repository backends are available in these official plugins: * {plugins}/repository-azure.html[repository-azure] for Azure storage repositories * {plugins}/repository-gcs.html[repository-gcs] for Google Cloud Storage repositories -[float] +[discrete] [[snapshots-repository-verification]] === Repository verification When a repository is registered, it's immediately verified on all master and data nodes to make sure that it is functional @@ -301,7 +301,7 @@ POST /_snapshot/my_unverified_backup/_verify It returns a list of nodes where repository was successfully verified or an error message if verification process failed. -[float] +[discrete] [[snapshots-repository-cleanup]] === Repository cleanup Repositories can over time accumulate data that is not referenced by any existing snapshot. 
This is a result of the data safety guarantees diff --git a/docs/reference/sql/endpoints/jdbc.asciidoc b/docs/reference/sql/endpoints/jdbc.asciidoc index c6779d4409c3..36c4d8e5dcbf 100644 --- a/docs/reference/sql/endpoints/jdbc.asciidoc +++ b/docs/reference/sql/endpoints/jdbc.asciidoc @@ -8,7 +8,7 @@ It is a Type 4 driver, meaning it is a platform independent, stand-alone, Direct to database, pure Java driver that converts JDBC calls to {es-sql}. [[sql-jdbc-installation]] -[float] +[discrete] === Installation The JDBC driver can be obtained from: @@ -40,7 +40,7 @@ from `artifacts.elastic.co/maven` by adding it to the repositories list: ---- [[jdbc-setup]] -[float] +[discrete] === Setup The driver main class is `org.elasticsearch.xpack.sql.jdbc.EsDriver`. @@ -70,7 +70,7 @@ Optional. The driver recognizes the following properties: [[jdbc-cfg]] -[float] +[discrete] ===== Essential [[jdbc-cfg-timezone]] `timezone` (default JVM timezone):: @@ -78,7 +78,7 @@ Timezone used by the driver _per connection_ indicated by its `ID`. *Highly* recommended to set it (to, say, `UTC`) as the JVM timezone can vary, is global for the entire JVM and can't be changed easily when running under a security manager. [[jdbc-cfg-network]] -[float] +[discrete] ===== Network `connect.timeout` (default 30s):: @@ -97,7 +97,7 @@ Page size (in entries). The number of results returned per page by the server. Query timeout (in seconds). That is the maximum amount of time waiting for a query to return. [[jdbc-cfg-auth]] -[float] +[discrete] ==== Basic Authentication `user`:: Basic Authentication user name @@ -105,7 +105,7 @@ Query timeout (in seconds). That is the maximum amount of time waiting for a que `password`:: Basic Authentication password [[jdbc-cfg-ssl]] -[float] +[discrete] ==== SSL `ssl` (default false):: Enable SSL @@ -124,23 +124,23 @@ Query timeout (in seconds). 
That is the maximum amount of time waiting for a que `ssl.protocol`(default `TLS`):: SSL protocol to be used -[float] +[discrete] ==== Proxy `proxy.http`:: Http proxy host name `proxy.socks`:: SOCKS proxy host name -[float] +[discrete] ==== Mapping `field.multi.value.leniency` (default `true`):: Whether to be lenient and return the first value (without any guarantees of what that will be - typically the first in natural ascending order) for fields with multiple values (true) or throw an exception. -[float] +[discrete] ==== Index `index.include.frozen` (default `false`):: Whether to include <> in the query execution or not (default). -[float] +[discrete] ==== Additional `validate.properties` (default true):: If disabled, it will ignore any misspellings or unrecognizable properties. When enabled, an exception diff --git a/docs/reference/sql/endpoints/odbc.asciidoc b/docs/reference/sql/endpoints/odbc.asciidoc index fd92a37dca65..a7c583ce1754 100644 --- a/docs/reference/sql/endpoints/odbc.asciidoc +++ b/docs/reference/sql/endpoints/odbc.asciidoc @@ -6,7 +6,7 @@ == SQL ODBC [[sql-odbc-overview]] -[float] +[discrete] === Overview {odbc} is a 3.80 compliant ODBC driver for {es}. diff --git a/docs/reference/sql/endpoints/odbc/configuration.asciidoc b/docs/reference/sql/endpoints/odbc/configuration.asciidoc index 2158112b1ffb..eda7b9ee9b51 100644 --- a/docs/reference/sql/endpoints/odbc/configuration.asciidoc +++ b/docs/reference/sql/endpoints/odbc/configuration.asciidoc @@ -60,7 +60,7 @@ Such a file can be then shared among multiple systems and the user will need to The configuration steps are similar for all the above points. Following is an example of configuring a System DSN. -[float] +[discrete] ===== 2.1 Launch {odbc} DSN Editor Click on the _System DSN_ tab, then on the _Add..._ button: @@ -82,7 +82,7 @@ image:images/sql/odbc/dsn_editor_basic.png[] This new window has three tabs, each responsible for a set of configuration parameters, as follows. 
-[float] +[discrete] ===== 2.2 Connection parameters This tab allows configuration for the following items: @@ -129,7 +129,7 @@ At a minimum, the _Name_ and _Hostname_ fields must be provisioned, before the D WARNING: Connection encryption is enabled by default. This will need to be changed if connecting to an {es} node with no encryption. -[float] +[discrete] ===== 2.3 Cryptography parameters One of the following SSL options can be chosen: @@ -180,7 +180,7 @@ will be considered by default. Choose _All Files (\*.*)_ from the drop down, if .Certificate file browser image:images/sql/odbc/dsn_editor_security_cert.png[] -[float] +[discrete] ===== 2.4 Connection parameters The connection configuration can further be tweaked by the following parameters. @@ -255,7 +255,7 @@ This corresponds to the `EarlyExecution` setting in <>. .Connection parameters image:images/sql/odbc/dsn_editor_misc.png[] -[float] +[discrete] ===== 2.5 Logging parameters For troubleshooting purposes, the {odbc} offers functionality to log the API calls that an application makes; this is enabled in the Administrator application: @@ -289,7 +289,7 @@ When authentication is enabled, the password will be redacted from the logs. NOTE: Debug-logging can quickly lead to the creation of many very large files and generate significant processing overhead. Only enable it when instructed to do so, and preferably only when fetching low volumes of data. -[float] +[discrete] [[connection_testing]] ===== 2.6 Testing the connection Once the _Hostname_, the _Port_ (if different from implicit default) and the SSL options are configured, you can test if the provided @@ -352,7 +352,7 @@ once, the connection logging taking precedence over the environment variable logging. 
[[odbc-cfg-dsnparams]] -[float] +[discrete] ==== Connection string parameters The following is a list of additional parameters that can be configured for a diff --git a/docs/reference/sql/functions/aggs.asciidoc b/docs/reference/sql/functions/aggs.asciidoc index a506eadd3719..23ca1b7cfa1d 100644 --- a/docs/reference/sql/functions/aggs.asciidoc +++ b/docs/reference/sql/functions/aggs.asciidoc @@ -7,7 +7,7 @@ Functions for computing a _single_ result from a set of input values. {es-sql} supports aggregate functions only alongside <> (implicit or explicit). [[sql-functions-aggs-general]] -[float] +[discrete] === General Purpose [[sql-functions-aggs-avg]] @@ -404,7 +404,7 @@ include-tagged::{sql-specs}/docs/docs.csv-spec[aggSumScalars] -------------------------------------------------- [[sql-functions-aggs-statistics]] -[float] +[discrete] === Statistics [[sql-functions-aggs-kurtosis]] diff --git a/docs/reference/sql/functions/math.asciidoc b/docs/reference/sql/functions/math.asciidoc index e54644d75a9b..433132de8b4e 100644 --- a/docs/reference/sql/functions/math.asciidoc +++ b/docs/reference/sql/functions/math.asciidoc @@ -7,7 +7,7 @@ All math and trigonometric functions require their input (where applicable) to be numeric. 
[[sql-functions-math-generic]] -[float] +[discrete] === Generic [[sql-functions-math-abs]] @@ -388,7 +388,7 @@ include-tagged::{sql-specs}/docs/docs.csv-spec[mathTruncateWithNegativeParameter -------------------------------------------------- [[sql-functions-math-trigonometric]] -[float] +[discrete] === Trigonometric [[sql-functions-math-acos]] diff --git a/docs/reference/sql/language/data-types.asciidoc b/docs/reference/sql/language/data-types.asciidoc index d6318911d5fe..03b8bdf5ff19 100644 --- a/docs/reference/sql/language/data-types.asciidoc +++ b/docs/reference/sql/language/data-types.asciidoc @@ -90,7 +90,7 @@ s|SQL precision [[sql-multi-field]] -[float] +[discrete] ==== SQL and multi-fields A core concept in {es} is that of an `analyzed` field, that is a full-text value that is interpreted in order diff --git a/docs/reference/sql/language/indices.asciidoc b/docs/reference/sql/language/indices.asciidoc index 2892f1e22bdd..e8d20260d0ac 100644 --- a/docs/reference/sql/language/indices.asciidoc +++ b/docs/reference/sql/language/indices.asciidoc @@ -6,7 +6,7 @@ {es-sql} supports two types of patterns for matching multiple indices or tables: [[sql-index-patterns-multi]] -[float] +[discrete] ==== {es} multi-index The {es} notation for enumerating, including or excluding <> @@ -36,7 +36,7 @@ include-tagged::{sql-specs}/docs/docs.csv-spec[fromTablePatternQuoted] NOTE: There is the restriction that all resolved concrete tables have the exact same mapping. 
[[sql-index-patterns-like]] -[float] +[discrete] ==== SQL `LIKE` notation The common `LIKE` statement (including escaping if needed) to match a wildcard pattern, based on one `_` diff --git a/docs/reference/sql/language/syntax/lexic/index.asciidoc b/docs/reference/sql/language/syntax/lexic/index.asciidoc index 323400bde431..b717a72fb80e 100644 --- a/docs/reference/sql/language/syntax/lexic/index.asciidoc +++ b/docs/reference/sql/language/syntax/lexic/index.asciidoc @@ -62,7 +62,7 @@ Hence why in general, *especially* when dealing with user input it is *highly* r {es-sql} supports two kinds of __implicitly-typed__ literals: strings and numbers. [[sql-syntax-string-literals]] -[float] +[discrete] ===== String Literals A string literal is an arbitrary number of characters bounded by single quotes `'`: `'Giant Robot'`. @@ -71,7 +71,7 @@ To include a single quote in the string, escape it using another single quote: ` NOTE: An escaped single quote is *not* a double quote (`"`), but a single quote `'` _repeated_ (`''`). [[sql-syntax-numeric-literals]] -[float] +[discrete] ===== Numeric Literals Numeric literals are accepted both in decimal and scientific notation with exponent marker (`e` or `E`), starting either with a digit or decimal point `.`: @@ -88,7 +88,7 @@ Numeric literals are accepted both in decimal and scientific notation with expon Numeric literals that contain a decimal point are always interpreted as being of type `double`. Those without are considered `integer` if they fit, otherwise their type is `long` (or `BIGINT` in ANSI SQL types). [[sql-syntax-generic-literals]] -[float] +[discrete] ===== Generic Literals When dealing with an arbitrary type literal, one creates the object by casting, typically, the string representation to the desired type. 
This can be achieved through the dedicated <> and <>: diff --git a/docs/reference/sql/limitations.asciidoc b/docs/reference/sql/limitations.asciidoc index 05f58023fbec..c0bb8606d881 100644 --- a/docs/reference/sql/limitations.asciidoc +++ b/docs/reference/sql/limitations.asciidoc @@ -3,7 +3,7 @@ [[sql-limitations]] == SQL Limitations -[float] +[discrete] [[large-parsing-trees]] === Large queries may throw `ParsingException` @@ -11,7 +11,7 @@ Extremely large queries can consume too much memory during the parsing phase, in which case the engine will abort parsing and throw an error. In such cases, consider reducing the query to a smaller size by potentially simplifying it or splitting it into smaller queries. -[float] +[discrete] [[sys-columns-describe-table-nested-fields]] === Nested fields in `SYS COLUMNS` and `DESCRIBE TABLE` @@ -31,7 +31,7 @@ For example: SELECT dep.dep_name.keyword FROM test_emp GROUP BY languages; -------------------------------------------------- -[float] +[discrete] === Scalar functions on nested fields are not allowed in `WHERE` and `ORDER BY` clauses {es-sql} doesn't support the usage of scalar functions on top of nested fields in `WHERE` and `ORDER BY` clauses with the exception of comparison and logical operators. @@ -59,7 +59,7 @@ SELECT * FROM test_emp WHERE dep.start_date >= CAST('2020-01-01' AS DATE) OR dep is supported. -[float] +[discrete] === Multi-nested fields {es-sql} doesn't support multi-nested documents, so a query cannot reference more than one nested field in an index. @@ -80,20 +80,20 @@ nested_B.text |VARCHAR |KEYWORD `nested_A` and `nested_B` cannot be used at the same time, nor the `nested_A`/`nested_B` and `nested_A.nested_X` combination. For such situations, {es-sql} will display an error message. -[float] +[discrete] === Paginating nested inner hits When SELECTing a nested field, pagination will not work as expected: {es-sql} will return __at least__ the page size records. 
This is because of the way nested queries work in {es}: the root nested field will be returned along with its matching inner nested fields, pagination taking place on the **root nested document and not on its inner hits**. -[float] +[discrete] [[normalized-keyword-fields]] === Normalized `keyword` fields `keyword` fields in {es} can be normalized by defining a `normalizer`. Such fields are not supported in {es-sql}. -[float] +[discrete] === Array type of fields Array fields are not supported due to the "invisible" way in which {es} handles an array of values: the mapping doesn't indicate whether @@ -101,7 +101,7 @@ a field is an array (has multiple values) or not, so without reading all the dat When multiple values are returned for a field, by default, {es-sql} will throw an exception. However, it is possible to change this behavior through the `field_multi_value_leniency` parameter in REST (disabled by default) or `field.multi.value.leniency` in drivers (enabled by default). -[float] +[discrete] === Sorting by aggregation When doing aggregations (`GROUP BY`), {es-sql} relies on {es}'s `composite` aggregation for its support for paginating results. @@ -129,7 +129,7 @@ SELECT age, ROUND(AVG(salary)) AS avg FROM test GROUP BY age ORDER BY avg; SELECT age, MAX(salary) - MIN(salary) AS diff FROM test GROUP BY age ORDER BY diff; -------------------------------------------------- -[float] +[discrete] === Using a sub-select Using sub-selects (`SELECT X FROM (SELECT Y)`) is **supported to a small degree**: any sub-select that can be "flattened" into a single @@ -150,7 +150,7 @@ include-tagged::{sql-specs}/docs/docs.csv-spec[limitationSubSelectRewritten] But, if the sub-select would include a `GROUP BY` or `HAVING` or the enclosing `SELECT` would be more complex than `SELECT X FROM (SELECT ...) WHERE [simple_condition]`, this is currently **unsupported**. 
-[float] +[discrete] [[first-last-agg-functions-having-clause]] === Using <>/<> aggregation functions in `HAVING` clause @@ -158,7 +158,7 @@ Using `FIRST` and `LAST` in the `HAVING` clause is not supported. The same appli <> and <> when their target column is of type <> as they are internally translated to `FIRST` and `LAST`. -[float] +[discrete] [[group-by-time]] === Using TIME data type in GROUP BY or <> @@ -184,7 +184,7 @@ SELECT count(*) FROM test GROUP BY MINUTE((CAST(date_created AS TIME)); SELECT HISTOGRAM(CAST(birth_date AS TIME), INTERVAL '10' MINUTES) as h, COUNT(*) FROM t GROUP BY h ------------------------------------------------------------- -[float] +[discrete] [[geo-sql-limitations]] === Geo-related functions @@ -195,7 +195,7 @@ indexed with some loss of precision from the original values (4.190951585769653E 8.381903171539307E-8 for longitude). The altitude component is accepted but not stored in doc values nor indexed. Therefore calling `ST_Z` function in the filtering, grouping or sorting will return `null`. -[float] +[discrete] [[fields-from-source]] === Retrieving from `_source` @@ -205,7 +205,7 @@ If a column, for which there is no source stored, is asked for in a query, {es-s this restriction are: `keyword`, `date`, `scaled_float`, `geo_point`, `geo_shape` since they are NOT returned from `_source` but from `docvalue_fields`. -[float] +[discrete] [[fields-from-docvalues]] === Retrieving from `docvalue_fields` @@ -213,13 +213,13 @@ When the number of columns retrievable from `docvalue_fields` is greater than th the query will fail with `IllegalArgumentException: Trying to retrieve too many docvalue_fields` error. Either the mentioned {es} setting needs to be adjusted or fewer columns retrievable from `docvalue_fields` need to be selected. -[float] +[discrete] [[aggs-in-pivot]] === Aggregations in the <> clause The aggregation expression in <> will currently accept only one aggregation. 
It is thus not possible to obtain multiple aggregations for any one pivoted column. -[float] +[discrete] [[subquery-in-pivot]] === Using a subquery in <>'s `IN`-subclause diff --git a/docs/reference/sql/overview.asciidoc b/docs/reference/sql/overview.asciidoc index db71b85bec33..c59e2e42ece2 100644 --- a/docs/reference/sql/overview.asciidoc +++ b/docs/reference/sql/overview.asciidoc @@ -6,7 +6,7 @@ {es-sql} aims to provide a powerful yet lightweight SQL interface to {es}. [[sql-introduction]] -[float] +[discrete] === Introduction {es-sql} is an X-Pack component that allows SQL-like queries to be executed in real-time against {es}. @@ -15,7 +15,7 @@ _natively_ inside {es}. One can think of {es-sql} as a _translator_, one that understands both SQL and {es} and makes it easy to read and process data in real-time, at scale by leveraging {es} capabilities. [[sql-why]] -[float] +[discrete] === Why {es-sql} ? Native integration:: diff --git a/docs/reference/sql/security.asciidoc b/docs/reference/sql/security.asciidoc index 963bb463964a..e58ab490b7b8 100644 --- a/docs/reference/sql/security.asciidoc +++ b/docs/reference/sql/security.asciidoc @@ -6,7 +6,7 @@ {es-sql} integrates with security, if this is enabled on your cluster. In such a scenario, {es-sql} supports both security at the transport layer (by encrypting the communication between the consumer and the server) and authentication (for the access layer). 
-[float] +[discrete] [[ssl-tls-config]] ==== SSL/TLS configuration @@ -14,7 +14,7 @@ In case of an encrypted transport, the SSL/TLS support needs to be enabled in {e Depending on your SSL configuration (whether the certificates are signed by a CA or not, whether they are global at JVM level or just local to one application), might require setting up the `keystore` and/or `truststore`, that is where the _credentials_ are stored (`keystore` - which typically stores private keys and certificates) and how to _verify_ them (`truststore` - which typically stores certificates from third party also known as CA - certificate authorities). + Typically (and again, do note that your environment might differ significantly), if the SSL setup for {es-sql} is not already done at the JVM level, one needs to setup the keystore if the {es-sql} security requires client authentication (PKI - Public Key Infrastructure), and setup `truststore` if SSL is enabled. -[float] +[discrete] ==== Authentication The authentication support in {es-sql} is of two types: @@ -22,7 +22,7 @@ The authentication support in {es-sql} is of two types: Username/Password:: Set these through `user` and `password` properties. PKI/X.509:: Use X.509 certificates to authenticate {es-sql} to {es}. For this, one would need to setup the `keystore` containing the private key and certificate to the appropriate user (configured in {es}) and the `truststore` with the CA certificate used to sign the SSL/TLS certificates in the {es} cluster. That is, one should setup the key to authenticate {es-sql} and also to verify that is the right one. To do so, one should set the `ssl.keystore.location` and `ssl.truststore.location` properties to indicate the `keystore` and `truststore` to use. It is recommended to have these secured through a password in which case `ssl.keystore.pass` and `ssl.truststore.pass` properties are required. 
-[float] +[discrete] [[sql-security-permissions]] ==== Permissions (server-side) Lastly, on the server one needs to add a few permissions to diff --git a/docs/reference/upgrade/cluster_restart.asciidoc b/docs/reference/upgrade/cluster_restart.asciidoc index 3066d44c7101..e3e144e30c6e 100644 --- a/docs/reference/upgrade/cluster_restart.asciidoc +++ b/docs/reference/upgrade/cluster_restart.asciidoc @@ -11,7 +11,7 @@ and reindex your old indices or bring up a new {version} cluster and include::preparing_to_upgrade.asciidoc[] -[float] +[discrete] === Upgrading your cluster To perform a full cluster restart upgrade to {version}: diff --git a/docs/reference/upgrade/preparing_to_upgrade.asciidoc b/docs/reference/upgrade/preparing_to_upgrade.asciidoc index efacb6ab7593..fc2288acf466 100644 --- a/docs/reference/upgrade/preparing_to_upgrade.asciidoc +++ b/docs/reference/upgrade/preparing_to_upgrade.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] === Preparing to upgrade It is important to prepare carefully before starting an upgrade. Once you have diff --git a/docs/reference/upgrade/rolling_upgrade.asciidoc b/docs/reference/upgrade/rolling_upgrade.asciidoc index 31ad1464fe94..99b828dc695a 100644 --- a/docs/reference/upgrade/rolling_upgrade.asciidoc +++ b/docs/reference/upgrade/rolling_upgrade.asciidoc @@ -44,7 +44,7 @@ Upgrading directly to {version} from 6.6 or earlier requires a include::preparing_to_upgrade.asciidoc[] -[float] +[discrete] === Upgrading your cluster To perform a rolling upgrade to {version}: diff --git a/docs/resiliency/index.asciidoc b/docs/resiliency/index.asciidoc index 8cdd24379247..e20679be2c2d 100644 --- a/docs/resiliency/index.asciidoc +++ b/docs/resiliency/index.asciidoc @@ -40,7 +40,7 @@ are fully informed when you are architecting your system. 
== Work in Progress -[float] +[discrete] === Known Unknowns (STATUS: ONGOING) We consider this topic to be the most important in our quest for @@ -55,7 +55,7 @@ If you encounter an issue, https://github.com/elastic/elasticsearch/issues[pleas We are committed to tracking down and fixing all the issues that are posted. -[float] +[discrete] ==== Jepsen Tests The Jepsen platform is specifically designed to test distributed systems. It is not a single test and is regularly adapted @@ -63,7 +63,7 @@ to create new scenarios. We have currently ported all published Jepsen scenarios framework. As the Jepsen tests evolve, we will continue porting new scenarios that are not covered yet. We are committed to investigating all new scenarios and will report issues that we find on this page and in our GitHub repository. -[float] +[discrete] === Better request retry mechanism when nodes are disconnected (STATUS: ONGOING) If the node holding a primary shard is disconnected for whatever reason, the @@ -88,7 +88,7 @@ Further issues remain with the retry mechanism: See {GIT}9967[#9967]. (STATUS: ONGOING) -[float] +[discrete] === OOM resiliency (STATUS: ONGOING) The family of circuit breakers has greatly reduced the occurrence of OOM @@ -102,20 +102,20 @@ space. The following issues have been identified: Other safeguards are tracked in the meta-issue {GIT}11511[#11511]. -[float] +[discrete] === Relocating shards omitted by reporting infrastructure (STATUS: ONGOING) Indices stats and indices segments requests reach out to all nodes that have shards of that index. Shards that have relocated from a node while the stats request arrives will make that part of the request fail and are just ignored in the overall stats result. 
{GIT}13719[#13719] -[float] +[discrete] === Documentation of guarantees and handling of failures (STATUS: ONGOING) This status page is a start, but we can do a better job of explicitly documenting the processes at work in Elasticsearch and what happens in the case of each type of failure. The plan is to have a test case that validates each behavior under simulated conditions. Every test will document the expected results, the associated test code, and an explicit PASS or FAIL status for each simulated case. -[float] +[discrete] === Run Jepsen (STATUS: ONGOING) We have ported the known scenarios in the Jepsen blogs that check loss of acknowledged writes to our testing infrastructure. @@ -124,7 +124,7 @@ that no failures are found. == Completed -[float] +[discrete] === Documents indexed during a network partition cannot be uniquely identified (STATUS: DONE, v7.0.0) When a primary has been partitioned away from the cluster there is a short @@ -148,7 +148,7 @@ and sequence number fields, even in the presence of network partitions, and has been used to replace the `_version` field in operations that require uniquely identifying the document, such as optimistic concurrency control. -[float] +[discrete] === Replicas can fall out of sync when a primary shard fails (STATUS: DONE, v7.0.0) When a primary shard fails, a replica shard will be promoted to be the primary @@ -165,7 +165,7 @@ the discrepancies between shard copies at the document level, which allows to efficiently sync up the remaining replicas with the newly-promoted primary shard. -[float] +[discrete] === Repeated network partitions can cause cluster state updates to be lost (STATUS: DONE, v7.0.0) During a networking partition, cluster state updates (like mapping changes or @@ -192,7 +192,7 @@ sub-issues. See particularly {GIT}32171[#32171] and https://github.com/elastic/elasticsearch-formal-models/blob/master/ZenWithTerms/tla/ZenWithTerms.tla[the TLA+ formal model] used to verify these changes. 
-[float] +[discrete] === Divergence between primary and replica shard copies when documents deleted (STATUS: DONE, V6.3.0) Certain combinations of delays in performing activities related to the deletion @@ -232,7 +232,7 @@ formal model of the replica's behaviour] using TLA+. Running the TLC model checker on this model found all three issues. We then applied the proposed fixes to the model and validated that the fixed design behaved as expected. -[float] +[discrete] === Port Jepsen tests dealing with loss of acknowledged writes to our testing framework (STATUS: DONE, V5.0.0) We have increased our test coverage to include scenarios tested by Jepsen that demonstrate loss of acknowledged writes, as described in @@ -243,7 +243,7 @@ https://github.com/elastic/elasticsearch/blob/master/core/src/test/java/org/elas where the `testAckedIndexing` test was specifically added to check that we don't lose acknowledged writes in various failure scenarios. -[float] +[discrete] === Loss of documents during network partition (STATUS: DONE, v5.0.0) If a network partition separates a node from the master, there is some window of time before the node detects it. The length of the window is dependent on the type of the partition. This window is extremely small if a socket is broken. More adversarial partitions, for example, silently dropping requests without breaking the socket can take longer (up to 3x30s using current defaults). @@ -251,7 +251,7 @@ If a network partition separates a node from the master, there is some window of If the node hosts a primary shard at the moment of partition, and ends up being isolated from the cluster (which could have resulted in {GIT}2488[split-brain] before), some documents that are being indexed into the primary may be lost if they fail to reach one of the allocated replicas (due to the partition) and that replica is later promoted to primary by the master ({GIT}7572[#7572]). 
To prevent this situation, the primary needs to wait for the master to acknowledge replica shard failures before acknowledging the write to the client. {GIT}14252[#14252] -[float] +[discrete] === Safe primary relocations (STATUS: DONE, v5.0.0) When primary relocation completes, a cluster state is propagated that deactivates the old primary and marks the new primary as active. As @@ -265,7 +265,7 @@ In the reverse situation where a cluster state update that completes primary rel on the relocation target, each of the nodes believes the other to be the active primary. This leads to the issue of indexing requests chasing the primary being quickly sent back and forth between the nodes, potentially making them both go OOM. {GIT}12573[#12573] -[float] +[discrete] === Do not allow stale shards to automatically be promoted to primary (STATUS: DONE, v5.0.0) In some scenarios, after the loss of all valid copies, a stale replica shard can be automatically assigned as a primary, preferring old data @@ -275,7 +275,7 @@ this tracking information to allocate primary shards. When all shard copies are for one of the good shard copies to reappear. In case where all good copies are lost, a manual override command can be used to allocate a stale shard copy. -[float] +[discrete] === Make index creation resilient to index closing and full cluster crashes (STATUS: DONE, v5.0.0) Recovering an index requires a quorum (with an exception for 2) of shard copies to be available to allocate a primary. This means that @@ -287,7 +287,7 @@ but none of the shards have been started. If such an index was inadvertently clo shard will be allocated upon reopening the index. 
-[float] +[discrete] === Use two phase commit for Cluster State publishing (STATUS: DONE, v5.0.0) A master node in Elasticsearch continuously https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-fault-detection.html[monitors the cluster nodes] @@ -301,7 +301,7 @@ a new phase to cluster state publishing where the proposed cluster state is sent but is not yet committed. Only once enough nodes actively acknowledge the change, it is committed and commit messages are sent to the nodes. See {GIT}13062[#13062]. -[float] +[discrete] === Wait on incoming joins before electing local node as master (STATUS: DONE, v2.0.0) During master election each node pings in order to discover other nodes and validate the liveness of existing @@ -312,7 +312,7 @@ node should wait for the incoming joins from other nodes, thus validating that t nodes have sent their joins request (based on the `minimum_master_nodes` settings) the cluster state is updated. {GIT}12161[#12161] -[float] +[discrete] === Mapping changes should be applied synchronously (STATUS: DONE, v2.0.0) When introducing new fields using dynamic mapping, it is possible that the same @@ -323,12 +323,12 @@ can result in a corrupt shard. To prevent this, new fields should not be added to a shard's mapping until confirmed by the master. {GIT}8688[#8688] (STATUS: DONE) -[float] +[discrete] === Add per-segment and per-commit ID to help replication (STATUS: DONE, v2.0.0) {JIRA}5895[LUCENE-5895] adds a unique ID for each segment and each commit point. File-based replication (as performed by snapshot/restore) can use this ID to know whether the segment/commit on the source and destination machines are the same. Fixed in Lucene 5.0. 
-[float] +[discrete] === Write index metadata on data nodes where shards allocated (STATUS: DONE, v2.0.0) Today, index metadata is written only on nodes that are master-eligible, not on @@ -338,7 +338,7 @@ However, users running with a single master node are at risk of losing their index metadata if the master fails. Instead, this metadata should also be written on any node where a shard is allocated. {GIT}8823[#8823], {GIT}9952[#9952] -[float] +[discrete] === Better file distribution with multiple data paths (STATUS: DONE, v2.0.0) Today, a node configured with multiple data paths distributes writes across @@ -347,7 +347,7 @@ failure of a single disk corrupts many shards at once. Instead, by allocating an entire shard to a single data path, the extent of the damage can be limited to just the shards on that disk. {GIT}9498[#9498] -[float] +[discrete] === Lucene checksums phase 3 (STATUS: DONE, v2.0.0) Almost all files in Elasticsearch now have checksums which are validated before use. A few changes remain: @@ -357,12 +357,12 @@ Almost all files in Elasticsearch now have checksums which are validated before * {JIRA}5894[LUCENE-5894] lays the groundwork for extending more efficient checksum validation to all files during optimized bulk merges. (STATUS: DONE, Fixed in v2.0.0) * {GIT}8403[#8403] to add validation of checksums on Lucene `segments_N` files. (STATUS: DONE, v2.0.0) -[float] +[discrete] === Report shard-level statuses on write operations (STATUS: DONE, v2.0.0) Make write calls return the number of total/successful/missing shards in the same way that we do in search, which ensures transparency in the consistency of write operations. {GIT}7994[#7994]. (STATUS: DONE, v2.0.0) -[float] +[discrete] === Take filter cache key size into account (STATUS: DONE, v2.0.0) Commonly used filters are cached in Elasticsearch. That cache is limited in size @@ -380,7 +380,7 @@ This puts an effective limit on the number of entries in the cache. 
See {GIT}830 The issue has been completely solved by the move to Lucene's query cache. See {GIT}10897[#10897] -[float] +[discrete] === Ensure shard state ID is incremental (STATUS: DONE, v1.5.1) It is possible in very extreme cases during a complicated full cluster restart, @@ -389,7 +389,7 @@ Elasticsearch now ensures that the state ID always moves forwards, and throws an exception when a legacy ID is higher than the current ID. See {GIT}10316[#10316] (STATUS: DONE, v1.5.1) -[float] +[discrete] === Verification of index UUIDs (STATUS: DONE, v1.5.0) When deleting and recreating indices rapidly, it is possible that cluster state @@ -398,7 +398,7 @@ Elasticsearch now checks the index UUID to ensure that cluster state updates refer to the same index version that is present on the local node. See {GIT}9541[#9541] and {GIT}10200[#10200] (STATUS: DONE, Fixed in v1.5.0) -[float] +[discrete] === Disable recovery from known buggy versions (STATUS: DONE, v1.5.0) Corruptions have been known to occur when doing a rolling restart from older, buggy versions. @@ -406,7 +406,7 @@ Now, shards from versions before v1.4.0 are copied over in full and recovery fro before v1.3.2 are disabled entirely. See {GIT}9925[#9925] (STATUS: DONE, Fixed in v1.5.0) -[float] +[discrete] === Upgrade 3.x segments metadata on engine startup (STATUS: DONE, v1.5.0) Upgrading the metadata of old 3.x segments on node upgrade can be error prone @@ -414,7 +414,7 @@ and can result in corruption when merges are being run concurrently. Instead, Elasticsearch will now upgrade the metadata of 3.x segments before the engine starts. See {GIT}9899[#9899] (STATUS; DONE, fixed in v1.5.0) -[float] +[discrete] === Prevent setting minimum_master_nodes to more than the current node count (STATUS: DONE, v1.5.0) Setting `zen.discovery.minimum_master_nodes` to a value higher than the current node count @@ -423,7 +423,7 @@ way to fix this is to add more master-eligible nodes. 
{GIT}8321[#8321] adds a mechanism to validate settings before applying them, and {GIT}9051[#9051] extends this validation support to settings applied during a cluster restore. (STATUS: DONE, Fixed in v1.5.0) -[float] +[discrete] === Simplify and harden shard recovery and allocation (STATUS: DONE, v1.5.0) Randomized testing combined with chaotic failures has revealed corner cases @@ -440,17 +440,17 @@ these operations in order to make them more deterministic. These include: * Rapid creation and deletion of an index can cause reuse of old index metadata {GIT}9489[#9489]. (STATUS: DONE, Fixed in v1.5.0) * Flush immediately after the last concurrent recovery finishes to clear out the translog before a new recovery starts {GIT}9439[#9439]. (STATUS: DONE, Fixed in v1.5.0) -[float] +[discrete] === Prevent use of known-bad Java versions (STATUS: DONE, v1.5.0) Certain versions of the JVM are known to have bugs which can cause index corruption. {GIT}7580[#7580] prevents Elasticsearch startup if known bad versions are in use. -[float] +[discrete] === Make recovery be more resilient to partial network partitions (STATUS: DONE, v1.5.0) When a node is experiencing network issues, the master detects it and removes the node from the cluster. That causes all ongoing recoveries from and to that node to be stopped and a new location is found for the relevant shards. However, in the case of a partial network partition, where there are connectivity issues between the source and target nodes of a recovery but not between those nodes and the current master, things may go wrong. While the nodes successfully restore the connection, the ongoing recoveries may have encountered issues. In {GIT}8720[#8720], we added test simulations for these and solved several issues that were flagged by them. -[float] +[discrete] === Improving Zen Discovery (STATUS: DONE, v1.4.0.Beta1) Recovery from failure is a complicated process, especially in an asynchronous distributed system like Elasticsearch.
With several processes happening in parallel, it is important to ensure that recovery proceeds swiftly and safely. While fixing the {GIT}2488[split-brain issue], we have been hunting down corner cases that were not handled optimally, adding tests to demonstrate the issues, and working on fixes: @@ -460,7 +460,7 @@ Recovery from failure is a complicated process, especially in an asynchronous di * After joining a cluster, validate that the join was successful and that the master has been set in the local cluster state. {GIT}6969[#6969]. (STATUS: DONE, v1.4.0.Beta1) * Write additional tests that use the test infrastructure to verify proper behavior during network disconnections and garbage collections. {GIT}7082[#7082] (STATUS: DONE, v1.4.0.Beta1) -[float] +[discrete] === Lucene checksums phase 2 (STATUS: DONE, v1.4.0.Beta1) When Lucene opens a segment for reading, it validates the checksum on the smaller segment files -- those which it reads entirely into memory -- but not the large files like term frequencies and positions, as this would be very expensive. During merges, term vectors and stored fields are validated, as long as the segments being merged come from the same version of Lucene. Checksumming for term vectors and stored fields is important because merging consists of performing optimized byte copies. Term frequencies, term positions, payloads, doc values, and norms are currently not checked during merges, although Lucene provides the option to do so. These files are less prone to silent corruption as they are actively decoded during merge, and so are more likely to throw exceptions if there is any corruption. @@ -471,17 +471,17 @@ The following changes have been made: * {JIRA}5842[LUCENE-5842] validates the structure of the checksum footer of the postings lists, doc values, stored fields and term vectors when opening a new segment, to ensure that these files have not been truncated.
(STATUS: DONE, Fixed in Lucene 4.10 and v1.4.0.Beta1) * {GIT}8407[#8407] validates Lucene checksums for legacy files. (STATUS: DONE; Fixed in v1.3.6) -[float] +[discrete] === Don't allow unsupported codecs (STATUS: DONE, v1.4.0.Beta1) Lucene 4 added a number of alternative codecs for experimentation purposes, and Elasticsearch exposed the ability to change codecs. Since then, Lucene has settled on the best choice of codec and provides backwards compatibility only for the default codec. {GIT}7566[#7566] removes the ability to set alternate codecs. -[float] +[discrete] === Use checksums to identify entire segments (STATUS: DONE, v1.4.0.Beta1) A hash collision makes it possible for two different files to have the same length and the same checksum. Instead, a segment's identity should rely on checksums from all of the files in a single segment, which greatly reduces the chance of a collision. This change has been merged ({GIT}7351[#7351]). -[float] +[discrete] === Fix ''Split Brain can occur even with minimum_master_nodes'' (STATUS: DONE, v1.4.0.Beta1) Even when minimum master nodes is set, split brain can still occur under certain conditions, e.g. disconnection between master eligible nodes, which can lead to data loss. The scenario is described in detail in {GIT}2488[issue 2488]: @@ -490,19 +490,19 @@ Even when minimum master nodes is set, split brain can still occur under certain * Added tests that simulated the bug described in issue 2488. You can take a look at the https://github.com/elastic/elasticsearch/commit/7bf3ffe73c44f1208d1f7a78b0629eb48836e726[original commit] of a reproduction on master. (STATUS: DONE, v1.2.0) * The bug described in {GIT}2488[issue 2488] is caused by an issue in our zen discovery gossip protocol. This specific issue has been fixed, and work has been done to make the algorithm more resilient. 
(STATUS: DONE, v1.4.0.Beta1) -[float] +[discrete] === Translog Entry Checksum (STATUS: DONE, v1.4.0.Beta1) Each translog entry in Elasticsearch should have its own checksum, and potentially additional information, so that we can properly detect corrupted translog entries and act accordingly. You can find more detail in issue {GIT}6554[#6554]. To start, we will add checksums to the translog to detect corrupt entries. Once this work has been completed, we will add translog entry markers so that corrupt entries can be skipped in the translog if/when desired. -[float] +[discrete] === Request-Level Memory Circuit Breaker (STATUS: DONE, v1.4.0.Beta1) We are in the process of introducing multiple circuit breakers in Elasticsearch, which can “borrow” space from each other in the event that one runs out of memory. This architecture will allow limits for certain parts of memory, but still allow flexibility in the event that another reserve like field data is not being used. This change includes adding a breaker for the BigArrays internal object used for some aggregations. See issue {GIT}6739[#6739] for more details. -[float] +[discrete] === Doc Values (STATUS: DONE, v1.4.0.Beta1) Fielddata is one of the largest consumers of heap memory, and thus one of the primary reasons for running out of memory and causing node instability. Elasticsearch has had the “doc values” option for a while, which allows you to build these structures at index time so that they live on disk instead of in memory. Up until recently, doc values were significantly slower than in-memory fielddata. @@ -511,32 +511,32 @@ By benchmarking and profiling both Lucene and Elasticsearch, we identified the b See {GIT}6967[#6967], {GIT}6908[#6908], {GIT}4548[#4548], {GIT}3829[#3829], {GIT}4518[#4518], {GIT}5669[#5669], {JIRA}5748[LUCENE-5748], {JIRA}5703[LUCENE-5703], {JIRA}5750[LUCENE-5750], {JIRA}5721[LUCENE-5721], {JIRA}5799[LUCENE-5799].
-[float] +[discrete] === Index corruption when upgrading Lucene 3.x indices (STATUS: DONE, v1.4.0.Beta1) Upgrading indices created with Lucene 3.x (Elasticsearch v0.20 and before) to Lucene 4.7 - 4.9 (Elasticsearch v1.1.0 to v1.3.x) could result in index corruption. {JIRA}5907[LUCENE-5907] fixes this issue in Lucene 4.10. -[float] +[discrete] === Improve error handling when deleting files (STATUS: DONE, v1.4.0.Beta1) Lucene uses reference counting to prevent files that are still in use from being deleted. Lucene testing discovered a bug ({JIRA}5919[LUCENE-5919]) when decrementing the ref count on a batch of files. If deleting some of the files resulted in an exception (e.g. due to interference from a virus scanner), the files that had their ref counts decremented successfully could later have their ref counts decremented again, incorrectly, resulting in files being physically deleted before their time. This is fixed in Lucene 4.10. -[float] +[discrete] === Using Lucene Checksums to verify shards during snapshot/restore (STATUS: DONE, v1.3.3) The snapshot process should verify checksums for each file that is being snapshotted to make sure that the created snapshot doesn’t contain corrupted files. If a corrupted file is detected, the snapshot should fail with an error. In order to implement this feature, we need to have correct and verifiable checksums stored with segment files, which is only possible for files that were written by the officially supported append-only codecs. See {GIT}7159[#7159]. -[float] +[discrete] === Rare compression corruption during shard recovery (STATUS: DONE, v1.3.2) During recovery, the primary shard is copied over the network to become a new replica shard. In rare cases, it was possible for a hash collision to trigger a bug in the compression library, producing corruption in the replica shard. This bug was exposed by the change to validate checksums during recovery.
We tracked down the bug in the compression library and submitted a patch, which was accepted and merged by the upstream project. See {GIT}7210[#7210]. -[float] +[discrete] === Safer recovery of replica shards (STATUS: DONE, v1.3.0) If a primary shard fails or is closed while a replica is using it for recovery, we need to ensure that the replica is properly failed as well, and allow recovery to start from the new primary. Also check that an active copy of a shard is available on another node before physically removing an inactive shard from disk. {GIT}6825[#6825], {GIT}6645[#6645], {GIT}6995[#6995]. -[float] +[discrete] === Using Lucene Checksums to verify shards during recovery (STATUS: DONE, v1.3.0) Elasticsearch can use Lucene checksums to validate files while {GIT}6776[recovering a replica shard from a primary]. @@ -545,17 +545,17 @@ This issue exposed a bug in Elasticsearch’s handling of primary shard failure In order to verify the checksumming mechanism, we added functionality to our testing infrastructure that can corrupt an arbitrary index file at any point, such as while it’s traveling over the wire or residing on disk. The tests utilizing this feature expect full or partial recovery from the failure while neither losing data nor spreading the corruption. -[float] +[discrete] === Detect File Corruption (STATUS: DONE, v1.3.0) When a corrupted index is detected during merging or refresh, Elasticsearch will fail the shard if a checksum failure is detected. You can read the full details in pull request {GIT}6776[#6776]. -[float] +[discrete] === Network disconnect events could be lost, causing a zombie node to stay in the cluster state (STATUS: DONE, v1.3.0) Previously, there was a very short window in which we could lose a node disconnect event. To prevent this from occurring, we added extra handling of connection errors to our nodes & master fault detection pinging to make sure the node disconnect event is detected. See issue {GIT}6686[#6686].
-[float] +[discrete] === Other fixes to Lucene to address resiliency (STATUS: DONE, v1.3.0) * NativeLock is released if Lock is closed after failing on obtain {JIRA}5738[LUCENE-5738]. @@ -563,7 +563,7 @@ Previously, there was a very short window in which we could lose a node disconne * FSDirectory’s fsync() is lenient, now throws exceptions when errors occur {JIRA}5570[LUCENE-5570] * fsync() directory when committing {JIRA}5588[LUCENE-5588] -[float] +[discrete] === Backwards Compatibility Testing (STATUS: DONE, v1.3.0) Since founding Elasticsearch Inc, we grew our test base from ~1k tests to about 4k in just over a year. We invested massively in our testing infrastructure, running our tests continuously on different operating systems, bare metal hardware and cloud environments, all while randomizing JVMs and their settings. @@ -578,24 +578,24 @@ The work on our testing infrastructure is more than just issue prevention, it al You can read more about backwards compatibility tests in issue {GIT}6497[#6497]. -[float] +[discrete] === Full Translog Writes on all Platforms (STATUS: DONE, v1.2.2 and v1.3.0) We have recently received bug reports of transaction log corruption that can occur when indexing very large documents (in the area of 300 KB). Although some Linux users reported this behavior, it appears the problem occurs more frequently when running on Windows. We traced the source of the problem to the fact that when serializing documents to the transaction log, the operating system can actually write only part of the document before returning from the write call. We can now detect this situation and make sure that the entire document is properly written. You can read the full details in pull request {GIT}6576[#6576]. -[float] +[discrete] === Lucene Checksums (STATUS: DONE, v1.2.0) Before Apache Lucene version 4.8, checksums were not computed on generated index files.
The result was that it was difficult to identify when or if a Lucene index got corrupted, whether by hardware failure, a JVM bug, or an entirely different reason. For an idea of the checksum efforts in progress in Apache Lucene, see issues {JIRA}2446[LUCENE-2446], {JIRA}5580[LUCENE-5580] and {JIRA}5602[LUCENE-5602]. The gist is that Lucene 4.8+ now computes full checksums on all index files and it verifies them when opening metadata or other smaller files as well as other files during merges. -[float] +[discrete] === Detect errors faster by locally failing a shard upon an indexing error (STATUS: DONE, v1.2.0) Previously, Elasticsearch notified the master of the shard failure and waited for the master to close the local copy of the shard, thus assigning it to other nodes. This architecture caused delays in failure detection, potentially causing unneeded failures of other incoming requests. In rare cases, such as race conditions or certain network partition configurations, we could lose these failure notifications. We solved this issue by locally failing shards upon indexing errors. See issue {GIT}5847[#5847]. -[float] +[discrete] === Snapshot/Restore API (STATUS: DONE, v1.0.0) In Elasticsearch version 1.0, we significantly improved the backup process by introducing the Snapshot/Restore API. While it was always possible to make backups of Elasticsearch, the Snapshot/Restore API made the backup process much easier. @@ -606,19 +606,19 @@ Since that first release in version 1.0, the API has continued to evolve. In ver The Snapshot/Restore API supports a number of different repository types for storing backups. Currently, it’s possible to make backups to a shared file system, Amazon S3, HDFS, and Azure storage. We are continuing to work on adding other types of storage systems, as well as improving the robustness of the snapshot/restore process.
-[float] +[discrete] === Circuit Breaker: Fielddata (STATUS: DONE, v1.0.0) Currently, the circuit breaker protects against loading too much field data by estimating how much memory the field data will take to load, then aborting the request if the memory requirements are too high. This feature was added in Elasticsearch version 1.0.0. -[float] +[discrete] === Use of Paginated Data Structures to Ease Garbage Collection (STATUS: DONE, v1.0.0 & v1.2.0) Elasticsearch has moved from an object-based cache to a page-based cache recycler as described in issue {GIT}4557[#4557]. This change makes garbage collection easier by limiting fragmentation, since all pages have the same size and are recycled. It also allows managing the size of the cache not based on the number of objects it contains, but on the memory that it uses. These pages are used for two main purposes: implementing higher level data structures such as hash tables that are used internally by aggregations to e.g. map terms to counts, as well as reusing memory in the translog/transport layer as detailed in issue {GIT}5691[#5691]. -[float] +[discrete] === Dedicated Master Nodes Resiliency (STATUS: DONE, v1.0.0) In order to run a more resilient cluster, we recommend running dedicated master nodes to ensure master nodes are not affected by resources consumed by data nodes. We also have made master nodes more resilient to heavy resource usage, mainly associated with large clusters / cluster states. @@ -630,12 +630,12 @@ These changes include: * Improve master handling of large scale mapping updates from data nodes by batching them into a single cluster event. (See issue {GIT}4373[#4373].) * Add an ack mechanism where next phase cluster updates are processed only when nodes acknowledged they received the previous cluster state. (See issues {GIT}3736[#3736], {GIT}3786[#3786], {GIT}4114[#4114], {GIT}4169[#4169], {GIT}4228[#4228] and {GIT}4421[#4421], which also include enhancements to the ack mechanism implementation.) 
-[float] +[discrete] === Multi Data Paths May Falsely Report Corrupt Index (STATUS: DONE, v1.0.0) When using multiple data paths, an index could be falsely reported as corrupted. This has been fixed with pull request {GIT}4674[#4674]. -[float] +[discrete] === Randomized Testing (STATUS: DONE, v1.0.0) In order to best validate for resiliency in Elasticsearch, we rewrote the Elasticsearch test infrastructure to introduce the concept of http://berlinbuzzwords.de/sites/berlinbuzzwords.de/files/media/documents/dawidweiss-randomizedtesting-pub.pdf[randomized testing]. Randomized testing allows us to easily enhance the Elasticsearch testing infrastructure with predictably irrational conditions, making the resulting code base more resilient. @@ -644,7 +644,7 @@ Each of our integration tests runs against a cluster with a random number of nod At Elasticsearch, we live the philosophy that we can miss a bug once, but never a second time. We make our tests more evil as we go, introducing randomness in all the areas where we discovered bugs. We figure if our tests don’t fail, we are not trying hard enough! If you are interested in how we have evolved our test infrastructure over time, check out https://github.com/elastic/elasticsearch/issues?q=label%3Atest[issues labeled with ``test'' on GitHub]. -[float] +[discrete] === Lucene Loses Data On File Descriptors Failure (STATUS: DONE, v0.90.0) When a process runs out of file descriptors, Lucene can cause an index to be completely deleted. This issue was fixed in Lucene ({JIRA}4870[version 4.2.1]) and in an early version of Elasticsearch. See issue {GIT}2812[#2812]. diff --git a/x-pack/docs/en/rest-api/security.asciidoc b/x-pack/docs/en/rest-api/security.asciidoc index 5a81fc8d2c7b..3cc29f9c70ce 100644 --- a/x-pack/docs/en/rest-api/security.asciidoc +++ b/x-pack/docs/en/rest-api/security.asciidoc @@ -11,7 +11,7 @@ You can use the following APIs to perform security activities.
* <> * <> -[float] +[discrete] [[security-api-app-privileges]] === Application privileges @@ -23,7 +23,7 @@ privileges: * <> * <> -[float] +[discrete] [[security-role-mapping-apis]] === Role mappings @@ -33,7 +33,7 @@ You can use the following APIs to add, remove, update, and retrieve role mapping * <> * <> -[float] +[discrete] [[security-role-apis]] === Roles @@ -44,7 +44,7 @@ You can use the following APIs to add, remove, update, and retrieve roles in the * <> * <> -[float] +[discrete] [[security-token-apis]] === Tokens @@ -54,7 +54,7 @@ without requiring basic authentication: * <> * <> -[float] +[discrete] [[security-api-keys]] === API Keys @@ -65,7 +65,7 @@ without requiring basic authentication: * <> * <> -[float] +[discrete] [[security-user-apis]] === Users @@ -79,7 +79,7 @@ native realm: * <> * <> -[float] +[discrete] [[security-openid-apis]] === OpenID Connect @@ -90,7 +90,7 @@ authentication realm when using a custom web application other than Kibana * <> * <> -[float] +[discrete] [[security-saml-apis]] === SAML diff --git a/x-pack/docs/en/rest-api/security/role-mapping-resources.asciidoc b/x-pack/docs/en/rest-api/security/role-mapping-resources.asciidoc index a6a6fd7a90e7..be3454d3e3ed 100644 --- a/x-pack/docs/en/rest-api/security/role-mapping-resources.asciidoc +++ b/x-pack/docs/en/rest-api/security/role-mapping-resources.asciidoc @@ -31,7 +31,7 @@ A rule is a logical condition that is expressed by using a JSON DSL. The DSL sup its child is `false`, the `except` is `true`. -[float] +[discrete] [[mapping-roles-rule-field]] ==== Field rules @@ -57,7 +57,7 @@ The value specified in the field rule can be one of the following types: If _any_ of the elements match, the match is successful.
| ["admin", "operator"] |======================= -[float] +[discrete] ===== User fields The _user object_ against which rules are evaluated has the following fields: diff --git a/x-pack/docs/en/security/auditing/event-types.asciidoc b/x-pack/docs/en/security/auditing/event-types.asciidoc index 198ff53ac91a..627abf5b9387 100644 --- a/x-pack/docs/en/security/auditing/event-types.asciidoc +++ b/x-pack/docs/en/security/auditing/event-types.asciidoc @@ -41,7 +41,7 @@ The following is a list of the events that can be generated: profile. |====== -[float] +[discrete] [[audit-event-attributes]] === Audit event attributes @@ -235,7 +235,7 @@ that have been previously described: This attribute is only provided for authentication using an API key. -[float] +[discrete] [[audit-event-attributes-deprecated-formats]] === Audit event attributes for the deprecated formats diff --git a/x-pack/docs/en/security/auditing/output-logfile.asciidoc b/x-pack/docs/en/security/auditing/output-logfile.asciidoc index cfadb1a4cf84..2179af6e389c 100644 --- a/x-pack/docs/en/security/auditing/output-logfile.asciidoc +++ b/x-pack/docs/en/security/auditing/output-logfile.asciidoc @@ -38,7 +38,7 @@ any of the audit trails, audit events are forwarded to the root appender, which by default points to the `elasticsearch.log` file. -[float] +[discrete] [[audit-log-entry-format]] === Log entry format @@ -56,7 +56,7 @@ The log entries in the `_audit.json` file have the following format There is a list of <> specifying the set of fields for each log entry type. -[float] +[discrete] [[deprecated-audit-log-entry-format]] === Deprecated log entry format @@ -84,7 +84,7 @@ The log entries in the `_access.log` file have the following format Audit Entry Attributes>> for the attributes that can be included for each type of event. -[float] +[discrete] [[audit-log-settings]] === Logfile output settings @@ -108,7 +108,7 @@ of information that is in place to assure backwards compatibility.
If you are not strict about the audit format, it is strongly recommended to use only the `_audit.json` log appender. -[float] +[discrete] [[audit-log-ignore-policy]] === Logfile audit events ignore policies diff --git a/x-pack/docs/en/security/authentication/built-in-users.asciidoc b/x-pack/docs/en/security/authentication/built-in-users.asciidoc index 5f5a68b8651f..d32ee4448a95 100644 --- a/x-pack/docs/en/security/authentication/built-in-users.asciidoc +++ b/x-pack/docs/en/security/authentication/built-in-users.asciidoc @@ -21,7 +21,7 @@ use. In particular, do not use the `elastic` superuser unless full access to the cluster is required. Instead, create users that have the minimum necessary roles or privileges for their activities. -[float] +[discrete] [[built-in-user-explanation]] ==== How the built-in users work These built-in users are stored in a special `.security` index, which is managed @@ -36,7 +36,7 @@ realm will not have any effect on the built-in users. The built-in users can be disabled individually, using the <>. -[float] +[discrete] [[bootstrap-elastic-passwords]] ==== The Elastic bootstrap password @@ -55,7 +55,7 @@ NOTE: After you < Users* page in {kib} or the <> to set a password for these users. -[float] +[discrete] [[add-built-in-user-apm]] ==== Adding built-in user passwords to APM @@ -196,7 +196,7 @@ then you should use the *Management > Users* page in {kib} or the <> to set a password for these users.
-[float] +[discrete] [[disabling-default-password]] ==== Disabling default password functionality [IMPORTANT] diff --git a/x-pack/docs/en/security/authorization/alias-privileges.asciidoc b/x-pack/docs/en/security/authorization/alias-privileges.asciidoc index 561d065fec4f..f1163aa2224f 100644 --- a/x-pack/docs/en/security/authorization/alias-privileges.asciidoc +++ b/x-pack/docs/en/security/authorization/alias-privileges.asciidoc @@ -138,7 +138,7 @@ would be as follows: -------------------------------------------------- // NOTCONSOLE -[float] +[discrete] ==== Managing aliases Unlike creating indices, which requires the `create_index` privilege, adding, @@ -185,7 +185,7 @@ The privileges required for such a request are the same as above. Both index and alias need the `manage` permission. -[float] +[discrete] ==== Filtered aliases Aliases can hold a filter, which allows you to select a subset of documents that can diff --git a/x-pack/docs/en/security/authorization/managing-roles.asciidoc b/x-pack/docs/en/security/authorization/managing-roles.asciidoc index 7a9a7a9eeed6..8e101a5e2f61 100644 --- a/x-pack/docs/en/security/authorization/managing-roles.asciidoc +++ b/x-pack/docs/en/security/authorization/managing-roles.asciidoc @@ -190,14 +190,14 @@ custom roles providers. If you need to integrate with another system to retrieve user roles, you can build a custom roles provider plugin. For more information, see <>. -[float] +[discrete] [[roles-management-ui]] === Role management UI You can manage users and roles easily in {kib}. To manage roles, log in to {kib} and go to *Management / Security / Roles*. -[float] +[discrete] [[roles-management-api]] === Role management API @@ -206,7 +206,7 @@ dynamically. When you use the APIs to manage roles in the `native` realm, the roles are stored in an internal {es} index. For more information and examples, see <>.
-[float] +[discrete] [[roles-management-file]] === File-based role management diff --git a/x-pack/docs/en/security/authorization/mapping-roles.asciidoc b/x-pack/docs/en/security/authorization/mapping-roles.asciidoc index e0c71c9f9f54..9aed323a4133 100644 --- a/x-pack/docs/en/security/authorization/mapping-roles.asciidoc +++ b/x-pack/docs/en/security/authorization/mapping-roles.asciidoc @@ -85,7 +85,7 @@ IMPORTANT: You cannot view, edit, or remove any roles that are defined in the role mapping files by using the role mapping APIs. ==== Realm specific details -[float] +[discrete] [[ldap-role-mapping]] ===== Active Directory and LDAP realms @@ -138,7 +138,7 @@ PUT /_security/role_mapping/basic_users } -------------------------------------------------- -[float] +[discrete] [[pki-role-mapping]] ===== PKI realms diff --git a/x-pack/docs/en/security/authorization/overview.asciidoc b/x-pack/docs/en/security/authorization/overview.asciidoc index 97ad0ead69f8..b4ad962e7554 100644 --- a/x-pack/docs/en/security/authorization/overview.asciidoc +++ b/x-pack/docs/en/security/authorization/overview.asciidoc @@ -9,7 +9,7 @@ This process takes place after the user is successfully identified and <>. [[roles]] -[float] +[discrete] === Role-based access control The {security-features} provide a role-based access control (RBAC) mechanism, @@ -61,7 +61,7 @@ The method for assigning roles to users varies depending on which realms you use to authenticate users. For more information, see <>.
[[attributes]] -[float] +[discrete] === Attribute-based access control The {security-features} also provide an attribute-based access control (ABAC) diff --git a/x-pack/docs/en/security/fips-140-compliance.asciidoc b/x-pack/docs/en/security/fips-140-compliance.asciidoc index bfc3fb472558..13c7e790f032 100644 --- a/x-pack/docs/en/security/fips-140-compliance.asciidoc +++ b/x-pack/docs/en/security/fips-140-compliance.asciidoc @@ -18,7 +18,7 @@ For {es}, adherence to FIPS 140-2 is ensured by - Allowing the configuration of {es} in a FIPS 140-2 compliant manner, as documented below. -[float] +[discrete] === Upgrade considerations If you plan to upgrade your existing cluster to a version that can be run in @@ -40,14 +40,14 @@ regenerate your `elasticsearch.keystore` and migrate all secure settings to it, in addition to the necessary configuration changes outlined below, before starting each node. -[float] +[discrete] === Configuring {es} for FIPS 140-2 Apart from setting `xpack.security.fips_mode.enabled`, a number of security related settings need to be configured accordingly in order to be compliant and able to run {es} successfully in a FIPS 140-2 enabled JVM. -[float] +[discrete] ==== TLS SSLv2 and SSLv3 are not allowed by FIPS 140-2, so `SSLv2Hello` and `SSLv3` cannot @@ -58,7 +58,7 @@ NOTE: The use of TLS ciphers is mainly governed by the relevant crypto module are configured by default in {es} are FIPS 140-2 compliant and as such can be used in a FIPS 140-2 JVM. See <>. 
-[float] +[discrete] ==== TLS Keystores and keys Keystores can be used in a number of <> in order to @@ -84,7 +84,7 @@ keys must have corresponding length according to the following table: | `AES-256` | 15630 | 512+ |======================= -[float] +[discrete] ==== Password Hashing {es} offers a number of algorithms for securely hashing credentials in memory and @@ -107,7 +107,7 @@ The user cache will be emptied upon node restart, so any existing hashes using non-compliant algorithms will be discarded and the new ones will be created using the compliant `PBKDF2` algorithm you have selected. -[float] +[discrete] === Limitations Due to the limitations that FIPS 140-2 compliance enforces, a small number of diff --git a/x-pack/docs/en/security/get-started-security.asciidoc b/x-pack/docs/en/security/get-started-security.asciidoc index f3b1811d5dcd..384db33e2af4 100644 --- a/x-pack/docs/en/security/get-started-security.asciidoc +++ b/x-pack/docs/en/security/get-started-security.asciidoc @@ -6,7 +6,7 @@ In this tutorial, you learn how to secure a cluster by configuring users and roles in {es}, {kib}, {ls}, and {metricbeat}. -[float] +[discrete] [[get-started-security-prerequisites]] === Before you begin @@ -345,7 +345,7 @@ These roles enable the user to see the system metrics in {kib} (for example, on the *Discover* page or in the http://localhost:5601/app/kibana#/dashboard/Metricbeat-system-overview[{metricbeat} system overview dashboard]). -[float] +[discrete] [[gs-security-nextsteps]] === What's next? 
diff --git a/x-pack/docs/en/security/limitations.asciidoc b/x-pack/docs/en/security/limitations.asciidoc index da3f0acf4340..ae9aa6679faa 100644 --- a/x-pack/docs/en/security/limitations.asciidoc +++ b/x-pack/docs/en/security/limitations.asciidoc @@ -6,7 +6,7 @@ Limitations ++++ -[float] +[discrete] === Plugins Elasticsearch's plugin infrastructure is extremely flexible in terms of what can @@ -17,21 +17,21 @@ source or not) and therefore we cannot guarantee their compliance with {stack-security-features}. For this reason, third-party plugins are not officially supported on clusters with {security-features} enabled. -[float] +[discrete] === Changes in wildcard behavior Elasticsearch clusters with the {security-features} enabled apply the `/_all` wildcard, and all other wildcards, to the data streams, indices, and index aliases that the current user has privileges for, not all data streams, indices, and index aliases on the cluster. -[float] +[discrete] === Multi document APIs The multi get and multi term vectors APIs throw an IndexNotFoundException when trying to access non-existing indices that the user is not authorized for. By doing that, they leak information regarding the fact that the data stream or index doesn't exist, while the user is not authorized to know anything about those data streams or indices. -[float] +[discrete] === Filtered index aliases Aliases containing filters are not a secure way to restrict access to individual @@ -41,7 +41,7 @@ The {stack-security-features} provide a secure way to restrict access to documents through the <> feature. -[float] +[discrete] === Field and document level security limitations When a user's role enables document or field level security for a data stream or index: @@ -71,7 +71,7 @@ When a user's role enables document level security for a data stream or index: the specified suggesters are ignored. * A search request cannot be profiled if document level security is enabled.
-[float] +[discrete] [[alias-limitations]] === Index and field names can be leaked when using aliases @@ -83,7 +83,7 @@ index name and mappings for each index that the alias applies to. Until this limitation is addressed, avoid index and field names that contain confidential or sensitive information. -[float] +[discrete] === LDAP realm The <> does not currently support the discovery of nested diff --git a/x-pack/docs/en/security/overview.asciidoc b/x-pack/docs/en/security/overview.asciidoc index 22948d15acc8..2f41afc4b9b8 100644 --- a/x-pack/docs/en/security/overview.asciidoc +++ b/x-pack/docs/en/security/overview.asciidoc @@ -19,7 +19,7 @@ Security protects {es} clusters by: * <> so you know who's doing what to your cluster and the data it stores. -[float] +[discrete] [[preventing-unauthorized-access]] === Preventing unauthorized access @@ -48,7 +48,7 @@ can connect to the cluster based on <>. You can block and allow specific IP addresses, subnets, or DNS domains to control network-level access to a cluster. -[float] +[discrete] [[preserving-data-integrity]] === Preserving data integrity @@ -60,7 +60,7 @@ data by encrypting communications to, from, and within the cluster. See <>. For even greater protection, you can increase the <>. -[float] +[discrete] [[maintaining-audit-trail]] === Maintaining an audit trail diff --git a/x-pack/docs/en/security/securing-communications/configuring-tls-docker.asciidoc b/x-pack/docs/en/security/securing-communications/configuring-tls-docker.asciidoc index ac4cbcc40053..2fa1e9dea21f 100644 --- a/x-pack/docs/en/security/securing-communications/configuring-tls-docker.asciidoc +++ b/x-pack/docs/en/security/securing-communications/configuring-tls-docker.asciidoc @@ -13,7 +13,7 @@ For further details, see <> and https://www.elastic.co/subscriptions[available subscriptions]. -[float] +[discrete] ==== Prepare the environment <>. 
@@ -169,7 +169,7 @@ volumes: {"data01", "data02", "certs"} creating self-signed certificates without having to pin specific internal IP addresses. endif::[] -[float] +[discrete] ==== Run the example . Generate the certificates (only needed once): + @@ -209,7 +209,7 @@ auto --batch \ ---- -- -[float] +[discrete] ==== Tear everything down To remove all the Docker resources created by the example, issue: -- diff --git a/x-pack/docs/en/security/securing-communications/tutorial-tls-addnodes.asciidoc b/x-pack/docs/en/security/securing-communications/tutorial-tls-addnodes.asciidoc index a9f62a762f6a..256c880dff56 100644 --- a/x-pack/docs/en/security/securing-communications/tutorial-tls-addnodes.asciidoc +++ b/x-pack/docs/en/security/securing-communications/tutorial-tls-addnodes.asciidoc @@ -176,7 +176,7 @@ cluster in multiple primary and replica shards. For more information about the concepts of clusters, nodes, and shards, see <>. -[float] +[discrete] [[encrypting-internode-nextsteps]] === What's next? diff --git a/x-pack/docs/en/security/securing-communications/tutorial-tls-intro.asciidoc b/x-pack/docs/en/security/securing-communications/tutorial-tls-intro.asciidoc index e879fb0559ed..5575a427a0e5 100644 --- a/x-pack/docs/en/security/securing-communications/tutorial-tls-intro.asciidoc +++ b/x-pack/docs/en/security/securing-communications/tutorial-tls-intro.asciidoc @@ -23,7 +23,7 @@ requirements that are not covered in this tutorial. See <> and <>. -[float] +[discrete] [[encrypting-internode-prerequisites]] === Before you begin diff --git a/x-pack/docs/en/security/using-ip-filtering.asciidoc b/x-pack/docs/en/security/using-ip-filtering.asciidoc index 44337c4cb941..5f6440b22f60 100644 --- a/x-pack/docs/en/security/using-ip-filtering.asciidoc +++ b/x-pack/docs/en/security/using-ip-filtering.asciidoc @@ -13,7 +13,7 @@ NOTE: Elasticsearch installations are not designed to be publicly accessible over the Internet. 
IP Filtering and the other capabilities of the {es} {security-features} do not change this condition. -[float] +[discrete] === Enabling IP filtering The {es} {security-features} contain an access control feature that allows or @@ -54,7 +54,7 @@ xpack.security.transport.filter.allow: localhost xpack.security.transport.filter.deny: '*.google.com' -------------------------------------------------- -[float] +[discrete] === Disabling IP Filtering Disabling IP filtering can slightly improve performance under some conditions. @@ -75,7 +75,7 @@ xpack.security.transport.filter.enabled: false xpack.security.http.filter.enabled: true -------------------------------------------------- -[float] +[discrete] === Specifying TCP transport profiles <> @@ -92,7 +92,7 @@ transport.profiles.client.xpack.security.filter.deny: _all NOTE: When you do not specify a profile, `default` is used automatically. -[float] +[discrete] === HTTP filtering You may want to have different IP filtering for the transport and HTTP protocols. @@ -105,7 +105,7 @@ xpack.security.http.filter.allow: 172.16.0.0/16 xpack.security.http.filter.deny: _all -------------------------------------------------- -[float] +[discrete] [[dynamic-ip-filtering]] ==== Dynamically updating IP filter settings diff --git a/x-pack/docs/en/watcher/actions.asciidoc b/x-pack/docs/en/watcher/actions.asciidoc index 6e0a8c6efb37..589d87c1cd26 100644 --- a/x-pack/docs/en/watcher/actions.asciidoc +++ b/x-pack/docs/en/watcher/actions.asciidoc @@ -20,7 +20,7 @@ serve as a model for a templated email body. <>, <>, and <>. 
-[float] +[discrete] [[actions-ack-throttle]] === Acknowledgement and throttling @@ -316,7 +316,7 @@ include::actions/pagerduty.asciidoc[] include::actions/jira.asciidoc[] -[float] +[discrete] [[actions-ssl-openjdk]] === Using SSL/TLS with OpenJDK diff --git a/x-pack/docs/en/watcher/actions/email.asciidoc b/x-pack/docs/en/watcher/actions/email.asciidoc index 79cf1c9017bf..4c78b9fb5247 100644 --- a/x-pack/docs/en/watcher/actions/email.asciidoc +++ b/x-pack/docs/en/watcher/actions/email.asciidoc @@ -291,7 +291,7 @@ xpack.notification.email: ... -------------------------------------------------- -[float] +[discrete] [[gmail]] ===== Sending email from Gmail @@ -329,7 +329,7 @@ a unique App Password to send email from {watcher}. See https://support.google.com/accounts/answer/185833?hl=en[Sign in using App Passwords] for more information. -[float] +[discrete] [[outlook]] ===== Sending email from Outlook.com @@ -365,7 +365,7 @@ NOTE: You need to use a unique App Password if two-step verification is enable See http://windows.microsoft.com/en-us/windows/app-passwords-two-step-verification[App passwords and two-step verification] for more information. -[float] +[discrete] [[amazon-ses]] ===== Sending email from Amazon SES (Simple Email Service) @@ -402,7 +402,7 @@ NOTE: You need to use your Amazon SES SMTP credentials to send email through or https://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-domains.html[your whole domain] at AWS. 
-[float] +[discrete] [[exchange]] ===== Sending email from Microsoft Exchange @@ -437,7 +437,7 @@ To store the account SMTP password, use the keystore command bin/elasticsearch-keystore add xpack.notification.email.account.exchange_account.smtp.secure_password -------------------------------------------------- -[float] +[discrete] [[email-html-sanitization]] ===== Configuring HTML sanitization options diff --git a/x-pack/docs/en/watcher/example-watches/example-watch-clusterstatus.asciidoc b/x-pack/docs/en/watcher/example-watches/example-watch-clusterstatus.asciidoc index 656e4dbd7aae..edfd7cdb486d 100644 --- a/x-pack/docs/en/watcher/example-watches/example-watch-clusterstatus.asciidoc +++ b/x-pack/docs/en/watcher/example-watches/example-watch-clusterstatus.asciidoc @@ -13,7 +13,7 @@ Elasticsearch cluster: * <> if the cluster is RED. -[float] +[discrete] [[health-add-input]] ==== Schedule the watch and add an input @@ -122,7 +122,7 @@ GET .watcher-history*/_search -------------------------------------------------- // TEST[continued] -[float] +[discrete] [[health-add-condition]] ==== Add a condition @@ -176,7 +176,7 @@ GET .watcher-history*/_search?pretty ------------------------------------------------------ // TEST[continued] -[float] +[discrete] [[health-take-action]] ==== Take action @@ -266,7 +266,7 @@ GET .watcher-history*/_search?pretty ------------------------------------------------------- // TEST[continued] -[float] +[discrete] [[health-delete]] ==== Delete the watch diff --git a/x-pack/docs/en/watcher/getting-started.asciidoc b/x-pack/docs/en/watcher/getting-started.asciidoc index 4e88bbe6a433..eb81b1b1b05b 100644 --- a/x-pack/docs/en/watcher/getting-started.asciidoc +++ b/x-pack/docs/en/watcher/getting-started.asciidoc @@ -16,7 +16,7 @@ needs to be sent. * <> to send an alert when the condition is met. 
-[float] +[discrete] [[log-add-input]] === Schedule the watch and define an input @@ -73,7 +73,7 @@ GET .watcher-history*/_search?pretty ------------------------------------------------------------ // TEST[continued] -[float] +[discrete] [[log-add-condition]] === Add a condition @@ -148,7 +148,7 @@ GET .watcher-history*/_search?pretty -------------------------------------------------- // TEST[continued] -[float] +[discrete] [[log-take-action]] === Configure an action @@ -192,7 +192,7 @@ PUT _watcher/watch/log_error_watch } -------------------------------------------------- -[float] +[discrete] [[log-delete]] === Delete the Watch @@ -209,7 +209,7 @@ DELETE _watcher/watch/log_error_watch -------------------------------------------------- // TEST[continued] -[float] +[discrete] [[required-security-privileges]] === Required security privileges To enable users to create and manipulate watches, assign them the `watcher_admin` @@ -220,7 +220,7 @@ To allow users to view watches and the watch history, assign them the `watcher_u security role. Watcher users cannot create or manipulate watches; they are only allowed to execute read-only watch operations. -[float] +[discrete] [[next-steps]] === Where to go next diff --git a/x-pack/docs/en/watcher/gs-index.asciidoc b/x-pack/docs/en/watcher/gs-index.asciidoc index 1ddfd3f0838a..0e9163b571fc 100644 --- a/x-pack/docs/en/watcher/gs-index.asciidoc +++ b/x-pack/docs/en/watcher/gs-index.asciidoc @@ -37,7 +37,7 @@ All of these use-cases share a few key properties: * One or more actions are taken if the condition is true -- an email is sent, a 3rd party system is notified, or the query results are stored. -[float] +[discrete] === How watches work The {alert-features} provide an API for creating, managing and testing _watches_. 
diff --git a/x-pack/docs/en/watcher/how-watcher-works.asciidoc b/x-pack/docs/en/watcher/how-watcher-works.asciidoc index 1f733e694fb7..6ad441347839 100644 --- a/x-pack/docs/en/watcher/how-watcher-works.asciidoc +++ b/x-pack/docs/en/watcher/how-watcher-works.asciidoc @@ -14,7 +14,7 @@ search in the logs data indicates that there are too many 503 errors in the last This topic describes the elements of a watch and how watches operate. -[float] +[discrete] [[watch-definition]] === Watch definition @@ -129,7 +129,7 @@ PUT _watcher/watch/log_errors The watch payload that contains the errors is attached to the email. -[float] +[discrete] [[watch-execution]] === Watch execution @@ -196,7 +196,7 @@ The following diagram shows the watch execution process: image::images/watch-execution.jpg[align="center"] -[float] +[discrete] [[watch-acknowledgment-throttling]] === Watch acknowledgment and throttling @@ -216,7 +216,7 @@ the watch actions normally. For more information, see <>. -[float] +[discrete] [[watch-active-state]] === Watch active state @@ -245,7 +245,7 @@ reporting availability issues during the maintenance window. Deactivating a watch also enables you to keep it around for future use without deleting it from the system. -[float] +[discrete] [[scripts-templates]] === Scripts and templates @@ -261,7 +261,7 @@ and cached by Elasticsearch to optimize recurring execution. Autoloading is also supported. For more information, see <> and <>. -[float] +[discrete] [[watch-execution-context]] ==== Watch execution context @@ -295,7 +295,7 @@ The following snippet shows the basic structure of the _Watch Execution Context_ (i.e they're not persisted and can't be used between different executions of the same watch) -[float] +[discrete] [[scripts]] ==== Using scripts @@ -320,7 +320,7 @@ For example, if the watch metadata contains a `color` field condition or transform definition (e.g. `"params" : {"color": "red"}`), you can access its value via the `color` variable. 
-[float] +[discrete] [[templates]] ==== Using templates @@ -350,7 +350,7 @@ in sent emails: ---------------------------------------------------------------------- // NOTCONSOLE -[float] +[discrete] [[inline-templates-scripts]] ===== Inline templates and scripts @@ -412,7 +412,7 @@ The formal object definition for a script would be: ---------------------------------------------------------------------- // NOTCONSOLE -[float] +[discrete] [[stored-templates-scripts]] ===== Stored templates and scripts diff --git a/x-pack/docs/en/watcher/index.asciidoc b/x-pack/docs/en/watcher/index.asciidoc index d32ae700bdc4..3da0252bba3e 100644 --- a/x-pack/docs/en/watcher/index.asciidoc +++ b/x-pack/docs/en/watcher/index.asciidoc @@ -38,7 +38,7 @@ All of these use-cases share a few key properties: * One or more actions are taken if the condition is true -- an email is sent, a 3rd party system is notified, or the query results are stored. -[float] +[discrete] === How watches work The {alert-features} provide an API for creating, managing and testing _watches_. 
diff --git a/x-pack/docs/en/watcher/java/ack-watch.asciidoc b/x-pack/docs/en/watcher/java/ack-watch.asciidoc index 7cef48d6e337..2779aa5a589b 100644 --- a/x-pack/docs/en/watcher/java/ack-watch.asciidoc +++ b/x-pack/docs/en/watcher/java/ack-watch.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[api-java-ack-watch]] === Ack watch API diff --git a/x-pack/docs/en/watcher/java/activate-watch.asciidoc b/x-pack/docs/en/watcher/java/activate-watch.asciidoc index 96ea3f5e23d8..f0062c56472f 100644 --- a/x-pack/docs/en/watcher/java/activate-watch.asciidoc +++ b/x-pack/docs/en/watcher/java/activate-watch.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[api-java-activate-watch]] === Activate watch API diff --git a/x-pack/docs/en/watcher/java/deactivate-watch.asciidoc b/x-pack/docs/en/watcher/java/deactivate-watch.asciidoc index 98c4220e68c8..208868f40984 100644 --- a/x-pack/docs/en/watcher/java/deactivate-watch.asciidoc +++ b/x-pack/docs/en/watcher/java/deactivate-watch.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[api-java-deactivate-watch]] === Deactivate watch API diff --git a/x-pack/docs/en/watcher/java/delete-watch.asciidoc b/x-pack/docs/en/watcher/java/delete-watch.asciidoc index a019db933748..537cd29508ff 100644 --- a/x-pack/docs/en/watcher/java/delete-watch.asciidoc +++ b/x-pack/docs/en/watcher/java/delete-watch.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[api-java-delete-watch]] === Delete watch API diff --git a/x-pack/docs/en/watcher/java/execute-watch.asciidoc b/x-pack/docs/en/watcher/java/execute-watch.asciidoc index 6379c09ed23d..6875f90f1aa0 100644 --- a/x-pack/docs/en/watcher/java/execute-watch.asciidoc +++ b/x-pack/docs/en/watcher/java/execute-watch.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[api-java-execute-watch]] === Execute watch API diff --git a/x-pack/docs/en/watcher/java/get-watch.asciidoc b/x-pack/docs/en/watcher/java/get-watch.asciidoc index f7a8c92fc20c..6b0115d90154 100644 --- a/x-pack/docs/en/watcher/java/get-watch.asciidoc +++ 
b/x-pack/docs/en/watcher/java/get-watch.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[api-java-get-watch]] === Get watch API diff --git a/x-pack/docs/en/watcher/java/put-watch.asciidoc b/x-pack/docs/en/watcher/java/put-watch.asciidoc index 7e584efaf038..a3bc98583fc5 100644 --- a/x-pack/docs/en/watcher/java/put-watch.asciidoc +++ b/x-pack/docs/en/watcher/java/put-watch.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[api-java-put-watch]] === Put watch API diff --git a/x-pack/docs/en/watcher/java/service.asciidoc b/x-pack/docs/en/watcher/java/service.asciidoc index f0fd9a8f69c8..9e18c880075f 100644 --- a/x-pack/docs/en/watcher/java/service.asciidoc +++ b/x-pack/docs/en/watcher/java/service.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[api-java-service]] === Service API diff --git a/x-pack/docs/en/watcher/java/stats.asciidoc b/x-pack/docs/en/watcher/java/stats.asciidoc index a2986010943d..ee7afe854b31 100644 --- a/x-pack/docs/en/watcher/java/stats.asciidoc +++ b/x-pack/docs/en/watcher/java/stats.asciidoc @@ -1,4 +1,4 @@ -[float] +[discrete] [[api-java-stats]] === Stats API diff --git a/x-pack/docs/en/watcher/managing-watches.asciidoc b/x-pack/docs/en/watcher/managing-watches.asciidoc index ecd499489ecb..aa4c71a0dcdc 100644 --- a/x-pack/docs/en/watcher/managing-watches.asciidoc +++ b/x-pack/docs/en/watcher/managing-watches.asciidoc @@ -11,7 +11,7 @@ * Use the <> to deactivate watches * Use the <> to acknowledge watches -[float] +[discrete] [[listing-watches]] === Listing watches diff --git a/x-pack/docs/en/watcher/release-notes.asciidoc b/x-pack/docs/en/watcher/release-notes.asciidoc index d9410f69acb3..175941368631 100644 --- a/x-pack/docs/en/watcher/release-notes.asciidoc +++ b/x-pack/docs/en/watcher/release-notes.asciidoc @@ -2,11 +2,11 @@ [[watcher-release-notes]] == Watcher Release Notes (Pre-5.0) -[float] +[discrete] [[watcher-change-list]] === Change List -[float] +[discrete] ==== 2.4.2 November 22, 2016 @@ -15,7 +15,7 @@ November 22, 2016 order to prevent 
failed deletions with watches having small intervals * Chain input: Parsing now throws an exception if a data structure is added, that cannot keep its order -[float] +[discrete] ==== 2.4.1 September 28, 2016 @@ -35,7 +35,7 @@ in the watch history. HTTP actions. * Running watches can be updated and deleted. -[float] +[discrete] ==== 2.4.0 August 31, 2016 @@ -43,7 +43,7 @@ August 31, 2016 * The HTTP headers of a response are now part of the payload and can be accessed via `ctx.payload._headers` -[float] +[discrete] ==== 2.3.5 August 3, 2016 @@ -59,7 +59,7 @@ need to wait one day for this update to take effect when a new history index is created. * The `watcher.http.proxy.port` setting for global proxy configuration was not applied correctly. -[float] +[discrete] ==== 2.3.4 July 7, 2016 @@ -77,14 +77,14 @@ created. elements and the colspan and rowspan attributes on and elements. * Fixed the Watcher/Marvel examples in the documentation. -[float] +[discrete] ==== 2.3.3 May 18, 2016 .Enhancements * Adds support for Elasticsearch 2.3.3 -[float] +[discrete] ==== 2.3.2 April 26, 2016 @@ -99,14 +99,14 @@ becomes `foo_bar` are not specified at the account level or within the action itself, instead of failing. -[float] +[discrete] ==== 2.3.1 April 4, 2016 .Enhancements * Adds support for Elasticsearch 2.3.1 -[float] +[discrete] ==== 2.3.0 March 30, 2016 @@ -125,35 +125,35 @@ March 30, 2016 via HTTP requests and superseding and deprecating the usage of `attach_data` in order to use this feature -[float] +[discrete] ==== 2.2.1 March 10, 2016 .Bug Fixes * The `croneval` CLI tool sets the correct environment to run -[float] +[discrete] ==== 2.2.0 February 2, 2016 .Enhancements * Adds support for Elasticsearch 2.2.0. 
-[float] +[discrete] ==== 2.1.2 February 2, 2016 .Enhancements * Adds support for Elasticsearch 2.1.2 -[float] +[discrete] ==== 2.1.1 December 17, 2015 .Bug Fixes * Fixed an issue that prevented sending of emails -[float] +[discrete] ==== 2.1.0 November 24, 2015 @@ -174,7 +174,7 @@ November 24, 2015 running and those watch execution are unable the execute during the current start process. -[float] +[discrete] ==== 2.0.1 November 24, 2015 @@ -187,7 +187,7 @@ November 24, 2015 was running and those watch execution are unable the execute during the current start process. -[float] +[discrete] ==== 2.0.0 October 28, 2015 @@ -242,7 +242,7 @@ October 28, 2015 * Fixed url encoding issue in http input and webhook output. The url params were url encoded twice. -[float] +[discrete] ==== 1.0.1 July 29, 2015 @@ -256,7 +256,7 @@ July 29, 2015 * Fixed a compatibility issue with Elasticsearch 1.6.1 and 1.7.2, which were released earlier today. -[float] +[discrete] ==== 1.0.0 June 25, 2015 @@ -269,7 +269,7 @@ June 25, 2015 * Cleaned up the <> response. -[float] +[discrete] ==== 1.0.0-rc1 June 19, 2015 @@ -282,7 +282,7 @@ June 19, 2015 * It is now possible to configure timeouts for http requests in <> and <>. -[float] +[discrete] ==== 1.0.0-Beta2 June 10, 2015