From a94e5cb7c4f0712e6fe00e86a5dbcbb92d4dd454 Mon Sep 17 00:00:00 2001
From: James Rodewig <40268737+jrodewig@users.noreply.github.com>
Date: Mon, 17 Aug 2020 09:44:24 -0400
Subject: [PATCH] [DOCS] Replace Wikipedia links with attribute (#61171)

---
 .../ml/evaluate-data-frame.asciidoc | 14 +++---
 .../painless-guide/painless-datetime.asciidoc | 2 +-
 .../painless-debugging.asciidoc | 2 +-
 .../painless-method-dispatch.asciidoc | 4 +-
 .../adjacency-matrix-aggregation.asciidoc | 4 +-
 .../bucket/composite-aggregation.asciidoc | 2 +-
 .../bucket/geotilegrid-aggregation.asciidoc | 2 +-
 .../bucket/terms-aggregation.asciidoc | 2 +-
 ...ariablewidthhistogram-aggregation.asciidoc | 2 +-
 .../metrics/boxplot-aggregation.asciidoc | 4 +-
 .../metrics/geocentroid-aggregation.asciidoc | 2 +-
 ...an-absolute-deviation-aggregation.asciidoc | 2 +-
 .../metrics/percentile-aggregation.asciidoc | 2 +-
 .../metrics/string-stats-aggregation.asciidoc | 2 +-
 docs/reference/analysis/token-graphs.asciidoc | 2 +-
 .../cjk-bigram-tokenfilter.asciidoc | 4 +-
 .../common-grams-tokenfilter.asciidoc | 2 +-
 .../edgengram-tokenfilter.asciidoc | 2 +-
 .../tokenfilters/elision-tokenfilter.asciidoc | 2 +-
 .../tokenfilters/minhash-tokenfilter.asciidoc | 6 +--
 .../tokenfilters/ngram-tokenfilter.asciidoc | 2 +-
 .../tokenfilters/shingle-tokenfilter.asciidoc | 2 +-
 .../tokenfilters/stop-tokenfilter.asciidoc | 2 +-
 .../tokenizers/edgengram-tokenizer.asciidoc | 2 +-
 .../tokenizers/ngram-tokenizer.asciidoc | 2 +-
 docs/reference/api-conventions.asciidoc | 2 +-
 docs/reference/cat/health.asciidoc | 6 +--
 docs/reference/cat/shards.asciidoc | 4 +-
 docs/reference/cat/snapshots.asciidoc | 4 +-
 docs/reference/cluster/nodes-stats.asciidoc | 10 ++---
 docs/reference/cluster/stats.asciidoc | 4 +-
 .../reference/commands/users-command.asciidoc | 2 +-
 docs/reference/eql/eql-search-api.asciidoc | 6 +--
 docs/reference/eql/eql.asciidoc | 2 +-
 docs/reference/eql/functions.asciidoc | 6 +--
 docs/reference/eql/pipes.asciidoc | 4 +-
 .../high-availability/cluster-design.asciidoc | 4 +-
 docs/reference/how-to/disk-usage.asciidoc | 2 +-
 .../reference/how-to/recipes/scoring.asciidoc | 2 +-
 docs/reference/how-to/search-speed.asciidoc | 4 +-
 docs/reference/index-modules.asciidoc | 2 +-
 .../indices/data-stream-stats.asciidoc | 2 +-
 .../ingest/processors/dissect.asciidoc | 2 +-
 .../mapping/params/similarity.asciidoc | 2 +-
 docs/reference/mapping/types/binary.asciidoc | 2 +-
 .../mapping/types/geo-point.asciidoc | 2 +-
 docs/reference/mapping/types/ip.asciidoc | 6 +--
 docs/reference/mapping/types/range.asciidoc | 6 +--
 .../apis/evaluate-dfanalytics.asciidoc | 6 +--
 .../df-analytics/apis/put-inference.asciidoc | 4 +-
 docs/reference/ml/ml-shared.asciidoc | 2 +-
 docs/reference/modules/http.asciidoc | 6 +--
 docs/reference/modules/network.asciidoc | 4 +-
 docs/reference/modules/transport.asciidoc | 2 +-
 .../query-dsl/function-score-query.asciidoc | 8 ++--
 docs/reference/query-dsl/fuzzy-query.asciidoc | 2 +-
 .../query-dsl/multi-term-rewrite.asciidoc | 2 +-
 .../query-dsl/query-string-query.asciidoc | 6 +--
 docs/reference/query-dsl/range-query.asciidoc | 6 +--
 .../reference/query-dsl/regexp-query.asciidoc | 4 +-
 .../query-dsl/regexp-syntax.asciidoc | 4 +-
 docs/reference/query-dsl/shape-query.asciidoc | 2 +-
 .../query-dsl/term-level-queries.asciidoc | 2 +-
 docs/reference/scripting/security.asciidoc | 4 +-
 docs/reference/search/rank-eval.asciidoc | 10 ++---
 docs/reference/search/search-fields.asciidoc | 2 +-
 docs/reference/settings/ml-settings.asciidoc | 2 +-
 .../reference/setup/bootstrap-checks.asciidoc | 2 +-
 .../setup/sysconfig/configuring.asciidoc | 2 +-
 docs/reference/sql/concepts.asciidoc | 2 +-
 docs/reference/sql/endpoints/rest.asciidoc | 8 ++--
 docs/reference/sql/functions/aggs.asciidoc | 20 ++++-----
 .../sql/functions/date-time.asciidoc | 4 +-
 docs/reference/sql/functions/math.asciidoc | 44 +++++++++----------
 .../rest-api/security/create-users.asciidoc | 2 +-
 .../authorization/managing-roles.asciidoc | 2 +-
 .../ccs-clients-integrations/http.asciidoc | 2 +-
 x-pack/docs/en/watcher/actions/email.asciidoc | 2 +-
 78 files changed, 164 insertions(+), 164 deletions(-)

diff --git a/docs/java-rest/high-level/ml/evaluate-data-frame.asciidoc b/docs/java-rest/high-level/ml/evaluate-data-frame.asciidoc
index 10bc5bae7827..5c96fceed0c5 100644
--- a/docs/java-rest/high-level/ml/evaluate-data-frame.asciidoc
+++ b/docs/java-rest/high-level/ml/evaluate-data-frame.asciidoc
@@ -37,10 +37,10 @@ include-tagged::{doc-tests-file}[{api}-evaluation-outlierdetection]
 <2> Name of the field in the index. Its value denotes the actual (i.e. ground truth) label for an example. Must be either true or false.
 <3> Name of the field in the index. Its value denotes the probability (as per some ML algorithm) of the example being classified as positive.
 <4> The remaining parameters are the metrics to be calculated based on the two fields described above
-<5> https://en.wikipedia.org/wiki/Precision_and_recall#Precision[Precision] calculated at thresholds: 0.4, 0.5 and 0.6
-<6> https://en.wikipedia.org/wiki/Precision_and_recall#Recall[Recall] calculated at thresholds: 0.5 and 0.7
-<7> https://en.wikipedia.org/wiki/Confusion_matrix[Confusion matrix] calculated at threshold 0.5
-<8> https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve[AuC ROC] calculated and the curve points returned
+<5> {wikipedia}/Precision_and_recall#Precision[Precision] calculated at thresholds: 0.4, 0.5 and 0.6
+<6> {wikipedia}/Precision_and_recall#Recall[Recall] calculated at thresholds: 0.5 and 0.7
+<7> {wikipedia}/Confusion_matrix[Confusion matrix] calculated at threshold 0.5
+<8> {wikipedia}/Receiver_operating_characteristic#Area_under_the_curve[AuC ROC] calculated and the curve points returned
 
 ===== Classification
 
@@ -67,10 +67,10 @@ include-tagged::{doc-tests-file}[{api}-evaluation-regression]
 <2> Name of the field in the index. Its value denotes the actual (i.e. ground truth) value for an example.
 <3> Name of the field in the index. Its value denotes the predicted (as per some ML algorithm) value for the example.
 <4> The remaining parameters are the metrics to be calculated based on the two fields described above
-<5> https://en.wikipedia.org/wiki/Mean_squared_error[Mean squared error]
+<5> {wikipedia}/Mean_squared_error[Mean squared error]
 <6> Mean squared logarithmic error
-<7> https://en.wikipedia.org/wiki/Huber_loss#Pseudo-Huber_loss_function[Pseudo Huber loss]
-<8> https://en.wikipedia.org/wiki/Coefficient_of_determination[R squared]
+<7> {wikipedia}/Huber_loss#Pseudo-Huber_loss_function[Pseudo Huber loss]
+<8> {wikipedia}/Coefficient_of_determination[R squared]
 
 include::../execution.asciidoc[]

diff --git a/docs/painless/painless-guide/painless-datetime.asciidoc b/docs/painless/painless-guide/painless-datetime.asciidoc
index edde26fe0adc..b497185c76a9 100644
--- a/docs/painless/painless-guide/painless-datetime.asciidoc
+++ b/docs/painless/painless-guide/painless-datetime.asciidoc
@@ -24,7 +24,7 @@ milliseconds since an epoch of 1970-01-01 00:00:00 Zulu Time
 string:: a datetime representation as a sequence of characters defined by
 a standard format or a custom format; in Painless this is typically a
 <> of the standard format
-https://en.wikipedia.org/wiki/ISO_8601[ISO 8601]
+{wikipedia}/ISO_8601[ISO 8601]
 complex:: a datetime representation as a complex type
 (<>) that abstracts away internal details of how the
 datetime is stored and often provides utilities for modification and

diff --git a/docs/painless/painless-guide/painless-debugging.asciidoc b/docs/painless/painless-guide/painless-debugging.asciidoc
index ce383ebf72c2..afd837059646 100644
--- a/docs/painless/painless-guide/painless-debugging.asciidoc
+++ b/docs/painless/painless-guide/painless-debugging.asciidoc
@@ -4,7 +4,7 @@
 ==== Debug.Explain
 
 Painless doesn't have a
-https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop[REPL]
+{wikipedia}/Read%E2%80%93eval%E2%80%93print_loop[REPL]
 and while it'd be nice for it to have one day, it wouldn't tell you the
 whole story around debugging painless scripts embedded in Elasticsearch
 because the data that the scripts have access to or "context" is so
 important. For now

diff --git a/docs/painless/painless-guide/painless-method-dispatch.asciidoc b/docs/painless/painless-guide/painless-method-dispatch.asciidoc
index b17bf4d8fcfa..dcb5a5b3cd1f 100644
--- a/docs/painless/painless-guide/painless-method-dispatch.asciidoc
+++ b/docs/painless/painless-guide/painless-method-dispatch.asciidoc
@@ -1,11 +1,11 @@
 [[modules-scripting-painless-dispatch]]
 === How painless dispatches functions
 
-Painless uses receiver, name, and https://en.wikipedia.org/wiki/Arity[arity]
+Painless uses receiver, name, and {wikipedia}/Arity[arity]
 for method dispatch. For example, `s.foo(a, b)` is resolved by first
 getting the class of `s` and then looking up the method `foo` with two
 parameters. This is different from Groovy which uses the
-https://en.wikipedia.org/wiki/Multiple_dispatch[runtime types] of the
+{wikipedia}/Multiple_dispatch[runtime types] of the
 parameters and Java which uses the compile time types of the parameters.
 The consequence of this that Painless doesn't support overloaded methods
 like

diff --git a/docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc b/docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc
index befb59683913..e2d09c385e79 100644
--- a/docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc
+++ b/docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc
@@ -1,7 +1,7 @@
 [[search-aggregations-bucket-adjacency-matrix-aggregation]]
 === Adjacency Matrix Aggregation
 
-A bucket aggregation returning a form of https://en.wikipedia.org/wiki/Adjacency_matrix[adjacency matrix].
+A bucket aggregation returning a form of {wikipedia}/Adjacency_matrix[adjacency matrix].
 The request provides a collection of named filter expressions, similar to the
 `filters` aggregation request. Each bucket in the response represents a
 non-empty cell in the matrix of intersecting filters.
@@ -104,7 +104,7 @@ Response:
 ==== Usage
 On its own this aggregation can provide all of the data required to create an undirected weighted graph.
 However, when used with child aggregations such as a `date_histogram` the results can provide the
-additional levels of data required to perform https://en.wikipedia.org/wiki/Dynamic_network_analysis[dynamic network analysis]
+additional levels of data required to perform {wikipedia}/Dynamic_network_analysis[dynamic network analysis]
 where examining interactions _over time_ becomes important.
 
 ==== Limitations

diff --git a/docs/reference/aggregations/bucket/composite-aggregation.asciidoc b/docs/reference/aggregations/bucket/composite-aggregation.asciidoc
index 110cb6f8cc56..2c81bbe66401 100644
--- a/docs/reference/aggregations/bucket/composite-aggregation.asciidoc
+++ b/docs/reference/aggregations/bucket/composite-aggregation.asciidoc
@@ -362,7 +362,7 @@ include::datehistogram-aggregation.asciidoc[tag=offset-note]
 The `geotile_grid` value source works on `geo_point` fields and groups points
 into buckets that represent cells in a grid. The resulting grid can be sparse
 and only contains cells that have matching data. Each cell corresponds to a
-https://en.wikipedia.org/wiki/Tiled_web_map[map tile] as used by many online map
+{wikipedia}/Tiled_web_map[map tile] as used by many online map
 sites. Each cell is labeled using a "{zoom}/{x}/{y}" format, where zoom is equal
 to the user-specified precision.
diff --git a/docs/reference/aggregations/bucket/geotilegrid-aggregation.asciidoc b/docs/reference/aggregations/bucket/geotilegrid-aggregation.asciidoc
index eb3e7f2dc4bd..d3d0a0e189c6 100644
--- a/docs/reference/aggregations/bucket/geotilegrid-aggregation.asciidoc
+++ b/docs/reference/aggregations/bucket/geotilegrid-aggregation.asciidoc
@@ -4,7 +4,7 @@
 A multi-bucket aggregation that works on `geo_point` fields and groups points
 into buckets that represent cells in a grid. The resulting grid can be sparse
 and only contains cells that have matching data. Each cell corresponds to a
-https://en.wikipedia.org/wiki/Tiled_web_map[map tile] as used by many online map
+{wikipedia}/Tiled_web_map[map tile] as used by many online map
 sites. Each cell is labeled using a "{zoom}/{x}/{y}" format, where zoom is equal
 to the user-specified precision.

diff --git a/docs/reference/aggregations/bucket/terms-aggregation.asciidoc b/docs/reference/aggregations/bucket/terms-aggregation.asciidoc
index a1fb27059cc0..31d552843e33 100644
--- a/docs/reference/aggregations/bucket/terms-aggregation.asciidoc
+++ b/docs/reference/aggregations/bucket/terms-aggregation.asciidoc
@@ -295,7 +295,7 @@ a multi-value metrics aggregation, and in case of a single-value metrics aggrega
 
 The path must be defined in the following form:
 
-// https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_Form
+// {wikipedia}/Extended_Backus%E2%80%93Naur_Form
 [source,ebnf]
 --------------------------------------------------
 AGG_SEPARATOR = '>' ;

diff --git a/docs/reference/aggregations/bucket/variablewidthhistogram-aggregation.asciidoc b/docs/reference/aggregations/bucket/variablewidthhistogram-aggregation.asciidoc
index 193ef1e3ef28..06e46128df9e 100644
--- a/docs/reference/aggregations/bucket/variablewidthhistogram-aggregation.asciidoc
+++ b/docs/reference/aggregations/bucket/variablewidthhistogram-aggregation.asciidoc
@@ -67,7 +67,7 @@ from all the existing ones. At most `shard_size` total buckets are created.
 In the reduce step, the coordinating node sorts the buckets from all shards by
 their centroids. Then, the two buckets with the nearest centroids are
 repeatedly merged until the target number of buckets is achieved.
-This merging procedure is a form of https://en.wikipedia.org/wiki/Hierarchical_clustering[agglomerative hierarchical clustering].
+This merging procedure is a form of {wikipedia}/Hierarchical_clustering[agglomerative hierarchical clustering].
 
 TIP: A shard can return fewer than `shard_size` buckets, but it cannot return more.

diff --git a/docs/reference/aggregations/metrics/boxplot-aggregation.asciidoc b/docs/reference/aggregations/metrics/boxplot-aggregation.asciidoc
index 200832f8cab0..d6abee0255bf 100644
--- a/docs/reference/aggregations/metrics/boxplot-aggregation.asciidoc
+++ b/docs/reference/aggregations/metrics/boxplot-aggregation.asciidoc
@@ -7,7 +7,7 @@ A `boxplot` metrics aggregation that computes boxplot of numeric values extracte
 These values can be generated by a provided script or extracted from specific
 numeric or <> in the documents.
 
-The `boxplot` aggregation returns essential information for making a https://en.wikipedia.org/wiki/Box_plot[box plot]: minimum, maximum,
+The `boxplot` aggregation returns essential information for making a {wikipedia}/Box_plot[box plot]: minimum, maximum,
 median, first quartile (25th percentile) and third quartile (75th percentile) values.
 
 ==== Syntax
@@ -129,7 +129,7 @@ https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf[C
 
 [WARNING]
 ====
 Boxplot as other percentile aggregations are also
-https://en.wikipedia.org/wiki/Nondeterministic_algorithm[non-deterministic].
+{wikipedia}/Nondeterministic_algorithm[non-deterministic].
 This means you can get slightly different results using the same data.
 ====

diff --git a/docs/reference/aggregations/metrics/geocentroid-aggregation.asciidoc b/docs/reference/aggregations/metrics/geocentroid-aggregation.asciidoc
index d66ee4cb6e49..d8fd47455497 100644
--- a/docs/reference/aggregations/metrics/geocentroid-aggregation.asciidoc
+++ b/docs/reference/aggregations/metrics/geocentroid-aggregation.asciidoc
@@ -1,7 +1,7 @@
 [[search-aggregations-metrics-geocentroid-aggregation]]
 === Geo Centroid Aggregation
 
-A metric aggregation that computes the weighted https://en.wikipedia.org/wiki/Centroid[centroid] from all coordinate values for geo fields.
+A metric aggregation that computes the weighted {wikipedia}/Centroid[centroid] from all coordinate values for geo fields.
 
 Example:

diff --git a/docs/reference/aggregations/metrics/median-absolute-deviation-aggregation.asciidoc b/docs/reference/aggregations/metrics/median-absolute-deviation-aggregation.asciidoc
index 5944e78cc4fe..b6b4b1a98b56 100644
--- a/docs/reference/aggregations/metrics/median-absolute-deviation-aggregation.asciidoc
+++ b/docs/reference/aggregations/metrics/median-absolute-deviation-aggregation.asciidoc
@@ -1,7 +1,7 @@
 [[search-aggregations-metrics-median-absolute-deviation-aggregation]]
 === Median Absolute Deviation Aggregation
 
-This `single-value` aggregation approximates the https://en.wikipedia.org/wiki/Median_absolute_deviation[median absolute deviation]
+This `single-value` aggregation approximates the {wikipedia}/Median_absolute_deviation[median absolute deviation]
 of its search results. Median absolute deviation is a measure of variability. It is a robust

diff --git a/docs/reference/aggregations/metrics/percentile-aggregation.asciidoc b/docs/reference/aggregations/metrics/percentile-aggregation.asciidoc
index 107d14558e4e..0587849a1107 100644
--- a/docs/reference/aggregations/metrics/percentile-aggregation.asciidoc
+++ b/docs/reference/aggregations/metrics/percentile-aggregation.asciidoc
@@ -254,7 +254,7 @@ it.
 It would not be the case on more skewed distributions.
 
 [WARNING]
 ====
 Percentile aggregations are also
-https://en.wikipedia.org/wiki/Nondeterministic_algorithm[non-deterministic].
+{wikipedia}/Nondeterministic_algorithm[non-deterministic].
 This means you can get slightly different results using the same data.
 ====

diff --git a/docs/reference/aggregations/metrics/string-stats-aggregation.asciidoc b/docs/reference/aggregations/metrics/string-stats-aggregation.asciidoc
index fa9d656f780a..11f125ad4292 100644
--- a/docs/reference/aggregations/metrics/string-stats-aggregation.asciidoc
+++ b/docs/reference/aggregations/metrics/string-stats-aggregation.asciidoc
@@ -15,7 +15,7 @@ The string stats aggregation returns the following results:
 * `min_length` - The length of the shortest term.
 * `max_length` - The length of the longest term.
 * `avg_length` - The average length computed over all terms.
-* `entropy` - The https://en.wikipedia.org/wiki/Entropy_(information_theory)[Shannon Entropy] value computed over all terms collected by
+* `entropy` - The {wikipedia}/Entropy_(information_theory)[Shannon Entropy] value computed over all terms collected by
 the aggregation. Shannon entropy quantifies the amount of information contained
 in the field. It is a very useful metric for measuring a wide range of
 properties of a data set, such as diversity, similarity, randomness etc.

diff --git a/docs/reference/analysis/token-graphs.asciidoc b/docs/reference/analysis/token-graphs.asciidoc
index ab1dc52f5131..20f91891aed5 100644
--- a/docs/reference/analysis/token-graphs.asciidoc
+++ b/docs/reference/analysis/token-graphs.asciidoc
@@ -8,7 +8,7 @@ tokens, it also records the following:
 
 * The `positionLength`, the number of positions that a token spans
 
 Using these, you can create a
-https://en.wikipedia.org/wiki/Directed_acyclic_graph[directed acyclic graph],
+{wikipedia}/Directed_acyclic_graph[directed acyclic graph],
 called a _token graph_, for a stream.
 In a token graph, each position represents a node. Each token represents an
 edge or arc, pointing to the next position.

diff --git a/docs/reference/analysis/tokenfilters/cjk-bigram-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/cjk-bigram-tokenfilter.asciidoc
index f828b02c1345..7affc2a79d66 100644
--- a/docs/reference/analysis/tokenfilters/cjk-bigram-tokenfilter.asciidoc
+++ b/docs/reference/analysis/tokenfilters/cjk-bigram-tokenfilter.asciidoc
@@ -4,7 +4,7 @@
 CJK bigram
 ++++
 
-Forms https://en.wikipedia.org/wiki/Bigram[bigrams] out of CJK (Chinese,
+Forms {wikipedia}/Bigram[bigrams] out of CJK (Chinese,
 Japanese, and Korean) tokens.
 
 This filter is included in {es}'s built-in <
 Common grams
 ++++
 
-Generates https://en.wikipedia.org/wiki/Bigram[bigrams] for a specified set of
+Generates {wikipedia}/Bigram[bigrams] for a specified set of
 common words. For example, you can specify `is` and `the` as common words.
 
 This filter then

diff --git a/docs/reference/analysis/tokenfilters/edgengram-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/edgengram-tokenfilter.asciidoc
index ce89504bf9b9..845a471d44b7 100644
--- a/docs/reference/analysis/tokenfilters/edgengram-tokenfilter.asciidoc
+++ b/docs/reference/analysis/tokenfilters/edgengram-tokenfilter.asciidoc
@@ -4,7 +4,7 @@
 Edge n-gram
 ++++
 
-Forms an https://en.wikipedia.org/wiki/N-gram[n-gram] of a specified length from
+Forms an {wikipedia}/N-gram[n-gram] of a specified length from
For example, you can use the `edge_ngram` token filter to change `quick` to diff --git a/docs/reference/analysis/tokenfilters/elision-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/elision-tokenfilter.asciidoc index 96ead5bc616d..6bdf2e728bfa 100644 --- a/docs/reference/analysis/tokenfilters/elision-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/elision-tokenfilter.asciidoc @@ -4,7 +4,7 @@ Elision ++++ -Removes specified https://en.wikipedia.org/wiki/Elision[elisions] from +Removes specified {wikipedia}/Elision[elisions] from the beginning of tokens. For example, you can use this filter to change `l'avion` to `avion`. diff --git a/docs/reference/analysis/tokenfilters/minhash-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/minhash-tokenfilter.asciidoc index a1c88a24857a..7cc7e2e41c48 100644 --- a/docs/reference/analysis/tokenfilters/minhash-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/minhash-tokenfilter.asciidoc @@ -4,7 +4,7 @@ MinHash ++++ -Uses the https://en.wikipedia.org/wiki/MinHash[MinHash] technique to produce a +Uses the {wikipedia}/MinHash[MinHash] technique to produce a signature for a token stream. You can use MinHash signatures to estimate the similarity of documents. See <>. @@ -95,8 +95,8 @@ locality sensitive hashing (LSH). Depending on what constitutes the similarity between documents, various LSH functions https://arxiv.org/abs/1408.2927[have been proposed]. -For https://en.wikipedia.org/wiki/Jaccard_index[Jaccard similarity], a popular -LSH function is https://en.wikipedia.org/wiki/MinHash[MinHash]. +For {wikipedia}/Jaccard_index[Jaccard similarity], a popular +LSH function is {wikipedia}/MinHash[MinHash]. 
 A general idea of the way MinHash produces a signature for a document
 is by applying a random permutation over the whole index vocabulary (random
 numbering for the vocabulary), and recording the minimum value for this permutation

diff --git a/docs/reference/analysis/tokenfilters/ngram-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/ngram-tokenfilter.asciidoc
index 0ffd143aff42..1f30c6d62548 100644
--- a/docs/reference/analysis/tokenfilters/ngram-tokenfilter.asciidoc
+++ b/docs/reference/analysis/tokenfilters/ngram-tokenfilter.asciidoc
@@ -4,7 +4,7 @@
 N-gram
 ++++
 
-Forms https://en.wikipedia.org/wiki/N-gram[n-grams] of specified lengths from
+Forms {wikipedia}/N-gram[n-grams] of specified lengths from
 a token.
 
 For example, you can use the `ngram` token filter to change `fox` to

diff --git a/docs/reference/analysis/tokenfilters/shingle-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/shingle-tokenfilter.asciidoc
index 793982b43864..cb7cf92221ae 100644
--- a/docs/reference/analysis/tokenfilters/shingle-tokenfilter.asciidoc
+++ b/docs/reference/analysis/tokenfilters/shingle-tokenfilter.asciidoc
@@ -4,7 +4,7 @@
 Shingle
 ++++
 
-Add shingles, or word https://en.wikipedia.org/wiki/N-gram[n-grams], to a token
+Add shingles, or word {wikipedia}/N-gram[n-grams], to a token
 stream by concatenating adjacent tokens.
 
 By default, the `shingle` token filter outputs two-word shingles and unigrams.

diff --git a/docs/reference/analysis/tokenfilters/stop-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/stop-tokenfilter.asciidoc
index 650c5e0cffb5..d4675e001c52 100644
--- a/docs/reference/analysis/tokenfilters/stop-tokenfilter.asciidoc
+++ b/docs/reference/analysis/tokenfilters/stop-tokenfilter.asciidoc
@@ -4,7 +4,7 @@
 Stop
 ++++
 
-Removes https://en.wikipedia.org/wiki/Stop_words[stop words] from a token
+Removes {wikipedia}/Stop_words[stop words] from a token
When not customized, the filter removes the following English stop words by diff --git a/docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc index 2b6aff86278d..74b5e7d4434c 100644 --- a/docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc @@ -6,7 +6,7 @@ The `edge_ngram` tokenizer first breaks text down into words whenever it encounters one of a list of specified characters, then it emits -https://en.wikipedia.org/wiki/N-gram[N-grams] of each word where the start of +{wikipedia}/N-gram[N-grams] of each word where the start of the N-gram is anchored to the beginning of the word. Edge N-Grams are useful for _search-as-you-type_ queries. diff --git a/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc index 64ac2690d9ba..cd7f2fb7c74e 100644 --- a/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc @@ -6,7 +6,7 @@ The `ngram` tokenizer first breaks text down into words whenever it encounters one of a list of specified characters, then it emits -https://en.wikipedia.org/wiki/N-gram[N-grams] of each word of the specified +{wikipedia}/N-gram[N-grams] of each word of the specified length. N-grams are like a sliding window that moves across the word - a continuous diff --git a/docs/reference/api-conventions.asciidoc b/docs/reference/api-conventions.asciidoc index b1ed17807bce..f458128ca661 100644 --- a/docs/reference/api-conventions.asciidoc +++ b/docs/reference/api-conventions.asciidoc @@ -20,7 +20,7 @@ parameter also support _multi-target syntax_. In multi-target syntax, you can use a comma-separated list to execute a request across multiple resources, such as data streams, indices, or index aliases: `test1,test2,test3`. 
 You can also use
-https://en.wikipedia.org/wiki/Glob_(programming)[glob-like] wildcard (`*`)
+{wikipedia}/Glob_(programming)[glob-like] wildcard (`*`)
 expressions to target any resources that match the pattern: `test*` or `*test`
 or `te*t` or `*test*.

diff --git a/docs/reference/cat/health.asciidoc b/docs/reference/cat/health.asciidoc
index b3a82663e028..86b274982c1b 100644
--- a/docs/reference/cat/health.asciidoc
+++ b/docs/reference/cat/health.asciidoc
@@ -25,7 +25,7 @@ track cluster health alongside log files and alerting systems, the API returns
 timestamps in two formats:
 
 * `HH:MM:SS`, which is human-readable but includes no date information.
-* https://en.wikipedia.org/wiki/Unix_time[Unix `epoch` time], which is
+* {wikipedia}/Unix_time[Unix `epoch` time], which is
 machine-sortable and includes date information. This is useful for cluster
 recoveries that take multiple days.
 
@@ -51,7 +51,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=time]
 
 `ts` (timestamps)::
 (Optional, boolean) If `true`, returns `HH:MM:SS` and
-https://en.wikipedia.org/wiki/Unix_time[Unix `epoch`] timestamps. Defaults to
+{wikipedia}/Unix_time[Unix `epoch`] timestamps. Defaults to
 `true`.
 
 include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v]
@@ -63,7 +63,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cat-v]
 
 [[cat-health-api-example-timestamp]]
 ===== Example with a timestamp
 By default, the cat health API returns `HH:MM:SS` and
-https://en.wikipedia.org/wiki/Unix_time[Unix `epoch`] timestamps. For example:
+{wikipedia}/Unix_time[Unix `epoch`] timestamps. For example:
 
 [source,console,id=cat-health-example]
 --------------------------------------------------

diff --git a/docs/reference/cat/shards.asciidoc b/docs/reference/cat/shards.asciidoc
index 4e690360f9f3..c89d4d71d8f0 100644
--- a/docs/reference/cat/shards.asciidoc
+++ b/docs/reference/cat/shards.asciidoc
@@ -241,7 +241,7 @@ Sync ID of the shard.
 `unassigned.at`, `ua`::
 Time at which the shard became unassigned in
-https://en.wikipedia.org/wiki/List_of_UTC_time_offsets[Coordinated Universal
+{wikipedia}/List_of_UTC_time_offsets[Coordinated Universal
 Time (UTC)].
 
 `unassigned.details`, `ud`::
@@ -249,7 +249,7 @@ Details about why the shard became unassigned.
 
 `unassigned.for`, `uf`::
 Time at which the shard was requested to be unassigned in
-https://en.wikipedia.org/wiki/List_of_UTC_time_offsets[Coordinated Universal
+{wikipedia}/List_of_UTC_time_offsets[Coordinated Universal
 Time (UTC)].
 
 [[reason-unassigned]]

diff --git a/docs/reference/cat/snapshots.asciidoc b/docs/reference/cat/snapshots.asciidoc
index dcc1d053aa3c..31fd6fa4302a 100644
--- a/docs/reference/cat/snapshots.asciidoc
+++ b/docs/reference/cat/snapshots.asciidoc
@@ -60,14 +60,14 @@ version.
 * `SUCCESS`: The snapshot process completed with a full success.
 
 `start_epoch`, `ste`, `startEpoch`::
-(Default) https://en.wikipedia.org/wiki/Unix_time[Unix `epoch` time] at which
+(Default) {wikipedia}/Unix_time[Unix `epoch` time] at which
 the snapshot process started.
 
 `start_time`, `sti`, `startTime`::
 (Default) `HH:MM:SS` time at which the snapshot process started.
 
 `end_epoch`, `ete`, `endEpoch`::
-(Default) https://en.wikipedia.org/wiki/Unix_time[Unix `epoch` time] at which
+(Default) {wikipedia}/Unix_time[Unix `epoch` time] at which
 the snapshot process ended.
 
 `end_time`, `eti`, `endTime`::

diff --git a/docs/reference/cluster/nodes-stats.asciidoc b/docs/reference/cluster/nodes-stats.asciidoc
index a2ce7a701cfa..4a2b6ee56b90 100644
--- a/docs/reference/cluster/nodes-stats.asciidoc
+++ b/docs/reference/cluster/nodes-stats.asciidoc
@@ -182,7 +182,7 @@ Contains statistics for the node.
 `timestamp`::
 (integer)
 Time the node stats were collected for this response. Recorded in milliseconds
-since the https://en.wikipedia.org/wiki/Unix_time[Unix Epoch].
+since the {wikipedia}/Unix_time[Unix Epoch].
 
 `name`::
 (string)
@@ -824,7 +824,7 @@ type filters for <> fields.
 `max_unsafe_auto_id_timestamp`::
 (integer)
 Time of the most recently retried indexing request. Recorded in milliseconds
-since the https://en.wikipedia.org/wiki/Unix_time[Unix Epoch].
+since the {wikipedia}/Unix_time[Unix Epoch].
 
 `file_sizes`::
 (object)
@@ -953,7 +953,7 @@ Contains statistics about the operating system for the node.
 `timestamp`::
 (integer)
 Last time the operating system statistics were refreshed. Recorded in
-milliseconds since the https://en.wikipedia.org/wiki/Unix_time[Unix Epoch].
+milliseconds since the {wikipedia}/Unix_time[Unix Epoch].
 
 `cpu`::
 (object)
@@ -1178,7 +1178,7 @@ Contains process statistics for the node.
 `timestamp`::
 (integer)
 Last time the statistics were refreshed. Recorded in milliseconds
-since the https://en.wikipedia.org/wiki/Unix_time[Unix Epoch].
+since the {wikipedia}/Unix_time[Unix Epoch].
 
 `open_file_descriptors`::
 (integer)
@@ -1650,7 +1650,7 @@ Contains file store statistics for the node.
 `timestamp`::
 (integer)
 Last time the file stores statistics were refreshed. Recorded in
-milliseconds since the https://en.wikipedia.org/wiki/Unix_time[Unix Epoch].
+milliseconds since the {wikipedia}/Unix_time[Unix Epoch].
 
 `total`::
 (object)

diff --git a/docs/reference/cluster/stats.asciidoc b/docs/reference/cluster/stats.asciidoc
index c3cdf95a6238..582ab213a254 100644
--- a/docs/reference/cluster/stats.asciidoc
+++ b/docs/reference/cluster/stats.asciidoc
@@ -74,7 +74,7 @@ Unique identifier for the cluster.
 
 `timestamp`::
 (integer)
-https://en.wikipedia.org/wiki/Unix_time[Unix timestamp], in milliseconds, of
+{wikipedia}/Unix_time[Unix timestamp], in milliseconds, of
 the last time the cluster statistics were refreshed.
 
 `status`::
@@ -447,7 +447,7 @@ assigned to selected nodes.
 
 `max_unsafe_auto_id_timestamp`::
 (integer)
-https://en.wikipedia.org/wiki/Unix_time[Unix timestamp], in milliseconds, of
+{wikipedia}/Unix_time[Unix timestamp], in milliseconds, of
 the most recently retried indexing request.
`file_sizes`::
diff --git a/docs/reference/commands/users-command.asciidoc b/docs/reference/commands/users-command.asciidoc
index 2f668e07e0a9..319ea402390b 100644
--- a/docs/reference/commands/users-command.asciidoc
+++ b/docs/reference/commands/users-command.asciidoc
@@ -28,7 +28,7 @@ on each node in the cluster.
Usernames and roles must be at least 1 and no more than 1024 characters. They
can contain alphanumeric characters (`a-z`, `A-Z`, `0-9`), spaces, punctuation,
and printable symbols in the
-https://en.wikipedia.org/wiki/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block].
+{wikipedia}/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block].
Leading or trailing whitespace is not allowed.

Passwords must be at least 6 characters long.
diff --git a/docs/reference/eql/eql-search-api.asciidoc b/docs/reference/eql/eql-search-api.asciidoc
index e43ede555e7e..3ba61a7d4355 100644
--- a/docs/reference/eql/eql-search-api.asciidoc
+++ b/docs/reference/eql/eql-search-api.asciidoc
@@ -241,7 +241,7 @@ Field used to sort events with the same
Schema (ECS)].
+
By default, matching events in the search response are sorted by timestamp,
-converted to milliseconds since the https://en.wikipedia.org/wiki/Unix_time[Unix
+converted to milliseconds since the {wikipedia}/Unix_time[Unix
epoch], in ascending order. If two or more events share the same timestamp,
this field is used to sort the events in ascending, lexicographic order.
@@ -257,7 +257,7 @@ Defaults to `@timestamp`, as defined in the
does not contain the `@timestamp` field, this value is required.
+
Events in the API response are sorted by this field's value, converted to
-milliseconds since the https://en.wikipedia.org/wiki/Unix_time[Unix epoch], in
+milliseconds since the {wikipedia}/Unix_time[Unix epoch], in
ascending order.
+
The timestamp field is typically mapped as a <> or
@@ -509,7 +509,7 @@ GET /my-index-000001/_eql/search
The API returns the following response.
Matching events in the `hits.events` property are sorted by
<>, converted
-to milliseconds since the https://en.wikipedia.org/wiki/Unix_time[Unix epoch],
+to milliseconds since the {wikipedia}/Unix_time[Unix epoch],
in ascending order. If two or more events share the same timestamp, the
diff --git a/docs/reference/eql/eql.asciidoc b/docs/reference/eql/eql.asciidoc
index 1d31227b813c..575a969686a5 100644
--- a/docs/reference/eql/eql.asciidoc
+++ b/docs/reference/eql/eql.asciidoc
@@ -70,7 +70,7 @@ GET /my-index-000001/_eql/search
The API returns the following response. Matching events are included in the
`hits.events` property. These events are sorted by timestamp, converted to
-milliseconds since the https://en.wikipedia.org/wiki/Unix_time[Unix epoch], in
+milliseconds since the {wikipedia}/Unix_time[Unix epoch], in
ascending order.

[source,console-result]
diff --git a/docs/reference/eql/functions.asciidoc b/docs/reference/eql/functions.asciidoc
index 969ace1cd37d..5f7bb9a2bc5a 100644
--- a/docs/reference/eql/functions.asciidoc
+++ b/docs/reference/eql/functions.asciidoc
@@ -189,7 +189,7 @@ If `true`, matching is case-sensitive. Defaults to `false`.
=== `cidrMatch`

Returns `true` if an IP address is contained in one or more provided
-https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing[CIDR] blocks.
+{wikipedia}/Classless_Inter-Domain_Routing[CIDR] blocks.

[%collapsible]
====
@@ -219,8 +219,8 @@ cidrMatch(source.address, null) // returns null
``::
(Required, string or `null`)
IP address. Supports
-https://en.wikipedia.org/wiki/IPv4[IPv4] and
-https://en.wikipedia.org/wiki/IPv6[IPv6] addresses. If `null`, the function
+{wikipedia}/IPv4[IPv4] and
+{wikipedia}/IPv6[IPv6] addresses. If `null`, the function
returns `null`.
+
If using a field as the argument, this parameter supports only the
<>
diff --git a/docs/reference/eql/pipes.asciidoc b/docs/reference/eql/pipes.asciidoc
index 9593f0930cf1..03ce9c2e8485 100644
--- a/docs/reference/eql/pipes.asciidoc
+++ b/docs/reference/eql/pipes.asciidoc
@@ -19,7 +19,7 @@ experimental::[]

Returns up to a specified number of events or sequences, starting with the
earliest matches. Works similarly to the
-https://en.wikipedia.org/wiki/Head_(Unix)[Unix head command].
+{wikipedia}/Head_(Unix)[Unix head command].

[%collapsible]
====
@@ -53,7 +53,7 @@ Maximum number of matching events or sequences to return.

Returns up to a specified number of events or sequences, starting with the most
recent matches. Works similarly to the
-https://en.wikipedia.org/wiki/Tail_(Unix)[Unix tail command].
+{wikipedia}/Tail_(Unix)[Unix tail command].

[%collapsible]
====
diff --git a/docs/reference/high-availability/cluster-design.asciidoc b/docs/reference/high-availability/cluster-design.asciidoc
index fed5b426ed51..20b6d8e95b25 100644
--- a/docs/reference/high-availability/cluster-design.asciidoc
+++ b/docs/reference/high-availability/cluster-design.asciidoc
@@ -118,7 +118,7 @@ expect that if either node fails then {es} can elect the remaining node as the
master, but it is impossible to tell the difference between the failure of a
remote node and a mere loss of connectivity between the nodes. If both nodes
were capable of running independent elections, a loss of connectivity would
-lead to a https://en.wikipedia.org/wiki/Split-brain_(computing)[split-brain
+lead to a {wikipedia}/Split-brain_(computing)[split-brain
problem] and therefore data loss.
{es} avoids this and protects your data by electing
neither node as master until that node can be sure that it has the latest
cluster state and that there is no other master in
@@ -291,7 +291,7 @@ zone as the master but it is impossible to tell the difference between the
failure of a remote zone and a mere loss of connectivity between the zones. If
both zones were capable of running independent elections then a loss of
connectivity would lead to a
-https://en.wikipedia.org/wiki/Split-brain_(computing)[split-brain problem] and
+{wikipedia}/Split-brain_(computing)[split-brain problem] and
therefore data loss. {es} avoids this and protects your data by not electing
a node from either zone as master until that node can be sure that it has the
latest cluster state and that there is no other master in the cluster. This may
diff --git a/docs/reference/how-to/disk-usage.asciidoc b/docs/reference/how-to/disk-usage.asciidoc
index 0a791a44cd2b..cac21e3080fb 100644
--- a/docs/reference/how-to/disk-usage.asciidoc
+++ b/docs/reference/how-to/disk-usage.asciidoc
@@ -171,7 +171,7 @@ When Elasticsearch stores `_source`, it compresses multiple documents at once
in order to improve the overall compression ratio. For instance it is very
common that documents share the same field names, and quite common that they
share some field values, especially on fields that have a low cardinality or
-a https://en.wikipedia.org/wiki/Zipf%27s_law[zipfian] distribution.
+a {wikipedia}/Zipf%27s_law[zipfian] distribution.

By default documents are compressed together in the order that they are added
to the index. If you enabled <>
diff --git a/docs/reference/how-to/recipes/scoring.asciidoc b/docs/reference/how-to/recipes/scoring.asciidoc
index 42c6b6412be1..1703feafe856 100644
--- a/docs/reference/how-to/recipes/scoring.asciidoc
+++ b/docs/reference/how-to/recipes/scoring.asciidoc
@@ -82,7 +82,7 @@ statistics.
=== Incorporating static relevance signals into the score

Many domains have static signals that are known to be correlated with relevance.
-For instance https://en.wikipedia.org/wiki/PageRank[PageRank] and url length are
+For instance {wikipedia}/PageRank[PageRank] and url length are
two commonly used features for web search in order to tune the score of web
pages independently of the query.
diff --git a/docs/reference/how-to/search-speed.asciidoc b/docs/reference/how-to/search-speed.asciidoc
index 8b5f1ee16334..d3474b78a614 100644
--- a/docs/reference/how-to/search-speed.asciidoc
+++ b/docs/reference/how-to/search-speed.asciidoc
@@ -352,11 +352,11 @@ in the <>.
=== Use `preference` to optimize cache utilization

There are multiple caches that can help with search performance, such as the
-https://en.wikipedia.org/wiki/Page_cache[filesystem cache], the
+{wikipedia}/Page_cache[filesystem cache], the
<> or the <>. Yet all these
caches are maintained at the node level, meaning that if you run the same
request twice in a row, have 1 <> or more
-and use https://en.wikipedia.org/wiki/Round-robin_DNS[round-robin], the default
+and use {wikipedia}/Round-robin_DNS[round-robin], the default
routing algorithm, then those two requests will go to different shard copies,
preventing node-level caches from helping.
diff --git a/docs/reference/index-modules.asciidoc b/docs/reference/index-modules.asciidoc
index 0f1f5e88ac67..26d9075f889a 100644
--- a/docs/reference/index-modules.asciidoc
+++ b/docs/reference/index-modules.asciidoc
@@ -82,7 +82,7 @@ indices.
    The +default+ value compresses stored data with LZ4
    compression, but this can be set to +best_compression+
-    which uses https://en.wikipedia.org/wiki/DEFLATE[DEFLATE] for a higher
+    which uses {wikipedia}/DEFLATE[DEFLATE] for a higher
    compression ratio, at the expense of slower stored fields performance.
    If you are updating the compression type, the new one will be applied
    after segments are merged.
Segment merging can be forced using
diff --git a/docs/reference/indices/data-stream-stats.asciidoc b/docs/reference/indices/data-stream-stats.asciidoc
index 3c97b239946d..8a9a19a74fc2 100644
--- a/docs/reference/indices/data-stream-stats.asciidoc
+++ b/docs/reference/indices/data-stream-stats.asciidoc
@@ -146,7 +146,7 @@ Total size, in bytes, of all shards for the data stream's backing indices.
`maximum_timestamp`::
(integer)
The data stream's highest `@timestamp` value, converted to milliseconds since
-the https://en.wikipedia.org/wiki/Unix_time[Unix epoch].
+the {wikipedia}/Unix_time[Unix epoch].
+
[NOTE]
=====
diff --git a/docs/reference/ingest/processors/dissect.asciidoc b/docs/reference/ingest/processors/dissect.asciidoc
index feedb4c70cfd..b7c5fbaf952c 100644
--- a/docs/reference/ingest/processors/dissect.asciidoc
+++ b/docs/reference/ingest/processors/dissect.asciidoc
@@ -7,7 +7,7 @@
Similar to the <>, dissect also extracts structured fields out of a single text
field within a document. However unlike the <>, dissect does not use
-https://en.wikipedia.org/wiki/Regular_expression[Regular Expressions]. This allows dissect's syntax to be simple and for
+{wikipedia}/Regular_expression[Regular Expressions]. This allows dissect's syntax to be simple and for
some cases faster than the <>.

Dissect matches a single text field against a defined pattern.
diff --git a/docs/reference/mapping/params/similarity.asciidoc b/docs/reference/mapping/params/similarity.asciidoc
index e36d1bce2157..f03fffa750c3 100644
--- a/docs/reference/mapping/params/similarity.asciidoc
+++ b/docs/reference/mapping/params/similarity.asciidoc
@@ -16,7 +16,7 @@ The only similarities which can be used out of the box, without any further
configuration are:

`BM25`::
-The https://en.wikipedia.org/wiki/Okapi_BM25[Okapi BM25 algorithm]. The
+The {wikipedia}/Okapi_BM25[Okapi BM25 algorithm]. The
algorithm used by default in {es} and Lucene.
`boolean`::
diff --git a/docs/reference/mapping/types/binary.asciidoc b/docs/reference/mapping/types/binary.asciidoc
index 60460726b057..ab7b1bd5e9c5 100644
--- a/docs/reference/mapping/types/binary.asciidoc
+++ b/docs/reference/mapping/types/binary.asciidoc
@@ -5,7 +5,7 @@
++++

The `binary` type accepts a binary value as a
-https://en.wikipedia.org/wiki/Base64[Base64] encoded string. The field is not
+{wikipedia}/Base64[Base64] encoded string. The field is not
stored by default and is not searchable:

[source,console]
diff --git a/docs/reference/mapping/types/geo-point.asciidoc b/docs/reference/mapping/types/geo-point.asciidoc
index 5a341772302e..8778a6f1ae56 100644
--- a/docs/reference/mapping/types/geo-point.asciidoc
+++ b/docs/reference/mapping/types/geo-point.asciidoc
@@ -103,7 +103,7 @@ format was changed early on to conform to the format used by GeoJSON.
[NOTE]
A point can be expressed as a {wikipedia}/Geohash[geohash].
-Geohashes are https://en.wikipedia.org/wiki/Base32[base32] encoded strings of
+Geohashes are {wikipedia}/Base32[base32] encoded strings of
the bits of the latitude and longitude interleaved. Each character in a geohash
adds additional 5 bits to the precision. So the longer the hash, the more
precise it is. For the indexing purposed geohashs are translated into
diff --git a/docs/reference/mapping/types/ip.asciidoc b/docs/reference/mapping/types/ip.asciidoc
index 6c2a258848e8..7364f2765aca 100644
--- a/docs/reference/mapping/types/ip.asciidoc
+++ b/docs/reference/mapping/types/ip.asciidoc
@@ -4,8 +4,8 @@
IP
++++

-An `ip` field can index/store either https://en.wikipedia.org/wiki/IPv4[IPv4] or
-https://en.wikipedia.org/wiki/IPv6[IPv6] addresses.
+An `ip` field can index/store either {wikipedia}/IPv4[IPv4] or
+{wikipedia}/IPv6[IPv6] addresses.
[source,console]
--------------------------------------------------
@@ -75,7 +75,7 @@ The following parameters are accepted by `ip` fields:
==== Querying `ip` fields

The most common way to query ip addresses is to use the
-https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation[CIDR]
+{wikipedia}/Classless_Inter-Domain_Routing#CIDR_notation[CIDR]
notation: `[ip_address]/[prefix_length]`. For instance:

[source,console]
diff --git a/docs/reference/mapping/types/range.asciidoc b/docs/reference/mapping/types/range.asciidoc
index a3cd113954d0..b7f59ef83685 100644
--- a/docs/reference/mapping/types/range.asciidoc
+++ b/docs/reference/mapping/types/range.asciidoc
@@ -12,8 +12,8 @@ The following range types are supported:
`long_range`:: A range of signed 64-bit integers with a minimum value of +-2^63^+ and maximum of +2^63^-1+.
`double_range`:: A range of double-precision 64-bit IEEE 754 floating point values.
`date_range`:: A range of date values represented as unsigned 64-bit integer milliseconds elapsed since system epoch.
-`ip_range` :: A range of ip values supporting either https://en.wikipedia.org/wiki/IPv4[IPv4] or
- https://en.wikipedia.org/wiki/IPv6[IPv6] (or mixed) addresses.
+`ip_range` :: A range of ip values supporting either {wikipedia}/IPv4[IPv4] or
+ {wikipedia}/IPv6[IPv6] (or mixed) addresses.

Below is an example of configuring a mapping with various range fields followed by an example that indexes several range types.
@@ -176,7 +176,7 @@ This query produces a similar result:
==== IP Range

In addition to the range format above, IP ranges can be provided in
-https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation[CIDR] notation:
+{wikipedia}/Classless_Inter-Domain_Routing#CIDR_notation[CIDR] notation:

[source,console]
--------------------------------------------------
diff --git a/docs/reference/ml/df-analytics/apis/evaluate-dfanalytics.asciidoc b/docs/reference/ml/df-analytics/apis/evaluate-dfanalytics.asciidoc
index f05e878767a3..50a80591bbed 100644
--- a/docs/reference/ml/df-analytics/apis/evaluate-dfanalytics.asciidoc
+++ b/docs/reference/ml/df-analytics/apis/evaluate-dfanalytics.asciidoc
@@ -126,7 +126,7 @@ which outputs a prediction of values.
`mse`:::
(Optional, object) Average squared difference between the predicted values and the actual (`ground truth`) value.
- For more information, read https://en.wikipedia.org/wiki/Mean_squared_error[this wiki article].
+ For more information, read {wikipedia}/Mean_squared_error[this wiki article].

`msle`:::
(Optional, object) Average squared difference between the logarithm of the predicted values and the logarithm of the actual
@@ -134,11 +134,11 @@ which outputs a prediction of values.
`huber`:::
(Optional, object) Pseudo Huber loss function.
- For more information, read https://en.wikipedia.org/wiki/Huber_loss#Pseudo-Huber_loss_function[this wiki article].
+ For more information, read {wikipedia}/Huber_loss#Pseudo-Huber_loss_function[this wiki article].

`r_squared`:::
(Optional, object) Proportion of the variance in the dependent variable that is predictable from the independent variables.
- For more information, read https://en.wikipedia.org/wiki/Coefficient_of_determination[this wiki article].
+ For more information, read {wikipedia}/Coefficient_of_determination[this wiki article].
diff --git a/docs/reference/ml/df-analytics/apis/put-inference.asciidoc b/docs/reference/ml/df-analytics/apis/put-inference.asciidoc
index 6eab6cfae1f8..4e71a3611c26 100644
--- a/docs/reference/ml/df-analytics/apis/put-inference.asciidoc
+++ b/docs/reference/ml/df-analytics/apis/put-inference.asciidoc
@@ -271,12 +271,12 @@
This `aggregated_output` type works with binary classification (classification
for values [0, 1]). It multiplies the outputs (in the case of the `ensemble`
model, the inference model values) by the supplied `weights`. The resulting
vector is summed and passed to a
-https://en.wikipedia.org/wiki/Sigmoid_function[`sigmoid` function]. The result
+{wikipedia}/Sigmoid_function[`sigmoid` function]. The result
of the `sigmoid` function is considered the probability of class 1 (`P_1`),
consequently, the probability of class 0 is `1 - P_1`. The class with the
highest probability (either 0 or 1) is then returned. For more information about
logistic regression, see
-https://en.wikipedia.org/wiki/Logistic_regression[this wiki article].
+{wikipedia}/Logistic_regression[this wiki article].
+
.Properties of `logistic_regression`
[%collapsible%open]
diff --git a/docs/reference/ml/ml-shared.asciidoc b/docs/reference/ml/ml-shared.asciidoc
index 5fd320689bce..88de83639944 100644
--- a/docs/reference/ml/ml-shared.asciidoc
+++ b/docs/reference/ml/ml-shared.asciidoc
@@ -604,7 +604,7 @@
Advanced configuration option. The shrinkage applied to the weights. Smaller
values result in larger forests which have a better generalization error.
However, the smaller the value the longer the training will take. For more
information about shrinkage, see
-https://en.wikipedia.org/wiki/Gradient_boosting#Shrinkage[this wiki article].
+{wikipedia}/Gradient_boosting#Shrinkage[this wiki article].
By default, this value is calculated during hyperparameter optimization.
end::eta[]
diff --git a/docs/reference/modules/http.asciidoc b/docs/reference/modules/http.asciidoc
index f1fe431baf14..26b7f451178f 100644
--- a/docs/reference/modules/http.asciidoc
+++ b/docs/reference/modules/http.asciidoc
@@ -53,7 +53,7 @@ The max size of allowed headers. Defaults to `8KB`.
Support for compression when possible (with Accept-Encoding). If HTTPS is enabled, defaults to `false`. Otherwise, defaults to `true`.
+
Disabling compression for HTTPS mitigates potential security risks, such as a
-https://en.wikipedia.org/wiki/BREACH[BREACH attack]. To compress HTTPS traffic,
+{wikipedia}/BREACH[BREACH attack]. To compress HTTPS traffic,
you must explicitly set `http.compression` to `true`.
// end::http-compression-tag[]

@@ -64,7 +64,7 @@ Defines the compression level to use for HTTP responses. Valid values are in the
// tag::http-cors-enabled-tag[]
`http.cors.enabled` {ess-icon}::
Enable or disable cross-origin resource sharing, which determines whether a browser on another origin can execute requests against {es}. Set to `true` to enable {es} to process pre-flight
-https://en.wikipedia.org/wiki/Cross-origin_resource_sharing[CORS] requests.
+{wikipedia}/Cross-origin_resource_sharing[CORS] requests.
{es} will respond to those requests with the `Access-Control-Allow-Origin` header if the `Origin` sent in the request is permitted by the `http.cors.allow-origin` list. Set to `false` (the default) to make {es} ignore the `Origin` request header, effectively disabling CORS requests because {es} will never respond with the `Access-Control-Allow-Origin` response header.
+
NOTE: If the client does not send a pre-flight request with an `Origin` header or it does not check the response headers from the server to validate the
@@ -124,7 +124,7 @@ The maximum number of warning headers in client HTTP responses. Defaults to `unbounded`.
The maximum total size of warning headers in client HTTP responses. Defaults to `unbounded`.
`http.tcp.no_delay`::
-Enable or disable the https://en.wikipedia.org/wiki/Nagle%27s_algorithm[TCP no delay]
+Enable or disable the {wikipedia}/Nagle%27s_algorithm[TCP no delay]
setting. Defaults to `network.tcp.no_delay`.

`http.tcp.keep_alive`::
diff --git a/docs/reference/modules/network.asciidoc b/docs/reference/modules/network.asciidoc
index 058b9e2fbf9e..39c81653f590 100644
--- a/docs/reference/modules/network.asciidoc
+++ b/docs/reference/modules/network.asciidoc
@@ -34,7 +34,7 @@ at least some of the other nodes in the cluster. This setting provides the
initial list of addresses this node will try to contact. Accepts IP addresses
or hostnames. If a hostname lookup resolves to multiple IP addresses then each
IP address will be used for discovery.
-https://en.wikipedia.org/wiki/Round-robin_DNS[Round robin DNS] -- returning a
+{wikipedia}/Round-robin_DNS[Round robin DNS] -- returning a
different IP from a list on each lookup -- can be used for discovery; non-
existent IP addresses will throw exceptions and cause another DNS lookup on the
next round of pinging (subject to <>
and <> layers) share the following settings:

`network.tcp.no_delay`::
-Enable or disable the https://en.wikipedia.org/wiki/Nagle%27s_algorithm[TCP no delay]
+Enable or disable the {wikipedia}/Nagle%27s_algorithm[TCP no delay]
setting. Defaults to `true`.

`network.tcp.keep_alive`::
diff --git a/docs/reference/modules/transport.asciidoc b/docs/reference/modules/transport.asciidoc
index 2493f078db4f..791184ac9285 100644
--- a/docs/reference/modules/transport.asciidoc
+++ b/docs/reference/modules/transport.asciidoc
@@ -53,7 +53,7 @@ TCP keep-alives apply to all kinds of long-lived connections and not just to
transport connections.

`transport.tcp.no_delay`::
-Enable or disable the https://en.wikipedia.org/wiki/Nagle%27s_algorithm[TCP no delay]
+Enable or disable the {wikipedia}/Nagle%27s_algorithm[TCP no delay]
setting. Defaults to `network.tcp.no_delay`.
`transport.tcp.keep_alive`::
diff --git a/docs/reference/query-dsl/function-score-query.asciidoc b/docs/reference/query-dsl/function-score-query.asciidoc
index 71d071046402..9e742c90a552 100644
--- a/docs/reference/query-dsl/function-score-query.asciidoc
+++ b/docs/reference/query-dsl/function-score-query.asciidoc
@@ -309,19 +309,19 @@ There are a number of options for the `field_value_factor` function:
| Modifier | Meaning

| `none` | Do not apply any multiplier to the field value
-| `log` | Take the https://en.wikipedia.org/wiki/Common_logarithm[common logarithm] of the field value.
+| `log` | Take the {wikipedia}/Common_logarithm[common logarithm] of the field value.
Because this function will return a negative value and cause an error if used on values
between 0 and 1, it is recommended to use `log1p` instead.
| `log1p` | Add 1 to the field value and take the common logarithm
| `log2p` | Add 2 to the field value and take the common logarithm
-| `ln` | Take the https://en.wikipedia.org/wiki/Natural_logarithm[natural logarithm] of the field value.
+| `ln` | Take the {wikipedia}/Natural_logarithm[natural logarithm] of the field value.
Because this function will return a negative value and cause an error if used on values
between 0 and 1, it is recommended to use `ln1p` instead.
| `ln1p` | Add 1 to the field value and take the natural logarithm
| `ln2p` | Add 2 to the field value and take the natural logarithm
| `square` | Square the field value (multiply it by itself)
-| `sqrt` | Take the https://en.wikipedia.org/wiki/Square_root[square root] of the field value
-| `reciprocal` | https://en.wikipedia.org/wiki/Multiplicative_inverse[Reciprocate] the field value, same as `1/x` where `x` is the field's value
+| `sqrt` | Take the {wikipedia}/Square_root[square root] of the field value
+| `reciprocal` | {wikipedia}/Multiplicative_inverse[Reciprocate] the field value, same as `1/x` where `x` is the field's value
|=======================================================================

`missing`::
diff --git a/docs/reference/query-dsl/fuzzy-query.asciidoc b/docs/reference/query-dsl/fuzzy-query.asciidoc
index 9a7ffa04643c..ef9c9a9dbf7b 100644
--- a/docs/reference/query-dsl/fuzzy-query.asciidoc
+++ b/docs/reference/query-dsl/fuzzy-query.asciidoc
@@ -5,7 +5,7 @@
++++

Returns documents that contain terms similar to the search term, as measured by
-a https://en.wikipedia.org/wiki/Levenshtein_distance[Levenshtein edit distance].
+a {wikipedia}/Levenshtein_distance[Levenshtein edit distance].

An edit distance is the number of one-character changes needed to turn one term
into another. These changes can include:
diff --git a/docs/reference/query-dsl/multi-term-rewrite.asciidoc b/docs/reference/query-dsl/multi-term-rewrite.asciidoc
index 4903f35f458c..fe415f4eb5b4 100644
--- a/docs/reference/query-dsl/multi-term-rewrite.asciidoc
+++ b/docs/reference/query-dsl/multi-term-rewrite.asciidoc
@@ -16,7 +16,7 @@ following queries:

To execute them, Lucene changes these queries to a simpler form, such as a
<> or a
-https://en.wikipedia.org/wiki/Bit_array[bit set].
+{wikipedia}/Bit_array[bit set].
The `rewrite` parameter determines:
diff --git a/docs/reference/query-dsl/query-string-query.asciidoc b/docs/reference/query-dsl/query-string-query.asciidoc
index 4d6ac96ff8bf..c2a9428c3506 100644
--- a/docs/reference/query-dsl/query-string-query.asciidoc
+++ b/docs/reference/query-dsl/query-string-query.asciidoc
@@ -165,7 +165,7 @@ value for a <> field, are ignored. Defaults to `false`.
+
--
(Optional, integer) Maximum number of
-https://en.wikipedia.org/wiki/Deterministic_finite_automaton[automaton states]
+{wikipedia}/Deterministic_finite_automaton[automaton states]
required for the query. Default is `10000`.

{es} uses https://lucene.apache.org/core/[Apache Lucene] internally to parse
@@ -217,9 +217,9 @@ information, see the <>.
+
--
(Optional, string)
-https://en.wikipedia.org/wiki/List_of_UTC_time_offsets[Coordinated Universal
+{wikipedia}/List_of_UTC_time_offsets[Coordinated Universal
Time (UTC) offset] or
-https://en.wikipedia.org/wiki/List_of_tz_database_time_zones[IANA time zone]
+{wikipedia}/List_of_tz_database_time_zones[IANA time zone]
used to convert `date` values in the query string to UTC.

Valid values are ISO 8601 UTC offsets, such as `+01:00` or -`08:00`, and IANA
diff --git a/docs/reference/query-dsl/range-query.asciidoc b/docs/reference/query-dsl/range-query.asciidoc
index 4f19a4cbb569..a35f8d59bce4 100644
--- a/docs/reference/query-dsl/range-query.asciidoc
+++ b/docs/reference/query-dsl/range-query.asciidoc
@@ -66,7 +66,7 @@ For valid syntax, see <>.
====
If a `format` and `date` value are incomplete, {es} replaces any missing year,
month, or date component with the start of
-https://en.wikipedia.org/wiki/Unix_time[Unix time], which is January 1st, 1970.
+{wikipedia}/Unix_time[Unix time], which is January 1st, 1970.

For example, if the `format` value is `dd`, {es} converts a `gte` value of `10`
to `1970-01-10T00:00:00.000Z`.
@@ -95,9 +95,9 @@ Matches documents with a range field value entirely within the query's range.
+
--
(Optional, string)
-https://en.wikipedia.org/wiki/List_of_UTC_time_offsets[Coordinated Universal
+{wikipedia}/List_of_UTC_time_offsets[Coordinated Universal
Time (UTC) offset] or
-https://en.wikipedia.org/wiki/List_of_tz_database_time_zones[IANA time zone]
+{wikipedia}/List_of_tz_database_time_zones[IANA time zone]
used to convert `date` values in the query to UTC.

Valid values are ISO 8601 UTC offsets, such as `+01:00` or -`08:00`, and IANA
diff --git a/docs/reference/query-dsl/regexp-query.asciidoc b/docs/reference/query-dsl/regexp-query.asciidoc
index e3d310e7d1a1..5b9e4ea8a22d 100644
--- a/docs/reference/query-dsl/regexp-query.asciidoc
+++ b/docs/reference/query-dsl/regexp-query.asciidoc
@@ -5,7 +5,7 @@
++++

Returns documents that contain terms matching a
-https://en.wikipedia.org/wiki/Regular_expression[regular expression].
+{wikipedia}/Regular_expression[regular expression].

A regular expression is a way to match patterns in data using placeholder
characters, called operators. For a list of operators supported by the
@@ -71,7 +71,7 @@ expression syntax>>.
+
--
(Optional, integer) Maximum number of
-https://en.wikipedia.org/wiki/Deterministic_finite_automaton[automaton states]
+{wikipedia}/Deterministic_finite_automaton[automaton states]
required for the query. Default is `10000`.

{es} uses https://lucene.apache.org/core/[Apache Lucene] internally to parse
diff --git a/docs/reference/query-dsl/regexp-syntax.asciidoc b/docs/reference/query-dsl/regexp-syntax.asciidoc
index 2ff5fa4373fa..57c8c9d35b8f 100644
--- a/docs/reference/query-dsl/regexp-syntax.asciidoc
+++ b/docs/reference/query-dsl/regexp-syntax.asciidoc
@@ -1,7 +1,7 @@
[[regexp-syntax]]
== Regular expression syntax

-A https://en.wikipedia.org/wiki/Regular_expression[regular expression] is a way to
+A {wikipedia}/Regular_expression[regular expression] is a way to
match patterns in data using placeholder characters, called operators.
{es} supports regular expressions in the following queries:
@@ -44,7 +44,7 @@ backslash or surround it with double quotes. For example:
=== Standard operators

Lucene's regular expression engine does not use the
-https://en.wikipedia.org/wiki/Perl_Compatible_Regular_Expressions[Perl
+{wikipedia}/Perl_Compatible_Regular_Expressions[Perl
Compatible Regular Expressions (PCRE)] library, but it does support the
following standard operators.
diff --git a/docs/reference/query-dsl/shape-query.asciidoc b/docs/reference/query-dsl/shape-query.asciidoc
index 29406dde04ae..326ed7b8ab3b 100644
--- a/docs/reference/query-dsl/shape-query.asciidoc
+++ b/docs/reference/query-dsl/shape-query.asciidoc
@@ -19,7 +19,7 @@ examples.

Similar to the `geo_shape` query, the `shape` query uses
http://geojson.org[GeoJSON] or
-https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry[Well Known Text]
+{wikipedia}/Well-known_text_representation_of_geometry[Well Known Text]
(WKT) to represent shapes.

Given the following index:
diff --git a/docs/reference/query-dsl/term-level-queries.asciidoc b/docs/reference/query-dsl/term-level-queries.asciidoc
index 62868b7fc958..187f48188228 100644
--- a/docs/reference/query-dsl/term-level-queries.asciidoc
+++ b/docs/reference/query-dsl/term-level-queries.asciidoc
@@ -39,7 +39,7 @@ Returns documents that contain terms within a provided range.
<>::
Returns documents that contain terms matching a
-https://en.wikipedia.org/wiki/Regular_expression[regular expression].
+{wikipedia}/Regular_expression[regular expression].

<>::
Returns documents that contain an exact term in a provided field.
diff --git a/docs/reference/scripting/security.asciidoc b/docs/reference/scripting/security.asciidoc
index 426305df562a..505c4db3f3f5 100644
--- a/docs/reference/scripting/security.asciidoc
+++ b/docs/reference/scripting/security.asciidoc
@@ -3,7 +3,7 @@

While Elasticsearch contributors make every effort to prevent scripts from
running amok, security is something best done in
-https://en.wikipedia.org/wiki/Defense_in_depth_(computing)[layers] because
+{wikipedia}/Defense_in_depth_(computing)[layers] because
all software has bugs and it is important to minimize the risk of failure in
any security layer. Find below rules of thumb for how to keep Elasticsearch
from being a vulnerability.
@@ -63,7 +63,7 @@ preventing them from being able to do things like write files and listen to
sockets.

Elasticsearch uses
-https://en.wikipedia.org/wiki/Seccomp[seccomp] in Linux,
+{wikipedia}/Seccomp[seccomp] in Linux,
https://www.chromium.org/developers/design-documents/sandbox/osx-sandboxing-design[Seatbelt]
in macOS, and
https://msdn.microsoft.com/en-us/library/windows/desktop/ms684147[ActiveProcessLimit]
diff --git a/docs/reference/search/rank-eval.asciidoc b/docs/reference/search/rank-eval.asciidoc
index dc9abe1f9483..1970c2c568a7 100644
--- a/docs/reference/search/rank-eval.asciidoc
+++ b/docs/reference/search/rank-eval.asciidoc
@@ -214,7 +214,7 @@ will be used. The following metrics are supported:
This metric measures the proportion of relevant results in the top k search
results. It's a form of the well-known
-https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision[Precision]
+{wikipedia}/Evaluation_measures_(information_retrieval)#Precision[Precision]
metric that only looks at the top k documents. It is the fraction of relevant
documents in those first k results.
A precision at 10 (P@10) value of 0.6 then means 6 out of the 10 top hits are relevant with respect to the user's @@ -269,7 +269,7 @@ If set to 'true', unlabeled documents are ignored and neither count as relevant This metric measures the total number of relevant results in the top k search results. It's a form of the well-known -https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Recall[Recall] +{wikipedia}/Evaluation_measures_(information_retrieval)#Recall[Recall] metric. It is the fraction of relevant documents in those first k results relative to all possible relevant results. A recall at 10 (R@10) value of 0.5 then means 4 out of 8 relevant documents, with respect to the user's information @@ -322,7 +322,7 @@ For every query in the test suite, this metric calculates the reciprocal of the rank of the first relevant document. For example, finding the first relevant result in position 3 means the reciprocal rank is 1/3. The reciprocal rank for each query is averaged across all queries in the test suite to give the -https://en.wikipedia.org/wiki/Mean_reciprocal_rank[mean reciprocal rank]. +{wikipedia}/Mean_reciprocal_rank[mean reciprocal rank]. [source,console] -------------------------------- @@ -360,7 +360,7 @@ in the query. Defaults to 10. ===== Discounted cumulative gain (DCG) In contrast to the two metrics above, -https://en.wikipedia.org/wiki/Discounted_cumulative_gain[discounted cumulative gain] +{wikipedia}/Discounted_cumulative_gain[discounted cumulative gain] takes both the rank and the rating of the search results into account. The assumption is that highly relevant documents are more useful for the user @@ -395,7 +395,7 @@ The `dcg` metric takes the following optional parameters: |Parameter |Description |`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter in the query. Defaults to 10. 
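The rank-eval metrics touched in the hunks above (precision at k, recall at k, and mean reciprocal rank) can be sketched in a few lines. This is an illustrative reimplementation, not the actual rank-eval code; `ranked` and `relevant` are hypothetical names for a result list and the set of relevant document IDs.

```python
def precision_at_k(ranked, relevant, k=10):
    """Fraction of the top k results that are relevant (P@k)."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def recall_at_k(ranked, relevant, k=10):
    """Fraction of all relevant documents that appear in the top k (R@k)."""
    return sum(1 for d in ranked[:k] if d in relevant) / len(relevant)

def mean_reciprocal_rank(rankings, relevant_sets):
    """Average of 1/rank of the first relevant hit, over all queries."""
    total = 0.0
    for ranked, relevant in zip(rankings, relevant_sets):
        for pos, d in enumerate(ranked, start=1):
            if d in relevant:
                total += 1.0 / pos
                break
    return total / len(rankings)
```

With 6 relevant documents in the top 10, `precision_at_k` yields 0.6, matching the P@10 example in the text; a first relevant hit at position 3 contributes a reciprocal rank of 1/3.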
-|`normalize` | If set to `true`, this metric will calculate the https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG[Normalized DCG]. +|`normalize` | If set to `true`, this metric will calculate the {wikipedia}/Discounted_cumulative_gain#Normalized_DCG[Normalized DCG]. |======================================================================= diff --git a/docs/reference/search/search-fields.asciidoc b/docs/reference/search/search-fields.asciidoc index 782205358a0a..cc0483e4bd64 100644 --- a/docs/reference/search/search-fields.asciidoc +++ b/docs/reference/search/search-fields.asciidoc @@ -100,7 +100,7 @@ POST my-index-000001/_search <>. <> accept either `geojson` for http://www.geojson.org[GeoJSON] (the default) or `wkt` for - https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry[Well Known Text]. + {wikipedia}/Well-known_text_representation_of_geometry[Well Known Text]. Other field types do not support the `format` parameter. The values are returned as a flat list in the `fields` section in each hit: diff --git a/docs/reference/settings/ml-settings.asciidoc b/docs/reference/settings/ml-settings.asciidoc index 9814c8cb6a7f..edd0fdca9f6e 100644 --- a/docs/reference/settings/ml-settings.asciidoc +++ b/docs/reference/settings/ml-settings.asciidoc @@ -11,7 +11,7 @@ You do not need to configure any settings to use {ml}. It is enabled by default. IMPORTANT: {ml-cap} uses SSE4.2 instructions, so will only work on machines whose -CPUs https://en.wikipedia.org/wiki/SSE4#Supporting_CPUs[support] SSE4.2. If you +CPUs {wikipedia}/SSE4#Supporting_CPUs[support] SSE4.2. If you run {es} on older hardware you must disable {ml} (by setting `xpack.ml.enabled` to `false`). 
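Discounted cumulative gain and its `normalize` option can be sketched as follows. This uses one common DCG formulation, `(2^rel - 1) / log2(rank + 1)`; it is an illustration of the idea, not necessarily the exact gain function the rank-eval API uses.

```python
import math

def dcg(ratings):
    """Discounted cumulative gain over graded relevance ratings,
    discounting lower-ranked results logarithmically."""
    return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(ratings))

def ndcg(ratings):
    """DCG normalized by the DCG of the ideal (descending) ordering,
    i.e. what `normalize: true` computes."""
    ideal = dcg(sorted(ratings, reverse=True))
    return dcg(ratings) / ideal if ideal > 0 else 0.0
```

A result list already sorted by relevance scores an NDCG of 1.0; any inversion lowers it.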
diff --git a/docs/reference/setup/bootstrap-checks.asciidoc b/docs/reference/setup/bootstrap-checks.asciidoc index cf544b733295..74269f7d49c0 100644 --- a/docs/reference/setup/bootstrap-checks.asciidoc +++ b/docs/reference/setup/bootstrap-checks.asciidoc @@ -77,7 +77,7 @@ heap size check, you must configure the <>. === File descriptor check File descriptors are a Unix construct for tracking open "files". In Unix -though, https://en.wikipedia.org/wiki/Everything_is_a_file[everything is +though, {wikipedia}/Everything_is_a_file[everything is a file]. For example, "files" could be a physical file, a virtual file (e.g., `/proc/loadavg`), or network sockets. Elasticsearch requires lots of file descriptors (e.g., every shard is composed of multiple diff --git a/docs/reference/setup/sysconfig/configuring.asciidoc b/docs/reference/setup/sysconfig/configuring.asciidoc index 153bbf7cefa8..7976efee84fe 100644 --- a/docs/reference/setup/sysconfig/configuring.asciidoc +++ b/docs/reference/setup/sysconfig/configuring.asciidoc @@ -81,7 +81,7 @@ However, system limits need to be specified via <>. ==== Systemd configuration When using the RPM or Debian packages on systems that use -https://en.wikipedia.org/wiki/Systemd[systemd], system limits must be +{wikipedia}/Systemd[systemd], system limits must be specified via systemd. The systemd service file (`/usr/lib/systemd/system/elasticsearch.service`) diff --git a/docs/reference/sql/concepts.asciidoc b/docs/reference/sql/concepts.asciidoc index 49faaa9cf9b0..a7363bdfeb37 100644 --- a/docs/reference/sql/concepts.asciidoc +++ b/docs/reference/sql/concepts.asciidoc @@ -9,7 +9,7 @@ NOTE: This documentation while trying to be complete, does assume the reader has As a general rule, {es-sql} as the name indicates provides a SQL interface to {es}. As such, it follows the SQL terminology and conventions first, whenever possible. 
However the backing engine itself is {es} for which {es-sql} was purposely created hence why features or concepts that are not available, or cannot be mapped correctly, in SQL appear in {es-sql}. -Last but not least, {es-sql} tries to obey the https://en.wikipedia.org/wiki/Principle_of_least_astonishment[principle of least surprise], though as all things in the world, everything is relative. +Last but not least, {es-sql} tries to obey the {wikipedia}/Principle_of_least_astonishment[principle of least surprise], though as all things in the world, everything is relative. === Mapping concepts across SQL and {es} diff --git a/docs/reference/sql/endpoints/rest.asciidoc b/docs/reference/sql/endpoints/rest.asciidoc index f81b090afa48..386124eda70d 100644 --- a/docs/reference/sql/endpoints/rest.asciidoc +++ b/docs/reference/sql/endpoints/rest.asciidoc @@ -76,7 +76,7 @@ s|Description |csv |text/csv -|https://en.wikipedia.org/wiki/Comma-separated_values[Comma-separated values] +|{wikipedia}/Comma-separated_values[Comma-separated values] |json |application/json @@ -84,7 +84,7 @@ s|Description |tsv |text/tab-separated-values -|https://en.wikipedia.org/wiki/Tab-separated_values[Tab-separated values] +|{wikipedia}/Tab-separated_values[Tab-separated values] |txt |text/plain @@ -92,7 +92,7 @@ s|Description |yaml |application/yaml -|https://en.wikipedia.org/wiki/YAML[YAML] (YAML Ain't Markup Language) human-readable format +|{wikipedia}/YAML[YAML] (YAML Ain't Markup Language) human-readable format 3+h| Binary Formats @@ -102,7 +102,7 @@ s|Description |smile |application/smile -|https://en.wikipedia.org/wiki/Smile_(data_interchange_format)[Smile] binary data format similar to CBOR +|{wikipedia}/Smile_(data_interchange_format)[Smile] binary data format similar to CBOR |=== diff --git a/docs/reference/sql/functions/aggs.asciidoc b/docs/reference/sql/functions/aggs.asciidoc index 23ca1b7cfa1d..b7121189af17 100644 --- a/docs/reference/sql/functions/aggs.asciidoc +++ 
b/docs/reference/sql/functions/aggs.asciidoc @@ -25,7 +25,7 @@ AVG(numeric_field) <1> *Output*: `double` numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Arithmetic_mean[Average] (arithmetic mean) of input values. +*Description*: Returns the {wikipedia}/Arithmetic_mean[Average] (arithmetic mean) of input values. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -424,7 +424,7 @@ KURTOSIS(field_name) <1> *Description*: -https://en.wikipedia.org/wiki/Kurtosis[Quantify] the shape of the distribution of input values in the field `field_name`. +{wikipedia}/Kurtosis[Quantify] the shape of the distribution of input values in the field `field_name`. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -458,7 +458,7 @@ MAD(field_name) <1> *Description*: -https://en.wikipedia.org/wiki/Median_absolute_deviation[Measure] the variability of the input values in the field `field_name`. +{wikipedia}/Median_absolute_deviation[Measure] the variability of the input values in the field `field_name`. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -490,7 +490,7 @@ PERCENTILE( *Description*: -Returns the nth https://en.wikipedia.org/wiki/Percentile[percentile] (represented by `numeric_exp` parameter) +Returns the nth {wikipedia}/Percentile[percentile] (represented by `numeric_exp` parameter) of input values in the field `field_name`. ["source","sql",subs="attributes,macros"] @@ -523,7 +523,7 @@ PERCENTILE_RANK( *Description*: -Returns the nth https://en.wikipedia.org/wiki/Percentile_rank[percentile rank] (represented by `numeric_exp` parameter) +Returns the nth {wikipedia}/Percentile_rank[percentile rank] (represented by `numeric_exp` parameter) of input values in the field `field_name`. 
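The MAD aggregate changed above measures variability as the median of absolute deviations from the median. A minimal sketch using Python's standard library:

```python
import statistics

def mad(values):
    """Median absolute deviation: median of |x - median(x)|,
    a robust alternative to standard deviation."""
    m = statistics.median(values)
    return statistics.median(abs(v - m) for v in values)
```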
["source","sql",subs="attributes,macros"] @@ -553,7 +553,7 @@ SKEWNESS(field_name) <1> *Description*: -https://en.wikipedia.org/wiki/Skewness[Quantify] the asymmetric distribution of input values in the field `field_name`. +{wikipedia}/Skewness[Quantify] the asymmetric distribution of input values in the field `field_name`. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -587,7 +587,7 @@ STDDEV_POP(field_name) <1> *Description*: -Returns the https://en.wikipedia.org/wiki/Standard_deviations[population standard deviation] of input values in the field `field_name`. +Returns the {wikipedia}/Standard_deviations[population standard deviation] of input values in the field `field_name`. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -616,7 +616,7 @@ STDDEV_SAMP(field_name) <1> *Description*: -Returns the https://en.wikipedia.org/wiki/Standard_deviations[sample standard deviation] of input values in the field `field_name`. +Returns the {wikipedia}/Standard_deviations[sample standard deviation] of input values in the field `field_name`. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -674,7 +674,7 @@ VAR_POP(field_name) <1> *Description*: -Returns the https://en.wikipedia.org/wiki/Variance[population variance] of input values in the field `field_name`. +Returns the {wikipedia}/Variance[population variance] of input values in the field `field_name`. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -704,7 +704,7 @@ VAR_SAMP(field_name) <1> *Description*: -Returns the https://en.wikipedia.org/wiki/Variance[sample variance] of input values in the field `field_name`. +Returns the {wikipedia}/Variance[sample variance] of input values in the field `field_name`. 
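The population vs. sample distinction behind `STDDEV_POP`/`STDDEV_SAMP` and `VAR_POP`/`VAR_SAMP` is whether the sum of squared deviations is divided by N or by N - 1. Python's `statistics` module mirrors the same split:

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

pop_var = statistics.pvariance(data)   # VAR_POP: divides by N
samp_var = statistics.variance(data)   # VAR_SAMP: divides by N - 1
pop_std = statistics.pstdev(data)      # STDDEV_POP
samp_std = statistics.stdev(data)      # STDDEV_SAMP
```

For this data set (mean 5, squared deviations summing to 32), the population variance is 32/8 = 4 while the sample variance is 32/7.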
["source","sql",subs="attributes,macros"] -------------------------------------------------- diff --git a/docs/reference/sql/functions/date-time.asciidoc b/docs/reference/sql/functions/date-time.asciidoc index 97be32679730..2d8139624cdc 100644 --- a/docs/reference/sql/functions/date-time.asciidoc +++ b/docs/reference/sql/functions/date-time.asciidoc @@ -890,7 +890,7 @@ ISO_DAY_OF_WEEK(datetime_exp) <1> *Output*: integer -*Description*: Extract the day of the week from a date/datetime, following the https://en.wikipedia.org/wiki/ISO_week_date[ISO 8601 standard]. +*Description*: Extract the day of the week from a date/datetime, following the {wikipedia}/ISO_week_date[ISO 8601 standard]. Monday is `1`, Tuesday is `2`, etc. [source, sql] @@ -913,7 +913,7 @@ ISO_WEEK_OF_YEAR(datetime_exp) <1> *Output*: integer -*Description*: Extract the week of the year from a date/datetime, following https://en.wikipedia.org/wiki/ISO_week_date[ISO 8601 standard]. The first week +*Description*: Extract the week of the year from a date/datetime, following {wikipedia}/ISO_week_date[ISO 8601 standard]. The first week of a year is the first week with a majority (4 or more) of its days in January. [source, sql] diff --git a/docs/reference/sql/functions/math.asciidoc b/docs/reference/sql/functions/math.asciidoc index 433132de8b4e..f2ebe488248f 100644 --- a/docs/reference/sql/functions/math.asciidoc +++ b/docs/reference/sql/functions/math.asciidoc @@ -25,7 +25,7 @@ ABS(numeric_exp) <1> *Output*: numeric -*Description*: Returns the https://en.wikipedia.org/wiki/Absolute_value[absolute value] of `numeric_exp`. The return type is the same as the input type. +*Description*: Returns the {wikipedia}/Absolute_value[absolute value] of `numeric_exp`. The return type is the same as the input type. 
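The ISO 8601 week-date rules behind `ISO_DAY_OF_WEEK` and `ISO_WEEK_OF_YEAR` (Monday is day 1; the first week of a year is the one with four or more of its days in January) are also what `datetime.date.isocalendar()` implements:

```python
import datetime

# 2016-01-01 fell on a Friday; under ISO 8601 it belongs to
# week 53 of 2015, because fewer than 4 of that week's days are in January.
iso_year, iso_week, iso_day = datetime.date(2016, 1, 1).isocalendar()
```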
["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -47,7 +47,7 @@ CBRT(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Cube_root[cube root] of `numeric_exp`. +*Description*: Returns the {wikipedia}/Cube_root[cube root] of `numeric_exp`. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -89,7 +89,7 @@ E() *Output*: `2.718281828459045` -*Description*: Returns https://en.wikipedia.org/wiki/E_%28mathematical_constant%29[Euler's number]. +*Description*: Returns {wikipedia}/E_%28mathematical_constant%29[Euler's number]. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -111,7 +111,7 @@ EXP(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns https://en.wikipedia.org/wiki/Exponential_function[Euler's number at the power] of `numeric_exp` e^numeric_exp^. +*Description*: Returns {wikipedia}/Exponential_function[Euler's number at the power] of `numeric_exp` e^numeric_exp^. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -177,7 +177,7 @@ LOG(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Natural_logarithm[natural logarithm] of `numeric_exp`. +*Description*: Returns the {wikipedia}/Natural_logarithm[natural logarithm] of `numeric_exp`. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -199,7 +199,7 @@ LOG10(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Common_logarithm[base 10 logarithm] of `numeric_exp`. +*Description*: Returns the {wikipedia}/Common_logarithm[base 10 logarithm] of `numeric_exp`. 
["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -219,7 +219,7 @@ PI() *Output*: `3.141592653589793` -*Description*: Returns https://en.wikipedia.org/wiki/Pi[PI number]. +*Description*: Returns {wikipedia}/Pi[PI number]. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -348,7 +348,7 @@ SQRT(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns https://en.wikipedia.org/wiki/Square_root[square root] of `numeric_exp`. +*Description*: Returns {wikipedia}/Square_root[square root] of `numeric_exp`. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -406,7 +406,7 @@ ACOS(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Inverse_trigonometric_functions[arccosine] of `numeric_exp` as an angle, expressed in radians. +*Description*: Returns the {wikipedia}/Inverse_trigonometric_functions[arccosine] of `numeric_exp` as an angle, expressed in radians. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -428,7 +428,7 @@ ASIN(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Inverse_trigonometric_functions[arcsine] of `numeric_exp` as an angle, expressed in radians. +*Description*: Returns the {wikipedia}/Inverse_trigonometric_functions[arcsine] of `numeric_exp` as an angle, expressed in radians. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -450,7 +450,7 @@ ATAN(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Inverse_trigonometric_functions[arctangent] of `numeric_exp` as an angle, expressed in radians. +*Description*: Returns the {wikipedia}/Inverse_trigonometric_functions[arctangent] of `numeric_exp` as an angle, expressed in radians. 
["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -475,7 +475,7 @@ ATAN2( *Output*: double numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Atan2[arctangent of the `ordinate` and `abscisa` coordinates] specified as an angle, expressed in radians. +*Description*: Returns the {wikipedia}/Atan2[arctangent of the `ordinate` and `abscisa` coordinates] specified as an angle, expressed in radians. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -497,7 +497,7 @@ COS(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Trigonometric_functions#cosine[cosine] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians. +*Description*: Returns the {wikipedia}/Trigonometric_functions#cosine[cosine] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -519,7 +519,7 @@ COSH(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Hyperbolic_function[hyperbolic cosine] of `numeric_exp`. +*Description*: Returns the {wikipedia}/Hyperbolic_function[hyperbolic cosine] of `numeric_exp`. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -541,7 +541,7 @@ COT(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Trigonometric_functions#Cosecant,_secant,_and_cotangent[cotangent] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians. +*Description*: Returns the {wikipedia}/Trigonometric_functions#Cosecant,_secant,_and_cotangent[cotangent] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians. 
["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -563,8 +563,8 @@ DEGREES(numeric_exp) <1> *Output*: double numeric value -*Description*: Convert from https://en.wikipedia.org/wiki/Radian[radians] -to https://en.wikipedia.org/wiki/Degree_(angle)[degrees]. +*Description*: Convert from {wikipedia}/Radian[radians] +to {wikipedia}/Degree_(angle)[degrees]. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -586,8 +586,8 @@ RADIANS(numeric_exp) <1> *Output*: double numeric value -*Description*: Convert from https://en.wikipedia.org/wiki/Degree_(angle)[degrees] -to https://en.wikipedia.org/wiki/Radian[radians]. +*Description*: Convert from {wikipedia}/Degree_(angle)[degrees] +to {wikipedia}/Radian[radians]. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -609,7 +609,7 @@ SIN(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Trigonometric_functions#sine[sine] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians. +*Description*: Returns the {wikipedia}/Trigonometric_functions#sine[sine] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -631,7 +631,7 @@ SINH(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Hyperbolic_function[hyperbolic sine] of `numeric_exp`. +*Description*: Returns the {wikipedia}/Hyperbolic_function[hyperbolic sine] of `numeric_exp`. ["source","sql",subs="attributes,macros"] -------------------------------------------------- @@ -653,7 +653,7 @@ TAN(numeric_exp) <1> *Output*: double numeric value -*Description*: Returns the https://en.wikipedia.org/wiki/Trigonometric_functions#tangent[tangent] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians. 
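The `DEGREES`/`RADIANS` pair and `ATAN2` changed above behave like their counterparts in Python's `math` module, which can serve as a quick sanity check of the semantics (note `atan2` takes the ordinate first, then the abscissa):

```python
import math

deg = math.degrees(math.pi)      # 180.0
rad = math.radians(180.0)        # pi, up to float rounding
angle = math.atan2(1.0, 1.0)     # pi / 4: point (1, 1) is at 45 degrees
```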
+*Description*: Returns the {wikipedia}/Trigonometric_functions#tangent[tangent] of `numeric_exp`, where `numeric_exp` is an angle expressed in radians. ["source","sql",subs="attributes,macros"] -------------------------------------------------- diff --git a/x-pack/docs/en/rest-api/security/create-users.asciidoc b/x-pack/docs/en/rest-api/security/create-users.asciidoc index 29f40cce2262..b6012d765028 100644 --- a/x-pack/docs/en/rest-api/security/create-users.asciidoc +++ b/x-pack/docs/en/rest-api/security/create-users.asciidoc @@ -43,7 +43,7 @@ For more information about the native realm, see [[username-validation]] NOTE: Usernames must be at least 1 and no more than 1024 characters. They can contain alphanumeric characters (`a-z`, `A-Z`, `0-9`), spaces, punctuation, and -printable symbols in the https://en.wikipedia.org/wiki/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block]. Leading or trailing whitespace is not allowed. +printable symbols in the {wikipedia}/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block]. Leading or trailing whitespace is not allowed. -- diff --git a/x-pack/docs/en/security/authorization/managing-roles.asciidoc b/x-pack/docs/en/security/authorization/managing-roles.asciidoc index 8e101a5e2f61..c23a566aacad 100644 --- a/x-pack/docs/en/security/authorization/managing-roles.asciidoc +++ b/x-pack/docs/en/security/authorization/managing-roles.asciidoc @@ -35,7 +35,7 @@ A role is defined by the following JSON structure: [[valid-role-name]] NOTE: Role names must be at least 1 and no more than 1024 characters. They can contain alphanumeric characters (`a-z`, `A-Z`, `0-9`), spaces, - punctuation, and printable symbols in the https://en.wikipedia.org/wiki/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block]. + punctuation, and printable symbols in the {wikipedia}/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block]. Leading or trailing whitespace is not allowed. 
[[roles-indices-priv]] diff --git a/x-pack/docs/en/security/ccs-clients-integrations/http.asciidoc b/x-pack/docs/en/security/ccs-clients-integrations/http.asciidoc index 2e85b82e7a24..1fbc89227fc1 100644 --- a/x-pack/docs/en/security/ccs-clients-integrations/http.asciidoc +++ b/x-pack/docs/en/security/ccs-clients-integrations/http.asciidoc @@ -2,7 +2,7 @@ === HTTP/REST clients and security The {es} {security-features} work with standard HTTP -https://en.wikipedia.org/wiki/Basic_access_authentication[basic authentication] +{wikipedia}/Basic_access_authentication[basic authentication] headers to authenticate users. Since Elasticsearch is stateless, this header must be sent with every request: diff --git a/x-pack/docs/en/watcher/actions/email.asciidoc b/x-pack/docs/en/watcher/actions/email.asciidoc index 4c78b9fb5247..06df69419857 100644 --- a/x-pack/docs/en/watcher/actions/email.asciidoc +++ b/x-pack/docs/en/watcher/actions/email.asciidoc @@ -442,7 +442,7 @@ bin/elasticsearch-keystore add xpack.notification.email.account.exchange_account ===== Configuring HTML sanitization options The `email` action supports sending messages with an HTML body. However, for -security reasons, {watcher} https://en.wikipedia.org/wiki/HTML_sanitization[sanitizes] +security reasons, {watcher} {wikipedia}/HTML_sanitization[sanitizes] the HTML. You can control which HTML features are allowed or disallowed by configuring the
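The basic authentication header mentioned in the last hunk is just `user:password` base64-encoded, sent on every request since {es} is stateless. A minimal sketch (the credentials here are hypothetical placeholders):

```python
import base64

user, password = "elastic", "changeme"  # hypothetical credentials
token = base64.b64encode(f"{user}:{password}".encode()).decode()
header = {"Authorization": f"Basic {token}"}
```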