diff --git a/docs/Versions.asciidoc b/docs/Versions.asciidoc index 61d8a86c213f..582493a684e8 100644 --- a/docs/Versions.asciidoc +++ b/docs/Versions.asciidoc @@ -16,8 +16,7 @@ include::{docs-root}/shared/versions/stack/{source_branch}.asciidoc[] Javadoc roots used to generate links from Painless's API reference /////// :java11-javadoc: https://docs.oracle.com/en/java/javase/11/docs/api -:joda-time-javadoc: http://www.joda.org/joda-time/apidocs -:lucene-core-javadoc: http://lucene.apache.org/core/{lucene_version_path}/core +:lucene-core-javadoc: https://lucene.apache.org/core/{lucene_version_path}/core ifeval::["{release-state}"=="unreleased"] :elasticsearch-javadoc: https://snapshots.elastic.co/javadoc/org/elasticsearch/elasticsearch/{version}-SNAPSHOT diff --git a/docs/community-clients/index.asciidoc b/docs/community-clients/index.asciidoc index 8cd0609730d5..6c5da8106c6e 100644 --- a/docs/community-clients/index.asciidoc +++ b/docs/community-clients/index.asciidoc @@ -53,7 +53,7 @@ a number of clients that have been contributed by the community for various lang * https://github.com/mpenet/spandex[Spandex]: Clojure client, based on the new official low level rest-client. -* http://github.com/clojurewerkz/elastisch[Elastisch]: +* https://github.com/clojurewerkz/elastisch[Elastisch]: Clojure client. [[coldfusion]] @@ -65,12 +65,12 @@ a number of clients that have been contributed by the community for various lang [[erlang]] == Erlang -* http://github.com/tsloughter/erlastic_search[erlastic_search]: +* https://github.com/tsloughter/erlastic_search[erlastic_search]: Erlang client using HTTP. * https://github.com/datahogs/tirexs[Tirexs]: An https://github.com/elixir-lang/elixir[Elixir] based API/DSL, inspired by - http://github.com/karmi/tire[Tire]. Ready to use in pure Erlang + https://github.com/karmi/tire[Tire]. Ready to use in pure Erlang environment. * https://github.com/sashman/elasticsearch_elixir_bulk_processor[Elixir Bulk Processor]: @@ -145,10 +145,10 @@ Also see the {client}/perl-api/current/index.html[official Elasticsearch Perl cl Also see the {client}/php-api/current/index.html[official Elasticsearch PHP client]. -* http://github.com/ruflin/Elastica[Elastica]: +* https://github.com/ruflin/Elastica[Elastica]: PHP client. -* http://github.com/nervetattoo/elasticsearch[elasticsearch] PHP client. +* https://github.com/nervetattoo/elasticsearch[elasticsearch] PHP client. * https://github.com/madewithlove/elasticsearcher[elasticsearcher] Agnostic lightweight package on top of the Elasticsearch PHP client. Its main goal is to allow for easier structuring of queries and indices in your application. It does not want to hide or replace functionality of the Elasticsearch PHP client. 
@@ -218,9 +218,6 @@ Also see the {client}/rust-api/current/index.html[official Elasticsearch Rust cl * https://github.com/newapplesho/elasticsearch-smalltalk[elasticsearch-smalltalk] - Pharo Smalltalk client for Elasticsearch -* http://ss3.gemstone.com/ss/Elasticsearch.html[Elasticsearch] - - Smalltalk client for Elasticsearch - [[vertx]] == Vert.x diff --git a/docs/java-rest/high-level/getting-started.asciidoc b/docs/java-rest/high-level/getting-started.asciidoc index 1e53ce5b5ff4..a2c50f5435a9 100644 --- a/docs/java-rest/high-level/getting-started.asciidoc +++ b/docs/java-rest/high-level/getting-started.asciidoc @@ -40,7 +40,7 @@ The javadoc for the REST high level client can be found at {rest-high-level-clie === Maven Repository The high-level Java REST client is hosted on -http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.elasticsearch.client%22[Maven +https://search.maven.org/search?q=g:org.elasticsearch.client[Maven Central]. The minimum Java version required is `1.8`. The High Level REST Client is subject to the same release cycle as diff --git a/docs/java-rest/low-level/configuration.asciidoc b/docs/java-rest/low-level/configuration.asciidoc index 8bada5c22e26..d368d8362f09 100644 --- a/docs/java-rest/low-level/configuration.asciidoc +++ b/docs/java-rest/low-level/configuration.asciidoc @@ -140,7 +140,7 @@ openssl pkcs12 -export -in client.crt -inkey private_key.pem \ -name "client" -out client.p12 ``` -If no explicit configuration is provided, the http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html#CustomizingStores[system default configuration] +If no explicit configuration is provided, the https://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html#CustomizingStores[system default configuration] will be used. === Others @@ -154,11 +154,11 @@ indefinitely and negative hostname resolutions for ten seconds. If the resolved addresses of the hosts to which you are connecting the client to vary with time then you might want to modify the default JVM behavior. These can be modified by adding -http://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.ttl=`] +https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.ttl=`] and -http://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.negative.ttl=`] +https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.negative.ttl=`] to your -http://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html[Java +https://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html[Java security policy]. === Node selector diff --git a/docs/java-rest/low-level/usage.asciidoc b/docs/java-rest/low-level/usage.asciidoc index 9d55ff79ce26..ecbe868ff761 100644 --- a/docs/java-rest/low-level/usage.asciidoc +++ b/docs/java-rest/low-level/usage.asciidoc @@ -13,7 +13,7 @@ The javadoc for the low level REST client can be found at {rest-client-javadoc}/ === Maven Repository The low-level Java REST client is hosted on -http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.elasticsearch.client%22[Maven +https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.elasticsearch.client%22[Maven Central]. The minimum Java version required is `1.8`. 
The low-level REST client is subject to the same release cycle as @@ -57,7 +57,7 @@ dependencies { === Dependencies The low-level Java REST client internally uses the -http://hc.apache.org/httpcomponents-asyncclient-dev/[Apache Http Async Client] +https://hc.apache.org/httpcomponents-asyncclient-dev/[Apache Http Async Client] to send http requests. It depends on the following artifacts, namely the async http client and its own transitive dependencies: @@ -212,7 +212,7 @@ include-tagged::{doc-tests}/RestClientDocumentation.java[rest-client-init-client -------------------------------------------------- <1> Set a callback that allows to modify the http client configuration (e.g. encrypted communication over ssl, or anything that the -http://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/apidocs/org/apache/http/impl/nio/client/HttpAsyncClientBuilder.html[`org.apache.http.impl.nio.client.HttpAsyncClientBuilder`] +https://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/apidocs/org/apache/http/impl/nio/client/HttpAsyncClientBuilder.html[`org.apache.http.impl.nio.client.HttpAsyncClientBuilder`] allows to set) @@ -401,7 +401,7 @@ https://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/org/apache/http/Ht `HttpEntity#getContent` method comes handy which returns an `InputStream` reading from the previously buffered response body. As an alternative, it is possible to provide a custom -http://hc.apache.org/httpcomponents-core-ga/httpcore-nio/apidocs/org/apache/http/nio/protocol/HttpAsyncResponseConsumer.html[`org.apache.http.nio.protocol.HttpAsyncResponseConsumer`] +https://hc.apache.org/httpcomponents-core-ga/httpcore-nio/apidocs/org/apache/http/nio/protocol/HttpAsyncResponseConsumer.html[`org.apache.http.nio.protocol.HttpAsyncResponseConsumer`] that controls how bytes are read and buffered. [[java-rest-low-usage-logging]] diff --git a/docs/painless/painless-guide/painless-walkthrough.asciidoc b/docs/painless/painless-guide/painless-walkthrough.asciidoc index 771330ad9a5b..ab34a73a4789 100644 --- a/docs/painless/painless-guide/painless-walkthrough.asciidoc +++ b/docs/painless/painless-guide/painless-walkthrough.asciidoc @@ -219,7 +219,7 @@ Painless's native support for regular expressions has syntax constructs: * `/pattern/`: Pattern literals create patterns. This is the only way to create a pattern in painless. The pattern inside the ++/++'s are just -http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expressions]. +https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expressions]. See <> for more. * `=~`: The find operator return a `boolean`, `true` if a subsequence of the text matches, `false` otherwise. 
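Tying the low-level client pieces above together: the `client.p12` keystore built by the `openssl` commands in the configuration section is loaded into an `SSLContext`, which the `HttpClientConfigCallback` then hands to the `HttpAsyncClientBuilder`. This is a minimal sketch under assumed values -- the keystore path, password, host, and port are placeholders, not anything from this changeset:

[source,java]
--------------------------------------------------
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyStore;

import javax.net.ssl.SSLContext;

import org.apache.http.HttpHost;
import org.apache.http.ssl.SSLContexts;
import org.elasticsearch.client.RestClient;

public class SslRestClientSketch {
    public static void main(String[] args) throws Exception {
        // Load the PKCS#12 keystore produced by the openssl commands
        // in the configuration section; the password is a placeholder.
        KeyStore truststore = KeyStore.getInstance("pkcs12");
        try (InputStream is = Files.newInputStream(Paths.get("client.p12"))) {
            truststore.load(is, "changeit".toCharArray());
        }
        final SSLContext sslContext = SSLContexts.custom()
            .loadTrustMaterial(truststore, null)
            .build();

        // The callback receives the HttpAsyncClientBuilder, so anything
        // that builder exposes (here, the SSLContext) can be configured.
        RestClient restClient = RestClient
            .builder(new HttpHost("localhost", 9200, "https"))
            .setHttpClientConfigCallback(b -> b.setSSLContext(sslContext))
            .build();
        restClient.close();
    }
}
--------------------------------------------------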
@@ -281,7 +281,7 @@ POST hockey/_update_by_query ---------------------------------------------------------------- `Matcher.replaceAll` is just a call to Java's `Matcher`'s -http://docs.oracle.com/javase/8/docs/api/java/util/regex/Matcher.html#replaceAll-java.lang.String-[replaceAll] +https://docs.oracle.com/javase/8/docs/api/java/util/regex/Matcher.html#replaceAll-java.lang.String-[replaceAll] method so it supports `$1` and `\1` for replacements: [source,console] diff --git a/docs/painless/painless-lang-spec.asciidoc b/docs/painless/painless-lang-spec.asciidoc index 2f108c73732e..aeb1a9d4c753 100644 --- a/docs/painless/painless-lang-spec.asciidoc +++ b/docs/painless/painless-lang-spec.asciidoc @@ -11,10 +11,10 @@ refer to the corresponding topics in the https://docs.oracle.com/javase/specs/jls/se8/html/index.html[Java Language Specification]. -Painless scripts are parsed and compiled using the http://www.antlr.org/[ANTLR4] -and http://asm.ow2.org/[ASM] libraries. Scripts are compiled directly +Painless scripts are parsed and compiled using the https://www.antlr.org/[ANTLR4] +and https://asm.ow2.org/[ASM] libraries. Scripts are compiled directly into Java Virtual Machine (JVM) byte code and executed against a standard JVM. This specification uses ANTLR4 grammar notation to describe the allowed syntax. However, the actual Painless grammar is more compact than what is shown here. -include::painless-lang-spec/index.asciidoc[] \ No newline at end of file +include::painless-lang-spec/index.asciidoc[] diff --git a/docs/plugins/analysis-icu.asciidoc b/docs/plugins/analysis-icu.asciidoc index a6a7f3a4d0f2..a8041e471001 100644 --- a/docs/plugins/analysis-icu.asciidoc +++ b/docs/plugins/analysis-icu.asciidoc @@ -57,7 +57,7 @@ convert `nfc` to `nfd` or `nfkc` to `nfkd` respectively: Which letters are normalized can be controlled by specifying the `unicode_set_filter` parameter, which accepts a -http://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet]. +https://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet]. Here are two examples, the default usage and a customised character filter: @@ -103,7 +103,7 @@ PUT icu_sample ==== ICU Tokenizer Tokenizes text into words on word boundaries, as defined in -http://www.unicode.org/reports/tr29/[UAX #29: Unicode Text Segmentation]. +https://www.unicode.org/reports/tr29/[UAX #29: Unicode Text Segmentation]. It behaves much like the {ref}/analysis-standard-tokenizer.html[`standard` tokenizer], but adds better support for some Asian languages by using a dictionary-based approach to identify words in Thai, Lao, Chinese, Japanese, and Korean, and @@ -137,7 +137,7 @@ for a more detailed explanation. To add icu tokenizer rules, set the `rule_files` settings, which should contain a comma-separated list of `code:rulefile` pairs in the following format: -http://unicode.org/iso15924/iso15924-codes.html[four-letter ISO 15924 script code], +https://unicode.org/iso15924/iso15924-codes.html[four-letter ISO 15924 script code], followed by a colon, then a rule file name. Rule files are placed `ES_HOME/config` directory. As a demonstration of how the rule files can be used, save the following user file to `$ES_HOME/config/KeywordTokenizer.rbbi`: @@ -210,7 +210,7 @@ with the `name` parameter, which accepts `nfc`, `nfkc`, and `nfkc_cf` Which letters are normalized can be controlled by specifying the `unicode_set_filter` parameter, which accepts a -http://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet]. 
+https://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet]. You should probably prefer the <>. @@ -287,7 +287,7 @@ no need to use Normalize character or token filter as well. Which letters are folded can be controlled by specifying the `unicode_set_filter` parameter, which accepts a -http://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet]. +https://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet]. The following example exempts Swedish characters from folding. It is important to note that both upper and lowercase forms should be specified, and that @@ -433,7 +433,7 @@ The following parameters are accepted by `icu_collation_keyword` fields: The strength property determines the minimum level of difference considered significant during comparison. Possible values are : `primary`, `secondary`, `tertiary`, `quaternary` or `identical`. See the -http://icu-project.org/apiref/icu4j/com/ibm/icu/text/Collator.html[ICU Collation documentation] +https://icu-project.org/apiref/icu4j/com/ibm/icu/text/Collator.html[ICU Collation documentation] for a more detailed explanation for each value. Defaults to `tertiary` unless otherwise specified in the collation. diff --git a/docs/plugins/analysis-stempel.asciidoc b/docs/plugins/analysis-stempel.asciidoc index 18d4f73af3be..54118945ab3e 100644 --- a/docs/plugins/analysis-stempel.asciidoc +++ b/docs/plugins/analysis-stempel.asciidoc @@ -4,9 +4,6 @@ The Stempel Analysis plugin integrates Lucene's Stempel analysis module for Polish into elasticsearch. -It provides high quality stemming for Polish, based on the -http://www.egothor.org/[Egothor project]. - :plugin_name: analysis-stempel include::install_remove.asciidoc[] diff --git a/docs/plugins/analysis-ukrainian.asciidoc b/docs/plugins/analysis-ukrainian.asciidoc index 178fc6d507c6..534c1708b98e 100644 --- a/docs/plugins/analysis-ukrainian.asciidoc +++ b/docs/plugins/analysis-ukrainian.asciidoc @@ -3,7 +3,7 @@ The Ukrainian Analysis plugin integrates Lucene's UkrainianMorfologikAnalyzer into elasticsearch. -It provides stemming for Ukrainian using the http://github.com/morfologik/morfologik-stemming[Morfologik project]. +It provides stemming for Ukrainian using the https://github.com/morfologik/morfologik-stemming[Morfologik project]. :plugin_name: analysis-ukrainian include::install_remove.asciidoc[] diff --git a/docs/plugins/analysis.asciidoc b/docs/plugins/analysis.asciidoc index 82f3f15ab9d9..bc347744340e 100644 --- a/docs/plugins/analysis.asciidoc +++ b/docs/plugins/analysis.asciidoc @@ -18,7 +18,7 @@ transliteration. <>:: -Advanced analysis of Japanese using the http://www.atilika.org/[Kuromoji analyzer]. +Advanced analysis of Japanese using the https://www.atilika.org/[Kuromoji analyzer]. 
<>:: diff --git a/docs/plugins/api.asciidoc b/docs/plugins/api.asciidoc index 96d54f591aac..ad12ddbdbf02 100644 --- a/docs/plugins/api.asciidoc +++ b/docs/plugins/api.asciidoc @@ -9,7 +9,7 @@ API extension plugins add new functionality to Elasticsearch by adding new APIs A number of plugins have been contributed by our community: * https://github.com/carrot2/elasticsearch-carrot2[carrot2 Plugin]: - Results clustering with http://project.carrot2.org/[carrot2] (by Dawid Weiss) + Results clustering with https://github.com/carrot2/carrot2[carrot2] (by Dawid Weiss) * https://github.com/wikimedia/search-extra[Elasticsearch Trigram Accelerated Regular Expression Filter]: (by Wikimedia Foundation/Nik Everett) @@ -18,7 +18,7 @@ A number of plugins have been contributed by our community: (by Wikimedia Foundation/Nik Everett) * https://github.com/YannBrrd/elasticsearch-entity-resolution[Entity Resolution Plugin]: - Uses http://github.com/larsga/Duke[Duke] for duplication detection (by Yann Barraud) + Uses https://github.com/larsga/Duke[Duke] for duplication detection (by Yann Barraud) * https://github.com/zentity-io/zentity[Entity Resolution Plugin] (https://zentity.io[zentity]): Real-time entity resolution with pure Elasticsearch (by Dave Moore) diff --git a/docs/plugins/authors.asciidoc b/docs/plugins/authors.asciidoc index 531aea142a08..76a0588ceadf 100644 --- a/docs/plugins/authors.asciidoc +++ b/docs/plugins/authors.asciidoc @@ -116,5 +116,5 @@ AccessController.doPrivileged( ); -------------------------------------------------- -See http://www.oracle.com/technetwork/java/seccodeguide-139067.html[Secure Coding Guidelines for Java SE] +See https://www.oracle.com/technetwork/java/seccodeguide-139067.html[Secure Coding Guidelines for Java SE] for more information. diff --git a/docs/plugins/discovery-azure-classic.asciidoc b/docs/plugins/discovery-azure-classic.asciidoc index 2f580fbd24db..b7a94ea60e27 100644 --- a/docs/plugins/discovery-azure-classic.asciidoc +++ b/docs/plugins/discovery-azure-classic.asciidoc @@ -139,7 +139,7 @@ about your nodes. Before starting, you need to have: -* A http://www.windowsazure.com/[Windows Azure account] +* A https://azure.microsoft.com/en-us/[Windows Azure account] * OpenSSL that isn't from MacPorts, specifically `OpenSSL 1.0.1f 6 Jan 2014` doesn't seem to create a valid keypair for ssh. FWIW, `OpenSSL 1.0.1c 10 May 2012` on Ubuntu 14.04 LTS is known to work. 
@@ -331,27 +331,7 @@ scp /tmp/azurekeystore.pkcs12 azure-elasticsearch-cluster.cloudapp.net:/home/ela ssh azure-elasticsearch-cluster.cloudapp.net ---- -Once connected, install Elasticsearch: - -["source","sh",subs="attributes,callouts"] ----- -# Install Latest Java version -# Read http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html for details -sudo add-apt-repository ppa:webupd8team/java -sudo apt-get update -sudo apt-get install oracle-java8-installer - -# If you want to install OpenJDK instead -# sudo apt-get update -# sudo apt-get install openjdk-8-jre-headless - -# Download Elasticsearch -curl -s https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-{version}.deb -o elasticsearch-{version}.deb - -# Prepare Elasticsearch installation -sudo dpkg -i elasticsearch-{version}.deb ----- -// NOTCONSOLE +Once connected, {stack-gs}/get-started-elastic-stack.html#install-elasticsearch[install {es}]: Check that Elasticsearch is running: diff --git a/docs/plugins/discovery-ec2.asciidoc b/docs/plugins/discovery-ec2.asciidoc index a3190cff9224..1e9b14e2bf25 100644 --- a/docs/plugins/discovery-ec2.asciidoc +++ b/docs/plugins/discovery-ec2.asciidoc @@ -29,7 +29,7 @@ will work correctly even if it finds master-ineligible nodes, but master elections will be more efficient if this can be avoided. The interaction with the AWS API can be authenticated using the -http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[instance +https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[instance role], or else custom credentials can be supplied. ===== Enabling EC2 discovery @@ -76,7 +76,7 @@ The available settings for the EC2 discovery plugin are as follows. `discovery.ec2.endpoint`:: The EC2 service endpoint to which to connect. See - http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region to find + https://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region to find the appropriate endpoint for the region. This setting defaults to `ec2.us-east-1.amazonaws.com` which is appropriate for clusters running in the `us-east-1` region. @@ -152,7 +152,7 @@ For example if you tag some EC2 instances with a tag named `elasticsearch-host-name` and set `host_type: tag:elasticsearch-host-name` then the `discovery-ec2` plugin will read each instance's host name from the value of the `elasticsearch-host-name` tag. -http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html[Read more +https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html[Read more about EC2 Tags]. -- @@ -293,7 +293,7 @@ available on AWS-based infrastructure from https://www.elastic.co/cloud. EC2 instances offer a number of different kinds of storage. Please be aware of the following when selecting the storage for your cluster: -* http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html[Instance +* https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html[Instance Store] is recommended for {es} clusters as it offers excellent performance and is cheaper than EBS-based storage. {es} is designed to work well with this kind of ephemeral storage because it replicates each shard across multiple nodes. If @@ -327,7 +327,7 @@ https://aws.amazon.com/ec2/instance-types/[instance types] with networking labelled as `Moderate` or `Low`. 
* It is a good idea to distribute your nodes across multiple -http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[availability +https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[availability zones] and use {ref}/modules-cluster.html#shard-allocation-awareness[shard allocation awareness] to ensure that each shard has copies in more than one availability zone. diff --git a/docs/plugins/discovery-gce.asciidoc b/docs/plugins/discovery-gce.asciidoc index de9abec94e3e..94d73c0bc4f6 100644 --- a/docs/plugins/discovery-gce.asciidoc +++ b/docs/plugins/discovery-gce.asciidoc @@ -182,29 +182,7 @@ Failing to set this will result in unauthorized messages when starting Elasticse See <>. ============================================== - -Once connected, install Elasticsearch: - -[source,sh] --------------------------------------------------- -sudo apt-get update - -# Download Elasticsearch -wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-2.0.0.deb - -# Prepare Java installation (Oracle) -sudo echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | sudo tee /etc/apt/sources.list.d/webupd8team-java.list -sudo echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | sudo tee -a /etc/apt/sources.list.d/webupd8team-java.list -sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886 -sudo apt-get update -sudo apt-get install oracle-java8-installer - -# Prepare Java installation (or OpenJDK) -# sudo apt-get install java8-runtime-headless - -# Prepare Elasticsearch installation -sudo dpkg -i elasticsearch-2.0.0.deb --------------------------------------------------- +Once connected, {stack-gs}/get-started-elastic-stack.html#install-elasticsearch[install {es}]: [[discovery-gce-usage-long-install-plugin]] ===== Install Elasticsearch discovery gce plugin diff --git a/docs/plugins/discovery.asciidoc b/docs/plugins/discovery.asciidoc index b3090616add2..100373c50b81 100644 --- a/docs/plugins/discovery.asciidoc +++ b/docs/plugins/discovery.asciidoc @@ -30,7 +30,7 @@ addresses of seed hosts. The following discovery plugins have been contributed by our community: -* https://github.com/fabric8io/elasticsearch-cloud-kubernetes[Kubernetes Discovery Plugin] (by Jimmi Dyson, http://fabric8.io[fabric8]) +* https://github.com/fabric8io/elasticsearch-cloud-kubernetes[Kubernetes Discovery Plugin] (by Jimmi Dyson, https://fabric8.io[fabric8]) include::discovery-ec2.asciidoc[] diff --git a/docs/plugins/ingest-attachment.asciidoc b/docs/plugins/ingest-attachment.asciidoc index 404d8aa87f65..a837544122c7 100644 --- a/docs/plugins/ingest-attachment.asciidoc +++ b/docs/plugins/ingest-attachment.asciidoc @@ -2,7 +2,7 @@ === Ingest Attachment Processor Plugin The ingest attachment plugin lets Elasticsearch extract file attachments in common formats (such as PPT, XLS, and PDF) by -using the Apache text extraction library http://lucene.apache.org/tika/[Tika]. +using the Apache text extraction library https://tika.apache.org/[Tika]. You can use the ingest attachment plugin as a replacement for the mapper attachment plugin. 
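The plugin invokes Tika for you during ingest; purely to illustrate what the library extracts, here is a sketch driving Tika's `Tika` facade directly. The file name is hypothetical, and this is not the plugin's internal code:

[source,java]
--------------------------------------------------
import java.io.File;

import org.apache.tika.Tika;

public class ExtractTextSketch {
    public static void main(String[] args) throws Exception {
        Tika tika = new Tika();
        File attachment = new File("report.pdf"); // hypothetical input

        // Detect the media type, then extract plain text from it --
        // the same kind of content the attachment processor indexes
        // from a base64-encoded source field.
        System.out.println(tika.detect(attachment));
        System.out.println(tika.parseToString(attachment));
    }
}
--------------------------------------------------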
diff --git a/docs/plugins/ingest.asciidoc b/docs/plugins/ingest.asciidoc index 89075c32ab95..257b74d92900 100644 --- a/docs/plugins/ingest.asciidoc +++ b/docs/plugins/ingest.asciidoc @@ -11,7 +11,7 @@ The core ingest plugins are: <>:: The ingest attachment plugin lets Elasticsearch extract file attachments in common formats (such as PPT, XLS, and PDF) by -using the Apache text extraction library http://lucene.apache.org/tika/[Tika]. +using the Apache text extraction library https://tika.apache.org/[Tika]. <>:: diff --git a/docs/plugins/integrations.asciidoc b/docs/plugins/integrations.asciidoc index 944ed4c1030b..0f76c6634924 100644 --- a/docs/plugins/integrations.asciidoc +++ b/docs/plugins/integrations.asciidoc @@ -11,7 +11,7 @@ Integrations are not plugins, but are external tools or modules that make it eas [discrete] ==== Supported by the community: -* http://drupal.org/project/search_api_elasticsearch[Drupal]: +* https://drupal.org/project/search_api_elasticsearch[Drupal]: Drupal Elasticsearch integration via Search API. * https://drupal.org/project/elasticsearch_connector[Drupal]: @@ -28,7 +28,7 @@ Integrations are not plugins, but are external tools or modules that make it eas search (facets, etc), along with some Natural Language Processing features (ex.: More like this) -* http://extensions.xwiki.org/xwiki/bin/view/Extension/Elastic+Search+Macro/[XWiki Next Generation Wiki]: +* https://extensions.xwiki.org/xwiki/bin/view/Extension/Elastic+Search+Macro/[XWiki Next Generation Wiki]: XWiki has an Elasticsearch and Kibana macro allowing to run Elasticsearch queries and display the results in XWiki pages using XWiki's scripting language as well as include Kibana Widgets in XWiki pages [discrete] @@ -101,13 +101,6 @@ releases 2.0 and later do not support rivers. [discrete] ==== Supported by the community: -* http://www.searchtechnologies.com/aspire-for-elasticsearch[Aspire for Elasticsearch]: - Aspire, from Search Technologies, is a powerful connector and processing - framework designed for unstructured data. It has connectors to internal and - external repositories including SharePoint, Documentum, Jive, RDB, file - systems, websites and more, and can transform and normalize this data before - indexing in Elasticsearch. - * https://camel.apache.org/elasticsearch.html[Apache Camel Integration]: An Apache camel component to integrate Elasticsearch @@ -117,13 +110,13 @@ releases 2.0 and later do not support rivers. * https://github.com/FriendsOfSymfony/FOSElasticaBundle[FOSElasticaBundle]: Symfony2 Bundle wrapping Elastica. -* http://grails.org/plugin/elasticsearch[Grails]: +* https://plugins.grails.org/plugin/puneetbehl/elasticsearch[Grails]: Elasticsearch Grails plugin. -* http://haystacksearch.org/[Haystack]: +* https://haystacksearch.org/[Haystack]: Modular search for Django -* http://hibernate.org/search/[Hibernate Search] +* https://hibernate.org/search/[Hibernate Search] Integration with Hibernate ORM, from the Hibernate team. Automatic synchronization of write operations, yet exposes full Elasticsearch capabilities for queries. Can return either Elasticsearch native or re-map queries back into managed entities loaded within transaction from the reference database. * https://github.com/spring-projects/spring-data-elasticsearch[Spring Data Elasticsearch]: @@ -185,7 +178,7 @@ releases 2.0 and later do not support rivers. 
* https://github.com/radu-gheorghe/check-es[check-es]: Nagios/Shinken plugins for checking on Elasticsearch -* http://sematext.com/spm/index.html[SPM for Elasticsearch]: +* https://sematext.com/spm/index.html[SPM for Elasticsearch]: Performance monitoring with live charts showing cluster and node stats, integrated alerts, email reports, etc. * https://www.zabbix.com/integrations/elasticsearch[Zabbix monitoring template]: diff --git a/docs/plugins/plugin-script.asciidoc b/docs/plugins/plugin-script.asciidoc index c28c38dce7b6..775dd28e0ff9 100644 --- a/docs/plugins/plugin-script.asciidoc +++ b/docs/plugins/plugin-script.asciidoc @@ -93,7 +93,7 @@ To install a plugin from an HTTP URL: + [source,shell] ----------------------------------- -sudo bin/elasticsearch-plugin install http://some.domain/path/to/plugin.zip +sudo bin/elasticsearch-plugin install https://some.domain/path/to/plugin.zip ----------------------------------- + The plugin script will refuse to talk to an HTTPS URL with an untrusted diff --git a/docs/plugins/repository-azure.asciidoc b/docs/plugins/repository-azure.asciidoc index 86c2a61561b0..2293f83036b8 100644 --- a/docs/plugins/repository-azure.asciidoc +++ b/docs/plugins/repository-azure.asciidoc @@ -139,7 +139,7 @@ stored in the keystore are marked as "secure"; the other settings belong in the The client side timeout for any single request to Azure. The value should specify the time unit. For example, a value of `5s` specifies a 5 second timeout. There is no default value, which means that {es} uses the - http://azure.github.io/azure-storage-java/com/microsoft/azure/storage/RequestOptions.html#setTimeoutIntervalInMs(java.lang.Integer)[default value] + https://azure.github.io/azure-storage-java/com/microsoft/azure/storage/RequestOptions.html#setTimeoutIntervalInMs(java.lang.Integer)[default value] set by the Azure client (known as 5 minutes). This setting can be defined globally, per account, or both. @@ -241,8 +241,10 @@ client.admin().cluster().preparePutRepository("my_backup_java1") [[repository-azure-validation]] ==== Repository validation rules -According to the http://msdn.microsoft.com/en-us/library/dd135715.aspx[containers naming guide], a container name must -be a valid DNS name, conforming to the following naming rules: +According to the +https://docs.microsoft.com/en-us/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata[containers +naming guide], a container name must be a valid DNS name, conforming to the +following naming rules: * Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character. * Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not diff --git a/docs/plugins/repository-hdfs.asciidoc b/docs/plugins/repository-hdfs.asciidoc index dcb2255d5b42..174ec174e5ca 100644 --- a/docs/plugins/repository-hdfs.asciidoc +++ b/docs/plugins/repository-hdfs.asciidoc @@ -57,7 +57,7 @@ The following settings are supported: `conf.`:: Inlined configuration parameter to be added to Hadoop configuration. (Optional) - Only client oriented properties from the hadoop http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml[core] and http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml[hdfs] configuration files will be recognized by the plugin. 
+ Only client oriented properties from the hadoop https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml[core] and https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml[hdfs] configuration files will be recognized by the plugin. `compress`:: diff --git a/docs/plugins/repository-s3.asciidoc b/docs/plugins/repository-s3.asciidoc index 3c3600aea8b1..b1d81b882c47 100644 --- a/docs/plugins/repository-s3.asciidoc +++ b/docs/plugins/repository-s3.asciidoc @@ -17,7 +17,7 @@ The plugin provides a repository type named `s3` which may be used when creating a repository. The repository defaults to using https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html[ECS IAM Role] or -http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[EC2 +https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[EC2 IAM Role] credentials for authentication. The only mandatory setting is the bucket name: @@ -117,7 +117,7 @@ settings belong in the `elasticsearch.yml` file. The S3 service endpoint to connect to. This defaults to `s3.amazonaws.com` but the - http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region[AWS + https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region[AWS documentation] lists alternative S3 endpoints. If you are using an <> then you should set this to the service's endpoint. @@ -278,7 +278,7 @@ include::repository-shared-settings.asciidoc[] Minimum threshold below which the chunk is uploaded using a single request. Beyond this threshold, the S3 repository will use the - http://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html[AWS + https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html[AWS Multipart Upload API] to split the chunk into several parts, each of `buffer_size` length, and to upload each part in its own request. Note that setting a buffer size lower than `5mb` is not allowed since it will prevent @@ -290,7 +290,7 @@ include::repository-shared-settings.asciidoc[] `canned_acl`:: The S3 repository supports all - http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl[S3 + https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl[S3 canned ACLs] : `private`, `public-read`, `public-read-write`, `authenticated-read`, `log-delivery-write`, `bucket-owner-read`, `bucket-owner-full-control`. Defaults to `private`. You could specify a @@ -308,7 +308,7 @@ include::repository-shared-settings.asciidoc[] the storage class of existing objects. Due to the extra complexity with the Glacier class lifecycle, it is not currently supported by the plugin. For more information about the different classes, see - http://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html[AWS + https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html[AWS Storage Classes Guide] NOTE: The option of defining client settings in the repository settings as diff --git a/docs/python/index.asciidoc b/docs/python/index.asciidoc index 05019ccaed2f..b3523d21cb31 100644 --- a/docs/python/index.asciidoc +++ b/docs/python/index.asciidoc @@ -5,22 +5,22 @@ Official low-level client for Elasticsearch. Its goal is to provide common ground for all Elasticsearch-related code in Python; because of this it tries to be opinion-free and very extendable. 
The full documentation is available at -http://elasticsearch-py.readthedocs.org/ +https://elasticsearch-py.readthedocs.org/ .Elasticsearch DSL ************************************************************************************ For a more high level client library with more limited scope, have a look at -http://elasticsearch-dsl.readthedocs.org/[elasticsearch-dsl] - a more pythonic library +https://elasticsearch-dsl.readthedocs.org/[elasticsearch-dsl] - a more pythonic library sitting on top of `elasticsearch-py`. It provides a more convenient and idiomatic way to write and manipulate -http://elasticsearch-dsl.readthedocs.org/en/latest/search_dsl.html[queries]. It +https://elasticsearch-dsl.readthedocs.org/en/latest/search_dsl.html[queries]. It stays close to the Elasticsearch JSON DSL, mirroring its terminology and structure while exposing the whole range of the DSL from Python either directly using defined classes or a queryset-like expressions. It also provides an optional -http://elasticsearch-dsl.readthedocs.org/en/latest/persistence.html#doctype[persistence +https://elasticsearch-dsl.readthedocs.org/en/latest/persistence.html#doctype[persistence layer] for working with documents as Python objects in an ORM-like fashion: defining mappings, retrieving and saving documents, wrapping the document data in user-defined classes. @@ -114,7 +114,7 @@ The client's features include: * pluggable architecture The client also contains a convenient set of -http://elasticsearch-py.readthedocs.org/en/master/helpers.html[helpers] for +https://elasticsearch-py.readthedocs.org/en/master/helpers.html[helpers] for some of the more engaging tasks like bulk indexing and reindexing. @@ -126,7 +126,7 @@ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at - http://www.apache.org/licenses/LICENSE-2.0 + https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, diff --git a/docs/reference/aggregations/bucket/geohashgrid-aggregation.asciidoc b/docs/reference/aggregations/bucket/geohashgrid-aggregation.asciidoc index a5a496704e8c..922c13a3c115 100644 --- a/docs/reference/aggregations/bucket/geohashgrid-aggregation.asciidoc +++ b/docs/reference/aggregations/bucket/geohashgrid-aggregation.asciidoc @@ -2,7 +2,7 @@ === GeoHash grid Aggregation A multi-bucket aggregation that works on `geo_point` fields and groups points into buckets that represent cells in a grid. -The resulting grid can be sparse and only contains cells that have matching data. Each cell is labeled using a http://en.wikipedia.org/wiki/Geohash[geohash] which is of user-definable precision. +The resulting grid can be sparse and only contains cells that have matching data. Each cell is labeled using a {wikipedia}/Geohash[geohash] which is of user-definable precision. * High precision geohashes have a long string length and represent cells that cover only a small area. * Low precision geohashes have a short string length and represent cells that each cover a large area. 
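To make the precision trade-off above concrete, here is a sketch of the textbook geohash encoding (alternating longitude/latitude bisection, five bits per base-32 character). It illustrates the labeling scheme only and is not the aggregation's own implementation:

[source,java]
--------------------------------------------------
public class GeohashSketch {
    private static final String BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

    static String encode(double lat, double lon, int precision) {
        double latMin = -90, latMax = 90, lonMin = -180, lonMax = 180;
        StringBuilder hash = new StringBuilder();
        boolean evenBit = true; // bits alternate, starting with longitude
        int bit = 0, ch = 0;
        while (hash.length() < precision) {
            if (evenBit) {
                double mid = (lonMin + lonMax) / 2;
                if (lon >= mid) { ch = (ch << 1) | 1; lonMin = mid; }
                else            { ch = ch << 1;       lonMax = mid; }
            } else {
                double mid = (latMin + latMax) / 2;
                if (lat >= mid) { ch = (ch << 1) | 1; latMin = mid; }
                else            { ch = ch << 1;       latMax = mid; }
            }
            evenBit = !evenBit;
            if (++bit == 5) { // every 5 bits become one base-32 character
                hash.append(BASE32.charAt(ch));
                bit = 0;
                ch = 0;
            }
        }
        return hash.toString();
    }

    public static void main(String[] args) {
        // Each extra character subdivides the cell: short hash, big cell.
        System.out.println(encode(52.3738, 4.8910, 3)); // "u17"
        System.out.println(encode(52.3738, 4.8910, 7)); // a much smaller cell
    }
}
--------------------------------------------------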
diff --git a/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc b/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc index be99601d4467..c12ea9a38ea6 100644 --- a/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc +++ b/docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc @@ -370,7 +370,7 @@ Chi square behaves like mutual information and can be configured with the same p ===== Google normalized distance -Google normalized distance as described in "The Google Similarity Distance", Cilibrasi and Vitanyi, 2007 (http://arxiv.org/pdf/cs/0412098v3.pdf) can be used as significance score by adding the parameter +Google normalized distance as described in "The Google Similarity Distance", Cilibrasi and Vitanyi, 2007 (https://arxiv.org/pdf/cs/0412098v3.pdf) can be used as significance score by adding the parameter [source,js] -------------------------------------------------- diff --git a/docs/reference/aggregations/bucket/significanttext-aggregation.asciidoc b/docs/reference/aggregations/bucket/significanttext-aggregation.asciidoc index f71bdc285069..afa63c83e2f1 100644 --- a/docs/reference/aggregations/bucket/significanttext-aggregation.asciidoc +++ b/docs/reference/aggregations/bucket/significanttext-aggregation.asciidoc @@ -101,7 +101,7 @@ Filtering near-duplicate text is a difficult task at index-time but we can clean `filter_duplicate_text` setting. -First let's look at an unfiltered real-world example using the http://research.signalmedia.co/newsir16/signal-dataset.html[Signal media dataset] of +First let's look at an unfiltered real-world example using the https://research.signalmedia.co/newsir16/signal-dataset.html[Signal media dataset] of a million news articles covering a wide variety of news. Here are the raw significant text results for a search for the articles mentioning "elasticsearch": diff --git a/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc b/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc index 44bc7e983d96..554f221a415b 100644 --- a/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc +++ b/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc @@ -71,7 +71,7 @@ values as the required memory usage and the need to communicate those per-shard sets between nodes would utilize too many resources of the cluster. This `cardinality` aggregation is based on the -http://static.googleusercontent.com/media/research.google.com/fr//pubs/archive/40671.pdf[HyperLogLog++] +https://static.googleusercontent.com/media/research.google.com/fr//pubs/archive/40671.pdf[HyperLogLog++] algorithm, which counts based on the hashes of the values with some interesting properties: diff --git a/docs/reference/analysis/analyzers/pattern-analyzer.asciidoc b/docs/reference/analysis/analyzers/pattern-analyzer.asciidoc index 830974ac9a8d..7327cee996f3 100644 --- a/docs/reference/analysis/analyzers/pattern-analyzer.asciidoc +++ b/docs/reference/analysis/analyzers/pattern-analyzer.asciidoc @@ -13,12 +13,12 @@ themselves. The regular expression defaults to `\W+` (or all non-word characters ======================================== The pattern analyzer uses -http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions]. +https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions]. 
A badly written regular expression could run very slowly or even throw a StackOverflowError and cause the node it is running on to exit suddenly. -Read more about http://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them]. +Read more about https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them]. ======================================== @@ -146,11 +146,11 @@ The `pattern` analyzer accepts the following parameters: [horizontal] `pattern`:: - A http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`. + A https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`. `flags`:: - Java regular expression http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags]. + Java regular expression https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags]. Flags should be pipe-separated, eg `"CASE_INSENSITIVE|COMMENTS"`. `lowercase`:: diff --git a/docs/reference/analysis/analyzers/standard-analyzer.asciidoc b/docs/reference/analysis/analyzers/standard-analyzer.asciidoc index f160beed621c..459d10983418 100644 --- a/docs/reference/analysis/analyzers/standard-analyzer.asciidoc +++ b/docs/reference/analysis/analyzers/standard-analyzer.asciidoc @@ -7,7 +7,7 @@ The `standard` analyzer is the default analyzer which is used if none is specified. It provides grammar based tokenization (based on the Unicode Text Segmentation algorithm, as specified in -http://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well +https://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well for most languages. [discrete] diff --git a/docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc b/docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc index 6a08f6b4fb66..4d82778861a9 100644 --- a/docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc +++ b/docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc @@ -13,12 +13,12 @@ The replacement string can refer to capture groups in the regular expression. ======================================== The pattern replace character filter uses -http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions]. +https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions]. A badly written regular expression could run very slowly or even throw a StackOverflowError and cause the node it is running on to exit suddenly. -Read more about http://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them]. +Read more about https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them]. ======================================== @@ -30,17 +30,17 @@ The `pattern_replace` character filter accepts the following parameters: [horizontal] `pattern`:: - A http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression]. Required. + A https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression]. Required. 
`replacement`:: The replacement string, which can reference capture groups using the `$1`..`$9` syntax, as explained - http://docs.oracle.com/javase/8/docs/api/java/util/regex/Matcher.html#appendReplacement-java.lang.StringBuffer-java.lang.String-[here]. + https://docs.oracle.com/javase/8/docs/api/java/util/regex/Matcher.html#appendReplacement-java.lang.StringBuffer-java.lang.String-[here]. `flags`:: - Java regular expression http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags]. + Java regular expression https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags]. Flags should be pipe-separated, eg `"CASE_INSENSITIVE|COMMENTS"`. [discrete] diff --git a/docs/reference/analysis/tokenfilters/hunspell-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/hunspell-tokenfilter.asciidoc index 05f2dec9cadb..8b1d8baf7559 100644 --- a/docs/reference/analysis/tokenfilters/hunspell-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/hunspell-tokenfilter.asciidoc @@ -5,7 +5,7 @@ ++++ Provides <> based on a provided -http://en.wikipedia.org/wiki/Hunspell[Hunspell dictionary]. The `hunspell` +{wikipedia}/Hunspell[Hunspell dictionary]. The `hunspell` filter requires <> of one or more language-specific Hunspell dictionaries. @@ -244,4 +244,4 @@ Path to a Hunspell dictionary directory. This path must be absolute or relative to the `config` location. + By default, the `/hunspell` directory is used, as described in -<>. \ No newline at end of file +<>. diff --git a/docs/reference/analysis/tokenfilters/keyword-marker-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/keyword-marker-tokenfilter.asciidoc index d466843f078d..df7cb09f85ba 100644 --- a/docs/reference/analysis/tokenfilters/keyword-marker-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/keyword-marker-tokenfilter.asciidoc @@ -332,7 +332,7 @@ You cannot specify this parameter and `keywords_pattern`. + -- (Required*, string) -http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java +https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression] used to match tokens. Tokens that match this expression are marked as keywords and not stemmed. @@ -386,4 +386,4 @@ PUT /my-index-000001 } } } ----- \ No newline at end of file +---- diff --git a/docs/reference/analysis/tokenfilters/kstem-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/kstem-tokenfilter.asciidoc index bc29549d5ebb..2741a568ab3e 100644 --- a/docs/reference/analysis/tokenfilters/kstem-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/kstem-tokenfilter.asciidoc @@ -4,7 +4,7 @@ KStem ++++ -Provides http://ciir.cs.umass.edu/pubfiles/ir-35.pdf[KStem]-based stemming for +Provides https://ciir.cs.umass.edu/pubfiles/ir-35.pdf[KStem]-based stemming for the English language. The `kstem` filter combines <> with a built-in <>. diff --git a/docs/reference/analysis/tokenfilters/lowercase-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/lowercase-tokenfilter.asciidoc index f829b6f4e97f..7d6db987ab95 100644 --- a/docs/reference/analysis/tokenfilters/lowercase-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/lowercase-tokenfilter.asciidoc @@ -108,7 +108,7 @@ Language-specific lowercase token filter to use. 
Valid values include: {lucene-analysis-docs}/el/GreekLowerCaseFilter.html[GreekLowerCaseFilter] `irish`::: Uses Lucene's -http://lucene.apache.org/core/{lucene_version_path}/analyzers-common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html[IrishLowerCaseFilter] +{lucene-analysis-docs}/ga/IrishLowerCaseFilter.html[IrishLowerCaseFilter] `turkish`::: Uses Lucene's {lucene-analysis-docs}/tr/TurkishLowerCaseFilter.html[TurkishLowerCaseFilter] diff --git a/docs/reference/analysis/tokenfilters/normalization-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/normalization-tokenfilter.asciidoc index 85f33d3f3849..b47420baf9d5 100644 --- a/docs/reference/analysis/tokenfilters/normalization-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/normalization-tokenfilter.asciidoc @@ -10,34 +10,34 @@ characters of a certain language. [horizontal] Arabic:: -http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizer.html[`arabic_normalization`] +{lucene-analysis-docs}/ar/ArabicNormalizer.html[`arabic_normalization`] German:: -http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html[`german_normalization`] +{lucene-analysis-docs}/de/GermanNormalizationFilter.html[`german_normalization`] Hindi:: -http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizer.html[`hindi_normalization`] +{lucene-analysis-docs}/hi/HindiNormalizer.html[`hindi_normalization`] Indic:: -http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizer.html[`indic_normalization`] +{lucene-analysis-docs}/in/IndicNormalizer.html[`indic_normalization`] Kurdish (Sorani):: -http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizer.html[`sorani_normalization`] +{lucene-analysis-docs}/ckb/SoraniNormalizer.html[`sorani_normalization`] Persian:: -http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/fa/PersianNormalizer.html[`persian_normalization`] +{lucene-analysis-docs}/fa/PersianNormalizer.html[`persian_normalization`] Scandinavian:: -http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html[`scandinavian_normalization`], -http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html[`scandinavian_folding`] +{lucene-analysis-docs}/miscellaneous/ScandinavianNormalizationFilter.html[`scandinavian_normalization`], +{lucene-analysis-docs}/miscellaneous/ScandinavianFoldingFilter.html[`scandinavian_folding`] Serbian:: -http://lucene.apache.org/core/7_1_0/analyzers-common/org/apache/lucene/analysis/sr/SerbianNormalizationFilter.html[`serbian_normalization`] +{lucene-analysis-docs}/sr/SerbianNormalizationFilter.html[`serbian_normalization`] diff --git a/docs/reference/analysis/tokenfilters/pattern-capture-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/pattern-capture-tokenfilter.asciidoc index 7b9a3b319904..b57c31a64e3b 100644 --- a/docs/reference/analysis/tokenfilters/pattern-capture-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/pattern-capture-tokenfilter.asciidoc @@ -15,12 +15,12 @@ overlap. ======================================== The pattern capture token filter uses -http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions]. 
+https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions]. A badly written regular expression could run very slowly or even throw a StackOverflowError and cause the node it is running on to exit suddenly. -Read more about http://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them]. +Read more about https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them]. ======================================== diff --git a/docs/reference/analysis/tokenfilters/pattern_replace-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/pattern_replace-tokenfilter.asciidoc index 3eff19aec485..b9e879c92f02 100644 --- a/docs/reference/analysis/tokenfilters/pattern_replace-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/pattern_replace-tokenfilter.asciidoc @@ -7,7 +7,7 @@ Uses a regular expression to match and replace token substrings. The `pattern_replace` filter uses -http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java's +https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java's regular expression syntax]. By default, the filter replaces matching substrings with an empty substring (`""`). @@ -22,7 +22,7 @@ A poorly-written regular expression may run slowly or return a StackOverflowError, causing the node running the expression to exit suddenly. Read more about -http://www.regular-expressions.info/catastrophic.html[pathological regular +https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them]. ==== @@ -108,7 +108,7 @@ in each token. Defaults to `true`. `pattern`:: (Required, string) Regular expression, written in -http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java's +https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java's regular expression syntax]. The filter replaces token substrings matching this pattern with the substring in the `replacement` parameter. 
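Since every `pattern_*` component above, like Painless's pattern literals, delegates to `java.util.regex`, the two behaviors that matter most -- find-anywhere matching (Painless's `=~`) and `$1`..`$9` group references in replacements -- can be illustrated in plain Java. The patterns and inputs below are invented for the example:

[source,java]
--------------------------------------------------
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexSemanticsSketch {
    public static void main(String[] args) {
        Matcher m = Pattern.compile("foo").matcher("seafood");
        // find() succeeds on any matching subsequence -- the semantics
        // behind Painless's =~ find operator.
        System.out.println(m.find());    // true
        // matches() requires the entire input to match the pattern.
        System.out.println(m.matches()); // false

        // replaceAll() resolves $1..$9 group references, just like the
        // replacement parameter of the pattern_replace filter.
        Pattern p = Pattern.compile("(\\w+)-(\\w+)");
        System.out.println(p.matcher("foo-bar").replaceAll("$2_$1")); // bar_foo
        // An empty replacement simply deletes every match.
        System.out.println(p.matcher("foo-bar").replaceAll(""));
    }
}
--------------------------------------------------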
@@ -157,4 +157,4 @@ PUT /my-index-000001 } } } ----- \ No newline at end of file +---- diff --git a/docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc index 8502e0cfdbad..ead87d61d254 100644 --- a/docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc @@ -125,7 +125,7 @@ Basque:: http://snowball.tartarus.org/algorithms/basque/stemmer.html[*`basque`*] Bengali:: -http://www.tandfonline.com/doi/abs/10.1080/02564602.1993.11437284[*`bengali`*] +https://www.tandfonline.com/doi/abs/10.1080/02564602.1993.11437284[*`bengali`*] Brazilian Portuguese:: {lucene-analysis-docs}/br/BrazilianStemmer.html[*`brazilian`*] @@ -137,7 +137,7 @@ Catalan:: http://snowball.tartarus.org/algorithms/catalan/stemmer.html[*`catalan`*] Czech:: -http://portal.acm.org/citation.cfm?id=1598600[*`czech`*] +https://dl.acm.org/doi/10.1016/j.ipm.2009.06.001[*`czech`*] Danish:: http://snowball.tartarus.org/algorithms/danish/stemmer.html[*`danish`*] @@ -148,9 +148,9 @@ http://snowball.tartarus.org/algorithms/kraaij_pohlmann/stemmer.html[`dutch_kp`] English:: http://snowball.tartarus.org/algorithms/porter/stemmer.html[*`english`*], -http://ciir.cs.umass.edu/pubfiles/ir-35.pdf[`light_english`], +https://ciir.cs.umass.edu/pubfiles/ir-35.pdf[`light_english`], http://snowball.tartarus.org/algorithms/lovins/stemmer.html[`lovins`], -http://www.researchgate.net/publication/220433848_How_effective_is_suffixing[`minimal_english`], +https://www.researchgate.net/publication/220433848_How_effective_is_suffixing[`minimal_english`], http://snowball.tartarus.org/algorithms/english/stemmer.html[`porter2`], {lucene-analysis-docs}/en/EnglishPossessiveFilter.html[`possessive_english`] @@ -162,29 +162,29 @@ http://snowball.tartarus.org/algorithms/finnish/stemmer.html[*`finnish`*], http://clef.isti.cnr.it/2003/WN_web/22.pdf[`light_finnish`] French:: -http://dl.acm.org/citation.cfm?id=1141523[*`light_french`*], +https://dl.acm.org/citation.cfm?id=1141523[*`light_french`*], http://snowball.tartarus.org/algorithms/french/stemmer.html[`french`], -http://dl.acm.org/citation.cfm?id=318984[`minimal_french`] +https://dl.acm.org/citation.cfm?id=318984[`minimal_french`] Galician:: http://bvg.udc.es/recursos_lingua/stemming.jsp[*`galician`*], http://bvg.udc.es/recursos_lingua/stemming.jsp[`minimal_galician`] (Plural step only) German:: -http://dl.acm.org/citation.cfm?id=1141523[*`light_german`*], +https://dl.acm.org/citation.cfm?id=1141523[*`light_german`*], http://snowball.tartarus.org/algorithms/german/stemmer.html[`german`], http://snowball.tartarus.org/algorithms/german2/stemmer.html[`german2`], http://members.unine.ch/jacques.savoy/clef/morpho.pdf[`minimal_german`] Greek:: -http://sais.se/mthprize/2007/ntais2007.pdf[*`greek`*] +https://sais.se/mthprize/2007/ntais2007.pdf[*`greek`*] Hindi:: http://computing.open.ac.uk/Sites/EACLSouthAsia/Papers/p6-Ramanathan.pdf[*`hindi`*] Hungarian:: http://snowball.tartarus.org/algorithms/hungarian/stemmer.html[*`hungarian`*], -http://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181[`light_hungarian`] +https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181[`light_hungarian`] Indonesian:: http://www.illc.uva.nl/Publications/ResearchReports/MoL-2003-02.text.pdf[*`indonesian`*] @@ -193,7 +193,7 @@ Irish:: http://snowball.tartarus.org/otherapps/oregan/intro.html[*`irish`*] Italian:: 
-http://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf[*`light_italian`*], +https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf[*`light_italian`*], http://snowball.tartarus.org/algorithms/italian/stemmer.html[`italian`] Kurdish (Sorani):: @@ -203,7 +203,7 @@ Latvian:: {lucene-analysis-docs}/lv/LatvianStemmer.html[*`latvian`*] Lithuanian:: -http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_5_3/lucene/analysis/common/src/java/org/apache/lucene/analysis/lt/stem_ISO_8859_1.sbl?view=markup[*`lithuanian`*] +https://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_5_3/lucene/analysis/common/src/java/org/apache/lucene/analysis/lt/stem_ISO_8859_1.sbl?view=markup[*`lithuanian`*] Norwegian (Bokmål):: http://snowball.tartarus.org/algorithms/norwegian/stemmer.html[*`norwegian`*], @@ -215,20 +215,20 @@ Norwegian (Nynorsk):: {lucene-analysis-docs}/no/NorwegianMinimalStemmer.html[`minimal_nynorsk`] Portuguese:: -http://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181[*`light_portuguese`*], +https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181[*`light_portuguese`*], pass:macros[http://www.inf.ufrgs.br/~buriol/papers/Orengo_CLEF07.pdf[`minimal_portuguese`\]], http://snowball.tartarus.org/algorithms/portuguese/stemmer.html[`portuguese`], -http://www.inf.ufrgs.br/\~viviane/rslp/index.htm[`portuguese_rslp`] +https://www.inf.ufrgs.br/\~viviane/rslp/index.htm[`portuguese_rslp`] Romanian:: http://snowball.tartarus.org/algorithms/romanian/stemmer.html[*`romanian`*] Russian:: http://snowball.tartarus.org/algorithms/russian/stemmer.html[*`russian`*], -http://doc.rero.ch/lm.php?url=1000%2C43%2C4%2C20091209094227-CA%2FDolamic_Ljiljana_-_Indexing_and_Searching_Strategies_for_the_Russian_20091209.pdf[`light_russian`] +https://doc.rero.ch/lm.php?url=1000%2C43%2C4%2C20091209094227-CA%2FDolamic_Ljiljana_-_Indexing_and_Searching_Strategies_for_the_Russian_20091209.pdf[`light_russian`] Spanish:: -http://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf[*`light_spanish`*], +https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf[*`light_spanish`*], http://snowball.tartarus.org/algorithms/spanish/stemmer.html[`spanish`] Swedish:: diff --git a/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc index c0c3799cdb9e..bc288fbf720e 100644 --- a/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc @@ -145,7 +145,7 @@ However, it is recommended to define large synonyms set in a file using [discrete] ==== WordNet synonyms -Synonyms based on http://wordnet.princeton.edu/[WordNet] format can be +Synonyms based on https://wordnet.princeton.edu/[WordNet] format can be declared using `format`: [source,console] diff --git a/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc index c803bae05526..77cf7f371dfd 100644 --- a/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc @@ -136,7 +136,7 @@ However, it is recommended to define large synonyms set in a file using [discrete] ==== WordNet synonyms -Synonyms based on http://wordnet.princeton.edu/[WordNet] format can be +Synonyms based on https://wordnet.princeton.edu/[WordNet] format can be declared using `format`: 
[source,console] diff --git a/docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc index f5dc48f6fdcd..59a8b4d0b29e 100644 --- a/docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc @@ -371,7 +371,7 @@ $ => DIGIT # in some cases you might not want to split on ZWJ # this also tests the case where we need a bigger byte[] -# see http://en.wikipedia.org/wiki/Zero-width_joiner +# see https://en.wikipedia.org/wiki/Zero-width_joiner \\u200D => ALPHANUM ---- diff --git a/docs/reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc index 58782c849bfe..5010254d8d96 100644 --- a/docs/reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc @@ -320,7 +320,7 @@ $ => DIGIT # in some cases you might not want to split on ZWJ # this also tests the case where we need a bigger byte[] -# see http://en.wikipedia.org/wiki/Zero-width_joiner +# see https://en.wikipedia.org/wiki/Zero-width_joiner \\u200D => ALPHANUM ---- @@ -379,4 +379,4 @@ PUT /my-index-000001 } } } ----- \ No newline at end of file +---- diff --git a/docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc index 90af45edec28..112ba92bf599 100644 --- a/docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc @@ -16,12 +16,12 @@ non-word characters. ======================================== The pattern tokenizer uses -http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions]. +https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions]. A badly written regular expression could run very slowly or even throw a StackOverflowError and cause the node it is running on to exit suddenly. -Read more about http://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them]. +Read more about https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them]. ======================================== @@ -107,11 +107,11 @@ The `pattern` tokenizer accepts the following parameters: [horizontal] `pattern`:: - A http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`. + A https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`. `flags`:: - Java regular expression http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags]. + Java regular expression https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags]. Flags should be pipe-separated, eg `"CASE_INSENSITIVE|COMMENTS"`. 
`group`:: diff --git a/docs/reference/analysis/tokenizers/standard-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/standard-tokenizer.asciidoc index 6c2a1283e7a2..2ea16ea5f6a2 100644 --- a/docs/reference/analysis/tokenizers/standard-tokenizer.asciidoc +++ b/docs/reference/analysis/tokenizers/standard-tokenizer.asciidoc @@ -6,7 +6,7 @@ The `standard` tokenizer provides grammar based tokenization (based on the Unicode Text Segmentation algorithm, as specified in -http://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well +https://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well for most languages. [discrete] diff --git a/docs/reference/api-conventions.asciidoc b/docs/reference/api-conventions.asciidoc index 45c1be4e4519..9b3e18422741 100644 --- a/docs/reference/api-conventions.asciidoc +++ b/docs/reference/api-conventions.asciidoc @@ -542,7 +542,7 @@ Some queries and APIs support parameters to allow inexact _fuzzy_ matching, using the `fuzziness` parameter. When querying `text` or `keyword` fields, `fuzziness` is interpreted as a -http://en.wikipedia.org/wiki/Levenshtein_distance[Levenshtein Edit Distance] +{wikipedia}/Levenshtein_distance[Levenshtein Edit Distance] -- the number of one character changes that need to be made to one string to make it the same as another string. diff --git a/docs/reference/docs/bulk.asciidoc b/docs/reference/docs/bulk.asciidoc index 510ba19cb33d..830586fb01d6 100644 --- a/docs/reference/docs/bulk.asciidoc +++ b/docs/reference/docs/bulk.asciidoc @@ -107,7 +107,7 @@ Perl:: Python:: - See http://elasticsearch-py.readthedocs.org/en/master/helpers.html[elasticsearch.helpers.*] + See https://elasticsearch-py.readthedocs.org/en/master/helpers.html[elasticsearch.helpers.*] JavaScript:: diff --git a/docs/reference/index-modules/similarity.asciidoc b/docs/reference/index-modules/similarity.asciidoc index c31cef72cd83..ca19fcc3b56b 100644 --- a/docs/reference/index-modules/similarity.asciidoc +++ b/docs/reference/index-modules/similarity.asciidoc @@ -61,7 +61,7 @@ PUT /index/_mapping TF/IDF based similarity that has built-in tf normalization and is supposed to work better for short fields (like names). See -http://en.wikipedia.org/wiki/Okapi_BM25[Okapi_BM25] for more details. +{wikipedia}/Okapi_BM25[Okapi_BM25] for more details. This similarity has the following options: [horizontal] @@ -114,7 +114,7 @@ Type name: `DFR` [[dfi]] ==== DFI similarity Similarity that implements the https://trec.nist.gov/pubs/trec21/papers/irra.web.nb.pdf[divergence from independence] model. This similarity has the following options: diff --git a/docs/reference/ingest/processors/grok.asciidoc b/docs/reference/ingest/processors/grok.asciidoc index 4105bbe51b37..06703a1156d7 100644 --- a/docs/reference/ingest/processors/grok.asciidoc +++ b/docs/reference/ingest/processors/grok.asciidoc @@ -10,7 +10,7 @@ that is generally written for humans and not computer consumption. This processor comes packaged with many https://github.com/elastic/elasticsearch/blob/{branch}/libs/grok/src/main/resources/patterns[reusable patterns]. -If you need help building patterns to match your logs, you will find the {kibana-ref}/xpack-grokdebugger.html[Grok Debugger] tool quite useful! The Grok Debugger is an {xpack} feature under the Basic License and is therefore *free to use*. The Grok Constructor at is also a useful tool.
+If you need help building patterns to match your logs, you will find the {kibana-ref}/xpack-grokdebugger.html[Grok Debugger] tool quite useful! The Grok Debugger is an {xpack} feature under the Basic License and is therefore *free to use*. The https://grokconstructor.appspot.com[Grok Constructor] is also a useful tool. [[grok-basics]] ==== Grok Basics diff --git a/docs/reference/mapping/types/geo-point.asciidoc b/docs/reference/mapping/types/geo-point.asciidoc index ab3d3eda0127..5a341772302e 100644 --- a/docs/reference/mapping/types/geo-point.asciidoc +++ b/docs/reference/mapping/types/geo-point.asciidoc @@ -85,7 +85,7 @@ GET my-index-000001/_search <2> Geo-point expressed as a string with the format: `"lat,lon"`. <3> Geo-point expressed as a geohash. <4> Geo-point expressed as an array with the format: [ `lon`, `lat`] -<5> Geo-point expressed as a http://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] +<5> Geo-point expressed as a https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] POINT with the format: `"POINT(lon lat)"` <6> A geo-bounding box query which finds all geo-points that fall inside the box. @@ -102,7 +102,7 @@ format was changed early on to conform to the format used by GeoJSON. ================================================== [NOTE] -A point can be expressed as a http://en.wikipedia.org/wiki/Geohash[geohash]. +A point can be expressed as a {wikipedia}/Geohash[geohash]. Geohashes are https://en.wikipedia.org/wiki/Base32[base32] encoded strings of the bits of the latitude and longitude interleaved. Each character in a geohash adds additional 5 bits to the precision. So the longer the hash, the more diff --git a/docs/reference/mapping/types/geo-shape.asciidoc b/docs/reference/mapping/types/geo-shape.asciidoc index b7b49ae2aabc..fab3cdaf4f69 100644 --- a/docs/reference/mapping/types/geo-shape.asciidoc +++ b/docs/reference/mapping/types/geo-shape.asciidoc @@ -156,7 +156,7 @@ triangular mesh (see <>). Multiple PrefixTree implementations are provided: * GeohashPrefixTree - Uses -http://en.wikipedia.org/wiki/Geohash[geohashes] for grid squares. +{wikipedia}/Geohash[geohashes] for grid squares. Geohashes are base32 encoded strings of the bits of the latitude and longitude interleaved. So the longer the hash, the more precise it is. Each character added to the geohash represents another tree level and @@ -164,7 +164,7 @@ adds 5 bits of precision to the geohash. A geohash represents a rectangular area and has 32 sub rectangles. The maximum number of levels in Elasticsearch is 24; the default is 9. * QuadPrefixTree - Uses a -http://en.wikipedia.org/wiki/Quadtree[quadtree] for grid squares. +{wikipedia}/Quadtree[quadtree] for grid squares. Similar to geohash, quad trees interleave the bits of the latitude and longitude the resulting hash is a bit set. A tree level in a quad tree represents 2 bits in this bit set, one for each coordinate. The maximum @@ -254,8 +254,8 @@ Geo-shape queries on geo-shapes implemented with PrefixTrees will not be execute [discrete] ==== Input Structure -Shapes can be represented using either the http://www.geojson.org[GeoJSON] -or http://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] +Shapes can be represented using either the http://geojson.org[GeoJSON] +or https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] (WKT) format. The following table provides a mapping of GeoJSON and WKT to Elasticsearch types: @@ -356,7 +356,7 @@ House to the US Capitol Building. 
[discrete] [[geo-polygon]] -===== http://www.geojson.org/geojson-spec.html#id4[Polygon] +===== http://geojson.org/geojson-spec.html#id4[Polygon] A polygon is defined by a list of a list of points. The first and last points in each (outer) list must be the same (the polygon must be @@ -418,7 +418,7 @@ ambiguous polygons around the dateline and poles are possible. https://tools.ietf.org/html/rfc7946#section-3.1.6[GeoJSON] mandates that the outer polygon must be counterclockwise and interior shapes must be clockwise, which agrees with the Open Geospatial Consortium (OGC) -http://www.opengeospatial.org/standards/sfa[Simple Feature Access] +https://www.opengeospatial.org/standards/sfa[Simple Feature Access] specification for vertex ordering. Elasticsearch accepts both clockwise and counterclockwise polygons if they @@ -467,7 +467,7 @@ POST /example/_doc [discrete] [[geo-multipoint]] -===== http://www.geojson.org/geojson-spec.html#id5[MultiPoint] +===== http://geojson.org/geojson-spec.html#id5[MultiPoint] The following is an example of a list of geojson points: @@ -496,7 +496,7 @@ POST /example/_doc [discrete] [[geo-multilinestring]] -===== http://www.geojson.org/geojson-spec.html#id6[MultiLineString] +===== http://geojson.org/geojson-spec.html#id6[MultiLineString] The following is an example of a list of geojson linestrings: @@ -527,7 +527,7 @@ POST /example/_doc [discrete] [[geo-multipolygon]] -===== http://www.geojson.org/geojson-spec.html#id7[MultiPolygon] +===== http://geojson.org/geojson-spec.html#id7[MultiPolygon] The following is an example of a list of geojson polygons (second polygon contains a hole): diff --git a/docs/reference/mapping/types/point.asciidoc b/docs/reference/mapping/types/point.asciidoc index 620ead709c3b..303db66bb5a3 100644 --- a/docs/reference/mapping/types/point.asciidoc +++ b/docs/reference/mapping/types/point.asciidoc @@ -61,7 +61,7 @@ PUT my-index-000001/_doc/5 <1> Point expressed as an object, with `x` and `y` keys. <2> Point expressed as a string with the format: `"x,y"`. <3> Point expressed as an array with the format: [ `x`, `y`] -<4> Point expressed as a http://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] +<4> Point expressed as a https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] POINT with the format: `"POINT(x y)"` The coordinates provided to the indexer are single precision floating point values so diff --git a/docs/reference/mapping/types/shape.asciidoc b/docs/reference/mapping/types/shape.asciidoc index 475b3917dff3..d9dbc2bdc84c 100644 --- a/docs/reference/mapping/types/shape.asciidoc +++ b/docs/reference/mapping/types/shape.asciidoc @@ -19,7 +19,7 @@ You can query documents using this type using ==== Mapping Options Like the <> field type, the `shape` field mapping maps -http://www.geojson.org[GeoJSON] or http://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] +http://geojson.org[GeoJSON] or https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] (WKT) geometry objects to the shape type. To enable it, users must explicitly map fields to the shape type. 
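For readers following these `shape` mapping hunks, a minimal sketch of the mapping the page describes; the index name `example` and field name `geometry` are illustrative and not part of the patch:

[source,console]
----
PUT /example
{
  "mappings": {
    "properties": {
      "geometry": {
        "type": "shape"
      }
    }
  }
}
----

With this mapping in place, documents can supply the `geometry` field as either a GeoJSON object or a WKT string, as the surrounding sections explain.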
@@ -96,8 +96,8 @@ precision floats for the vertex values so accuracy is guaranteed to the same pre [discrete] ==== Input Structure -Shapes can be represented using either the http://www.geojson.org[GeoJSON] -or http://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] +Shapes can be represented using either the http://geojson.org[GeoJSON] +or https://docs.opengeospatial.org/is/12-063r5/12-063r5.html[Well-Known Text] (WKT) format. The following table provides a mapping of GeoJSON and WKT to Elasticsearch types: @@ -190,7 +190,7 @@ POST /example/_doc [discrete] [[polygon]] -===== http://www.geojson.org/geojson-spec.html#id4[Polygon] +===== http://geojson.org/geojson-spec.html#id4[Polygon] A polygon is defined by a list of a list of points. The first and last points in each (outer) list must be the same (the polygon must be @@ -251,7 +251,7 @@ POST /example/_doc https://tools.ietf.org/html/rfc7946#section-3.1.6[GeoJSON] mandates that the outer polygon must be counterclockwise and interior shapes must be clockwise, which agrees with the Open Geospatial Consortium (OGC) -http://www.opengeospatial.org/standards/sfa[Simple Feature Access] +https://www.opengeospatial.org/standards/sfa[Simple Feature Access] specification for vertex ordering. By default Elasticsearch expects vertices in counterclockwise (right hand rule) @@ -277,7 +277,7 @@ POST /example/_doc [discrete] [[multipoint]] -===== http://www.geojson.org/geojson-spec.html#id5[MultiPoint] +===== http://geojson.org/geojson-spec.html#id5[MultiPoint] The following is an example of a list of geojson points: @@ -306,7 +306,7 @@ POST /example/_doc [discrete] [[multilinestring]] -===== http://www.geojson.org/geojson-spec.html#id6[MultiLineString] +===== http://geojson.org/geojson-spec.html#id6[MultiLineString] The following is an example of a list of geojson linestrings: @@ -337,7 +337,7 @@ POST /example/_doc [discrete] [[multipolygon]] -===== http://www.geojson.org/geojson-spec.html#id7[MultiPolygon] +===== http://geojson.org/geojson-spec.html#id7[MultiPolygon] The following is an example of a list of geojson polygons (second polygon contains a hole): diff --git a/docs/reference/modules/http.asciidoc b/docs/reference/modules/http.asciidoc index 81c994185abc..0f6f73adaab8 100644 --- a/docs/reference/modules/http.asciidoc +++ b/docs/reference/modules/http.asciidoc @@ -7,13 +7,13 @@ The HTTP layer exposes {es}'s REST APIs over HTTP. The HTTP mechanism is completely asynchronous in nature, meaning that there is no blocking thread waiting for a response. The benefit of using asynchronous communication for HTTP is solving the -http://en.wikipedia.org/wiki/C10k_problem[C10k problem]. +{wikipedia}/C10k_problem[C10k problem]. When possible, consider using -http://en.wikipedia.org/wiki/Keepalive#HTTP_Keepalive[HTTP keep alive] +{wikipedia}/Keepalive#HTTP_Keepalive[HTTP keep alive] when connecting for better performance and try to get your favorite client not to do -http://en.wikipedia.org/wiki/Chunked_transfer_encoding[HTTP chunking]. +{wikipedia}/Chunked_transfer_encoding[HTTP chunking]. 
// end::modules-http-description-tag[] [http-settings] diff --git a/docs/reference/query-dsl/fuzzy-query.asciidoc b/docs/reference/query-dsl/fuzzy-query.asciidoc index 8616459758cc..75d700914d46 100644 --- a/docs/reference/query-dsl/fuzzy-query.asciidoc +++ b/docs/reference/query-dsl/fuzzy-query.asciidoc @@ -5,7 +5,7 @@ ++++ Returns documents that contain terms similar to the search term, as measured by -a http://en.wikipedia.org/wiki/Levenshtein_distance[Levenshtein edit distance]. +a https://en.wikipedia.org/wiki/Levenshtein_distance[Levenshtein edit distance]. An edit distance is the number of one-character changes needed to turn one term into another. These changes can include: diff --git a/docs/reference/query-dsl/geo-shape-query.asciidoc b/docs/reference/query-dsl/geo-shape-query.asciidoc index 1046a35dc8e1..fbc06db28e0b 100644 --- a/docs/reference/query-dsl/geo-shape-query.asciidoc +++ b/docs/reference/query-dsl/geo-shape-query.asciidoc @@ -23,7 +23,7 @@ examples. ==== Inline Shape Definition Similar to the `geo_shape` type, the `geo_shape` query uses -http://www.geojson.org[GeoJSON] to represent shapes. +http://geojson.org[GeoJSON] to represent shapes. Given the following index with locations as `geo_shape` fields: diff --git a/docs/reference/query-dsl/query-string-syntax.asciidoc b/docs/reference/query-dsl/query-string-syntax.asciidoc index b59d9ef9adfc..2b1a8872c3b9 100644 --- a/docs/reference/query-dsl/query-string-syntax.asciidoc +++ b/docs/reference/query-dsl/query-string-syntax.asciidoc @@ -123,7 +123,7 @@ operator: quikc~ brwn~ foks~ This uses the -http://en.wikipedia.org/wiki/Damerau-Levenshtein_distance[Damerau-Levenshtein distance] +{wikipedia}/Damerau-Levenshtein_distance[Damerau-Levenshtein distance] to find all terms with a maximum of two changes, where a change is the insertion, deletion or substitution of a single character, or transposition of two adjacent diff --git a/docs/reference/query-dsl/rank-feature-query.asciidoc b/docs/reference/query-dsl/rank-feature-query.asciidoc index 8a11cd35efa1..1563a67bc036 100644 --- a/docs/reference/query-dsl/rank-feature-query.asciidoc +++ b/docs/reference/query-dsl/rank-feature-query.asciidoc @@ -82,7 +82,7 @@ Index several documents to the `test` index. ---- PUT /test/_doc/1?refresh { - "url": "http://en.wikipedia.org/wiki/2016_Summer_Olympics", + "url": "https://en.wikipedia.org/wiki/2016_Summer_Olympics", "content": "Rio 2016", "pagerank": 50.3, "url_length": 42, @@ -94,7 +94,7 @@ PUT /test/_doc/1?refresh PUT /test/_doc/2?refresh { - "url": "http://en.wikipedia.org/wiki/2016_Brazilian_Grand_Prix", + "url": "https://en.wikipedia.org/wiki/2016_Brazilian_Grand_Prix", "content": "Formula One motor race held on 13 November 2016", "pagerank": 50.3, "url_length": 47, @@ -107,7 +107,7 @@ PUT /test/_doc/2?refresh PUT /test/_doc/3?refresh { - "url": "http://en.wikipedia.org/wiki/Deadpool_(film)", + "url": "https://en.wikipedia.org/wiki/Deadpool_(film)", "content": "Deadpool is a 2016 American superhero film", "pagerank": 50.3, "url_length": 37, diff --git a/docs/reference/query-dsl/shape-query.asciidoc b/docs/reference/query-dsl/shape-query.asciidoc index 919993a4eb29..29406dde04ae 100644 --- a/docs/reference/query-dsl/shape-query.asciidoc +++ b/docs/reference/query-dsl/shape-query.asciidoc @@ -18,7 +18,7 @@ examples. 
==== Inline Shape Definition Similar to the `geo_shape` query, the `shape` query uses -http://www.geojson.org[GeoJSON] or +http://geojson.org[GeoJSON] or https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry[Well Known Text] (WKT) to represent shapes. diff --git a/docs/reference/query-dsl/term-level-queries.asciidoc b/docs/reference/query-dsl/term-level-queries.asciidoc index 440f436b49a3..62868b7fc958 100644 --- a/docs/reference/query-dsl/term-level-queries.asciidoc +++ b/docs/reference/query-dsl/term-level-queries.asciidoc @@ -26,7 +26,7 @@ Returns documents that contain any indexed value for a field. <>:: Returns documents that contain terms similar to the search term. {es} measures similarity, or fuzziness, using a -http://en.wikipedia.org/wiki/Levenshtein_distance[Levenshtein edit distance]. +{wikipedia}/Levenshtein_distance[Levenshtein edit distance]. <>:: Returns documents based on their <>. @@ -74,4 +74,4 @@ include::terms-query.asciidoc[] include::terms-set-query.asciidoc[] -include::wildcard-query.asciidoc[] \ No newline at end of file +include::wildcard-query.asciidoc[] diff --git a/docs/reference/rest-api/cron-expressions.asciidoc b/docs/reference/rest-api/cron-expressions.asciidoc index 2c59b7e46179..0708a85044c3 100644 --- a/docs/reference/rest-api/cron-expressions.asciidoc +++ b/docs/reference/rest-api/cron-expressions.asciidoc @@ -8,9 +8,9 @@ A cron expression is a string of the following form: <seconds> <minutes> <hours> <day_of_month> <month> <day_of_week> [year] ------------------------------ -{es} uses the cron parser from the http://www.quartz-scheduler.org[Quartz Job Scheduler]. +{es} uses the cron parser from the https://quartz-scheduler.org[Quartz Job Scheduler]. For more information about writing Quartz cron expressions, see the -http://www.quartz-scheduler.org/documentation/quartz-2.2.x/tutorials/tutorial-lesson-06.html[Quartz CronTrigger Tutorial]. +http://www.quartz-scheduler.org/documentation/quartz-2.2.2/tutorials/crontrigger.html[Quartz CronTrigger Tutorial]. All schedule times are in coordinated universal time (UTC); other timezones are not supported. diff --git a/docs/reference/scripting/expression.asciidoc b/docs/reference/scripting/expression.asciidoc index 3bf4f8f8445f..61301fa873b4 100644 --- a/docs/reference/scripting/expression.asciidoc +++ b/docs/reference/scripting/expression.asciidoc @@ -19,7 +19,7 @@ This allows for very fast execution, even faster than if you had written a `nati Expressions support a subset of javascript syntax: a single expression. -See the link:http://lucene.apache.org/core/6_0_0/expressions/index.html?org/apache/lucene/expressions/js/package-summary.html[expressions module documentation] +See the https://lucene.apache.org/core/{lucene_version_path}/expressions/index.html?org/apache/lucene/expressions/js/package-summary.html[expressions module documentation] for details on what operators and functions are available.
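To ground the expression scripting discussed in that hunk, a hedged sketch of a Lucene expression used to compute a score; the index `my-index-000001` and numeric field `popularity` are assumed purely for illustration:

[source,console]
----
GET /my-index-000001/_search
{
  "query": {
    "script_score": {
      "query": { "match_all": {} },
      "script": {
        "lang": "expression",
        "source": "_score * doc['popularity'].value"
      }
    }
  }
}
----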
Variables in `expression` scripts are available to access: diff --git a/docs/reference/scripting/security.asciidoc b/docs/reference/scripting/security.asciidoc index b0072be4fd3c..385d0217686f 100644 --- a/docs/reference/scripting/security.asciidoc +++ b/docs/reference/scripting/security.asciidoc @@ -53,7 +53,7 @@ Bad: [[modules-scripting-other-layers]] === Other security layers In addition to user privileges and script sandboxing Elasticsearch uses the -http://www.oracle.com/technetwork/java/seccodeguide-139067.html[Java Security Manager] +https://www.oracle.com/java/technologies/javase/seccodeguide.html[Java Security Manager] and native security tools as additional layers of security. As part of its startup sequence Elasticsearch enables the Java Security Manager diff --git a/docs/reference/scripting/using.asciidoc b/docs/reference/scripting/using.asciidoc index 2c2757c5693d..312e0861700d 100644 --- a/docs/reference/scripting/using.asciidoc +++ b/docs/reference/scripting/using.asciidoc @@ -234,7 +234,7 @@ entire query. Just provide the stored template's ID and the template parameters. This is useful when you want to run a commonly used query quickly and without mistakes. -Search templates use the http://mustache.github.io/mustache.5.html[mustache +Search templates use the https://mustache.github.io/mustache.5.html[mustache templating language]. See <> for more information and examples. [discrete] diff --git a/docs/reference/search/rank-eval.asciidoc b/docs/reference/search/rank-eval.asciidoc index 675db2388cb2..8628f1d52b8a 100644 --- a/docs/reference/search/rank-eval.asciidoc +++ b/docs/reference/search/rank-eval.asciidoc @@ -405,7 +405,7 @@ in the query. Defaults to 10. Expected Reciprocal Rank (ERR) is an extension of the classical reciprocal rank for the graded relevance case (Olivier Chapelle, Donald Metzler, Ya Zhang, and Pierre Grinspan. 2009. -http://olivier.chapelle.cc/pub/err.pdf[Expected reciprocal rank for graded relevance].) +https://olivier.chapelle.cc/pub/err.pdf[Expected reciprocal rank for graded relevance].) It is based on the assumption of a cascade model of search, in which a user scans through ranked search results in order and stops at the first document diff --git a/docs/reference/search/request/scroll.asciidoc b/docs/reference/search/request/scroll.asciidoc index 071cc4fd2f7e..6a15d206ca5d 100644 --- a/docs/reference/search/request/scroll.asciidoc +++ b/docs/reference/search/request/scroll.asciidoc @@ -25,7 +25,7 @@ Perl:: Python:: - See http://elasticsearch-py.readthedocs.org/en/master/helpers.html[elasticsearch.helpers.*] + See https://elasticsearch-py.readthedocs.org/en/master/helpers.html[elasticsearch.helpers.*] JavaScript:: diff --git a/docs/reference/search/search-template.asciidoc b/docs/reference/search/search-template.asciidoc index 414ba5b8c17d..00ffc2833598 100644 --- a/docs/reference/search/search-template.asciidoc +++ b/docs/reference/search/search-template.asciidoc @@ -34,7 +34,7 @@ render search requests, before they are executed and fill existing templates with template parameters. For more information on how Mustache templating and what kind of templating you -can do with it check out the http://mustache.github.io/mustache.5.html[online +can do with it check out the https://mustache.github.io/mustache.5.html[online documentation of the mustache project]. 
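As a concrete illustration of the mustache templating referenced above, a minimal inline search template; the `message` field and the parameter value are illustrative:

[source,console]
----
GET _search/template
{
  "source": {
    "query": {
      "match": {
        "message": "{{query_string}}"
      }
    }
  },
  "params": {
    "query_string": "search for these words"
  }
}
----

At render time the `{{query_string}}` placeholder is replaced by the value supplied in `params`, so the same template can serve many queries.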
NOTE: The mustache language is implemented in {es} as a sandboxed scripting @@ -604,7 +604,7 @@ query as a string instead: The `{{#url}}value{{/url}}` function can be used to encode a string value in a HTML encoding form as defined in by the -http://www.w3.org/TR/html4/[HTML specification]. +https://www.w3.org/TR/html4/[HTML specification]. As an example, it is useful to encode a URL: diff --git a/docs/reference/setup.asciidoc b/docs/reference/setup.asciidoc index c8f9e3418d4f..6f3459922c0d 100644 --- a/docs/reference/setup.asciidoc +++ b/docs/reference/setup.asciidoc @@ -24,14 +24,14 @@ platforms, but it is possible that it will work on other platforms too. == Java (JVM) Version Elasticsearch is built using Java, and includes a bundled version of -http://openjdk.java.net[OpenJDK] from the JDK maintainers (GPLv2+CE) +https://openjdk.java.net[OpenJDK] from the JDK maintainers (GPLv2+CE) within each distribution. The bundled JVM is the recommended JVM and is located within the `jdk` directory of the Elasticsearch home directory. To use your own version of Java, set the `JAVA_HOME` environment variable. If you must use a version of Java that is different from the bundled JVM, we recommend using a link:/support/matrix[supported] -http://www.oracle.com/technetwork/java/eol-135779.html[LTS version of Java]. +https://www.oracle.com/technetwork/java/eol-135779.html[LTS version of Java]. Elasticsearch will refuse to start if a known-bad version of Java is used. The bundled JVM directory may be removed when using your own JVM. diff --git a/docs/reference/setup/configuration.asciidoc b/docs/reference/setup/configuration.asciidoc index dae86043352f..e3bb60ca30b3 100644 --- a/docs/reference/setup/configuration.asciidoc +++ b/docs/reference/setup/configuration.asciidoc @@ -48,7 +48,7 @@ change the config directory location. [discrete] === Config file format -The configuration format is http://www.yaml.org/[YAML]. Here is an +The configuration format is https://yaml.org/[YAML]. Here is an example of changing the path of the data and logs directories: [source,yaml] diff --git a/docs/reference/setup/install/deb.asciidoc b/docs/reference/setup/install/deb.asciidoc index ad274466d3be..b09b9c4b29c9 100644 --- a/docs/reference/setup/install/deb.asciidoc +++ b/docs/reference/setup/install/deb.asciidoc @@ -11,7 +11,7 @@ The latest stable version of Elasticsearch can be found on the link:/downloads/elasticsearch[Download Elasticsearch] page. Other versions can be found on the link:/downloads/past-releases[Past Releases page]. -NOTE: Elasticsearch includes a bundled version of http://openjdk.java.net[OpenJDK] +NOTE: Elasticsearch includes a bundled version of https://openjdk.java.net[OpenJDK] from the JDK maintainers (GPLv2+CE). To use your own version of Java, see the <> diff --git a/docs/reference/setup/install/rpm.asciidoc b/docs/reference/setup/install/rpm.asciidoc index d0a9bbea37d6..ef8c318260d8 100644 --- a/docs/reference/setup/install/rpm.asciidoc +++ b/docs/reference/setup/install/rpm.asciidoc @@ -15,7 +15,7 @@ The latest stable version of Elasticsearch can be found on the link:/downloads/elasticsearch[Download Elasticsearch] page. Other versions can be found on the link:/downloads/past-releases[Past Releases page]. -NOTE: Elasticsearch includes a bundled version of http://openjdk.java.net[OpenJDK] +NOTE: Elasticsearch includes a bundled version of https://openjdk.java.net[OpenJDK] from the JDK maintainers (GPLv2+CE). 
To use your own version of Java, see the <> diff --git a/docs/reference/setup/install/targz.asciidoc b/docs/reference/setup/install/targz.asciidoc index d8f6cb00414f..cc374f1ea243 100644 --- a/docs/reference/setup/install/targz.asciidoc +++ b/docs/reference/setup/install/targz.asciidoc @@ -10,7 +10,7 @@ link:/downloads/elasticsearch[Download Elasticsearch] page. Other versions can be found on the link:/downloads/past-releases[Past Releases page]. -NOTE: Elasticsearch includes a bundled version of http://openjdk.java.net[OpenJDK] +NOTE: Elasticsearch includes a bundled version of https://openjdk.java.net[OpenJDK] from the JDK maintainers (GPLv2+CE). To use your own version of Java, see the <> diff --git a/docs/reference/setup/install/windows.asciidoc b/docs/reference/setup/install/windows.asciidoc index 04fe974c6137..27cd8fcf9f1f 100644 --- a/docs/reference/setup/install/windows.asciidoc +++ b/docs/reference/setup/install/windows.asciidoc @@ -25,7 +25,7 @@ link:/downloads/elasticsearch[Download Elasticsearch] page. Other versions can be found on the link:/downloads/past-releases[Past Releases page]. -NOTE: Elasticsearch includes a bundled version of http://openjdk.java.net[OpenJDK] +NOTE: Elasticsearch includes a bundled version of https://openjdk.java.net[OpenJDK] from the JDK maintainers (GPLv2+CE). To use your own version of Java, see the <> diff --git a/docs/reference/setup/install/zip-windows.asciidoc b/docs/reference/setup/install/zip-windows.asciidoc index f4f8cdd1a839..96954fdad104 100644 --- a/docs/reference/setup/install/zip-windows.asciidoc +++ b/docs/reference/setup/install/zip-windows.asciidoc @@ -24,7 +24,7 @@ link:/downloads/elasticsearch[Download Elasticsearch] page. Other versions can be found on the link:/downloads/past-releases[Past Releases page]. -NOTE: Elasticsearch includes a bundled version of http://openjdk.java.net[OpenJDK] +NOTE: Elasticsearch includes a bundled version of https://openjdk.java.net[OpenJDK] from the JDK maintainers (GPLv2+CE). To use your own version of Java, see the <> @@ -200,7 +200,7 @@ The Elasticsearch service can be configured prior to installation by setting the The timeout in seconds that procrun waits for service to exit gracefully. Defaults to `0`. -NOTE: At its core, `elasticsearch-service.bat` relies on http://commons.apache.org/proper/commons-daemon/[Apache Commons Daemon] project +NOTE: At its core, `elasticsearch-service.bat` relies on https://commons.apache.org/proper/commons-daemon/[Apache Commons Daemon] project to install the service. Environment variables set prior to the service installation are copied and will be used during the service lifecycle. This means any changes made to them after the installation will not be picked up unless the service is reinstalled. NOTE: On Windows, the <> can be configured as for diff --git a/docs/reference/setup/logging-config.asciidoc b/docs/reference/setup/logging-config.asciidoc index 249f82ea7afe..1a99cc9568d0 100644 --- a/docs/reference/setup/logging-config.asciidoc +++ b/docs/reference/setup/logging-config.asciidoc @@ -117,7 +117,7 @@ loggers. The logger section contains the java packages and their corresponding log level. The appender section contains the destinations for the logs. Extensive information on how to customize logging and all the supported appenders can be found on the -http://logging.apache.org/log4j/2.x/manual/configuration.html[Log4j +https://logging.apache.org/log4j/2.x/manual/configuration.html[Log4j documentation]. 
[discrete] diff --git a/docs/reference/setup/sysconfig/dns-cache.asciidoc b/docs/reference/setup/sysconfig/dns-cache.asciidoc index 54a1e20a15ae..94c469c978ce 100644 --- a/docs/reference/setup/sysconfig/dns-cache.asciidoc +++ b/docs/reference/setup/sysconfig/dns-cache.asciidoc @@ -10,10 +10,10 @@ seconds. These values should be suitable for most environments, including environments where DNS resolutions vary with time. If not, you can edit the values `es.networkaddress.cache.ttl` and `es.networkaddress.cache.negative.ttl` in the <>. Note that the values -http://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.ttl=`] +https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.ttl=`] and -http://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.negative.ttl=`] +https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html[`networkaddress.cache.negative.ttl=`] in the -http://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html[Java +https://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html[Java security policy] are ignored by Elasticsearch unless you remove the settings for `es.networkaddress.cache.ttl` and `es.networkaddress.cache.negative.ttl`. diff --git a/docs/reference/sql/endpoints/jdbc.asciidoc b/docs/reference/sql/endpoints/jdbc.asciidoc index 36c4d8e5dcbf..a4bf08ad13dc 100644 --- a/docs/reference/sql/endpoints/jdbc.asciidoc +++ b/docs/reference/sql/endpoints/jdbc.asciidoc @@ -16,7 +16,7 @@ The JDBC driver can be obtained from: Dedicated page:: https://www.elastic.co/downloads/jdbc-client[elastic.co] provides links, typically for manual downloads. Maven dependency:: -http://maven.apache.org/[Maven]-compatible tools can retrieve it automatically as a dependency: +https://maven.apache.org/[Maven]-compatible tools can retrieve it automatically as a dependency: ["source","xml",subs="attributes"] ---- diff --git a/docs/reference/sql/endpoints/rest.asciidoc b/docs/reference/sql/endpoints/rest.asciidoc index f7bbf3ce6f83..f81b090afa48 100644 --- a/docs/reference/sql/endpoints/rest.asciidoc +++ b/docs/reference/sql/endpoints/rest.asciidoc @@ -98,7 +98,7 @@ s|Description |cbor |application/cbor -|http://cbor.io/[Concise Binary Object Representation] +|https://cbor.io/[Concise Binary Object Representation] |smile |application/smile diff --git a/docs/resiliency/index.asciidoc b/docs/resiliency/index.asciidoc index e20679be2c2d..16b790db1588 100644 --- a/docs/resiliency/index.asciidoc +++ b/docs/resiliency/index.asciidoc @@ -638,7 +638,7 @@ When using multiple data paths, an index could be falsely reported as corrupted. [discrete] === Randomized Testing (STATUS: DONE, v1.0.0) -In order to best validate for resiliency in Elasticsearch, we rewrote the Elasticsearch test infrastructure to introduce the concept of http://berlinbuzzwords.de/sites/berlinbuzzwords.de/files/media/documents/dawidweiss-randomizedtesting-pub.pdf[randomized testing]. Randomized testing allows us to easily enhance the Elasticsearch testing infrastructure with predictably irrational conditions, making the resulting code base more resilient. +In order to best validate for resiliency in Elasticsearch, we rewrote the Elasticsearch test infrastructure to introduce the concept of https://github.com/randomizedtesting/randomizedtesting[randomized testing]. 
Randomized testing allows us to easily enhance the Elasticsearch testing infrastructure with predictably irrational conditions, making the resulting code base more resilient. Each of our integration tests runs against a cluster with a random number of nodes, and indices have a random number of shards and replicas. Merge settings change for every run, indexing is done in serial or async fashion or even wrapped in a bulk operation and thread pool sizes vary to ensure that we don’t produce a deadlock no matter what happens. The list of places we use this randomization infrastructure is long, and growing every day, and has saved us headaches several times before we shipped a particular feature.