[[docker]]
=== Install {es} with Docker

{es} is also available as Docker images. Starting with version 8.0.0, these
are based upon a tiny core of essential files. Prior versions used
https://hub.docker.com/_/centos/[centos:8] as the base image.

A list of all published Docker images and tags is available at
https://www.docker.elastic.co[www.docker.elastic.co]. The source files are in
https://github.com/elastic/elasticsearch/blob/{branch}/distribution/docker[GitHub].

include::license.asciidoc[]

Starting in {es} 8.0, security is enabled by default. With security enabled,
{stack} {security-features} require TLS encryption for the transport
networking layer, or your cluster will fail to start.

==== Install Docker Desktop or Docker Engine

Install the appropriate https://docs.docker.com/get-docker/[Docker
application] for your operating system.

NOTE: Make sure that Docker is allotted at least 4GiB of memory. In Docker
Desktop, you configure resource usage on the Advanced tab in Preferences
(macOS) or Settings (Windows).

==== Pull the {es} Docker image

Obtaining {es} for Docker is as simple as issuing a `docker pull` command
against the Elastic Docker registry.

ifeval::["{release-state}"=="unreleased"]
WARNING: Version {version} of {es} has not yet been released, so no Docker
image is currently available for this version.
endif::[]

ifeval::["{release-state}"!="unreleased"]
[source,sh,subs="attributes"]
----
docker pull {docker-repo}:{version}
----
endif::[]

Now that you have the {es} Docker image, you can start a <> or <> cluster.

[[docker-cli-run-dev-mode]]
==== Start a single-node cluster with Docker

ifeval::["{release-state}"=="unreleased"]
WARNING: Version {version} of the {es} Docker image has not yet been released.
endif::[]

If you're starting a single-node {es} cluster in a Docker container, security
will be automatically enabled and configured for you.
When you start {es} for the first time, the following security configuration
occurs automatically:

* <> are generated for the transport and HTTP layers.
* The Transport Layer Security (TLS) configuration settings are written to
`elasticsearch.yml`.
* A password is generated for the `elastic` user.
* An enrollment token is generated for {kib}.

You can then {kibana-ref}/docker.html[start {kib}] and enter the enrollment
token, which is valid for 30 minutes. This token automatically applies the
security settings from your {es} cluster, authenticates to {es} with the
`kibana_system` user, and writes the security configuration to `kibana.yml`.

The following command starts a single-node {es} cluster for development or
testing.

. Start {es} in Docker. A password is generated for the `elastic` user and
output to the terminal, plus an enrollment token for enrolling {kib}.
+
--
ifeval::["{release-state}"!="unreleased"]
[source,sh,subs="attributes"]
----
docker run --name es-node01 -p 9200:9200 -p 9300:9300 -it {docker-image}
----
endif::[]
--
+
TIP: You might need to scroll back a bit in the terminal to view the password
and enrollment token.

. Copy the generated password and enrollment token and save them in a secure
location. These values are shown only when you start {es} for the first time.
+
[NOTE]
====
If you need to reset the password for the `elastic` user or other built-in
users, run the <> tool. This tool is available in the {es} `/bin` directory of
the Docker container. For example:

[source,sh]
----
docker exec -it es-node01 /usr/share/elasticsearch/bin/elasticsearch-reset-password
----
====

. Copy the `http_ca.crt` security certificate from your Docker container to
your local machine.
+
[source,sh]
----
docker cp es-node01:/usr/share/elasticsearch/config/tls_auto_config_*/http_ca.crt .
----

. Open a new terminal and verify that you can connect to your {es} cluster by
making an authenticated call, using the `http_ca.crt` file that you copied
from your Docker container.
Enter the password for the `elastic` user when prompted.
+
[source,sh]
----
curl --cacert http_ca.crt -u elastic https://localhost:9200
----
// NOTCONSOLE

===== Next steps

You now have a test {es} environment set up. Before you start serious
development or go into production with {es}, review the <> to apply when
running {es} in Docker in production.

[[elasticsearch-security-certificates]]
===== Security certificates and keys

When you start {es} for the first time, the following certificates and keys
are generated in the
`/usr/share/elasticsearch/config/tls_auto_config_initial_node_` directory in
the Docker container, and allow you to connect a {kib} instance to your
secured {es} cluster and encrypt internode communication. The files are
listed here for reference.

`http_ca.crt`::
The CA certificate that is used to sign the certificates for the HTTP layer
of this {es} cluster.

`http_keystore_local_node.p12`::
Keystore that contains the key and certificate for the HTTP layer for this
node.

`transport_keystore_all_nodes.p12`::
Keystore that contains the key and certificate for the transport layer for
all the nodes in your cluster.

[[docker-compose-file]]
==== Start a multi-node cluster with Docker Compose

When defining multiple nodes in a `docker-compose.yml` file, you'll need to
explicitly enable and configure security so that {es} doesn't try to generate
a password for the `elastic` user on every node.

===== Prepare the environment

The following example uses Docker Compose to start a three-node {es} cluster.
Create each of the following files inside of a new directory. Copy and paste
the contents of each example into the appropriate file as described in the
following sections:

* <>
* <>
* <>
* <>

[[docker-instances-yml]]
[discrete]
===== `instances.yml`

When you run the example, {es} uses this file to create a three-node cluster.
The nodes are named `es01`, `es02`, and `es03`.
ifeval::["{release-state}"=="unreleased"]
+
--
WARNING: Version {version} of {es} has not yet been released, so a
`docker-compose.yml` is not available for this version.
endif::[]

ifeval::["{release-state}"!="unreleased"]
[source,yaml,subs="attributes"]
----
include::instances.yml[]
----
endif::[]
--

[[docker-env]]
[discrete]
===== `.env`

The `.env` file sets environment variables that are used when you run the
example. Ensure that you specify a strong password for the `elastic` user
with the `ELASTIC_PASSWORD` variable. This variable is referenced by the
`docker-compose.yml` file.

ifeval::["{release-state}"=="unreleased"]
+
--
WARNING: Version {version} of {es} has not yet been released, so a
`docker-compose.yml` is not available for this version.
endif::[]

ifeval::["{release-state}"!="unreleased"]
[source,yaml,subs="attributes"]
----
include::.env[]
----
endif::[]
--

`COMPOSE_PROJECT_NAME`::
Adds an `es_` prefix for all volumes and networks created by
`docker-compose`.

`CERTS_DIR`::
Specifies the path inside the Docker image where {es} expects the security
certificates.

`ELASTIC_PASSWORD`::
Sets the initial password for the `elastic` user.

[discrete]
[[docker-create-certs]]
===== `create-certs.yml`

The `create-certs.yml` file includes a script that generates node
certificates and a certificate authority (CA) certificate and key where {es}
expects them. These certificates and key are placed in a Docker volume named
`es_certs`.

ifeval::["{release-state}"=="unreleased"]
+
--
WARNING: Version {version} of {es} has not yet been released, so a
`docker-compose.yml` is not available for this version.
endif::[]

ifeval::["{release-state}"!="unreleased"]
[source,yaml,subs="attributes"]
----
include::create-certs.yml[]
----
endif::[]
--

[[docker-docker-compose]]
[discrete]
===== `docker-compose.yml`

The `docker-compose.yml` file defines configuration settings for each of your
{es} nodes.
NOTE: This sample `docker-compose.yml` file uses the `ES_JAVA_OPTS`
environment variable to manually set the heap size to 512MB. We do not
recommend using `ES_JAVA_OPTS` in production. See <>.

This configuration exposes port `9200` on all network interfaces. Given how
Docker manipulates `iptables` on Linux, this means that your {es} cluster is
publicly accessible, potentially ignoring any firewall settings. If you don't
want to expose port `9200` and instead use a reverse proxy, replace
`9200:9200` with `127.0.0.1:9200:9200` in the `docker-compose.yml` file. {es}
will then only be accessible from the host machine itself.

ifeval::["{release-state}"=="unreleased"]
+
--
WARNING: Version {version} of {es} has not yet been released, so a
`docker-compose.yml` is not available for this version.
endif::[]

ifeval::["{release-state}"!="unreleased"]
[source,yaml,subs="attributes"]
----
include::docker-compose.yml[]
----
endif::[]
--

===== Start your cluster with security enabled and configured

This sample Docker Compose file starts a three-node {es} cluster. The
https://docs.docker.com/storage/volumes[Docker named volumes] `data01`,
`data02`, and `data03` store the node data directories so that the data
persists across restarts. If they don't already exist, running
`docker-compose` creates these volumes.

[[docker-generate-certificates]]
. Generate the certificates. You only need to run this command one time:
+
["source","sh"]
----
docker-compose -f create-certs.yml run --rm create_certs
----

. Start your {es} nodes with TLS configured on the transport layer:
+
["source","sh"]
----
docker-compose up -d
----
+
Node `es01` listens on `localhost:9200` and `es02` and `es03` talk to `es01`
over a Docker network.

. Access the {es} API over TLS using the bootstrapped password for the
`elastic` user that you specified in the `.env` file:
+
["source","sh",subs="attributes"]
----
docker run --rm -v es_certs:/certs --network=es_default {docker-image} curl --cacert /certs/ca/ca.crt -u elastic: https://es01:9200
----
// NOTCONSOLE
+
--
`es_certs`::
The name of the volume that the script in `create-certs.yml` creates to hold
your certificates.

``::
The password for the `elastic` user, defined by the `ELASTIC_PASSWORD`
variable in the `.env` file.
--

. Submit a `_cat/nodes` request to check that the nodes are up and running:
+
[source,sh]
----
curl -X GET "https://localhost:9200/_cat/nodes?v=true&pretty"
----
// NOTCONSOLE

Log messages go to the console and are handled by the configured Docker
logging driver. By default, you can access logs with `docker logs`. If you
prefer that the {es} container write logs to disk, set the `ES_LOG_STYLE`
environment variable to `file`. This causes {es} to use the same logging
configuration as other {es} distribution formats.

If you need to generate a new password for the `elastic` user or any of the
built-in users, use the `elasticsearch-reset-password` tool:

WARNING: Windows users not running PowerShell must remove all backslashes
(`\`) and join lines in the following command.

["source","sh"]
----
docker exec es01 /bin/bash -c "bin/elasticsearch-reset-password \
auto --batch \
--url https://localhost:9200"
----

===== Stop the cluster

To stop the cluster, run `docker-compose down`. The data in the Docker
volumes is preserved and loaded when you restart the cluster with
`docker-compose up`.

--
["source","sh"]
----
docker-compose down
----
--

To **delete the data volumes** when you stop the cluster, specify the `-v`
option:

["source","sh"]
----
docker-compose down -v
----

WARNING: Deleting data volumes will remove the generated security
certificates for your nodes. You will need to run `docker-compose` and <>
before starting your cluster.
===== Next steps

You now have a test {es} environment set up. Before you start serious
development or go into production with {es}, review the <> to apply when
running {es} in Docker in production.

[[docker-prod-prerequisites]]
==== Using the Docker images in production

The following requirements and recommendations apply when running {es} in
Docker in production.

===== Set `vm.max_map_count` to at least `262144`

The `vm.max_map_count` kernel setting must be set to at least `262144` for
production use.

How you set `vm.max_map_count` depends on your platform:

* Linux
+
--
The `vm.max_map_count` setting should be set permanently in
`/etc/sysctl.conf`:

[source,sh]
--------------------------------------------
grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
--------------------------------------------

To apply the setting on a live system, run:

[source,sh]
--------------------------------------------
sysctl -w vm.max_map_count=262144
--------------------------------------------
--

* macOS with https://docs.docker.com/docker-for-mac[Docker for Mac]
+
--
The `vm.max_map_count` setting must be set within the xhyve virtual machine:

. From the command line, run:
+
[source,sh]
--------------------------------------------
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
--------------------------------------------

. Press enter and use `sysctl` to configure `vm.max_map_count`:
+
[source,sh]
--------------------------------------------
sysctl -w vm.max_map_count=262144
--------------------------------------------

. To exit the `screen` session, type `Ctrl a d`.
--

* Windows and macOS with
https://www.docker.com/products/docker-desktop[Docker Desktop]
+
--
The `vm.max_map_count` setting must be set via docker-machine:

[source,sh]
--------------------------------------------
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
--------------------------------------------
--

* Windows with
https://docs.docker.com/docker-for-windows/wsl[Docker Desktop WSL 2 backend]
+
--
The `vm.max_map_count` setting must be set in the docker-desktop container:

[source,sh]
--------------------------------------------
wsl -d docker-desktop sysctl -w vm.max_map_count=262144
--------------------------------------------
--

===== Configuration files must be readable by the `elasticsearch` user

By default, {es} runs inside the container as user `elasticsearch` using
uid:gid `1000:0`.

IMPORTANT: One exception is
https://docs.openshift.com/container-platform/3.6/creating_images/guidelines.html#openshift-specific-guidelines[OpenShift],
which runs containers using an arbitrarily assigned user ID. OpenShift
presents persistent volumes with the gid set to `0`, which works without any
adjustments.

If you are bind-mounting a local directory or file, it must be readable by
the `elasticsearch` user. In addition, this user must have write access to
the <> ({es} needs write access to the `config` directory so that it can
generate a keystore). A good strategy is to grant group access to gid `0` for
the local directory.

For example, to prepare a local directory for storing data through a
bind-mount:

[source,sh]
--------------------------------------------
mkdir esdatadir
chmod g+rwx esdatadir
chgrp 0 esdatadir
--------------------------------------------

You can also run an {es} container using both a custom UID and GID. Unless
you bind-mount each of the `config`, `data` and `logs` directories, you must
pass the command line option `--group-add 0` to `docker run`.
This ensures that the user under which {es} is running is also a member of
the `root` (GID 0) group inside the container.

===== Increase ulimits for nofile and nproc

Increased ulimits for <> and <> must be available for the {es} containers.
Verify the
https://github.com/moby/moby/tree/ea4d1243953e6b652082305a9c3cda8656edab26/contrib/init[init
system] for the Docker daemon sets them to acceptable values.

To check the Docker daemon defaults for ulimits, run:

[source,sh]
--------------------------------------------
docker run --rm centos:8 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'
--------------------------------------------

If needed, adjust them in the Daemon or override them per container. For
example, when using `docker run`, set:

[source,sh]
--------------------------------------------
--ulimit nofile=65535:65535
--------------------------------------------

===== Disable swapping

Swapping needs to be disabled for performance and node stability. For
information about ways to do this, see <>.

If you opt for the `bootstrap.memory_lock: true` approach, you also need to
define the `memlock: true` ulimit in the
https://docs.docker.com/engine/reference/commandline/dockerd/#default-ulimits[Docker
Daemon], or explicitly set for the container as shown in the <>. When using
`docker run`, you can specify:

[source,sh]
----
-e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1
----

===== Randomize published ports

The image https://docs.docker.com/engine/reference/builder/#/expose[exposes]
TCP ports 9200 and 9300. For production clusters, randomizing the published
ports with `--publish-all` is recommended, unless you are pinning one
container per host.

[[docker-set-heap-size]]
===== Manually set the heap size

By default, {es} automatically sizes JVM heap based on a node's <> and the
total memory available to the node's container. We recommend this default
sizing for most production environments.
If needed, you can override default sizing by manually setting JVM heap size.

To manually set the heap size in production, bind mount a <> file under
`/usr/share/elasticsearch/config/jvm.options.d` that includes your desired
<> settings.

For testing, you can also manually set the heap size using the `ES_JAVA_OPTS`
environment variable. For example, to use 16GB, specify
`-e ES_JAVA_OPTS="-Xms16g -Xmx16g"` with `docker run`. The `ES_JAVA_OPTS`
variable overrides all other JVM options. We do not recommend using
`ES_JAVA_OPTS` in production. The `docker-compose.yml` file above sets the
heap size to 512MB.

===== Pin deployments to a specific image version

Pin your deployments to a specific version of the {es} Docker image. For
example +docker.elastic.co/elasticsearch/elasticsearch:{version}+.

===== Always bind data volumes

You should use a volume bound on `/usr/share/elasticsearch/data` for the
following reasons:

. The data of your {es} node won't be lost if the container is killed
. {es} is I/O sensitive and the Docker storage driver is not ideal for fast
I/O
. It allows the use of advanced
https://docs.docker.com/engine/extend/plugins/#volume-plugins[Docker volume
plugins]

===== Avoid using `loop-lvm` mode

If you are using the devicemapper storage driver, do not use the default
`loop-lvm` mode. Configure docker-engine to use
https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-docker-with-devicemapper[direct-lvm].

===== Centralize your logs

Consider centralizing your logs by using a different
https://docs.docker.com/engine/admin/logging/overview/[logging driver]. Also
note that the default `json-file` logging driver is not ideally suited for
production use.

[[docker-configuration-methods]]
==== Configuring {es} with Docker

When you run in Docker, the <> are loaded from
`/usr/share/elasticsearch/config/`.
To use custom configuration files, you <> over the configuration files in the
image.

You can set individual {es} configuration parameters using Docker environment
variables. The <> and the <> use this method. You can use the setting name
directly as the environment variable name. If you cannot do this, for example
because your orchestration platform forbids periods in environment variable
names, then you can use an alternative style by converting the setting name
as follows.

. Change the setting name to uppercase
. Prefix it with `ES_SETTING_`
. Escape any underscores (`_`) by duplicating them
. Convert all periods (`.`) to underscores (`_`)

For example, `-e bootstrap.memory_lock=true` becomes
`-e ES_SETTING_BOOTSTRAP_MEMORY__LOCK=true`.

You can use the contents of a file to set the value of the `ELASTIC_PASSWORD`
or `KEYSTORE_PASSWORD` environment variables, by suffixing the environment
variable name with `_FILE`. This is useful for passing secrets such as
passwords to {es} without specifying them directly.

For example, to set the {es} bootstrap password from a file, you can bind
mount the file and set the `ELASTIC_PASSWORD_FILE` environment variable to
the mount location. If you mount the password file to
`/run/secrets/bootstrapPassword.txt`, specify:

[source,sh]
--------------------------------------------
-e ELASTIC_PASSWORD_FILE=/run/secrets/bootstrapPassword.txt
--------------------------------------------

You can override the default command for the image to pass {es} configuration
parameters as command line options. For example:

[source,sh]
--------------------------------------------
docker run <image> bin/elasticsearch -Ecluster.name=mynewclustername
--------------------------------------------

While bind-mounting your configuration files is usually the preferred method
in production, you can also <<_c_customized_image, create a custom Docker
image>> that contains your configuration.
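The setting-name conversion described above is mechanical, so it can be
scripted. The following is a minimal sketch (the `to_env_var` helper name is
ours, not part of the image); note that underscores must be doubled *before*
periods become underscores, or the new underscores would be doubled too:

```shell
# Hypothetical helper: convert an Elasticsearch setting name to the
# ES_SETTING_ environment-variable form described above.
to_env_var() {
  printf '%s' "$1" \
    | tr '[:lower:]' '[:upper:]' \
    | sed -e 's/_/__/g' -e 's/\./_/g' \
    | sed -e 's/^/ES_SETTING_/'
  echo
}

to_env_var "bootstrap.memory_lock"   # prints ES_SETTING_BOOTSTRAP_MEMORY__LOCK
```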
[[docker-config-bind-mount]]
===== Mounting {es} configuration files

Create custom config files and bind-mount them over the corresponding files
in the Docker image. For example, to bind-mount `custom_elasticsearch.yml`
with `docker run`, specify:

[source,sh]
--------------------------------------------
-v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
--------------------------------------------

If you bind-mount a custom `elasticsearch.yml` file, ensure it includes the
`network.host: 0.0.0.0` setting. This setting ensures the node is reachable
for HTTP and transport traffic, provided its ports are exposed. The Docker
image's built-in `elasticsearch.yml` file includes this setting by default.

IMPORTANT: The container **runs {es} as user `elasticsearch` using uid:gid
`1000:0`**. Bind mounted host directories and files must be accessible by
this user, and the data and log directories must be writable by this user.

[[docker-keystore-bind-mount]]
===== Create an encrypted {es} keystore

By default, {es} will auto-generate a keystore file for <>. This file is
obfuscated but not encrypted.

To encrypt your secure settings with a password and have them persist outside
the container, use a `docker run` command to manually create the keystore
instead. The command must:

* Bind-mount the `config` directory. The command will create an
`elasticsearch.keystore` file in this directory. To avoid errors, do not
directly bind-mount the `elasticsearch.keystore` file.
* Use the `elasticsearch-keystore` tool with the `create -p` option. You'll
be prompted to enter a password for the keystore.

ifeval::["{release-state}"!="unreleased"]
For example:

[source,sh,subs="attributes"]
----
docker run -it --rm \
-v full_path_to/config:/usr/share/elasticsearch/config \
docker.elastic.co/elasticsearch/elasticsearch:{version} \
bin/elasticsearch-keystore create -p
----

You can also use a `docker run` command to add or update secure settings in
the keystore.
You'll be prompted to enter the setting values. If the keystore is encrypted,
you'll also be prompted to enter the keystore password.

[source,sh,subs="attributes"]
----
docker run -it --rm \
-v full_path_to/config:/usr/share/elasticsearch/config \
docker.elastic.co/elasticsearch/elasticsearch:{version} \
bin/elasticsearch-keystore \
add my.secure.setting \
my.other.secure.setting
----
endif::[]

If you've already created the keystore and don't need to update it, you can
bind-mount the `elasticsearch.keystore` file directly. You can use the
`KEYSTORE_PASSWORD` environment variable to provide the keystore password to
the container at startup. For example, a `docker run` command might have the
following options:

[source,sh]
----
-v full_path_to/config/elasticsearch.keystore:/usr/share/elasticsearch/config/elasticsearch.keystore
-e KEYSTORE_PASSWORD=mypassword
----

[[_c_customized_image]]
===== Using custom Docker images

In some environments, it might make more sense to prepare a custom image that
contains your configuration. A `Dockerfile` to achieve this might be as
simple as:

[source,sh,subs="attributes"]
--------------------------------------------
FROM docker.elastic.co/elasticsearch/elasticsearch:{version}
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
--------------------------------------------

You could then build and run the image with:

[source,sh]
--------------------------------------------
docker build --tag=elasticsearch-custom .
docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom
--------------------------------------------

Some plugins require additional security permissions. You must explicitly
accept them either by:

* Attaching a `tty` when you run the Docker image and allowing the
permissions when prompted.
* Inspecting the security permissions and accepting them (if appropriate) by
adding the `--batch` flag to the plugin install command.
See {plugins}/_other_command_line_parameters.html[Plugin management] for more
information.

The {es} Docker image only includes what is required to run {es}, and does
not provide a package manager. It is possible to add additional utilities
with a multi-phase Docker build. You must also copy any dependencies, for
example shared libraries.

[source,sh,subs="attributes"]
--------------------------------------------
FROM centos:8 AS builder

RUN yum install -y some-package

FROM docker.elastic.co/elasticsearch/elasticsearch:{version}

COPY --from=builder /usr/bin/some-utility /usr/bin/
COPY --from=builder /usr/lib/some-lib.so /usr/lib/
--------------------------------------------

You should use `centos:8` as a base in order to avoid incompatibilities. Use
http://man7.org/linux/man-pages/man1/ldd.1.html[`ldd`] to list the shared
libraries required by a utility.

[discrete]
[[troubleshoot-docker-errors]]
==== Troubleshoot Docker errors for {es}

Here's how to resolve common errors when running {es} with Docker.

===== elasticsearch.keystore is a directory

[source,txt]
----
Exception in thread "main" org.elasticsearch.bootstrap.BootstrapException: java.io.IOException: Is a directory: SimpleFSIndexInput(path="/usr/share/elasticsearch/config/elasticsearch.keystore") Likely root cause: java.io.IOException: Is a directory
----

A <> `docker run` command attempted to directly bind-mount an
`elasticsearch.keystore` file that doesn't exist. If you use the `-v` or
`--volume` flag to mount a file that doesn't exist, Docker instead creates a
directory with the same name.

To resolve this error:

. Delete the `elasticsearch.keystore` directory in the `config` directory.
. Update the `-v` or `--volume` flag to point to the `config` directory path
rather than the keystore file's path. For an example, see <>.
. Retry the command.
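The cleanup steps above can be sketched without Docker; this is a minimal
illustration (the directory names are ours, and the `docker run` line is
shown as a comment for context only):

```shell
# Simulate the state Docker leaves behind when a nonexistent file path is
# bind-mounted: a directory named elasticsearch.keystore instead of a file.
mkdir -p config/elasticsearch.keystore

# Step 1: delete the wrongly created directory.
rm -r config/elasticsearch.keystore

# Step 2: mount the whole config directory instead of the keystore file, e.g.:
#   docker run -v "$PWD/config":/usr/share/elasticsearch/config ...
```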
===== elasticsearch.keystore: Device or resource busy

[source,txt]
----
Exception in thread "main" java.nio.file.FileSystemException: /usr/share/elasticsearch/config/elasticsearch.keystore.tmp -> /usr/share/elasticsearch/config/elasticsearch.keystore: Device or resource busy
----

A `docker run` command attempted to <> while directly bind-mounting the
`elasticsearch.keystore` file. To update the keystore, the container requires
access to other files in the `config` directory, such as `keystore.tmp`.

To resolve this error:

. Update the `-v` or `--volume` flag to point to the `config` directory path
rather than the keystore file's path. For an example, see <>.
. Retry the command.