[DOCS] Add local dev setup instructions (#107913)

* [DOCS] Add local dev setup instructions

- Replace existing Run ES in Docker locally page with a simpler no-security local dev setup
- Move this file into Quickstart folder, along with existing quickstart guide
- Update self-managed instructions in Quickstart guide to use local dev approach
Liam Thompson 2024-05-07 18:10:48 +02:00 committed by GitHub
parent 6e7afa04b4
commit d0f4966431
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
12 changed files with 202 additions and 254 deletions

@@ -10,7 +10,7 @@ include::intro.asciidoc[]
include::release-notes/highlights.asciidoc[]
-include::getting-started.asciidoc[]
+include::quickstart/index.asciidoc[]
include::setup.asciidoc[]

@@ -1,10 +1,9 @@
[chapter]
[[getting-started]]
-= Quick start
+== Quick start guide
This guide helps you learn how to:
-* install and run {es} and {kib} (using {ecloud} or Docker),
+* Run {es} and {kib} (using {ecloud} or in a local Docker dev environment),
* add a simple (non-timestamped) dataset to {es},
* run basic searches.

@@ -0,0 +1,10 @@
[[quickstart]]
= Quickstart
Get started quickly with {es}.
* Learn how to run {es} (and {kib}) for <<run-elasticsearch-locally,local development>>.
* Follow our <<getting-started,Quickstart guide>> to add data to {es} and query it.
include::run-elasticsearch-locally.asciidoc[]
include::getting-started.asciidoc[]

@@ -0,0 +1,177 @@
[[run-elasticsearch-locally]]
== Run {es} locally in Docker (without security)
++++
<titleabbrev>Local dev setup (Docker)</titleabbrev>
++++
[WARNING]
====
*DO NOT USE THESE INSTRUCTIONS FOR PRODUCTION DEPLOYMENTS*
The instructions on this page are for *local development only*. Because this setup disables security, it is convenient for experimenting and learning, but you should never run a service this way in a production environment.
Refer to https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html[Install {es}] to learn about the various options for installing {es} in a production environment, including using Docker.
====
The following commands help you quickly spin up a single-node {es} cluster, together with {kib}, in Docker.
If you don't need the {kib} UI, you can skip the {kib}-specific steps.
[discrete]
[[local-dev-why]]
=== When would I use this setup?
Use this setup if you want to quickly spin up {es} (and {kib}) for local development or testing.
For example, you might:
* Want to run a quick test to see how a feature works.
* Follow a tutorial or guide that requires an {es} cluster, like our <<getting-started,quick start guide>>.
* Experiment with the {es} APIs using different tools, like the Dev Tools Console, cURL, or an Elastic programming language client.
* Quickly spin up an {es} cluster to test an executable https://github.com/elastic/elasticsearch-labs/tree/main/notebooks#readme[Python notebook] locally.
[discrete]
[[local-dev-prerequisites]]
=== Prerequisites
If you don't have Docker installed, https://www.docker.com/products/docker-desktop[download and install Docker Desktop] for your operating system.
[discrete]
[[local-dev-env-vars]]
=== Set environment variables
Configure the following environment variables.
[source,sh]
----
export ELASTIC_PASSWORD="<ES_PASSWORD>" # password for "elastic" username
export KIBANA_PASSWORD="<KIB_PASSWORD>" # Used _internally_ by Kibana, must be at least 6 characters long
----
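Before starting any containers, it can be worth sanity-checking the values you just set. {kib} rejects system passwords shorter than six characters, so a quick shell check like the following catches that early (a sketch; the `check_password_length` helper is made up for illustration):
[source,sh]
----
# Hypothetical sanity check: Kibana requires the kibana_system
# password to be at least 6 characters long.
check_password_length() {
  if [ "${#1}" -lt 6 ]; then
    echo "too-short"
  else
    echo "ok"
  fi
}

check_password_length "abc"        # prints "too-short"
check_password_length "changeme"   # prints "ok"
----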
[discrete]
[[local-dev-create-docker-network]]
=== Create a Docker network
To run both {es} and {kib}, you'll need to create a Docker network:
[source,sh]
----
docker network create elastic-net
----
[discrete]
[[local-dev-run-es]]
=== Run {es}
Start the {es} container with the following command:
ifeval::["{release-state}"=="unreleased"]
WARNING: Version {version} has not yet been released.
No Docker image is currently available for {es} {version}.
endif::[]
[source,sh,subs="attributes"]
----
docker run -p 127.0.0.1:9200:9200 -d --name elasticsearch --network elastic-net \
-e ELASTIC_PASSWORD=$ELASTIC_PASSWORD \
-e "discovery.type=single-node" \
-e "xpack.security.http.ssl.enabled=false" \
-e "xpack.license.self_generated.type=trial" \
{docker-image}
----
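{es} can take several seconds to start accepting connections, so a request made immediately after `docker run` may fail. A small retry loop like the one below polls the endpoint until it responds (a sketch; the `wait_for` helper is not part of the official instructions):
[source,sh]
----
# Hypothetical helper: retry a command once per second until it
# succeeds, giving up after a fixed number of attempts.
wait_for() {
  attempts=$1
  shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      echo "gave up after $attempts attempts" >&2
      return 1
    fi
    sleep 1
  done
}

# Once the container above is running, you could poll it like this:
# wait_for 30 curl -fsS -u "elastic:$ELASTIC_PASSWORD" -o /dev/null http://localhost:9200/
----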
[discrete]
[[local-dev-run-kib]]
=== Run {kib} (optional)
To run {kib}, you must first set the `kibana_system` password in the {es} container.
[source,sh,subs="attributes"]
----
# configure the Kibana password in the ES container
curl -u elastic:$ELASTIC_PASSWORD \
-X POST \
http://localhost:9200/_security/user/kibana_system/_password \
-d '{"password":"'"$KIBANA_PASSWORD"'"}' \
-H 'Content-Type: application/json'
----
// NOTCONSOLE
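To confirm the password change was accepted, you can authenticate as `kibana_system` against the {es} authenticate API (a sketch; run the call only once the {es} container is up):
[source,sh]
----
# Hypothetical verification step: ask ES to authenticate the
# kibana_system user with the password set above.
check_kibana_system() {
  curl -fsS -u "kibana_system:$KIBANA_PASSWORD" \
    http://localhost:9200/_security/_authenticate
}
# check_kibana_system   # should print JSON including "username":"kibana_system"
----
// NOTCONSOLE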
Start the {kib} container with the following command:
ifeval::["{release-state}"=="unreleased"]
WARNING: Version {version} has not yet been released.
No Docker image is currently available for {kib} {version}.
endif::[]
[source,sh,subs="attributes"]
----
docker run -p 127.0.0.1:5601:5601 -d --name kibana --network elastic-net \
-e ELASTICSEARCH_URL=http://elasticsearch:9200 \
-e ELASTICSEARCH_HOSTS=http://elasticsearch:9200 \
-e ELASTICSEARCH_USERNAME=kibana_system \
-e ELASTICSEARCH_PASSWORD=$KIBANA_PASSWORD \
-e "xpack.security.enabled=false" \
-e "xpack.license.self_generated.type=trial" \
{kib-docker-image}
----
[NOTE]
====
The service is started with a trial license. The trial license enables all features of {es} for a trial period of 30 days. After the trial period expires, the license is downgraded to a basic license, which is free forever. If you prefer to skip the trial and use the basic license, set the value of the `xpack.license.self_generated.type` variable to `basic` instead. For a detailed feature comparison between the different licenses, refer to our https://www.elastic.co/subscriptions[subscriptions page].
====
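For example, to start with the basic license instead of the trial, the only change to the `docker run` commands above is this one flag (shown here in isolation):
[source,sh]
----
-e "xpack.license.self_generated.type=basic" \
----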
[discrete]
[[local-dev-connecting-clients]]
=== Connecting to {es} with language clients
To connect to the {es} cluster from a language client, you can use basic authentication with the `elastic` username and the password you set in the environment variable.
You'll use the following connection details:
* **{es} endpoint**: `http://localhost:9200`
* **Username**: `elastic`
* **Password**: `$ELASTIC_PASSWORD` (Value you set in the environment variable)
For example, to connect with the Python `elasticsearch` client:
[source,python]
----
import os
from elasticsearch import Elasticsearch
username = 'elastic'
password = os.getenv('ELASTIC_PASSWORD') # Value you set in the environment variable
client = Elasticsearch(
"http://localhost:9200",
basic_auth=(username, password)
)
print(client.info())
----
Here's an example curl command using basic authentication:
[source,sh,subs="attributes"]
----
curl -u elastic:$ELASTIC_PASSWORD \
-X PUT \
http://localhost:9200/my-new-index \
-H 'Content-Type: application/json'
----
// NOTCONSOLE
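The `-u` flag is shorthand for sending an `Authorization: Basic` header containing the base64-encoded `user:password` pair. If you're using a tool without built-in basic auth support, you can construct the header yourself (a sketch using the common `base64` utility):
[source,sh]
----
# Build the Basic auth header that `curl -u elastic:$ELASTIC_PASSWORD` sends.
# printf (not echo) avoids including a trailing newline in the encoded value.
token=$(printf '%s' "elastic:$ELASTIC_PASSWORD" | base64)
echo "Authorization: Basic $token"

# Equivalent to the index-creation request above:
# curl -H "Authorization: Basic $token" -X PUT http://localhost:9200/my-new-index
----
// NOTCONSOLE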
[discrete]
[[local-dev-next-steps]]
=== Next steps
Use our <<getting-started,quick start guide>> to learn the basics of {es}: how to add data and query it.
[discrete]
[[local-dev-production]]
=== Moving to production
This setup is not suitable for production use. For production deployments, we recommend using our managed service on Elastic Cloud. https://cloud.elastic.co/registration[Sign up for a free trial] (no credit card required).
Otherwise, refer to https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html[Install {es}] to learn about the various options for installing {es} in a self-managed production environment, including using Docker.

@@ -29,8 +29,6 @@ resource-heavy {ls} deployment should be on its own host.
include::setup/install.asciidoc[]
include::setup/run-elasticsearch-locally.asciidoc[]
include::setup/configuration.asciidoc[]
include::setup/important-settings.asciidoc[]

@@ -20,7 +20,7 @@ If you want to install and manage {es} yourself, you can:
* Run {es} in a <<elasticsearch-docker-images,Docker container>>.
* Set up and manage {es}, {kib}, {agent}, and the rest of the Elastic Stack on Kubernetes with {eck-ref}[{eck}].
-TIP: To try out Elasticsearch on your own machine, we recommend using Docker and running both Elasticsearch and Kibana. For more information, see <<run-elasticsearch-locally,Run Elasticsearch locally>>.
+TIP: To try out Elasticsearch on your own machine, we recommend using Docker and running both Elasticsearch and Kibana. For more information, see <<run-elasticsearch-locally,Run Elasticsearch locally>>. Please note that this setup is *not suitable for production use*.
[discrete]
[[elasticsearch-install-packages]]

@@ -8,6 +8,12 @@ https://github.com/elastic/elasticsearch/blob/{branch}/distribution/docker[GitHu
include::license.asciidoc[]
[TIP]
====
If you just want to test {es} in local development, refer to <<run-elasticsearch-locally>>.
Please note that this setup is not suitable for production environments.
====
[[docker-cli-run-dev-mode]]
==== Run {es} in Docker

@@ -1,183 +0,0 @@
[[run-elasticsearch-locally]]
== Run Elasticsearch locally
////
IMPORTANT: This content is replicated in the Elasticsearch repo
README.ascidoc file. If you make changes, you must also update the
Elasticsearch README.
+
GitHub renders the tagged region directives when you view the README,
so it's not possible to just include the content from the README. Darn.
+
Also note that there are similar instructions in the Kibana guide:
https://www.elastic.co/guide/en/kibana/current/docker.html
////
To try out Elasticsearch on your own machine, we recommend using Docker
and running both Elasticsearch and Kibana.
Docker images are available from the https://www.docker.elastic.co[Elastic Docker registry].
NOTE: Starting in Elasticsearch 8.0, security is enabled by default.
The first time you start Elasticsearch, TLS encryption is configured automatically,
a password is generated for the `elastic` user,
and a Kibana enrollment token is created so you can connect Kibana to your secured cluster.
For other installation options, see the
https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html[Elasticsearch installation documentation].
[discrete]
=== Start Elasticsearch
. Install and start https://www.docker.com/products/docker-desktop[Docker
Desktop]. Go to **Preferences > Resources > Advanced** and set Memory to at least 4GB.
. Start an Elasticsearch container:
ifeval::["{release-state}"=="unreleased"]
+
WARNING: Version {version} of {es} has not yet been released, so no
Docker image is currently available for this version.
endif::[]
+
[source,sh,subs="attributes"]
----
docker network create elastic
docker pull docker.elastic.co/elasticsearch/elasticsearch:{version}
docker run --name elasticsearch --net elastic -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -t docker.elastic.co/elasticsearch/elasticsearch:{version}
----
+
When you start Elasticsearch for the first time, the generated `elastic` user password and
Kibana enrollment token are output to the terminal.
+
NOTE: You might need to scroll back a bit in the terminal to view the password
and enrollment token.
. Copy the generated password and enrollment token and save them in a secure
location. These values are shown only when you start Elasticsearch for the first time.
You'll use these to enroll Kibana with your Elasticsearch cluster and log in.
[discrete]
=== Start Kibana
Kibana enables you to easily send requests to Elasticsearch and analyze, visualize, and manage data interactively.
. In a new terminal session, start Kibana and connect it to your Elasticsearch container:
ifeval::["{release-state}"=="unreleased"]
+
WARNING: Version {version} of {kib} has not yet been released, so no
Docker image is currently available for this version.
endif::[]
+
[source,sh,subs="attributes"]
----
docker pull docker.elastic.co/kibana/kibana:{version}
docker run --name kibana --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:{version}
----
+
When you start Kibana, a unique URL is output to your terminal.
. To access Kibana, open the generated URL in your browser.
.. Paste the enrollment token that you copied when starting
Elasticsearch and click the button to connect your Kibana instance with Elasticsearch.
.. Log in to Kibana as the `elastic` user with the password that was generated
when you started Elasticsearch.
[discrete]
=== Send requests to Elasticsearch
You send data and other requests to Elasticsearch through REST APIs.
You can interact with Elasticsearch using any client that sends HTTP requests,
such as the https://www.elastic.co/guide/en/elasticsearch/client/index.html[Elasticsearch
language clients] and https://curl.se[curl].
Kibana's developer console provides an easy way to experiment and test requests.
To access the console, go to **Management > Dev Tools**.
[discrete]
=== Add data
You index data into Elasticsearch by sending JSON objects (documents) through the REST APIs.
Whether you have structured or unstructured text, numerical data, or geospatial data,
Elasticsearch efficiently stores and indexes it in a way that supports fast searches.
For timestamped data such as logs and metrics, you typically add documents to a
data stream made up of multiple auto-generated backing indices.
To add a single document to an index, submit an HTTP post request that targets the index.
[source,console]
----
POST /customer/_doc/1
{
"firstname": "Jennifer",
"lastname": "Walters"
}
----
This request automatically creates the `customer` index if it doesn't exist,
adds a new document that has an ID of 1, and
stores and indexes the `firstname` and `lastname` fields.
The new document is available immediately from any node in the cluster.
You can retrieve it with a GET request that specifies its document ID:
[source,console]
----
GET /customer/_doc/1
----
// TEST[continued]
To add multiple documents in one request, use the `_bulk` API.
Bulk data must be newline-delimited JSON (NDJSON).
Each line must end in a newline character (`\n`), including the last line.
[source,console]
----
PUT customer/_bulk
{ "create": { } }
{ "firstname": "Monica","lastname":"Rambeau"}
{ "create": { } }
{ "firstname": "Carol","lastname":"Danvers"}
{ "create": { } }
{ "firstname": "Wanda","lastname":"Maximoff"}
{ "create": { } }
{ "firstname": "Jennifer","lastname":"Takeda"}
----
// TEST[continued]
[discrete]
=== Search
Indexed documents are available for search in near real-time.
The following search matches all customers with a first name of _Jennifer_
in the `customer` index.
[source,console]
----
GET customer/_search
{
"query" : {
"match" : { "firstname": "Jennifer" }
}
}
----
// TEST[continued]
[discrete]
=== Explore
You can use Discover in Kibana to interactively search and filter your data.
From there, you can start creating visualizations and building and sharing dashboards.
To get started, create a _data view_ that connects to one or more Elasticsearch indices,
data streams, or index aliases.
. Go to **Management > Stack Management > Kibana > Data Views**.
. Select **Create data view**.
. Enter a name for the data view and a pattern that matches one or more indices,
such as _customer_.
. Select **Save data view to Kibana**.
To start exploring, go to **Analytics > Discover**.

@@ -12,7 +12,7 @@
aria-controls="self-managed-tab-api-call"
id="self-managed-api-call"
tabindex="-1">
-Self-managed
+Local Dev (Docker)
</button>
</div>
<div tabindex="0"

@@ -50,7 +50,7 @@ terminal session.
[source,sh]
----
-curl --cacert http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200
+curl -u elastic:$ELASTIC_PASSWORD https://localhost:9200
----
// NOTCONSOLE

@@ -5,14 +5,14 @@
aria-selected="true"
aria-controls="cloud-tab-install"
id="cloud-install">
-Elasticsearch Service
+Elastic Cloud
</button>
<button role="tab"
aria-selected="false"
aria-controls="self-managed-tab-install"
id="self-managed-install"
tabindex="-1">
-Self-managed
+Local Dev (Docker)
</button>
</div>
<div tabindex="0"

@@ -8,64 +8,5 @@ include::{docs-root}/shared/cloud/ess-getting-started.asciidoc[tag=generic]
// end::cloud[]
// tag::self-managed[]
-*Start a single-node cluster*
+Refer to our <<run-elasticsearch-locally, quickstart local dev instructions>> to quickly spin up a local development environment in Docker. If you don't need {kib}, you'll only need one `docker run` command to start {es}. Please note that this setup is *not suitable for production use*.
We'll use a single-node {es} cluster in this quick start, which makes sense for testing and development.
Refer to <<docker>> for advanced Docker documentation.
. Run the following Docker commands:
+
[source,sh,subs="attributes"]
----
docker network create elastic
docker pull {docker-image}
docker run --name es01 --net elastic -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -t {docker-image}
----
. Copy the generated `elastic` password and enrollment token, which are output to your terminal.
You'll use these to enroll {kib} with your {es} cluster and log in.
These credentials are only shown when you start {es} for the first time.
+
We recommend storing the `elastic` password as an environment variable in your shell. Example:
+
[source,sh]
----
export ELASTIC_PASSWORD="your_password"
----
+
. Copy the `http_ca.crt` SSL certificate from the container to your local machine.
+
[source,sh]
----
docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .
----
+
. Make a REST API call to {es} to ensure the {es} container is running.
+
[source,sh]
----
curl --cacert http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200
----
// NOTCONSOLE
*Run {kib}*
{kib} is the user interface for Elastic.
It's great for getting started with {es} and exploring your data.
We'll be using the Dev Tools *Console* in {kib} to make REST API calls to {es}.
In a new terminal session, start {kib} and connect it to your {es} container:
[source,sh,subs="attributes"]
----
docker pull {kib-docker-image}
docker run --name kibana --net elastic -p 5601:5601 {kib-docker-image}
----
When you start {kib}, a unique URL is output to your terminal.
To access {kib}:
. Open the generated URL in your browser.
. Paste the enrollment token that you copied earlier, to connect your {kib} instance with {es}.
. Log in to {kib} as the `elastic` user with the password that was generated when you started {es}.
// end::self-managed[]