[DOCS] Refactor quick start guide and README (#71331)

Changes:

* Refactors the "Getting Started" content down to one page.
* Refactors the README to reduce duplicated content and better mirror
Kibana's.
* Focuses the quick start on time series data, including data streams
and runtime fields.
* Streamlines self-managed install instructions to Docker.

Co-authored-by: debadair <debadair@elastic.co>
Author: James Rodewig, 2021-04-20 09:32:21 -04:00 (committed by GitHub)
Commit: b2130249b0 (parent: 67c748ebd2)
12 changed files with 783 additions and 2892 deletions


@ -1,198 +1,44 @@
Added:

= Elasticsearch

Elasticsearch is the distributed, RESTful search and analytics engine at the
heart of the https://www.elastic.co/products[Elastic Stack]. You can use
Elasticsearch to store, search, and manage data for:

* Logs
* Metrics
* A search backend
* Application monitoring
* Endpoint security

\... and more!

To learn more about Elasticsearch's features and capabilities, see our
https://www.elastic.co/products/elasticsearch[product page].

[[get-started]]
== Get started

The simplest way to set up Elasticsearch is to create a managed deployment with
https://www.elastic.co/cloud/as-a-service[Elasticsearch Service on Elastic Cloud].

If you prefer to install and manage Elasticsearch yourself, you can download
the latest version from
https://www.elastic.co/downloads/elasticsearch[elastic.co/downloads/elasticsearch].

For more installation options, see the
https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html[Elasticsearch installation documentation].

[[upgrade]]
== Upgrade

To upgrade from an earlier version of Elasticsearch, see the
https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html[Elasticsearch upgrade documentation].

[[build-source]]
== Build from source

Removed:

= Elasticsearch
== A Distributed RESTful Search Engine
=== https://www.elastic.co/products/elasticsearch[https://www.elastic.co/products/elasticsearch]

Elasticsearch is a distributed RESTful search engine built for the cloud. Features include:

* Distributed and Highly Available Search Engine.
** Each index is fully sharded with a configurable number of shards.
** Each shard can have one or more replicas.
** Read / Search operations performed on any of the replica shards.
* Multi-tenant.
** Support for more than one index.
** Index level configuration (number of shards, index storage, etc.).
* Various set of APIs
** HTTP RESTful API
** All APIs perform automatic node operation rerouting.
* Document oriented
** No need for upfront schema definition.
** Schema can be defined for customization of the indexing process.
* Reliable, Asynchronous Write Behind for long term persistency.
* Near real-time search.
* Built on top of Apache Lucene
** Each shard is a fully functional Lucene index
** All the power of Lucene easily exposed through simple configuration and plugins.
* Per operation consistency
** Single document-level operations are atomic, consistent, isolated, and durable.

== Getting Started

First of all, DON'T PANIC. It will take 5 minutes to get the gist of what Elasticsearch is all about.

=== Installation

* https://www.elastic.co/downloads/elasticsearch[Download] and unpack the Elasticsearch official distribution.
* Run `bin/elasticsearch` on Linux or macOS. Run `bin\elasticsearch.bat` on Windows.
* Run `curl -X GET http://localhost:9200/` to verify Elasticsearch is running.

For more options, see
https://www.elastic.co/guide/en/elasticsearch/reference/current/starting-elasticsearch.html[Starting Elasticsearch].

=== Indexing

First, index some sample JSON documents. The first request automatically creates
the `my-index-000001` index.
----
curl -X POST 'http://localhost:9200/my-index-000001/_doc?pretty' -H 'Content-Type: application/json' -d '
{
"@timestamp": "2099-11-15T13:12:00",
"message": "GET /search HTTP/1.1 200 1070000",
"user": {
"id": "kimchy"
}
}'
curl -X POST 'http://localhost:9200/my-index-000001/_doc?pretty' -H 'Content-Type: application/json' -d '
{
"@timestamp": "2099-11-15T14:12:12",
"message": "GET /search HTTP/1.1 200 1070000",
"user": {
"id": "elkbee"
}
}'
curl -X POST 'http://localhost:9200/my-index-000001/_doc?pretty' -H 'Content-Type: application/json' -d '
{
"@timestamp": "2099-11-15T01:46:38",
"message": "GET /search HTTP/1.1 200 1070000",
"user": {
"id": "elkbee"
}
}'
----
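The three requests above index one document per call. If you have more than a handful of documents, Elasticsearch's `_bulk` API is usually more efficient. The following sketch is not part of the original examples and is shown only for orientation; it indexes two more documents into the same index using newline-delimited JSON:
----
# Each document needs an action line ({"index": {}}) followed by its source,
# and the body must end with a newline.
curl -X POST 'http://localhost:9200/my-index-000001/_bulk?pretty' -H 'Content-Type: application/x-ndjson' --data-binary '{ "index": {} }
{ "@timestamp": "2099-11-15T02:10:00", "message": "GET /search HTTP/1.1 200 1070000", "user": { "id": "kimchy" } }
{ "index": {} }
{ "@timestamp": "2099-11-15T03:21:00", "message": "GET /search HTTP/1.1 200 1070000", "user": { "id": "elkbee" } }
'
----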
=== Search
Next, use a search request to find any documents with a `user.id` of `kimchy`.
----
curl -X GET 'http://localhost:9200/my-index-000001/_search?q=user.id:kimchy&pretty=true'
----
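A successful search returns JSON describing the matching hits. The following abridged response is purely illustrative; the auto-generated `_id`, the score, and the timing will differ on your cluster:
----
{
  "took" : 5,
  "timed_out" : false,
  "hits" : {
    "total" : { "value" : 1, "relation" : "eq" },
    "hits" : [
      {
        "_index" : "my-index-000001",
        "_id" : "<auto-generated>",
        "_score" : 1.3862942,
        "_source" : {
          "@timestamp" : "2099-11-15T13:12:00",
          "message" : "GET /search HTTP/1.1 200 1070000",
          "user" : { "id" : "kimchy" }
        }
      }
    ]
  }
}
----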
Instead of a query string, you can use Elasticsearch's
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html[Query
DSL] in the request body.
----
curl -X GET 'http://localhost:9200/my-index-000001/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
"query" : {
"match" : { "user.id": "kimchy" }
}
}'
----
You can also retrieve all documents in `my-index-000001`.
----
curl -X GET 'http://localhost:9200/my-index-000001/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
"query" : {
"match_all" : {}
}
}'
----
During indexing, Elasticsearch automatically mapped the `@timestamp` field as a
date. This lets you run a range search.
----
curl -X GET 'http://localhost:9200/my-index-000001/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
"query" : {
"range" : {
"@timestamp": {
"from": "2099-11-15T13:00:00",
"to": "2099-11-15T14:00:00"
}
}
}
}'
----
=== Multiple indices
Elasticsearch supports multiple indices. The previous examples used an index
called `my-index-000001`. You can create another index, `my-index-000002`, to
store additional data when `my-index-000001` reaches a certain age or size. You
can also use separate indices to store different types of data.
You can configure each index differently. The following request
creates `my-index-000002` with two primary shards rather than the default of
one. This may be helpful for larger indices.
----
curl -X PUT 'http://localhost:9200/my-index-000002?pretty' -H 'Content-Type: application/json' -d '
{
"settings" : {
"index.number_of_shards" : 2
}
}'
----
You can then add a document to `my-index-000002`.
----
curl -X POST 'http://localhost:9200/my-index-000002/_doc?pretty' -H 'Content-Type: application/json' -d '
{
"@timestamp": "2099-11-16T13:12:00",
"message": "GET /search HTTP/1.1 200 1070000",
"user": {
"id": "kimchy"
}
}'
----
You can search and perform other operations on multiple indices with a single
request. The following request searches `my-index-000001` and `my-index-000002`.
----
curl -X GET 'http://localhost:9200/my-index-000001,my-index-000002/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
"query" : {
"match_all" : {}
}
}'
----
You can omit the index from the request path to search all indices.
----
curl -X GET 'http://localhost:9200/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
"query" : {
"match_all" : {}
}
}'
----
=== Distributed, highly available
Let's face it; things will fail...
Elasticsearch is a highly available and distributed search engine. Each index is broken down into shards, and each shard can have one or more replicas. By default, an index is created with 1 shard and 1 replica per shard (1/1). Many topologies can be used, including 1/10 (improve search performance) or 20/1 (improve indexing performance, with search executed in a MapReduce fashion across shards).
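For example, a read-heavy index along the lines of the 1/10 topology above could be created by setting the shard and replica counts explicitly. The index name below is hypothetical; `index.number_of_shards` and `index.number_of_replicas` are the relevant settings, and each replica needs its own node before the cluster reports green health.
----
curl -X PUT 'http://localhost:9200/my-search-heavy-index?pretty' -H 'Content-Type: application/json' -d '
{
  "settings" : {
    "index.number_of_shards" : 1,
    "index.number_of_replicas" : 10
  }
}'
----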
To play with the distributed nature of Elasticsearch, bring more nodes up and shut down nodes. The system will continue to serve requests (ensure you use the correct HTTP port) with the latest data indexed.
=== Where to go from here?
We have just covered a tiny portion of what Elasticsearch is all about. For more information, please refer to the https://www.elastic.co/products/elasticsearch[elastic.co] website. General questions can be asked on the https://discuss.elastic.co[Elastic Forum] or https://ela.st/slack[on Slack]. The Elasticsearch GitHub repository is reserved for bug reports and feature requests only.
=== Building from source
Elasticsearch uses https://gradle.org[Gradle] for its build system.

@ -214,10 +60,31 @@ To build distributions for all supported platforms, run:

./gradlew assemble
----

Removed:

Finished distributions are output to `distributions/archives`.

See the xref:TESTING.asciidoc[TESTING] for more information about running the Elasticsearch test suite.

=== Upgrading from older Elasticsearch versions

To ensure a smooth upgrade process from earlier versions of Elasticsearch, please see our
https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html[upgrade documentation]
for more details on the upgrade process.

Added:

Distributions are output to `distributions/archives`.

To run the test suite, see xref:TESTING.asciidoc[TESTING].

[[docs]]
== Documentation

For the complete Elasticsearch documentation visit
https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html[elastic.co].

For information about our documentation processes, see the
xref:docs/README.asciidoc[docs README].

[[contribute]]
== Contribute

For contribution guidelines, see xref:CONTRIBUTING.md[CONTRIBUTING].

[[questions]]
== Questions? Problems? Suggestions?

* To report a bug or request a feature, create a
https://github.com/elastic/elasticsearch/issues/new/choose[GitHub Issue]. Please
ensure someone else hasn't created an issue for the same topic.

* Need help using Elasticsearch? Reach out on the
https://discuss.elastic.co[Elastic Forum] or https://ela.st/slack[Slack]. A
fellow community member or Elastic engineer will be happy to help you out.


@ -377,36 +377,6 @@ buildRestTests.setups['user_hits'] = '''
{"index":{}} {"index":{}}
{"timestamp": "2019-01-03T13:00:00", "user_id": "4"}''' {"timestamp": "2019-01-03T13:00:00", "user_id": "4"}'''
// Fake bank account data used by getting-started.asciidoc
buildRestTests.setups['bank'] = '''
- do:
indices.create:
index: bank
body:
settings:
number_of_shards: 5
number_of_routing_shards: 5
- do:
bulk:
index: bank
refresh: true
body: |
#bank_data#
'''
/* Load the actual accounts only if we're going to use them. This complicates
* dependency checking but that is a small price to pay for not building a
* 400kb string every time we start the build. */
File accountsFile = new File("$projectDir/src/test/resources/accounts.json")
buildRestTests.inputs.file(accountsFile)
buildRestTests.doFirst {
String accounts = accountsFile.getText('UTF-8')
// Indent like a yaml test needs
accounts = accounts.replaceAll('(?m)^', ' ')
buildRestTests.setups['bank'] =
buildRestTests.setups['bank'].replace('#bank_data#', accounts)
}
// Used by sampler and diversified-sampler aggregation docs
buildRestTests.setups['stackoverflow'] = '''
  - do:

File diff suppressed because it is too large.

Binary file not shown (new image, 32 KiB).


@ -1513,3 +1513,28 @@ See <<put-enrich-policy-api>>.
=== Rollup API
See <<rollup-apis>>.
[role="exclude",id="getting-started-install"]
=== Get {es} up and running
See <<run-elasticsearch>>.
[role="exclude",id="getting-started-index"]
=== Index some documents
See <<add-data>>.
[role="exclude",id="getting-started-search"]
=== Start searching
See <<qs-search-data>>.
[role="exclude",id="getting-started-aggregations"]
=== Analyze results with aggregations
See <<getting-started>>.
[role="exclude",id="getting-started-next-steps"]
=== Where to go from here
See <<getting-started>>.


@ -0,0 +1,40 @@
++++
<div class="tabs" data-tab-group="host">
<div role="tablist" aria-label="Make an API call">
<button role="tab"
aria-selected="true"
aria-controls="cloud-tab-api-call"
id="cloud-api-call">
Elasticsearch Service
</button>
<button role="tab"
aria-selected="false"
aria-controls="self-managed-tab-api-call"
id="self-managed-api-call"
tabindex="-1">
Self-managed
</button>
</div>
<div tabindex="0"
role="tabpanel"
id="cloud-tab-api-call"
aria-labelledby="cloud-api-call">
++++
include::api-call.asciidoc[tag=cloud]
++++
</div>
<div tabindex="0"
role="tabpanel"
id="self-managed-tab-api-call"
aria-labelledby="self-managed-api-call"
hidden="">
++++
include::api-call.asciidoc[tag=self-managed]
++++
</div>
</div>
++++


@ -0,0 +1,53 @@
// tag::cloud[]
**Use curl**
. To communicate with {es} using curl or another client, you need your
cluster's endpoint. Go to the **Elasticsearch** page and click **Copy
endpoint**.
. To submit an example API request, run the following curl command in a new
terminal session. Replace `<password>` with the password for the `elastic` user.
Replace `<elasticsearch_endpoint>` with your endpoint.
+
[source,sh]
----
curl -u elastic:<password> <elasticsearch_endpoint>/
----
// NOTCONSOLE
**Use {kib}**
. Go to the *{kib}* page and click **Launch**.
//tag::kibana-api-ex[]
. Open {kib}'s main menu and go to **Dev Tools > Console**.
+
[role="screenshot"]
image::images/kibana-console.png[{kib} Console,align="center"]
. Run the following example API request in the console:
+
[source,console]
----
GET /
----
//end::kibana-api-ex[]
// end::cloud[]
// tag::self-managed[]
**Use curl**
To submit an example API request, run the following curl command in a new
terminal session.
[source,sh]
----
curl -X GET http://localhost:9200/
----
// NOTCONSOLE
**Use {kib}**
include::api-call.asciidoc[tag=kibana-api-ex]
// end::self-managed[]
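Whichever option you use, a successful `GET /` request returns a small JSON document describing the node and cluster. An abridged, illustrative example follows; the name, cluster name, UUID, and version will differ in your deployment:
[source,js]
----
{
  "name" : "instance-0000000000",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "<cluster_uuid>",
  "version" : {
    "number" : "<version>"
  },
  "tagline" : "You Know, for Search"
}
----
// NOTCONSOLE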


@ -0,0 +1,40 @@
++++
<div class="tabs" data-tab-group="host">
<div role="tablist" aria-label="Clean up your deployment">
<button role="tab"
aria-selected="true"
aria-controls="cloud-tab-cleanup"
id="cloud-cleanup">
Elasticsearch Service
</button>
<button role="tab"
aria-selected="false"
aria-controls="self-managed-tab-cleanup"
id="self-managed-cleanup"
tabindex="-1">
Self-managed
</button>
</div>
<div tabindex="0"
role="tabpanel"
id="cloud-tab-cleanup"
aria-labelledby="cloud-cleanup">
++++
include::quick-start-cleanup.asciidoc[tag=cloud]
++++
</div>
<div tabindex="0"
role="tabpanel"
id="self-managed-tab-cleanup"
aria-labelledby="self-managed-cleanup"
hidden="">
++++
include::quick-start-cleanup.asciidoc[tag=self-managed]
++++
</div>
</div>
++++


@ -0,0 +1,23 @@
// tag::cloud[]
Click **Delete deployment** from the deployment overview page and follow the
prompts.
// end::cloud[]
// tag::self-managed[]
To stop your {es} and {kib} Docker containers, run:
[source,sh]
----
docker stop es01-test
docker stop kib01-test
----
To remove the containers, run:
[source,sh]
----
docker rm es01-test
docker rm kib01-test
----
// end::self-managed[]
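If you also want to reclaim the disk space used by the downloaded images, something like the following works. This step is not part of the snippet above; it reuses the same image references as the install step:
[source,sh,subs="attributes"]
----
docker rmi {docker-repo}:{version}
docker rmi docker.elastic.co/kibana/kibana:{version}
----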


@ -0,0 +1,40 @@
++++
<div class="tabs" data-tab-group="host">
<div role="tablist" aria-label="Run Elasticsearch">
<button role="tab"
aria-selected="true"
aria-controls="cloud-tab-install"
id="cloud-install">
Elasticsearch Service
</button>
<button role="tab"
aria-selected="false"
aria-controls="self-managed-tab-install"
id="self-managed-install"
tabindex="-1">
Self-managed
</button>
</div>
<div tabindex="0"
role="tabpanel"
id="cloud-tab-install"
aria-labelledby="cloud-install">
++++
include::quick-start-install.asciidoc[tag=cloud]
++++
</div>
<div tabindex="0"
role="tabpanel"
id="self-managed-tab-install"
aria-labelledby="self-managed-install"
hidden="">
++++
include::quick-start-install.asciidoc[tag=self-managed]
++++
</div>
</div>
++++


@ -0,0 +1,48 @@
// tag::cloud[]
include::{docs-root}/shared/cloud/ess-getting-started.asciidoc[tag=generic]
// end::cloud[]
// tag::self-managed[]
**Install and run {es} using Docker**
ifeval::["{release-state}"=="unreleased"]
NOTE: No Docker image is currently available for {es} {version}.
endif::[]
ifeval::["{release-state}"!="unreleased"]
. Install and start https://www.docker.com/products/docker-desktop[Docker
Desktop].
. Run:
+
[source,sh,subs="attributes"]
----
docker pull {docker-repo}:{version}
docker run --name es01-test -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" {docker-image}
----
endif::[]
**Install and run {kib} using Docker**
To analyze, visualize, and manage {es} data using an intuitive UI, install
{kib}.
ifeval::["{release-state}"=="unreleased"]
NOTE: No Docker image is currently available for {kib} {version}.
endif::[]
ifeval::["{release-state}"!="unreleased"]
. In a new terminal session, run:
+
["source","txt",subs="attributes"]
----
docker pull docker.elastic.co/kibana/kibana:{version}
docker run --name kib01-test --link es01-test:elasticsearch -p 5601:5601 docker.elastic.co/kibana/kibana:{version}
----
. To access {kib}, go to http://localhost:5601[http://localhost:5601]
endif::[]
// end::self-managed[]
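Optionally, before moving on, you can confirm that the self-managed {es} container is responding. This check is not part of the install steps above; it simply calls the standard `_cat/health` API:
[source,sh]
----
curl -X GET "http://localhost:9200/_cat/health?v"
----
// NOTCONSOLE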

File diff suppressed because it is too large.