[DOCS] Adds quick start (#78822)

* [DOCS] Getting started refresh

* Dashboard changes

* Discover and Dashboard changes

* [DOCS] Adds quick start

* Redirects

* Redirects pt 2

* Redirects pt

* More redirect issues

* Removed second chunk of KQL tasks

* Review comments
Kaarina Tungseth 2020-10-08 15:16:26 -05:00 committed by GitHub
parent 84210d6b97
commit a8b5e9f245
43 changed files with 157 additions and 732 deletions



@ -0,0 +1,142 @@
[[get-started]]
== Quick start
To quickly get up and running with {kib}, set up on Cloud, then add a sample data set that you can explore and analyze.
When you've finished, you'll know how to:
* <<explore-the-data,Explore the data with *Discover*.>>
* <<view-and-analyze-the-data,Gain insight into the data with *Dashboard*.>>
[float]
=== Before you begin
When security is enabled, you must have `read`, `write`, and `manage` privileges on the `kibana_sample_data_*` indices. For more information, refer to {ref}/security-privileges.html[Security privileges].
[float]
[[set-up-on-cloud]]
== Set up on cloud
include::{docs-root}/shared/cloud/ess-getting-started.asciidoc[]
[float]
[[gs-get-data-into-kibana]]
== Add the sample data
Sample data sets come with sample visualizations, dashboards, and more to help you explore {kib} without adding your own data.
. From the home page, click *Try our sample data*.
. On the *Sample eCommerce orders* card, click *Add data*.
+
[role="screenshot"]
image::getting-started/images/add-sample-data.png[]
[float]
[[explore-the-data]]
== Explore the data
*Discover* displays an interactive histogram that shows the distribution of data, or documents, over time, and a table that lists the fields for each document that matches the index. By default, all fields are shown for each matching document.
. Open the menu, then click *Discover*.
. Change the <<set-time-filter, time filter>> to *Last 7 days*.
+
[role="screenshot"]
image::images/tutorial-discover-2.png[]
. To focus in on the documents you want to view, use the <<kuery-query,{kib} Query Language>>. In the *KQL* search field, enter:
+
[source,text]
products.taxless_price >= 60 AND category : Women's Clothing
+
The query returns the women's clothing orders priced at $60 or more.
+
[role="screenshot"]
image::images/tutorial-discover-4.png[]
. Hover over the list of *Available fields*, then click *+* next to the fields you want to view in the table.
+
For example, when you add the *category* field, the table displays the product categories for the orders.
+
[role="screenshot"]
image::images/tutorial-discover-3.png[]
+
For more information, refer to <<discover, *Discover*>>.
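The KQL query above combines a range condition and a match condition. As a sketch of its semantics, the following Python snippet applies the same predicate to a few invented documents shaped like the eCommerce sample data (field names follow the sample set; the values are made up for illustration):

```python
# Hypothetical documents standing in for the kibana_sample_data_ecommerce index.
orders = [
    {"category": ["Women's Clothing"], "products": [{"taxless_price": 75.0}]},
    {"category": ["Men's Shoes"], "products": [{"taxless_price": 120.0}]},
    {"category": ["Women's Clothing"], "products": [{"taxless_price": 20.0}]},
]

def matches(doc):
    # Mimics: products.taxless_price >= 60 AND category : Women's Clothing
    price_hit = any(p["taxless_price"] >= 60 for p in doc["products"])
    category_hit = "Women's Clothing" in doc["category"]
    return price_hit and category_hit

hits = [d for d in orders if matches(d)]
print(len(hits))  # 1
```

Only the first document satisfies both conditions, which is why the Discover table shrinks when the query is applied.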
[float]
[[view-and-analyze-the-data]]
== View and analyze the data
A dashboard is a collection of panels that you can use to view and analyze the data. Panels contain visualizations, interactive controls, Markdown, and more.
. Open the menu, then click *Dashboard*.
. Click *[eCommerce] Revenue Dashboard*.
+
[role="screenshot"]
image::getting-started/images/tutorial-sample-dashboard.png[]
[float]
[[filter-and-query-the-data]]
=== Filter the data
To focus in on the data you want to view on the dashboard, use filters.
. From the *Controls* visualization, make a selection from the *Manufacturer* and *Category* dropdowns, then click *Apply changes*.
+
For example, the following dashboard shows the data for women's clothing from Gnomehouse.
+
[role="screenshot"]
image::getting-started/images/tutorial-sample-filter.png[]
. To manually add a filter, click *Add filter*, then specify the options.
+
For example, to view the orders for Wednesday, select *day_of_week* from the *Field* dropdown, select *is* from the *Operator* dropdown, then select *Wednesday* from the *Value* dropdown.
+
[role="screenshot"]
image::getting-started/images/tutorial-sample-filter2.png[]
. When you are done, remove the filters.
+
For more information, refer to <<dashboard,*Dashboard*>>.
[float]
[[create-a-visualization]]
=== Create a visualization
To create a treemap that shows the top regions and manufacturers, use *Lens*, then add the treemap to the dashboard.
. From the {kib} toolbar, click *Edit*, then click *Create new*.
. On the *New Visualization* window, click *Lens*.
. From the *Available fields* list, drag and drop the following fields to the visualization builder:
* *geoip.city_name*
* *manufacturer.keyword*
+
. From the visualization dropdown, select *Treemap*.
+
[role="screenshot"]
image::getting-started/images/tutorial-visualization-dropdown.png[Visualization dropdown with Treemap selected]
. Click *Save*.
. On the *Save Lens visualization* window, enter a title and make sure *Add to Dashboard after saving* is selected, then click *Save and return*.
+
The treemap appears as the last visualization on the dashboard.
+
[role="screenshot"]
image::getting-started/images/tutorial-final-dashboard.gif[Final dashboard with new treemap visualization]
+
For more information, refer to <<lens, *Lens*>>.
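A treemap of "top values" is, at its core, a nested count: outer tiles per city, inner tiles per manufacturer within each city. This Python sketch computes those counts over invented order documents (the field values are hypothetical; the real visualization aggregates *geoip.city_name* and *manufacturer.keyword* in {es}):

```python
from collections import Counter

# Hypothetical order documents mirroring the two fields dragged into Lens.
orders = [
    {"city": "New York", "manufacturer": "Gnomehouse"},
    {"city": "New York", "manufacturer": "Elitelligence"},
    {"city": "Cairo", "manufacturer": "Gnomehouse"},
    {"city": "New York", "manufacturer": "Gnomehouse"},
]

# Count each (city, manufacturer) pair; tile size is proportional to the count.
counts = Counter((o["city"], o["manufacturer"]) for o in orders)
for (city, manufacturer), n in counts.most_common():
    print(city, manufacturer, n)
```

The largest tile here would be New York / Gnomehouse, with two orders.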
[float]
[[quick-start-whats-next]]
== What's next?
If you are ready to add your own data, refer to <<connect-to-elasticsearch,Add data to {kib}>>.
If you want to ingest your data, refer to {ingest-guide}/ingest-management-getting-started.html[Quick start: Get logs and metrics into the Elastic Stack].


@ -1,56 +0,0 @@
[[tutorial-define-index]]
=== Define your index patterns
Index patterns tell {kib} which {es} indices you want to explore.
An index pattern can match the name of a single index, or include a wildcard
(*) to match multiple indices.
For example, Logstash typically creates a
series of indices in the format `logstash-YYYY.MMM.DD`. To explore all
of the log data from May 2018, you could specify the index pattern
`logstash-2018.05*`.
[float]
==== Create the index patterns
First you'll create index patterns for the Shakespeare data set, which has an
index named `shakespeare`, and the accounts data set, which has an index named
`bank`. These data sets don't contain time series data.
. Open the menu, then go to *Stack Management > {kib} > Index Patterns*.
. If this is your first index pattern, the *Create index pattern* page opens.
. In the *Index pattern name* field, enter `shakes*`.
+
[role="screenshot"]
image::images/tutorial-pattern-1.png[Image showing how to enter shakes* in Index Pattern Name field]
. Click *Next step*.
. On the *Configure settings* page, click *Create index pattern*.
+
You're presented with a table of all fields and associated data types in the index.
. Create a second index pattern named `ba*`.
[float]
==== Create an index pattern for the time series data
Create an index pattern for the Logstash index, which
contains the time series data.
. Create an index pattern named `logstash*`, then click *Next step*.
. From the *Time field* dropdown, select *@timestamp*, then click *Create index pattern*.
+
[role="screenshot"]
image::images/tutorial_index_patterns.png[Image showing how to create an index pattern]
NOTE: When you define an index pattern, the indices that match that pattern must
exist in Elasticsearch and they must contain data. To check if the indices are
available, open the menu, go to *Dev Tools > Console*, then enter `GET _cat/indices`. Alternately, use
`curl -XGET "http://localhost:9200/_cat/indices"`.
For Windows, run `Invoke-RestMethod -Uri "http://localhost:9200/_cat/indices"` in PowerShell.
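The wildcard matching an index pattern performs is simple glob-style matching. As an illustration, this Python sketch expands a pattern against a hypothetical list of index names (the names are invented stand-ins for what `GET _cat/indices` might return):

```python
from fnmatch import fnmatch

# Invented index names, as a _cat/indices listing might show them.
indices = [
    "shakespeare",
    "bank",
    "logstash-2018.05.01",
    "logstash-2018.05.02",
    "logstash-2018.06.01",
]

def expand(pattern):
    # fnmatch supports more wildcards, but * is the only one used here.
    return [name for name in indices if fnmatch(name, pattern)]

print(expand("logstash-2018.05*"))  # the two May 2018 indices
print(expand("shakes*"))            # ['shakespeare']
```

A pattern without a wildcard, such as `bank`, matches exactly one index.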


@ -1,35 +0,0 @@
[[explore-your-data]]
=== Explore your data
With *Discover*, you use {ref}/query-dsl-query-string-query.html#query-string-syntax[Elasticsearch
queries] to explore your data and narrow the results with filters.
. Open the menu, then go to *Discover*.
+
The `shakes*` index pattern appears.
. To make `ba*` the current index pattern, click the *Change Index Pattern* dropdown, then select `ba*`.
+
By default, all fields are shown for each matching document.
. In the *Search* field, enter the following, then click *Update*:
+
[source,text]
account_number<100 AND balance>47500
+
The search returns all account numbers between zero and 99 with balances in
excess of 47,500. Results appear for account numbers 8, 32, 78, 85, and 97.
+
[role="screenshot"]
image::images/tutorial-discover-2.png[Image showing the search results for account numbers between zero and 99, with balances in excess of 47,500]
+
. Hover over the list of *Available fields*, then
click *Add* next to each field you want include in the table.
+
For example, when you add the `account_number` field, the display changes to a list of five
account numbers.
+
[role="screenshot"]
image::images/tutorial-discover-3.png[Image showing a dropdown with five account numbers, which match the previous query for account balance]
Now that you know what your documents contain, it's time to gain insight into your data with visualizations.
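The search `account_number<100 AND balance>47500` is a conjunction of two range conditions. As a sketch, this Python snippet applies the same predicate to randomly generated accounts shaped like the bank data set (the values are synthetic, so the matching account numbers will differ from the tutorial's 8, 32, 78, 85, and 97):

```python
import random

random.seed(0)
# Fictitious accounts with the two fields the query uses.
accounts = [
    {"account_number": n, "balance": random.randint(1000, 50000)}
    for n in range(1000)
]

# Mimics: account_number<100 AND balance>47500
hits = [a for a in accounts if a["account_number"] < 100 and a["balance"] > 47500]
print(len(hits))
```

Both conditions must hold for a document to appear in the results table.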


@ -1,219 +0,0 @@
[[create-your-own-dashboard]]
== Create your own dashboard
Ready to add data to {kib} and create your own dashboard? In this tutorial, you'll use three types of data sets that'll help you learn to:
* <<load-the-data-sets, Load data into Elasticsearch>>
* <<tutorial-define-index, Define an index pattern>>
* <<explore-your-data, Discover and explore data>>
* <<tutorial-visualizing, Visualize data>>
[float]
[[download-the-data]]
=== Download the data
To complete the tutorial, you'll download and use the following data sets:
* The complete works of William Shakespeare, suitably parsed into fields
* A set of fictitious bank accounts with randomly generated data
* A set of randomly generated log files
Create a new working directory where you want to download the files. From that directory, run the following commands:
[source,shell]
curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/shakespeare.json
curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/accounts.zip
curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/logs.jsonl.gz
Alternatively, for Windows users, run the following commands in PowerShell:
[source,shell]
Invoke-RestMethod https://download.elastic.co/demos/kibana/gettingstarted/8.x/shakespeare.json -OutFile shakespeare.json
Invoke-RestMethod https://download.elastic.co/demos/kibana/gettingstarted/8.x/accounts.zip -OutFile accounts.zip
Invoke-RestMethod https://download.elastic.co/demos/kibana/gettingstarted/8.x/logs.jsonl.gz -OutFile logs.jsonl.gz
Two of the data sets are compressed. To extract the files, use these commands:
[source,shell]
unzip accounts.zip
gunzip logs.jsonl.gz
[float]
==== Structure of the data sets
The Shakespeare data set has the following structure:
[source,json]
{
"line_id": INT,
"play_name": "String",
"speech_number": INT,
"line_number": "String",
"speaker": "String",
"text_entry": "String"
}
The accounts data set has the following structure:
[source,json]
{
"account_number": INT,
"balance": INT,
"firstname": "String",
"lastname": "String",
"age": INT,
"gender": "M or F",
"address": "String",
"employer": "String",
"email": "String",
"city": "String",
"state": "String"
}
The logs data set has dozens of different fields. The notable fields include the following:
[source,json]
{
"memory": INT,
"geo.coordinates": "geo_point",
"@timestamp": "date"
}
[float]
==== Set up mappings
Before you load the Shakespeare and logs data sets, you must set up {ref}/mapping.html[_mappings_] for the fields.
Mappings divide the documents in the index into logical groups and specify the characteristics
of the fields. These characteristics include the searchability of the field
and whether it's _tokenized_, or broken up into separate words.
NOTE: If security is enabled, you must have the `all` Kibana privilege to run this tutorial.
You must also have the `create`, `manage`, `read`, `write`, and `delete`
index privileges. See {ref}/security-privileges.html[Security privileges]
for more information.
Open the menu, then go to *Dev Tools*. On the *Console* page, set up a mapping for the Shakespeare data set:
[source,js]
PUT /shakespeare
{
"mappings": {
"properties": {
"speaker": {"type": "keyword"},
"play_name": {"type": "keyword"},
"line_id": {"type": "integer"},
"speech_number": {"type": "integer"}
}
}
}
//CONSOLE
The mapping specifies field characteristics for the data set:
* The `speaker` and `play_name` fields are keyword fields. These fields are not analyzed.
The strings are treated as a single unit even if they contain multiple words.
* The `line_id` and `speech_number` fields are integers.
The logs data set requires a mapping to label the latitude and longitude pairs
as geographic locations by applying the `geo_point` type.
[source,js]
PUT /logstash-2015.05.18
{
"mappings": {
"properties": {
"geo": {
"properties": {
"coordinates": {
"type": "geo_point"
}
}
}
}
}
}
//CONSOLE
[source,js]
PUT /logstash-2015.05.19
{
"mappings": {
"properties": {
"geo": {
"properties": {
"coordinates": {
"type": "geo_point"
}
}
}
}
}
}
//CONSOLE
[source,js]
PUT /logstash-2015.05.20
{
"mappings": {
"properties": {
"geo": {
"properties": {
"coordinates": {
"type": "geo_point"
}
}
}
}
}
}
//CONSOLE
The accounts data set doesn't require any mappings.
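The three Logstash mapping requests above are identical apart from the index name, so the PUT bodies can be generated rather than repeated. This Python sketch prints the same three requests (you would still paste them into *Console* or send them with curl):

```python
import json

# The shared geo_point mapping from the tutorial.
geo_mapping = {
    "mappings": {
        "properties": {
            "geo": {
                "properties": {
                    "coordinates": {"type": "geo_point"}
                }
            }
        }
    }
}

# One daily index per day of logs in the sample files.
for day in ("18", "19", "20"):
    print(f"PUT /logstash-2015.05.{day}")
    print(json.dumps(geo_mapping, indent=2))
```

Generating the bodies in a loop guarantees all three indices get exactly the same mapping.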
[float]
[[load-the-data-sets]]
==== Load the data sets
At this point, you're ready to use the Elasticsearch {ref}/docs-bulk.html[bulk]
API to load the data sets:
[source,shell]
curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/bank/_bulk?pretty' --data-binary @accounts.json
curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/_bulk?pretty' --data-binary @logs.jsonl
Or for Windows users, in PowerShell:
[source,shell]
Invoke-RestMethod "http://<host>:<port>/bank/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "accounts.json"
Invoke-RestMethod "http://<host>:<port>/shakespeare/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "shakespeare.json"
Invoke-RestMethod "http://<host>:<port>/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "logs.jsonl"
These commands might take some time to execute, depending on the available computing resources.
When you define an index pattern, the indices that match the pattern must
exist in {es} and contain data.
To verify the availability of the indices, open the menu, go to *Dev Tools > Console*, then enter:
[source,js]
GET /_cat/indices?v
Alternatively, use:
[source,shell]
curl -XGET "http://localhost:9200/_cat/indices"
The output should look similar to:
[source,shell]
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open bank 1 1 1000 0 418.2kb 418.2kb
yellow open shakespeare 1 1 111396 0 17.6mb 17.6mb
yellow open logstash-2015.05.18 1 1 4631 0 15.6mb 15.6mb
yellow open logstash-2015.05.19 1 1 4624 0 15.7mb 15.7mb
yellow open logstash-2015.05.20 1 1 4750 0 16.4mb 16.4mb
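The `--data-binary` payloads above are in the bulk API's newline-delimited JSON format: an action line, then the document, with a required trailing newline. As an illustration, this Python sketch builds such a body for two invented bank documents:

```python
import json

# Two fictitious documents shaped like the accounts data set.
docs = [
    {"account_number": 1, "balance": 39225},
    {"account_number": 2, "balance": 28838},
]

lines = []
for doc in docs:
    lines.append(json.dumps({"index": {"_index": "bank"}}))  # action line
    lines.append(json.dumps(doc))                            # document line
body = "\n".join(lines) + "\n"  # the bulk API requires a trailing newline
print(body)
```

Omitting the trailing newline is a common cause of bulk request failures.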


@ -1,159 +0,0 @@
[[explore-kibana-using-sample-data]]
== Explore {kib} using sample data
Ready to get some hands-on experience with {kib}?
In this tutorial, you'll work with {kib} sample data and learn to:
* <<explore-the-data, Explore the sample data using *Discover*>>
* <<view-and-analyze-the-data, View and analyze the data on a dashboard>>
* <<filter-and-query-the-data, Filter and query the dashboard data>>
NOTE: If security is enabled, you must have `read`, `write`, and `manage` privileges
on the `kibana_sample_data_*` indices. For more information, refer to
{ref}/security-privileges.html[Security privileges].
[float]
[[add-the-sample-data]]
=== Add the sample data
Add the *Sample flight data*.
. On the home page, click *Load a data set and a {kib} dashboard*.
. On the *Sample flight data* card, click *Add data*.
[float]
[[explore-the-data]]
=== Explore the data
Explore the documents in the index that
match the selected index pattern. The index pattern tells {kib} which {es} index you want to
explore.
. Open the menu, then go to *Discover*.
. Make sure `kibana_sample_data_flights` is the current index pattern.
You might need to click *New* in the {kib} toolbar to refresh the data.
+
You'll see a histogram that shows the distribution of
documents over time. A table lists the fields for
each document that matches the index. By default, all fields are shown.
+
[role="screenshot"]
image::getting-started/images/tutorial-sample-discover1.png[]
. Hover over the list of *Available fields*, then click *Add* next
to each field you want to explore in the table.
+
[role="screenshot"]
image::getting-started/images/tutorial-sample-discover2.png[]
[float]
[[view-and-analyze-the-data]]
=== View and analyze the data
A _dashboard_ is a collection of panels that provide you with an overview of your data that you can
use to analyze your data. Panels contain everything you need, including visualizations,
interactive controls, Markdown, and more.
To open the *Global Flight* dashboard, open the menu, then go to *Dashboard*.
[role="screenshot"]
image::getting-started/images/tutorial-sample-dashboard.png[]
[float]
[[change-the-panel-data]]
==== Change the panel data
To gain insights into your data, change the appearance and behavior of the panels.
For example, edit the metric panel to find the airline that has the lowest average fares.
. In the {kib} toolbar, click *Edit*.
. In the *Average Ticket Price* metric panel, open the panel menu, then select *Edit visualization*.
. To change the data on the panel, use an {es} {ref}/search-aggregations.html[bucket aggregation],
which sorts the documents that match your search criteria into different categories or buckets.
.. In the *Buckets* pane, select *Add > Split group*.
.. From the *Aggregation* dropdown, select *Terms*.
.. From the *Field* dropdown, select *Carrier*.
.. Set *Descending* to *4*, then click *Update*.
+
The average ticket prices for all four airlines appear in the visualization builder.
+
[role="screenshot"]
image::getting-started/images/tutorial-sample-edit1.png[]
. To save your changes, click *Save and return* in the {kib} toolbar.
. To save the dashboard, click *Save* in the {kib} toolbar.
+
[role="screenshot"]
image::getting-started/images/tutorial-sample-edit2.png[]
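The terms bucket aggregation configured above groups flights by *Carrier* and averages the ticket price within each bucket. As a sketch of that logic, this Python snippet does the same over invented flight documents (the carriers and prices are made up; only the grouping logic mirrors the aggregation):

```python
from collections import defaultdict

# Hypothetical flight documents with the two fields the metric uses.
flights = [
    {"Carrier": "JetBeats", "AvgTicketPrice": 640.0},
    {"Carrier": "JetBeats", "AvgTicketPrice": 560.0},
    {"Carrier": "Kibana Airlines", "AvgTicketPrice": 430.0},
    {"Carrier": "ES-Air", "AvgTicketPrice": 710.0},
]

# Terms aggregation: one bucket per carrier.
buckets = defaultdict(list)
for f in flights:
    buckets[f["Carrier"]].append(f["AvgTicketPrice"])

# Metric inside each bucket: the average price.
averages = {carrier: sum(p) / len(p) for carrier, p in buckets.items()}
cheapest = min(averages, key=averages.get)
print(cheapest)  # Kibana Airlines
```

Sorting the buckets by their average, as the panel does, immediately surfaces the carrier with the lowest fares.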
[float]
[[filter-and-query-the-data]]
==== Filter and query the data
To focus in on the data you want to explore, use filters and queries.
For more information, refer to
{ref}/query-filter-context.html[Query and filter context].
To filter the data:
. In the *Controls* visualization, select an *Origin City* and *Destination City*, then click *Apply changes*.
+
The `OriginCityName` and the `DestCityName` fields filter the data in the panels.
+
For example, the following dashboard shows the data for flights from London to Milan.
+
[role="screenshot"]
image::getting-started/images/tutorial-sample-filter.png[]
. To manually add a filter, click *Add filter*,
then specify the data you want to view.
. When you are finished experimenting, remove all filters.
[[query-the-data]]
To query the data:
. To view all flights out of Rome, enter the following in the *KQL* query bar, then click *Update*:
+
[source,text]
OriginCityName: Rome
. For a more complex query with AND and OR, enter:
+
[source,text]
OriginCityName:Rome AND (Carrier:JetBeats OR Carrier:"Kibana Airlines")
+
The dashboard panels update to display the flights out of Rome on JetBeats and
{kib} Airlines.
+
[role="screenshot"]
image::getting-started/images/tutorial-sample-query.png[]
. When you are finished exploring, remove the query by
clearing the contents in the *KQL* query bar, then click *Update*.
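The AND/OR query above nests a disjunction inside a conjunction. This Python sketch applies the equivalent predicate to a few invented flight documents, to make the grouping explicit:

```python
# Hypothetical flight documents; only the two queried fields are shown.
flights = [
    {"OriginCityName": "Rome", "Carrier": "JetBeats"},
    {"OriginCityName": "Rome", "Carrier": "Logstash Airways"},
    {"OriginCityName": "Venice", "Carrier": "JetBeats"},
    {"OriginCityName": "Rome", "Carrier": "Kibana Airlines"},
]

def matches(f):
    # Mimics: OriginCityName:Rome AND (Carrier:JetBeats OR Carrier:"Kibana Airlines")
    return f["OriginCityName"] == "Rome" and f["Carrier"] in ("JetBeats", "Kibana Airlines")

hits = [f for f in flights if matches(f)]
print(len(hits))  # 2
```

Without the parentheses, KQL would bind the AND more tightly than the OR, so the grouping matters.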
[float]
=== Next steps
Now that you know the {kib} basics, try out the <<create-your-own-dashboard, Create your own dashboard>> tutorial, where you'll learn to:
* Add a data set to {kib}
* Define an index pattern
* Discover and explore data
* Create and add panels to a dashboard


@ -1,193 +0,0 @@
[[tutorial-visualizing]]
=== Visualize your data
Shape your data using a variety
of {kib} supported visualizations, tables, and more. In this tutorial, you'll create four
visualizations that you'll use to create a dashboard.
To begin, open the menu, go to *Dashboard*, then click *Create new dashboard*.
[float]
[[compare-the-number-of-speaking-parts-in-the-play]]
=== Compare the number of speaking parts in the plays
To visualize the Shakespeare data and compare the number of speaking parts in the plays, create a bar chart using *Lens*.
. Click *Create new*, then click *Lens* on the *New Visualization* window.
+
[role="screenshot"]
image::images/tutorial-visualize-wizard-step-1.png[Image showing different options for your new visualization]
. Make sure the index pattern is `shakes*`.
. Display the play data along the x-axis.
.. From the *Available fields* list, drag and drop *play_name* to the *X-axis* field.
.. Click *Top values of play_name*.
.. From the *Order direction* dropdown, select *Ascending*.
.. In the *Label* field, enter `Play Name`.
. Display the number of speaking parts per play along the y-axis.
.. From the *Available fields* list, drag and drop *speaker* to the *Y-axis* field.
.. Click *Unique count of speaker*.
.. In the *Label* field, enter `Speaking Parts`.
+
[role="screenshot"]
image::images/tutorial-visualize-bar-1.5.png[Bar chart showing the speaking parts data]
. *Save* the chart with the name `Bar Example`.
+
To show a tooltip with the number of speaking parts for that play, hover over a bar.
+
Notice how the individual play names show up as whole phrases, instead of
broken up into individual words. This is the result of the mapping
you did at the beginning of the tutorial, when you marked the `play_name` field
as `not analyzed`.
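The "unique count of speaker" metric behind this bar chart is a cardinality per play. As an illustration, this Python sketch computes the same numbers over a handful of invented Shakespeare-shaped documents:

```python
from collections import defaultdict

# Hypothetical line documents shaped like the Shakespeare data set.
docs = [
    {"play_name": "Hamlet", "speaker": "HAMLET"},
    {"play_name": "Hamlet", "speaker": "HORATIO"},
    {"play_name": "Hamlet", "speaker": "HAMLET"},   # repeat speaker, not re-counted
    {"play_name": "Othello", "speaker": "IAGO"},
]

# Collect the distinct speakers per play, then count them.
speakers = defaultdict(set)
for d in docs:
    speakers[d["play_name"]].add(d["speaker"])

parts = {play: len(s) for play, s in speakers.items()}
print(parts)  # {'Hamlet': 2, 'Othello': 1}
```

Using a set per play is what makes repeated lines by the same speaker count only once, just as the unique count metric does.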
[float]
[[view-the-average-account-balance-by-age]]
=== View the average account balance by age
To gain insight into the account balances in the bank account data, create a pie chart. In this tutorial, you'll use the {es}
{ref}/search-aggregations.html[bucket aggregation] to specify the pie slices to display. The bucket aggregation sorts the documents that match your search criteria into different
categories and establishes multiple ranges of account balances so that you can find how many accounts fall into each range.
. Click *Create new*, then click *Pie* on the *New Visualization* window.
. On the *Choose a source* window, select `ba*`.
+
Since the default search matches all documents, the pie contains a single slice.
. In the *Buckets* pane, click *Add > Split slices.*
.. From the *Aggregation* dropdown, select *Range*.
.. From the *Field* dropdown, select *balance*.
.. Click *Add range* until there are six rows of fields, then define the following ranges:
+
[source,text]
0 999
1000 2999
3000 6999
7000 14999
15000 30999
31000 50000
. Click *Update*.
+
The pie chart displays the proportion of the 1,000 accounts that fall into each of the ranges.
+
[role="screenshot"]
image::images/tutorial-visualize-pie-2.png[Pie chart displaying accounts that fall into each of the ranges, scaled to 1000 accounts]
. Add another bucket aggregation that displays the ages of the account holders.
.. In the *Buckets* pane, click *Add*, then click *Split slices*.
.. From the *Sub aggregation* dropdown, select *Terms*.
.. From the *Field* dropdown, select *age*, then click *Update*.
+
The breakdown of the ages of the account holders is displayed
in a ring around the balance ranges.
+
[role="screenshot"]
image::images/tutorial-visualize-pie-3.png[Final pie chart showing all of the changes]
. Click *Save*, then enter `Pie Example` in the *Title* field.
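The range aggregation configured above sorts each account into exactly one balance bucket. As a sketch, this Python snippet buckets a few invented balances using the tutorial's six ranges:

```python
from collections import Counter

# The six tutorial ranges as (inclusive low, inclusive high) pairs.
ranges = [
    (0, 999), (1000, 2999), (3000, 6999),
    (7000, 14999), (15000, 30999), (31000, 50000),
]

def bucket(balance):
    # Return the first range containing the balance, or None.
    for low, high in ranges:
        if low <= balance <= high:
            return (low, high)
    return None

# Invented balances; the pie slice size is the count per bucket.
balances = [500, 2500, 14999, 31000, 49999]
counts = Counter(bucket(b) for b in balances)
print(counts[(31000, 50000)])  # 2
```

Each account lands in exactly one slice, so the slice proportions sum to the full 1,000 accounts in the real data set.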
[float]
[role="xpack"]
[[visualize-geographic-information]]
=== Visualize geographic information
To visualize geographic information in the log file data, use <<maps,Maps>>.
. Click *Create new*, then click *Maps* on the *New Visualization* window.
. To change the time, use the time filter.
.. Set the *Start date* to `May 18, 2015 @ 12:00:00.000`.
.. Set the *End date* to `May 20, 2015 @ 12:00:00.000`.
+
[role="screenshot"]
image::images/gs_maps_time_filter.png[Image showing the time filter for Maps tutorial]
.. Click *Update*.
. Map the geo coordinates from the log files.
.. Click *Add layer > Clusters and grids*.
.. From the *Index pattern* dropdown, select *logstash*.
.. Click *Add layer*.
. Specify the *Layer Style*.
.. From the *Fill color* dropdown, select the yellow to red color ramp.
.. In the *Border width* field, enter `3`.
.. From the *Border color* dropdown, select *#FFF*, then click *Save & close*.
+
[role="screenshot"]
image::images/tutorial-visualize-map-2.png[Example of a map visualization]
. Click *Save*, then enter `Map Example` in the *Title* field.
. Add the map to your dashboard.
.. Open the menu, go to *Dashboard*, then click *Add*.
.. On the *Add panels* flyout, click *Map Example*.
[float]
[[tutorial-visualize-markdown]]
=== Add context to your visualizations with Markdown
Add context to your new visualizations with Markdown text.
. Click *Create new*, then click *Markdown* on the *New Visualization* window.
. In the *Markdown* text field, enter:
+
[source,markdown]
# This is a tutorial dashboard!
The Markdown widget uses **markdown** syntax.
> Blockquotes in Markdown use the > character.
. Click *Update*.
+
The Markdown renders in the preview pane.
+
[role="screenshot"]
image::images/tutorial-visualize-md-2.png[Image showing example markdown editing field]
. Click *Save*, then enter `Markdown Example` in the *Title* field.
[role="screenshot"]
image::images/tutorial-dashboard.png[Final visualization with bar chart, pie chart, map, and markdown text field]
[float]
=== Next steps
Now that you have the basics, you're ready to start exploring your own system data with {kib}.
* To add your own data to {kib}, refer to <<connect-to-elasticsearch,Add data to {kib}>>.
* To search and filter your data, refer to {kibana-ref}/discover.html[Discover].
* To create a dashboard with your own data, refer to <<dashboard, Dashboard>>.
* To create maps that you can add to your dashboards, refer to <<maps,Maps>>.
* To create presentations of your live data, refer to <<canvas,Canvas>>.


@ -67,7 +67,7 @@ You can read more at {ref}/rollup-job-config.html[rollup job configuration].
=== Try it: Create and visualize rolled up data
This example creates a rollup job to capture log data from sample web logs.
To follow along, add the <<gs-get-data-into-kibana, sample web logs data set>>.
To follow along, add the sample web logs data set.
In this example, you want data that is older than 7 days in the target index pattern `kibana_sample_data_logs`
to roll up once a day into the index `rollup_logstash`. You'll bucket the


@ -59,7 +59,7 @@ This page has moved. Please see <<reporting-getting-started>>.
[role="exclude",id="add-sample-data"]
== Add sample data
This page has moved. Please see <<gs-get-data-into-kibana>>.
This page has moved. Please see <<get-started>>.
[role="exclude",id="tilemap"]
== Coordinate map
@ -112,3 +112,9 @@ This content has moved. See
This content has moved. See
{ref}/ccr-getting-started.html#ccr-getting-started-remote-cluster[Connect to a remote cluster].
[role="exclude",id="tutorial-define-index"]
== Define your index patterns
This content has moved. See
<<get-started, Quick start>>.


@ -11,7 +11,7 @@ To start working with your data in {kib}, you can:
* Connect {kib} with existing {es} indices.
If you're not ready to use your own data, you can add a <<gs-get-data-into-kibana, sample data set>>
If you're not ready to use your own data, you can add a <<get-started, sample data set>>
to see all that you can do in {kib}.
[float]


@ -39,7 +39,7 @@ Create the *Host Overview* drilldown shown above.
*Set up the dashboards*
. Add the <<gs-get-data-into-kibana, sample web logs>> data set.
. Add the sample web logs data set.
. Create a new dashboard, called `Host Overview`, and include these visualizations
from the sample data set:


@ -36,7 +36,7 @@ The following panels support URL drilldowns:
This example shows how to create the "Show on Github" drilldown shown above.
. Add the <<gs-get-data-into-kibana, sample web logs>> data set.
. Add the sample web logs data set.
. Open the *[Logs] Web traffic* dashboard. This isn't data from GitHub, but it should work for demonstration purposes.
. In the dashboard menu bar, click *Edit*.
. In *[Logs] Visitors by OS*, open the panel menu, and then select *Create drilldown*.


@ -1,61 +0,0 @@
[[get-started]]
= Get started
[partintro]
--
Ready to try out {kib} and see what it can do? The quickest way to get started with {kib} is to set up on Cloud, then add a sample data set to explore the full range of {kib} features.
[float]
[[set-up-on-cloud]]
== Set up on cloud
include::{docs-root}/shared/cloud/ess-getting-started.asciidoc[]
[float]
[[gs-get-data-into-kibana]]
== Get data into {kib}
The easiest way to get data into {kib} is to add a sample data set.
{kib} has several sample data sets that you can use before loading your own data:
* *Sample eCommerce orders* includes visualizations for tracking product-related information,
such as cost, revenue, and price.
* *Sample flight data* includes visualizations for monitoring flight routes.
* *Sample web logs* includes visualizations for monitoring website traffic.
To use the sample data sets:
. Go to the home page.
. Click *Load a data set and a {kib} dashboard*.
. Click *View data* and view the prepackaged dashboards, maps, and more.
[role="screenshot"]
image::getting-started/images/add-sample-data.png[]
NOTE: The timestamps in the sample data sets are relative to when they are installed.
If you uninstall and reinstall a data set, the timestamps change to reflect the most recent installation.
[float]
== Next steps
* To get a hands-on experience creating visualizations, follow the <<explore-kibana-using-sample-data, add sample data>> tutorial.
* If you're ready to load an actual data set and build a dashboard, follow the <<create-your-own-dashboard, Create your own dashboard>> tutorial.
--
include::{kib-repo-dir}/getting-started/tutorial-sample-data.asciidoc[]
include::{kib-repo-dir}/getting-started/tutorial-full-experience.asciidoc[]
include::{kib-repo-dir}/getting-started/tutorial-define-index.asciidoc[]
include::{kib-repo-dir}/getting-started/tutorial-discovering.asciidoc[]
include::{kib-repo-dir}/getting-started/tutorial-visualizing.asciidoc[]


@ -2,6 +2,8 @@ include::introduction.asciidoc[]
include::whats-new.asciidoc[]
include::{kib-repo-dir}/getting-started/quick-start-guide.asciidoc[]
include::setup.asciidoc[]
include::monitoring/configuring-monitoring.asciidoc[leveloffset=+1]
@ -11,8 +13,6 @@ include::monitoring/monitoring-kibana.asciidoc[leveloffset=+2]
include::security/securing-kibana.asciidoc[]
include::getting-started.asciidoc[]
include::discover.asciidoc[]
include::dashboard/dashboard.asciidoc[]


@ -155,6 +155,6 @@ and start exploring data in minutes.
You can also <<install, install {kib} on your own>>&mdash;no code, no additional
infrastructure required.
Our <<create-your-own-dashboard, Getting Started>> and in-product guidance can
Our <<get-started, Quick start>> and in-product guidance can
help you get up and running, faster. Click the help icon image:images/intro-help-icon.png[]
in the top navigation bar for help with questions or to provide feedback.


@ -28,7 +28,7 @@ To complete this tutorial, you'll need the following:
* **A space**: In this tutorial, use `Dev Mortgage` as the space
name. See <<spaces-managing, spaces management>> for
details on creating a space.
* **Data**: You can use <<gs-get-data-into-kibana, sample data>> or
* **Data**: You can use <<get-started, sample data>> or
live data. In the following steps, Filebeat and Metricbeat data are used.
[float]