Updates getting started guide (#41778)

@@ -7,7 +7,7 @@ To get up and running with Canvas, use the following tutorial where you'll creat
 [float]
 === Before you begin
 
-For this tutorial, you'll need to add the {kibana-ref}/add-sample-data.html[Sample eCommerce orders data].
+For this tutorial, you'll need to add the <<add-sample-data, Sample eCommerce orders data>>.
 
 [float]
 === Create and personalize your workpad
@@ -14,7 +14,7 @@ When you create a workpad, you'll start with a blank page, or you can choose a w
 * To import an existing workpad, click and drag a workpad JSON file to the *Import workpad JSON file* field.
 
-For advanced workpad examples, add a {kibana-ref}/add-sample-data.html[sample Kibana data set], then select *Canvas* from the *View Data* dropdown list.
+For advanced workpad examples, add a <<add-sample-data, sample Kibana data set>>, then select *Canvas* from the *View Data* dropdown list.
 
 For more workpad inspiration, go to the link:https://www.elastic.co/blog/[Elastic Blog].
@@ -4,47 +4,57 @@
 [partintro]
 --
 
-You’re new to Kibana and want to give it a try. {kib} has sample data sets and
-tutorials to help you get started.
+Ready to get some hands-on experience with {kib}? There are two ways to start:
 
-[float]
-=== Sample data
-
-You can use the <<add-sample-data, sample data
-sets>> to take {kib} for a test ride without having
-to go through the process of loading data yourself. With one click,
-you can install a sample data set and start interacting with
-{kib} visualizations in seconds. You can access the sample data
-from the {kib} home page.
+* <<tutorial-sample-data, Explore {kib} using the Flights dashboard>>
++
+Load the Flights sample data and dashboard with one click and start
+interacting with {kib} visualizations in seconds.
 
-Before you begin, make sure you've <<install, installed Kibana>> and established
-a {kibana-ref}/connect-to-elasticsearch.html[connection to Elasticsearch].
-You might also be interested in the
-https://www.elastic.co/webinars/getting-started-kibana[Getting Started with Kibana]
-video tutorial.
+* <<tutorial-build-dashboard, Build your own dashboard>>
++
+Manually load a data set and build your own visualizations and dashboard.
 
-[float]
-=== Add data tutorials
-{kib} has built-in *Add Data* tutorials to help you set up
-data flows in the Elastic Stack. These tutorials are available
-from the Kibana home page. In *Add Data to Kibana*, find the data type
-you’re interested in, and click its button to view a list of available tutorials.
-
-[float]
-=== Hands-on experience
-
-The following tutorials walk you through searching, analyzing,
-and visualizing data.
-
-* <<tutorial-sample-data, Explore Kibana using sample data>>. You'll
-learn to filter and query data, edit visualizations, and interact with dashboards.
-
-* <<tutorial-build-dashboard, Build your own dashboard>>. You'll manually load a data set and build
-your own visualizations and dashboard.
-
+[float]
+=== Before you begin
+
+Make sure you've <<install, installed Kibana>> and established
+a <<connect-to-elasticsearch, connection to Elasticsearch>>.
+
+If you are running our https://cloud.elastic.co[hosted Elasticsearch Service]
+on Elastic Cloud, you can access Kibana with a single click.
 
 --
 
 include::getting-started/add-sample-data.asciidoc[]
 
 include::getting-started/tutorial-sample-data.asciidoc[]
 
 include::getting-started/tutorial-sample-filter.asciidoc[]
 
 include::getting-started/tutorial-sample-query.asciidoc[]
 
 include::getting-started/tutorial-sample-discover.asciidoc[]
 
 include::getting-started/tutorial-sample-edit.asciidoc[]
 
 include::getting-started/tutorial-sample-inspect.asciidoc[]
 
 include::getting-started/tutorial-sample-remove.asciidoc[]
 
 include::getting-started/tutorial-full-experience.asciidoc[]
 
 include::getting-started/tutorial-load-dataset.asciidoc[]
 
 include::getting-started/tutorial-define-index.asciidoc[]
 
 include::getting-started/tutorial-discovering.asciidoc[]
@@ -53,6 +63,3 @@ include::getting-started/tutorial-visualizing.asciidoc[]
 
 include::getting-started/tutorial-dashboard.asciidoc[]
 
-include::getting-started/tutorial-inspect.asciidoc[]
-
-include::getting-started/wrapping-up.asciidoc[]
@@ -1,32 +1,28 @@
 [[add-sample-data]]
-== Get up and running with sample data
+== Add sample data
 
 {kib} has several sample data sets that you can use to explore {kib} before loading your own data.
 Sample data sets install prepackaged visualizations, dashboards,
 {kibana-ref}/canvas-getting-started.html[Canvas workpads],
 and {kibana-ref}/maps.html[Maps].
 
-The sample data sets showcase a variety of use cases:
+These sample data sets showcase a variety of use cases:
 
 * *eCommerce orders* includes visualizations for product-related information,
 such as cost, revenue, and price.
-* *Flight data* enables you to view and interact with flight routes.
+* *Flight data* enables you to view and interact with flight routes for four airlines.
 * *Web logs* lets you analyze website traffic.
 
-To get started, go to the home page and click the link next to *Add sample data*.
+To get started, go to the {kib} home page and click the link underneath *Add sample data*.
 
-Once you have loaded a data set, click *View data* to view visualizations in *Dashboard*.
+Once you've loaded a data set, click *View data* to view prepackaged
+visualizations, dashboards, Canvas workpads, Maps, and Machine Learning jobs.
 
-*Note:* The timestamps in the sample data sets are relative to when they are installed.
-If you uninstall and reinstall a data set, the timestamps will change to reflect the most recent installation.
+[role="screenshot"]
+image::images/add-sample-data.png[]
+
+NOTE: The timestamps in the sample data sets are relative to when they are installed.
+If you uninstall and reinstall a data set, the timestamps will change to reflect the most recent installation.
 
 [float]
-==== Next steps
+=== Next steps
 
-Play with the sample flight data in the {kibana-ref}/tutorial-sample-data.html[flight dashboard tutorial].
+* Explore {kib} by following the <<tutorial-sample-data, sample data tutorial>>.
 
-Learn how to load data, define index patterns and build visualizations by {kibana-ref}/tutorial-build-dashboard.html[building your own dashboard].
+* Learn how to load data, define index patterns, and build visualizations by <<tutorial-build-dashboard, building your own dashboard>>.
@@ -1,27 +1,57 @@
 [[tutorial-dashboard]]
-=== Displaying your visualizations in a dashboard
+=== Add visualizations to a dashboard
 
 A dashboard is a collection of visualizations that you can arrange and share.
 You'll build a dashboard that contains the visualizations you saved during
 this tutorial.
 
 . Open *Dashboard*.
-. Click *Create new dashboard*.
-. Click *Add*.
+. On the Dashboard overview page, click *Create new dashboard*.
+. Click *Add* in the menu bar.
 . Add *Bar Example*, *Map Example*, *Markdown Example*, and *Pie Example*.
-
-Your sample dashboard look like this:
-
++
+Your sample dashboard should look like this:
++
 [role="screenshot"]
 image::images/tutorial-dashboard.png[]
 
 . Try out the editing controls.
 +
 You can rearrange the visualizations by clicking a the header of a
 visualization and dragging. The gear icon in the top right of a visualization
 displays controls for editing and deleting the visualization. A resize control
 is on the lower right.
 
-To get a link to share or HTML code to embed the dashboard in a web page, save
-the dashboard and click *Share*.
-
-*Save* your dashboard.
+. *Save* your dashboard.
+
+==== Inspect the data
+
+Seeing visualizations of your data is great,
+but sometimes you need to look at the actual data to
+understand what's really going on. You can inspect the data behind any visualization
+and view the {es} query used to retrieve it.
+
+. In the dashboard, hover the pointer over the pie chart, and then click the icon in the upper right.
+. From the *Options* menu, select *Inspect*.
++
+[role="screenshot"]
+image::images/tutorial-full-inspect1.png[]
+
+. To look at the query used to fetch the data for the visualization, select *View > Requests*
+in the upper right of the Inspect pane.
+
+[float]
+=== Next steps
+
+Now that you have a handle on the basics, you're ready to start exploring
+your own data with Kibana.
+
+* See {kibana-ref}/discover.html[Discover] for information about searching and filtering
+your data.
+* See {kibana-ref}/visualize.html[Visualize] for information about the visualization
+types Kibana has to offer.
+* See {kibana-ref}/management.html[Management] for information about configuring Kibana
+and managing your saved objects.
+* See {kibana-ref}/console-kibana.html[Console] to learn about the interactive
+console you can use to submit REST requests to Elasticsearch.
@@ -1,45 +1,53 @@
 [[tutorial-define-index]]
-=== Defining your index patterns
+=== Define your index patterns
 
 Index patterns tell Kibana which Elasticsearch indices you want to explore.
 An index pattern can match the name of a single index, or include a wildcard
 (*) to match multiple indices.
 
 For example, Logstash typically creates a
 series of indices in the format `logstash-YYYY.MMM.DD`. To explore all
 of the log data from May 2018, you could specify the index pattern
 `logstash-2018.05*`.
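The wildcard behavior described above can be sketched outside of Kibana. This is a minimal Python illustration, with made-up index names, of which names a pattern such as `logstash-2018.05*` selects (Kibana's real matching is done by Elasticsearch, not by `fnmatch`):

```python
from fnmatch import fnmatch

# Hypothetical index names, one per day, as Logstash might create them.
indices = [
    "logstash-2018.04.30",
    "logstash-2018.05.01",
    "logstash-2018.05.18",
    "logstash-2018.06.01",
    "shakespeare",
]

def matching(pattern, names):
    """Return the index names selected by a wildcard pattern."""
    return [n for n in names if fnmatch(n, pattern)]

print(matching("logstash-2018.05*", indices))  # only the May 2018 indices
print(matching("logstash-*", indices))         # every Logstash index
```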
 
-You'll create patterns for the Shakespeare data set, which has an
+[float]
+==== Create your first index pattern
+
+First you'll create index patterns for the Shakespeare data set, which has an
 index named `shakespeare,` and the accounts data set, which has an index named
-`bank.` These data sets don't contain time-series data.
+`bank`. These data sets don't contain time series data.
 
 . In Kibana, open *Management*, and then click *Index Patterns.*
 . If this is your first index pattern, the *Create index pattern* page opens automatically.
-Otherwise, click *Create index pattern* in the upper left.
+Otherwise, click *Create index pattern*.
 . Enter `shakes*` in the *Index pattern* field.
 +
 [role="screenshot"]
 image::images/tutorial-pattern-1.png[]
 
 . Click *Next step*.
-. In *Configure settings*, click *Create index pattern*. For this pattern,
-you don't need to configure any settings.
-. Define a second index pattern named `ba*` You don't need to configure any settings for this pattern.
+. In *Configure settings*, click *Create index pattern*.
++
+You’re presented a table of all fields and associated data types in the index.
+
+. Return to the *Index patterns* overview page and define a second index pattern named `ba*`.
 
-Now create an index pattern for the Logstash data set. This data set
-contains time-series data.
+[float]
+==== Create an index pattern for time series data
+
+Now create an index pattern for the Logstash index, which
+contains time series data.
 
 . Define an index pattern named `logstash*`.
 . Click *Next step*.
-. In *Configure settings*, select *@timestamp* in the *Time Filter field name* dropdown menu.
+. Open the *Time Filter field name* dropdown and select *@timestamp*.
 . Click *Create index pattern*.
 
 NOTE: When you define an index pattern, the indices that match that pattern must
 exist in Elasticsearch and they must contain data. To check which indices are
 available, go to *Dev Tools > Console* and enter `GET _cat/indices`. Alternately, use
 `curl -XGET "http://localhost:9200/_cat/indices"`.
@@ -1,5 +1,5 @@
 [[tutorial-discovering]]
-=== Discovering your data
+=== Discover your data
 
 Using the Discover application, you can enter
 an {ref}/query-dsl-query-string-query.html#query-string-syntax[Elasticsearch
@@ -10,24 +10,27 @@ query] to search your data and filter the results.
 The current index pattern appears below the filter bar, in this case `shakes*`.
 You might need to click *New* in the menu bar to refresh the data.
 
 . Click the caret to the right of the current index pattern, and select `ba*`.
++
+By default, all fields are shown for each matching document.
 
 . In the search field, enter the following string:
 +
 [source,text]
 account_number<100 AND balance>47500
 
 +
 The search returns all account numbers between zero and 99 with balances in
-excess of 47,500. It returns results for account numbers 8, 32, 78, 85, and 97.
+excess of 47,500. Results appear for account numbers 8, 32, 78, 85, and 97.
++
 [role="screenshot"]
 image::images/tutorial-discover-2.png[]
 
-By default, all fields are shown for each matching document. To choose which
-fields to display, hover the pointer over the the list of *Available Fields*
+. To choose which
+fields to display, hover the pointer over the list of *Available fields*
 and then click *add* next to each field you want include as a column in the table.
++
+For example, if you add the `account_number` field, the display changes to a list of five
+account numbers.
++
 [role="screenshot"]
 image::images/tutorial-discover-3.png[]
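The range query shown above can be mimicked in plain Python. The account documents below are hypothetical stand-ins for the `bank` index, chosen so the matching account numbers line up with the tutorial's result:

```python
# Hypothetical documents standing in for the accounts (bank) index.
accounts = [
    {"account_number": 8,   "balance": 48868},
    {"account_number": 32,  "balance": 48086},
    {"account_number": 50,  "balance": 1000},   # excluded: balance <= 47500
    {"account_number": 78,  "balance": 48656},
    {"account_number": 85,  "balance": 47794},
    {"account_number": 97,  "balance": 49671},
    {"account_number": 120, "balance": 49000},  # excluded: account_number >= 100
]

# Mirror of the query string: account_number<100 AND balance>47500
hits = [a for a in accounts
        if a["account_number"] < 100 and a["balance"] > 47500]

print(sorted(a["account_number"] for a in hits))  # [8, 32, 78, 85, 97]
```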
@@ -1,12 +1,213 @@
 [[tutorial-build-dashboard]]
-== Building your own dashboard
+== Build your own dashboard
 
-Ready to load some data and build a dashboard? This tutorial shows you how to:
+Want to load some data into Kibana and build a dashboard? This tutorial shows you how to:
 
-* Load a data set into Elasticsearch
-* Define an index pattern
-* Discover and explore the data
-* Visualize the data
-* Add visualizations to a dashboard
-* Inspect the data behind a visualization
+* <<tutorial-load-dataset, Load a data set into Elasticsearch>>
+* <<tutorial-define-index, Define an index pattern to connect to Elasticsearch>>
+* <<tutorial-discovering, Discover and explore the data>>
+* <<tutorial-visualizing, Visualize the data>>
+* <<tutorial-dashboard, Add visualizations to a dashboard>>
+
+When you complete this tutorial, you'll have a dashboard that looks like this.
+
+[role="screenshot"]
+image::images/tutorial-dashboard.png[]
+
+[float]
+[[tutorial-load-dataset]]
+=== Load sample data
+
+This tutorial requires you to download three data sets:
+
+* The complete works of William Shakespeare, suitably parsed into fields
+* A set of fictitious accounts with randomly generated data
+* A set of randomly generated log files
+
+[float]
+==== Download the data sets
+
+Create a new working directory where you want to download the files. From that directory, run the following commands:
+
+[source,shell]
+curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/shakespeare.json
+curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/accounts.zip
+curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/logs.jsonl.gz
+
+Two of the data sets are compressed. To extract the files, use these commands:
+
+[source,shell]
+unzip accounts.zip
+gunzip logs.jsonl.gz
+
+[float]
+==== Structure of the data sets
+
+The Shakespeare data set has this structure:
+
+[source,json]
+{
+    "line_id": INT,
+    "play_name": "String",
+    "speech_number": INT,
+    "line_number": "String",
+    "speaker": "String",
+    "text_entry": "String",
+}
+
+The accounts data set is structured as follows:
+
+[source,json]
+{
+    "account_number": INT,
+    "balance": INT,
+    "firstname": "String",
+    "lastname": "String",
+    "age": INT,
+    "gender": "M or F",
+    "address": "String",
+    "employer": "String",
+    "email": "String",
+    "city": "String",
+    "state": "String"
+}
+
+The logs data set has dozens of different fields. Here are the notable fields for this tutorial:
+
+[source,json]
+{
+    "memory": INT,
+    "geo.coordinates": "geo_point"
+    "@timestamp": "date"
+}
+
+[float]
+==== Set up mappings
+
+Before you load the Shakespeare and logs data sets, you must set up {ref}/mapping.html[_mappings_] for the fields.
+Mappings divide the documents in the index into logical groups and specify the characteristics
+of the fields. These characteristics include the searchability of the field
+and whether it's _tokenized_, or broken up into separate words.
+
+NOTE: If security is enabled, you must have the `all` Kibana privilege to run this tutorial.
+You must also have the `create`, `manage` `read`, `write,` and `delete`
+index privileges. See {xpack-ref}/security-privileges.html[Security Privileges]
+for more information.
+
+In Kibana *Dev Tools > Console*, set up a mapping for the Shakespeare data set:
+
+[source,js]
+PUT /shakespeare
+{
+  "mappings": {
+    "properties": {
+      "speaker": {"type": "keyword"},
+      "play_name": {"type": "keyword"},
+      "line_id": {"type": "integer"},
+      "speech_number": {"type": "integer"}
+    }
+  }
+}
+
+//CONSOLE
+
+This mapping specifies field characteristics for the data set:
+
+* The `speaker` and `play_name` fields are keyword fields. These fields are not analyzed.
+The strings are treated as a single unit even if they contain multiple words.
+* The `line_id` and `speech_number` fields are integers.
+
+The logs data set requires a mapping to label the latitude and longitude pairs
+as geographic locations by applying the `geo_point` type.
+
+[source,js]
+PUT /logstash-2015.05.18
+{
+  "mappings": {
+    "properties": {
+      "geo": {
+        "properties": {
+          "coordinates": {
+            "type": "geo_point"
+          }
+        }
+      }
+    }
+  }
+}
+
+//CONSOLE
+
+[source,js]
+PUT /logstash-2015.05.19
+{
+  "mappings": {
+    "properties": {
+      "geo": {
+        "properties": {
+          "coordinates": {
+            "type": "geo_point"
+          }
+        }
+      }
+    }
+  }
+}
+
+//CONSOLE
+
+[source,js]
+PUT /logstash-2015.05.20
+{
+  "mappings": {
+    "properties": {
+      "geo": {
+        "properties": {
+          "coordinates": {
+            "type": "geo_point"
+          }
+        }
+      }
+    }
+  }
+}
+
+//CONSOLE
+
+The accounts data set doesn't require any mappings.
+
+[float]
+==== Load the data sets
+
+At this point, you're ready to use the Elasticsearch {ref}/docs-bulk.html[bulk]
+API to load the data sets:
+
+[source,shell]
+curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/bank/account/_bulk?pretty' --data-binary @accounts.json
+curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
+curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/_bulk?pretty' --data-binary @logs.jsonl
+
+Or for Windows users, in Powershell:
+
+[source,shell]
+Invoke-RestMethod "http://<host>:<port>/bank/account/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "accounts.json"
+Invoke-RestMethod "http://<host>:<port>/shakespeare/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "shakespeare.json"
+Invoke-RestMethod "http://<host>:<port>/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "logs.jsonl"
+
+These commands might take some time to execute, depending on the available computing resources.
+
+Verify successful loading:
+
+[source,js]
+GET /_cat/indices?v
+
+//CONSOLE
+
+Your output should look similar to this:
+
+[source,shell]
+health status index               pri rep docs.count docs.deleted store.size pri.store.size
+yellow open   bank                  1   1       1000            0    418.2kb        418.2kb
+yellow open   shakespeare           1   1     111396            0     17.6mb         17.6mb
+yellow open   logstash-2015.05.18   1   1       4631            0     15.6mb         15.6mb
+yellow open   logstash-2015.05.19   1   1       4624            0     15.7mb         15.7mb
+yellow open   logstash-2015.05.20   1   1       4750            0     16.4mb         16.4mb
@@ -1,24 +0,0 @@
-[[tutorial-inspect]]
-=== Inspecting the data
-
-Seeing visualizations of your data is great,
-but sometimes you need to look at the actual data to
-understand what's really going on. You can inspect the data behind any visualization
-and view the {es} query used to retrieve it.
-
-. In the dashboard, hover the pointer over the pie chart.
-. Click the icon in the upper right.
-. From the *Options* menu, select *Inspect*.
-+
-[role="screenshot"]
-image::images/tutorial-full-inspect1.png[]
-
-You can also look at the query used to fetch the data for the visualization.
-
-. Open the *View:Data* menu and select *Requests*.
-. Click the tabs to look at the request statistics, the Elasticsearch request,
-and the response in JSON.
-. To close the Inspector, click X in the upper right.
-+
-[role="screenshot"]
-image::images/tutorial-full-inspect2.png[]
@@ -1,190 +0,0 @@
-[[tutorial-load-dataset]]
-=== Loading sample data
-
-This tutorial requires three data sets:
-
-* The complete works of William Shakespeare, suitably parsed into fields
-* A set of fictitious accounts with randomly generated data
-* A set of randomly generated log files
-
-Create a new working directory where you want to download the files. From that directory, run the following commands:
-
-[source,shell]
-curl -O https://download.elastic.co/demos/kibana/gettingstarted/7.x/shakespeare.json
-curl -O https://download.elastic.co/demos/kibana/gettingstarted/7.x/accounts.zip
-curl -O https://download.elastic.co/demos/kibana/gettingstarted/7.x/logs.jsonl.gz
-
-Two of the data sets are compressed. To extract the files, use these commands:
-
-[source,shell]
-unzip accounts.zip
-gunzip logs.jsonl.gz
-
-==== Structure of the data sets
-
-The Shakespeare data set has this structure:
-
-[source,json]
-{
-    "line_id": INT,
-    "play_name": "String",
-    "speech_number": INT,
-    "line_number": "String",
-    "speaker": "String",
-    "text_entry": "String",
-}
-
-The accounts data set is structured as follows:
-
-[source,json]
-{
-    "account_number": INT,
-    "balance": INT,
-    "firstname": "String",
-    "lastname": "String",
-    "age": INT,
-    "gender": "M or F",
-    "address": "String",
-    "employer": "String",
-    "email": "String",
-    "city": "String",
-    "state": "String"
-}
-
-The logs data set has dozens of different fields. Here are the notable fields for this tutorial:
-
-[source,json]
-{
-    "memory": INT,
-    "geo.coordinates": "geo_point"
-    "@timestamp": "date"
-}
-
-==== Set up mappings
-
-Before you load the Shakespeare and logs data sets, you must set up {ref}/mapping.html[_mappings_] for the fields.
-Mappings divide the documents in the index into logical groups and specify the characteristics
-of the fields. These characteristics include the searchability of the field
-and whether it's _tokenized_, or broken up into separate words.
-
-NOTE: If security is enabled, you must have the `all` Kibana privilege to run this tutorial.
-You must also have the `create`, `manage` `read`, `write,` and `delete`
-index privileges. See {xpack-ref}/security-privileges.html[Security Privileges]
-for more information.
-
-In Kibana *Dev Tools > Console*, set up a mapping for the Shakespeare data set:
-
-[source,js]
-PUT /shakespeare
-{
-  "mappings": {
-    "properties": {
-      "speaker": {"type": "keyword"},
-      "play_name": {"type": "keyword"},
-      "line_id": {"type": "integer"},
-      "speech_number": {"type": "integer"}
-    }
-  }
-}
-
-//CONSOLE
-
-This mapping specifies field characteristics for the data set:
-
-* The `speaker` and `play_name` fields are keyword fields. These fields are not analyzed.
-The strings are treated as a single unit even if they contain multiple words.
-* The `line_id` and `speech_number` fields are integers.
-
-The logs data set requires a mapping to label the latitude and longitude pairs
-as geographic locations by applying the `geo_point` type.
-
-[source,js]
-PUT /logstash-2015.05.18
-{
-  "mappings": {
-    "properties": {
-      "geo": {
-        "properties": {
-          "coordinates": {
-            "type": "geo_point"
-          }
-        }
-      }
-    }
-  }
-}
-
-//CONSOLE
-
-[source,js]
-PUT /logstash-2015.05.19
-{
-  "mappings": {
-    "properties": {
-      "geo": {
-        "properties": {
-          "coordinates": {
-            "type": "geo_point"
-          }
-        }
-      }
-    }
-  }
-}
-
-//CONSOLE
-
-[source,js]
-PUT /logstash-2015.05.20
-{
-  "mappings": {
-    "properties": {
-      "geo": {
-        "properties": {
-          "coordinates": {
-            "type": "geo_point"
-          }
-        }
-      }
-    }
-  }
-}
-
-//CONSOLE
-
-The accounts data set doesn't require any mappings.
-
-==== Load the data sets
-
-At this point, you're ready to use the Elasticsearch {ref}/docs-bulk.html[bulk]
-API to load the data sets:
-
-[source,shell]
-curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/bank/account/_bulk?pretty' --data-binary @accounts.json
-curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
-curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/_bulk?pretty' --data-binary @logs.jsonl
-
-Or for Windows users, in Powershell:
-[source,shell]
-Invoke-RestMethod "http://<host>:<port>/bank/account/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "accounts.json"
-Invoke-RestMethod "http://<host>:<port>/shakespeare/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "shakespeare.json"
-Invoke-RestMethod "http://<host>:<port>/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "logs.jsonl"
-
-These commands might take some time to execute, depending on the available computing resources.
-
-Verify successful loading:
-
-[source,js]
-GET /_cat/indices?v
-
-//CONSOLE
-
-Your output should look similar to this:
-
-[source,shell]
-health status index               pri rep docs.count docs.deleted store.size pri.store.size
-yellow open   bank                  1   1       1000            0    418.2kb        418.2kb
-yellow open   shakespeare           1   1     111396            0     17.6mb         17.6mb
-yellow open   logstash-2015.05.18   1   1       4631            0     15.6mb         15.6mb
-yellow open   logstash-2015.05.19   1   1       4624            0     15.7mb         15.7mb
-yellow open   logstash-2015.05.20   1   1       4750            0     16.4mb         16.4mb
@ -1,31 +1,207 @@
|
|||
[[tutorial-sample-data]]
|
||||
== Explore {kib} using the Flight dashboard
|
||||
== Explore {kib} using sample data
|
||||
|
||||
You’re new to {kib} and want to try it out. With one click, you can install
|
||||
the Flights sample data and start interacting with {kib}.

Ready to get some hands-on experience with {kib}? In this tutorial, you'll
work with the {kib} sample data and learn to:

* <<tutorial-sample-filter, Filter and query data in visualizations>>
* <<tutorial-sample-discover, Discover and explore your documents>>
* <<tutorial-sample-edit, Edit a visualization>>
* <<tutorial-sample-inspect, Inspect the data behind a visualization>>

The Flights data set contains data for four airlines.
You can load the data and preconfigured dashboard from the {kib} home page.

NOTE: If security is enabled, you must have `read`, `write`, and `manage` privileges
on the `kibana_sample_data_*` indices. See {xpack-ref}/security-privileges.html[Security Privileges]
for more information.

[float]
=== Add sample data

Install the Flights sample data set, if you haven't already.

. On the {kib} home page, click the link underneath *Add sample data*.
. On the *Sample flight data* card, click *Add data*.
. Once the data is added, click *View data > Dashboard*.
+
You're taken to the *Global Flight* dashboard, a collection of charts, graphs,
maps, and other visualizations of the data in the `kibana_sample_data_flights` index.
+
[role="screenshot"]
image::images/tutorial-sample-dashboard.png[]
[float]
[[tutorial-sample-filter]]
=== Filter and query the data

You can use filters and queries to narrow the view of the data.
For more detailed information on these actions, see
{ref}/query-filter-context.html[Query and filter context].

[float]
==== Filter the data

. In the *Controls* visualization, set an *Origin City* and a *Destination City*.
. Click *Apply changes*.
+
The `OriginCityName` and the `DestCityName` fields are filtered to match
the data you specified.
+
For example, this dashboard shows the data for flights from London to Oslo.
+
[role="screenshot"]
image::images/tutorial-sample-filter.png[]

. To add a filter manually, click *Add filter* in the filter bar,
and specify the data you want to view.

. When you are finished experimenting, remove all filters.
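Behind the scenes, a dashboard filter like this runs in {es} filter context. The following is a minimal sketch of that kind of request body, assuming the phrase-filter form; the actual request {kib} generates contains additional clauses:

[source,js]
----
GET kibana_sample_data_flights/_search
{
  "query": {
    "bool": {
      "filter": [
        { "match_phrase": { "OriginCityName": "London" } },
        { "match_phrase": { "DestCityName": "Oslo" } }
      ]
    }
  }
}
----

Because these clauses run in filter context, they don't contribute to scoring and can be cached, which is why filters are generally cheaper than queries.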
[float]
[[tutorial-sample-query]]
==== Query the data

. To find all flights out of Rome, enter this query in the query bar and click *Update*:
+
[source,text]
OriginCityName:Rome

. For a more complex query with AND and OR, try this:
+
[source,text]
OriginCityName:Rome AND (Carrier:JetBeats OR "Kibana Airlines")
+
The dashboard updates to show data for the flights out of Rome on JetBeats and
{kib} Airlines.
+
[role="screenshot"]
image::images/tutorial-sample-query.png[]

. When you are finished exploring the dashboard, remove the query by
clearing the contents in the query bar and clicking *Update*.
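If you want to run the same search directly against {es}, the query bar's Lucene syntax maps onto the `query_string` query. A hedged sketch, using the sample data index:

[source,js]
----
GET kibana_sample_data_flights/_search
{
  "query": {
    "query_string": {
      "query": "OriginCityName:Rome AND (Carrier:JetBeats OR \"Kibana Airlines\")"
    }
  }
}
----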
[float]
[[tutorial-sample-discover]]
=== Discover the data

In Discover, you have access to every document in every index that
matches the selected index pattern. The index pattern tells {kib} which {es} index you are currently
exploring. You can submit search queries, filter the
search results, and view document data.

. In the side navigation, click *Discover*.

. Ensure `kibana_sample_data_flights` is the current index pattern.
You might need to click *New* in the menu bar to refresh the data.
+
You'll see a histogram that shows the distribution of
documents over time. A table lists the fields for
each matching document. By default, all fields are shown.
+
[role="screenshot"]
image::images/tutorial-sample-discover1.png[]

. To choose which fields to display,
hover the pointer over the list of *Available fields*, and then click *add* next
to each field you want to include as a column in the table.
+
For example, if you add the `DestAirportID` and `DestWeather` fields,
the display includes columns for those two fields.
+
[role="screenshot"]
image::images/tutorial-sample-discover2.png[]
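Selecting columns in Discover corresponds roughly to `_source` filtering in an {es} search. A minimal sketch, assuming the same two fields:

[source,js]
----
GET kibana_sample_data_flights/_search
{
  "_source": ["DestAirportID", "DestWeather"],
  "size": 5
}
----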
[float]
[[tutorial-sample-edit]]
=== Edit a visualization

You have edit permissions for the *Global Flight* dashboard, so you can change
the appearance and behavior of the visualizations. For example, you might want
to see which airline has the lowest average fares.

. In the side navigation, click *Recently viewed* and open the *Global Flight Dashboard*.
. In the menu bar, click *Edit*.
. In the *Average Ticket Price* visualization, click the gear icon in
the upper right.
. From the *Options* menu, select *Edit visualization*.
+
*Average Ticket Price* is a metric visualization.
To specify which groups to display
in this visualization, you use an {es} {ref}/search-aggregations.html[bucket aggregation].
This aggregation sorts the documents that match your search criteria into different
categories, or buckets.

[float]
==== Create a bucket aggregation

. In the *Buckets* pane, select *Add > Split group*.
. In the *Aggregation* dropdown, select *Terms*.
. In the *Field* dropdown, select *Carrier*.
. Set *Descending* to *4*.
. Click *Apply changes* image:images/apply-changes-button.png[].
+
You now see the average ticket price for all four airlines.
+
[role="screenshot"]
image::images/tutorial-sample-edit1.png[]
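The resulting request combines a terms bucket aggregation with an average metric. A hedged sketch of the aggregation body, using field names from the sample data set (the request {kib} builds includes more options):

[source,js]
----
GET kibana_sample_data_flights/_search
{
  "size": 0,
  "aggs": {
    "carriers": {
      "terms": { "field": "Carrier", "size": 4 },
      "aggs": {
        "avg_ticket_price": { "avg": { "field": "AvgTicketPrice" } }
      }
    }
  }
}
----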
[float]
==== Save the visualization

. In the menu bar, click *Save*.
. Leave the visualization name as is and confirm the save.
. Go to the *Global Flight* dashboard and scroll to the *Average Ticket Price* visualization to see the four prices.
. Optionally, edit the dashboard. Resize the panel
for the *Average Ticket Price* visualization by dragging the
handle in the lower right. You can also rearrange the visualizations by clicking
the header and dragging. Be sure to save the dashboard.
+
[role="screenshot"]
image::images/tutorial-sample-edit2.png[]
[float]
[[tutorial-sample-inspect]]
=== Inspect the data

Seeing visualizations of your data is great,
but sometimes you need to look at the actual data to
understand what's really going on. You can inspect the data behind any visualization
and view the {es} query used to retrieve it.

. In the dashboard, hover the pointer over the pie chart, and then click the icon in the upper right.
. From the *Options* menu, select *Inspect*.
+
The initial view shows the document count.
+
[role="screenshot"]
image::images/tutorial-sample-inspect1.png[]

. To look at the query used to fetch the data for the visualization, select *View > Requests*
in the upper right of the Inspect pane.
[float]
[[tutorial-sample-remove]]
=== Remove the sample data set

When you're done experimenting with the sample data set, you can remove it.

. Go to the *Sample data* page.
. On the *Sample flight data* card, click *Remove*.

[float]
=== Next steps

Now that you have a handle on the {kib} basics, you might be interested in the
tutorial <<tutorial-build-dashboard, Build your own dashboard>>, where you'll learn to:

* Load data
* Define an index pattern
* Discover and explore data
* Create visualizations
* Add visualizations to a dashboard
@@ -1,27 +0,0 @@
[[tutorial-sample-discover]]
=== Using Discover

In the Discover application, the Flight data is presented in a table. You can
interactively explore the data, including searching and filtering.

* In the side navigation, select *Discover*.

The current index pattern appears below the filter bar. An
<<index-patterns, index pattern>> tells {kib} which {es} indices you want to
explore.

The `kibana_sample_data_flights` index contains a time field. A histogram
shows the distribution of documents over time.

[role="screenshot"]
image::images/tutorial-sample-discover1.png[]

By default, all fields are shown for each matching document. To choose which fields to display,
hover the pointer over the list of *Available Fields* and then click *add* next
to each field you want to include as a column in the table.

For example, if you add the `DestAirportID` and `DestWeather` fields,
the display includes columns for those two fields:

[role="screenshot"]
image::images/tutorial-sample-discover2.png[]
@@ -1,45 +0,0 @@
[[tutorial-sample-edit]]
=== Editing a visualization

You have edit permissions for the *Global Flight* dashboard, so you can change
the appearance and behavior of the visualizations. For example, you might want
to see which airline has the lowest average fares.

. Go to the *Global Flight* dashboard.
. In the menu bar, click *Edit*.
. In the *Average Ticket Price* visualization, click the gear icon in
the upper right.
. From the *Options* menu, select *Edit visualization*.

==== Edit a metric visualization

*Average Ticket Price* is a metric visualization.
To specify which groups to display
in this visualization, you use an {es} {ref}/search-aggregations.html[bucket aggregation].
This aggregation sorts the documents that match your search criteria into different
categories, or buckets.

. In the *Buckets* pane, select *Split Group*.
. In the *Aggregation* dropdown menu, select *Terms*.
. In the *Field* dropdown, select *Carrier*.
. Set *Descending* to four.
. Click *Apply changes* image:images/apply-changes-button.png[].

You now see the average ticket price for all four airlines.

[role="screenshot"]
image::images/tutorial-sample-edit1.png[]

==== Save the changes

. In the menu bar, click *Save*.
. Leave the visualization name unchanged and click *Save*.
. Go to the *Global Flight* dashboard.
. Resize the panel for the *Average Ticket Price* visualization by dragging the
handle in the lower right.
You can also rearrange the visualizations by clicking the header and dragging.
. In the menu bar, click *Save* and then confirm the save.
+
[role="screenshot"]
image::images/tutorial-sample-edit2.png[]
@@ -1,23 +0,0 @@
[[tutorial-sample-filter]]
=== Filtering the data

Many visualizations in the *Global Flight* dashboard are interactive. You can
apply filters to modify the view of the data across all visualizations.

. In the *Controls* visualization, set an *Origin City* and a *Destination City*.
. Click *Apply changes*.
+
The `OriginCityName` and the `DestCityName` fields are filtered to match
the data you specified.
+
For example, this dashboard shows the data for flights from London to Newark
and Pittsburgh.
+
[role="screenshot"]
image::images/tutorial-sample-filter.png[]

. To remove the filters, in the *Controls* visualization, click *Clear form*, and then
*Apply changes*.

You can also add filters manually. In the filter bar, click *Add a Filter*
and specify the data you want to view.
@@ -1,24 +0,0 @@
[[tutorial-sample-inspect]]
=== Inspecting the data

Seeing visualizations of your data is great,
but sometimes you need to look at the actual data to
understand what's really going on. You can inspect the data behind any visualization
and view the {es} query used to retrieve it.

. Hover the pointer over the *Flight Count and Average Ticket Price* visualization.
. Click the icon in the upper right.
. From the *Options* menu, select *Inspect*.
+
[role="screenshot"]
image::images/tutorial-sample-inspect1.png[]

You can also look at the query used to fetch the data for the visualization.

. Open the *View: Data* menu and select *Requests*.
. Click the tabs to look at the request statistics, the Elasticsearch request,
and the response in JSON.
. To close the editor, click X in the upper right.
+
[role="screenshot"]
image::images/tutorial-sample-inspect2.png[]
@@ -1,30 +0,0 @@
[[tutorial-sample-query]]
=== Querying the data

You can enter an {es} query to narrow the view of the data.

. To find all flights out of Rome, submit this query:
+
[source,text]
OriginCityName:Rome

. For a more complex query with AND and OR, try this:
+
[source,text]
OriginCityName:Rome AND (Carrier:JetBeats OR "Kibana Airlines")
+
The dashboard updates to show data for the flights out of Rome on JetBeats and
{kib} Airlines.
+
[role="screenshot"]
image::images/tutorial-sample-query.png[]

. When you are finished exploring the dashboard, remove the query by
clearing the contents in the query bar and pressing Enter.

In general, filters are faster than queries. For more information, see {ref}/query-filter-context.html[Query and filter context].

TIP: {kib} has an experimental autocomplete feature that can
help jumpstart your queries. To turn on this feature, click *Options* on the
right of the query bar and opt in. With autocomplete enabled,
search suggestions are displayed when you start typing your query.
@@ -1,18 +0,0 @@
[[tutorial-sample-remove]]
=== Wrapping up

When you're done experimenting with the sample data set, you can remove it.

. Go to the {kib} home page and click the link next to *Sample data*.
. On the *Sample flight data* card, click *Remove*.

Now that you have a handle on the {kib} basics, you might be interested in:

* <<tutorial-build-dashboard, Building your own dashboard>>. You'll learn how to load your own
data, define an index pattern, and create visualizations and dashboards.
* <<visualize>>. You'll find information about all the visualization types
{kib} has to offer.
* <<dashboard>>. You'll learn how to share a dashboard, or embed the dashboard in a web page.
* <<discover>>. You'll learn more about searching data and filtering by field.
@@ -1,46 +1,48 @@
[[tutorial-visualizing]]
=== Visualize your data

In the Visualize application, you can shape your data using a variety
of charts, tables, maps, and more. In this tutorial, you'll create four
visualizations:

* <<tutorial-visualize-pie, Pie chart>>
* <<tutorial-visualize-bar, Bar chart>>
* <<tutorial-visualize-map, Coordinate map>>
* <<tutorial-visualize-markdown, Markdown widget>>

[float]
[[tutorial-visualize-pie]]
=== Pie chart

You'll use the pie chart to
gain insight into the account balances in the bank account data.

. Open *Visualize* to show the overview page.
. Click *Create new visualization*. You'll see all the visualization
types in Kibana.
+
[role="screenshot"]
image::images/tutorial-visualize-wizard-step-1.png[]

. Click *Pie*.
. In *Choose a source*, select the `ba*` index pattern.
+
[role="screenshot"]
image::images/tutorial-visualize-wizard-step-2.png[]
+
Initially, the pie contains a single "slice."
That's because the default search matched all documents.
+
[role="screenshot"]
image::images/tutorial-visualize-pie-1.png[]
+
To specify which slices to display in the pie, you use an Elasticsearch
{ref}/search-aggregations.html[bucket aggregation]. This aggregation
sorts the documents that match your search criteria into different
categories, also known as _buckets_. You'll use a bucket aggregation to establish
multiple ranges of account balances and find out how many accounts fall into
each range.

. In the *Buckets* pane, click *Add > Split slices.*
+
.. In the *Aggregation* dropdown, select *Range*.
.. In the *Field* dropdown, select *balance*.
.. Click *Add range* four times to bring the total number of ranges to six.
.. Define the following ranges:
+
[source,text]
0 999

@@ -51,120 +53,117 @@ each range.
31000 50000

. Click *Apply changes* image:images/apply-changes-button.png[].
+
Now you can see what proportion of the 1000 accounts fall into each balance
range.
+
[role="screenshot"]
image::images/tutorial-visualize-pie-2.png[]

. Add another bucket aggregation that looks at the ages of the account
holders.
.. At the bottom of the *Buckets* pane, click *Add*.
.. For *sub-bucket type,* select *Split slices*.
.. In the *Sub aggregation* dropdown, select *Terms*.
.. In the *Field* dropdown, select *age*.

. Click *Apply changes* image:images/apply-changes-button.png[].
+
Now you can see the breakdown of the ages of the account holders, displayed
in a ring around the balance ranges.
+
[role="screenshot"]
image::images/tutorial-visualize-pie-3.png[]

. To save this chart so you can use it later, click *Save* in
the top menu bar and enter `Pie Example`.
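The pie chart's buckets correspond to a range aggregation with a terms sub-aggregation. A hedged sketch of the request body, showing only the first and last tutorial ranges (note that the range aggregation's `to` bound is exclusive, so the 0–999 bucket uses `"to": 1000`):

[source,js]
----
GET ba*/_search
{
  "size": 0,
  "aggs": {
    "balance_ranges": {
      "range": {
        "field": "balance",
        "ranges": [
          { "from": 0, "to": 1000 },
          { "from": 31000, "to": 50000 }
        ]
      },
      "aggs": {
        "ages": { "terms": { "field": "age" } }
      }
    }
  }
}
----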
[float]
[[tutorial-visualize-bar]]
=== Bar chart

You'll use a bar chart to look at the Shakespeare data set and compare
the number of speaking parts in the plays.

. Create a *Vertical Bar* chart and set the search source to `shakes*`.
+
Initially, the chart is a single bar that shows the total count
of documents that match the default wildcard query.
+
[role="screenshot"]
image::images/tutorial-visualize-bar-1.png[]

. Show the number of speaking parts per play along the Y-axis.
This requires you to configure the Y-axis
{ref}/search-aggregations.html[metric aggregation].
This aggregation computes metrics based on values from the search results.
.. In the *Metrics* pane, expand *Y-axis*.
.. Set *Aggregation* to *Unique Count*.
.. Set *Field* to *speaker*.
.. In the *Custom label* box, enter `Speaking Parts`.

. Show the plays along the X-axis.
.. In the *Buckets* pane, click *Add > X-axis*.
.. Set *Aggregation* to *Terms*.
.. Set *Field* to *play_name*.
.. To list plays alphabetically, in the *Order* dropdown, select *Ascending*.
.. Give the axis a custom label, `Play Name`.

. Click *Apply changes* image:images/apply-changes-button.png[].
+
[role="screenshot"]
image::images/tutorial-visualize-bar-1.5.png[]

. *Save* this chart with the name `Bar Example`.
+
Hovering over a bar shows a tooltip with the number of speaking parts for
that play.
+
Notice how the individual play names show up as whole phrases, instead of
broken into individual words. This is the result of the mapping
you did at the beginning of the tutorial, when you marked the `play_name` field
as `not analyzed`.
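The *Unique Count* metric is backed by a cardinality aggregation nested under a terms bucket. A hedged sketch of the request the chart roughly corresponds to:

[source,js]
----
GET shakes*/_search
{
  "size": 0,
  "aggs": {
    "plays": {
      "terms": { "field": "play_name", "order": { "_key": "asc" } },
      "aggs": {
        "speaking_parts": { "cardinality": { "field": "speaker" } }
      }
    }
  }
}
----

Note that cardinality counts are approximate for high-cardinality fields.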
[float]
[[tutorial-visualize-map]]
=== Coordinate map

Using a coordinate map, you can visualize geographic information in the log file sample data.

. Create a *Coordinate map* and set the search source to `logstash*`.
+
You haven't defined any buckets yet, so the visualization is a map of the world.
+
[role="screenshot"]
image::images/tutorial-visualize-map-1.png[]

. Set the time.
.. In the time filter, click *Show dates*.
.. Click the start date, then *Absolute*.
.. Set the *Start date* to May 18, 2015.
.. In the time filter, click *now*, then *Absolute*.
.. Set the *End date* to May 20, 2015.

. Map the geo coordinates from the log files.
.. In the *Buckets* pane, click *Add > Geo coordinates*.
.. Set *Aggregation* to *Geohash*.
.. Set *Field* to *geo.coordinates*.

. Click *Apply changes* image:images/apply-changes-button.png[].
+
The map now looks like this:
+
[role="screenshot"]
image::images/tutorial-visualize-map-2.png[]

. Navigate the map by clicking and dragging. Use the controls
on the left to zoom the map and set filters.
+
[role="screenshot"]
image::images/tutorial-visualize-map-3.png[]

. *Save* this map with the name `Map Example`.
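The *Geohash* bucket is a geohash_grid aggregation. A hedged sketch of the aggregation body, assuming a precision of 3 (the visualization adjusts precision as you zoom):

[source,js]
----
GET logstash*/_search
{
  "size": 0,
  "aggs": {
    "grid": {
      "geohash_grid": { "field": "geo.coordinates", "precision": 3 }
    }
  }
}
----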
[float]
[[tutorial-visualize-markdown]]
=== Markdown

The final visualization is a Markdown widget that renders formatted text.

. Create a *Markdown* visualization.
. Copy the following text into the text box.
+
[source,markdown]
# This is a tutorial dashboard!

@@ -172,10 +171,10 @@ The Markdown widget uses **markdown** syntax.
> Blockquotes in Markdown use the > character.

. Click *Apply changes* image:images/apply-changes-button.png[].
+
The Markdown renders in the preview pane.
+
[role="screenshot"]
image::images/tutorial-visualize-md-2.png[]

. *Save* this visualization with the name `Markdown Example`.
@@ -1,14 +0,0 @@
[[wrapping-up]]
=== Wrapping up

Now that you have a handle on the basics, you're ready to start exploring
your own data with Kibana.

* See {kibana-ref}/discover.html[Discover] for information about searching and filtering
your data.
* See {kibana-ref}/visualize.html[Visualize] for information about the visualization
types Kibana has to offer.
* See {kibana-ref}/management.html[Management] for information about configuring Kibana
and managing your saved objects.
* See {kibana-ref}/console-kibana.html[Console] to learn about the interactive
console you can use to submit REST requests to Elasticsearch.