[DOCS: Getting Started] Edited text and updated screenshots (#18699)

* [DOCS: Getting Started] Edited text and updated screenshots

* [DOCS|Getting Started] Edited for consistency in addressing the user

* [DOCS|Getting Started] Incorporated review comments

* [DOCS|GS] Style changes for consistency
gchaps 2018-05-14 13:54:23 -07:00 committed by GitHub
parent 41fff1feec
commit 2bcb0d2975
28 changed files with 240 additions and 199 deletions


@@ -8,19 +8,16 @@ This tutorial shows you how to:

* Load a sample data set into Elasticsearch
* Define an index pattern
* Discover and explore the sample data
* Visualize the data
* Assemble visualizations into a dashboard

Before you begin, make sure you've <<install, installed Kibana>> and established
a {kibana-ref}/connect-to-elasticsearch.html[connection to Elasticsearch].

You might also be interested in the
https://www.elastic.co/webinars/getting-started-kibana[Getting Started with Kibana]
video tutorial.

--

include::getting-started/tutorial-load-dataset.asciidoc[]


@@ -1,20 +1,27 @@

[[tutorial-dashboard]]
== Putting it Together in a Dashboard

A dashboard is a collection of visualizations that you can arrange and share.
Here you'll build a dashboard that contains the visualizations you saved during
this tutorial.

. Open *Dashboard*.
. Click *Create new dashboard*.
. Click *Add*.
. Click *Bar Example*, *Map Example*, *Markdown Example*, and *Pie Example*.

Hovering over a visualization displays the container controls that enable you to
edit, move, delete, and resize the visualization.

Your sample dashboard should look like this:

[role="screenshot"]
image::images/tutorial-dashboard.png[]

To get a link to share or HTML code to embed the dashboard in a web page, save
the dashboard and click *Share*.

*Save* your dashboard.


@@ -1,19 +1,45 @@

[[tutorial-define-index]]
== Defining Your Index Patterns

Index patterns tell Kibana which Elasticsearch indices you want to explore.
An index pattern can match the name of a single index, or include a wildcard
(*) to match multiple indices.

For example, Logstash typically creates a
series of indices in the format `logstash-YYYY.MM.DD`. To explore all
of the log data from May 2018, you could specify the index pattern
`logstash-2018.05*`.
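
If you want to check which indices a pattern like this matches before creating
it, you can pass the pattern to the cat indices API. A quick sketch (the date
in the pattern is just an example):

[source,shell]
curl -XGET "http://localhost:9200/_cat/indices/logstash-2018.05*?v"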
Create patterns for the Shakespeare data set, which has an
index named `shakespeare`, and the accounts data set, which has an index named
`bank`. These data sets don't contain time-series data.
. In Kibana, open *Management*, and then click *Index Patterns*.
. If this is your first index pattern, the *Create index pattern* page opens automatically.
Otherwise, click *Create index pattern* in the upper left.
. Enter `shakes*` in the *Index pattern* field.
+
[role="screenshot"]
image::images/tutorial-pattern-1.png[]
. Click *Next step*.
. In *Configure settings*, click *Create index pattern*. For this pattern,
you don't need to configure any settings.
. Define a second index pattern named `ba*`. You don't need to configure any
settings for this pattern.
Now create an index pattern for the Logstash data set. This data set
contains time-series data.

. Define an index pattern named `logstash*`.
. Click *Next step*.
. In *Configure settings*, select *@timestamp* in the *Time Filter field name* dropdown menu.
. Click *Create index pattern*.
NOTE: When you define an index pattern, the indices that match that pattern must
exist in Elasticsearch and they must contain data. To check which indices are
available, go to *Dev Tools > Console* and enter `GET _cat/indices`. Alternatively, use
`curl -XGET "http://localhost:9200/_cat/indices"`.


@@ -1,42 +1,29 @@

[[tutorial-discovering]]
== Discovering Your Data

Using the Discover application, you can enter
an {ref}/query-dsl-query-string-query.html#query-string-syntax[Elasticsearch
query] to search your data and filter the results.
. Open *Discover*. The `shakes*` pattern is the current index pattern.
. Click the caret to the right of `shakes*`, and select `ba*`.
. In the search field, enter the following string:
+
[source,text]
account_number:<100 AND balance:>47500
The search returns all account numbers between zero and 99 with balances in
excess of 47,500. It returns results for account numbers 8, 32, 78, 85, and 97.

[role="screenshot"]
image::images/tutorial-discover-2.png[]
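
If you're curious, the same query string can be sent straight to Elasticsearch
as a query_string query. A sketch, using the `bank` index you loaded earlier:

[source,js]
GET /bank/_search
{
  "query": {
    "query_string": {
      "query": "account_number:<100 AND balance:>47500"
    }
  }
}
//CONSOLE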
By default, all fields are shown for each matching document. To choose which
fields to display, hover the mouse over the list of *Available Fields*
and then click *add* next to each field you want to include.

For example, if you add the `account_number` field, the display changes to a list of five
account numbers.

[role="screenshot"]
image::images/tutorial-discover-3.png[]


@@ -1,22 +1,22 @@

[[tutorial-load-dataset]]
== Loading Sample Data

This tutorial requires three data sets:

* The complete works of William Shakespeare, suitably parsed into fields. Download
https://download.elastic.co/demos/kibana/gettingstarted/shakespeare_6.0.json[`shakespeare.json`].
* A set of fictitious accounts with randomly generated data. Download
https://download.elastic.co/demos/kibana/gettingstarted/accounts.zip[`accounts.zip`].
* A set of randomly generated log files. Download
https://download.elastic.co/demos/kibana/gettingstarted/logs.jsonl.gz[`logs.jsonl.gz`].

Two of the data sets are compressed. To extract the files, use these commands:

[source,shell]
unzip accounts.zip
gunzip logs.jsonl.gz

The Shakespeare data set has this structure:

[source,json]
{
@@ -28,7 +28,7 @@ The Shakespeare data set is organized in the following schema:
    "text_entry": "String",
}

The accounts data set is structured as follows:

[source,json]
{
@@ -45,7 +45,7 @@ The accounts data set is organized in the following schema:
    "state": "String"
}

The logs data set has dozens of different fields. Here are the notable fields for this tutorial:

[source,json]
{
@@ -54,11 +54,12 @@ The schema for the logs data set has dozens of different fields, but the notable
    "@timestamp": "date"
}

Before you load the Shakespeare and logs data sets, you must set up {ref}/mapping.html[_mappings_] for the fields.
Mappings divide the documents in the index into logical groups and specify the characteristics
of the fields. These characteristics include the searchability of the field
and whether it's _tokenized_, or broken up into separate words.
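
For example, a minimal mapping request might declare one tokenized field and
one untokenized field. This is an illustrative sketch only; the index and field
names here are made up, not part of the tutorial:

[source,js]
PUT /example
{
  "mappings": {
    "doc": {
      "properties": {
        "title": { "type": "text" },
        "status": { "type": "keyword" }
      }
    }
  }
}
//CONSOLE

Here `title` is analyzed and broken into words at index time, while `status`
is treated as a single unit.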
In Kibana *Dev Tools > Console*, set up a mapping for the Shakespeare data set:

[source,js]
PUT /shakespeare

@@ -77,15 +78,14 @@ PUT /shakespeare
//CONSOLE

This mapping specifies field characteristics for the data set:

* The `speaker` and `play_name` fields are keyword fields. These fields are not analyzed.
The strings are treated as a single unit even if they contain multiple words.
* The `line_id` and `speech_number` fields are integers.
The logs data set requires a mapping to label the latitude and longitude pairs
as geographic locations by applying the `geo_point` type.

[source,js]
PUT /logstash-2015.05.18

@@ -147,8 +147,10 @@ PUT /logstash-2015.05.20
//CONSOLE
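
The bodies of these requests are elided above. Their essential part applies the
`geo_point` type to the coordinates, roughly like this (a sketch that assumes
the `geo.coordinates` field used later in the tutorial; the `log` mapping type
is illustrative):

[source,js]
PUT /logstash-2015.05.18
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": { "type": "geo_point" }
          }
        }
      }
    }
  }
}
//CONSOLE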
The accounts data set doesn't require any mappings.

At this point, you're ready to use the Elasticsearch {ref}/docs-bulk.html[bulk]
API to load the data sets:
[source,shell]
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json

@@ -161,16 +163,16 @@ Invoke-RestMethod "http://localhost:9200/bank/account/_bulk?pretty" -Method Post
Invoke-RestMethod "http://localhost:9200/shakespeare/doc/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "shakespeare_6.0.json"
Invoke-RestMethod "http://localhost:9200/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "logs.jsonl"
These commands might take some time to execute, depending on the available computing resources.

Verify successful loading:

[source,js]
GET /_cat/indices?v
//CONSOLE

Your output should look similar to this:

[source,shell]
health status index               pri rep docs.count docs.deleted store.size pri.store.size


@@ -1,46 +1,45 @@

[[tutorial-visualizing]]
== Visualizing Your Data

In the Visualize application, you can shape your data using a variety
of charts, tables, maps, and more. You'll create four
visualizations: a pie chart, bar chart, coordinate map, and Markdown widget.
. Open *Visualize*.
. Click *Create a visualization* or the *+* button. You'll see all the visualization
types in Kibana.
+
[role="screenshot"]
image::images/tutorial-visualize-wizard-step-1.png[]
. Click *Pie*.
. In *New Search*, select the `ba*` index pattern. You'll use the pie chart to
gain insight into the account balances in the bank account data.
+
[role="screenshot"]
image::images/tutorial-visualize-wizard-step-2.png[]
=== Pie Chart

Initially, the pie contains a single "slice."
That's because the default search matched all documents.

[role="screenshot"]
image::images/tutorial-visualize-pie-1.png[]
To specify which slices to display in the pie, you use an Elasticsearch
{ref}/search-aggregations.html[bucket aggregation]. This aggregation
sorts the documents that match your search criteria into different
categories, also known as _buckets_.

Use a bucket aggregation to establish
multiple ranges of account balances and find out how many accounts fall into
each range.
. In the *Buckets* pane, click *Split Slices*.
. In the *Aggregation* dropdown menu, select *Range*.
. In the *Field* dropdown menu, select *balance*.
. Click *Add Range* four times to bring the total number of ranges to six.
. Define the following ranges:
+
[source,text]

@@ -51,142 +50,165 @@ total number of ranges to six.
15000 30999
31000 50000

. Click *Apply changes* image:images/apply-changes-button.png[].
Now you can see what proportion of the 1000 accounts fall into each balance
range.

[role="screenshot"]
image::images/tutorial-visualize-pie-2.png[]
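
Under the hood, this chart is backed by an Elasticsearch range aggregation.
A sketch of the equivalent request, showing only the two ranges visible above
(the aggregation name is arbitrary):

[source,js]
GET /bank/_search?size=0
{
  "aggs": {
    "balance_ranges": {
      "range": {
        "field": "balance",
        "ranges": [
          { "from": 15000, "to": 30999 },
          { "from": 31000, "to": 50000 }
        ]
      }
    }
  }
}
//CONSOLE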
Add another bucket aggregation that looks at the ages of the account
holders.
. At the bottom of the *Buckets* pane, click *Add sub-buckets*.
. In *Select buckets type*, click *Split Slices*.
. In the *Sub Aggregation* dropdown, select *Terms*.
. In the *Field* dropdown, select *age*.
. Click *Apply changes* image:images/apply-changes-button.png[].
Now you can see the breakdown of the ages of the account holders, displayed
in a ring around the balance ranges.

[role="screenshot"]
image::images/tutorial-visualize-pie-3.png[]
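
The sub-bucket nests a terms aggregation inside each range bucket, along these
lines (again a sketch with arbitrary names, abbreviated to one range):

[source,js]
GET /bank/_search?size=0
{
  "aggs": {
    "balance_ranges": {
      "range": {
        "field": "balance",
        "ranges": [ { "from": 31000, "to": 50000 } ]
      },
      "aggs": {
        "ages": {
          "terms": { "field": "age" }
        }
      }
    }
  }
}
//CONSOLE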
To save this chart so you can use it later, click *Save* in the top menu bar
and enter `Pie Example`.
=== Bar Chart

You'll use a bar chart to look at the Shakespeare data set and compare
the number of speaking parts in the plays.

Create a *Vertical Bar* chart and set the search source to `shakes*`.
Initially, the chart is a single bar that shows the total count
of documents that match the default wildcard query.

[role="screenshot"]
image::images/tutorial-visualize-bar-1.png[]
Show the number of speaking parts per play along the Y-axis.
This requires you to configure the Y-axis
{ref}/search-aggregations.html[metric aggregation].
This aggregation computes metrics based on values from the search results.
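
Kibana's *Unique Count* metric is backed by the Elasticsearch cardinality
aggregation, roughly like this (a sketch; the aggregation name is arbitrary):

[source,js]
GET /shakespeare/_search?size=0
{
  "aggs": {
    "speaking_parts": {
      "cardinality": { "field": "speaker" }
    }
  }
}
//CONSOLE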
. In the *Metrics* pane, expand *Y-Axis*.
. Set *Aggregation* to *Unique Count*.
. Set *Field* to *speaker*.
. In the *Custom Label* box, enter `Speaking Parts`.
. Click *Apply changes* image:images/apply-changes-button.png[].

[role="screenshot"]
image::images/tutorial-visualize-bar-1.5.png[]
Show the plays along the X-axis.

. In the *Buckets* pane, click *X-Axis*.
. Set *Aggregation* to *Terms* and *Field* to *play_name*.
. To list plays alphabetically, in the *Order* dropdown menu, select *Ascending*.
. Give the axis a custom label, `Play Name`.
. Click *Apply changes* image:images/apply-changes-button.png[].

[role="screenshot"]
image::images/tutorial-visualize-bar-2.png[]

Hovering over a bar shows a tooltip with the number of speaking parts for
that play.

Notice how the individual play names show up as whole phrases, instead of
broken into individual words. This is the result of the mapping
you did at the beginning of the tutorial, when you marked the `play_name` field
as `not analyzed`.
////
You might
also be curious to see which plays make the greatest demands on an
individual actor. Let's show the maximum number of speeches for a given part.

. Click *Add metrics* to add a Y-axis aggregation.
. Set *Aggregation* to `Max` and *Field* to `speech_number`.
. Click *Metrics & Axes* and then change *Mode* from `stacked` to `normal`.
. Click *Apply changes* image:images/apply-changes-button.png[].

[role="screenshot"]
image::images/tutorial-visualize-bar-3.png[]

The play Love's Labours Lost has an unusually high maximum speech number compared to the other plays.

Note how the *Number of speaking parts* Y-axis starts at zero, but the bars don't begin to differentiate
until 18. To make the differences stand out, start the Y-axis at a value closer to the minimum by going
to Options and selecting *Scale Y-Axis to data bounds*.
////

*Save* this chart with the name `Bar Example`.
=== Coordinate Map

Using a coordinate map, you can visualize geographic information in the log file sample data.

. Create a *Coordinate map* and set the search source to `logstash*`.
. In the top menu bar, click the time picker on the far right.
. Click *Absolute*.
. Set the start time to May 18, 2015 and the end time to May 20, 2015.
. Click *Go*.

You haven't defined any buckets yet, so the visualization is a map of the world.

[role="screenshot"]
image::images/tutorial-visualize-map-1.png[]
Now map the geo coordinates from the log files.

. In the *Buckets* pane, click *Geo Coordinates*.
. Set *Aggregation* to *Geohash* and *Field* to *geo.coordinates*.
. Click *Apply changes* image:images/apply-changes-button.png[].

The map now looks like this:

[role="screenshot"]
image::images/tutorial-visualize-map-2.png[]
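
The *Geohash* bucket corresponds to the Elasticsearch geohash_grid aggregation.
A sketch of the underlying request (the precision value here is illustrative;
Kibana picks one based on the map's zoom level):

[source,js]
GET /logstash-2015.05.18/_search?size=0
{
  "aggs": {
    "coordinates": {
      "geohash_grid": {
        "field": "geo.coordinates",
        "precision": 3
      }
    }
  }
}
//CONSOLE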
You can navigate the map by clicking and dragging. The controls
on the top left of the map enable you to zoom the map and set filters.
Give them a try.

////
- Zoom image:images/viz-zoom.png[] buttons
- *Fit Data Bounds* image:images/viz-fit-bounds.png[] button to zoom to the lowest level that
includes all the points.
- Include or exclude a rectangular area by clicking the *Latitude/Longitude Filter*
image:images/viz-lat-long-filter.png[] button and drawing a bounding box on the map.
Applied filters are displayed below the query bar. Hovering over a filter displays
controls to toggle, pin, invert, or delete the filter.
////

[role="screenshot"]
image::images/tutorial-visualize-map-3.png[]

*Save* this map with the name `Map Example`.
=== Markdown

The final visualization is a Markdown widget that renders formatted text.

. Create a *Markdown* visualization.
. In the text box, enter the following:
+
[source,markdown]
# This is a tutorial dashboard!
The Markdown widget uses **markdown** syntax.
> Blockquotes in Markdown use the > character.

. Click *Apply changes* image:images/apply-changes-button.png[].

The Markdown renders in the preview pane:

[role="screenshot"]
image::images/tutorial-visualize-md-2.png[]

*Save* this visualization with the name `Markdown Example`.


@@ -4,11 +4,11 @@

Now that you have a handle on the basics, you're ready to start exploring
your own data with Kibana.

* See {kibana-ref}/discover.html[Discover] for information about searching and filtering
your data.
* See {kibana-ref}/visualize.html[Visualize] for information about the visualization
types Kibana has to offer.
* See {kibana-ref}/management.html[Management] for information about configuring Kibana
and managing your saved objects.
* See {kibana-ref}/console-kibana.html[Console] to learn about the interactive
console you can use to submit REST requests to Elasticsearch.
