Fixed merge conflicts in define-index and load-dataset. (#19089)

gchaps 2018-05-18 12:00:57 -07:00 committed by GitHub
parent 55107878c1
commit 0b6af35ba3
28 changed files with 245 additions and 202 deletions


@ -3,24 +3,21 @@
[partintro]
--
Ready to get some hands-on experience with Kibana?
This tutorial shows you how to:
* Load a sample data set into Elasticsearch
* Define an index pattern
* Discover and explore the sample data
* Visualize the data
* Assemble visualizations into a dashboard
Before you begin, make sure you've <<install, installed Kibana>> and established
a {kibana-ref}/connect-to-elasticsearch.html[connection to Elasticsearch].
You might also be interested in the
https://www.elastic.co/webinars/getting-started-kibana[Getting Started with Kibana]
video tutorial.
--
include::getting-started/tutorial-load-dataset.asciidoc[]


@ -1,20 +1,27 @@
[[tutorial-dashboard]]
== Putting it Together in a Dashboard
A dashboard is a collection of visualizations that you can arrange and share.
Here you'll build a dashboard that contains the visualizations you saved during
this tutorial.
. Open *Dashboard*.
. Click *Create new dashboard*.
. Click *Add*.
. Click *Bar Example*, *Map Example*, *Markdown Example*, and *Pie Example*.
Hovering over a visualization displays the container controls that enable you to
edit, move, delete, and resize the visualization.
Your sample dashboard should look like this:
[role="screenshot"]
image::images/tutorial-dashboard.png[]
*Save* your dashboard.


@ -1,22 +1,44 @@
[[tutorial-define-index]]
== Defining Your Index Patterns
Index patterns tell Kibana which Elasticsearch indices you want to explore.
An index pattern can match the name of a single index, or include a wildcard
(*) to match multiple indices.
For example, Logstash typically creates a
series of indices in the format `logstash-YYYY.MM.DD`. To explore all
of the log data from May 2018, you could specify the index pattern
`logstash-2018.05*`.
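For instance, with hypothetical daily index names (invented here for illustration), the pattern matches like this:
[source,text]
logstash-2018.05.18   <-- matches logstash-2018.05*
logstash-2018.05.19   <-- matches
logstash-2018.04.30   <-- no match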
Create patterns for the Shakespeare data set, which has an
index named `shakespeare`, and the accounts data set, which has an index named
`bank`. These data sets don't contain time-series data.
. In Kibana, open *Management*, and then click *Index Patterns*.
. If this is your first index pattern, the *Create index pattern* page opens automatically.
Otherwise, click *Create index pattern* in the upper left.
. Enter `shakes*` in the *Index pattern* field.
+
[role="screenshot"]
image::images/tutorial-pattern-1.png[]
. Click *Next step*.
. In *Configure settings*, click *Create index pattern*. For this pattern,
you don't need to configure any settings.
. Define a second index pattern named `ba*`. You don't need to configure any settings for this pattern.
Now create an index pattern for the Logstash data set. This data set
contains time-series data.
. Define an index pattern named `logstash*`.
. Click *Next step*.
. In *Configure settings*, select *@timestamp* in the *Time Filter field name* dropdown menu.
. Click *Create index pattern*.
NOTE: When you define an index pattern, the indices that match that pattern must
exist in Elasticsearch and they must contain data. To check which indices are
available, go to *Dev Tools > Console* and enter `GET _cat/indices`. Alternatively, use
`curl -XGET "http://localhost:9200/_cat/indices"`.


@ -1,42 +1,29 @@
[[tutorial-discovering]]
== Discovering Your Data
Using the Discover application, you can enter
an {ref}/query-dsl-query-string-query.html#query-string-syntax[Elasticsearch
query] to search your data and filter the results.
. Open *Discover*. The `shakes*` pattern is the current index pattern.
. Click the caret to the right of `shakes*`, and select `ba*`.
. In the search field, enter the following string:
+
[source,text]
account_number:<100 AND balance:>47500
The search returns all account numbers between zero and 99 with balances in
excess of 47,500. It returns results for account numbers 8, 32, 78, 85, and 97.
[role="screenshot"]
image::images/tutorial-discover-2.png[]
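You can query other fields of the accounts data the same way. These illustrative queries are not part of the original tutorial and assume the `age` and `state` fields hold plausible values:
[source,text]
age:>35 AND state:CA
account_number:(8 OR 32 OR 97)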
By default, all fields are shown for each matching document. To choose which
fields to display, hover the mouse over the list of *Available Fields*
and then click *add* next to each field you want to include.
For example, if you add the `account_number` field, the display changes to a list of five
account numbers.
[role="screenshot"]
image::images/tutorial-discover-3.png[]


@ -1,22 +1,22 @@
[[tutorial-load-dataset]]
== Loading Sample Data
This tutorial requires three data sets:
* The complete works of William Shakespeare, suitably parsed into fields. Download
https://download.elastic.co/demos/kibana/gettingstarted/shakespeare_6.0.json[`shakespeare.json`].
* A set of fictitious accounts with randomly generated data. Download
https://download.elastic.co/demos/kibana/gettingstarted/accounts.zip[`accounts.zip`].
* A set of randomly generated log files. Download
https://download.elastic.co/demos/kibana/gettingstarted/logs.jsonl.gz[`logs.jsonl.gz`].
Two of the data sets are compressed. To extract the files, use these commands:
[source,shell]
unzip accounts.zip
gunzip logs.jsonl.gz
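If you're on Windows without `unzip` and `gunzip`, PowerShell can extract the zip archive natively (an illustrative alternative; the `.gz` file still needs a third-party tool such as 7-Zip):
[source,shell]
Expand-Archive accounts.zip -DestinationPath .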
The Shakespeare data set has this structure:
[source,json]
{
@ -28,7 +28,7 @@ The Shakespeare data set is organized in the following schema:
"text_entry": "String",
}
The accounts data set is structured as follows:
[source,json]
{
@ -45,7 +45,7 @@ The accounts data set is organized in the following schema:
"state": "String"
}
The logs data set has dozens of different fields. Here are the notable fields for this tutorial:
[source,json]
{
@ -54,11 +54,12 @@ The schema for the logs data set has dozens of different fields, but the notable
"@timestamp": "date"
}
Before you load the Shakespeare and logs data sets, you must set up {ref}/mapping.html[_mappings_] for the fields.
Mappings divide the documents in the index into logical groups and specify the characteristics
of the fields. These characteristics include the searchability of the field
and whether it's _tokenized_, or broken up into separate words.
In Kibana *Dev Tools > Console*, set up a mapping for the Shakespeare data set:
[source,js]
PUT /shakespeare
@ -77,15 +78,14 @@ PUT /shakespeare
//CONSOLE
This mapping specifies field characteristics for the data set:
* The `speaker` and `play_name` fields are keyword fields. These fields are not analyzed.
The strings are treated as a single unit even if they contain multiple words.
* The `line_id` and `speech_number` fields are integers.
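The body of the `PUT /shakespeare` request above is elided in this diff. As a rough sketch only (assuming the 6.x single `doc` mapping type; the exact request lives in the source file), a mapping with these characteristics looks like:
[source,js]
PUT /shakespeare
{
  "mappings": {
    "doc": {
      "properties": {
        "speaker": { "type": "keyword" },
        "play_name": { "type": "keyword" },
        "line_id": { "type": "integer" },
        "speech_number": { "type": "integer" }
      }
    }
  }
}
//CONSOLE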
The logs data set requires a mapping to label the latitude and longitude pairs
as geographic locations by applying the `geo_point` type.
[source,js]
PUT /logstash-2015.05.18
@ -147,24 +147,32 @@ PUT /logstash-2015.05.20
//CONSOLE
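The bodies of the `PUT /logstash-2015.05.*` requests are likewise elided from this diff. Each one applies `geo_point` to the coordinates field with a fragment roughly like this (a sketch, assuming a 6.x `log` mapping type; `geo.coordinates` matches the field used later in the Coordinate Map section):
[source,js]
PUT /logstash-2015.05.18
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": { "type": "geo_point" }
          }
        }
      }
    }
  }
}
//CONSOLE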
The accounts data set doesn't require any mappings.
At this point, you're ready to use the Elasticsearch {ref}/docs-bulk.html[bulk]
API to load the data sets:
[source,shell]
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/shakespeare/doc/_bulk?pretty' --data-binary @shakespeare_6.0.json
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl
Or for Windows users, in PowerShell:
[source,shell]
Invoke-RestMethod "http://localhost:9200/bank/account/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "accounts.json"
Invoke-RestMethod "http://localhost:9200/shakespeare/doc/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "shakespeare_6.0.json"
Invoke-RestMethod "http://localhost:9200/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "logs.jsonl"
These commands might take some time to execute, depending on the available computing resources.
Verify successful loading:
[source,js]
GET /_cat/indices?v
//CONSOLE
Your output should look similar to this:
[source,shell]
health status index pri rep docs.count docs.deleted store.size pri.store.size


@ -1,46 +1,45 @@
[[tutorial-visualizing]]
== Visualizing Your Data
In the Visualize application, you can shape your data using a variety
of charts, tables, maps, and more. You'll create four
visualizations: a pie chart, bar chart, coordinate map, and Markdown widget.
. Open *Visualize*.
. Click *Create a visualization* or the *+* button. You'll see all the visualization
types in Kibana.
+
[role="screenshot"]
image::images/tutorial-visualize-wizard-step-1.png[]
. Click *Pie*.
. In *New Search*, select the `ba*` index pattern. You'll use the pie chart to
gain insight into the account balances in the bank account data.
+
[role="screenshot"]
image::images/tutorial-visualize-wizard-step-2.png[]
=== Pie Chart
Initially, the pie contains a single "slice."
That's because the default search matched all documents.
[role="screenshot"]
image::images/tutorial-visualize-pie-1.png[]
To specify which slices to display in the pie, you use an Elasticsearch
{ref}/search-aggregations.html[bucket aggregation]. This aggregation
sorts the documents that match your search criteria into different
categories, also known as _buckets_.
Use a bucket aggregation to establish
multiple ranges of account balances and find out how many accounts fall into
each range.
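Under the hood, *Split Slices* with *Range* corresponds to an Elasticsearch range aggregation. A minimal sketch of an equivalent request, using two of the ranges defined in the steps below (illustrative only, not part of the tutorial):
[source,js]
GET /bank/_search
{
  "size": 0,
  "aggs": {
    "balance_ranges": {
      "range": {
        "field": "balance",
        "ranges": [
          { "from": 15000, "to": 30999 },
          { "from": 31000, "to": 50000 }
        ]
      }
    }
  }
}
//CONSOLE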
. In the *Buckets* pane, click *Split Slices*.
. In the *Aggregation* dropdown menu, select *Range*.
. In the *Field* dropdown menu, select *balance*.
. Click *Add Range* four times to bring the total number of ranges to six.
. Define the following ranges:
+
[source,text]
@ -51,142 +50,165 @@ total number of ranges to six.
15000 30999
31000 50000
. Click *Apply changes* image:images/apply-changes-button.png[].
Now you can see what proportion of the 1000 accounts fall into each balance
range.
[role="screenshot"]
image::images/tutorial-visualize-pie-2.png[]
Add another bucket aggregation that looks at the ages of the account
holders.
. At the bottom of the *Buckets* pane, click *Add sub-buckets*.
. In *Select buckets type*, click *Split Slices*.
. In the *Sub Aggregation* dropdown, select *Terms*.
. In the *Field* dropdown, select *age*.
. Click *Apply changes* image:images/apply-changes-button.png[].
Now you can see the breakdown of the ages of the account holders, displayed
in a ring around the balance ranges.
[role="screenshot"]
image::images/tutorial-visualize-pie-3.png[]
To save this chart so you can use it later, click *Save* in the top menu bar
and enter `Pie Example`.
=== Bar Chart
You'll use a bar chart to look at the Shakespeare data set and compare
the number of speaking parts in the plays.
Create a *Vertical Bar* chart and set the search source to `shakes*`.
Initially, the chart is a single bar that shows the total count
of documents that match the default wildcard query.
[role="screenshot"]
image::images/tutorial-visualize-bar-1.png[]
Show the number of speaking parts per play along the Y-axis.
This requires you to configure the Y-axis
{ref}/search-aggregations.html[metric aggregation].
This aggregation computes metrics based on values from the search results.
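Kibana's *Unique Count* metric corresponds to the Elasticsearch cardinality aggregation. An equivalent request might look like this (a sketch, not part of the tutorial):
[source,js]
GET /shakespeare/_search
{
  "size": 0,
  "aggs": {
    "speaking_parts": {
      "cardinality": { "field": "speaker" }
    }
  }
}
//CONSOLE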
. In the *Metrics* pane, expand *Y-Axis*.
. Set *Aggregation* to *Unique Count*.
. Set *Field* to *speaker*.
. In the *Custom Label* box, enter `Speaking Parts`.
. Click *Apply changes* image:images/apply-changes-button.png[].
[role="screenshot"]
image::images/tutorial-visualize-bar-1.5.png[]
Show the plays along the X-axis.
. In the *Buckets* pane, click *X-Axis*.
. Set *Aggregation* to *Terms* and *Field* to *play_name*.
. To list plays alphabetically, in the *Order* dropdown menu, select *Ascending*.
. Give the axis a custom label, `Play Name`.
. Click *Apply changes* image:images/apply-changes-button.png[].
[role="screenshot"]
image::images/tutorial-visualize-bar-2.png[]
Hovering over a bar shows a tooltip with the number of speaking parts for
that play.
Notice how the individual play names show up as whole phrases, instead of
being broken into individual words. This is the result of the mapping
you did at the beginning of the tutorial, when you marked the `play_name` field
as `not analyzed`.
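Because `play_name` is a keyword field, a terms aggregation buckets whole titles rather than individual words. A minimal sketch of an equivalent request for the X-axis (illustrative only):
[source,js]
GET /shakespeare/_search
{
  "size": 0,
  "aggs": {
    "plays": {
      "terms": { "field": "play_name", "order": { "_key": "asc" } }
    }
  }
}
//CONSOLE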
////
You might
also be curious to see which plays make the greatest demands on an
individual actor. Let's show the maximum number of speeches for a given part.
. Click *Add metrics* to add a Y-axis aggregation.
. Set *Aggregation* to `Max` and *Field* to `speech_number`.
. Click *Metrics & Axes* and then change *Mode* from `stacked` to `normal`.
. Click *Apply changes* image:images/apply-changes-button.png[].
[role="screenshot"]
image::images/tutorial-visualize-bar-3.png[]
The play Love's Labours Lost has an unusually high maximum speech number compared to the other plays.
Note how the *Number of speaking parts* Y-axis starts at zero, but the bars don't begin to differentiate until 18. To
make the differences stand out by starting the Y-axis at a value closer to the minimum, go to Options and select
*Scale Y-Axis to data bounds*.
////
*Save* this chart with the name `Bar Example`.
=== Coordinate Map
Using a coordinate map, you can visualize geographic information in the log file sample data.
. Create a *Coordinate map* and set the search source to `logstash*`.
. In the top menu bar, click the time picker on the far right.
. Click *Absolute*.
. Set the start time to May 18, 2015 and the end time to May 20, 2015.
+
image::images/tutorial-timepicker.png[]
. Click *Go*.
You haven't defined any buckets yet, so the visualization is a map of the world.
[role="screenshot"]
image::images/tutorial-visualize-map-1.png[]
Now map the geo coordinates from the log files.
. In the *Buckets* pane, click *Geo Coordinates*.
. Set *Aggregation* to *Geohash* and *Field* to *geo.coordinates*.
. Click *Apply changes* image:images/apply-changes-button.png[].
The map now looks like this:
[role="screenshot"]
image::images/tutorial-visualize-map-2.png[]
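The *Geohash* bucket corresponds to the Elasticsearch geohash_grid aggregation. A minimal sketch of an equivalent request (illustrative only; the precision value is an assumption):
[source,js]
GET /logstash-2015.05.18/_search
{
  "size": 0,
  "aggs": {
    "coordinates": {
      "geohash_grid": { "field": "geo.coordinates", "precision": 3 }
    }
  }
}
//CONSOLE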
You can navigate the map by clicking and dragging. The controls
on the top left of the map enable you to zoom the map and set filters.
Give them a try.
////
- Zoom image:images/viz-zoom.png[] buttons,
- *Fit Data Bounds*
image:images/viz-fit-bounds.png[] button to zoom to the lowest level that
includes all the points.
- Include or exclude a rectangular area
by clicking the *Latitude/Longitude Filter* image:images/viz-lat-long-filter.png[]
button and drawing a bounding box on the map. Applied filters are displayed
below the query bar. Hovering over a filter displays controls to toggle,
pin, invert, or delete the filter.
////
[role="screenshot"]
image::images/tutorial-visualize-map-3.png[]
*Save* this map with the name `Map Example`.
=== Markdown
The final visualization is a Markdown widget that renders formatted text.
. Create a *Markdown* visualization.
. In the text box, enter the following:
+
[source,markdown]
# This is a tutorial dashboard!
The Markdown widget uses **markdown** syntax.
> Blockquotes in Markdown use the > character.
. Click *Apply changes* image:images/apply-changes-button.png[].
The Markdown renders in the preview pane:
[role="screenshot"]
image::images/tutorial-visualize-md-2.png[]
*Save* this visualization with the name `Markdown Example`.


@ -4,11 +4,11 @@
Now that you have a handle on the basics, you're ready to start exploring
your own data with Kibana.
* See {kibana-ref}/discover.html[Discover] for information about searching and filtering
your data.
* See {kibana-ref}/visualize.html[Visualize] for information about the visualization
types Kibana has to offer.
* See {kibana-ref}/management.html[Management] for information about configuring Kibana
and managing your saved objects.
* See {kibana-ref}/console-kibana.html[Console] to learn about the interactive
console you can use to submit REST requests to Elasticsearch.
