[[create-your-own-dashboard]]
== Create your own dashboard

Ready to add data to {kib} and create your own dashboard? In this tutorial, you'll use three types of data sets that'll help you learn to:

* <<load-the-data-sets, Load data into Elasticsearch>>
* <<tutorial-define-index, Define an index pattern>>
* <<explore-your-data, Discover and explore data>>
* <<tutorial-visualizing, Visualize data>>

[float]
[[download-the-data]]
=== Download the data

To complete the tutorial, you'll download and use the following data sets:

* The complete works of William Shakespeare, suitably parsed into fields
* A set of fictitious bank accounts with randomly generated data
* A set of randomly generated log files

Create a new working directory where you want to download the files. From that directory, run the following commands:

[source,shell]
----
curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/shakespeare.json
curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/accounts.zip
curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/logs.jsonl.gz
----

Two of the data sets are compressed. To extract the files, use the following commands:

[source,shell]
----
unzip accounts.zip
gunzip logs.jsonl.gz
----

[float]
==== Structure of the data sets

The Shakespeare data set has the following structure:

[source,json]
----
{
    "line_id": INT,
    "play_name": "String",
    "speech_number": INT,
    "line_number": "String",
    "speaker": "String",
    "text_entry": "String"
}
----

The accounts data set has the following structure:

[source,json]
----
{
    "account_number": INT,
    "balance": INT,
    "firstname": "String",
    "lastname": "String",
    "age": INT,
    "gender": "M or F",
    "address": "String",
    "employer": "String",
    "email": "String",
    "city": "String",
    "state": "String"
}
----

The logs data set has dozens of different fields. The notable fields include the following:

[source,json]
----
{
    "memory": INT,
    "geo.coordinates": "geo_point",
    "@timestamp": "date"
}
----

[float]
==== Set up mappings

Before you load the Shakespeare and logs data sets, you must set up
{ref}/mapping.html[_mappings_] for the fields. Mappings divide the documents
in the index into logical groups and specify the characteristics of the
fields. These characteristics include the searchability of the field and
whether it's _tokenized_, or broken up into separate words.
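
If you'd like to see tokenization in action before setting up the mappings, you can run the {ref}/indices-analyze.html[analyze API] in *Dev Tools* (an optional aside; the sample text is arbitrary):

[source,js]
----
GET /_analyze
{
  "analyzer": "standard",
  "text": "To be, or not to be: that is the question"
}
----
//CONSOLE

The response breaks the text into individual word tokens, which is how analyzed text fields are stored and searched.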

NOTE: If security is enabled, you must have the `all` Kibana privilege to run this
tutorial. You must also have the `create`, `manage`, `read`, `write`, and `delete`
index privileges. See {ref}/security-privileges.html[Security privileges]
for more information.
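
If you're unsure whether your user has these index privileges, you can check with the {ref}/security-api-has-privileges.html[has privileges API] before you start (an optional check; the index names below are the ones this tutorial creates):

[source,js]
----
GET /_security/user/_has_privileges
{
  "index": [
    {
      "names": ["shakespeare", "bank", "logstash-*"],
      "privileges": ["create", "manage", "read", "write", "delete"]
    }
  ]
}
----
//CONSOLE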

Open the menu, then go to *Dev Tools*. On the *Console* page, set up a mapping for the Shakespeare data set:

[source,js]
----
PUT /shakespeare
{
  "mappings": {
    "properties": {
      "speaker": {"type": "keyword"},
      "play_name": {"type": "keyword"},
      "line_id": {"type": "integer"},
      "speech_number": {"type": "integer"}
    }
  }
}
----
//CONSOLE

The mapping specifies field characteristics for the data set:

* The `speaker` and `play_name` fields are keyword fields. These fields are not analyzed.
The strings are treated as a single unit even if they contain multiple words.

* The `line_id` and `speech_number` fields are integers.
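
To confirm that the mapping was applied as expected, you can retrieve it with the {ref}/indices-get-mapping.html[get mapping API] (an optional check, not required for the tutorial):

[source,js]
----
GET /shakespeare/_mapping
----
//CONSOLE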

The logs data set requires a mapping to label the latitude and longitude pairs
as geographic locations by applying the `geo_point` type.

[source,js]
----
PUT /logstash-2015.05.18
{
  "mappings": {
    "properties": {
      "geo": {
        "properties": {
          "coordinates": {
            "type": "geo_point"
          }
        }
      }
    }
  }
}
----
//CONSOLE

[source,js]
----
PUT /logstash-2015.05.19
{
  "mappings": {
    "properties": {
      "geo": {
        "properties": {
          "coordinates": {
            "type": "geo_point"
          }
        }
      }
    }
  }
}
----
//CONSOLE

[source,js]
----
PUT /logstash-2015.05.20
{
  "mappings": {
    "properties": {
      "geo": {
        "properties": {
          "coordinates": {
            "type": "geo_point"
          }
        }
      }
    }
  }
}
----
//CONSOLE
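
Rather than repeating the same mapping for each daily index, you could also define it once in an index template that matches all three index names. This is a sketch of an alternative approach, not part of the original tutorial steps; the `logstash_tutorial` template name is illustrative:

[source,js]
----
PUT /_index_template/logstash_tutorial
{
  "index_patterns": ["logstash-2015.05.*"],
  "template": {
    "mappings": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {"type": "geo_point"}
          }
        }
      }
    }
  }
}
----
//CONSOLE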

The accounts data set doesn't require any mappings.

[float]
[[load-the-data-sets]]
==== Load the data sets

At this point, you're ready to use the Elasticsearch {ref}/docs-bulk.html[bulk]
API to load the data sets:

[source,shell]
----
curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/bank/_bulk?pretty' --data-binary @accounts.json
curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/_bulk?pretty' --data-binary @logs.jsonl
----

Or for Windows users, in PowerShell:

[source,shell]
----
Invoke-RestMethod "http://<host>:<port>/bank/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "accounts.json"
Invoke-RestMethod "http://<host>:<port>/shakespeare/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "shakespeare.json"
Invoke-RestMethod "http://<host>:<port>/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "logs.jsonl"
----

These commands might take some time to execute, depending on the available computing resources.

When you define an index pattern, the indices that match the pattern must
exist in {es} and contain data.

To verify the availability of the indices, open the menu, go to *Dev Tools > Console*, then enter:

[source,js]
----
GET /_cat/indices?v
----

Alternatively, from the command line, use:

[source,shell]
----
curl -XGET "http://localhost:9200/_cat/indices?v"
----

The output should look similar to:

[source,shell]
----
health status index               pri rep docs.count docs.deleted store.size pri.store.size
yellow open   bank                  1   1       1000            0    418.2kb        418.2kb
yellow open   shakespeare           1   1     111396            0     17.6mb         17.6mb
yellow open   logstash-2015.05.18   1   1       4631            0     15.6mb         15.6mb
yellow open   logstash-2015.05.19   1   1       4624            0     15.7mb         15.7mb
yellow open   logstash-2015.05.20   1   1       4750            0     16.4mb         16.4mb
----

A `yellow` health status just means that the replica shards aren't allocated, which is expected on a single-node development cluster.