Remove Upload CSV from getting started docs

parent b7fd17b492
commit 48f40d59c2

5 changed files with 8 additions and 1025 deletions
@@ -6,7 +6,7 @@ key Kibana functionality. By the end of this tutorial, you will have:
 
 * Loaded a sample data set into your Elasticsearch installation
 * Defined at least one index pattern
-* Use the <<discover, Discover>> functionality to explore your data
+* Used the <<discover, Discover>> functionality to explore your data
 * Set up some <<visualize,_visualizations_>> to graphically represent your data
 * Assembled visualizations into a <<dashboard,Dashboard>>
 
@@ -27,14 +27,15 @@ The tutorials in this section rely on the following data sets:
 
 * The complete works of William Shakespeare, suitably parsed into fields. Download this data set by clicking here:
 https://www.elastic.co/guide/en/kibana/3.0/snippets/shakespeare.json[shakespeare.json].
-* A set of fictitious accounts with randomly generated data, in CSV format. Download this data set by clicking here:
-https://raw.githubusercontent.com/elastic/kibana/master/docs/tutorial/accounts.csv[accounts.csv]
+* A set of fictitious accounts with randomly generated data. Download this data set by clicking here:
+https://github.com/bly2k/files/blob/master/accounts.zip?raw=true[accounts.zip]
 * A set of randomly generated log files. Download this data set by clicking here:
 https://download.elastic.co/demos/kibana/gettingstarted/logs.jsonl.gz[logs.jsonl.gz]
 
-Extract the logs with the following command:
+Two of the data sets are compressed. Use the following commands to extract the files:
 
 [source,shell]
+unzip accounts.zip
 gunzip logs.jsonl.gz
 
 The Shakespeare data set is organized in the following schema:
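Note: the schema listing itself sits just past this hunk's context window. For orientation only (not part of this diff), the published version of this page sketches it roughly along these lines, with the types as informal placeholders rather than strict JSON:

[source,json]
{
    "line_id": INT,
    "play_name": "String",
    "speech_number": INT,
    "line_number": "String",
    "speaker": "String",
    "text_entry": "String"
}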
@@ -81,8 +82,6 @@ field's searchability or whether or not it's _tokenized_, or broken up into sepa
 
 Use the following command to set up a mapping for the Shakespeare data set:
 
-=============
-
 [source,shell]
 curl -XPUT http://localhost:9200/shakespeare -d '
 {
@@ -99,8 +98,6 @@ curl -XPUT http://localhost:9200/shakespeare -d '
 }
 ';
 
-=============
-
 This mapping specifies the following qualities for the data set:
 
 * The _speaker_ field is a string that isn't analyzed. The string in this field is treated as a single unit, even if
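Note: the body of this `curl -XPUT` request falls between the two hunks above and is not shown. As a hedged reconstruction only, here is a pre-5.0 style mapping that matches the qualities the prose describes; the `_default_` type name and the exact field list are assumptions drawn from the schema sketch above, not from this diff:

[source,shell]
curl -XPUT http://localhost:9200/shakespeare -d '
{
 "mappings": {
  "_default_": {
   "properties": {
    "speaker": { "type": "string", "index": "not_analyzed" },
    "play_name": { "type": "string", "index": "not_analyzed" },
    "line_id": { "type": "integer" },
    "speech_number": { "type": "integer" }
   }
  }
 }
}
';

Setting `"index": "not_analyzed"` keeps each value as a single token, which is what lets Kibana bucket on whole speaker names rather than on the individual words inside them.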
@@ -170,29 +167,16 @@ curl -XPUT http://localhost:9200/logstash-2015.05.20 -d '
 }
 ';
 
-At this point we're ready to use the Elasticsearch {ref}/docs-bulk.html[`bulk`] API to load the data sets with the
-following commands:
+The accounts data set doesn't require any mappings, so at this point we're ready to use the Elasticsearch
+{ref}/docs-bulk.html[`bulk`] API to load the data sets with the following commands:
 
 [source,shell]
+curl -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
 curl -XPOST 'localhost:9200/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
 curl -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl
 
 These commands may take some time to execute, depending on the computing resources available.
 
-To load the Accounts data set, click the *Management* image:images/SettingsButton.jpg[gear icon] tab, the
-select *Upload CSV*.
-
-image::images/management-panel.png[kibana management panel]
-
-Click *Select File*, then navigate to the `accounts.csv` file. Review the sample, then click *Next*.
-
-image::images/csv-sample.png[sample csv import]
-
-Review the index pattern built by the CSV import function. You can change any field types from the drop-downs, but for
-this tutorial, accept the defaults. Enter `bank` as the name for the index pattern, then click *Save*.
-
-image::images/sample-index.png[sample index pattern]
-
 Verify successful loading with the following command:
 
 [source,shell]
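Note: the hunk ends before the verification command itself. A minimal sketch of the usual check, via the Elasticsearch `_cat/indices` API (the exact command on the published page may differ):

[source,shell]
curl 'localhost:9200/_cat/indices?v'

A successful load shows the `bank`, `shakespeare`, and `logstash-2015.05.*` indices, each with a nonzero `docs.count`.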
Binary file not shown. Before: 289 KiB
Binary file not shown. Before: 64 KiB
Binary file not shown. Before: 191 KiB
File diff suppressed because it is too large.