Incorporated review comments, added Discover Topic.

This commit is contained in:
debadair 2015-02-13 10:22:02 -08:00
parent 6d386a288f
commit ea5656d433
12 changed files with 295 additions and 113 deletions

View file

@ -5,3 +5,11 @@ Kibana is a web application that you access through port 5601. All you need to
do is point your web browser at the machine where Kibana is running and
specify the port number. For example, `localhost:5601` or `http://YOURDOMAIN.com:5601`.
When you access Kibana, the Discover page loads by default with the default index
pattern selected. The time filter is set to the last 15 minutes and the search
query is set to match-all (*).
image:images/Discover-Start.jpg[Kibana start page]
If you don't see any documents, try setting the time filter to a wider time range.
If you still don't see any results, it's possible that you don't **have** any documents.

View file

@ -1,2 +1,102 @@
[[discover]]
== Discover
You can interactively explore your data from the Discover page. You have access to every document in every index that matches the selected index pattern. You can submit search queries, filter the search results, and view document data. You can also see the number of documents that match the search query and get field value statistics. If a time field is configured for the selected index pattern, a bar chart displays the distribution of documents over time.
[float]
=== Setting a Time Filter
A time filter restricts the search results to a specific time period. You can set a time filter if your index contains time-based events and a time field is configured for the selected index pattern.
The default time filter is the last 15 minutes. You can use the Time Picker to change the time filter,
or interactively select a specific time interval or time range in the time chart.
To set a time filter with the time picker:
. Click the time filter displayed in the upper right corner of the menu bar.
. To set a quick filter, simply click one of the shortcut links.
. To specify a relative time filter, click **Relative** and enter the relative start time. You can specify
the relative start time as any number of seconds, minutes, hours, days, months, or years ago.
. To specify an absolute time filter, click **Absolute** and enter the start date in the **From** field and the end date in the **To** field.
. Click the caret at the bottom of the Time Picker to hide it.
[float]
=== Searching Your Data
The search bar at the top of the page lets you query your data using the Lucene Query String syntax, which Elasticsearch supports natively. Let's say we're searching web server logs that have been parsed into a few fields.
We can, of course, do a free text search. To find requests that contain the number 200 in any field:
----
200
----
Or we can search in a specific field. Find 200 in the status field:
----
status:200
----
Find all status codes from 400 to 499:
----
status:[400 TO 499]
----
Find status codes 400-499 with the extension php:
----
status:[400 TO 499] AND extension:php
----
Or with the extension php or html:
----
status:[400 TO 499] AND (extension:php OR extension:html)
----
You can read more about the Lucene Query String syntax in the https://lucene.apache.org/core/2_9_4/queryparsersyntax.html[Lucene documentation].
While the Lucene query syntax is simple and very powerful, Kibana also supports the full JSON-based Elasticsearch Query DSL. See the http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#query-string-syntax[Elasticsearch documentation] for usage and examples.
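For example, the last Lucene query above could be expressed in the Query DSL with a `query_string` query (a sketch; the `status` and `extension` field names come from the log examples above):
----
{
  "query": {
    "query_string": {
      "query": "status:[400 TO 499] AND (extension:php OR extension:html)"
    }
  }
}
----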
[float]
=== Automatically Refreshing the Page
You can configure a refresh interval to automatically refresh the Discover page with the latest
index data. This periodically resubmits the search query.
When a refresh interval is set, it is displayed to the left of the Time Filter in the menu bar.
To set the refresh interval:
. Click the Time Filter in the upper right corner of the menu bar.
. Click the refresh interval you want to set.
image:images/Discover-TimePicker.jpg[Time Picker]
[float]
=== Filtering by Field
When you expand a document in the document list, you will see two magnifying glasses next to each indexed term: one with a plus sign and one with a minus sign. Clicking the magnifying glass with the plus sign adds a filter to the query for that term. Clicking the magnifying glass with the minus sign adds a negative filter, which excludes any documents containing the term. Both filters appear in the filter bar underneath the search bar. When you hover over a filter in the filter bar, you see options to toggle or remove it. There is also a link to remove all of the filters.
[float]
=== Viewing Document Data
Once you see some documents, you can begin to explore Discover. In the document list, Kibana will show you the localized version of the time field you specified in your index pattern, as well as the `_source` of the Elasticsearch document.
TIP: By default, the table contains 500 of the most recent documents. You can increase the number of documents in the table from the advanced settings screen. See the <<advanced,Settings section>> of the documentation.
Click on the expand button to the left of the time. Kibana will read the fields from the document and present them in a table. The + and - buttons allow you to quickly filter for documents that share common traits with the one you're looking at. Click the JSON tab at the top of the table to see the full, pretty printed, original document.
Click the expand button again to collapse the detailed view of the document.
[float]
==== Adding Columns to the Documents Table
The field list has several powerful functions. The first is the ability to add columns to the document list. If no fields are selected, `_source` is automatically selected and shown in the table. Mouse over a field name and click the **add** button that appears. Now, instead of seeing `_source` in the document list, you see the extracted value of the selected field. In addition, the field name moves up to the **Selected** section of the field list. Add a few more fields. Sweet!
[float]
=== Viewing Field Data Statistics
Now, instead of clicking the **add** button, click the name of the field itself. You will see a breakdown of the five most popular values for the field, as well as a count of how many documents in the document list contain the field.


View file

@ -17,4 +17,6 @@ include::dashboard.asciidoc[]
include::settings.asciidoc[]
include::production.asciidoc[]
include::whats-new.asciidoc[]

View file

@ -3,15 +3,15 @@
Kibana is an open source analytics and visualization platform designed to work
with Elasticsearch. You use Kibana to search, view, and interact with data
stored in Elasticsearch indices. You can easily perform advanced data analysis
and visualize your data in a variety of charts, tables, and maps.
Kibana makes it easy to understand large volumes of data. Its simple,
browser-based interface enables you to quickly create and share dynamic
dashboards that display changes to Elasticsearch queries in real time.
Setting up Kibana is a snap. You can install Kibana and start exploring your
Elasticsearch indices in minutes--no code, no additional infrastructure required.
NOTE: This guide describes how to use Kibana 4. For information about what's new
in Kibana 4, see <<whats-new>>. For information about Kibana 3,
@ -21,14 +21,14 @@ see the http://www.elasticsearch.org/guide/en/kibana/current/index.html[Kibana 3
=== Data Discovery and Visualization
Let's take a look at how you might use Kibana to explore and visualize data.
We've indexed some data from Transport for London (TFL) that shows one week
of transit (Oyster) card usage.
From Kibana's Discover page, we can submit search queries, filter the results, and
examine the data in the returned documents. For example, we can get all trips
completed by the Tube during the week by excluding incomplete trips and trips by bus:
image:images/TFL-CompletedTrips.jpg[Discover]
Right away, we can see the peaks for the morning and afternoon commute hours. By default,
the Discover page shows a time-series chart and the first 500 entries that match the
@ -38,20 +38,20 @@ information about exploring your data from the Discover page, see <<discover>>.
You can construct visualizations of your search results from the Visualization page.
Each visualization is associated with a search. For example, we can create a histogram
that shows the weekly London commute traffic via the Tube using a search that excludes
incomplete trips and trips by bus. The Y-axis shows the number of trips. The X-axis shows
the day and time. By adding a sub-aggregation, we can see the top 3 end stations during
each hour:
image:images/TFL-CommuteHistogram.jpg[Visualize]
You can save and share visualizations and combine them into dashboards to make it easy
to correlate related information. For example, we could create a dashboard
that displays several visualizations of the TFL data:
image:images/TFL-Dashboard.jpg[Dashboard]
For more information about creating and sharing visualizations, see <<visualize>>.
For more information about working with Dashboards, see <<dashboard, Dashboard>>.

docs/production.asciidoc Normal file
View file

@ -0,0 +1,94 @@
[[production]]
== Using Kibana in a Production Environment
When you set up Kibana in a production environment, rather than on your local
machine, you need to consider:
* Where you are going to run Kibana.
* Whether you need to encrypt communications to and from Kibana.
* Whether you need to control access to your data.
=== Deployment Considerations
How you deploy Kibana largely depends on your use case. If you are the only user,
you can run Kibana on your local machine and configure it to point to whatever
Elasticsearch instance you want to interact with. Conversely, if you have a large
number of heavy Kibana users, you might need to load balance across multiple
Kibana instances that are all connected to the same Elasticsearch instance.
While Kibana isn't terribly resource intensive, we still recommend running Kibana
on its own node, rather than on one of your Elasticsearch nodes.
=== Configuring Kibana to Work with Shield
If you are using Shield to authenticate Elasticsearch users, you need to provide
Kibana with user credentials so it can access the `.kibana` index. The Kibana user
needs permission to perform the following actions on the `.kibana` index:
----
'.kibana':
- indices:admin/create
- indices:admin/exists
- indices:admin/mapping/put
- indices:admin/mappings/fields/get
- indices:admin/refresh
- indices:admin/validate/query
- indices:data/read/get
- indices:data/read/mget
- indices:data/read/search
- indices:data/write/delete
- indices:data/write/index
- indices:data/write/update
----
For more information about configuring access in Shield,
see https://www.elasticsearch.org/guide/en/shield/current/authorization.html[Authorization]
in the Shield documentation.
To configure credentials for Kibana, set the `kibana_elasticsearch_username` and
`kibana_elasticsearch_password` properties in `kibana.yml`:
----
# If your Elasticsearch is protected with basic auth:
kibana_elasticsearch_username: kibana4
kibana_elasticsearch_password: kibana4
----
=== Enabling SSL
Kibana supports SSL encryption for both client requests and the requests the Kibana server
sends to Elasticsearch.
To encrypt communications between the browser and the Kibana server, you configure the `ssl_key_file` and `ssl_cert_file` properties in `kibana.yml`:
----
# SSL key and certificate for the Kibana server (PEM formatted)
ssl_key_file: /path/to/your/server.key
ssl_cert_file: /path/to/your/server.crt
----
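If you don't already have a key and certificate, you can generate a self-signed pair for testing with OpenSSL. This is a sketch; the paths and hostname are placeholders, and for production you would use a certificate signed by a trusted CA:
----
# Generate a self-signed key and certificate valid for one year
# (replace the paths and CN with your own values)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /path/to/your/server.key \
  -out /path/to/your/server.crt \
  -days 365 -subj "/CN=YOURDOMAIN.com"
----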
If you are using Shield or a proxy that provides an HTTPS endpoint for Elasticsearch,
you can configure Kibana to access Elasticsearch via HTTPS so communications between
the Kibana server and Elasticsearch are encrypted.
To do this, you specify the HTTPS
protocol when you configure the Elasticsearch URL in `kibana.yml`:
----
elasticsearch_url: "https://<your_elasticsearch_host>.com:9200"
----
If you are using a self-signed certificate for Elasticsearch, set the `ca` property in
`kibana.yml` to specify the location of the PEM file. Setting the `ca` property lets you leave the `verify_ssl` option enabled.
----
# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the PEM file here.
ca: /path/to/your/ca/cacert.pem
----
=== Controlling Access
You can use http://www.elasticsearch.org/overview/shield/[Elasticsearch Shield]
(Shield) to control what Elasticsearch data users can access through Kibana.
Shield provides index-level access control. If a user isn't authorized to run
the query that populates a Kibana visualization, the user just sees an empty
visualization.
To configure access to Kibana using Shield, you create one or more Shield roles
for Kibana using the `kibana4` default role as a starting point. For more
information, see http://www.elasticsearch.org/guide/en/shield/current/_shield_with_kibana_4.html[Using Shield with Kibana 4].

View file

@ -1,23 +1,23 @@
[[settings]]
== Settings
To use Kibana, you have to tell it about the Elasticsearch indices that you
want to explore by configuring one or more index patterns. You can also:
* Create scripted fields that are computed on the fly from your data. You can
browse and visualize scripted fields, but you cannot search them.
* Set advanced options such as the number of rows to show in a table and
how many of the most popular fields to show. Use caution when modifying advanced options,
as it's possible to set values that are incompatible with one another.
* Configure Kibana for a production environment.
[[settings-create-pattern]]
=== Create an Index Pattern to Connect to Elasticsearch
An _index pattern_ identifies one or more Elasticsearch indices that you want to
explore with Kibana. Kibana looks for index names that match the specified pattern.
An asterisk (*) in the pattern matches zero or more characters. For example, the pattern
`myindex-*` matches all indices whose names start with `myindex-`, such as `myindex-1`
and `myindex-2`.
If you use event times to create index names (for example, if you're pushing data
@ -25,7 +25,7 @@ into Elasticsearch from Logstash), the index pattern can also contain a date for
In this case, the static text in the pattern must be enclosed in brackets, and you
specify the date format using the tokens described in <<date-format-tokens>>.
For example, `[logstash-]YYYY.MM.DD` matches all indices whose names have a
timestamp of the form `YYYY.MM.DD` appended to the prefix `logstash-`, such as
`logstash-2015.01.31` and `logstash-2015.02.01`.
@ -35,25 +35,25 @@ To create an index pattern to connect to Elasticsearch:
. Go to the *Settings > Indices* tab.
. Specify an index pattern that matches the name of one or more of your Elasticsearch
indices. By default, Kibana guesses that you're working with log data being
fed into Elasticsearch by Logstash.
+
NOTE: When you switch between top-level tabs, Kibana remembers where you were.
For example, if you view a particular index pattern from the Settings tab, switch
to the Discover tab, and then go back to the Settings tab, Kibana displays the
index pattern you last looked at. To get to the create pattern form, click
the *Add* button in the Index Patterns list.
. If your index contains a timestamp field that you want to use to perform
time-based comparisons, select the *Index contains time-based events* option
and select the index field that contains the timestamp. Kibana reads the
index mapping to list all of the fields that contain a timestamp.
. If new indices are generated periodically and have a timestamp appended to
the name, select the *Use event times to create index names* option and select
the *Index pattern interval*. This enables Kibana to search only those indices
that could possibly contain data in the time range you specify. This is
primarily applicable if you are using Logstash to feed data into Elasticsearch.
. Click *Create* to add the index pattern.
@ -152,7 +152,7 @@ To delete an index pattern:
=== Create a Scripted Field
Scripted fields compute data on the fly from the data in your
Elasticsearch indices. Scripted field data is shown on the Discover tab as
part of the document data, and you can use scripted fields in your visualizations.
(Scripted field values are computed at query time so they aren't indexed and
cannot be searched.)
@ -163,27 +163,15 @@ that there's no built-in validation of a scripted field. If your scripts are
buggy, you'll get exceptions whenever you try to view the dynamically generated
data.
Scripted fields use the Lucene expression syntax. For more information,
see http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html#_lucene_expressions_scripts[Lucene Expressions Scripts].
You can reference any single value numeric field in your expressions, for example:
----
doc['field_name'].value
----
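For instance, assuming the index contains a numeric `bytes` field (a hypothetical field, not from the original examples), an expression that reports the size in kilobytes would be:
----
doc['bytes'].value / 1024
----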
To create a scripted field:
. Go to *Settings > Indices*
@ -196,12 +184,8 @@ TIP: If you are just getting started with scripted fields, you can click
you can use as a starting point.
. Enter a name for the scripted field.
. Enter the expression that you want to use to compute a value on the fly
from your index data.
. Click *Save Scripted Field*.
For more information about scripted fields in Elasticsearch, see
@ -286,55 +270,55 @@ To delete a saved object:
. Click the *Delete* button.
. Confirm that you really want to delete the object.
=== Setting Kibana Server Properties
The Kibana server reads properties from the `kibana.yml` file on startup. The default
settings configure Kibana to run on `localhost:5601`. To change the host or port number, or
connect to Elasticsearch running on a different machine, you'll need to update your `kibana.yml` file. You can also enable SSL and set a variety of other options.
.Kibana Server Properties
|===
|Property |Description
|`port`
|The port that the Kibana server runs on. Default: `port: 5601`.
|`host`
|The host to bind the Kibana server to. Default: `host: "0.0.0.0"`.
|`elasticsearch_url`
|The Elasticsearch instance where the indexes you want to query reside. Default: `elasticsearch_url: "http://localhost:9200"`.
|`elasticsearch_preserve_host`
|By default, the hostname specified in `elasticsearch_url` is used as the host header in requests to Elasticsearch. To use the hostname of the Kibana server instead, set this option to `false`. Default: `elasticsearch_preserve_host: true`.
|`kibana_index`
|The name of the index where saved searches, visualizations, and dashboards will be stored. Default: `kibana_index: .kibana`.
|`default_app_id`
|The page that will be displayed when you launch Kibana: `discover`, `visualize`, `dashboard`, or `settings`. Default: `default_app_id: "discover"`.
|`request_timeout`
|How long to wait for responses from the Kibana backend or Elasticsearch, in milliseconds. Default: `request_timeout: 500000`.
|`shard_timeout`
|How long Elasticsearch should wait for responses from shards. Set to 0 to disable. Default: `shard_timeout: 0`.
|`verify_ssl`
|Indicates whether or not to validate the Elasticsearch SSL certificate. Set to false to disable SSL verification. Default: `verify_ssl: true`.
|`ca`
|The path to the CA certificate for your Elasticsearch instance. Specify if you are using a self-signed certificate
so the certificate can be verified. (Otherwise, you have to disable `verify_ssl`.) Default: none.
|`ssl_key_file`
|The path to your Kibana server's key file. Must be set to encrypt communications between the browser and Kibana. Default: none.
|`ssl_cert_file`
|The path to your Kibana server's certificate file. Must be set to encrypt communications between the browser and Kibana. Default: none.
|`pid_file`
|The location where you want to store the process ID file. If not specified, the PID file is stored in `/var/run/kibana.pid`. Default: none.
|===

View file

@ -1,10 +1,10 @@
[[setup]]
== Getting Kibana Up and Running
You can set up Kibana and start exploring your Elasticsearch indices in minutes.
All you need is:
* Elasticsearch 1.4.3 or later
* An up-to-date web browser. For a list of officially supported browsers, see http://www.elasticsearch.com/support/matrix[Supported Browsers].
* Information about your Elasticsearch installation:
** URL of the Elasticsearch instance you want to connect to.
** Which index(es) you want to search. You can use the Elasticsearch http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cat-indices.html[`_cat/indices/`] command to list your indices.
@ -21,17 +21,18 @@ That's it! Kibana is now running on port 5601.
TIP: By default, Kibana connects to the Elasticsearch instance running on `localhost`. To connect to a different Elasticsearch instance, modify the Elasticsearch URL in the `kibana.yml` configuration file and restart Kibana. For information about using Kibana with your production nodes, see <<production>>.
=== Connect Kibana with Elasticsearch
Before you can start using Kibana, you need to tell it which Elasticsearch index(es) you want to explore. The first time you access Kibana, you are prompted to define an _index pattern_ that matches the name of one or more of your indices. That's it. That's all you need to configure to start using Kibana.
TIP: You can add index patterns at any time from the <<settings-create-pattern,Settings tab>>.
To configure the Elasticsearch index(es) you want to access with Kibana:
. Point your browser at port 5601 to access the Kibana UI. For example, `localhost:5601` or `http://YOURDOMAIN.com:5601`.
image:images/Discover-Start.jpg[Kibana start page]
. Specify an index pattern that matches the name of one or more of your Elasticsearch indices. By default, Kibana guesses that you're working with data being fed into Elasticsearch by Logstash. If that's the case, you can use the default `logstash-*` as your index pattern. The asterisk (*) matches zero or more characters in an index's name. If your Elasticsearch indices follow some other naming convention, enter an appropriate pattern. The "pattern" can also simply be the name of a single index.
. If your index contains a timestamp field that you want to use to perform time-based comparisons, select the index field that contains the timestamp. Kibana reads the index mapping to list all of the fields that contain a timestamp. If your index doesn't have time-based data, disable the *Index contains time-based events* option.
. If new indices are generated periodically and have a timestamp appended to the name, select the *Use event times to create index names* option and select the *Index pattern interval*. This improves search performance by enabling Kibana to search only those indices that could contain data in the time range you specify. This is primarily applicable if you are using Logstash to feed data into Elasticsearch.
. Click *Create* to add the index pattern. This first pattern is automatically configured as the default. When you have more than one index pattern, you can designate which one to use as the default from **Settings > Indices**.
Voila! Kibana is now connected to your Elasticsearch data. Kibana displays a read-only list of fields configured for the matching index.

View file

@ -12,7 +12,7 @@ data source types:
Visualizations rely on the {ref}/search-aggregations.html[aggregation] feature introduced in Elasticsearch 1.
[float]
[[getting-started]]
[[create-vis]]
=== Creating a New Visualization
To start the New Visualization wizard, click on the *Visualize* tab at the top left of the page. If you are already

View file

@ -35,26 +35,19 @@ performing computations on the fly
searches to visualizations and add the same visualization to multiple dashboards
* Visualizations support an unlimited number of nested aggregations so you can
display new types of visualizations, such as "doughnut" charts
* New URL format eliminates the need for templated and scripted dashboards
* Better mobile experience
* Faster dashboard loading thanks to a reduction in the number of HTTP calls needed to load the page
* SSL encryption for client requests as well as requests to and from Elasticsearch
* Search result highlighting
* Easy to access and export the data behind any visualization:
** View in a table or view as JSON
** Export in CSV format
** See the Elasticsearch request and response
* Share and embed individual visualizations as well as dashboards
=== Nuts and Bolts
* Ships with its own webserver and uses Node.js on the backend. Installation
binaries are provided for Linux, Windows, and Mac OS
* Uses the D3 framework to display visualizations