Incorporated review comments & added what's new topic.

This commit is contained in:
debadair 2015-02-10 16:59:04 -08:00
parent b9704c8984
commit 6d386a288f
5 changed files with 105 additions and 41 deletions

View file

@@ -15,4 +15,6 @@ include::visualize.asciidoc[]
include::dashboard.asciidoc[]
include::settings.asciidoc[]
include::whats-new.asciidoc[]

View file

@@ -1,10 +1,10 @@
[[introduction]]
== Introduction
Kibana is an open source (Apache Licensed) analytics and visualization tool
for Elasticsearch. You use Kibana to search, view, and interact with data in
Elasticsearch indexes. You can easily perform time-based comparisons and
visualize your data using a variety of charts, tables, and maps.
Kibana is an open source analytics and visualization platform designed to work
with Elasticsearch. You use Kibana to search, view, and interact with data
stored in Elasticsearch indexes. You can easily perform advanced data analysis
operations and visualize your data in a variety of charts, tables, and maps.
Kibana makes it easy to understand large volumes of data. Its simple,
browser-based interface enables you to quickly create and share dynamic
@@ -13,6 +13,11 @@ dashboards that display changes to Elasticsearch queries in real time.
Setting up Kibana is a snap. You can install Kibana and start exploring your
Elasticsearch indexes in minutes--no code, no additional infrastructure required.
NOTE: This guide describes how to use Kibana 4. For information about what's new
in Kibana 4, see <<whats-new>>. For information about Kibana 3,
see the http://www.elasticsearch.org/guide/en/kibana/current/index.html[Kibana 3 User Guide].
=== Data Discovery and Visualization
Let's take a look at how you might use Kibana to explore and visualize data.

View file

@@ -1,5 +1,5 @@
[[settings]]
== Configuring Kibana
== Configuring Kibana Settings
To use Kibana, you have to tell it about the Elasticsearch indexes that you
want to explore by configuring one or more index patterns. You can also:
@@ -127,6 +127,21 @@ To set a different pattern as the default index pattern:
NOTE: You can also manually set the default index pattern in *Advanced > Settings*.
=== Reload the Index Fields List
When you add an index mapping, Kibana automatically scans the index(es) that
match the pattern to display a list of the index fields. You can reload the
index fields list to pick up any newly-added fields.
Reloading the index fields list also resets Kibana's popularity counters for the fields.
The popularity counters keep track of the fields you've used most often within Kibana
and are used to sort fields within lists.
To reload the index fields list:
. Go to the *Settings > Indices* tab.
. Select an index pattern from the Index Patterns list.
. Click the pattern's *Reload* button.
=== Delete an Index Pattern
To delete an index pattern:
@@ -237,12 +252,20 @@ or *Dashboard* page. To view a saved object:
. Go to *Settings > Objects*.
. Select the object you want to view.
. Click the *View* button.
. Click the *Save* button.
Editing a saved object enables you to directly modify the object definition.
You can change the name of the object, add a description, and modify the
JSON that defines the object's properties.
If you attempt to access an object whose index has been deleted, Kibana displays
its Edit Object page. You can:
* Recreate the index so you can continue using the object.
* Delete the object and recreate it using a different index.
* Change the index name referenced in the object's `kibanaSavedObjectMeta.searchSourceJSON`
to point to an existing index pattern, as shown in the sketch below. This is useful if the index you were working
with has been renamed.
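For reference, here is a minimal sketch of the relevant portion of a saved object as it appears in the object editor. The `logstash-*` value and the query details are illustrative only; your object will contain its own values:

----
{
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"index\":\"logstash-*\",\"query\":{\"query_string\":{\"query\":\"*\"}},\"filter\":[]}"
  }
}
----

Changing the `index` value inside `searchSourceJSON` to the name of an existing index pattern and saving the object lets Kibana resolve the object again.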
WARNING: No validation is performed for object properties. Submitting invalid
changes will render the object unusable. Generally, you should use the
*Discover*, *Visualize*, or *Dashboard* pages to create new objects instead of
@@ -311,31 +334,7 @@ the query that populates a Kibana visualization, the user just sees an empty
visualization.
To configure access to Kibana using Shield, you create one or more Shield roles
for Kibana using the `kibana4` default role as a starting point. For example,
the following role grants access to the `logstash-*` indices from Kibana:
----
kibana-log-analysis:
cluster: cluster:monitor/nodes/info, cluster:monitor/health
indices:
'logstash-*':
- indices:admin/mappings/fields/get
- indices:admin/validate/query
- indices:data/read/search
- indices:data/read/msearch
- indices:admin/get
'.kibana':
- indices:admin/exists
- indices:admin/mapping/put
- indices:admin/mappings/fields/get
- indices:admin/refresh
- indices:admin/validate/query
- indices:data/read/get
- indices:data/read/mget
- indices:data/read/search
- indices:data/write/delete
- indices:data/write/index
- indices:data/write/update
----
for Kibana using the `kibana4` default role as a starting point. For more
information, see http://www.elasticsearch.org/guide/en/shield/current/_shield_with_kibana_4.html[Using Shield with Kibana 4].

View file

@@ -14,16 +14,14 @@ To get Kibana up and running:
. Download the http://www.elasticsearch.org/overview/kibana/installation/[Kibana binary package] for your platform.
. Extract the `.zip` or `tar.gz` archive file.
. Run Kibana from the install directory: `bin/kibana` (Linux/MacOSX) or `bin/kibana.bat` (Windows).
. Run Kibana from the install directory: `bin/kibana` (Linux/MacOSX) or `bin\kibana.bat` (Windows).
That's it! Kibana is now running on port 5601.
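To recap, on Linux the complete sequence might look like the following sketch. The archive name is illustrative; substitute the file you actually downloaded:

----
# Extract the downloaded archive (file name is illustrative)
tar -xzf kibana-4.0.1-linux-x64.tar.gz
cd kibana-4.0.1-linux-x64

# Start Kibana; by default it listens on port 5601
bin/kibana
----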
TIP: By default, Kibana connects to the Elasticsearch instance running on `localhost`. To connect to a different Elasticsearch instance,
modify the Elasticsearch URL in the `kibana.yml` configuration file and restart Kibana. For information about using Kibana with your production nodes, see <<production>>.
TIP: By default, Kibana connects to the Elasticsearch instance running on `localhost`. To connect to a different Elasticsearch instance, modify the Elasticsearch URL in the `kibana.yml` configuration file and restart Kibana. For information about using Kibana with your production nodes, see <<production>>.
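For example, a minimal `kibana.yml` change that points Kibana at a remote Elasticsearch node might look like this. The host name is a placeholder, and the exact setting name can vary between Kibana versions:

----
# kibana.yml (excerpt): point Kibana at a non-local Elasticsearch instance
elasticsearch_url: "http://elasticsearch.example.com:9200"
----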
=== Connect Kibana with Elasticsearch
Before you can start using Kibana, you need to tell it which Elasticsearch index(es) you want to explore. The first time
you access Kibana, you are prompted to define an _index pattern_ that matches the name of one or more of your indexes. That's it. That's all you need to configure to start using Kibana.
Before you can start using Kibana, you need to tell it which Elasticsearch index(es) you want to explore. The first time you access Kibana, you are prompted to define an _index pattern_ that matches the name of one or more of your indexes. That's it. That's all you need to configure to start using Kibana.
TIP: You can add index patterns at any time from the <<settings-create-pattern,Settings tab>>.
@@ -31,9 +29,9 @@ To configure the Elasticsearch index(es) you want to access with Kibana:
. Point your browser at port 5601 to access the Kibana UI. For example, `localhost:5601` or `http://YOURDOMAIN.com:5601`.
// image::images/kibana-start.jpg[Kibana start page]
. Specify an index pattern that matches the name of one or more of your Elasticsearch indexes. By default, Kibana guesses that you're working with log data being fed into Elasticsearch by Logstash. If that's the case, you can use the default `logstash-*` as your index pattern. The asterisk (*) matches zero or more characters in an index's name. If your Elasticsearch indexes follow some other naming convention, enter an appropriate pattern. (The "pattern" can also simply be the name of a single index.)
. If your index contains a timestamp field that you want to use to perform time-based comparisons, select the *Index contains time-based events* option and select the index field that contains the timestamp. (Kibana reads the index mapping to list all of the fields that contain a timestamp.)
. If new indexes are generated periodically and have a timestamp appended to the name, select the *Use event times to create index names* option and select the *Index pattern interval*. This enables Kibana to search only those indices that could possibly contain data in the time range you specify. (This is primarily applicable if you are using Logstash to feed data into Elasticsearch.)
. Specify an index pattern that matches the name of one or more of your Elasticsearch indexes. By default, Kibana guesses that you're working with data being fed into Elasticsearch by Logstash. If that's the case, you can use the default `logstash-*` as your index pattern. The asterisk (*) matches zero or more characters in an index's name. If your Elasticsearch indexes follow some other naming convention, enter an appropriate pattern. (The "pattern" can also simply be the name of a single index.)
. If your index contains a timestamp field that you want to use to perform time-based comparisons, select the index field that contains the timestamp. (Kibana reads the index mapping to list all of the fields that contain a timestamp.) If your index doesn't have time-based data, disable the *Index contains time-based events* option.
. If new indexes are generated periodically and have a timestamp appended to the name, select the *Use event times to create index names* option and select the *Index pattern interval*. This improves search performance by enabling Kibana to search only those indices that could contain data in the time range you specify. (This is primarily applicable if you are using Logstash to feed data into Elasticsearch; see the example after these steps.)
. Click *Create* to add the index pattern. This first pattern is automatically configured as the default. When you have more than one index pattern, you can designate which one to use as the default from **Settings > Indices**.
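As a sketch of how the time-based options fit together, assume Logstash writes one index per day. The index names and the matching pattern might look like this (names and dates are illustrative):

----
# Daily indices created by Logstash
logstash-2015.02.08
logstash-2015.02.09
logstash-2015.02.10

# Matching index name pattern, with the Index pattern interval set to Daily
[logstash-]YYYY.MM.DD
----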
Voila! Kibana is now connected to your Elasticsearch data. Kibana displays a read-only list of fields configured for the matching index.

docs/whats-new.asciidoc (new file, 60 lines added)
View file

@@ -0,0 +1,60 @@
[[whats-new]]
== What's New in Kibana 4
Kibana 4 provides dozens of new features that enable you to compose questions,
get answers, and solve problems like never before. It has a brand-new look and
feel and improved workflows for discovering and visualizing your data and
building and sharing dashboards.
=== Key Features
* New data search and discovery interface
* Unified visualization builder for your favorite visualizations and some brand
new ones:
** Area Chart
** Data Table
** Line Chart
** Markdown Text Widget
** Metric
** Pie Chart (including "doughnut" charts)
** Raw Document Widget
** Tile Map
** Vertical Bar Chart
* Drag and drop dashboard builder that enables you to quickly add, rearrange,
resize, and remove visualizations
* Advanced aggregation-based analytics capabilities, including support for:
** Unique counts (cardinality)
** Non-date histograms
** Ranges
** Significant terms
** Percentiles
* Expression-based scripted fields enable you to perform ad hoc analysis by
computing values on the fly (see the example below)
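As an illustration, a scripted field defined with the following expression would report a numeric `bytes` field in kilobytes, computed at query time. The field name is an assumption, and the syntax shown is that of Lucene expressions:

----
doc['bytes'].value / 1024
----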
=== Improvements
* Ability to save searches and visualizations enables you to link
searches to visualizations and add the same visualization to multiple dashboards
* Visualizations support an unlimited number of nested aggregations so you can
display new types of visualizations, such as "doughnut" charts
* New URL format eliminates the need for templated and scripted dashboards.
* Better mobile experience
* Faster dashboard loading thanks to a reduction in the number of HTTP calls needed to load the page
* SSL encryption for client requests as well as requests to and from Elasticsearch
* Search result highlighting
* Easily access and export the data behind any visualization:
** View in a table or view as JSON
** Export in CSV format
** See the Elasticsearch request and response
* Easily share and embed individual visualizations as well as dashboards
=== Nuts and Bolts
* Ships with its own webserver and uses Node.js on the backend. Installation
binaries are provided for Linux, Windows, and Mac OS.
* Uses the D3 framework to display visualizations.