Merge branch '5.0' into supportCloudTesting

README.md

@@ -1,4 +1,4 @@
# Kibana 5.0.0
# Kibana 5.0.1

Kibana is an open source ([Apache Licensed](https://github.com/elastic/kibana/blob/master/LICENSE.md)), browser based analytics and search dashboard for Elasticsearch. Kibana is a snap to set up and start using. Kibana strives to be easy to get started with, while also being flexible and powerful, just like Elasticsearch.

@@ -58,7 +58,7 @@ For the daring, snapshot builds are available. These builds are created after ea
| platform | |
| --- | --- |
| OSX | [tar](http://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.0-SNAPSHOT-darwin-x86_64.tar.gz) |
| Linux x64 | [tar](http://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.0-SNAPSHOT-linux-x86_64.tar.gz) [deb](https://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.0-SNAPSHOT-amd64.deb) [rpm](https://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.0-SNAPSHOT-x86_64.rpm) |
| Linux x86 | [tar](http://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.0-SNAPSHOT-linux-x86.tar.gz) [deb](https://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.0-SNAPSHOT-i386.deb) [rpm](https://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.0-SNAPSHOT-i686.rpm) |
| Windows | [zip](http://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.0-SNAPSHOT-windows-x86.zip) |
| OSX | [tar](http://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.1-SNAPSHOT-darwin-x86_64.tar.gz) |
| Linux x64 | [tar](http://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.1-SNAPSHOT-linux-x86_64.tar.gz) [deb](https://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.1-SNAPSHOT-amd64.deb) [rpm](https://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.1-SNAPSHOT-x86_64.rpm) |
| Linux x86 | [tar](http://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.1-SNAPSHOT-linux-x86.tar.gz) [deb](https://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.1-SNAPSHOT-i386.deb) [rpm](https://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.1-SNAPSHOT-i686.rpm) |
| Windows | [zip](http://download.elastic.co/kibana/kibana-snapshot/kibana-5.0.1-SNAPSHOT-windows-x86.zip) |
@@ -1,12 +0,0 @@
[[kibana-apps]]
== Kibana Apps

The Kibana UI serves as a framework that can contain several different applications. You can switch between these
applications by clicking the image:images/app-button.png[App Picker] *App picker* button to display the app bar:

image::images/app-picker.png[]

Click an app icon to switch to that app's functionality.

Applications in the Kibana UI are managed by <<kibana-plugins,_plugins_>>. Plugins can expose app functionality or add new
visualization types.
@@ -1,12 +0,0 @@
[[breaking-changes]]
== Breaking Changes

This section discusses the changes that you need to be aware of when migrating
your application from one version of Kibana to another.

[[breaking-changes-5.0]]
=== Breaking changes in 5.0
* {k4pull}8013[Pull Request 8013]: Kibana binds to localhost by default
* {k4pull}7855[Pull Request 7855]: Markdown headers require a space between the final hash and the title
* {k4pull}7308[Pull Request 7308]: Debian and rpm packages install assets to `/usr/share/kibana` and configuration to `/etc/kibana`
* {k4pull}6402[Pull Request 6402]: The plugin installer now has its own executable, which can be found at `/bin/kibana-plugin`
@@ -1,6 +1,8 @@
[[console-kibana]]
== Console for Kibana
= Console

[partintro]
--
The Console plugin provides a UI to interact with the REST API of Elasticsearch. Console has two main areas: the *editor*,
where you compose requests to Elasticsearch, and the *response* pane, which displays the responses to the request.
Enter the address of your Elasticsearch server in the text box at the top of the screen. The default value of this address
@@ -63,111 +65,22 @@ but you can easily change this by entering a different url in the Server input:
.The Server Input
image::images/introduction_server.png["Server",width=400,align="center"]

[NOTE]
Console is a development tool and is configured by default to run on a laptop. If you install it on a server, see
<<securing_console>> for instructions on how to make it secure.

[float]
[[console-ui]]
== The Console UI

In this section you will find a more detailed description of the Console UI. The basic aspects of the UI are explained
in the <<console-kibana>> section.
--

[[multi-req]]
=== Multiple Requests Support
include::console/multi-requests.asciidoc[]

The Console editor allows writing multiple requests below each other. As shown in the <<console-kibana>> section, you
can submit a request to Elasticsearch by positioning the cursor and using the <<action_menu,Action Menu>>. Similarly,
you can select multiple requests in one go:
include::console/auto-formatting.asciidoc[]

.Selecting Multiple Requests
image::images/multiple_requests.png[Multiple Requests]
include::console/keyboard-shortcuts.asciidoc[]

Console will send the requests one by one to Elasticsearch and show the output on the right pane as Elasticsearch responds.
This is very handy when debugging an issue or trying query combinations in multiple scenarios.
include::console/history.asciidoc[]

Selecting multiple requests also allows you to auto format and copy them as cURL in one go.


[[auto_formatting]]
=== Auto Formatting

Console allows you to auto format messy requests. To do so, position the cursor on the request you would like to format
and select Auto Indent from the action menu:

.Auto Indent a request
image::images/auto_format_before.png["Auto format before",width=500,align="center"]

Console will adjust the JSON body of the request, which will now look like this:

.A formatted request
image::images/auto_format_after.png["Auto format after",width=500,align="center"]

If you select Auto Indent on a request that is already perfectly formatted, Console will collapse the
request body to a single line per document. This is very handy when working with Elasticsearch's bulk APIs:

.One doc per line
image::images/auto_format_bulk.png["Auto format bulk",width=550,align="center"]


[[keyboard_shortcuts]]
=== Keyboard shortcuts

Console comes with a set of nifty keyboard shortcuts that make working with it even more efficient. Here is an overview:

==== General editing

Ctrl/Cmd + I:: Auto indent current request.
Ctrl + Space:: Open Auto complete (even if not typing).
Ctrl/Cmd + Enter:: Submit request.
Ctrl/Cmd + Up/Down:: Jump to the previous/next request start or end.
Ctrl/Cmd + Alt + L:: Collapse/expand current scope.
Ctrl/Cmd + Option + 0:: Collapse all scopes but the current one. Expand by adding a shift.

==== When auto-complete is visible

Down arrow:: Switch focus to auto-complete menu. Use arrows to further select a term.
Enter/Tab:: Select the currently selected or the topmost term in the auto-complete menu.
Esc:: Close auto-complete menu.


=== History

Console maintains a list of the last 500 requests that were successfully executed by Elasticsearch. The history
is available by clicking the clock icon on the top right side of the window. The icon opens the history panel,
where you can see previous requests. You can also select a request here, and it will be added to the editor at
the current cursor position.

.History Panel
image::images/history.png["History Panel"]


=== Settings

Console has multiple settings you can configure. All of them are available in the Settings panel. To open the panel,
click the cog icon on the top right.

.Settings Panel
image::images/settings.png["Settings Panel"]

[[securing_console]]
=== Securing Console

Console is meant to be used as a local development tool. As such, it will send requests to any host & port combination,
just as a local curl command would. To overcome the CORS limitations enforced by browsers, Console's Node.js backend
serves as a proxy that sends requests on behalf of the browser. However, if Console is put on a server and exposed to
the Internet, this can become a security risk. In those cases, we highly recommend that you lock down the proxy by
setting the `console.proxyFilter` setting. The setting accepts a list of regular expressions that are evaluated against
each URL the proxy is asked to retrieve. If none of the regular expressions match, the proxy will reject the request.

Here is an example configuration that only allows Console to connect to localhost:

[source,yaml]
--------
console.proxyFilter:
  - ^https?://(localhost|127\.0\.0\.1|\[::0\]).*
--------

Restart Kibana for these changes to take effect.
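Because the filter is a list and a request is allowed if any pattern matches, additional hosts can be whitelisted by appending patterns. A sketch, assuming the localhost example above; the second host below is a placeholder, not a value from this document:

[source,yaml]
--------
console.proxyFilter:
  - ^https?://(localhost|127\.0\.0\.1|\[::0\]).*
  - ^https?://es\.internal\.example\.com:9200/.*  # hypothetical internal host
--------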
include::console/settings.asciidoc[]

include::console/disabling-console.asciidoc[]
docs/console/auto-formatting.asciidoc (new file)

@@ -0,0 +1,19 @@
[[auto-formatting]]
== Auto Formatting

Console allows you to auto format messy requests. To do so, position the cursor on the request you would like to format
and select Auto Indent from the action menu:

.Auto Indent a request
image::images/auto_format_before.png["Auto format before",width=500,align="center"]

Console will adjust the JSON body of the request, which will now look like this:

.A formatted request
image::images/auto_format_after.png["Auto format after",width=500,align="center"]

If you select Auto Indent on a request that is already perfectly formatted, Console will collapse the
request body to a single line per document. This is very handy when working with Elasticsearch's bulk APIs:

.One doc per line
image::images/auto_format_bulk.png["Auto format bulk",width=550,align="center"]
docs/console/disabling-console.asciidoc (new file)

@@ -0,0 +1,10 @@
[[disabling-console]]
== Disable Console

If your users have no need to access any of the Console functionality, you can disable it completely, so that it does
not even show up as an available app, by setting the `console.enabled` Kibana server setting to `false`:

[source,yaml]
--------
console.enabled: false
--------
docs/console/history.asciidoc (new file)

@@ -0,0 +1,10 @@
[[history]]
== History

Console maintains a list of the last 500 requests that were successfully executed by Elasticsearch. The history
is available by clicking the clock icon on the top right side of the window. The icon opens the history panel,
where you can see previous requests. You can also select a request here, and it will be added to the editor at
the current cursor position.

.History Panel
image::images/history.png["History Panel"]
docs/console/keyboard-shortcuts.asciidoc (new file)

@@ -0,0 +1,21 @@
[[keyboard-shortcuts]]
== Keyboard shortcuts

Console comes with a set of nifty keyboard shortcuts that make working with it even more efficient. Here is an overview:

[float]
=== General editing

Ctrl/Cmd + I:: Auto indent current request.
Ctrl + Space:: Open Auto complete (even if not typing).
Ctrl/Cmd + Enter:: Submit request.
Ctrl/Cmd + Up/Down:: Jump to the previous/next request start or end.
Ctrl/Cmd + Alt + L:: Collapse/expand current scope.
Ctrl/Cmd + Option + 0:: Collapse all scopes but the current one. Expand by adding a shift.

[float]
=== When auto-complete is visible

Down arrow:: Switch focus to auto-complete menu. Use arrows to further select a term.
Enter/Tab:: Select the currently selected or the topmost term in the auto-complete menu.
Esc:: Close auto-complete menu.
docs/console/multi-requests.asciidoc (new file)

@@ -0,0 +1,14 @@
[[multi-requests]]
== Multiple Requests Support

The Console editor allows writing multiple requests below each other. As shown in the <<console-kibana>> section, you
can submit a request to Elasticsearch by positioning the cursor and using the <<action_menu,Action Menu>>. Similarly,
you can select multiple requests in one go:

.Selecting Multiple Requests
image::images/multiple_requests.png[Multiple Requests]

Console will send the requests one by one to Elasticsearch and show the output on the right pane as Elasticsearch responds.
This is very handy when debugging an issue or trying query combinations in multiple scenarios.

Selecting multiple requests also allows you to auto format and copy them as cURL in one go.
docs/console/settings.asciidoc (new file)

@@ -0,0 +1,8 @@
[[console-settings]]
== Settings

Console has multiple settings you can configure. All of them are available in the Settings panel. To open the panel,
click the cog icon on the top right.

.Settings Panel
image::images/settings.png["Settings Panel"]
@ -1,152 +1,153 @@
|
|||
[[dashboard]]
|
||||
== Dashboard
|
||||
= Dashboard
|
||||
|
||||
A Kibana _dashboard_ displays a set of saved visualizations in groups that you can arrange freely. You can save a
|
||||
dashboard to share or reload at a later time.
|
||||
[partintro]
|
||||
--
|
||||
A Kibana _dashboard_ displays a collection of saved visualizations. You can
|
||||
arrange and resize the visualizations as needed and save dashboards so
|
||||
they be reloaded and shared.
|
||||
|
||||
.Sample dashboard
|
||||
image:images/tutorial-dashboard.png[Example dashboard]
|
||||
--
|
||||
|
||||
[float]
|
||||
[[dashboard-getting-started]]
|
||||
=== Getting Started
|
||||
== Building a Dashboard
|
||||
|
||||
You need at least one saved <<visualize, visualization>> to use a dashboard.
|
||||
To build a dashboard:
|
||||
|
||||
[float]
|
||||
[[creating-a-new-dashboard]]
|
||||
==== Building a New Dashboard
|
||||
. Click *Dashboard* in the side navigation. If you haven't previously viewed a
|
||||
dashboard, Kibana displays an empty dashboard. Otherwise, click *New* to start
|
||||
building your dashboard.
|
||||
+
|
||||
image:images/NewDashboard.png[New Dashboard]
|
||||
|
||||
The first time you click the *Dashboard* tab, Kibana displays an empty dashboard.
|
||||
|
||||
image:images/NewDashboard.png[New Dashboard screen]
|
||||
|
||||
Build your dashboard by adding visualizations. By default, Kibana dashboards use a light color theme. To use a dark color
|
||||
theme instead, click the *Options* button and check the *Use dark theme* box.
|
||||
|
||||
NOTE: You can change the default theme in the *Advanced* section of the *Settings* tab.
|
||||
|
||||
[float]
|
||||
[[dash-autorefresh]]
|
||||
include::autorefresh.asciidoc[]
|
||||
|
||||
[float]
|
||||
[[adding-visualizations-to-a-dashboard]]
|
||||
==== Adding Visualizations to a Dashboard
|
||||
. To add a visualization to the dashboard, click *Add* and select the
|
||||
visualization. If you have a large number of visualizations, you can enter a
|
||||
*Filter* string to filter the list.
|
||||
+
|
||||
Kibana displays the selected visualization in a container on the dashboard.
|
||||
If you see a message that the container is too small, you can
|
||||
<<resizing-containers,resize the visualization>>.
|
||||
+
|
||||
NOTE: By default, Kibana dashboards use a light color theme. To use a dark color theme,
|
||||
click *Options* and select *Use dark theme*. To change the default theme, go
|
||||
to *Management/Kibana/Advanced Settings* and set `dashboard:defaultDarkTheme`
|
||||
to `true`.
|
||||
|
||||
To add a visualization to the dashboard, click the *Add* button in the toolbar panel. Select a saved visualization
|
||||
from the list. You can filter the list of visualizations by typing a filter string into the *Visualization Filter*
|
||||
field.
|
||||
|
||||
The visualization you select appears in a _container_ on your dashboard.
|
||||
|
||||
NOTE: If you see a message about the container's height or width being too small, <<resizing-containers,resize the
|
||||
container>>.
|
||||
|
||||
[float]
|
||||
[[saving-dashboards]]
|
||||
==== Saving Dashboards
|
||||
|
||||
To save the dashboard, click the *Save Dashboard* button in the toolbar panel, enter a name for the dashboard in the
|
||||
*Save As* field, and click the *Save* button. By default, dashboards store the time period specified in the time filter
|
||||
when you save a dashboard. To disable this behavior, clear the *Store time with dashboard* box before clicking the
|
||||
*Save* button.
|
||||
|
||||
[float]
|
||||
[[loading-a-saved-dashboard]]
|
||||
==== Loading a Saved Dashboard
|
||||
|
||||
Click the *Load Saved Dashboard* button to display a list of existing dashboards. The saved dashboard selector includes
|
||||
a text field to filter by dashboard name and a link to the Object Editor for managing your saved dashboards. You can
|
||||
also access the Object Editor by clicking *Settings > Objects*.
|
||||
|
||||
[float]
|
||||
[[sharing-dashboards]]
|
||||
==== Sharing Dashboards
|
||||
|
||||
You can share dashboards with other users. You can share a direct link to the Kibana dashboard or embed the dashboard
|
||||
in your Web page.
|
||||
|
||||
NOTE: A user must have Kibana access in order to view embedded dashboards.
|
||||
|
||||
To share a dashboard, click the *Share* button image:images/share-dashboard.png[] to display the _Sharing_ panel.
|
||||
|
||||
Click the *Copy to Clipboard* button image:images/share-link.png[] to copy the native URL or embed HTML to the clipboard.
|
||||
Click the *Generate short URL* button image:images/share-short-link.png[] to create a shortened URL for sharing or
|
||||
embedding.
|
||||
|
||||
[float]
|
||||
[[embedding-dashboards]]
|
||||
==== Embedding Dashboards
|
||||
|
||||
To embed a dashboard, copy the embed code from the _Share_ display into your external web application.
|
||||
. When you're done adding and arranging visualizations, click *Save* to save the
|
||||
dashboard:
|
||||
.. Enter a name for the dashboard.
|
||||
.. To store the time period specified in the time filter with the dashboard, select
|
||||
*Store time with dashboard*.
|
||||
.. Click the *Save* button to store it as a Kibana saved object.
|
||||
|
||||
[float]
|
||||
[[customizing-your-dashboard]]
|
||||
=== Customizing Dashboard Elements
|
||||
=== Arranging Dashboard Elements
|
||||
|
||||
The visualizations in your dashboard are stored in resizable _containers_ that you can arrange on the dashboard. This
|
||||
section discusses customizing these containers.
|
||||
The visualizations in your dashboard are stored in resizable, moveable containers.
|
||||
|
||||
[float]
|
||||
[[moving-containers]]
|
||||
==== Moving Containers
|
||||
==== Moving Visualizations
|
||||
|
||||
Click and hold a container's header to move the container around the dashboard. Other containers will shift as needed
|
||||
to make room for the moving container. Release the mouse button to confirm the container's new location.
|
||||
To reposition a visualization:
|
||||
|
||||
. Hover over it to display the container controls.
|
||||
. Click and hold the *Move* button in the upper right corner of the container.
|
||||
. Drag the container to its new position.
|
||||
. Release the *Move* button.
|
||||
|
||||
[float]
|
||||
[[resizing-containers]]
|
||||
==== Resizing Containers
|
||||
==== Resizing Visualizations
|
||||
|
||||
Move the cursor to the bottom right corner of the container until the cursor changes to point at the corner. After the
|
||||
cursor changes, click and drag the corner of the container to change the container's size. Release the mouse button to
|
||||
confirm the new container size.
|
||||
To resize a visualization:
|
||||
|
||||
. Hover over it to display the container controls.
|
||||
. Click and hold the *Resize* button in the bottom right corner of the container.
|
||||
. Drag to change the dimensions of the container.
|
||||
. Release the *Resize* button.
|
||||
|
||||
[float]
|
||||
[[removing-containers]]
|
||||
==== Removing Containers
|
||||
==== Removing Visualizations
|
||||
|
||||
Click the *x* icon at the top right corner of a container to remove that container from the dashboard. Removing a
|
||||
container from a dashboard does not delete the saved visualization in that container.
|
||||
To remove a visualization from the dashboard:
|
||||
|
||||
. Hover over it to display the container controls.
|
||||
. Click the *Delete* button in the upper right corner of the container.
|
||||
+
|
||||
NOTE: Removing a visualization from a dashboard does _not_ delete the
|
||||
saved visualization.
|
||||
|
||||
[float]
|
||||
[[viewing-detailed-information]]
|
||||
==== Viewing Detailed Information
|
||||
=== Viewing Visualization Data
|
||||
|
||||
To display the raw data behind the visualization, click the bar at the bottom of the container. Tabs with detailed
|
||||
information about the raw data replace the visualization, as in this example:
|
||||
To display the raw data behind a visualization:
|
||||
|
||||
.Table
|
||||
A representation of the underlying data, presented as a paginated data grid. You can sort the items
|
||||
in the table by clicking on the table headers at the top of each column.
|
||||
. Hover over it to display the container controls.
|
||||
. Click the *Expand* button in the lower left corner of the container.
|
||||
This displays a table that contains the underlying data. You can also view
|
||||
the raw Elasticsearch request and response in JSON and the request statistics.
|
||||
The request statistics show the query duration, request duration, total number
|
||||
of matching records, and the index (or index pattern) that was searched.
|
||||
+
|
||||
image:images/NYCTA-Table.jpg[]
|
||||
|
||||
.Request
|
||||
The raw request used to query the server, presented in JSON format.
|
||||
image:images/NYCTA-Request.jpg[]
|
||||
To export the data behind the visualization as a comma-separated-values
|
||||
(CSV) file, click the *Raw* or *Formatted* link at the bottom of the data
|
||||
Table. *Raw* exports the data as it is stored in Elasticsearch. *Formatted*
|
||||
exports the results of any applicable Kibana <<managing-fields,field
|
||||
formatters>>.
|
||||
|
||||
.Response
|
||||
The raw response from the server, presented in JSON format.
|
||||
image:images/NYCTA-Response.jpg[]
|
||||
|
||||
.Statistics
|
||||
A summary of the statistics related to the request and the response, presented as a data grid. The data
|
||||
grid includes the query duration, the request duration, the total number of records found on the server, and the
|
||||
index pattern used to make the query.
|
||||
image:images/NYCTA-Statistics.jpg[]
|
||||
|
||||
To export the raw data behind the visualization as a comma-separated-values (CSV) file, click on either the
|
||||
*Raw* or *Formatted* links at the bottom of any of the detailed information tabs. A raw export contains the data as it
|
||||
is stored in Elasticsearch. A formatted export contains the results of any applicable Kibana [field formatters].
|
||||
To return to the visualization, click the *Collapse* button in the lower left
|
||||
corner of the container.
|
||||
|
||||
[float]
|
||||
[[changing-the-visualization]]
|
||||
=== Changing the Visualization
|
||||
=== Modifying a Visualization
|
||||
|
||||
To open a visualization in the Visualization Editor:
|
||||
|
||||
. Hover over it to display the container controls.
|
||||
. Click the *Edit* button in the upper right corner of the container.
|
||||
|
||||
|
||||
[[loading-a-saved-dashboard]]
|
||||
== Loading a Dashboard
|
||||
|
||||
To open a saved dashboard:
|
||||
|
||||
. Click *Dashboard* in the side navigation.
|
||||
. Click *Open* and select a dashboard. If you have a large number of
|
||||
dashboards, you can enter a *Filter* string to filter the list.
|
||||
+
|
||||
TIP: To import, export, and delete dashboards, click the *Manage Dashboards* link
|
||||
to open *Management/Kibana/Saved Objects/Dashboards*.
|
||||
|
||||
[[sharing-dashboards]]
|
||||
== Sharing a Dashboard
|
||||
|
||||
You can can share a direct link to a Kibana dashboard with another user,
|
||||
or embed the dashboard in a web page. Users must have Kibana access
|
||||
to view embedded dashboards.
|
||||
|
||||
[[embedding-dashboards]]
|
||||
To share a dashboard:
|
||||
|
||||
. Click *Dashboard* in the side navigation.
|
||||
. Open the dashboard you want to share.
|
||||
. Click *Share*.
|
||||
. Copy the link you want to share or the iframe you want to embed. You can
|
||||
share the live dashboard or a static snapshot of the current point in time.
|
||||
+
|
||||
TIP: When sharing a link to a dashboard snapshot, use the *Short URL*. Snapshot
|
||||
URLs are long and can be problematic for Internet Explorer users and other
|
||||
tools.
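The embed code copied from the Share panel is an HTML `<iframe>` pointing at the dashboard URL. A minimal sketch of what pasting it into a web page looks like; the host, port, and dashboard name below are placeholders, not values from this document:

[source,html]
--------
<!-- Hypothetical embed snippet: replace the src value with the embed
     code copied from your own dashboard's Share panel. -->
<iframe src="http://localhost:5601/app/kibana#/dashboard/My-Dashboard?embed=true"
        height="600" width="800"></iframe>
--------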
Click the _Edit_ button image:images/EditVis.png[Pencil button] at the top right of a container to open the
visualization in the <<visualize,Visualize>> page.

[float]
[[dashboard-filters]]
include::filter-pinning.asciidoc[]
@@ -1,5 +1,8 @@
[[discover]]
== Discover
= Discover

[partintro]
--
You can interactively explore your data from the Discover page. You have access to every document in every index that
matches the selected index pattern. You can submit search queries, filter the search results, and view document data.
You can also see the number of documents that match the search query and get field value statistics. If a time field is

@@ -7,226 +10,18 @@ configured for the selected index pattern, the distribution of documents over ti
top of the page.

image::images/Discover-Start-Annotated.jpg[Discover Page]
--

[float]
[[set-time-filter]]
=== Setting the Time Filter
The Time Filter restricts the search results to a specific time period. You can set a time filter if your index
contains time-based events and a time field is configured for the selected index pattern.
include::discover/set-time-filter.asciidoc[]

By default the time filter is set to the last 15 minutes. You can use the Time Picker to change the time filter
or select a specific time interval or time range in the histogram at the top of the page.

To set a time filter with the Time Picker:

. Click the Time Filter displayed in the upper right corner of the menu bar to open the Time Picker.
. To set a quick filter, simply click one of the shortcut links.
. To specify a relative Time Filter, click *Relative* and enter the relative start time. You can specify
the relative start time as any number of seconds, minutes, hours, days, months, or years ago.
. To specify an absolute Time Filter, click *Absolute* and enter the start date in the *From* field and the end date in
the *To* field.
. Click the caret at the bottom of the Time Picker to hide it.

To set a Time Filter from the histogram, do one of the following:

* Click the bar that represents the time interval you want to zoom in on.
* Click and drag to view a specific timespan. You must start the selection with the cursor over the background of the
chart--the cursor changes to a plus sign when you hover over a valid start point.

You can use the browser Back button to undo your changes.

The histogram lists the time range you're currently exploring, as well as the intervals that range is currently using.
To change the intervals, click the link and select an interval from the drop-down. The default behavior automatically
sets an interval based on the time range.

[float]
[[search]]
=== Searching Your Data
You can search the indices that match the current index pattern by submitting a search from the Discover page.
You can enter simple query strings, use the
Lucene https://lucene.apache.org/core/2_9_4/queryparsersyntax.html[query syntax], or use the full JSON-based
{ref}/query-dsl.html[Elasticsearch Query DSL].

When you submit a search, the histogram, Documents table, and Fields list are updated to reflect
the search results. The total number of hits (matching documents) is shown in the upper right corner of the
histogram. The Documents table shows the first five hundred hits. By default, the hits are listed in reverse
chronological order, with the newest documents shown first. You can reverse the sort order by clicking the Time
column header. You can also sort the table using the values in any indexed field. For more information, see
<<sorting,Sorting the Documents Table>>.

To search your data:

. Enter a query string in the Search field:
+
* To perform a free text search, simply enter a text string. For example, if you're searching web server logs, you
could enter `safari` to search all fields for the term `safari`.
+
* To search for a value in a specific field, prefix the value with the name of the field. For example, you could
enter `status:200` to limit the results to entries that contain the value `200` in the `status` field.
+
* To search for a range of values, you can use the bracketed range syntax, `[START_VALUE TO END_VALUE]`. For example,
to find entries that have 4xx status codes, you could enter `status:[400 TO 499]`.
+
* To specify more complex search criteria, you can use the Boolean operators `AND`, `OR`, and `NOT`. For example,
to find entries that have 4xx status codes and have an extension of `php` or `html`, you could enter `status:[400 TO
499] AND (extension:php OR extension:html)`.
+
NOTE: These examples use the Lucene query syntax. You can also submit queries using the Elasticsearch Query DSL. For
examples, see {ref}/query-dsl-query-string-query.html#query-string-syntax[query string syntax] in the Elasticsearch
Reference.
+
. Press *Enter* or click the *Search* button to submit your search query.
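As a point of reference, the last Lucene example above could also be submitted as an Elasticsearch Query DSL request body using a `query_string` query. A sketch of the equivalent DSL (not one of the original examples):

[source,js]
--------
{
  "query": {
    "query_string": {
      "query": "status:[400 TO 499] AND (extension:php OR extension:html)"
    }
  }
}
--------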

[float]
[[new-search]]
==== Starting a New Search
To clear the current search and start a new search, click the *New* button in the Discover toolbar.

[float]
[[save-search]]
==== Saving a Search
You can reload saved searches on the Discover page and use them as the basis of <<visualize, visualizations>>.
Saving a search saves both the search query string and the currently selected index pattern.

To save the current search:

. Click the *Save* button in the Discover toolbar.
. Enter a name for the search and click *Save*.

[float]
[[load-search]]
==== Opening a Saved Search
To load a saved search:

. Click the *Open* button in the Discover toolbar.
. Select the search you want to open.

If the saved search is associated with a different index pattern than is currently selected, opening the saved search
also changes the selected index pattern.

[float]
[[select-pattern]]
==== Changing Which Indices You're Searching
When you submit a search request, the indices that match the currently selected index pattern are searched. The current
index pattern is shown below the search field. To change which indices you are searching, click the name of the current
index pattern to display a list of the configured index patterns and select a different index pattern.

For more information about index patterns, see <<settings-create-pattern, Creating an Index Pattern>>.
include::discover/search.asciidoc[]

[float]
[[auto-refresh]]
include::discover/autorefresh.asciidoc[]

include::autorefresh.asciidoc[]
include::discover/field-filter.asciidoc[]

[float]
[[field-filter]]
=== Filtering by Field
You can filter the search results to display only those documents that contain a particular value in a field. You can
also create negative filters that exclude documents that contain the specified field value.
include::discover/document-data.asciidoc[]

You can add filters from the Fields list or from the Documents table. When you add a filter, it is displayed in the
filter bar below the search query. From the filter bar, you can enable or disable a filter, invert it (change it from
a positive filter to a negative filter and vice versa), or remove it entirely.
Click the small left-facing arrow to the right of the index pattern selection drop-down to collapse the Fields list.

To add a filter from the Fields list:

. Click the name of the field you want to filter on. This displays the top five values for that field. To the right of
each value, there are two magnifying glass buttons--one for adding a regular (positive) filter, and
one for adding a negative filter.
. To add a positive filter, click the *Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button].
This filters out documents that don't contain that value in the field.
. To add a negative filter, click the *Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button].
This excludes documents that contain that value in the field.

To add a filter from the Documents table:

. Expand a document in the Documents table by clicking the *Expand* button image:images/ExpandButton.jpg[Expand Button]
to the left of the document's entry in the first column (the first column is usually Time). To the right of each field
name, there are two magnifying glass buttons--one for adding a regular (positive) filter, and one for adding a negative
filter.
. To add a positive filter based on the document's value in a field, click the
*Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button]. This filters out documents that don't
contain the specified value in that field.
. To add a negative filter based on the document's value in a field, click the
*Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button]. This excludes documents that contain
the specified value in that field.

[float]
[[discover-filters]]
include::filter-pinning.asciidoc[]

[float]
[[document-data]]
=== Viewing Document Data
When you submit a search query, the 500 most recent documents that match the query are listed in the Documents table.
You can configure the number of documents shown in the table by setting the `discover:sampleSize` property in
<<advanced-options,Advanced Settings>>. By default, the table shows the localized version of the time field specified
in the selected index pattern and the document `_source`. You can <<adding-columns, add fields to the Documents table>>
from the Fields list. You can <<sorting, sort the listed documents>> by any indexed field that's included in the table.

To view a document's field data, click the *Expand* button image:images/ExpandButton.jpg[Expand Button] to the left of
the document's entry in the first column (the first column is usually Time). Kibana reads the document data from
Elasticsearch and displays the document fields in a table. Each row in the table shows the field name, add filter
buttons, and the field value.

image::images/Expanded-Document.png[]

. To view the original JSON document (pretty-printed), click the *JSON* tab.
. To view the document data as a separate page, click the link. You can bookmark and share this link to provide direct
access to a particular document.
. To collapse the document details, click the *Collapse* button image:images/CollapseButton.jpg[Collapse Button].
. To toggle a particular field's column in the Documents table, click the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.

[float]
[[sorting]]
==== Sorting the Document List
You can sort the documents in the Documents table by the values in any indexed field. Documents in index patterns that
are configured with time fields are sorted in reverse chronological order by default.

To change the sort order, click the name of the field you want to sort by. The fields you can use for sorting have a
sort button to the right of the field name. Clicking the field name a second time reverses the sort order.

[float]
[[adding-columns]]
==== Adding Field Columns to the Documents Table
By default, the Documents table shows the localized version of the time field specified in the selected index pattern
and the document `_source`. You can add fields to the table from the Fields list or from a document's expanded view.

To add field columns to the Documents table:

. Mouse over a field in the Fields list and click its *add* button image:images/AddFieldButton.jpg[Add Field Button].
. Repeat until you've added all the fields you want to display in the Documents table.
. Alternatively, add a field column directly from a document's expanded view by clicking the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.

The added field columns replace the `_source` column in the Documents table. The added fields are also
listed in the *Selected Fields* section at the top of the field list.

To rearrange the field columns in the table, mouse over the header of the column you want to move and click the *Move*
button.

image:images/Discover-MoveColumn.jpg[Move Column]

[float]
[[removing-columns]]
==== Removing Field Columns from the Documents Table
To remove field columns from the Documents table:

. Mouse over the field you want to remove in the *Selected Fields* section of the Fields list and click its *remove*
button image:images/RemoveFieldButton.jpg[Remove Field Button].
. Repeat until you've removed all the fields you want to drop from the Documents table.

[float]
[[viewing-field-stats]]
=== Viewing Field Data Statistics
From the field list, you can see how many documents in the Documents table contain a particular field, what the top 5
values are, and what percentage of documents contain each value.

To view field data statistics, click the name of a field in the Fields list. The field can be anywhere in the Fields
list.

image:images/Discover-FieldStats.jpg[Field Statistics]

TIP: To create a visualization based on the field, click the *Visualize* button below the field statistics.
include::discover/viewing-field-stats.asciidoc[]

docs/discover/document-data.asciidoc (new file)
@@ -0,0 +1,61 @@
[[document-data]]
== Viewing Document Data

When you submit a search query, the 500 most recent documents that match the query are listed in the Documents table.
You can configure the number of documents shown in the table by setting the `discover:sampleSize` property in
<<advanced-options,Advanced Settings>>. By default, the table shows the localized version of the time field specified
in the selected index pattern and the document `_source`. You can <<adding-columns, add fields to the Documents table>>
from the Fields list. You can <<sorting, sort the listed documents>> by any indexed field that's included in the table.

To view a document's field data, click the *Expand* button image:images/ExpandButton.jpg[Expand Button] to the left of
the document's entry in the first column (the first column is usually Time). Kibana reads the document data from
Elasticsearch and displays the document fields in a table. Each row in the table shows the field name, add filter
buttons, and the field value.

image::images/Expanded-Document.png[]

. To view the original JSON document (pretty-printed), click the *JSON* tab.
. To view the document data as a separate page, click the link. You can bookmark and share this link to provide direct
access to a particular document.
. To collapse the document details, click the *Collapse* button image:images/CollapseButton.jpg[Collapse Button].
. To toggle a particular field's column in the Documents table, click the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.

[float]
[[sorting]]
=== Sorting the Document List
You can sort the documents in the Documents table by the values in any indexed field. Documents in index patterns that
are configured with time fields are sorted in reverse chronological order by default.

To change the sort order, click the name of the field you want to sort by. The fields you can use for sorting have a
sort button to the right of the field name. Clicking the field name a second time reverses the sort order.

[float]
[[adding-columns]]
=== Adding Field Columns to the Documents Table
By default, the Documents table shows the localized version of the time field specified in the selected index pattern
and the document `_source`. You can add fields to the table from the Fields list or from a document's expanded view.

To add field columns to the Documents table:

. Mouse over a field in the Fields list and click its *add* button image:images/AddFieldButton.jpg[Add Field Button].
. Repeat until you've added all the fields you want to display in the Documents table.
. Alternatively, add a field column directly from a document's expanded view by clicking the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.

The added field columns replace the `_source` column in the Documents table. The added fields are also
listed in the *Selected Fields* section at the top of the field list.

To rearrange the field columns in the table, mouse over the header of the column you want to move and click the *Move*
button.

image:images/Discover-MoveColumn.jpg[Move Column]

[float]
[[removing-columns]]
=== Removing Field Columns from the Documents Table
To remove field columns from the Documents table:

. Mouse over the field you want to remove in the *Selected Fields* section of the Fields list and click its *remove*
button image:images/RemoveFieldButton.jpg[Remove Field Button].
. Repeat until you've removed all the fields you want to drop from the Documents table.

docs/discover/field-filter.asciidoc (new file)
@@ -0,0 +1,36 @@
[[field-filter]]
== Filtering by Field
You can filter the search results to display only those documents that contain a particular value in a field. You can
also create negative filters that exclude documents that contain the specified field value.

You can add filters from the Fields list or from the Documents table. When you add a filter, it is displayed in the
filter bar below the search query. From the filter bar, you can enable or disable a filter, invert it (change it from
a positive filter to a negative filter and vice versa), or remove it entirely.
Click the small left-facing arrow to the right of the index pattern selection drop-down to collapse the Fields list.

To add a filter from the Fields list:

. Click the name of the field you want to filter on. This displays the top five values for that field. To the right of
each value, there are two magnifying glass buttons--one for adding a regular (positive) filter, and
one for adding a negative filter.
. To add a positive filter, click the *Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button].
This filters out documents that don't contain that value in the field.
. To add a negative filter, click the *Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button].
This excludes documents that contain that value in the field.

To add a filter from the Documents table:

. Expand a document in the Documents table by clicking the *Expand* button image:images/ExpandButton.jpg[Expand Button]
to the left of the document's entry in the first column (the first column is usually Time). To the right of each field
name, there are two magnifying glass buttons--one for adding a regular (positive) filter, and one for adding a negative
filter.
. To add a positive filter based on the document's value in a field, click the
*Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button]. This filters out documents that don't
contain the specified value in that field.
. To add a negative filter based on the document's value in a field, click the
*Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button]. This excludes documents that contain
the specified value in that field.

[float]
[[discover-filters]]
include::filter-pinning.asciidoc[]

docs/discover/search.asciidoc (new file)
@@ -0,0 +1,72 @@
[[search]]
== Searching Your Data
You can search the indices that match the current index pattern by submitting a search from the Discover page.
You can enter simple query strings, use the
Lucene https://lucene.apache.org/core/2_9_4/queryparsersyntax.html[query syntax], or use the full JSON-based
{es-ref}query-dsl.html[Elasticsearch Query DSL].

When you submit a search, the histogram, Documents table, and Fields list are updated to reflect
the search results. The total number of hits (matching documents) is shown in the upper right corner of the
histogram. The Documents table shows the first five hundred hits. By default, the hits are listed in reverse
chronological order, with the newest documents shown first. You can reverse the sort order by clicking the Time
column header. You can also sort the table using the values in any indexed field. For more information, see
<<sorting,Sorting the Documents Table>>.

To search your data:

. Enter a query string in the Search field:
+
* To perform a free text search, simply enter a text string. For example, if you're searching web server logs, you
could enter `safari` to search all fields for the term `safari`.
+
* To search for a value in a specific field, you prefix the value with the name of the field. For example, you could
enter `status:200` to limit the results to entries that contain the value `200` in the `status` field.
+
* To search for a range of values, you can use the bracketed range syntax, `[START_VALUE TO END_VALUE]`. For example,
to find entries that have 4xx status codes, you could enter `status:[400 TO 499]`.
+
* To specify more complex search criteria, you can use the Boolean operators `AND`, `OR`, and `NOT`. For example,
to find entries that have 4xx status codes and have an extension of `php` or `html`, you could enter `status:[400 TO
499] AND (extension:php OR extension:html)`.
+
NOTE: These examples use the Lucene query syntax. You can also submit queries using the Elasticsearch Query DSL. For
examples, see {es-ref}query-dsl-query-string-query.html#query-string-syntax[query string syntax] in the Elasticsearch
Reference.
+
. Press *Enter* or click the *Search* button to submit your search query.
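
For instance, the bracketed range example above maps onto the Query DSL as a `range` query. This is a sketch of the
equivalent search request body (the `status` field name comes from the example above):

[source,json]
{
  "query": {
    "range": {
      "status": {
        "gte": 400,
        "lte": 499
      }
    }
  }
}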

[float]
[[new-search]]
=== Starting a New Search
To clear the current search and start a new search, click the *New* button in the Discover toolbar.

[float]
[[save-search]]
=== Saving a Search
You can reload saved searches on the Discover page and use them as the basis of <<visualize, visualizations>>.
Saving a search saves both the search query string and the currently selected index pattern.

To save the current search:

. Click the *Save* button in the Discover toolbar.
. Enter a name for the search and click *Save*.

[float]
[[load-search]]
=== Opening a Saved Search
To load a saved search:

. Click the *Open* button in the Discover toolbar.
. Select the search you want to open.

If the saved search is associated with a different index pattern than is currently selected, opening the saved search
also changes the selected index pattern.

[float]
[[select-pattern]]
=== Changing Which Indices You're Searching
When you submit a search request, the indices that match the currently selected index pattern are searched. The current
index pattern is shown below the search field. To change which indices you are searching, click the name of the current
index pattern to display a list of the configured index patterns and select a different index pattern.

For more information about index patterns, see <<settings-create-pattern, Creating an Index Pattern>>.

docs/discover/set-time-filter.asciidoc (new file)
@@ -0,0 +1,29 @@
[[set-time-filter]]
== Setting the Time Filter
The Time Filter restricts the search results to a specific time period. You can set a time filter if your index
contains time-based events and a time field is configured for the selected index pattern.

By default, the time filter is set to the last 15 minutes. You can use the Time Picker to change the time filter
or select a specific time interval or time range in the histogram at the top of the page.

To set a time filter with the Time Picker:

. Click the Time Filter displayed in the upper right corner of the menu bar to open the Time Picker.
. To set a quick filter, click one of the shortcut links.
. To specify a relative Time Filter, click *Relative* and enter the relative start time. You can specify
the relative start time as any number of seconds, minutes, hours, days, months, or years ago.
. To specify an absolute Time Filter, click *Absolute* and enter the start date in the *From* field and the end date in
the *To* field.
. Click the caret at the bottom of the Time Picker to hide it.

To set a Time Filter from the histogram, do one of the following:

* Click the bar that represents the time interval you want to zoom in on.
* Click and drag to view a specific timespan. You must start the selection with the cursor over the background of the
chart--the cursor changes to a plus sign when you hover over a valid start point.

You can use the browser Back button to undo your changes.

The histogram shows the time range you're currently exploring, as well as the interval that range is currently using.
To change the interval, click the link and select an interval from the drop-down. By default, the interval is set
automatically based on the time range.

docs/discover/viewing-field-stats.asciidoc (new file)
@@ -0,0 +1,12 @@
[[viewing-field-stats]]
== Viewing Field Data Statistics

From the field list, you can see how many documents in the Documents table contain a particular field, what the top 5
values are, and what percentage of documents contain each value.

To view field data statistics, click the name of a field in the Fields list. The field can be anywhere in the Fields
list.

image:images/Discover-FieldStats.jpg[Field Statistics]

TIP: To create a visualization based on the field, click the *Visualize* button below the field statistics.

@@ -1,411 +1,36 @@
[[getting-started]]
== Getting Started with Kibana
= Getting Started

Now that you have Kibana <<setup,installed>>, you can step through this tutorial to get fast hands-on experience with
key Kibana functionality. By the end of this tutorial, you will have:
[partintro]
--
Ready to get some hands-on experience with Kibana?
This tutorial shows you how to:

* Loaded a sample data set into your Elasticsearch installation
* Defined at least one index pattern
* Used the <<discover, Discover>> functionality to explore your data
* Set up some <<visualize,_visualizations_>> to graphically represent your data
* Assembled visualizations into a <<dashboard,Dashboard>>
* Load a sample data set into Elasticsearch
* Define an index pattern
* Explore the sample data with <<discover, Discover>>
* Set up <<visualize,_visualizations_>> of the sample data
* Assemble visualizations into a <<dashboard,Dashboard>>

The material in this section assumes you have a working Kibana install connected to a working Elasticsearch install.
Before you begin, make sure you've <<install, installed Kibana>> and established
a <<connect-to-elasticsearch, connection to Elasticsearch>>.

Video tutorials are also available:
You might also be interested in these video tutorials:

* https://www.elastic.co/blog/kibana-4-video-tutorials-part-1[High-level Kibana introduction, pie charts]
* https://www.elastic.co/blog/kibana-4-video-tutorials-part-2[Data discovery, bar charts, and line charts]
* https://www.elastic.co/blog/kibana-4-video-tutorials-part-3[Tile maps]
* https://www.elastic.co/blog/kibana-4-video-tutorials-part-4[Embedding Kibana visualizations]
--

[float]
[[tutorial-load-dataset]]
=== Before You Start: Loading Sample Data
include::getting-started/tutorial-load-dataset.asciidoc[]

The tutorials in this section rely on the following data sets:
include::getting-started/tutorial-define-index.asciidoc[]

* The complete works of William Shakespeare, suitably parsed into fields. Download this data set by clicking here:
https://www.elastic.co/guide/en/kibana/3.0/snippets/shakespeare.json[shakespeare.json].
* A set of fictitious accounts with randomly generated data. Download this data set by clicking here:
https://github.com/bly2k/files/blob/master/accounts.zip?raw=true[accounts.zip]
* A set of randomly generated log files. Download this data set by clicking here:
https://download.elastic.co/demos/kibana/gettingstarted/logs.jsonl.gz[logs.jsonl.gz]
include::getting-started/tutorial-discovering.asciidoc[]

Two of the data sets are compressed. Use the following commands to extract the files:
include::getting-started/tutorial-visualizing.asciidoc[]

[source,shell]
unzip accounts.zip
gunzip logs.jsonl.gz
include::getting-started/tutorial-dashboard.asciidoc[]

The Shakespeare data set is organized in the following schema:

[source,json]
{
    "line_id": INT,
    "play_name": "String",
    "speech_number": INT,
    "line_number": "String",
    "speaker": "String",
    "text_entry": "String"
}

The accounts data set is organized in the following schema:

[source,json]
{
    "account_number": INT,
    "balance": INT,
    "firstname": "String",
    "lastname": "String",
    "age": INT,
    "gender": "M or F",
    "address": "String",
    "employer": "String",
    "email": "String",
    "city": "String",
    "state": "String"
}

The schema for the logs data set has dozens of different fields, but the notable ones used in this tutorial are:

[source,json]
{
    "memory": INT,
    "geo.coordinates": "geo_point",
    "@timestamp": "date"
}

Before we load the Shakespeare and logs data sets, we need to set up {ref}mapping.html[_mappings_] for the fields.
Mapping divides the documents in the index into logical groups and specifies a field's characteristics, such as the
field's searchability or whether or not it's _tokenized_, or broken up into separate words.

Use the following command to set up a mapping for the Shakespeare data set:

[source,shell]
curl -XPUT http://localhost:9200/shakespeare -d '
{
  "mappings" : {
    "_default_" : {
      "properties" : {
        "speaker" : {"type": "string", "index" : "not_analyzed" },
        "play_name" : {"type": "string", "index" : "not_analyzed" },
        "line_id" : { "type" : "integer" },
        "speech_number" : { "type" : "integer" }
      }
    }
  }
}
';

This mapping specifies the following qualities for the data set:

* The _speaker_ field is a string that isn't analyzed. The string in this field is treated as a single unit, even if
there are multiple words in the field.
* The same applies to the _play_name_ field.
* The _line_id_ and _speech_number_ fields are integers.
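
To confirm that the mapping was applied as intended (assuming, as in the command above, Elasticsearch is listening on
`localhost:9200`), you can retrieve it with the get mapping API:

[source,shell]
curl -XGET 'http://localhost:9200/shakespeare/_mapping?pretty'

The response echoes back the mapping definition for the `shakespeare` index, so you can spot a mistyped field or type
before loading any data.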

The logs data set requires a mapping to label the latitude/longitude pairs in the logs as geographic locations by
applying the `geo_point` type to those fields.

Use the following commands to establish `geo_point` mapping for the logs:

[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.18 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
';

[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.19 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
';

[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.20 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
';

The accounts data set doesn't require any mappings, so at this point we're ready to use the Elasticsearch
{ref}/docs-bulk.html[`bulk`] API to load the data sets with the following commands:

[source,shell]
curl -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
curl -XPOST 'localhost:9200/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
curl -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl

These commands may take some time to execute, depending on the computing resources available.

Verify successful loading with the following command:

[source,shell]
curl 'localhost:9200/_cat/indices?v'

You should see output similar to the following:

[source,shell]
health status index               pri rep docs.count docs.deleted store.size pri.store.size
yellow open   bank                  5   1       1000            0    418.2kb        418.2kb
yellow open   shakespeare           5   1     111396            0     17.6mb         17.6mb
yellow open   logstash-2015.05.18   5   1       4631            0     15.6mb         15.6mb
yellow open   logstash-2015.05.19   5   1       4624            0     15.7mb         15.7mb
yellow open   logstash-2015.05.20   5   1       4750            0     16.4mb         16.4mb

[[tutorial-define-index]]
=== Defining Your Index Patterns

Each set of data loaded to Elasticsearch has an <<settings-create-pattern,index pattern>>. In the previous section, the
Shakespeare data set has an index named `shakespeare`, and the accounts data set has an index named `bank`. An _index
pattern_ is a string with optional wildcards that can match multiple indices. For example, in the common logging use
case, a typical index name contains the date in YYYY.MM.DD format, and an index pattern for May would look something
like `logstash-2015.05*`.

For this tutorial, any pattern that matches the name of an index we've loaded will work. Open a browser and
navigate to `localhost:5601`. Click the *Settings* tab, then the *Indices* tab. Click *Add New* to define a new index
pattern. Two of the sample data sets, the Shakespeare plays and the financial accounts, don't contain time-series data.
Make sure the *Index contains time-based events* box is unchecked when you create index patterns for these data sets.
Specify `shakes*` as the index pattern for the Shakespeare data set and click *Create* to define the index pattern, then
define a second index pattern named `ba*`.

The Logstash data set does contain time-series data, so after clicking *Add New* to define the index for this data
set, make sure the *Index contains time-based events* box is checked and select the `@timestamp` field from the
*Time-field name* drop-down.

NOTE: When you define an index pattern, indices that match that pattern must exist in Elasticsearch. Those indices must
contain data.

[float]
[[tutorial-discovering]]
=== Discovering Your Data

Click the *Discover* image:images/discover-compass.png[Compass icon] tab to display Kibana's data discovery functions:

image::images/tutorial-discover.png[]

Right under the tab itself, there is a search box where you can search your data. Searches take a specific
{ref}/query-dsl-query-string-query.html#query-string-syntax[query syntax] that enables you to create custom searches,
which you can save and load by clicking the buttons to the right of the search box.

Beneath the search box, the current index pattern is displayed in a drop-down. You can change the index pattern by
selecting a different pattern from the drop-down selector.

You can construct searches by using the field names and the values you're interested in. With numeric fields you can
use comparison operators such as greater than (>), less than (<), or equals (=). You can link elements with the
logical operators AND, OR, and NOT, all in uppercase.

Try selecting the `ba*` index pattern and putting the following search into the search box:

[source,text]
account_number:<100 AND balance:>47500

This search returns all account numbers between zero and 99 with balances in excess of 47,500.

If you're using the linked sample data set, this search returns 5 results: Account numbers 8, 32, 78, 85, and 97.
|
||||
|
||||
image::images/tutorial-discover-2.png[]
|
||||
|
||||
To narrow the display to only the specific fields of interest, highlight each field in the list that displays under the
|
||||
index pattern and click the *Add* button. Note how, in this example, adding the `account_number` field changes the
|
||||
display from the full text of five records to a simple list of five account numbers:
|
||||
|
||||
image::images/tutorial-discover-3.png[]
[[tutorial-visualizing]]
=== Data Visualization: Beyond Discovery

The visualization tools available on the *Visualize* tab enable you to display aspects of your data sets in several
different ways.

Click on the *Visualize* image:images/visualize-icon.png[Bar chart icon] tab to start:

image::images/tutorial-visualize.png[]

Click on *Pie chart*, then *From a new search*. Select the `ba*` index pattern.

Visualizations depend on Elasticsearch {ref}/search-aggregations.html[aggregations] of two different types: _bucket_
aggregations and _metric_ aggregations. A bucket aggregation sorts your data according to criteria you specify. For
example, in our accounts data set, we can establish a set of account balance ranges, then display what proportion of
the total falls into each range.

Since we haven't specified any buckets yet, the whole pie displays as a single slice:

image::images/tutorial-visualize-pie-1.png[]

Select *Split Slices* from the *Select buckets type* list, then select *Range* from the *Aggregation* drop-down
selector. Select the *balance* field from the *Field* drop-down, then click *Add Range* four times to bring the
total number of ranges to six. Enter the following ranges:

[source,text]
0 999
1000 2999
3000 6999
7000 14999
15000 30999
31000 50000

Click the *Apply changes* button image:images/apply-changes-button.png[] to display the chart:

image::images/tutorial-visualize-pie-2.png[]

This shows you what proportion of the 1000 accounts fall into each of these balance ranges. To see another dimension of
the data, we're going to add another bucket aggregation. We can break down each of the balance ranges further by the
account holder's age.

Click *Add sub-buckets* at the bottom, then select *Split Slices*. Choose the *Terms* aggregation and the *age* field
from the drop-downs.
Click the *Apply changes* button image:images/apply-changes-button.png[] to add an outer ring with the new
results.

image::images/tutorial-visualize-pie-3.png[]

Save this chart by clicking the *Save Visualization* button to the right of the search field. Name the visualization
_Pie Example_.
Next, we're going to make a bar chart. Click on *New Visualization*, then *Vertical bar chart*. Select *From a new
search* and the `shakes*` index pattern. You'll see a single big bar, since we haven't defined any buckets yet:

image::images/tutorial-visualize-bar-1.png[]

For the Y-axis metrics aggregation, select *Unique Count*, with *speaker* as the field. For Shakespeare plays, it might
be useful to know which plays have the lowest number of distinct speaking parts, if your theater company is short on
actors. For the X-axis buckets, select the *Terms* aggregation with the *play_name* field. For the *Order*, select
*Ascending*, leaving the *Size* at 5. Write a description for each axis in the *Custom Label* fields.

Leave the other elements at their default values and click the *Apply changes* button
image:images/apply-changes-button.png[]. Your chart should now look like this:

image::images/tutorial-visualize-bar-2.png[]

Notice how the individual play names show up as whole phrases, instead of being broken down into individual words. This
is the result of the mapping we did at the beginning of the tutorial, when we marked the *play_name* field as 'not
analyzed'.

Hovering over each bar shows you the number of speaking parts for each play as a tooltip. You can turn this behavior
off, as well as change many other options for your visualizations, by clicking the *Options* tab in the top left.

Now that you have a list of the smallest casts for Shakespeare plays, you might also be curious to see which of these
plays makes the greatest demands on an individual actor by showing the maximum number of speeches for a given part. Add
a Y-axis aggregation with the *Add metrics* button, then choose the *Max* aggregation for the *speech_number* field. In
the *Options* tab, change the *Bar Mode* drop-down to *grouped*, then click the *Apply changes* button
image:images/apply-changes-button.png[]. Your chart should now look like this:

image::images/tutorial-visualize-bar-3.png[]

As you can see, _Love's Labours Lost_ has an unusually high maximum speech number compared to the other plays, and
might therefore make more demands on an actor's memory.

Note how the *Number of speaking parts* Y-axis starts at zero, but the bars don't begin to differentiate until 18. To
make the differences stand out by starting the Y-axis at a value closer to the minimum, check the
*Scale Y-Axis to data bounds* box in the *Options* tab.

Save this chart with the name _Bar Example_.
Next, we're going to make a tile map chart to visualize some geographic data. Click on *New Visualization*, then
*Tile map*. Select *From a new search* and the `logstash-*` index pattern. Define the time window for the events
we're exploring by clicking the time selector at the top right of the Kibana interface. Click on *Absolute*, then set
the start time to May 18, 2015 and the end time for the range to May 20, 2015:

image::images/tutorial-timepicker.png[]

Once you've got the time range set up, click the *Go* button, then close the time picker by clicking the small up arrow
at the bottom. You'll see a map of the world, since we haven't defined any buckets yet:

image::images/tutorial-visualize-map-1.png[]

Select *Geo Coordinates* as the bucket, then click the *Apply changes* button image:images/apply-changes-button.png[].
Your chart should now look like this:

image::images/tutorial-visualize-map-2.png[]

You can navigate the map by clicking and dragging, zoom with the image:images/viz-zoom.png[] buttons, or hit the *Fit
Data Bounds* image:images/viz-fit-bounds.png[] button to zoom to the lowest level that includes all the points. You can
also create a filter to define a rectangle on the map, either to include or exclude, by clicking the
*Latitude/Longitude Filter* image:images/viz-lat-long-filter.png[] button and drawing a bounding box on the map.
A green oval with the filter definition displays right under the query box:

image::images/tutorial-visualize-map-3.png[]

Hover over the filter to display the controls to toggle, pin, invert, or delete the filter. Save this chart with the
name _Map Example_.

Finally, we're going to define a sample Markdown widget to display on our dashboard. Click on *New Visualization*, then
*Markdown widget*, to display a very simple Markdown entry field:

image::images/tutorial-visualize-md-1.png[]

Write the following text in the field:

[source,markdown]
# This is a tutorial dashboard!
The Markdown widget uses **markdown** syntax.
> Blockquotes in Markdown use the > character.

Click the *Apply changes* button image:images/apply-changes-button.png[] to display the rendered Markdown in the
preview pane:

image::images/tutorial-visualize-md-2.png[]

Save this visualization with the name _Markdown Example_.
[[tutorial-dashboard]]
=== Putting it all Together with Dashboards

A Kibana dashboard is a collection of visualizations that you can arrange and share. To get started, click the
*Dashboard* tab, then the *Add Visualization* button at the far right of the search box to display the list of saved
visualizations. Select _Markdown Example_, _Pie Example_, _Bar Example_, and _Map Example_, then close the list of
visualizations by clicking the small up-arrow at the bottom of the list. You can move the containers for each
visualization by clicking and dragging the title bar. Resize the containers by dragging the lower right corner of a
visualization's container. Your sample dashboard should end up looking roughly like this:

image::images/tutorial-dashboard.png[]

Click the *Save Dashboard* button, then name the dashboard _Tutorial Dashboard_. You can share a saved dashboard by
clicking the *Share* button to display HTML embedding code as well as a direct link.

[float]
[[wrapping-up]]
=== Wrapping Up

Now that you've handled the basic aspects of Kibana's functionality, you're ready to explore Kibana in further detail.
Take a look at the rest of the documentation for more details!

include::getting-started/wrapping-up.asciidoc[]
// docs/getting-started/tutorial-dashboard.asciidoc (new file)
[[tutorial-dashboard]]
== Putting it all Together with Dashboards

A dashboard is a collection of visualizations that you can arrange and share.
To build a dashboard that contains the visualizations you saved during this tutorial:

. Click *Dashboard* in the side navigation.
. Click *Add* to display the list of saved visualizations.
. Click _Markdown Example_, _Pie Example_, _Bar Example_, and _Map Example_, then close the list of
visualizations by clicking the small up-arrow at the bottom of the list.

Hovering over a visualization displays the container controls that enable you to
edit, move, delete, and resize the visualization.

Your sample dashboard should end up looking roughly like this:

image::images/tutorial-dashboard.png[]

To get a link to share or HTML code to embed the dashboard in a web page, save
the dashboard and click *Share*.
// docs/getting-started/tutorial-define-index.asciidoc (new file)
[[tutorial-define-index]]
== Defining Your Index Patterns

Each set of data loaded into Elasticsearch has an index pattern. In the previous section, the
Shakespeare data set has an index named `shakespeare`, and the accounts data set has an index named `bank`. An _index
pattern_ is a string with optional wildcards that can match multiple indices. For example, in the common logging use
case, a typical index name contains the date in YYYY.MM.DD format, and an index pattern for May 2015 would look
something like `logstash-2015.05*`.

For this tutorial, any pattern that matches the name of an index we've loaded will work. Open a browser and
navigate to `localhost:5601`. Click the *Settings* tab, then the *Indices* tab. Click *Add New* to define a new index
pattern. Two of the sample data sets, the Shakespeare plays and the financial accounts, don't contain time-series data.
Make sure the *Index contains time-based events* box is unchecked when you create index patterns for these data sets.
Specify `shakes*` as the index pattern for the Shakespeare data set and click *Create* to define the index pattern, then
define a second index pattern named `ba*`.

The Logstash data set does contain time-series data, so after clicking *Add New* to define the index pattern for this
data set, make sure the *Index contains time-based events* box is checked and select the `@timestamp` field from the
*Time-field name* drop-down.

NOTE: When you define an index pattern, indices that match that pattern must exist in Elasticsearch and contain data.
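An index pattern is an ordinary wildcard glob. As a rough illustration of how a pattern selects indices (this uses Python's `fnmatch` for the sketch, not Kibana's actual matcher; the index names are the ones created in this tutorial):

```python
from fnmatch import fnmatch

# Index names created in this tutorial's sample data sets.
indices = ["shakespeare", "bank", "logstash-2015.05.18",
           "logstash-2015.05.19", "logstash-2015.05.20"]

def matching_indices(pattern):
    """Return the index names selected by a wildcard index pattern."""
    return [name for name in indices if fnmatch(name, pattern)]

print(matching_indices("ba*"))                 # the accounts index
print(matching_indices("shakes*"))             # the Shakespeare index
print(matching_indices("logstash-2015.05*"))   # all three daily log indices
```

This is why `ba*` and `shakes*` each match exactly one index, while the date-based pattern picks up every daily Logstash index at once.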
// docs/getting-started/tutorial-discovering.asciidoc (new file)
[[tutorial-discovering]]
== Discovering Your Data

Click *Discover* in the side navigation to display Kibana's data discovery functions:

image::images/tutorial-discover.png[]

In the query bar, you can enter an
{es-ref}query-dsl-query-string-query.html#query-string-syntax[Elasticsearch
query] to search your data. You can explore the results in Discover and create
visualizations of saved searches in Visualize.

The current index pattern is displayed beneath the query bar. The index pattern
determines which indices are searched when you submit a query. To search a
different set of indices, select a different pattern from the drop-down menu.
To add an index pattern, go to *Management/Kibana/Index Patterns* and click
*Add New*.

You can construct searches by using the field names and the values you're
interested in. With numeric fields you can use comparison operators such as
greater than (>), less than (<), or equals (=). You can link elements with the
logical operators AND, OR, and NOT, all in uppercase.

To try it out, select the `ba*` index pattern and enter the following query string
in the query bar:

[source,text]
account_number:<100 AND balance:>47500

This query returns all account numbers between zero and 99 with balances in
excess of 47,500. When searching the sample bank data, it returns 5 results:
account numbers 8, 32, 78, 85, and 97.

image::images/tutorial-discover-2.png[]

By default, all fields are shown for each matching document. To choose which
document fields to display, hover over the Available Fields list and click the
*add* button next to each field you want to include. For example, if you add
just the `account_number` field, the display changes to a simple list of five
account numbers:

image::images/tutorial-discover-3.png[]
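The query above combines two range conditions with a boolean AND. As a plain-Python sketch of that logic (the account records here are made up for illustration; only the real data set returns accounts 8, 32, 78, 85, and 97):

```python
# Hypothetical sample documents standing in for the bank data set.
sample_accounts = [
    {"account_number": 8,   "balance": 48086},
    {"account_number": 32,  "balance": 48150},
    {"account_number": 87,  "balance": 1133},   # balance too low
    {"account_number": 227, "balance": 49795},  # account number too high
]

# account_number:<100 AND balance:>47500 -- both conditions must hold.
matches = [a["account_number"] for a in sample_accounts
           if a["account_number"] < 100 and a["balance"] > 47500]
print(matches)  # [8, 32]
```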
// docs/getting-started/tutorial-load-dataset.asciidoc (new file)
[[tutorial-load-dataset]]
== Loading Sample Data

The tutorials in this section rely on the following data sets:

* The complete works of William Shakespeare, suitably parsed into fields. Download this data set by clicking here:
https://www.elastic.co/guide/en/kibana/3.0/snippets/shakespeare.json[shakespeare.json].
* A set of fictitious accounts with randomly generated data. Download this data set by clicking here:
https://github.com/bly2k/files/blob/master/accounts.zip?raw=true[accounts.zip]
* A set of randomly generated log files. Download this data set by clicking here:
https://download.elastic.co/demos/kibana/gettingstarted/logs.jsonl.gz[logs.jsonl.gz]

Two of the data sets are compressed. Use the following commands to extract the files:

[source,shell]
unzip accounts.zip
gunzip logs.jsonl.gz

The Shakespeare data set is organized in the following schema:

[source,json]
{
    "line_id": INT,
    "play_name": "String",
    "speech_number": INT,
    "line_number": "String",
    "speaker": "String",
    "text_entry": "String"
}
The accounts data set is organized in the following schema:

[source,json]
{
    "account_number": INT,
    "balance": INT,
    "firstname": "String",
    "lastname": "String",
    "age": INT,
    "gender": "M or F",
    "address": "String",
    "employer": "String",
    "email": "String",
    "city": "String",
    "state": "String"
}

The schema for the logs data set has dozens of different fields, but the notable ones used in this tutorial are:

[source,json]
{
    "memory": INT,
    "geo.coordinates": "geo_point",
    "@timestamp": "date"
}
Before we load the Shakespeare and logs data sets, we need to set up {es-ref}mapping.html[_mappings_] for the fields.
Mapping divides the documents in the index into logical groups and specifies a field's characteristics, such as the
field's searchability or whether or not it's _tokenized_, or broken up into separate words.

Use the following command to set up a mapping for the Shakespeare data set:

[source,shell]
curl -XPUT http://localhost:9200/shakespeare -d '
{
 "mappings" : {
  "_default_" : {
   "properties" : {
    "speaker" : {"type": "string", "index" : "not_analyzed" },
    "play_name" : {"type": "string", "index" : "not_analyzed" },
    "line_id" : { "type" : "integer" },
    "speech_number" : { "type" : "integer" }
   }
  }
 }
}
';

This mapping specifies the following qualities for the data set:

* The _speaker_ field is a string that isn't analyzed. The string in this field is treated as a single unit, even if
there are multiple words in the field.
* The same applies to the _play_name_ field.
* The _line_id_ and _speech_number_ fields are integers.
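To see what `not_analyzed` buys us, here is a rough sketch of the difference between an analyzed and a not-analyzed string field. The "analyzer" here is a made-up lowercase-and-split function for illustration only; Elasticsearch's real analyzers are far more sophisticated:

```python
def analyzed_terms(value):
    # An analyzed string field is tokenized: broken into separate,
    # normalized terms (sketched here as lowercase whitespace splitting).
    return value.lower().split()

def not_analyzed_terms(value):
    # A not_analyzed field is indexed as a single verbatim term,
    # which is why whole play names survive intact in the bar chart.
    return [value]

print(analyzed_terms("Henry IV"))      # ['henry', 'iv']
print(not_analyzed_terms("Henry IV"))  # ['Henry IV']
```

With analysis enabled, aggregating on `play_name` would bucket on fragments like `henry` and `iv` rather than on whole titles.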
The logs data set requires a mapping to label the latitude/longitude pairs in the logs as geographic locations by
applying the `geo_point` type to those fields.

Use the following commands to establish `geo_point` mapping for the logs:

[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.18 -d '
{
 "mappings": {
  "log": {
   "properties": {
    "geo": {
     "properties": {
      "coordinates": {
       "type": "geo_point"
      }
     }
    }
   }
  }
 }
}
';

[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.19 -d '
{
 "mappings": {
  "log": {
   "properties": {
    "geo": {
     "properties": {
      "coordinates": {
       "type": "geo_point"
      }
     }
    }
   }
  }
 }
}
';

[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.20 -d '
{
 "mappings": {
  "log": {
   "properties": {
    "geo": {
     "properties": {
      "coordinates": {
       "type": "geo_point"
      }
     }
    }
   }
  }
 }
}
';
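The three commands above differ only in the index name, which follows the daily `logstash-YYYY.MM.DD` convention. If you were scripting this step, the names could be generated rather than typed; a small sketch, assuming Python is available:

```python
from datetime import date, timedelta

# Generate the daily Logstash index names for the tutorial's date range.
start = date(2015, 5, 18)
index_names = ["logstash-" + (start + timedelta(days=i)).strftime("%Y.%m.%d")
               for i in range(3)]
print(index_names)
# ['logstash-2015.05.18', 'logstash-2015.05.19', 'logstash-2015.05.20']
```

Each generated name could then be substituted into the `curl -XPUT` command shown above.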
The accounts data set doesn't require any mappings, so at this point we're ready to use the Elasticsearch
{es-ref}docs-bulk.html[`bulk`] API to load the data sets with the following commands:

[source,shell]
curl -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
curl -XPOST 'localhost:9200/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
curl -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl

These commands may take some time to execute, depending on the computing resources available.
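The `bulk` API body is newline-delimited JSON: each document is preceded by an action line naming its target index and type, and the body must end with a newline. The sample files already ship in this format; as a minimal sketch of how such a payload is assembled (the document shown is made up):

```python
import json

def bulk_payload(index, doc_type, docs):
    """Build a newline-delimited _bulk body: action line, then source line."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # bulk bodies must end with a newline

payload = bulk_payload("bank", "account",
                       [{"account_number": 1, "balance": 39225}])
print(payload)
```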
Verify successful loading with the following command:

[source,shell]
curl 'localhost:9200/_cat/indices?v'

You should see output similar to the following:

[source,shell]
health status index               pri rep docs.count docs.deleted store.size pri.store.size
yellow open   bank                  5   1       1000            0    418.2kb        418.2kb
yellow open   shakespeare           5   1     111396            0     17.6mb         17.6mb
yellow open   logstash-2015.05.18   5   1       4631            0     15.6mb         15.6mb
yellow open   logstash-2015.05.19   5   1       4624            0     15.7mb         15.7mb
yellow open   logstash-2015.05.20   5   1       4750            0     16.4mb         16.4mb
// docs/getting-started/tutorial-visualizing.asciidoc (new file)
[[tutorial-visualizing]]
== Visualizing Your Data

To start visualizing your data, click *Visualize* in the side navigation:

image::images/tutorial-visualize.png[]

The *Visualize* tools enable you to view your data in several ways. For example,
let's use that venerable visualization, the pie chart, to get some insight
into the account balances in the sample bank account data.

To get started, click *Pie chart* in the list of visualizations. You can build
visualizations from saved searches, or enter new search criteria. To enter
new search criteria, you first need to select an index pattern to specify
what indices to search. We want to search the account data, so select the `ba*`
index pattern.

The default search matches all documents. Initially, a single "slice"
encompasses the entire pie:

image::images/tutorial-visualize-pie-1.png[]

To specify what slices to display in the chart, you use an Elasticsearch
{es-ref}search-aggregations.html[bucket aggregation]. A bucket aggregation
simply sorts the documents that match your search criteria into different
categories, aka _buckets_. For example, the account data includes the balance
of each account. Using a bucket aggregation, you can establish multiple ranges
of account balances and find out how many accounts fall into each range.

To define a bucket for each range:

. Click the *Split Slices* buckets type.
. Select *Range* from the *Aggregation* list.
. Select the *balance* field from the *Field* list.
. Click *Add Range* four times to bring the total number of ranges to six.
. Define the following ranges:
+
[source,text]
0 999
1000 2999
3000 6999
7000 14999
15000 30999
31000 50000

. Click *Apply changes* image:images/apply-changes-button.png[] to update the chart.

Now you can see what proportion of the 1000 accounts fall into each balance
range.

image::images/tutorial-visualize-pie-2.png[]
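Conceptually, the Range aggregation just counts how many matching documents fall into each bucket. A plain-Python sketch of that computation, using made-up balances and treating both range endpoints as inclusive for simplicity:

```python
# The six balance ranges defined in the tutorial.
ranges = [(0, 999), (1000, 2999), (3000, 6999),
          (7000, 14999), (15000, 30999), (31000, 50000)]

# Hypothetical balances standing in for the 1000 real account documents.
balances = [500, 2500, 4800, 12000, 29000, 45000, 45500]

# Each bucket's value is simply a document count.
counts = {r: sum(1 for b in balances if r[0] <= b <= r[1]) for r in ranges}
print(counts[(31000, 50000)])  # 2
```

Each pie slice's size is proportional to its bucket's count relative to the total.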
Let's take a look at another dimension of the data: the account holder's
age. By adding another bucket aggregation, you can see the ages of the account
holders in each balance range:

. Click *Add sub-buckets* below the buckets list.
. Click *Split Slices* in the buckets type list.
. Select *Terms* from the aggregation list.
. Select *age* from the field list.
. Click *Apply changes* image:images/apply-changes-button.png[].

Now you can see the breakdown of the account holders' ages displayed
in a ring around the balance ranges.

image::images/tutorial-visualize-pie-3.png[]

To save this chart so we can use it later, click *Save* and enter the name _Pie Example_.
Next, we're going to look at data in the Shakespeare data set. Let's find out how the
plays compare when it comes to the number of speaking parts and display the information
in a bar chart:

. Click *New* and select *Vertical bar chart*.
. Select the `shakes*` index pattern. Since you haven't defined any buckets yet,
you'll see a single big bar that shows the total count of documents that match
the default wildcard query.
+
image::images/tutorial-visualize-bar-1.png[]

. To show the number of speaking parts per play along the y-axis, you need to
configure the Y-axis {es-ref}search-aggregations.html[metric aggregation]. A metric
aggregation computes metrics based on values extracted from the search results.
To get the number of speaking parts per play, select the *Unique Count*
aggregation and choose *speaker* from the field list. You can also give the
axis a custom label, _Speaking Parts_.

. To show the different plays along the x-axis, select the X-Axis buckets type,
select *Terms* from the aggregation list, and choose *play_name* from the field
list. To list them alphabetically, select *Ascending* order. You can also give
the axis a custom label, _Play Name_.

. Click *Apply changes* image:images/apply-changes-button.png[] to view the
results.

image::images/tutorial-visualize-bar-2.png[]

Notice how the individual play names show up as whole phrases, instead of being broken down into individual words. This
is the result of the mapping we did at the beginning of the tutorial, when we marked the *play_name* field as 'not
analyzed'.

Hovering over each bar shows you the number of speaking parts for each play as a tooltip. To turn tooltips
off and configure other options for your visualizations, select the Visualization builder's *Options* tab.

Now that you have a list of the smallest casts for Shakespeare plays, you might also be curious to see which of these
plays makes the greatest demands on an individual actor by showing the maximum number of speeches for a given part.

. Click *Add metrics* to add a Y-axis aggregation.
. Choose the *Max* aggregation and select the *speech_number* field.
. Click *Options* and change the *Bar Mode* to *grouped*.
. Click *Apply changes* image:images/apply-changes-button.png[]. Your chart should now look like this:

image::images/tutorial-visualize-bar-3.png[]

As you can see, _Love's Labours Lost_ has an unusually high maximum speech number compared to the other plays, and
might therefore make more demands on an actor's memory.

Note how the *Number of speaking parts* Y-axis starts at zero, but the bars don't begin to differentiate until 18. To
make the differences stand out by starting the Y-axis at a value closer to the minimum, go to *Options* and select
*Scale Y-Axis to data bounds*.

Save this chart with the name _Bar Example_.
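Unique Count is a cardinality metric: for each `play_name` bucket, it counts the distinct `speaker` values. A sketch of that computation over a few made-up lines from the data set:

```python
from collections import defaultdict

# Hypothetical (play_name, speaker) pairs; the real index has 111,396 lines.
lines = [
    ("Hamlet", "HAMLET"), ("Hamlet", "HORATIO"), ("Hamlet", "HAMLET"),
    ("Macbeth", "MACBETH"), ("Macbeth", "LADY MACBETH"),
    ("Macbeth", "MACBETH"), ("Macbeth", "BANQUO"),
]

# Terms bucket on play_name, then a distinct count of speaker per bucket.
speakers_by_play = defaultdict(set)
for play, speaker in lines:
    speakers_by_play[play].add(speaker)

unique_counts = {play: len(s) for play, s in speakers_by_play.items()}
print(unique_counts)  # {'Hamlet': 2, 'Macbeth': 3}
```

Note that for large data sets Elasticsearch's `cardinality` aggregation returns an approximate count, unlike this exact sketch.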
Next, we're going to use a tile map chart to visualize geographic information in our log file sample data.

. Click *New*.
. Select *Tile map*.
. Select the `logstash-*` index pattern.
. Set the time window for the events we're exploring:
.. Click the time picker in the Kibana toolbar.
.. Click *Absolute*.
.. Set the start time to May 18, 2015 and the end time to May 20, 2015.
+
image::images/tutorial-timepicker.png[]

. Once you've got the time range set up, click the *Go* button and close the time picker by
clicking the small up arrow in the bottom right corner.

You'll see a map of the world, since we haven't defined any buckets yet:

image::images/tutorial-visualize-map-1.png[]

To map the geo coordinates from the log files, select *Geo Coordinates* as
the bucket and click *Apply changes* image:images/apply-changes-button.png[].
Your chart should now look like this:

image::images/tutorial-visualize-map-2.png[]

You can navigate the map by clicking and dragging, zoom with the
image:images/viz-zoom.png[] buttons, or hit the *Fit Data Bounds*
image:images/viz-fit-bounds.png[] button to zoom to the lowest level that
includes all the points. You can also include or exclude a rectangular area
by clicking the *Latitude/Longitude Filter* image:images/viz-lat-long-filter.png[]
button and drawing a bounding box on the map. Applied filters are displayed
below the query bar. Hovering over a filter displays controls to toggle,
pin, invert, or delete the filter.

image::images/tutorial-visualize-map-3.png[]

Save this map with the name _Map Example_.
Finally, create a Markdown widget to display extra information:

. Click *New*.
. Select *Markdown widget*.
. Enter the following text in the field:
+
[source,markdown]
# This is a tutorial dashboard!
The Markdown widget uses **markdown** syntax.
> Blockquotes in Markdown use the > character.

. Click *Apply changes* image:images/apply-changes-button.png[] to render the Markdown in the
preview pane.
+
image::images/tutorial-visualize-md-1.png[]

image::images/tutorial-visualize-md-2.png[]

Save this visualization with the name _Markdown Example_.
// docs/getting-started/wrapping-up.asciidoc (new file)
[[wrapping-up]]
== Wrapping Up

Now that you have a handle on the basics, you're ready to start exploring
your own data with Kibana.

* See <<discover, Discover>> for more information about searching and filtering
your data.
* See <<visualize, Visualize>> for information about all of the visualization
types Kibana has to offer.
* See <<management, Management>> for information about configuring Kibana
and managing your saved objects.
* See <<console-kibana, Console>> for information about the interactive
console UI you can use to submit REST requests to Elasticsearch.
|

(Binary image changes: docs/images/bar-terms-agg.jpg, docs/images/bar-terms-subagg.jpg, and docs/images/timelion-arg-help.jpg added; various other screenshots updated or removed.)

@ -1,37 +1,40 @@
[[kibana-guide]]
= Kibana User Guide

:xpack: https://www.elastic.co/guide/en/x-pack/5.0/
:scyld: X-Pack Security
:version: 5.0.0
:major-version: 5.x

//////////
release-state can be: released | prerelease | unreleased
//////////

:release-state: released
:es-ref: https://www.elastic.co/guide/en/elasticsearch/reference/5.0/
:xpack-ref: https://www.elastic.co/guide/en/x-pack/current/
:issue: https://github.com/elastic/kibana/issues/
:pull: https://github.com/elastic/kibana/pull/

include::introduction.asciidoc[]

include::setup.asciidoc[]

include::migration.asciidoc[]

include::getting-started.asciidoc[]

include::breaking-changes.asciidoc[]

include::plugins.asciidoc[]

include::discover.asciidoc[]

include::visualize.asciidoc[]

include::dashboard.asciidoc[]

include::timelion.asciidoc[]

include::console.asciidoc[]

include::management.asciidoc[]

include::production.asciidoc[]

include::release-notes.asciidoc[]
@ -10,47 +10,3 @@ create and share dynamic dashboards that display changes to Elasticsearch querie

Setting up Kibana is a snap. You can install Kibana and start exploring your Elasticsearch indices in minutes -- no
code, no additional infrastructure required.

For more information about creating and sharing visualizations and dashboards, see the <<visualize, Visualize>>
and <<dashboard, Dashboard>> topics. A complete <<getting-started,tutorial>> covering several aspects of Kibana's
functionality is also available.

NOTE: This guide describes how to use Kibana {version}. For information about what's new in Kibana {version}, see
the <<releasenotes, release notes>>.

////
[float]
[[data-discovery]]
=== Data Discovery and Visualization

Let's take a look at how you might use Kibana to explore and visualize data.
We've indexed some data from Transport for London (TFL) that shows one week
of transit (Oyster) card usage.

From Kibana's Discover page, we can submit search queries, filter the results, and
examine the data in the returned documents. For example, we can get all trips
completed by the Tube during the week by excluding incomplete trips and trips by bus:

image:images/TFL-CompletedTrips.jpg[Discover]

Right away, we can see the peaks for the morning and afternoon commute hours in the
histogram. By default, the Discover page also shows the first 500 entries that match the
search criteria. You can change the time filter, interact with the histogram to drill
down into the data, and view the details of particular documents. For more
information about exploring your data from the Discover page, see <<discover, Discover>>.

You can construct visualizations of your search results from the Visualization page.
Each visualization is associated with a search. For example, we can create a histogram
that shows the weekly London commute traffic via the Tube using our previous search.
The Y-axis shows the number of trips. The X-axis shows
the day and time. By adding a sub-aggregation, we can see the top 3 end stations during
each hour:

image:images/TFL-CommuteHistogram.jpg[Visualize]

You can save and share visualizations and combine them into dashboards to make it easy
to correlate related information. For example, we could create a dashboard
that displays several visualizations of the TFL data:

image:images/TFL-Dashboard.jpg[Dashboard]
////

@ -1,114 +0,0 @@
[[setup-repositories]]
=== Installing Kibana with apt and yum

Binary packages for Kibana are available for Unix distributions that support the `apt` and `yum` tools.
We also have repositories available for APT and YUM based distributions.

NOTE: Since the packages are created as part of the Kibana build, source packages are not available.

Packages are signed with the PGP key http://pgp.mit.edu/pks/lookup?op=vindex&search=0xD27D666CD88E42B4[D88E42B4], which
has the following fingerprint:

    4609 5ACC 8548 582C 1A26 99A9 D27D 666C D88E 42B4

[float]
[[kibana-apt]]
===== Installing Kibana with apt-get

. Download and install the Public Signing Key:
+
[source,sh]
--------------------------------------------------
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
--------------------------------------------------
+
. Add the repository definition to your `/etc/apt/sources.list.d/kibana.list` file:
+
["source","sh",subs="attributes"]
--------------------------------------------------
echo "deb https://artifacts.elastic.co/packages/5.x-prerelease/apt stable main" | sudo tee -a /etc/apt/sources.list.d/kibana.list
--------------------------------------------------
+
[WARNING]
==================================================
Use the `echo` method described above to add the Kibana repository. Do not use `add-apt-repository`, as that command
adds a `deb-src` entry with no corresponding source package.

When the `deb-src` entry is present, the commands in this procedure generate an error similar to the following:

    Unable to find expected entry 'main/source/Sources' in Release file (Wrong sources.list entry or malformed file)

Delete the `deb-src` entry from the `/etc/apt/sources.list.d/kibana.list` file to clear the error.
==================================================
+
. Run `apt-get update` to ready the repository, then install Kibana with the following command:
+
[source,sh]
--------------------------------------------------
sudo apt-get update && sudo apt-get install kibana
--------------------------------------------------
+
. Configure Kibana to start automatically during bootup. If your distribution is using the System V version of `init`,
run the following command:
+
[source,sh]
--------------------------------------------------
sudo update-rc.d kibana defaults 95 10
--------------------------------------------------
+
. If your distribution is using `systemd`, run the following commands instead:
+
[source,sh]
--------------------------------------------------
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
--------------------------------------------------
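The repository line added above can be sanity-checked before running `apt-get update`; a minimal sketch that writes to the working directory for illustration rather than to `/etc/apt/sources.list.d/`:

```shell
# Compose the repository definition the same way the install step does,
# then confirm it has the expected "deb <url> <suite> <component>" shape.
echo "deb https://artifacts.elastic.co/packages/5.x-prerelease/apt stable main" > kibana.list
grep -q '^deb https://artifacts\.elastic\.co/packages/.* stable main$' kibana.list && echo "repo line ok"
```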

[float]
[[kibana-yum]]
===== Installing Kibana with yum

WARNING: The repositories set up in this procedure are not compatible with distributions using version 3 of `rpm`, such
as CentOS version 5.

. Download and install the public signing key:
+
[source,sh]
--------------------------------------------------
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
--------------------------------------------------
+
. Create a file named `kibana.repo` in the `/etc/yum.repos.d/` directory with the following contents:
+
["source","sh",subs="attributes"]
--------------------------------------------------
[kibana-{packageversion}]
name=Kibana repository for {packageversion} packages
baseurl=https://artifacts.elastic.co/packages/5.x-prerelease/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
--------------------------------------------------
+
. Install Kibana by running the following command:
+
[source,sh]
--------------------------------------------------
yum install kibana
--------------------------------------------------
+
. Configure Kibana to start automatically during bootup. If your distribution is using the System V version of `init`
(check with `ps -p 1`), run the following command:
+
[source,sh]
--------------------------------------------------
chkconfig --add kibana
--------------------------------------------------
+
. If your distribution is using `systemd`, run the following commands instead:
+
[source,sh]
--------------------------------------------------
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
--------------------------------------------------
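The `kibana.repo` step can also be scripted; a sketch that hard-codes the package series (the docs use the `{packageversion}` attribute) and writes to the working directory instead of `/etc/yum.repos.d/`:

```shell
# Write the Kibana yum repo definition in one step (series hard-coded for the sketch)
PKG_VERSION="5.x"
cat > kibana.repo <<EOF
[kibana-${PKG_VERSION}]
name=Kibana repository for ${PKG_VERSION} packages
baseurl=https://artifacts.elastic.co/packages/5.x-prerelease/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
# Show the section header to confirm the file was written
grep '^\[kibana-' kibana.repo
```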

docs/management.asciidoc (new file)
@ -0,0 +1,22 @@
[[management]]
= Management

[partintro]
--
The Management application is where you perform your runtime configuration of
Kibana, including both the initial setup and ongoing configuration of index
patterns, advanced settings that tweak the behaviors of Kibana itself, and
the various "objects" that you can save throughout Kibana such as searches,
visualizations, and dashboards.

This section is pluggable, so in addition to the out-of-the-box capabilities,
packs such as X-Pack can add additional management capabilities to Kibana.
--

include::management/index-patterns.asciidoc[]

include::management/managing-fields.asciidoc[]

include::management/advanced-options.asciidoc[]

include::management/managing-saved-objects.asciidoc[]

@ -1,3 +1,18 @@
[[advanced-options]]
== Setting Advanced Options

The *Advanced Settings* page enables you to directly edit settings that control the behavior of the Kibana application.
For example, you can change the format used to display dates, specify the default index pattern, and set the precision
for displayed decimal values.

To set advanced options:

. Go to *Settings > Advanced*.
. Click the *Edit* button for the option you want to modify.
. Enter a new value for the option.
. Click the *Save* button.

[float]
[[kibana-settings-reference]]

WARNING: Modifying the following settings can significantly affect Kibana's performance and cause problems that are

@ -7,7 +22,7 @@ compatible with other configuration settings. Deleting a custom setting removes
.Kibana Settings Reference
[horizontal]
`query:queryString:options`:: Options for the Lucene query string parser.
`sort:options`:: Options for the Elasticsearch {es-ref}search-request-sort.html[sort] parameter.
`dateFormat`:: The format to use for displaying pretty-formatted dates.
`dateFormat:tz`:: The timezone that Kibana uses. The default value of `Browser` uses the timezone detected by the browser.
`dateFormat:scaled`:: These values define the format used to render ordered time-based data. Formatted timestamps must

@ -28,7 +43,7 @@ increase request processing time.
`histogram:maxBars`:: Date histograms are not generated with more bars than the value of this property, scaling values
when necessary.
`visualization:tileMap:maxPrecision`:: The maximum geoHash precision displayed on tile maps: 7 is high, 10 is very high,
12 is the maximum. {es-ref}search-aggregations-bucket-geohashgrid-aggregation.html#_cell_dimensions_at_the_equator[Explanation of cell dimensions].
`visualization:tileMap:WMSdefaults`:: Default properties for the WMS map server support in the tile map.
`visualization:colorMapping`:: Maps values to specified colors within visualizations.
`visualization:loadingDelay`:: Time to wait before dimming visualizations during query.

docs/management/index-patterns.asciidoc (new file)
@ -0,0 +1,146 @@
[[index-patterns]]
== Index Patterns

To use Kibana, you have to tell it about the Elasticsearch indices that you want to explore by configuring one or more
index patterns. You can also:

* Create scripted fields that are computed on the fly from your data. You can browse and visualize scripted fields, but
you cannot search them.
* Set advanced options such as the number of rows to show in a table and how many of the most popular fields to show.
Use caution when modifying advanced options, as it's possible to set values that are incompatible with one another.
* Configure Kibana for a production environment.

[float]
[[settings-create-pattern]]
== Creating an Index Pattern to Connect to Elasticsearch
An _index pattern_ identifies one or more Elasticsearch indices that you want to explore with Kibana. Kibana looks for
index names that match the specified pattern.
An asterisk (*) in the pattern matches zero or more characters. For example, the pattern `myindex-*` matches all
indices whose names start with `myindex-`, such as `myindex-1` and `myindex-2`.

An index pattern can also simply be the name of a single index.

To create an index pattern to connect to Elasticsearch:

. Go to the *Settings > Indices* tab.
. Specify an index pattern that matches the name of one or more of your Elasticsearch indices. By default, Kibana
guesses that you're working with log data being fed into Elasticsearch by Logstash.
+
NOTE: When you switch between top-level tabs, Kibana remembers where you were. For example, if you view a particular
index pattern from the Settings tab, switch to the Discover tab, and then go back to the Settings tab, Kibana displays
the index pattern you last looked at. To get to the create pattern form, click the *Add* button in the Index Patterns
list.

. If your index contains a timestamp field that you want to use to perform time-based comparisons, select the *Index
contains time-based events* option and select the index field that contains the timestamp. Kibana reads the index
mapping to list all of the fields that contain a timestamp.

. By default, Kibana restricts wildcard expansion of time-based index patterns to indices with data within the currently
selected time range. Click *Do not expand index pattern when searching* to disable this behavior.

. Click *Create* to add the index pattern.

. To designate the new pattern as the default pattern to load when you view the Discover tab, click the *favorite*
button.

NOTE: When you define an index pattern, indices that match that pattern must exist in Elasticsearch. Those indices must
contain data.

To use an event time in an index name, enclose the static text in the pattern and specify the date format using the
tokens described in the following table.

For example, `[logstash-]YYYY.MM.DD` matches all indices whose names have a timestamp of the form `YYYY.MM.DD` appended
to the prefix `logstash-`, such as `logstash-2015.01.31` and `logstash-2015.02.01`.

[float]
[[date-format-tokens]]
.Date Format Tokens
[horizontal]
`M`:: Month - cardinal: 1 2 3 ... 12
`Mo`:: Month - ordinal: 1st 2nd 3rd ... 12th
`MM`:: Month - two digit: 01 02 03 ... 12
`MMM`:: Month - abbreviation: Jan Feb Mar ... Dec
`MMMM`:: Month - full: January February March ... December
`Q`:: Quarter: 1 2 3 4
`D`:: Day of Month - cardinal: 1 2 3 ... 31
`Do`:: Day of Month - ordinal: 1st 2nd 3rd ... 31st
`DD`:: Day of Month - two digit: 01 02 03 ... 31
`DDD`:: Day of Year - cardinal: 1 2 3 ... 365
`DDDo`:: Day of Year - ordinal: 1st 2nd 3rd ... 365th
`DDDD`:: Day of Year - three digit: 001 002 ... 364 365
`d`:: Day of Week - cardinal: 0 1 2 ... 6
`do`:: Day of Week - ordinal: 0th 1st 2nd ... 6th
`dd`:: Day of Week - 2-letter abbreviation: Su Mo Tu ... Sa
`ddd`:: Day of Week - 3-letter abbreviation: Sun Mon Tue ... Sat
`dddd`:: Day of Week - full: Sunday Monday Tuesday ... Saturday
`e`:: Day of Week (locale): 0 1 2 ... 6
`E`:: Day of Week (ISO): 1 2 3 ... 7
`w`:: Week of Year - cardinal (locale): 1 2 3 ... 53
`wo`:: Week of Year - ordinal (locale): 1st 2nd 3rd ... 53rd
`ww`:: Week of Year - 2-digit (locale): 01 02 03 ... 53
`W`:: Week of Year - cardinal (ISO): 1 2 3 ... 53
`Wo`:: Week of Year - ordinal (ISO): 1st 2nd 3rd ... 53rd
`WW`:: Week of Year - two-digit (ISO): 01 02 03 ... 53
`YY`:: Year - two digit: 70 71 72 ... 30
`YYYY`:: Year - four digit: 1970 1971 1972 ... 2030
`gg`:: Week Year - two digit (locale): 70 71 72 ... 30
`gggg`:: Week Year - four digit (locale): 1970 1971 1972 ... 2030
`GG`:: Week Year - two digit (ISO): 70 71 72 ... 30
`GGGG`:: Week Year - four digit (ISO): 1970 1971 1972 ... 2030
`A`:: AM/PM: AM PM
`a`:: am/pm: am pm
`H`:: Hour: 0 1 2 ... 23
`HH`:: Hour - two digit: 00 01 02 ... 23
`h`:: Hour - 12-hour clock: 1 2 3 ... 12
`hh`:: Hour - 12-hour clock, 2 digit: 01 02 03 ... 12
`m`:: Minute: 0 1 2 ... 59
`mm`:: Minute - two-digit: 00 01 02 ... 59
`s`:: Second: 0 1 2 ... 59
`ss`:: Second - two-digit: 00 01 02 ... 59
`S`:: Fractional Second - 10ths: 0 1 2 ... 9
`SS`:: Fractional Second - 100ths: 0 1 ... 98 99
`SSS`:: Fractional Seconds - 1000ths: 0 1 ... 998 999
`Z`:: Timezone - zero UTC offset (hh:mm format): -07:00 -06:00 -05:00 .. +07:00
`ZZ`:: Timezone - zero UTC offset (hhmm format): -0700 -0600 -0500 ... +0700
`X`:: Unix Timestamp: 1360013296
`x`:: Unix Millisecond Timestamp: 1360013296123
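On the command line, the event-time pattern `[logstash-]YYYY.MM.DD` above corresponds to `strftime`-style tokens; a quick sketch of generating today's matching index name:

```shell
# Today's index name for the event-time pattern [logstash-]YYYY.MM.DD
date +"logstash-%Y.%m.%d"
```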

[float]
[[set-default-pattern]]
== Setting the Default Index Pattern
The default index pattern is loaded automatically when you view the *Discover* tab. Kibana displays a star to the
left of the name of the default pattern in the Index Patterns list on the *Settings > Indices* tab. The first pattern
you create is automatically designated as the default pattern.

To set a different pattern as the default index pattern:

. Go to the *Settings > Indices* tab.
. Select the pattern you want to set as the default in the Index Patterns list.
. Click the pattern's *Favorite* button.

NOTE: You can also manually set the default index pattern in *Advanced > Settings*.

[float]
[[reload-fields]]
== Reloading the Index Fields List
When you add an index mapping, Kibana automatically scans the indices that match the pattern to display a list of the
index fields. You can reload the index fields list to pick up any newly added fields.

Reloading the index fields list also resets Kibana's popularity counters for the fields. The popularity counters keep
track of the fields you've used most often within Kibana and are used to sort fields within lists.

To reload the index fields list:

. Go to the *Settings > Indices* tab.
. Select an index pattern from the Index Patterns list.
. Click the pattern's *Reload* button.

[float]
[[delete-pattern]]
== Deleting an Index Pattern
To delete an index pattern:

. Go to the *Settings > Indices* tab.
. Select the pattern you want to remove in the Index Patterns list.
. Click the pattern's *Delete* button.
. Confirm that you want to remove the index pattern.

docs/management/managing-fields.asciidoc (new file)
@ -0,0 +1,123 @@
[[managing-fields]]
== Managing Fields

The fields for the index pattern are listed in a table. Click a column header to sort the table by that column. Click
the *Controls* button in the rightmost column for a given field to edit the field's properties. You can manually set
the field's format from the *Format* drop-down. Format options vary based on the field's type.

You can also set the field's popularity value in the *Popularity* text entry box to any desired value. Click the
*Update Field* button to confirm your changes or *Cancel* to return to the list of fields.

Kibana has field formatters for the following field types:

* <<field-formatters-string, Strings>>
* <<field-formatters-date, Dates>>
* <<field-formatters-geopoint, Geopoints>>
* <<field-formatters-numeric, Numbers>>

[[field-formatters-string]]
=== String Field Formatters

String fields support the `String` and `Url` formatters.

include::field-formatters/string-formatter.asciidoc[]

include::field-formatters/url-formatter.asciidoc[]

[[field-formatters-date]]
=== Date Field Formatters

Date fields support the `Date`, `Url`, and `String` formatters.

The `Date` formatter enables you to choose the display format of date stamps using the http://momentjs.com[Moment.js]
standard format definitions.

include::field-formatters/string-formatter.asciidoc[]

include::field-formatters/url-formatter.asciidoc[]

[[field-formatters-geopoint]]
=== Geographic Point Field Formatters

Geographic point fields support the `String` formatter.

include::field-formatters/string-formatter.asciidoc[]

[[field-formatters-numeric]]
=== Numeric Field Formatters

Numeric fields support the `Url`, `Bytes`, `Duration`, `Number`, `Percentage`, `String`, and `Color` formatters.

include::field-formatters/url-formatter.asciidoc[]

include::field-formatters/string-formatter.asciidoc[]

include::field-formatters/duration-formatter.asciidoc[]

include::field-formatters/color-formatter.asciidoc[]

The `Bytes`, `Number`, and `Percentage` formatters enable you to choose the display formats of numbers in this field using
the https://adamwdraper.github.io/Numeral-js/[numeral.js] standard format definitions.

[[scripted-fields]]
=== Scripted Fields

Scripted fields compute data on the fly from the data in your Elasticsearch indices. Scripted field data is shown on
the Discover tab as part of the document data, and you can use scripted fields in your visualizations.
Scripted field values are computed at query time, so they aren't indexed and cannot be searched.

NOTE: Kibana cannot query scripted fields.

WARNING: Computing data on the fly with scripted fields can be very resource intensive and can have a direct impact on
Kibana's performance. Keep in mind that there's no built-in validation of a scripted field. If your scripts are
buggy, you'll get exceptions whenever you try to view the dynamically generated data.

Scripted fields use the Lucene expression syntax. For more information,
see {es-ref}modules-scripting-expression.html[Lucene Expressions Scripts].

You can reference any single-value numeric field in your expressions, for example:

----
doc['field_name'].value
----
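For instance, a scripted field that reports a numeric `bytes` field in kilobytes could use an expression like the following (a sketch; the `bytes` field name is an assumption about your index):

----
doc['bytes'].value / 1024
----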

[float]
[[create-scripted-field]]
=== Creating a Scripted Field
To create a scripted field:

. Go to *Settings > Indices*.
. Select the index pattern you want to add a scripted field to.
. Go to the pattern's *Scripted Fields* tab.
. Click *Add Scripted Field*.
. Enter a name for the scripted field.
. Enter the expression that you want to use to compute a value on the fly from your index data.
. Click *Save Scripted Field*.

For more information about scripted fields in Elasticsearch, see
{es-ref}modules-scripting.html[Scripting].

NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you to enable
{es-ref}modules-scripting.html[dynamic Groovy scripting].

[float]
[[update-scripted-field]]
=== Updating a Scripted Field
To modify a scripted field:

. Go to *Settings > Indices*.
. Click the *Edit* button for the scripted field you want to change.
. Make your changes and then click *Save Scripted Field* to update the field.

WARNING: Keep in mind that there's no built-in validation of a scripted field. If your scripts are buggy, you'll get
exceptions whenever you try to view the dynamically generated data.

[float]
[[delete-scripted-field]]
=== Deleting a Scripted Field
To delete a scripted field:

. Go to *Settings > Indices*.
. Click the *Delete* button for the scripted field you want to remove.
. Confirm that you really want to delete the field.

docs/management/managing-saved-objects.asciidoc (new file)
@ -0,0 +1,58 @@
[[managing-saved-objects]]
== Managing Saved Searches, Visualizations, and Dashboards

You can view, edit, and delete saved searches, visualizations, and dashboards from *Settings > Objects*. You can also
export or import sets of searches, visualizations, and dashboards.

Viewing a saved object displays the selected item in the *Discover*, *Visualize*, or *Dashboard* page. To view a saved
object:

. Go to *Settings > Objects*.
. Select the object you want to view.
. Click the *View* button.

Editing a saved object enables you to directly modify the object definition. You can change the name of the object, add
a description, and modify the JSON that defines the object's properties.

If you attempt to access an object whose index has been deleted, Kibana displays its Edit Object page. You can:

* Recreate the index so you can continue using the object.
* Delete the object and recreate it using a different index.
* Change the index name referenced in the object's `kibanaSavedObjectMeta.searchSourceJSON` to point to an existing
index pattern. This is useful if the index you were working with has been renamed.

WARNING: No validation is performed for object properties. Submitting invalid changes will render the object unusable.
Generally, you should use the *Discover*, *Visualize*, or *Dashboard* pages to create new objects instead of directly
editing existing ones.

To edit a saved object:

. Go to *Settings > Objects*.
. Select the object you want to edit.
. Click the *Edit* button.
. Make your changes to the object definition.
. Click the *Save Object* button.

To delete a saved object:

. Go to *Settings > Objects*.
. Select the object you want to delete.
. Click the *Delete* button.
. Confirm that you really want to delete the object.

To export a set of objects:

. Go to *Settings > Objects*.
. Select the type of object you want to export. You can export a set of dashboards, searches, or visualizations.
. Click the selection box for the objects you want to export, or click the *Select All* box.
. Click *Export* to select a location to write the exported JSON.

WARNING: Exported dashboards do not include their associated index patterns. Re-create the index patterns manually before
importing saved dashboards to a Kibana instance running on another Elasticsearch cluster.
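Exported sets are plain JSON, so you can sanity-check a file before importing it elsewhere; a minimal sketch (the two-object sample and its `_type`/`_source` keys are assumptions about the 5.x export shape, used here only for illustration):

```shell
# Create a small stand-in export file, then count the saved objects in it
cat > export.json <<'EOF'
[
  {"_id": "sample-dashboard", "_type": "dashboard", "_source": {"title": "Sample"}},
  {"_id": "sample-viz", "_type": "visualization", "_source": {"title": "Sample Viz"}}
]
EOF
grep -c '"_type"' export.json
```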

To import a set of objects:

. Go to *Settings > Objects*.
. Click *Import* to navigate to the JSON file representing the set of objects to import.
. Click *Open* after selecting the JSON file.
. If any objects in the set would overwrite objects already present in Kibana, confirm the overwrite.

docs/migration.asciidoc (new file)
@ -0,0 +1,10 @@
[[breaking-changes]]
= Breaking changes

[partintro]
--
This section discusses the changes that you need to be aware of when migrating
your application from one version of Kibana to another.
--

include::migration/migrate_5_0.asciidoc[]

docs/migration/migrate_5_0.asciidoc (new file)
@ -0,0 +1,136 @@
[[breaking-changes-5.0]]
== Breaking changes in 5.0

This section discusses the changes that you need to be aware of when migrating
your application to Kibana 5.0.

[float]
=== URL changes for DEB/RPM packages
*Details:* The previous `packages.elastic.co` URL has been altered to `artifacts.elastic.co`.

*Impact:* Ensure you update your repository files before running the upgrade process, or your operating system may not see the new
packages.
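
For example, on a Debian-based system this means editing the Elastic entry in your apt sources before upgrading. The file name and the old path below are illustrative; check your own repository definition:

[source,shell]
----
# /etc/apt/sources.list.d/elastic.list
# before (old host):
#   deb https://packages.elastic.co/kibana/5.x/debian stable main
# after (new host):
deb https://artifacts.elastic.co/packages/5.x/apt stable main
----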

[float]
=== Kibana binds to localhost by default
{pull}8013[Pull Request 8013]

*Details:* Kibana (like Elasticsearch) now binds to localhost for security purposes instead of 0.0.0.0 (all addresses). Binding to 0.0.0.0 also caused issues for Windows users.

*Impact:* If you are running Kibana inside a container or another environment that does not allow binding to localhost, Kibana will fail to start unless `server.host` is set to a valid IP address or hostname in `kibana.yml`.
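
If Kibana does need to accept remote connections, for example inside a Docker container, set the host explicitly (the value below is one common choice, not a recommendation for every deployment):

[source,yaml]
----
# kibana.yml: listen on all interfaces instead of the localhost default
server.host: "0.0.0.0"
----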

[float]
=== Markdown headers

{pull}7855[Pull Request 7855]

*Details:* As part of addressing the security issue https://www.elastic.co/community/security[ESA-2016-03] (CVE-2016-1000220) in Kibana, the markdown version has been bumped.

*Impact:* As a result of the fix to ESA-2016-03, there is a slight change in the markdown format for headers.

Previously, headers were defined using `###` followed immediately by the title:

###Packetbeat:
[Dashboard](/#/dashboard/Packetbeat-Dashboard)
[Web transactions](/#/dashboard/HTTP)

They must now be defined with a space between `###` and the title:

### Packetbeat:
[Dashboard](/#/dashboard/Packetbeat-Dashboard)
[Web transactions](/#/dashboard/HTTP)

[float]
=== Linux package install directories

{pull}7308[Pull Request 7308]

*Details:* To align with the Elasticsearch packages, Kibana now installs binaries under `/usr/share/kibana` and configuration files under `/etc/kibana`. Previously, both were located under `/opt/kibana`.

*Impact:* Apart from learning the new locations of the Kibana binaries and configuration files, you may have to update your automation scripts accordingly.
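
As a rough map of the old locations to the new ones (a sketch; the exact layout can differ slightly between the DEB and RPM packages):

[source,shell]
----
/opt/kibana/bin/kibana         ->  /usr/share/kibana/bin/kibana
/opt/kibana/config/kibana.yml  ->  /etc/kibana/kibana.yml
----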

[float]
=== The plugin installer now has its own executable

{pull}6402[Pull Request 6402]

*Details:* The new installer is located at `bin/kibana-plugin`. When installing or removing Kibana plugins, call `kibana-plugin` instead of the main `kibana` script.

*Impact:* You may have to update your automation scripts.

[float]
=== Only whitelisted client headers are sent to Elasticsearch

{pull}6896[Pull Request 6896]

*Details:* The only browser client headers that are proxied to Elasticsearch are the ones listed in the `elasticsearch.requestHeadersWhitelist` server configuration.

*Impact:* If you're relying on client headers in Elasticsearch, you will need to whitelist the specific headers in your `kibana.yml`.
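
For example, to keep forwarding the `Authorization` header and add a custom header (the custom header name is illustrative):

[source,yaml]
----
# kibana.yml: headers the Kibana server passes through to Elasticsearch
elasticsearch.requestHeadersWhitelist: [ authorization, x-my-custom-header ]
----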

[float]
=== `server.defaultRoute` is now always prefixed by `server.basePath`

{pull}6953[Pull Request 6953]

*Details:* The base path configuration now precedes the default route configuration when accessing the default route.

*Impact:* If you were relying on both the `defaultRoute` and `basePath` configurations, you will need to remove the hardcoded `basePath` from your `defaultRoute`.
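
For example, with a base path of `/kibana` (the values are illustrative):

[source,yaml]
----
server.basePath: "/kibana"

# Before 5.0 the base path had to be repeated in the default route:
#   server.defaultRoute: "/kibana/app/kibana"
# In 5.0 the prefix is applied automatically:
server.defaultRoute: "/app/kibana"
----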

[float]
=== Directory listings of static assets are no longer rendered

{pull}6764[Pull Request 6764]

*Details:* The server no longer renders a list of static files if you try to access a directory.

*Impact:* If you were relying on this behavior before, you will need to expose underlying directory listings via a reverse proxy instead.

[float]
=== Console logs display date/time in UTC

{pull}8534[Pull Request 8534]

*Details:* All server logs now render in UTC rather than the server's local time.

*Impact:* If you are parsing the timestamps of Kibana server logs in an automated way, make sure to update your automation to accommodate UTC values.

[float]
=== A column for Average no longer renders along with Standard Deviation

{pull}7827[Pull Request 7827]

*Details:* From the early days of Kibana, adding a standard deviation metric to a data table also resulted in an average column being added to that data table. This is no longer the case.

*Impact:* If you want both standard deviation and average in the same data table, add both columns just as you would any other metric.

[float]
=== Minimum size on terms aggregations has been changed from 0 to 1

{pull}8339[Pull Request 8339]

*Details:* Elasticsearch has removed the ability to specify a size of 0 for terms aggregations, so Kibana's minimum value has been adjusted to follow suit.

*Impact:* Any saved visualization that relies on `size=0` will need to be updated.
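
In the aggregation definition of an affected saved visualization, this amounts to replacing the zero size with an explicit bucket limit (the field name and limit below are illustrative):

[source,js]
----
// before (rejected by Elasticsearch 5.0):
{ "terms": { "field": "host", "size": 0 } }
// after: request an explicit number of buckets
{ "terms": { "field": "host", "size": 500 } }
----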

[float]
=== Dashboards created before 5.0

*Details:* Loading a 4.x dashboard in Kibana 5.0 results in an internal change
to the dashboard's metadata, which you can persist by saving the dashboard.

*Impact:* This change will not affect the functionality of the dashboard itself,
but you must save the dashboard before using certain features such as X-Pack reporting.

[float]
=== Saved objects with previously deprecated Elasticsearch features

*Details:* Since Kibana 4.3, users have been able to arbitrarily modify filters
via a generic JSON editor. If users took advantage of any deprecated Elasticsearch
features in this way, those filters will now cause errors in Kibana because the
features have been removed from Elasticsearch 5.0. Check the Elasticsearch
{ref}/breaking_50_search_changes.html#_deprecated_queries_removed[breaking changes]
documentation for more details.

*Impact*: Discover, Visualize, and Dashboard will error for any saved objects that
rely on removed Elasticsearch functionality. Users will need to update the
JSON of any affected filters.

@@ -1,37 +1,35 @@
[[kibana-plugins]]
== Kibana Plugins
= Kibana Plugins

[partintro]
--
Add-on functionality for Kibana is implemented with plug-in modules. You can use the `bin/kibana-plugin`
command to manage these modules. You can also install a plugin manually by moving the plugin file to the
`plugins` directory and unpacking the plugin files into a new directory.
--

A list of existing Kibana plugins is available on https://github.com/elastic/kibana/wiki/Known-Plugins[GitHub].

[float]
=== Installing Plugins
== Installing Plugins

Use the following command to install a plugin:

[source,shell]
bin/kibana-plugin install <package name or URL>

When you specify a plugin name without a URL, the plugin tool attempts to download the plugin from `download.elastic.co`.

[float]
==== Installing Plugins from an Arbitrary URL

You can specify a URL to a specific plugin, as in the following example:
When you specify a plugin name without a URL, the plugin tool attempts to download an official Elastic plugin, such as:

["source","shell",subs="attributes"]
$ bin/kibana-plugin install https://download.elastic.co/kibana/x-pack/x-pack-{version}.zip
Attempting to transfer from https://download.elastic.co/kibana/x-pack/x-pack-{version}.zip
Transferring <some number> bytes....................
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
Extraction complete
Optimizing and caching browser bundles...
Plugin installation complete
$ bin/kibana-plugin install x-pack


[float]
=== Installing Plugins from an Arbitrary URL

You can download official Elastic plugins simply by specifying their name. You
can alternatively specify a URL to a specific plugin, as in the following
example:

["source","shell",subs="attributes"]
$ bin/kibana-plugin install https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-{version}.zip

You can specify URLs that use the HTTP, HTTPS, or `file` protocols.

@@ -43,40 +41,36 @@ example:
[source,shell]
$ bin/kibana-plugin install file:///some/local/path/x-pack.zip -d path/to/directory
Installing sample-plugin
Attempting to transfer from file:///some/local/path/x-pack.zip
Transferring <some number> bytes....................
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
Extraction complete
Optimizing and caching browser bundles...
Plugin installation complete

NOTE: This command creates the specified directory if it does not already exist.

[float]
=== Removing Plugins

Use the `remove` command to remove a plugin, including any configuration information, as in the following example:

[source,shell]
$ bin/kibana-plugin remove timelion

You can also remove a plugin manually by deleting the plugin's subdirectory under the `plugins/` directory.

[float]
=== Listing Installed Plugins

Use the `list` command to list the currently installed plugins.

[float]
=== Updating Plugins
== Updating & Removing Plugins

To update a plugin, remove the current version and reinstall the plugin.

[float]
=== Configuring the Plugin Manager
To remove a plugin, use the `remove` command, as in the following example:

[source,shell]
$ bin/kibana-plugin remove x-pack

You can also remove a plugin manually by deleting the plugin's subdirectory under the `plugins/` directory.

NOTE: Removing a plugin will result in an "optimize" run which will delay the next start of Kibana.

== Disabling Plugins

Use the following command to disable a plugin:

[source,shell]
-----------
./bin/kibana --<plugin ID>.enabled=false <1>
-----------

NOTE: Disabling or enabling a plugin will result in an "optimize" run which will delay the start of Kibana.

<1> You can find a plugin's plugin ID as the value of the `name` property in the plugin's `package.json` file.

== Configuring the Plugin Manager

By default, the plugin manager provides you with feedback on the status of the activity you've asked the plugin manager
to perform. You can control the level of feedback for the `install` and `remove` commands with the `--quiet` and

@@ -95,7 +89,7 @@ bin/kibana-plugin install --timeout 30s sample-plugin
bin/kibana-plugin install --timeout 1m sample-plugin

[float]
==== Plugins and Custom Kibana Configurations
=== Plugins and Custom Kibana Configurations

Use the `-c` or `--config` options with the `install` and `remove` commands to specify the path to the configuration file
used to start Kibana. By default, Kibana uses the configuration file `config/kibana.yml`. When you change your installed

@@ -110,22 +104,3 @@ you must specify the path to that configuration file each time you use the `bin/
64:: Unknown command or incorrect option parameter
74:: I/O error
70:: Other error

[float]
[[plugin-switcher]]
== Switching Plugin Functionality

The Kibana UI serves as a framework that can contain several different plugins. You can switch between these
plugins by clicking the icons for your desired plugins in the left-hand navigation bar.

[float]
=== Disabling Plugins

Use the following command to disable a plugin:

[source,shell]
-----------
./bin/kibana --<plugin ID>.enabled=false <1>
-----------

<1> You can find a plugin's plugin ID as the value of the `name` property in the plugin's `package.json` file.

@@ -1,324 +0,0 @@
[[production]]
== Using Kibana in a Production Environment
* <<configuring-kibana-shield, Configuring Kibana to Work with {scyld}>>
* <<enabling-ssl, Enabling SSL>>
* <<controlling-access, Controlling Access>>
* <<load-balancing, Load Balancing Across Multiple Elasticsearch Nodes>>

How you deploy Kibana largely depends on your use case. If you are the only user,
you can run Kibana on your local machine and configure it to point to whatever
Elasticsearch instance you want to interact with. Conversely, if you have a large
number of heavy Kibana users, you might need to load balance across multiple
Kibana instances that are all connected to the same Elasticsearch instance.

While Kibana isn't terribly resource intensive, we still recommend running Kibana
separate from your Elasticsearch data or master nodes. To distribute Kibana
traffic across the nodes in your Elasticsearch cluster, you can run Kibana
and an Elasticsearch client node on the same machine. For more information, see
<<load-balancing, Load Balancing Across Multiple Elasticsearch Nodes>>.

[float]
[[configuring-kibana-shield]]
=== Configuring Kibana to Work with {scyld}

Kibana users have to authenticate when your cluster has {scyld} enabled. You
configure {scyld} roles for your Kibana users to control what data those users
can access. Kibana runs a webserver that makes requests to Elasticsearch on the
client's behalf, so you also need to configure credentials for the Kibana server
so those requests can be authenticated.

You must configure Kibana to encrypt communications between the browser and the
Kibana server to prevent user passwords from being sent in the clear. If you are
using SSL/TLS to encrypt traffic to and from the nodes in your Elasticsearch
cluster, you must also configure Kibana to connect to Elasticsearch via HTTPS.

With {scyld} enabled, if you load a Kibana dashboard that accesses data in an
index that you are not authorized to view, you get an error that indicates the
index does not exist. {scyld} does not currently provide a way to control which
users can load which dashboards.

To use Kibana with {scyld}:

. Configure the password for the built-in `kibana` user. The Kibana server uses
this user to gain access to the cluster monitoring APIs and the `.kibana` index.
The server does _not_ need access to user indexes.
+
By default, the `kibana` user password is set to `changeme`. Change this password
through the reset password API:
+
[source,shell]
--------------------------------------------------------------------------------
curl -XPUT 'localhost:9200/_security/user/kibana/_password' -d '{
  "password" : "s0m3th1ngs3cr3t"
}'
--------------------------------------------------------------------------------
+
Once reset, you need to add the following property to `kibana.yml`:
+
[source,yaml]
--------------------------------------------------------------------------------
elasticsearch.password: "s0m3th1ngs3cr3t"
--------------------------------------------------------------------------------

[[kibana-roles]]
. Derive Kibana user roles from the example <<kibana-user-role, `my_kibana_user`>>
user role. Assign the roles to the Kibana users to control which indices they can
access. Kibana users need access to the indices that they will be working with
and the `.kibana` index where their saved searches, visualizations, and dashboards
are stored. Users also need access to the `.kibana-devnull` index. The example
`my_kibana_user` role grants read access to the indices that match the
`logstash-*` pattern and full access to the `.kibana` index, which is required.
+
TIP: You can define as many different roles for your Kibana users as you need.
+
[[kibana-user-role]]
For example, the following `my_kibana_user` role only allows users to discover
and visualize data in the `logstash-*` indices.
+
[source,js]
--------------------------------------------------------------------------------
{
  "cluster" : [ "monitor" ],
  "indices" : [
    {
      "names" : [ "logstash-*" ],
      "privileges" : [ "view_index_metadata", "read" ]
    },
    {
      "names" : [ ".kibana*" ], <1>
      "privileges" : [ "manage", "read", "index" ]
    }
  ]
}
--------------------------------------------------------------------------------
<1> All Kibana users need access to the `.kibana` and `.kibana-devnull` indices.

. Assign the appropriate roles to your Kibana users or groups of users:

** If you're using the `native` realm, you can assign roles using the
{xpack}/security-api-users.html[{scyld} User Management API]. For example, the following
creates a user named `jacknich` and assigns it the `kibana_monitoring` role:
+
[source,js]
--------------------------------------------------------------------------------
POST /_xpack/security/user/jacknich
{
  "password" : "t0pS3cr3t",
  "roles" : [ "kibana_monitoring" ]
}
--------------------------------------------------------------------------------

** If you are using an LDAP or Active Directory realm, you can either assign
roles on a per user basis, or assign roles to groups of users. By default, role
mappings are stored in {xpack}/mapping-roles.html[`CONFIGDIR/x-pack/role_mapping.yml`].
For example, the following snippet assigns the `kibana_monitoring` role to the
group named `admins` and the user named Jack Nicholson:
+
[source,yaml]
--------------------------------------------------------------------------------
kibana_monitoring:
  - "cn=admins,dc=example,dc=com"
  - "cn=Jack Nicholson,dc=example,dc=com"
--------------------------------------------------------------------------------

. If you have enabled SSL encryption in {scyld}, configure Kibana to connect
to Elasticsearch via HTTPS. To do this:

.. Specify the HTTPS protocol in the `elasticsearch.url` setting in the Kibana
configuration file, `kibana.yml`:
+
[source,yaml]
--------------------------------------------------------------------------------
elasticsearch.url: "https://<your_elasticsearch_host>.com:9200"
--------------------------------------------------------------------------------

.. If you are using your own CA to sign certificates for Elasticsearch, set the
`elasticsearch.ssl.ca` setting in `kibana.yml` to specify the location of the PEM
file.
+
[source,yaml]
--------------------------------------------------------------------------------
elasticsearch.ssl.ca: /path/to/your/cacert.pem
--------------------------------------------------------------------------------

. Configure Kibana to encrypt communications between the browser and the Kibana
server. To do this, configure the `server.ssl.key` and `server.ssl.cert` properties
in `kibana.yml`:
+
[source,yaml]
--------------------------------------------------------------------------------
server.ssl.key: /path/to/your/server.key
server.ssl.cert: /path/to/your/server.crt
--------------------------------------------------------------------------------
+
Once you enable SSL encryption between the browser and the Kibana server, access
Kibana via HTTPS. For example, `https://localhost:5601`.
+
NOTE: Enabling browser encryption is required to prevent passing user credentials
in the clear.

. Install X-Pack into Kibana. {scyld} secures user sessions and enables users
to log in and out of Kibana. To install X-Pack on Kibana:

.. Run the following command in your Kibana installation directory.
+
[source,shell]
--------------------------------------------------------------------------------
bin/kibana-plugin install x-pack
--------------------------------------------------------------------------------
+
[NOTE]
=============================================================================
To perform an offline install, download X-Pack from
+http://download.elasticsearch.org/kibana/x-pack/xpack-{version}.zip+
(http://download.elasticsearch.org/kibana/x-pack/xpack-{version}.zip.sha1.txt[sha1])
and run:

[source,shell]
---------------------------------------------------------
bin/kibana-plugin install file:///path/to/file/xpack-{version}.zip
---------------------------------------------------------
=============================================================================

.. Set the `xpack.security.encryptionKey` property in the `kibana.yml` configuration file.
You can use any text string as the encryption key.
+
[source,yaml]
--------------------------------------------------------------------------------
xpack.security.encryptionKey: "something_secret"
--------------------------------------------------------------------------------

.. To change the default session duration, set the `xpack.security.sessionTimeout` property
in the `kibana.yml` configuration file. By default, sessions expire after 30 minutes.
The timeout is specified in milliseconds. For example, set the timeout to 600000
to expire sessions after 10 minutes:
+
[source,yaml]
--------------------------------------------------------------------------------
xpack.security.sessionTimeout: 600000
--------------------------------------------------------------------------------

. Restart Kibana and verify that you can sign in as a user. If you are running
Kibana locally, go to `https://localhost:5601` and enter the credentials for a
user you've assigned a Kibana user role. For example, you could log in as the
`jacknich` user created above.
+
image::images/kibana-login.jpg["Kibana Login",link="images/kibana-login.jpg"]
+
NOTE: This must be a user who has been assigned a role derived from the example
<<kibana-user-role, `my_kibana_user` user role>>. Kibana server credentials
should only be used internally by the Kibana server. The Kibana server role
doesn't grant permission to access user indices.

[float]
[[security-ui-settings]]
===== Kibana {scyld} UI Settings
[options="header"]
|======
| Name | Default | Description
| `xpack.security.encryptionKey` | - | An arbitrary string used to encrypt credentials in a
                                       cookie. It is crucial that this key is not exposed to
                                       users of Kibana. Required.
| `xpack.security.sessionTimeout` | `1800000` (30 minutes) | Sets the session duration (in milliseconds).
| `xpack.security.cookieName` | `"sid"` | Sets the name of the cookie used for the session.
| `xpack.security.skipSslCheck` | `false` | Advanced setting. Set to `true` to enable Kibana to
                                            start if `server.ssl.cert` and `server.ssl.key` are
                                            not specified in `kibana.yml`. This should only be
                                            used if either SSL is configured outside of Kibana
                                            (for example, you are routing requests through a load
                                            balancer or proxy) or
                                            `xpack.security.useUnsafeSessions` is also set to
                                            `true`.
| `xpack.security.useUnsafeSessions` | `false` | Advanced setting. Set to `true` to use insecure
                                                 cookies for sessions in Kibana. Requires
                                                 `xpack.security.skipSslCheck` to also be set to
                                                 `true`.
|======


[float]
[[enabling-ssl]]
=== Enabling SSL
Kibana supports SSL encryption for both client requests and the requests the Kibana server
sends to Elasticsearch.

To encrypt communications between the browser and the Kibana server, you configure the `server.ssl.key` and
`server.ssl.cert` properties in `kibana.yml`:

[source,text]
----
# SSL key and certificate for browser connections to the Kibana server (PEM formatted)
server.ssl.key: /path/to/your/server.key
server.ssl.cert: /path/to/your/server.crt
----

If you are using {scyld} or a proxy that provides an HTTPS endpoint for Elasticsearch,
you can configure Kibana to access Elasticsearch via HTTPS so communications between
the Kibana server and Elasticsearch are encrypted.

To do this, you specify the HTTPS
protocol when you configure the Elasticsearch URL in `kibana.yml`:

[source,text]
----
elasticsearch.url: "https://<your_elasticsearch_host>.com:9200"
----

If you are using a self-signed certificate for Elasticsearch, set the `ca` property in
`kibana.yml` to specify the location of the PEM file. Setting the `ca` property lets you leave the `verify_ssl` option enabled.

[source,text]
----
# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
ca: /path/to/your/ca/cacert.pem
----

[float]
[[controlling-access]]
=== Controlling access
You can use {xpack}/xpack-security.html[{scyld}] to control what Elasticsearch data users can access through Kibana.
{scyld} provides index-level access control. If a user isn't authorized to run
the query that populates a Kibana visualization, the user just sees an empty
visualization.

To configure access to Kibana using {scyld}, you create roles
for Kibana using the `my_kibana_user` default role as a starting point. For more
information, see {xpack}/kibana.html[Using Kibana with {scyld}].

[float]
[[load-balancing]]
=== Load Balancing Across Multiple Elasticsearch Nodes
If you have multiple nodes in your Elasticsearch cluster, the easiest way to distribute Kibana requests
across the nodes is to run an Elasticsearch _client_ node on the same machine as Kibana.
Elasticsearch client nodes are essentially smart load balancers that are part of the cluster. They
process incoming HTTP requests, redirect operations to the other nodes in the cluster as needed, and
gather and return the results. For more information, see
{ref}/modules-node.html[Node] in the Elasticsearch reference.

To use a local client node to load balance Kibana requests:

. Install Elasticsearch on the same machine as Kibana.
. Configure the node as a client node. In `elasticsearch.yml`, set both `node.data` and `node.master` to `false`:
+
--------
# 3. You want this node to be neither master nor data node, but
#    to act as a "search load balancer" (fetching data from nodes,
#    aggregating results, etc.)
#
node.master: false
node.data: false
--------
. Configure the client node to join your Elasticsearch cluster. In `elasticsearch.yml`, set the `cluster.name` to the
name of your cluster.
+
--------
cluster.name: "my_cluster"
--------
. Make sure Kibana is configured to point to your local client node. In `kibana.yml`, the `elasticsearch.url` should be set to
`localhost:9200`.
+
--------
# The Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"
--------

docs/release-notes.asciidoc (new file)
@@ -0,0 +1,11 @@
[[release-notes]]
= Release Notes

[partintro]
--
This section summarizes the changes in each release.

* <<release-notes-5.0.0>>

--
include::release-notes/5.0.0.asciidoc[]

docs/release-notes/5.0.0.asciidoc (new file)
@@ -0,0 +1,126 @@
[[release-notes-5.0.0]]
|
||||
== 5.0.0 Release Notes
|
||||
|
||||
The lists below cover changes between 4.6.2 and 5.0.0 only.
|
||||
|
||||
Also see <<breaking-changes-5.0>>.
|
||||
|
||||
[float]
|
||||
[[enhancement-5.0.0]]
|
||||
=== Enhancements
|
||||
CLI::
|
||||
* New plugin installer: `bin/kibana-plugin` {pull}6402[#6402]
|
||||
* Ability to specify multiple config files as CLI arguments {pull}6825[#6825]
|
||||
* Display plugins versions {pull}7221[#7221]
|
||||
Core::
|
||||
* Bind Kibana server to localhost by default {pull}8013[#8013]
|
||||
* Only proxy whitelisted request headers to Elasticsearch {pull}6896[#6896]
|
||||
* Remove client node filtering in the Elasticsearch version check {pull}6840[#6840]
|
||||
* A new design {pull}6239[#6239]
|
||||
* Friendly error message when Kibana is already running {pull}6735[#6735]
|
||||
* Logging configuration can be reloaded with `SIGHUP` {pull}6720[#6720]
|
||||
* Abortable timeout counter to notifications {pull}6364[#6364]
|
||||
* Upgrade Node.js to version 6.9.0 for improved memory use and a segfault fix {pull}8733[#8733]
|
||||
* Warn on startup if plugins don't support the version of Kibana {pull}8283[#8283]
|
||||
* Add additional verification to ensure supported Elasticsearch version {pull}8229[#8229]
|
||||
* Add unique instance identifier {pull}6378[#6378]
|
||||
* Add state:storeInSessionState option enabling shorter URLs and enhancing Internet Explorer support {pull}8022[#8022]
|
||||
* Improve user experience when query returns no results {pull}7286[#7286]
|
||||
* Display message when "Export All" request fails {pull}6976[#6976]
|
||||
Dashboard::
* Dashboard refresh interval persisted on save {pull}7365[#7365]

Dev Tools::
* Add Dev Tools application, including Console (previously known as Sense) {pull}8171[#8171]

Discover::
* Default columns are configurable {pull}5696[#5696]
* Render field type in tooltip when mousing over name {pull}6243[#6243]
* Add field-exists filter button to doc table {pull}6166[#6166]
* Enable better caching of time-based requests by Elasticsearch {pull}6643[#6643]

Filters::
* Automatic filter pinning option in advanced settings {pull}5730[#5730]

Management::
* Rename Settings to Management {pull}7284[#7284]
* Add boolean field formatter {pull}7935[#7935]
* Add Painless support for scripted fields {pull}7700[#7700]
* Custom notification banner configured via advanced settings {pull}6791[#6791]
* Duration field formatter for numbers {pull}6499[#6499]
* Title case field formatter for strings {pull}6413[#6413]

Plugins::
* Add support for apps to specify their order in the left navigation bar {pull}8767[#8767]
* Separate plugin version and supported version of Kibana {pull}8222[#8222]
* Expose the Kibana app base URL, so '/app/kibana' no longer needs to be hardcoded in URLs {pull}8072[#8072]
* Add requireDefaultIndex route option, enabling index-pattern-independent plugins {pull}7516[#7516]
* Add plugin preInit extension point {pull}7069[#7069]
* Plugins can prefix their config values {pull}6554[#6554]

Server::
* Add basePath to server's defaultRoute {pull}6953[#6953]
* Do not render directory listings for static assets {pull}6764[#6764]
* Automatically redirect HTTP traffic to HTTPS {pull}5959[#5959]
* Write process pid file as soon as it is known {pull}4680[#4680]
* Log most events by default and only errors when in quiet mode {pull}5952[#5952]
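Two of the server changes above (early pid-file writing and quiet-mode logging) are controlled from `kibana.yml`. A minimal sketch, assuming the Kibana 5.x setting names `pid.file` and `logging.quiet` (path is illustrative):

```yaml
# kibana.yml -- illustrative values
pid.file: /var/run/kibana/kibana.pid   # pid is now written as soon as it is known
logging.quiet: true                    # suppress all output except error messages
```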
Sharing::
* Improve user interface to emphasize difference between Original URLs and Snapshot URLs {pull}8172[#8172]

Status::
* Emit new state and message on status change {pull}7513[#7513]

Timelion::
* Add Timelion to Kibana core {pull}7994[#7994]

Visualize::
* Add y-axis logarithmic scale for bar charts {pull}7939[#7939]
* Add option to set legend position {pull}7931[#7931]
* Add legend tooltips {pull}7890[#7890]
* Add x-axis title labels {pull}7845[#7845]

[float]
[[bug-5.0.0]]
=== Bug fixes

Core::
* Fix alias support when fetching types {pull}8338[#8338]
* Report useful error message when sessionStorage is unavailable {pull}8343[#8343]

Dashboard::
* Prevent dashboard title tooltip from being cut off {pull}6464[#6464]

Discover::
* Only display Visualize button when a field is aggregatable {pull}8694[#8694]

Filters::
* Use lt instead of lte for a safer upper bound in range filters {pull}7129[#7129]
* Fix date histogram filtering {pull}7126[#7126]

Management::
* No longer remove selection when refreshing fields {pull}8312[#8312]
* Notify user of failures when deleting saved objects {pull}7345[#7345]
* Add title to visState when the visualization is saved {pull}7185[#7185]
* Back button now works {pull}5982[#5982]
* Show no value instead of interpolating 'undefined' for empty values in URL string formatters {pull}6291[#6291]

Server::
* Console logs display date/time in UTC {pull}8534[#8534]

Status::
* Plugins without an init function no longer show statuses {pull}7953[#7953]

Timepicker::
* Absolute time picker updates when time selection changes {pull}8383[#8383]
* Prevent relative timepicker values from being negative {pull}6607[#6607]

Visualize::
* Remove average from standard deviation metrics {pull}7827[#7827]
* Always set output.params.min_doc_count on histograms {pull}8349[#8349]
* Set minimum aggregation size to 1; Elasticsearch returns an error for 0 {pull}8339[#8339]
* Add milliseconds to Date Histogram interval options {pull}6796[#6796]
* Do not perform an unnecessary round-trip to Elasticsearch when there are no changes in request parameters {pull}7960[#7960]
* Tile map dots no longer shrink to an extremely tiny size at some zoom levels {pull}8000[#8000]
* Table visualizations display correctly when changing paging options {pull}8422[#8422]
* Filter non-aggregatable fields from the visualization editor {pull}8421[#8421]
* Prevent charts from unnecessarily rendering twice {pull}8371[#8371]
* Display custom label for percentile ranks aggregation {pull}7123[#7123]
* Display custom label for percentile and median metric visualizations {pull}7021[#7021]
* Back button now works {pull}5986[#5986]
* Fix extraneous bounds for tile maps {pull}7068[#7068]
* Median visualization properly shows the value rather than `?` {pull}7003[#7003]
* Map zoom is persisted when saving a visualization {pull}6835[#6835]
* Drag aggregations to sort {pull}6566[#6566]
* Table sort is persisted on save {pull}5953[#5953]
* Ignore extended bounds when "Show empty buckets" is unselected {pull}5960[#5960]
* Use custom label for standard deviation aggregation {pull}6407[#6407]

[float]
[[deprecation-5.0.0]]
=== Deprecations & Removals

Visualize::
* Remove "Exclude Pattern Flags" and "Include Pattern Flags" from terms and significant terms aggregations {issue}6714[#6714]
* Deprecate ascending sort for terms aggregations {pull}8167[#8167]
* Deprecate split chart option for tile map visualizations {pull}6001[#6001]
[[releasenotes]]
== Kibana {version} Release Notes

The {version} release of Kibana requires Elasticsearch {esversion} or later.

[float]
[[enhancements]]
== Enhancements

* {k4pull}6682[Pull Request 6682]: Renames Sense to Console, and adds the project to Kibana core.
* {k4issue}6913[Issue 6913]: Adds Console support for Elasticsearch 5.0 APIs.
* {k4pull}6896[Pull Request 6896]: Adds a configurable whitelist of headers for Elasticsearch requests.
* {k4pull}6796[Pull Request 6796]: Adds millisecond durations for intervals.
* {k4issue}1855[Issue 1855]: Adds an advanced setting to configure the starting day of the week.
* {k4issue}6378[Issue 6378]: Adds persistent UUIDs to distinguish multiple instances within a cluster.
* {k4issue}6531[Issue 6531]: Improves the warning for URL lengths that approach browser limits.
* {k4issue}6602[Issue 6602]: Improves dark theme support.
* {k4issue}6791[Issue 6791]: Enables composition of custom user toast notifications in Advanced Settings.
* {k4pull}8014[Pull Request 8014]: Changes the UUID config setting from `uuid` to `server.uuid`, and stores the UUID in the data file instead of Elasticsearch. added[5.0.0-beta1]
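With the rename noted above, the instance identifier is configured under the `server` namespace. A minimal `kibana.yml` sketch (the UUID value is a made-up example):

```yaml
# kibana.yml -- setting renamed from `uuid` in 5.0.0-beta1; value illustrative
server.uuid: "5b2de169-2785-441b-ae8c-186a1936b17d"
```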
[float]
[[bugfixes]]
== Bug Fixes

* {k4pull}6953[Pull Request 6953]: The `defaultRoute` configuration parameter now honors the value of `basePath` and requires a leading slash (`/`).
* {k4issue}6794[Issue 6794]: Fixes extraneous bounds when drawing a bounding box on a tilemap visualization.
* {k4issue}6246[Issue 6246]: Custom labels display on percentile and median metrics.
* {k4issue}6407[Issue 6407]: Custom labels display on standard deviation metrics.
* {k4issue}7003[Issue 7003]: Median visualizations no longer show only `?` as the value.
* {k4issue}7006[Issue 7006]: The URL shortener now honors custom configuration values for `kibana.index`.
* {k4issue}6785[Issue 6785]: Fixes an intermittent issue that prevented installing plugins by name.
* {k4issue}6714[Issue 6714]: Removes unsupported flag functionality.
* {k4issue}6760[Issue 6760]: Removes directory listings for static assets.
* {k4issue}6762[Issue 6762]: Stops the Kibana logo from randomly disappearing in some situations.
* {k4issue}6735[Issue 6735]: Clearer error message when trying to start Kibana while it is already running.
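The `defaultRoute` fix above changes how the two settings combine. A `kibana.yml` sketch under the Kibana 5.x setting names `server.basePath` and `server.defaultRoute` (the proxy prefix is a hypothetical example):

```yaml
# kibana.yml -- defaultRoute must start with "/" and now honors basePath
server.basePath: "/proxy"            # hypothetical reverse-proxy prefix
server.defaultRoute: "/app/kibana"   # served relative to basePath
```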
[float]
[[plugins-apis]]
== Plugins, APIs, and Development Infrastructure

NOTE: The items in this section are not a complete list of the internal changes relating to development in Kibana. Plugin framework and APIs are not formally documented and not guaranteed to be backward compatible from release to release.

* {k4pull}7069[Pull Request 7069]: Adds `preInit` functionality.