Merge branch 'master' of github.com:elastic/kibana into pr/8597

commit b583db71f8
Author: spalger
Date:   2016-10-27 12:31:25 -07:00

301 changed files with 4700 additions and 3837 deletions

View file

@@ -1 +1 @@
-6.4.0
+6.9.0

View file

@@ -232,7 +232,7 @@ npm run test:dev -- --kbnServer.testsBundle.pluginId=some_special_plugin --kbnSe
* Open VMWare and go to Window > Virtual Machine Library. Unzip the virtual machine and drag the .vmx file into your Virtual Machine Library.
* Right-click on the virtual machine you just added to your library and select "Snapshots...", and then click the "Take" button in the modal that opens. You can roll back to this snapshot when the VM expires in 90 days.
* In System Preferences > Sharing, change your computer name to be something simple, e.g. "computer".
-* Run Kibana with `npm start -- --no-ssl --host=computer.local` (subtituting your computer name).
+* Run Kibana with `npm start -- --no-ssl --host=computer.local` (substituting your computer name).
* Now you can run your VM, open the browser, and navigate to `http://computer.local:5601` to test Kibana.
#### Running Browser Automation Tests

View file

@@ -21,4 +21,4 @@ if [ ! -x "$NODE" ]; then
exit 1
fi
-exec "${NODE}" $NODE_OPTIONS "${DIR}/src/cli" ${@}
+exec "${NODE}" $NODE_OPTIONS --no-warnings "${DIR}/src/cli" ${@}

View file

@@ -21,4 +21,4 @@ if [ ! -x "$NODE" ]; then
exit 1
fi
-exec "${NODE}" $NODE_OPTIONS "${DIR}/src/cli_plugin" ${@}
+exec "${NODE}" $NODE_OPTIONS --no-warnings "${DIR}/src/cli_plugin" ${@}

View file

@@ -22,7 +22,7 @@ If Not Exist "%NODE%" (
)
TITLE Kibana Server
-"%NODE%" %NODE_OPTIONS% "%DIR%\src\cli_plugin" %*
+"%NODE%" %NODE_OPTIONS% --no-warnings "%DIR%\src\cli_plugin" %*
:finally

View file

@@ -22,7 +22,7 @@ If Not Exist "%NODE%" (
)
TITLE Kibana Server
-"%NODE%" %NODE_OPTIONS% "%DIR%\src\cli" %*
+"%NODE%" %NODE_OPTIONS% --no-warnings "%DIR%\src\cli" %*
:finally

View file

@@ -1,12 +0,0 @@
[[kibana-apps]]
== Kibana Apps
The Kibana UI serves as a framework that can contain several different applications. You can switch between these
applications by clicking the image:images/app-button.png[App Picker] *App picker* button to display the app bar:
image::images/app-picker.png[]
Click an app icon to switch to that app's functionality.
Applications in the Kibana UI are managed by <<kibana-plugins,_plugins_>>. Plugins can expose app functionality or add new
visualization types.

View file

@@ -1,6 +1,8 @@
[[console-kibana]]
-== Console for Kibana
+= Console
+[partintro]
+--
The Console plugin provides a UI to interact with the REST API of Elasticsearch. Console has two main areas: the *editor*,
where you compose requests to Elasticsearch, and the *response* pane, which displays the responses to the request.
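For instance, a minimal request typed into the editor looks like this (a hypothetical `_search` call shown for illustration; the HTTP method and path go on the first line, followed by an optional JSON body):

[source,js]
--------
GET _search
{
  "query": {
    "match_all": {}
  }
}
--------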
Enter the address of your Elasticsearch server in the text box on the top of screen. The default value of this address
@@ -63,120 +65,22 @@ but you can easily change this by entering a different url in the Server input:
.The Server Input
image::images/introduction_server.png["Server",width=400,align="center"]
[NOTE]
Console is a development tool and is configured by default to run on a laptop. If you install it on a server please
look at <<securing_console>> for instructions on how to make it secure.
[float]
[[console-ui]]
== The Console UI
In this section you will find a more detailed description of the Console UI. The basic aspects of the UI are explained
in the <<console-kibana>> section.
--
[[multi-req]]
=== Multiple Requests Support
include::console/multi-requests.asciidoc[]
The Console editor allows writing multiple requests below each other. As shown in the <<console-kibana>> section, you
can submit a request to Elasticsearch by positioning the cursor and using the <<action_menu,Action Menu>>. Similarly
you can select multiple requests in one go:
include::console/auto-formatting.asciidoc[]
.Selecting Multiple Requests
image::images/multiple_requests.png[Multiple Requests]
include::console/keyboard-shortcuts.asciidoc[]
Console will send the requests one by one to Elasticsearch and show the output on the right pane as Elasticsearch responds.
This is very handy when debugging an issue or trying query combinations in multiple scenarios.
include::console/history.asciidoc[]
Selecting multiple requests also allows you to auto format and copy them as cURL in one go.
include::console/settings.asciidoc[]
[[auto_formatting]]
=== Auto Formatting
Console allows you to auto format messy requests. To do so, position the cursor on the request you would like to format
and select Auto Indent from the action menu:
.Auto Indent a request
image::images/auto_format_before.png["Auto format before",width=500,align="center"]
Console will adjust the JSON body of the request and it will now look like this:
.A formatted request
image::images/auto_format_after.png["Auto format after",width=500,align="center"]
If you select Auto Indent on a request that is already perfectly formatted, Console will collapse the
request body to a single line per document. This is very handy when working with Elasticsearch's bulk APIs:
.One doc per line
image::images/auto_format_bulk.png["Auto format bulk",width=550,align="center"]
[[keyboard_shortcuts]]
=== Keyboard shortcuts
Console comes with a set of nifty keyboard shortcuts that make working with it even more efficient. Here is an overview:
==== General editing
Ctrl/Cmd + I:: Auto indent current request.
Ctrl + Space:: Open Auto complete (even if not typing).
Ctrl/Cmd + Enter:: Submit request.
Ctrl/Cmd + Up/Down:: Jump to the previous/next request start or end.
Ctrl/Cmd + Alt + L:: Collapse/expand current scope.
Ctrl/Cmd + Option + 0:: Collapse all scopes but the current one. Expand by adding a shift.
==== When auto-complete is visible
Down arrow:: Switch focus to auto-complete menu. Use arrows to further select a term.
Enter/Tab:: Select the currently selected or the top most term in auto-complete menu.
Esc:: Close auto-complete menu.
=== History
Console maintains a list of the last 500 requests that were successfully executed by Elasticsearch. The history
is available by clicking the clock icon on the top right side of the window. The icon opens the history panel
where you can see the old requests. You can also select a request here and it will be added to the editor at
the current cursor position.
.History Panel
image::images/history.png["History Panel"]
=== Settings
Console has multiple settings you can set. All of them are available in the Settings panel. To open the panel
click on the cog icon on the top right.
.Settings Panel
image::images/settings.png["Setting Panel"]
[[securing_console]]
=== Securing Console
Console is meant to be used as a local development tool. As such, it will send requests to any host & port combination,
just as a local curl command would. To overcome the CORS limitations enforced by browsers, Console's Node.js backend
serves as a proxy to send requests on behalf of the browser. However, if put on a server and exposed to the internet
this can become a security risk. In those cases, we highly recommend you lock down the proxy by setting the
`console.proxyFilter` Kibana server setting. The setting accepts a list of regular expressions that are evaluated
against each URL the proxy is requested to retrieve. If none of the regular expressions match the proxy will reject
the request.
Here is an example configuration that only allows Console to connect to localhost:
[source,yaml]
--------
console.proxyFilter:
- ^https?://(localhost|127\.0\.0\.1|\[::0\]).*
--------
Restart Kibana for these changes to take effect.
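Because the setting accepts a list, you can allow more than one destination. A sketch that also permits a hypothetical internal Elasticsearch host (the `es-internal.example.com` name and port are assumptions for illustration):

[source,yaml]
--------
console.proxyFilter:
  - ^https?://(localhost|127\.0\.0\.1|\[::0\]).*
  - ^https?://es-internal\.example\.com:9200/.*
--------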
Alternatively, if the users of Kibana have no need to access any of the Console functionality, it can
be disabled completely and not even show up as an available app by setting the `console.enabled` Kibana server setting to `false`:
[source,yaml]
--------
console.enabled: false
--------
include::console/disabling-console.asciidoc[]

View file

@@ -0,0 +1,19 @@
[[auto-formatting]]
== Auto Formatting
Console allows you to auto format messy requests. To do so, position the cursor on the request you would like to format
and select Auto Indent from the action menu:
.Auto Indent a request
image::images/auto_format_before.png["Auto format before",width=500,align="center"]
Console will adjust the JSON body of the request and it will now look like this:
.A formatted request
image::images/auto_format_after.png["Auto format after",width=500,align="center"]
If you select Auto Indent on a request that is already perfectly formatted, Console will collapse the
request body to a single line per document. This is very handy when working with Elasticsearch's bulk APIs:
.One doc per line
image::images/auto_format_bulk.png["Auto format bulk",width=550,align="center"]

View file

@@ -0,0 +1,10 @@
[[disabling-console]]
== Disable Console
If the users of Kibana have no need to access any of the Console functionality, it can
be disabled completely and not even show up as an available app by setting the `console.enabled` Kibana server setting to `false`:
[source,yaml]
--------
console.enabled: false
--------

View file

@@ -0,0 +1,10 @@
[[history]]
== History
Console maintains a list of the last 500 requests that were successfully executed by Elasticsearch. The history
is available by clicking the clock icon on the top right side of the window. The icon opens the history panel
where you can see the old requests. You can also select a request here and it will be added to the editor at
the current cursor position.
.History Panel
image::images/history.png["History Panel"]

View file

@@ -0,0 +1,21 @@
[[keyboard-shortcuts]]
== Keyboard shortcuts
Console comes with a set of nifty keyboard shortcuts that make working with it even more efficient. Here is an overview:
[float]
=== General editing
Ctrl/Cmd + I:: Auto indent current request.
Ctrl + Space:: Open Auto complete (even if not typing).
Ctrl/Cmd + Enter:: Submit request.
Ctrl/Cmd + Up/Down:: Jump to the previous/next request start or end.
Ctrl/Cmd + Alt + L:: Collapse/expand current scope.
Ctrl/Cmd + Option + 0:: Collapse all scopes but the current one. Expand by adding a shift.
[float]
=== When auto-complete is visible
Down arrow:: Switch focus to auto-complete menu. Use arrows to further select a term.
Enter/Tab:: Select the currently selected or the top most term in auto-complete menu.
Esc:: Close auto-complete menu.

View file

@@ -0,0 +1,14 @@
[[multi-requests]]
== Multiple Requests Support
The Console editor allows writing multiple requests below each other. As shown in the <<console-kibana>> section, you
can submit a request to Elasticsearch by positioning the cursor and using the <<action_menu,Action Menu>>. Similarly
you can select multiple requests in one go:
.Selecting Multiple Requests
image::images/multiple_requests.png[Multiple Requests]
Console will send the requests one by one to Elasticsearch and show the output on the right pane as Elasticsearch responds.
This is very handy when debugging an issue or trying query combinations in multiple scenarios.
Selecting multiple requests also allows you to auto format and copy them as cURL in one go.

View file

@@ -0,0 +1,8 @@
[[console-settings]]
== Settings
Console has multiple settings you can set. All of them are available in the Settings panel. To open the panel
click on the cog icon on the top right.
.Settings Panel
image::images/settings.png["Setting Panel"]

View file

@@ -1,152 +1,153 @@
[[dashboard]]
-== Dashboard
+= Dashboard
A Kibana _dashboard_ displays a set of saved visualizations in groups that you can arrange freely. You can save a
dashboard to share or reload at a later time.
[partintro]
--
A Kibana _dashboard_ displays a collection of saved visualizations. You can
arrange and resize the visualizations as needed and save dashboards so
they can be reloaded and shared.
.Sample dashboard
image:images/tutorial-dashboard.png[Example dashboard]
--
[float]
[[dashboard-getting-started]]
=== Getting Started
== Building a Dashboard
You need at least one saved <<visualize, visualization>> to use a dashboard.
To build a dashboard:
[float]
[[creating-a-new-dashboard]]
==== Building a New Dashboard
. Click *Dashboard* in the side navigation. If you haven't previously viewed a
dashboard, Kibana displays an empty dashboard. Otherwise, click *New* to start
building your dashboard.
+
image:images/NewDashboard.png[New Dashboard]
The first time you click the *Dashboard* tab, Kibana displays an empty dashboard.
image:images/NewDashboard.png[New Dashboard screen]
Build your dashboard by adding visualizations. By default, Kibana dashboards use a light color theme. To use a dark color
theme instead, click the *Options* button and check the *Use dark theme* box.
NOTE: You can change the default theme in the *Advanced* section of the *Settings* tab.
[float]
[[dash-autorefresh]]
include::autorefresh.asciidoc[]
[float]
[[adding-visualizations-to-a-dashboard]]
==== Adding Visualizations to a Dashboard
. To add a visualization to the dashboard, click *Add* and select the
visualization. If you have a large number of visualizations, you can enter a
*Filter* string to filter the list.
+
Kibana displays the selected visualization in a container on the dashboard.
If you see a message that the container is too small, you can
<<resizing-containers,resize the visualization>>.
+
NOTE: By default, Kibana dashboards use a light color theme. To use a dark color theme,
click *Options* and select *Use dark theme*. To change the default theme, go
to *Management/Kibana/Advanced Settings* and set `dashboard:defaultDarkTheme`
to `true`.
To add a visualization to the dashboard, click the *Add* button in the toolbar panel. Select a saved visualization
from the list. You can filter the list of visualizations by typing a filter string into the *Visualization Filter*
field.
The visualization you select appears in a _container_ on your dashboard.
NOTE: If you see a message about the container's height or width being too small, <<resizing-containers,resize the
container>>.
[float]
[[saving-dashboards]]
==== Saving Dashboards
To save the dashboard, click the *Save Dashboard* button in the toolbar panel, enter a name for the dashboard in the
*Save As* field, and click the *Save* button. By default, dashboards store the time period specified in the time filter
when you save a dashboard. To disable this behavior, clear the *Store time with dashboard* box before clicking the
*Save* button.
[float]
[[loading-a-saved-dashboard]]
==== Loading a Saved Dashboard
Click the *Load Saved Dashboard* button to display a list of existing dashboards. The saved dashboard selector includes
a text field to filter by dashboard name and a link to the Object Editor for managing your saved dashboards. You can
also access the Object Editor by clicking *Settings > Objects*.
[float]
[[sharing-dashboards]]
==== Sharing Dashboards
You can share dashboards with other users. You can share a direct link to the Kibana dashboard or embed the dashboard
in your Web page.
NOTE: A user must have Kibana access in order to view embedded dashboards.
To share a dashboard, click the *Share* button image:images/share-dashboard.png[] to display the _Sharing_ panel.
Click the *Copy to Clipboard* button image:images/share-link.png[] to copy the native URL or embed HTML to the clipboard.
Click the *Generate short URL* button image:images/share-short-link.png[] to create a shortened URL for sharing or
embedding.
[float]
[[embedding-dashboards]]
==== Embedding Dashboards
To embed a dashboard, copy the embed code from the _Share_ display into your external web application.
. When you're done adding and arranging visualizations, click *Save* to save the
dashboard:
.. Enter a name for the dashboard.
.. To store the time period specified in the time filter with the dashboard, select
*Store time with dashboard*.
.. Click the *Save* button to store it as a Kibana saved object.
[float]
[[customizing-your-dashboard]]
=== Customizing Dashboard Elements
=== Arranging Dashboard Elements
The visualizations in your dashboard are stored in resizable _containers_ that you can arrange on the dashboard. This
section discusses customizing these containers.
The visualizations in your dashboard are stored in resizable, moveable containers.
[float]
[[moving-containers]]
==== Moving Containers
==== Moving Visualizations
Click and hold a container's header to move the container around the dashboard. Other containers will shift as needed
to make room for the moving container. Release the mouse button to confirm the container's new location.
To reposition a visualization:
. Hover over it to display the container controls.
. Click and hold the *Move* button in the upper right corner of the container.
. Drag the container to its new position.
. Release the *Move* button.
[float]
[[resizing-containers]]
==== Resizing Containers
==== Resizing Visualizations
Move the cursor to the bottom right corner of the container until the cursor changes to point at the corner. After the
cursor changes, click and drag the corner of the container to change the container's size. Release the mouse button to
confirm the new container size.
To resize a visualization:
. Hover over it to display the container controls.
. Click and hold the *Resize* button in the bottom right corner of the container.
. Drag to change the dimensions of the container.
. Release the *Resize* button.
[float]
[[removing-containers]]
==== Removing Containers
==== Removing Visualizations
Click the *x* icon at the top right corner of a container to remove that container from the dashboard. Removing a
container from a dashboard does not delete the saved visualization in that container.
To remove a visualization from the dashboard:
. Hover over it to display the container controls.
. Click the *Delete* button in the upper right corner of the container.
+
NOTE: Removing a visualization from a dashboard does _not_ delete the
saved visualization.
[float]
[[viewing-detailed-information]]
==== Viewing Detailed Information
=== Viewing Visualization Data
To display the raw data behind the visualization, click the bar at the bottom of the container. Tabs with detailed
information about the raw data replace the visualization, as in this example:
To display the raw data behind a visualization:
.Table
A representation of the underlying data, presented as a paginated data grid. You can sort the items
in the table by clicking on the table headers at the top of each column.
. Hover over it to display the container controls.
. Click the *Expand* button in the lower left corner of the container.
This displays a table that contains the underlying data. You can also view
the raw Elasticsearch request and response in JSON and the request statistics.
The request statistics show the query duration, request duration, total number
of matching records, and the index (or index pattern) that was searched.
+
image:images/NYCTA-Table.jpg[]
.Request
The raw request used to query the server, presented in JSON format.
image:images/NYCTA-Request.jpg[]
To export the data behind the visualization as a comma-separated-values
(CSV) file, click the *Raw* or *Formatted* link at the bottom of the data
Table. *Raw* exports the data as it is stored in Elasticsearch. *Formatted*
exports the results of any applicable Kibana <<managing-fields,field
formatters>>.
.Response
The raw response from the server, presented in JSON format.
image:images/NYCTA-Response.jpg[]
.Statistics
A summary of the statistics related to the request and the response, presented as a data grid. The data
grid includes the query duration, the request duration, the total number of records found on the server, and the
index pattern used to make the query.
image:images/NYCTA-Statistics.jpg[]
To export the raw data behind the visualization as a comma-separated-values (CSV) file, click on either the
*Raw* or *Formatted* links at the bottom of any of the detailed information tabs. A raw export contains the data as it
is stored in Elasticsearch. A formatted export contains the results of any applicable Kibana <<managing-fields,field formatters>>.
To return to the visualization, click the *Collapse* button in the lower left
corner of the container.
[float]
[[changing-the-visualization]]
=== Changing the Visualization
=== Modifying a Visualization
To open a visualization in the Visualization Editor:
. Hover over it to display the container controls.
. Click the *Edit* button in the upper right corner of the container.
[[loading-a-saved-dashboard]]
== Loading a Dashboard
To open a saved dashboard:
. Click *Dashboard* in the side navigation.
. Click *Open* and select a dashboard. If you have a large number of
dashboards, you can enter a *Filter* string to filter the list.
+
TIP: To import, export, and delete dashboards, click the *Manage Dashboards* link
to open *Management/Kibana/Saved Objects/Dashboards*.
[[sharing-dashboards]]
== Sharing a Dashboard
You can share a direct link to a Kibana dashboard with another user,
or embed the dashboard in a web page. Users must have Kibana access
to view embedded dashboards.
[[embedding-dashboards]]
To share a dashboard:
. Click *Dashboard* in the side navigation.
. Open the dashboard you want to share.
. Click *Share*.
. Copy the link you want to share or the iframe you want to embed. You can
share the live dashboard or a static snapshot of the current point in time.
+
TIP: When sharing a link to a dashboard snapshot, use the *Short URL*. Snapshot
URLs are long and can be problematic for Internet Explorer users and other
tools.
Click the _Edit_ button image:images/EditVis.png[Pencil button] at the top right of a container to open the
visualization in the <<visualize,Visualize>> page.
[float]
[[dashboard-filters]]
include::filter-pinning.asciidoc[]

View file

@@ -1,5 +1,8 @@
[[discover]]
-== Discover
+= Discover
+[partintro]
+--
You can interactively explore your data from the Discover page. You have access to every document in every index that
matches the selected index pattern. You can submit search queries, filter the search results, and view document data.
You can also see the number of documents that match the search query and get field value statistics. If a time field is
@@ -7,226 +10,18 @@ configured for the selected index pattern, the distribution of documents over ti
top of the page.
image::images/Discover-Start-Annotated.jpg[Discover Page]
--
[float]
[[set-time-filter]]
=== Setting the Time Filter
The Time Filter restricts the search results to a specific time period. You can set a time filter if your index
contains time-based events and a time-field is configured for the selected index pattern.
include::discover/set-time-filter.asciidoc[]
By default the time filter is set to the last 15 minutes. You can use the Time Picker to change the time filter
or select a specific time interval or time range in the histogram at the top of the page.
To set a time filter with the Time Picker:
. Click the Time Filter displayed in the upper right corner of the menu bar to open the Time Picker.
. To set a quick filter, simply click one of the shortcut links.
. To specify a relative Time Filter, click *Relative* and enter the relative start time. You can specify
the relative start time as any number of seconds, minutes, hours, days, months, or years ago.
. To specify an absolute Time Filter, click *Absolute* and enter the start date in the *From* field and the end date in
the *To* field.
. Click the caret at the bottom of the Time Picker to hide it.
To set a Time Filter from the histogram, do one of the following:
* Click the bar that represents the time interval you want to zoom in on.
* Click and drag to view a specific timespan. You must start the selection with the cursor over the background of the
chart--the cursor changes to a plus sign when you hover over a valid start point.
You can use the browser Back button to undo your changes.
The histogram lists the time range you're currently exploring, as well as the interval the chart is currently using.
To change the intervals, click the link and select an interval from the drop-down. The default behavior automatically
sets an interval based on the time range.
[float]
[[search]]
=== Searching Your Data
You can search the indices that match the current index pattern by submitting a search from the Discover page.
You can enter simple query strings, use the
Lucene https://lucene.apache.org/core/2_9_4/queryparsersyntax.html[query syntax], or use the full JSON-based
{ref}/query-dsl.html[Elasticsearch Query DSL].
When you submit a search, the histogram, Documents table, and Fields list are updated to reflect
the search results. The total number of hits (matching documents) is shown in the upper right corner of the
histogram. The Documents table shows the first five hundred hits. By default, the hits are listed in reverse
chronological order, with the newest documents shown first. You can reverse the sort order by clicking on the Time
column header. You can also sort the table using the values in any indexed field. For more information, see
<<sorting,Sorting the Documents Table>>.
To search your data:
. Enter a query string in the Search field:
+
* To perform a free text search, simply enter a text string. For example, if you're searching web server logs, you
could enter `safari` to search all fields for the term `safari`.
+
* To search for a value in a specific field, you prefix the value with the name of the field. For example, you could
enter `status:200` to limit the results to entries that contain the value `200` in the `status` field.
+
* To search for a range of values, you can use the bracketed range syntax, `[START_VALUE TO END_VALUE]`. For example,
to find entries that have 4xx status codes, you could enter `status:[400 TO 499]`.
+
* To specify more complex search criteria, you can use the Boolean operators `AND`, `OR`, and `NOT`. For example,
to find entries that have 4xx status codes and have an extension of `php` or `html`, you could enter `status:[400 TO
499] AND (extension:php OR extension:html)`.
+
NOTE: These examples use the Lucene query syntax. You can also submit queries using the Elasticsearch Query DSL. For
examples, see {ref}/query-dsl-query-string-query.html#query-string-syntax[query string syntax] in the Elasticsearch
Reference.
+
. Press *Enter* or click the *Search* button to submit your search query.
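The same criteria can also be submitted as a full Query DSL request. A sketch of the 4xx example above rewritten as a `query_string` query, as it might be sent from Console or any HTTP client:

[source,js]
--------
GET _search
{
  "query": {
    "query_string": {
      "query": "status:[400 TO 499] AND (extension:php OR extension:html)"
    }
  }
}
--------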
[float]
[[new-search]]
==== Starting a New Search
To clear the current search and start a new search, click the *New* button in the Discover toolbar.
[float]
[[save-search]]
==== Saving a Search
You can reload saved searches on the Discover page and use them as the basis of <<visualize, visualizations>>.
Saving a search saves both the search query string and the currently selected index pattern.
To save the current search:
. Click the *Save* button in the Discover toolbar.
. Enter a name for the search and click *Save*.
[float]
[[load-search]]
==== Opening a Saved Search
To load a saved search:
. Click the *Open* button in the Discover toolbar.
. Select the search you want to open.
If the saved search is associated with a different index pattern than is currently selected, opening the saved search
also changes the selected index pattern.
[float]
[[select-pattern]]
==== Changing Which Indices You're Searching
When you submit a search request, the indices that match the currently-selected index pattern are searched. The current
index pattern is shown below the search field. To change which indices you are searching, click the name of the current
index pattern to display a list of the configured index patterns and select a different index pattern.
For more information about index patterns, see <<settings-create-pattern, Creating an Index Pattern>>.
include::discover/search.asciidoc[]
[float]
[[auto-refresh]]
include::discover/autorefresh.asciidoc[]
include::autorefresh.asciidoc[]
include::discover/field-filter.asciidoc[]
[float]
[[field-filter]]
=== Filtering by Field
You can filter the search results to display only those documents that contain a particular value in a field. You can
also create negative filters that exclude documents that contain the specified field value.
include::discover/document-data.asciidoc[]
You can add filters from the Fields list or from the Documents table. When you add a filter, it is displayed in the
filter bar below the search query. From the filter bar, you can enable or disable a filter, invert the filter (change
it from a positive filter to a negative filter and vice-versa), toggle the filter on or off, or remove it entirely.
Click the small left-facing arrow to the right of the index pattern selection drop-down to collapse the Fields list.
To add a filter from the Fields list:
. Click the name of the field you want to filter on. This displays the top five values for that field. To the right of
each value, there are two magnifying glass buttons--one for adding a regular (positive) filter, and
one for adding a negative filter.
. To add a positive filter, click the *Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button].
This filters out documents that don't contain that value in the field.
. To add a negative filter, click the *Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button].
This excludes documents that contain that value in the field.
To add a filter from the Documents table:
. Expand a document in the Documents table by clicking the *Expand* button image:images/ExpandButton.jpg[Expand Button]
to the left of the document's entry in the first column (the first column is usually Time). To the right of each field
name, there are two magnifying glass buttons--one for adding a regular (positive) filter, and one for adding a negative
filter.
. To add a positive filter based on the document's value in a field, click the
*Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button]. This filters out documents that don't
contain the specified value in that field.
. To add a negative filter based on the document's value in a field, click the
*Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button]. This excludes documents that contain
the specified value in that field.
[float]
[[discover-filters]]
include::filter-pinning.asciidoc[]
[float]
[[document-data]]
=== Viewing Document Data
When you submit a search query, the 500 most recent documents that match the query are listed in the Documents table.
You can configure the number of documents shown in the table by setting the `discover:sampleSize` property in
<<advanced-options,Advanced Settings>>. By default, the table shows the localized version of the time field specified
in the selected index pattern and the document `_source`. You can <<adding-columns, add fields to the Documents table>>
from the Fields list. You can <<sorting, sort the listed documents>> by any indexed field that's included in the table.
To view a document's field data, click the *Expand* button image:images/ExpandButton.jpg[Expand Button] to the left of
the document's entry in the first column (the first column is usually Time). Kibana reads the document data from
Elasticsearch and displays the document fields in a table. The table contains a row for each field that contains the
name of the field, add filter buttons, and the field value.
image::images/Expanded-Document.png[]
. To view the original JSON document (pretty-printed), click the *JSON* tab.
. To view the document data as a separate page, click the link. You can bookmark and share this link to provide direct
access to a particular document.
. To collapse the document details, click the *Collapse* button image:images/CollapseButton.jpg[Collapse Button].
. To toggle a particular field's column in the Documents table, click the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.
[float]
[[sorting]]
==== Sorting the Document List
You can sort the documents in the Documents table by the values in any indexed field. Documents in index patterns that
are configured with time fields are sorted in reverse chronological order by default.
To change the sort order, click the name of the field you want to sort by. The fields you can use for sorting have a
sort button to the right of the field name. Clicking the field name a second time reverses the sort order.
[float]
[[adding-columns]]
==== Adding Field Columns to the Documents Table
By default, the Documents table shows the localized version of the time field specified in the selected index pattern
and the document `_source`. You can add fields to the table from the Fields list or from a document's expanded view.
To add field columns to the Documents table:
. Mouse over a field in the Fields list and click its *add* button image:images/AddFieldButton.jpg[Add Field Button].
. Repeat until you've added all the fields you want to display in the Documents table.
. Alternately, add a field column directly from a document's expanded view by clicking the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.
The added field columns replace the `_source` column in the Documents table. The added fields are also
listed in the *Selected Fields* section at the top of the field list.
To rearrange the field columns in the table, mouse over the header of the column you want to move and click the *Move*
button.
image:images/Discover-MoveColumn.jpg[Move Column]
[float]
[[removing-columns]]
==== Removing Field Columns from the Documents Table
To remove field columns from the Documents table:
. Mouse over the field you want to remove in the *Selected Fields* section of the Fields list and click its *remove*
button image:images/RemoveFieldButton.jpg[Remove Field Button].
. Repeat until you've removed all the fields you want to drop from the Documents table.
[float]
[[viewing-field-stats]]
=== Viewing Field Data Statistics
From the field list, you can see how many documents in the Documents table contain a particular field, what the top 5
values are, and what percentage of documents contain each value.
To view field data statistics, click the name of a field in the Fields list. The field can be anywhere in the Fields
list.
image:images/Discover-FieldStats.jpg[Field Statistics]
TIP: To create a visualization based on the field, click the *Visualize* button below the field statistics.
include::discover/viewing-field-stats.asciidoc[]
@@ -0,0 +1,61 @@
[[document-data]]
== Viewing Document Data
When you submit a search query, the 500 most recent documents that match the query are listed in the Documents table.
You can configure the number of documents shown in the table by setting the `discover:sampleSize` property in
<<advanced-options,Advanced Settings>>. By default, the table shows the localized version of the time field specified
in the selected index pattern and the document `_source`. You can <<adding-columns, add fields to the Documents table>>
from the Fields list. You can <<sorting, sort the listed documents>> by any indexed field that's included in the table.
To view a document's field data, click the *Expand* button image:images/ExpandButton.jpg[Expand Button] to the left of
the document's entry in the first column (the first column is usually Time). Kibana reads the document data from
Elasticsearch and displays the document fields in a table. The table contains a row for each field, showing the
field name, filter buttons, and the field value.
image::images/Expanded-Document.png[]
. To view the original JSON document (pretty-printed), click the *JSON* tab.
. To view the document data as a separate page, click the link. You can bookmark and share this link to provide direct
access to a particular document.
. To collapse the document details, click the *Collapse* button image:images/CollapseButton.jpg[Collapse Button].
. To toggle a particular field's column in the Documents table, click the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.
[float]
[[sorting]]
=== Sorting the Document List
You can sort the documents in the Documents table by the values in any indexed field. Documents in index patterns that
are configured with time fields are sorted in reverse chronological order by default.
To change the sort order, click the name of the field you want to sort by. The fields you can use for sorting have a
sort button to the right of the field name. Clicking the field name a second time reverses the sort order.
[float]
[[adding-columns]]
=== Adding Field Columns to the Documents Table
By default, the Documents table shows the localized version of the time field specified in the selected index pattern
and the document `_source`. You can add fields to the table from the Fields list or from a document's expanded view.
To add field columns to the Documents table:
. Mouse over a field in the Fields list and click its *add* button image:images/AddFieldButton.jpg[Add Field Button].
. Repeat until you've added all the fields you want to display in the Documents table.
. Alternately, add a field column directly from a document's expanded view by clicking the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.
The added field columns replace the `_source` column in the Documents table. The added fields are also
listed in the *Selected Fields* section at the top of the field list.
To rearrange the field columns in the table, mouse over the header of the column you want to move and click the *Move*
button.
image:images/Discover-MoveColumn.jpg[Move Column]
[float]
[[removing-columns]]
=== Removing Field Columns from the Documents Table
To remove field columns from the Documents table:
. Mouse over the field you want to remove in the *Selected Fields* section of the Fields list and click its *remove*
button image:images/RemoveFieldButton.jpg[Remove Field Button].
. Repeat until you've removed all the fields you want to drop from the Documents table.
@@ -0,0 +1,36 @@
[[field-filter]]
== Filtering by Field
You can filter the search results to display only those documents that contain a particular value in a field. You can
also create negative filters that exclude documents that contain the specified field value.
You can add filters from the Fields list or from the Documents table. When you add a filter, it is displayed in the
filter bar below the search query. From the filter bar, you can enable or disable a filter, invert the filter (change
it from a positive filter to a negative filter and vice-versa), toggle the filter on or off, or remove it entirely.
Click the small left-facing arrow to the right of the index pattern selection drop-down to collapse the Fields list.
To add a filter from the Fields list:
. Click the name of the field you want to filter on. This displays the top five values for that field. To the right of
each value, there are two magnifying glass buttons--one for adding a regular (positive) filter, and
one for adding a negative filter.
. To add a positive filter, click the *Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button].
This filters out documents that don't contain that value in the field.
. To add a negative filter, click the *Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button].
This excludes documents that contain that value in the field.
To add a filter from the Documents table:
. Expand a document in the Documents table by clicking the *Expand* button image:images/ExpandButton.jpg[Expand Button]
to the left of the document's entry in the first column (the first column is usually Time). To the right of each field
name, there are two magnifying glass buttons--one for adding a regular (positive) filter, and one for adding a negative
filter.
. To add a positive filter based on the document's value in a field, click the
*Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button]. This filters out documents that don't
contain the specified value in that field.
. To add a negative filter based on the document's value in a field, click the
*Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button]. This excludes documents that contain
the specified value in that field.
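Behind the scenes, each filter you add from the filter bar maps onto an Elasticsearch filter clause. As a rough sketch (the `extension` and `response` field names here are illustrative, not taken from this page), a positive and a negative filter together behave like a `bool` query:

```json
{
  "bool": {
    "must": [
      { "term": { "extension": "php" } }
    ],
    "must_not": [
      { "term": { "response": 404 } }
    ]
  }
}
```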
[float]
[[discover-filters]]
include::filter-pinning.asciidoc[]
@@ -0,0 +1,72 @@
[[search]]
== Searching Your Data
You can search the indices that match the current index pattern by submitting a search from the Discover page.
You can enter simple query strings, use the
Lucene https://lucene.apache.org/core/2_9_4/queryparsersyntax.html[query syntax], or use the full JSON-based
{es-ref}query-dsl.html[Elasticsearch Query DSL].
When you submit a search, the histogram, Documents table, and Fields list are updated to reflect
the search results. The total number of hits (matching documents) is shown in the upper right corner of the
histogram. The Documents table shows the first five hundred hits. By default, the hits are listed in reverse
chronological order, with the newest documents shown first. You can reverse the sort order by clicking on the Time
column header. You can also sort the table using the values in any indexed field. For more information, see
<<sorting,Sorting the Documents Table>>.
To search your data:
. Enter a query string in the Search field:
+
* To perform a free text search, simply enter a text string. For example, if you're searching web server logs, you
could enter `safari` to search all fields for the term `safari`.
+
* To search for a value in a specific field, you prefix the value with the name of the field. For example, you could
enter `status:200` to limit the results to entries that contain the value `200` in the `status` field.
+
* To search for a range of values, you can use the bracketed range syntax, `[START_VALUE TO END_VALUE]`. For example,
to find entries that have 4xx status codes, you could enter `status:[400 TO 499]`.
+
* To specify more complex search criteria, you can use the Boolean operators `AND`, `OR`, and `NOT`. For example,
to find entries that have 4xx status codes and have an extension of `php` or `html`, you could enter `status:[400 TO
499] AND (extension:php OR extension:html)`.
+
NOTE: These examples use the Lucene query syntax. You can also submit queries using the Elasticsearch Query DSL. For
examples, see {es-ref}query-dsl-query-string-query.html#query-string-syntax[query string syntax] in the Elasticsearch
Reference.
+
. Press *Enter* or click the *Search* button to submit your search query.
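As the NOTE above mentions, queries can also be submitted as Elasticsearch Query DSL. A minimal sketch of the last Lucene example wrapped in a `query_string` query (the `logstash-*` index name and the `localhost:9200` endpoint are illustrative assumptions):

```shell
# Sketch: the Lucene query above expressed as a Query DSL request body.
QUERY_BODY='{
  "query": {
    "query_string": {
      "query": "status:[400 TO 499] AND (extension:php OR extension:html)"
    }
  }
}'
echo "$QUERY_BODY"
# To run it against a live cluster (illustrative index name):
#   curl -XPOST "localhost:9200/logstash-*/_search" -d "$QUERY_BODY"
```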
[float]
[[new-search]]
=== Starting a New Search
To clear the current search and start a new search, click the *New* button in the Discover toolbar.
[float]
[[save-search]]
=== Saving a Search
You can reload saved searches on the Discover page and use them as the basis of <<visualize, visualizations>>.
Saving a search saves both the search query string and the currently selected index pattern.
To save the current search:
. Click the *Save* button in the Discover toolbar.
. Enter a name for the search and click *Save*.
[float]
[[load-search]]
=== Opening a Saved Search
To load a saved search:
. Click the *Open* button in the Discover toolbar.
. Select the search you want to open.
If the saved search is associated with a different index pattern than is currently selected, opening the saved search
also changes the selected index pattern.
[float]
[[select-pattern]]
=== Changing Which Indices You're Searching
When you submit a search request, the indices that match the currently-selected index pattern are searched. The current
index pattern is shown below the search field. To change which indices you are searching, click the name of the current
index pattern to display a list of the configured index patterns and select a different index pattern.
For more information about index patterns, see <<settings-create-pattern, Creating an Index Pattern>>.
@@ -0,0 +1,29 @@
[[set-time-filter]]
== Setting the Time Filter
The Time Filter restricts the search results to a specific time period. You can set a time filter if your index
contains time-based events and a time-field is configured for the selected index pattern.
By default the time filter is set to the last 15 minutes. You can use the Time Picker to change the time filter
or select a specific time interval or time range in the histogram at the top of the page.
To set a time filter with the Time Picker:
. Click the Time Filter displayed in the upper right corner of the menu bar to open the Time Picker.
. To set a quick filter, simply click one of the shortcut links.
. To specify a relative Time Filter, click *Relative* and enter the relative start time. You can specify
the relative start time as any number of seconds, minutes, hours, days, months, or years ago.
. To specify an absolute Time Filter, click *Absolute* and enter the start date in the *From* field and the end date in
the *To* field.
. Click the caret at the bottom of the Time Picker to hide it.
To set a Time Filter from the histogram, do one of the following:
* Click the bar that represents the time interval you want to zoom in on.
* Click and drag to view a specific timespan. You must start the selection with the cursor over the background of the
chart--the cursor changes to a plus sign when you hover over a valid start point.
You can use the browser Back button to undo your changes.
The histogram lists the time range you're currently exploring, as well as the intervals that range is currently using.
To change the intervals, click the link and select an interval from the drop-down. The default behavior automatically
sets an interval based on the time range.
@@ -0,0 +1,12 @@
[[viewing-field-stats]]
== Viewing Field Data Statistics
From the field list, you can see how many documents in the Documents table contain a particular field, what the top 5
values are, and what percentage of documents contain each value.
To view field data statistics, click the name of a field in the Fields list. The field can be anywhere in the Fields
list.
image:images/Discover-FieldStats.jpg[Field Statistics]
TIP: To create a visualization based on the field, click the *Visualize* button below the field statistics.
@@ -1,411 +1,36 @@
[[getting-started]]
== Getting Started with Kibana
= Getting Started
Now that you have Kibana <<setup,installed>>, you can step through this tutorial to get fast hands-on experience with
key Kibana functionality. By the end of this tutorial, you will have:
[partintro]
--
Ready to get some hands-on experience with Kibana?
This tutorial shows you how to:
* Loaded a sample data set into your Elasticsearch installation
* Defined at least one index pattern
* Used the <<discover, Discover>> functionality to explore your data
* Set up some <<visualize,_visualizations_>> to graphically represent your data
* Assembled visualizations into a <<dashboard,Dashboard>>
* Load a sample data set into Elasticsearch
* Define an index pattern
* Explore the sample data with <<discover, Discover>>
* Set up <<visualize,_visualizations_>> of the sample data
* Assemble visualizations into a <<dashboard,Dashboard>>
The material in this section assumes you have a working Kibana install connected to a working Elasticsearch install.
Before you begin, make sure you've <<install, installed Kibana>> and established
a <<connect-to-elasticsearch, connection to Elasticsearch>>.
Video tutorials are also available:
You might also be interested in these video tutorials:
* https://www.elastic.co/blog/kibana-4-video-tutorials-part-1[High-level Kibana introduction, pie charts]
* https://www.elastic.co/blog/kibana-4-video-tutorials-part-2[Data discovery, bar charts, and line charts]
* https://www.elastic.co/blog/kibana-4-video-tutorials-part-3[Tile maps]
* https://www.elastic.co/blog/kibana-4-video-tutorials-part-4[Embedding Kibana visualizations]
--
[float]
[[tutorial-load-dataset]]
=== Before You Start: Loading Sample Data
include::getting-started/tutorial-load-dataset.asciidoc[]
The tutorials in this section rely on the following data sets:
include::getting-started/tutorial-define-index.asciidoc[]
* The complete works of William Shakespeare, suitably parsed into fields. Download this data set by clicking here:
https://www.elastic.co/guide/en/kibana/3.0/snippets/shakespeare.json[shakespeare.json].
* A set of fictitious accounts with randomly generated data. Download this data set by clicking here:
https://github.com/bly2k/files/blob/master/accounts.zip?raw=true[accounts.zip]
* A set of randomly generated log files. Download this data set by clicking here:
https://download.elastic.co/demos/kibana/gettingstarted/logs.jsonl.gz[logs.jsonl.gz]
include::getting-started/tutorial-discovering.asciidoc[]
Two of the data sets are compressed. Use the following commands to extract the files:
include::getting-started/tutorial-visualizing.asciidoc[]
[source,shell]
unzip accounts.zip
gunzip logs.jsonl.gz
include::getting-started/tutorial-dashboard.asciidoc[]
The Shakespeare data set is organized in the following schema:
[source,json]
{
"line_id": INT,
"play_name": "String",
"speech_number": INT,
"line_number": "String",
"speaker": "String",
"text_entry": "String"
}
The accounts data set is organized in the following schema:
[source,json]
{
"account_number": INT,
"balance": INT,
"firstname": "String",
"lastname": "String",
"age": INT,
"gender": "M or F",
"address": "String",
"employer": "String",
"email": "String",
"city": "String",
"state": "String"
}
The schema for the logs data set has dozens of different fields, but the notable ones used in this tutorial are:
[source,json]
{
"memory": INT,
"geo.coordinates": "geo_point",
"@timestamp": "date"
}
Before we load the Shakespeare and logs data sets, we need to set up {ref}mapping.html[_mappings_] for the fields.
Mapping divides the documents in the index into logical groups and specifies a field's characteristics, such as the
field's searchability or whether or not it's _tokenized_, or broken up into separate words.
Use the following command to set up a mapping for the Shakespeare data set:
[source,shell]
curl -XPUT http://localhost:9200/shakespeare -d '
{
"mappings" : {
"_default_" : {
"properties" : {
"speaker" : {"type": "string", "index" : "not_analyzed" },
"play_name" : {"type": "string", "index" : "not_analyzed" },
"line_id" : { "type" : "integer" },
"speech_number" : { "type" : "integer" }
}
}
}
}
';
This mapping specifies the following qualities for the data set:
* The _speaker_ field is a string that isn't analyzed. The string in this field is treated as a single unit, even if
there are multiple words in the field.
* The same applies to the _play_name_ field.
* The _line_id_ and _speech_number_ fields are integers.
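The practical effect of `not_analyzed` shows up later in the tutorial, when play names appear in a chart as whole phrases rather than individual words. If you did want both behaviors for the same field, one common Elasticsearch pattern (a sketch for illustration; not one of this tutorial's commands) is a multi-field mapping:

```json
{
  "play_name": {
    "type": "string",
    "index": "not_analyzed",
    "fields": {
      "analyzed": { "type": "string" }
    }
  }
}
```

With a mapping like this, `play_name` stays a single unit for aggregations while `play_name.analyzed` remains searchable word by word.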
The logs data set requires a mapping to label the latitude/longitude pairs in the logs as geographic locations by
applying the `geo_point` type to those fields.
Use the following commands to establish `geo_point` mapping for the logs:
[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.18 -d '
{
"mappings": {
"log": {
"properties": {
"geo": {
"properties": {
"coordinates": {
"type": "geo_point"
}
}
}
}
}
}
}
';
[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.19 -d '
{
"mappings": {
"log": {
"properties": {
"geo": {
"properties": {
"coordinates": {
"type": "geo_point"
}
}
}
}
}
}
}
';
[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.20 -d '
{
"mappings": {
"log": {
"properties": {
"geo": {
"properties": {
"coordinates": {
"type": "geo_point"
}
}
}
}
}
}
}
';
The accounts data set doesn't require any mappings, so at this point we're ready to use the Elasticsearch
{ref}/docs-bulk.html[`bulk`] API to load the data sets with the following commands:
[source,shell]
curl -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
curl -XPOST 'localhost:9200/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
curl -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl
These commands may take some time to execute, depending on the computing resources available.
Verify successful loading with the following command:
[source,shell]
curl 'localhost:9200/_cat/indices?v'
You should see output similar to the following:
[source,shell]
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open bank 5 1 1000 0 418.2kb 418.2kb
yellow open shakespeare 5 1 111396 0 17.6mb 17.6mb
yellow open logstash-2015.05.18 5 1 4631 0 15.6mb 15.6mb
yellow open logstash-2015.05.19 5 1 4624 0 15.7mb 15.7mb
yellow open logstash-2015.05.20 5 1 4750 0 16.4mb 16.4mb
[[tutorial-define-index]]
=== Defining Your Index Patterns
Each set of data loaded to Elasticsearch has an <<settings-create-pattern,index pattern>>. In the previous section, the
Shakespeare data set has an index named `shakespeare`, and the accounts data set has an index named `bank`. An _index
pattern_ is a string with optional wildcards that can match multiple indices. For example, in the common logging use
case, a typical index name contains the date in YYYY.MM.DD format, and an index pattern for May would look something
like `logstash-2015.05*`.
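The `*` in an index pattern matches index names much like a shell glob matches file names. A self-contained sketch of the matching behavior (the index names are illustrative):

```shell
# Illustrative: which index names would a pattern like logstash-2015.05* select?
for index in logstash-2015.05.18 logstash-2015.05.19 logstash-2015.06.01 shakespeare; do
  case "$index" in
    logstash-2015.05*) echo "match: $index" ;;
    *)                 echo "no match: $index" ;;
  esac
done
```

In Kibana itself, the equivalent matching is performed by Elasticsearch when the pattern is resolved against the existing indices.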
For this tutorial, any pattern that matches the name of an index we've loaded will work. Open a browser and
navigate to `localhost:5601`. Click the *Settings* tab, then the *Indices* tab. Click *Add New* to define a new index
pattern. Two of the sample data sets, the Shakespeare plays and the financial accounts, don't contain time-series data.
Make sure the *Index contains time-based events* box is unchecked when you create index patterns for these data sets.
Specify `shakes*` as the index pattern for the Shakespeare data set and click *Create* to define the index pattern, then
define a second index pattern named `ba*`.
The Logstash data set does contain time-series data, so after clicking *Add New* to define the index for this data
set, make sure the *Index contains time-based events* box is checked and select the `@timestamp` field from the
*Time-field name* drop-down.
NOTE: When you define an index pattern, indices that match that pattern must exist in Elasticsearch. Those indices must
contain data.
[float]
[[tutorial-discovering]]
=== Discovering Your Data
Click the *Discover* image:images/discover-compass.png[Compass icon] tab to display Kibana's data discovery functions:
image::images/tutorial-discover.png[]
Right under the tab itself, there is a search box where you can search your data. Searches take a specific
{ref}/query-dsl-query-string-query.html#query-string-syntax[query syntax] that enables you to create custom searches,
which you can save and load by clicking the buttons to the right of the search box.
Beneath the search box, the current index pattern is displayed in a drop-down. You can change the index pattern by
selecting a different pattern from the drop-down selector.
You can construct searches by using the field names and the values you're interested in. With numeric fields you can
use comparison operators such as greater than (>), less than (<), or equals (=). You can link elements with the
logical operators AND, OR, and NOT, all in uppercase.
Try selecting the `ba*` index pattern and putting the following search into the search box:
[source,text]
account_number:<100 AND balance:>47500
This search returns all account numbers between zero and 99 with balances in excess of 47,500.
If you're using the linked sample data set, this search returns 5 results: Account numbers 8, 32, 78, 85, and 97.
image::images/tutorial-discover-2.png[]
To narrow the display to only the specific fields of interest, highlight each field in the list that displays under the
index pattern and click the *Add* button. Note how, in this example, adding the `account_number` field changes the
display from the full text of five records to a simple list of five account numbers:
image::images/tutorial-discover-3.png[]
[[tutorial-visualizing]]
=== Data Visualization: Beyond Discovery
The visualization tools available on the *Visualize* tab enable you to display aspects of your data sets in several
different ways.
Click on the *Visualize* image:images/visualize-icon.png[Bar chart icon] tab to start:
image::images/tutorial-visualize.png[]
Click on *Pie chart*, then *From a new search*. Select the `ba*` index pattern.
Visualizations depend on two types of Elasticsearch {ref}/search-aggregations.html[aggregations]: _bucket_
aggregations and _metric_ aggregations. A bucket aggregation sorts your data according to criteria you specify. For
example, in our accounts data set, we can establish a range of account balances, then display what proportions of the
total fall into which range of balances.
The whole pie displays, since we haven't specified any buckets yet.
image::images/tutorial-visualize-pie-1.png[]
Select *Split Slices* from the *Select buckets type* list, then select *Range* from the *Aggregation* drop-down
selector. Select the *balance* field from the *Field* drop-down, then click on *Add Range* four times to bring the
total number of ranges to six. Enter the following ranges:
[source,text]
0 999
1000 2999
3000 6999
7000 14999
15000 30999
31000 50000
Click the *Apply changes* button image:images/apply-changes-button.png[] to display the chart:
image::images/tutorial-visualize-pie-2.png[]
This shows you what proportion of the 1000 accounts fall in these balance ranges. To see another dimension of the data,
we're going to add another bucket aggregation. We can break down each of the balance ranges further by the account
holder's age.
Click *Add sub-buckets* at the bottom, then select *Split Slices*. Choose the *Terms* aggregation and the *age* field from
the drop-downs.
Click the *Apply changes* button image:images/apply-changes-button.png[] to add an external ring with the new
results.
image::images/tutorial-visualize-pie-3.png[]
Save this chart by clicking the *Save Visualization* button to the right of the search field. Name the visualization
_Pie Example_.
Next, we're going to make a bar chart. Click on *New Visualization*, then *Vertical bar chart*. Select *From a new
search* and the `shakes*` index pattern. You'll see a single big bar, since we haven't defined any buckets yet:
image::images/tutorial-visualize-bar-1.png[]
For the Y-axis metrics aggregation, select *Unique Count*, with *speaker* as the field. For Shakespeare plays, it might
be useful to know which plays have the lowest number of distinct speaking parts, if your theater company is short on
actors. For the X-Axis buckets, select the *Terms* aggregation with the *play_name* field. For the *Order*, select
*Ascending*, leaving the *Size* at 5. Write a description for the axes in the *Custom Label* fields.
Leave the other elements at their default values and click the *Apply changes* button
image:images/apply-changes-button.png[]. Your chart should now look like this:
image::images/tutorial-visualize-bar-2.png[]
Notice how the individual play names show up as whole phrases, instead of being broken down into individual words. This
is the result of the mapping we did at the beginning of the tutorial, when we marked the *play_name* field as 'not
analyzed'.
Hovering on each bar shows you the number of speaking parts for each play as a tooltip. You can turn this behavior off,
as well as change many other options for your visualizations, by clicking the *Options* tab in the top left.
Now that you have a list of the smallest casts for Shakespeare plays, you might also be curious to see which of these
plays makes the greatest demands on an individual actor by showing the maximum number of speeches for a given part. Add
a Y-axis aggregation with the *Add metrics* button, then choose the *Max* aggregation for the *speech_number* field. In
the *Options* tab, change the *Bar Mode* drop-down to *grouped*, then click the *Apply changes* button
image:images/apply-changes-button.png[]. Your chart should now look like this:
image::images/tutorial-visualize-bar-3.png[]
As you can see, _Love's Labours Lost_ has an unusually high maximum speech number, compared to the other plays, and
might therefore make more demands on an actor's memory.
Note how the *Number of speaking parts* Y-axis starts at zero, but the bars don't begin to differentiate until 18. To
make the differences stand out by starting the Y-axis at a value closer to the minimum, check the
*Scale Y-Axis to data bounds* box in the *Options* tab.
Save this chart with the name _Bar Example_.
Next, we're going to make a tile map chart to visualize some geographic data. Click on *New Visualization*, then
*Tile map*. Select *From a new search* and the `logstash-*` index pattern. Define the time window for the events
we're exploring by clicking the time selector at the top right of the Kibana interface. Click on *Absolute*, then set
the start time to May 18, 2015 and the end time for the range to May 20, 2015:
image::images/tutorial-timepicker.png[]
Once you've got the time range set up, click the *Go* button, then close the time picker by clicking the small up arrow
at the bottom. You'll see a map of the world, since we haven't defined any buckets yet:
image::images/tutorial-visualize-map-1.png[]
Select *Geo Coordinates* as the bucket, then click the *Apply changes* button image:images/apply-changes-button.png[].
Your chart should now look like this:
image::images/tutorial-visualize-map-2.png[]
You can navigate the map by clicking and dragging, zoom with the image:images/viz-zoom.png[] buttons, or hit the *Fit
Data Bounds* image:images/viz-fit-bounds.png[] button to zoom to the lowest level that includes all the points. You can
also create a filter to define a rectangle on the map, either to include or exclude, by clicking the
*Latitude/Longitude Filter* image:images/viz-lat-long-filter.png[] button and drawing a bounding box on the map.
A green oval with the filter definition displays right under the query box:
image::images/tutorial-visualize-map-3.png[]
Hover on the filter to display the controls to toggle, pin, invert, or delete the filter. Save this chart with the name
_Map Example_.
Finally, we're going to define a sample Markdown widget to display on our dashboard. Click on *New Visualization*, then
*Markdown widget*, to display a very simple Markdown entry field:
image::images/tutorial-visualize-md-1.png[]
Write the following text in the field:
[source,markdown]
# This is a tutorial dashboard!
The Markdown widget uses **markdown** syntax.
> Blockquotes in Markdown use the > character.
Click the *Apply changes* button image:images/apply-changes-button.png[] to display the rendered Markdown in the
preview pane:
image::images/tutorial-visualize-md-2.png[]
Save this visualization with the name _Markdown Example_.
[[tutorial-dashboard]]
=== Putting it all Together with Dashboards
A Kibana dashboard is a collection of visualizations that you can arrange and share. To get started, click the
*Dashboard* tab, then the *Add Visualization* button at the far right of the search box to display the list of saved
visualizations. Select _Markdown Example_, _Pie Example_, _Bar Example_, and _Map Example_, then close the list of
visualizations by clicking the small up-arrow at the bottom of the list. You can move the containers for each
visualization by clicking and dragging the title bar. Resize the containers by dragging the lower right corner of a
visualization's container. Your sample dashboard should end up looking roughly like this:
image::images/tutorial-dashboard.png[]
Click the *Save Dashboard* button, then name the dashboard _Tutorial Dashboard_. You can share a saved dashboard by
clicking the *Share* button to display HTML embedding code as well as a direct link.
[float]
[[wrapping-up]]
=== Wrapping Up
Now that you've handled the basic aspects of Kibana's functionality, you're ready to explore Kibana in further detail.
Take a look at the rest of the documentation for more details!
include::getting-started/wrapping-up.asciidoc[]
@ -0,0 +1,20 @@
[[tutorial-dashboard]]
== Putting it all Together with Dashboards
A dashboard is a collection of visualizations that you can arrange and share.
To build a dashboard that contains the visualizations you saved during this tutorial:
. Click *Dashboard* in the side navigation.
. Click *Add* to display the list of saved visualizations.
. Click _Markdown Example_, _Pie Example_, _Bar Example_, and _Map Example_, then close the list of
visualizations by clicking the small up-arrow at the bottom of the list.
Hovering over a visualization displays the container controls that enable you to
edit, move, delete, and resize the visualization.
Your sample dashboard should end up looking roughly like this:
image::images/tutorial-dashboard.png[]
To get a link to share or HTML code to embed the dashboard in a web page, save
the dashboard and click *Share*.

View file

@ -0,0 +1,22 @@
[[tutorial-define-index]]
== Defining Your Index Patterns
Each set of data loaded to Elasticsearch has an index pattern. In the previous section, the
Shakespeare data set has an index named `shakespeare`, and the accounts data set has an index named `bank`. An _index
pattern_ is a string with optional wildcards that can match multiple indices. For example, in the common logging use
case, a typical index name contains the date in YYYY.MM.DD format, and an index pattern for May would look something
like `logstash-2015.05*`.
For this tutorial, any pattern that matches the name of an index we've loaded will work. Open a browser and
navigate to `localhost:5601`. Click the *Settings* tab, then the *Indices* tab. Click *Add New* to define a new index
pattern. Two of the sample data sets, the Shakespeare plays and the financial accounts, don't contain time-series data.
Make sure the *Index contains time-based events* box is unchecked when you create index patterns for these data sets.
Specify `shakes*` as the index pattern for the Shakespeare data set and click *Create* to define the index pattern, then
define a second index pattern named `ba*`.
The Logstash data set does contain time-series data, so after clicking *Add New* to define the index for this data
set, make sure the *Index contains time-based events* box is checked and select the `@timestamp` field from the
*Time-field name* drop-down.
NOTE: When you define an index pattern, indices that match that pattern must exist in Elasticsearch. Those indices must
contain data.
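One quick way to confirm this is to list the indices that match your patterns before you define them. A sketch, assuming Elasticsearch is reachable on `localhost:9200`:

```shell
# List any indices matching the tutorial patterns, with document counts
# (the trailing "|| echo" keeps the command from aborting if Elasticsearch is down)
curl -s 'localhost:9200/_cat/indices/shakes*,ba*,logstash-*?v' || echo "Elasticsearch is not reachable"
```

If an index pattern matches nothing here, go back and reload the corresponding sample data set.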

View file

@ -0,0 +1,42 @@
[[tutorial-discovering]]
== Discovering Your Data
Click *Discover* in the side navigation to display Kibana's data discovery functions:
image::images/tutorial-discover.png[]
In the query bar, you can enter an
{es-ref}query-dsl-query-string-query.html#query-string-syntax[Elasticsearch
query] to search your data. You can explore the results in Discover and create
visualizations of saved searches in Visualize.
The current index pattern is displayed beneath the query bar. The index pattern
determines which indices are searched when you submit a query. To search a
different set of indices, select a different pattern from the drop-down menu.
To add an index pattern, go to *Management/Kibana/Index Patterns* and click
*Add New*.
You can construct searches by using the field names and the values you're
interested in. With numeric fields you can use comparison operators such as
greater than (>), less than (<), or equals (=). You can link elements with the
logical operators AND, OR, and NOT, all in uppercase.
To try it out, select the `ba*` index pattern and enter the following query string
in the query bar:
[source,text]
account_number:<100 AND balance:>47500
This query returns all account numbers between zero and 99 with balances in
excess of 47,500. When searching the sample bank data, it returns 5 results:
Account numbers 8, 32, 78, 85, and 97.
image::images/tutorial-discover-2.png[]
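Under the hood, Discover wraps this query string in an Elasticsearch `query_string` query. A rough sketch of the request body (Kibana's actual request includes additional parameters such as sorting and highlighting):

[source,json]
{
  "query": {
    "query_string": {
      "query": "account_number:<100 AND balance:>47500"
    }
  }
}

You can submit this body directly to `localhost:9200/bank/_search` with curl to compare the hit count with what Discover reports.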
By default, all fields are shown for each matching document. To choose which
document fields to display, hover over the Available Fields list and click the
*add* button next to each field you want to include. For example, if you add
just the `account_number`, the display changes to a simple list of five
account numbers:
image::images/tutorial-discover-3.png[]

View file

@ -0,0 +1,171 @@
[[tutorial-load-dataset]]
== Loading Sample Data
The tutorials in this section rely on the following data sets:
* The complete works of William Shakespeare, suitably parsed into fields. Download this data set by clicking here:
https://www.elastic.co/guide/en/kibana/3.0/snippets/shakespeare.json[shakespeare.json].
* A set of fictitious accounts with randomly generated data. Download this data set by clicking here:
https://github.com/bly2k/files/blob/master/accounts.zip?raw=true[accounts.zip]
* A set of randomly generated log files. Download this data set by clicking here:
https://download.elastic.co/demos/kibana/gettingstarted/logs.jsonl.gz[logs.jsonl.gz]
Two of the data sets are compressed. Use the following commands to extract the files:
[source,shell]
unzip accounts.zip
gunzip logs.jsonl.gz
The Shakespeare data set is organized in the following schema:
[source,json]
{
"line_id": INT,
"play_name": "String",
"speech_number": INT,
"line_number": "String",
"speaker": "String",
"text_entry": "String"
}
The accounts data set is organized in the following schema:
[source,json]
{
"account_number": INT,
"balance": INT,
"firstname": "String",
"lastname": "String",
"age": INT,
"gender": "M or F",
"address": "String",
"employer": "String",
"email": "String",
"city": "String",
"state": "String"
}
The schema for the logs data set has dozens of different fields, but the notable ones used in this tutorial are:
[source,json]
{
"memory": INT,
"geo.coordinates": "geo_point",
"@timestamp": "date"
}
Before we load the Shakespeare and logs data sets, we need to set up {es-ref}mapping.html[_mappings_] for the fields.
Mapping divides the documents in the index into logical groups and specifies a field's characteristics, such as the
field's searchability or whether or not it's _tokenized_, or broken up into separate words.
Use the following command to set up a mapping for the Shakespeare data set:
[source,shell]
curl -XPUT http://localhost:9200/shakespeare -d '
{
"mappings" : {
"_default_" : {
"properties" : {
"speaker" : {"type": "string", "index" : "not_analyzed" },
"play_name" : {"type": "string", "index" : "not_analyzed" },
"line_id" : { "type" : "integer" },
"speech_number" : { "type" : "integer" }
}
}
}
}
';
This mapping specifies the following qualities for the data set:
* The _speaker_ field is a string that isn't analyzed. The string in this field is treated as a single unit, even if
there are multiple words in the field.
* The same applies to the _play_name_ field.
* The _line_id_ and _speech_number_ fields are integers.
The logs data set requires a mapping to label the latitude/longitude pairs in the logs as geographic locations by
applying the `geo_point` type to those fields.
Use the following commands to establish `geo_point` mapping for the logs:
[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.18 -d '
{
"mappings": {
"log": {
"properties": {
"geo": {
"properties": {
"coordinates": {
"type": "geo_point"
}
}
}
}
}
}
}
';
[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.19 -d '
{
"mappings": {
"log": {
"properties": {
"geo": {
"properties": {
"coordinates": {
"type": "geo_point"
}
}
}
}
}
}
}
';
[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.20 -d '
{
"mappings": {
"log": {
"properties": {
"geo": {
"properties": {
"coordinates": {
"type": "geo_point"
}
}
}
}
}
}
}
';
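The three commands above are identical except for the index name, so you could equally create the mappings in a single loop. A sketch, assuming Elasticsearch is listening on `localhost:9200`:

```shell
# Create the same geo_point mapping for each daily logstash index
mapping='{"mappings":{"log":{"properties":{"geo":{"properties":{"coordinates":{"type":"geo_point"}}}}}}}'
for day in 18 19 20; do
  # "-s" and "|| echo" keep the loop going even if Elasticsearch is unreachable
  curl -s -XPUT "http://localhost:9200/logstash-2015.05.${day}" -d "$mapping" || echo "could not reach Elasticsearch for logstash-2015.05.${day}"
done
```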
The accounts data set doesn't require any mappings, so at this point we're ready to use the Elasticsearch
{es-ref}docs-bulk.html[`bulk`] API to load the data sets with the following commands:
[source,shell]
curl -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
curl -XPOST 'localhost:9200/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
curl -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl
These commands may take some time to execute, depending on the computing resources available.
Verify successful loading with the following command:
[source,shell]
curl 'localhost:9200/_cat/indices?v'
You should see output similar to the following:
[source,shell]
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open bank 5 1 1000 0 418.2kb 418.2kb
yellow open shakespeare 5 1 111396 0 17.6mb 17.6mb
yellow open logstash-2015.05.18 5 1 4631 0 15.6mb 15.6mb
yellow open logstash-2015.05.19 5 1 4624 0 15.7mb 15.7mb
yellow open logstash-2015.05.20 5 1 4750 0 16.4mb 16.4mb

View file

@ -0,0 +1,185 @@
[[tutorial-visualizing]]
== Visualizing Your Data
To start visualizing your data, click *Visualize* in the side navigation:
image::images/tutorial-visualize.png[]
The *Visualize* tools enable you to view your data in several ways. For example,
let's use that venerable visualization, the pie chart, to get some insight
into the account balances in the sample bank account data.
To get started, click *Pie chart* in the list of visualizations. You can build
visualizations from saved searches, or enter new search criteria. To enter
new search criteria, you first need to select an index pattern to specify
what indices to search. We want to search the account data, so select the `ba*`
index pattern.
The default search matches all documents. Initially, a single "slice"
encompasses the entire pie:
image::images/tutorial-visualize-pie-1.png[]
To specify what slices to display in the chart, you use an Elasticsearch
{es-ref}search-aggregations.html[bucket aggregation]. A bucket aggregation
simply sorts the documents that match your search criteria into different
categories, aka _buckets_. For example, the account data includes the balance
of each account. Using a bucket aggregation, you can establish multiple ranges
of account balances and find out how many accounts fall into each range.
To define a bucket for each range:
. Click the *Split Slices* buckets type.
. Select *Range* from the *Aggregation* list.
. Select the *balance* field from the *Field* list.
. Click *Add Range* four times to bring the
total number of ranges to six.
. Define the following ranges:
+
[source,text]
0 999
1000 2999
3000 6999
7000 14999
15000 30999
31000 50000
. Click *Apply changes* image:images/apply-changes-button.png[] to update the chart.
Now you can see what proportion of the 1000 accounts fall into each balance
range.
image::images/tutorial-visualize-pie-2.png[]
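Under the hood, this bucket configuration becomes an Elasticsearch `range` aggregation over the `balance` field. A sketch of the aggregation portion of the request (note that Elasticsearch treats the `to` bound as exclusive, so the exact boundary values Kibana sends may differ slightly from what you typed):

[source,json]
{
  "size": 0,
  "aggs": {
    "balance_ranges": {
      "range": {
        "field": "balance",
        "ranges": [
          { "from": 0, "to": 999 },
          { "from": 1000, "to": 2999 },
          { "from": 3000, "to": 6999 },
          { "from": 7000, "to": 14999 },
          { "from": 15000, "to": 30999 },
          { "from": 31000, "to": 50000 }
        ]
      }
    }
  }
}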
Let's take a look at another dimension of the data: the account holder's
age. By adding another bucket aggregation, you can see the ages of the account
holders in each balance range:
. Click *Add sub-buckets* below the buckets list.
. Click *Split Slices* in the buckets type list.
. Select *Terms* from the aggregation list.
. Select *age* from the field list.
. Click *Apply changes* image:images/apply-changes-button.png[].
Now you can see the breakdown of the account holders' ages displayed
in a ring around the balance ranges.
image::images/tutorial-visualize-pie-3.png[]
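Behind the scenes, adding the sub-bucket nests a `terms` aggregation inside each `range` bucket, so every balance range is split by age. A sketch of the nested structure, showing only the first range for brevity:

[source,json]
{
  "size": 0,
  "aggs": {
    "balance_ranges": {
      "range": {
        "field": "balance",
        "ranges": [ { "from": 0, "to": 999 } ]
      },
      "aggs": {
        "ages": {
          "terms": { "field": "age" }
        }
      }
    }
  }
}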
To save this chart so we can use it later, click *Save* and enter the name _Pie Example_.
Next, we're going to look at data in the Shakespeare data set. Let's find out how the
plays compare when it comes to the number of speaking parts and display the information
in a bar chart:
. Click *New* and select *Vertical bar chart*.
. Select the `shakes*` index pattern. Since you haven't defined any buckets yet,
you'll see a single big bar that shows the total count of documents that match
the default wildcard query.
+
image::images/tutorial-visualize-bar-1.png[]
. To show the number of speaking parts per play along the y-axis, you need to
configure the Y-axis {es-ref}search-aggregations.html[metric aggregation]. A metric
aggregation computes metrics based on values extracted from the search results.
To get the number of speaking parts per play, select the *Unique Count*
aggregation and choose *speaker* from the field list. You can also give the
axis a custom label, _Speaking Parts_.
. To show the different plays along the x-axis, select the X-Axis buckets type,
select *Terms* from the aggregation list, and choose *play_name* from the field
list. To list them alphabetically, select *Ascending* order. You can also give
the axis a custom label, _Play Name_.
. Click *Apply changes* image:images/apply-changes-button.png[] to view the
results.
image::images/tutorial-visualize-bar-2.png[]
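In Elasticsearch terms, this chart pairs a `terms` bucket aggregation on `play_name` with a `cardinality` metric aggregation on `speaker` (Kibana's *Unique Count*, which is an approximate count). A sketch of the aggregation portion of the request, assuming the 5.x `_term` order key (later versions use `_key`):

[source,json]
{
  "size": 0,
  "aggs": {
    "plays": {
      "terms": {
        "field": "play_name",
        "order": { "_term": "asc" }
      },
      "aggs": {
        "speaking_parts": {
          "cardinality": { "field": "speaker" }
        }
      }
    }
  }
}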
Notice how the individual play names show up as whole phrases, instead of being broken down into individual words. This
is the result of the mapping we did at the beginning of the tutorial, when we marked the *play_name* field as 'not
analyzed'.
Hovering over each bar shows you the number of speaking parts for each play as a tooltip. To turn tooltips
off and configure other options for your visualizations, select the Visualization builder's *Options* tab.
Now that you have a list of the smallest casts for Shakespeare plays, you might also be curious to see which of these
plays makes the greatest demands on an individual actor by showing the maximum number of speeches for a given part.
. Click *Add metrics* to add a Y-axis aggregation.
. Choose the *Max* aggregation and select the *speech_number* field.
. Click *Options* and change the *Bar Mode* to *grouped*.
. Click *Apply changes* image:images/apply-changes-button.png[]. Your chart should now look like this:
image::images/tutorial-visualize-bar-3.png[]
As you can see, _Love's Labours Lost_ has an unusually high maximum speech number, compared to the other plays, and
might therefore make more demands on an actor's memory.
Note how the *Number of speaking parts* Y-axis starts at zero, but the bars don't begin to differentiate until 18. To
make the differences stand out by starting the Y-axis at a value closer to the minimum, go to Options and select
*Scale Y-Axis to data bounds*.
Save this chart with the name _Bar Example_.
Next, we're going to use a tile map chart to visualize geographic information in our log file sample data.
. Click *New*.
. Select *Tile map*.
. Select the `logstash-*` index pattern.
. Set the time window for the events we're exploring:
.. Click the time picker in the Kibana toolbar.
.. Click *Absolute*.
.. Set the start time to May 18, 2015 and the end time to May 20, 2015.
+
image::images/tutorial-timepicker.png[]
. Once you've got the time range set up, click the *Go* button and close the time picker by
clicking the small up arrow in the bottom right corner.
You'll see a map of the world, since we haven't defined any buckets yet:
image::images/tutorial-visualize-map-1.png[]
To map the geo coordinates from the log files, select *Geo Coordinates* as
the bucket and click *Apply changes* image:images/apply-changes-button.png[].
Your chart should now look like this:
image::images/tutorial-visualize-map-2.png[]
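The *Geo Coordinates* bucket corresponds to an Elasticsearch `geohash_grid` aggregation, which groups nearby points into grid cells; Kibana picks the precision automatically based on the zoom level. A sketch of the aggregation, with an illustrative precision value:

[source,json]
{
  "size": 0,
  "aggs": {
    "locations": {
      "geohash_grid": {
        "field": "geo.coordinates",
        "precision": 3
      }
    }
  }
}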
You can navigate the map by clicking and dragging, zoom with the
image:images/viz-zoom.png[] buttons, or hit the *Fit Data Bounds*
image:images/viz-fit-bounds.png[] button to zoom to the lowest level that
includes all the points. You can also include or exclude a rectangular area
by clicking the *Latitude/Longitude Filter* image:images/viz-lat-long-filter.png[]
button and drawing a bounding box on the map. Applied filters are displayed
below the query bar. Hovering over a filter displays controls to toggle,
pin, invert, or delete the filter.
image::images/tutorial-visualize-map-3.png[]
Save this map with the name _Map Example_.
Finally, create a Markdown widget to display extra information:
. Click *New*.
. Select *Markdown widget*.
. Enter the following text in the field:
+
[source,markdown]
# This is a tutorial dashboard!
The Markdown widget uses **markdown** syntax.
> Blockquotes in Markdown use the > character.
. Click *Apply changes* image:images/apply-changes-button.png[] to render the Markdown in the
preview pane.
+
image::images/tutorial-visualize-md-1.png[]
image::images/tutorial-visualize-md-2.png[]
Save this visualization with the name _Markdown Example_.

View file

@ -0,0 +1,14 @@
[[wrapping-up]]
== Wrapping Up
Now that you have a handle on the basics, you're ready to start exploring
your own data with Kibana.
* See <<discover, Discover>> for more information about searching and filtering
your data.
* See <<visualize, Visualize>> for information about all of the visualization
types Kibana has to offer.
* See <<management, Management>> for information about configuring Kibana
and managing your saved objects.
* See <<console-kibana, Console>> for information about the interactive
console UI you can use to submit REST requests to Elasticsearch.

View file

@ -1,35 +1,38 @@
[[kibana-guide]]
= Kibana User Guide
:ref: http://www.elastic.co/guide/en/elasticsearch/reference/current/
:shield: https://www.elastic.co/guide/en/shield/current
:scyld: X-Pack Security
:k4issue: https://github.com/elastic/kibana/issues/
:k4pull: https://github.com/elastic/kibana/pull/
:esversion: 5.0.0-alpha5
:packageversion: 5.0-alpha
:version: 6.0.0-alpha1
:major-version: 6.x
//////////
release-state can be: released | prerelease | unreleased
//////////
:release-state: unreleased
:es-ref: https://www.elastic.co/guide/en/elasticsearch/reference/master/
:xpack-ref: https://www.elastic.co/guide/en/x-pack/current/
:issue: https://github.com/elastic/kibana/issues/
:pull: https://github.com/elastic/kibana/pull/
include::introduction.asciidoc[]
include::setup.asciidoc[]
include::migration.asciidoc[]
include::getting-started.asciidoc[]
include::access.asciidoc[]
include::discover.asciidoc[]
include::visualize.asciidoc[]
include::dashboard.asciidoc[]
include::timelion.asciidoc[]
include::console.asciidoc[]
include::settings.asciidoc[]
include::management.asciidoc[]
include::production.asciidoc[]
include::releasenotes.asciidoc[]
include::plugins.asciidoc[]

View file

@ -10,47 +10,3 @@ create and share dynamic dashboards that display changes to Elasticsearch querie
Setting up Kibana is a snap. You can install Kibana and start exploring your Elasticsearch indices in minutes -- no
code, no additional infrastructure required.
For more information about creating and sharing visualizations and dashboards, see the <<visualize, Visualize>>
and <<dashboard, Dashboard>> topics. A complete <<getting-started,tutorial>> covering several aspects of Kibana's
functionality is also available.
NOTE: This guide describes how to use Kibana {version}. For information about what's new in Kibana {version}, see
the <<releasenotes, release notes>>.
////
[float]
[[data-discovery]]
=== Data Discovery and Visualization
Let's take a look at how you might use Kibana to explore and visualize data.
We've indexed some data from Transport for London (TFL) that shows one week
of transit (Oyster) card usage.
From Kibana's Discover page, we can submit search queries, filter the results, and
examine the data in the returned documents. For example, we can get all trips
completed by the Tube during the week by excluding incomplete trips and trips by bus:
image:images/TFL-CompletedTrips.jpg[Discover]
Right away, we can see the peaks for the morning and afternoon commute hours in the
histogram. By default, the Discover page also shows the first 500 entries that match the
search criteria. You can change the time filter, interact with the histogram to drill
down into the data, and view the details of particular documents. For more
information about exploring your data from the Discover page, see <<discover, Discover>>.
You can construct visualizations of your search results from the Visualization page.
Each visualization is associated with a search. For example, we can create a histogram
that shows the weekly London commute traffic via the Tube using our previous search.
The Y-axis shows the number of trips. The X-axis shows
the day and time. By adding a sub-aggregation, we can see the top 3 end stations during
each hour:
image:images/TFL-CommuteHistogram.jpg[Visualize]
You can save and share visualizations and combine them into dashboards to make it easy
to correlate related information. For example, we could create a dashboard
that displays several visualizations of the TFL data:
image:images/TFL-Dashboard.jpg[Dashboard]
////

View file

@ -1,114 +0,0 @@
[[setup-repositories]]
=== Installing Kibana with apt and yum
Binary packages for Kibana, along with APT and YUM package repositories, are available for Linux distributions that
support the `apt` and `yum` tools.
NOTE: Since the packages are created as part of the Kibana build, source packages are not available.
Packages are signed with the PGP key http://pgp.mit.edu/pks/lookup?op=vindex&search=0xD27D666CD88E42B4[D88E42B4], which
has the following fingerprint:
4609 5ACC 8548 582C 1A26 99A9 D27D 666C D88E 42B4
[float]
[[kibana-apt]]
===== Installing Kibana with apt-get
. Download and install the Public Signing Key:
+
[source,sh]
--------------------------------------------------
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
--------------------------------------------------
+
. Add the repository definition to your `/etc/apt/sources.list.d/kibana.list` file:
+
["source","sh",subs="attributes"]
--------------------------------------------------
echo "deb https://packages.elastic.co/kibana/{packageversion}/debian stable main" | sudo tee -a /etc/apt/sources.list.d/kibana.list
--------------------------------------------------
+
[WARNING]
==================================================
Use the `echo` method described above to add the Kibana repository. Do not use `add-apt-repository`, as that command
adds a `deb-src` entry with no corresponding source package.
When the `deb-src` entry is present, the commands in this procedure generate an error similar to the following:
Unable to find expected entry 'main/source/Sources' in Release file (Wrong sources.list entry or malformed file)
Delete the `deb-src` entry from the `/etc/apt/sources.list.d/kibana.list` file to clear the error.
==================================================
+
. Run `apt-get update` to ready the repository. Install Kibana with the following command:
+
[source,sh]
--------------------------------------------------
sudo apt-get update && sudo apt-get install kibana
--------------------------------------------------
+
. Configure Kibana to automatically start during bootup. If your distribution is using the System V version of `init`,
run the following command:
+
[source,sh]
--------------------------------------------------
sudo update-rc.d kibana defaults 95 10
--------------------------------------------------
+
. If your distribution is using `systemd`, run the following commands instead:
+
[source,sh]
--------------------------------------------------
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
--------------------------------------------------
[float]
[[kibana-yum]]
===== Installing Kibana with yum
WARNING: The repositories set up in this procedure are not compatible with distributions using version 3 of `rpm`, such
as CentOS version 5.
. Download and install the public signing key:
+
[source,sh]
--------------------------------------------------
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
--------------------------------------------------
+
. Create a file named `kibana.repo` in the `/etc/yum.repos.d/` directory with the following contents:
+
["source","sh",subs="attributes"]
--------------------------------------------------
[kibana-{packageversion}]
name=Kibana repository for {packageversion} packages
baseurl=https://packages.elastic.co/kibana/{packageversion}/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
--------------------------------------------------
+
. Install Kibana by running the following command:
+
[source,sh]
--------------------------------------------------
yum install kibana
--------------------------------------------------
+
. Configure Kibana to automatically start during bootup. If your distribution is using the System V version of `init`
(check with `ps -p 1`), run the following command:
+
[source,sh]
--------------------------------------------------
chkconfig --add kibana
--------------------------------------------------
+
. If your distribution is using `systemd`, run the following commands instead:
+
[source,sh]
--------------------------------------------------
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
--------------------------------------------------

22
docs/management.asciidoc Normal file
View file

@ -0,0 +1,22 @@
[[management]]
= Management
[partintro]
--
The Management application is where you perform your runtime configuration of
Kibana, including both the initial setup and ongoing configuration of index
patterns, advanced settings that tweak the behaviors of Kibana itself, and
the various "objects" that you can save throughout Kibana such as searches,
visualizations, and dashboards.
This section is pluggable, so in addition to the out-of-the-box capabilities,
packs such as X-Pack can add additional management capabilities to Kibana.
--
include::management/index-patterns.asciidoc[]
include::management/managing-fields.asciidoc[]
include::management/advanced-options.asciidoc[]
include::management/managing-saved-objects.asciidoc[]

View file

@ -1,13 +1,28 @@
[[advanced-options]]
== Setting Advanced Options
The *Advanced Settings* page enables you to directly edit settings that control the behavior of the Kibana application.
For example, you can change the format used to display dates, specify the default index pattern, and set the precision
for displayed decimal values.
To set advanced options:
. Go to *Settings > Advanced*.
. Click the *Edit* button for the option you want to modify.
. Enter a new value for the option.
. Click the *Save* button.
[float]
[[kibana-settings-reference]]
WARNING: Modifying the following settings can significantly affect Kibana's performance and cause problems that are
difficult to diagnose. Setting a property's value to a blank field will revert to the default behavior, which may not be
compatible with other configuration settings. Deleting a custom setting removes it from Kibana permanently.
.Kibana Settings Reference
[horizontal]
`query:queryString:options`:: Options for the Lucene query string parser.
`sort:options`:: Options for the Elasticsearch {es-ref}search-request-sort.html[sort] parameter.
`dateFormat`:: The format to use for displaying pretty-formatted dates.
`dateFormat:tz`:: The timezone that Kibana uses. The default value of `Browser` uses the timezone detected by the browser.
`dateFormat:scaled`:: These values define the format used to render ordered time-based data. Formatted timestamps must
adapt to the interval between measurements. Keys are http://en.wikipedia.org/wik
`metaFields`:: An array of fields outside of `_source`. Kibana merges these fields into the document when displaying the
document.
`discover:sampleSize`:: The number of rows to show in the Discover table.
`doc_table:highlight`:: Highlight results in Discover and Saved Searches Dashboard. Highlighting makes requests slow when
working with big documents. Set this property to `false` to disable highlighting.
`courier:maxSegmentCount`:: Kibana splits requests in the Discover app into segments to limit the size of requests sent to
the Elasticsearch cluster. This setting constrains the length of the segment list. Long segment lists can significantly
increase request processing time.
`histogram:maxBars`:: Date histograms are not generated with more bars than the value of this property, scaling values
when necessary.
`visualization:tileMap:maxPrecision`:: The maximum geoHash precision displayed on tile maps: 7 is high, 10 is very high,
12 is the maximum. {es-ref}search-aggregations-bucket-geohashgrid-aggregation.html#_cell_dimensions_at_the_equator[Explanation of cell dimensions].
`visualization:tileMap:WMSdefaults`:: Default properties for the WMS map server support in the tile map.
`visualization:colorMapping`:: Maps values to specified colors within visualizations.
`visualization:loadingDelay`:: Time to wait before dimming visualizations during query.
[[index-patterns]]
== Index Patterns
To use Kibana, you have to tell it about the Elasticsearch indices that you want to explore by configuring one or more
index patterns. You can also:
* Create scripted fields that are computed on the fly from your data. You can browse and visualize scripted fields, but
you cannot search them.
* Set advanced options such as the number of rows to show in a table and how many of the most popular fields to show.
Use caution when modifying advanced options, as it's possible to set values that are incompatible with one another.
* Configure Kibana for a production environment.
[float]
[[settings-create-pattern]]
== Creating an Index Pattern to Connect to Elasticsearch
An _index pattern_ identifies one or more Elasticsearch indices that you want to explore with Kibana. Kibana looks for
index names that match the specified pattern.
An asterisk (*) in the pattern matches zero or more characters. For example, the pattern `myindex-*` matches all
indices whose names start with `myindex-`, such as `myindex-1` and `myindex-2`.
An index pattern can also simply be the name of a single index.
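As an illustration of the wildcard semantics, Python's `fnmatch` applies the same zero-or-more-characters rule (the index names below are made up):

```python
from fnmatch import fnmatch

# A Kibana index pattern's asterisk behaves like a shell glob:
# it matches zero or more characters. Index names are illustrative.
pattern = "myindex-*"
indices = ["myindex-1", "myindex-2", "myindex", "otherindex-1"]

matches = [name for name in indices if fnmatch(name, pattern)]
print(matches)  # ['myindex-1', 'myindex-2']
```

Note that `myindex` itself does not match, because the pattern requires the literal `myindex-` prefix before the wildcard.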
To create an index pattern to connect to Elasticsearch:
. Go to the *Settings > Indices* tab.
. Specify an index pattern that matches the name of one or more of your Elasticsearch indices. By default, Kibana
guesses that you're working with log data being fed into Elasticsearch by Logstash.
+
NOTE: When you switch between top-level tabs, Kibana remembers where you were. For example, if you view a particular
index pattern from the Settings tab, switch to the Discover tab, and then go back to the Settings tab, Kibana displays
the index pattern you last looked at. To get to the create pattern form, click the *Add* button in the Index Patterns
list.
. If your index contains a timestamp field that you want to use to perform time-based comparisons, select the *Index
contains time-based events* option and select the index field that contains the timestamp. Kibana reads the index
mapping to list all of the fields that contain a timestamp.
. By default, Kibana restricts wildcard expansion of time-based index patterns to indices with data within the currently
selected time range. Click *Do not expand index pattern when searching* to disable this behavior.
. Click *Create* to add the index pattern.
. To designate the new pattern as the default pattern to load when you view the Discover tab, click the *favorite*
button.
NOTE: When you define an index pattern, indices that match that pattern must exist in Elasticsearch. Those indices must
contain data.
To use an event time in an index name, enclose the static text in the pattern and specify the date format using the
tokens described in the following table.
For example, `[logstash-]YYYY.MM.DD` matches all indices whose names have a timestamp of the form `YYYY.MM.DD` appended
to the prefix `logstash-`, such as `logstash-2015.01.31` and `logstash-2015-02-01`.
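The resolution of such a pattern can be sketched in Python, where the `YYYY.MM.DD` tokens correspond to `strftime`'s `%Y.%m.%d` (the prefix and dates are illustrative):

```python
from datetime import date

def resolve_index(static_prefix, date_fmt, day):
    # Text inside [brackets] in the Kibana pattern is kept literally;
    # the remaining tokens are replaced with parts of the event date.
    return static_prefix + day.strftime(date_fmt)

print(resolve_index("logstash-", "%Y.%m.%d", date(2015, 1, 31)))
# logstash-2015.01.31
```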
[float]
[[date-format-tokens]]
.Date Format Tokens
[horizontal]
`M`:: Month - cardinal: 1 2 3 ... 12
`Mo`:: Month - ordinal: 1st 2nd 3rd ... 12th
`MM`:: Month - two digit: 01 02 03 ... 12
`MMM`:: Month - abbreviation: Jan Feb Mar ... Dec
`MMMM`:: Month - full: January February March ... December
`Q`:: Quarter: 1 2 3 4
`D`:: Day of Month - cardinal: 1 2 3 ... 31
`Do`:: Day of Month - ordinal: 1st 2nd 3rd ... 31st
`DD`:: Day of Month - two digit: 01 02 03 ... 31
`DDD`:: Day of Year - cardinal: 1 2 3 ... 365
`DDDo`:: Day of Year - ordinal: 1st 2nd 3rd ... 365th
`DDDD`:: Day of Year - three digit: 001 002 ... 364 365
`d`:: Day of Week - cardinal: 0 1 2 ... 6
`do`:: Day of Week - ordinal: 0th 1st 2nd ... 6th
`dd`:: Day of Week - 2-letter abbreviation: Su Mo Tu ... Sa
`ddd`:: Day of Week - 3-letter abbreviation: Sun Mon Tue ... Sat
`dddd`:: Day of Week - full: Sunday Monday Tuesday ... Saturday
`e`:: Day of Week (locale): 0 1 2 ... 6
`E`:: Day of Week (ISO): 1 2 3 ... 7
`w`:: Week of Year - cardinal (locale): 1 2 3 ... 53
`wo`:: Week of Year - ordinal (locale): 1st 2nd 3rd ... 53rd
`ww`:: Week of Year - 2-digit (locale): 01 02 03 ... 53
`W`:: Week of Year - cardinal (ISO): 1 2 3 ... 53
`Wo`:: Week of Year - ordinal (ISO): 1st 2nd 3rd ... 53rd
`WW`:: Week of Year - two-digit (ISO): 01 02 03 ... 53
`YY`:: Year - two digit: 70 71 72 ... 30
`YYYY`:: Year - four digit: 1970 1971 1972 ... 2030
`gg`:: Week Year - two digit (locale): 70 71 72 ... 30
`gggg`:: Week Year - four digit (locale): 1970 1971 1972 ... 2030
`GG`:: Week Year - two digit (ISO): 70 71 72 ... 30
`GGGG`:: Week Year - four digit (ISO): 1970 1971 1972 ... 2030
`A`:: AM/PM: AM PM
`a`:: am/pm: am pm
`H`:: Hour: 0 1 2 ... 23
`HH`:: Hour - two digit: 00 01 02 ... 23
`h`:: Hour - 12-hour clock: 1 2 3 ... 12
`hh`:: Hour - 12-hour clock, 2 digit: 01 02 03 ... 12
`m`:: Minute: 0 1 2 ... 59
`mm`:: Minute - two-digit: 00 01 02 ... 59
`s`:: Second: 0 1 2 ... 59
`ss`:: Second - two-digit: 00 01 02 ... 59
`S`:: Fractional Second - 10ths: 0 1 2 ... 9
`SS`:: Fractional Second - 100ths: 0 1 ... 98 99
`SSS`:: Fractional Seconds - 1000ths: 0 1 ... 998 999
`Z`:: Timezone - offset from UTC (hh:mm format): -07:00 -06:00 -05:00 ... +07:00
`ZZ`:: Timezone - offset from UTC (hhmm format): -0700 -0600 -0500 ... +0700
`X`:: Unix Timestamp: 1360013296
`x`:: Unix Millisecond Timestamp: 1360013296123
[float]
[[set-default-pattern]]
== Setting the Default Index Pattern
The default index pattern is loaded automatically when you view the *Discover* tab. Kibana displays a star to the
left of the name of the default pattern in the Index Patterns list on the *Settings > Indices* tab. The first pattern
you create is automatically designated as the default pattern.
To set a different pattern as the default index pattern:
. Go to the *Settings > Indices* tab.
. Select the pattern you want to set as the default in the Index Patterns list.
. Click the pattern's *Favorite* button.
NOTE: You can also manually set the default index pattern in *Settings > Advanced*.
[float]
[[reload-fields]]
== Reloading the Index Fields List
When you add an index mapping, Kibana automatically scans the indices that match the pattern to display a list of the
index fields. You can reload the index fields list to pick up any newly-added fields.
Reloading the index fields list also resets Kibana's popularity counters for the fields. The popularity counters keep
track of the fields you've used most often within Kibana and are used to sort fields within lists.
To reload the index fields list:
. Go to the *Settings > Indices* tab.
. Select an index pattern from the Index Patterns list.
. Click the pattern's *Reload* button.
[float]
[[delete-pattern]]
== Deleting an Index Pattern
To delete an index pattern:
. Go to the *Settings > Indices* tab.
. Select the pattern you want to remove in the Index Patterns list.
. Click the pattern's *Delete* button.
. Confirm that you want to remove the index pattern.
[[managing-fields]]
== Managing Fields
The fields for the index pattern are listed in a table. Click a column header to sort the table by that column. Click
the *Controls* button in the rightmost column for a given field to edit the field's properties. You can manually set
the field's format from the *Format* drop-down. Format options vary based on the field's type.
You can also set the field's popularity value in the *Popularity* text entry box to any desired value. Click the
*Update Field* button to confirm your changes or *Cancel* to return to the list of fields.
Kibana has field formatters for the following field types:
* <<field-formatters-string, Strings>>
* <<field-formatters-date, Dates>>
* <<field-formatters-geopoint, Geopoints>>
* <<field-formatters-numeric, Numbers>>
[[field-formatters-string]]
=== String Field Formatters
String fields support the `String` and `Url` formatters.
include::field-formatters/string-formatter.asciidoc[]
include::field-formatters/url-formatter.asciidoc[]
[[field-formatters-date]]
=== Date Field Formatters
Date fields support the `Date`, `Url`, and `String` formatters.
The `Date` formatter enables you to choose the display format of date stamps using the http://momentjs.com[Moment.js]
standard format definitions.
include::field-formatters/string-formatter.asciidoc[]
include::field-formatters/url-formatter.asciidoc[]
[[field-formatters-geopoint]]
=== Geographic Point Field Formatters
Geographic point fields support the `String` formatter.
include::field-formatters/string-formatter.asciidoc[]
[[field-formatters-numeric]]
=== Numeric Field Formatters
Numeric fields support the `Url`, `Bytes`, `Duration`, `Number`, `Percentage`, `String`, and `Color` formatters.
include::field-formatters/url-formatter.asciidoc[]
include::field-formatters/string-formatter.asciidoc[]
include::field-formatters/duration-formatter.asciidoc[]
include::field-formatters/color-formatter.asciidoc[]
The `Bytes`, `Number`, and `Percentage` formatters enable you to choose the display formats of numbers in this field using
the https://adamwdraper.github.io/Numeral-js/[numeral.js] standard format definitions.
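As a rough sketch of the kind of transformation a `Bytes`-style formatter applies (an analogy, not Kibana's actual implementation):

```python
def format_bytes(value):
    # Scale a raw byte count to a human-readable unit, roughly as a
    # Bytes field formatter would render it in a table or visualization.
    for unit in ["B", "KB", "MB", "GB", "TB"]:
        if abs(value) < 1024:
            return f"{value:.2f} {unit}"
        value /= 1024
    return f"{value:.2f} PB"

print(format_bytes(1234567))  # 1.18 MB
```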
[[scripted-fields]]
=== Scripted Fields
Scripted fields compute data on the fly from the data in your Elasticsearch indices. Scripted field data is shown on
the Discover tab as part of the document data, and you can use scripted fields in your visualizations.
Scripted field values are computed at query time so they aren't indexed and cannot be searched.
NOTE: Kibana cannot query scripted fields.
WARNING: Computing data on the fly with scripted fields can be very resource intensive and can have a direct impact on
Kibana's performance. Keep in mind that there's no built-in validation of a scripted field. If your scripts are
buggy, you'll get exceptions whenever you try to view the dynamically generated data.
Scripted fields use the Lucene expression syntax. For more information,
see {es-ref}modules-scripting-expression.html[Lucene Expressions Scripts].
You can reference any single value numeric field in your expressions, for example:
----
doc['field_name'].value
----
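Conceptually, the expression is evaluated against each matching document at query time. A rough Python analogy (field names are illustrative; this is not how Kibana executes scripts):

```python
# Each hit gets a value computed from its source fields at query time;
# nothing is written back to the index, so the value is not searchable.
docs = [{"bytes": 2048}, {"bytes": 512}]

def scripted_kb(doc):
    # analogous to the Lucene expression doc['bytes'].value / 1024
    return doc["bytes"] / 1024

print([scripted_kb(d) for d in docs])  # [2.0, 0.5]
```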
[float]
[[create-scripted-field]]
=== Creating a Scripted Field
To create a scripted field:
. Go to *Settings > Indices*
. Select the index pattern you want to add a scripted field to.
. Go to the pattern's *Scripted Fields* tab.
. Click *Add Scripted Field*.
. Enter a name for the scripted field.
. Enter the expression that you want to use to compute a value on the fly from your index data.
. Click *Save Scripted Field*.
For more information about scripted fields in Elasticsearch, see
{es-ref}modules-scripting.html[Scripting].
NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you to enable
{es-ref}modules-scripting.html[dynamic Groovy scripting].
[float]
[[update-scripted-field]]
=== Updating a Scripted Field
To modify a scripted field:
. Go to *Settings > Indices*
. Click the *Edit* button for the scripted field you want to change.
. Make your changes and then click *Save Scripted Field* to update the field.
WARNING: Keep in mind that there's no built-in validation of a scripted field. If your scripts are buggy, you'll get
exceptions whenever you try to view the dynamically generated data.
[float]
[[delete-scripted-field]]
=== Deleting a Scripted Field
To delete a scripted field:
. Go to *Settings > Indices*
. Click the *Delete* button for the scripted field you want to remove.
. Confirm that you really want to delete the field.
[[managing-saved-objects]]
== Managing Saved Searches, Visualizations, and Dashboards
You can view, edit, and delete saved searches, visualizations, and dashboards from *Settings > Objects*. You can also
export or import sets of searches, visualizations, and dashboards.
Viewing a saved object displays the selected item in the *Discover*, *Visualize*, or *Dashboard* page. To view a saved
object:
. Go to *Settings > Objects*.
. Select the object you want to view.
. Click the *View* button.
Editing a saved object enables you to directly modify the object definition. You can change the name of the object, add
a description, and modify the JSON that defines the object's properties.
If you attempt to access an object whose index has been deleted, Kibana displays its Edit Object page. You can:
* Recreate the index so you can continue using the object.
* Delete the object and recreate it using a different index.
* Change the index name referenced in the object's `kibanaSavedObjectMeta.searchSourceJSON` to point to an existing
index pattern. This is useful if the index you were working with has been renamed.
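For example, assuming the index reference is stored under an `index` key inside the `searchSourceJSON` string (names and structure are illustrative), rewriting it might look like:

```python
import json

# searchSourceJSON is stored as a JSON string inside the saved object.
search_source = json.loads(
    '{"index": "old-index-*", "query": {"query_string": {"query": "*"}}}'
)
search_source["index"] = "new-index-*"  # point at an existing pattern
print(search_source["index"])  # new-index-*
```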
WARNING: No validation is performed for object properties. Submitting invalid changes will render the object unusable.
Generally, you should use the *Discover*, *Visualize*, or *Dashboard* pages to create new objects instead of directly
editing existing ones.
To edit a saved object:
. Go to *Settings > Objects*.
. Select the object you want to edit.
. Click the *Edit* button.
. Make your changes to the object definition.
. Click the *Save Object* button.
To delete a saved object:
. Go to *Settings > Objects*.
. Select the object you want to delete.
. Click the *Delete* button.
. Confirm that you really want to delete the object.
To export a set of objects:
. Go to *Settings > Objects*.
. Select the type of object you want to export. You can export a set of dashboards, searches, or visualizations.
. Click the selection box for the objects you want to export, or click the *Select All* box.
. Click *Export* to select a location to write the exported JSON.
WARNING: Exported dashboards do not include their associated index patterns. Re-create the index patterns manually before
importing saved dashboards to a Kibana instance running on another Elasticsearch cluster.
To import a set of objects:
. Go to *Settings > Objects*.
. Click *Import* to navigate to the JSON file representing the set of objects to import.
. Click *Open* after selecting the JSON file.
. If any objects in the set would overwrite objects already present in Kibana, confirm the overwrite.
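A hypothetical shape for the exported JSON (the actual schema may differ between Kibana versions) and how you might inspect it before importing:

```python
import json

# Hypothetical export file: an array of saved objects, each carrying
# an _id, a _type, and a _source body. Check the types before importing.
exported = json.loads("""
[
  {"_id": "my-dashboard", "_type": "dashboard", "_source": {"title": "My Dashboard"}},
  {"_id": "my-search", "_type": "search", "_source": {"title": "My Search"}}
]
""")
print(sorted({obj["_type"] for obj in exported}))  # ['dashboard', 'search']
```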
[[breaking-changes]]
= Breaking changes
[partintro]
--
This section discusses the changes that you need to be aware of when migrating
your application from one version of Kibana to another.
--
include::migration/migrate_6_0.asciidoc[]
[[breaking-changes-6.0]]
== Breaking changes in 6.0
There are not yet any breaking changes in Kibana 6.0.
[[kibana-plugins]]
= Kibana Plugins
[partintro]
--
Add-on functionality for Kibana is implemented with plug-in modules. You can use the `bin/kibana-plugin`
command to manage these modules. You can also install a plugin manually by moving the plugin file to the
`plugins` directory and unpacking the plugin files into a new directory.
--
A list of existing Kibana plugins is available on https://github.com/elastic/kibana/wiki/Known-Plugins[GitHub].
[float]
=== Installing Plugins
== Installing Plugins
Use the following command to install a plugin:
[source,shell]
bin/kibana-plugin install <package name or URL>
When you specify a plugin name without a URL, the plugin tool attempts to download an official Elastic plugin, such as:
["source","shell",subs="attributes"]
$ bin/kibana-plugin install x-pack
[float]
=== Installing Plugins from an Arbitrary URL
You can download official Elastic plugins simply by specifying their name. You
can alternatively specify a URL to a specific plugin, as in the following
example:
["source","shell",subs="attributes"]
$ bin/kibana-plugin install https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-{version}.zip
You can specify URLs that use the HTTP, HTTPS, or `file` protocols.
You can also install a plugin from a local file and specify a custom plugins directory with the `-d` option, as in the following example:
[source,shell]
$ bin/kibana-plugin install file:///some/local/path/x-pack.zip -d path/to/directory
NOTE: This command creates the specified directory if it does not already exist.
[float]
=== Listing Installed Plugins
Use the `list` command to list the currently installed plugins.
[float]
== Updating & Removing Plugins
To update a plugin, remove the current version and reinstall the plugin.
To remove a plugin, use the `remove` command, as in the following example:
[source,shell]
$ bin/kibana-plugin remove x-pack
You can also remove a plugin manually by deleting the plugin's subdirectory under the `plugins/` directory.
NOTE: Removing a plugin will result in an "optimize" run which will delay the next start of Kibana.
== Disabling Plugins
Use the following command to disable a plugin:
[source,shell]
-----------
./bin/kibana --<plugin ID>.enabled=false <1>
-----------
NOTE: Disabling or enabling a plugin will result in an "optimize" run which will delay the start of Kibana.
<1> You can find a plugin's plugin ID as the value of the `name` property in the plugin's `package.json` file.
== Configuring the Plugin Manager
By default, the plugin manager provides you with feedback on the status of the activity you've asked the plugin manager
to perform. You can control the level of feedback for the `install` and `remove` commands with the `--quiet` and
bin/kibana-plugin install --timeout 30s sample-plugin
bin/kibana-plugin install --timeout 1m sample-plugin
[float]
=== Plugins and Custom Kibana Configurations
Use the `-c` or `--config` options with the `install` and `remove` commands to specify the path to the configuration file
used to start Kibana. By default, Kibana uses the configuration file `config/kibana.yml`. When you change your installed
you must specify the path to that configuration file each time you use the `bin/
64:: Unknown command or incorrect option parameter
74:: I/O error
70:: Other error