docs: Overhaul of doc structure for 5.0+ (#8821)
This overhaul of the docs structure brings Kibana's documentation more in line with the structure used in Elasticsearch. This will help us better organize the docs going forward as more docs are added. It also includes a few necessary content changes for 5.0.
@@ -1,12 +0,0 @@
[[kibana-apps]]
== Kibana Apps

The Kibana UI serves as a framework that can contain several different applications. You can switch between these
applications by clicking the image:images/app-button.png[App Picker] *App picker* button to display the app bar:

image::images/app-picker.png[]

Click an app icon to switch to that app's functionality.

Applications in the Kibana UI are managed by <<kibana-plugins,_plugins_>>. Plugins can expose app functionality or add new
visualization types.
@@ -1,6 +1,8 @@
[[console-kibana]]
== Console for Kibana
= Console

[partintro]
--
The Console plugin provides a UI to interact with the REST API of Elasticsearch. Console has two main areas: the *editor*,
where you compose requests to Elasticsearch, and the *response* pane, which displays the responses to the request.
Enter the address of your Elasticsearch server in the text box at the top of the screen. The default value of this address
@@ -63,120 +65,22 @@ but you can easily change this by entering a different url in the Server input:
.The Server Input
image::images/introduction_server.png["Server",width=400,align="center"]

[NOTE]
Console is a development tool and is configured by default to run on a laptop. If you install it on a server, please
see <<securing_console>> for instructions on how to make it secure.

[float]
[[console-ui]]
== The Console UI

In this section you will find a more detailed description of the Console UI. The basic aspects of the UI are explained
in the <<console-kibana>> section.
--
[[multi-req]]
=== Multiple Requests Support
include::console/multi-requests.asciidoc[]

The Console editor allows writing multiple requests below each other. As shown in the <<console-kibana>> section, you
can submit a request to Elasticsearch by positioning the cursor and using the <<action_menu,Action Menu>>. Similarly,
you can select multiple requests in one go:
include::console/auto-formatting.asciidoc[]

.Selecting Multiple Requests
image::images/multiple_requests.png[Multiple Requests]
include::console/keyboard-shortcuts.asciidoc[]

Console will send the requests one by one to Elasticsearch and show the output in the right pane as Elasticsearch responds.
This is very handy when debugging an issue or trying query combinations in multiple scenarios.
include::console/history.asciidoc[]

Selecting multiple requests also allows you to auto format and copy them as cURL in one go.
include::console/settings.asciidoc[]

[[auto_formatting]]
=== Auto Formatting

Console allows you to auto format messy requests. To do so, position the cursor on the request you would like to format
and select Auto Indent from the action menu:

.Auto Indent a request
image::images/auto_format_before.png["Auto format before",width=500,align="center"]

Console will adjust the JSON body of the request and it will now look like this:

.A formatted request
image::images/auto_format_after.png["Auto format after",width=500,align="center"]

If you select Auto Indent on a request that is already perfectly formatted, Console will collapse the
request body to a single line per document. This is very handy when working with Elasticsearch's bulk APIs:

.One doc per line
image::images/auto_format_bulk.png["Auto format bulk",width=550,align="center"]
[[keyboard_shortcuts]]
=== Keyboard shortcuts

Console comes with a set of nifty keyboard shortcuts that make working with it even more efficient. Here is an overview:

==== General editing

Ctrl/Cmd + I:: Auto indent current request.
Ctrl + Space:: Open Auto complete (even if not typing).
Ctrl/Cmd + Enter:: Submit request.
Ctrl/Cmd + Up/Down:: Jump to the previous/next request start or end.
Ctrl/Cmd + Alt + L:: Collapse/expand current scope.
Ctrl/Cmd + Option + 0:: Collapse all scopes but the current one. Expand by adding Shift.

==== When auto-complete is visible

Down arrow:: Switch focus to the auto-complete menu. Use arrows to further select a term.
Enter/Tab:: Select the currently selected or the topmost term in the auto-complete menu.
Esc:: Close the auto-complete menu.
=== History

Console maintains a list of the last 500 requests that were successfully executed by Elasticsearch. The history
is available by clicking the clock icon on the top right side of the window. The icon opens the history panel,
where you can see the old requests. You can also select a request here and it will be added to the editor at
the current cursor position.

.History Panel
image::images/history.png["History Panel"]
=== Settings

Console has multiple settings you can customize. All of them are available in the Settings panel. To open the panel,
click on the cog icon on the top right.

.Settings Panel
image::images/settings.png["Settings Panel"]

[[securing_console]]
=== Securing Console

Console is meant to be used as a local development tool. As such, it will send requests to any host & port combination,
just as a local curl command would. To overcome the CORS limitations enforced by browsers, Console's Node.js backend
serves as a proxy to send requests on behalf of the browser. However, if put on a server and exposed to the internet,
this can become a security risk. In those cases, we highly recommend you lock down the proxy by setting the
`console.proxyFilter` Kibana server setting. The setting accepts a list of regular expressions that are evaluated
against each URL the proxy is requested to retrieve. If none of the regular expressions match, the proxy will reject
the request.

Here is an example configuration that only allows Console to connect to localhost:

[source,yaml]
--------
console.proxyFilter:
  - ^https?://(localhost|127\.0\.0\.1|\[::0\]).*
--------
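The filter semantics ("allow the request if any pattern matches, otherwise reject it") can be illustrated with a short Python sketch. This is an illustration of the matching rule only, not Kibana's actual backend code:

```python
import re

# Patterns as they would appear under console.proxyFilter in kibana.yml.
filters = [r"^https?://(localhost|127\.0\.0\.1|\[::0\]).*"]

def is_allowed(url, patterns):
    """Allow the URL if any configured pattern matches it; reject otherwise."""
    return any(re.match(p, url) for p in patterns)

print(is_allowed("http://localhost:9200/_search", filters))    # True
print(is_allowed("http://example.com:9200/_search", filters))  # False
```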

Restart Kibana for these changes to take effect.

Alternatively, if your Kibana users do not need access to any of the Console functionality, you can disable it
completely so that it does not even show up as an available app. To do so, set the `console.enabled` Kibana server
setting to `false`:

[source,yaml]
--------
console.enabled: false
--------
include::console/disabling-console.asciidoc[]

docs/console/auto-formatting.asciidoc (new file, 19 lines)
@@ -0,0 +1,19 @@
[[auto-formatting]]
== Auto Formatting

Console allows you to auto format messy requests. To do so, position the cursor on the request you would like to format
and select Auto Indent from the action menu:

.Auto Indent a request
image::images/auto_format_before.png["Auto format before",width=500,align="center"]

Console will adjust the JSON body of the request and it will now look like this:

.A formatted request
image::images/auto_format_after.png["Auto format after",width=500,align="center"]

If you select Auto Indent on a request that is already perfectly formatted, Console will collapse the
request body to a single line per document. This is very handy when working with Elasticsearch's bulk APIs:

.One doc per line
image::images/auto_format_bulk.png["Auto format bulk",width=550,align="center"]
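The collapsed one-line-per-document layout matches what the bulk API expects on its request body. As a sketch (the index, type, and document values here are made up for illustration), a bulk request composed in Console looks like:

[source,js]
--------
POST _bulk
{"index":{"_index":"library","_type":"book"}}
{"title":"Norwegian Wood"}
{"index":{"_index":"library","_type":"book"}}
{"title":"Kafka on the Shore"}
--------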
docs/console/disabling-console.asciidoc (new file, 10 lines)
@@ -0,0 +1,10 @@
[[disabling-console]]
== Disable Console

If your Kibana users do not need access to any of the Console functionality, you can disable it completely so that it
does not even show up as an available app. To do so, set the `console.enabled` Kibana server setting to `false`:

[source,yaml]
--------
console.enabled: false
--------
docs/console/history.asciidoc (new file, 10 lines)
@@ -0,0 +1,10 @@
[[history]]
== History

Console maintains a list of the last 500 requests that were successfully executed by Elasticsearch. The history
is available by clicking the clock icon on the top right side of the window. The icon opens the history panel,
where you can see the old requests. You can also select a request here and it will be added to the editor at
the current cursor position.

.History Panel
image::images/history.png["History Panel"]
docs/console/keyboard-shortcuts.asciidoc (new file, 21 lines)
@@ -0,0 +1,21 @@
[[keyboard-shortcuts]]
== Keyboard shortcuts

Console comes with a set of nifty keyboard shortcuts that make working with it even more efficient. Here is an overview:

[float]
=== General editing

Ctrl/Cmd + I:: Auto indent current request.
Ctrl + Space:: Open Auto complete (even if not typing).
Ctrl/Cmd + Enter:: Submit request.
Ctrl/Cmd + Up/Down:: Jump to the previous/next request start or end.
Ctrl/Cmd + Alt + L:: Collapse/expand current scope.
Ctrl/Cmd + Option + 0:: Collapse all scopes but the current one. Expand by adding Shift.

[float]
=== When auto-complete is visible

Down arrow:: Switch focus to the auto-complete menu. Use arrows to further select a term.
Enter/Tab:: Select the currently selected or the topmost term in the auto-complete menu.
Esc:: Close the auto-complete menu.
docs/console/multi-requests.asciidoc (new file, 14 lines)
@@ -0,0 +1,14 @@
[[multi-requests]]
== Multiple Requests Support

The Console editor allows writing multiple requests below each other. As shown in the <<console-kibana>> section, you
can submit a request to Elasticsearch by positioning the cursor and using the <<action_menu,Action Menu>>. Similarly,
you can select multiple requests in one go:

.Selecting Multiple Requests
image::images/multiple_requests.png[Multiple Requests]

Console will send the requests one by one to Elasticsearch and show the output in the right pane as Elasticsearch responds.
This is very handy when debugging an issue or trying query combinations in multiple scenarios.

Selecting multiple requests also allows you to auto format and copy them as cURL in one go.
docs/console/settings.asciidoc (new file, 8 lines)
@@ -0,0 +1,8 @@
[[console-settings]]
== Settings

Console has multiple settings you can customize. All of them are available in the Settings panel. To open the panel,
click on the cog icon on the top right.

.Settings Panel
image::images/settings.png["Settings Panel"]

@@ -1,21 +1,23 @@
[[dashboard]]
== Dashboard
= Dashboard

[partintro]
--
A Kibana _dashboard_ displays a set of saved visualizations in groups that you can arrange freely. You can save a
dashboard to share or reload at a later time.

.Sample dashboard
image:images/tutorial-dashboard.png[Example dashboard]
--

[float]
[[dashboard-getting-started]]
=== Getting Started
== Getting Started

You need at least one saved <<visualize, visualization>> to use a dashboard.

[float]
[[creating-a-new-dashboard]]
==== Building a New Dashboard
=== Building a New Dashboard

The first time you click the *Dashboard* tab, Kibana displays an empty dashboard.
@@ -28,11 +30,11 @@ NOTE: You can change the default theme in the *Advanced* section of the *Setting

[float]
[[dash-autorefresh]]
include::autorefresh.asciidoc[]
include::discover/autorefresh.asciidoc[]

[float]
[[adding-visualizations-to-a-dashboard]]
==== Adding Visualizations to a Dashboard
=== Adding Visualizations to a Dashboard

To add a visualization to the dashboard, click the *Add* button in the toolbar panel. Select a saved visualization
from the list. You can filter the list of visualizations by typing a filter string into the *Visualization Filter*
@@ -45,7 +47,7 @@ container>>.

[float]
[[saving-dashboards]]
==== Saving Dashboards
=== Saving Dashboards

To save the dashboard, click the *Save Dashboard* button in the toolbar panel, enter a name for the dashboard in the
*Save As* field, and click the *Save* button. By default, dashboards store the time period specified in the time filter
@@ -54,7 +56,7 @@ when you save a dashboard. To disable this behavior, clear the *Store time with

[float]
[[loading-a-saved-dashboard]]
==== Loading a Saved Dashboard
=== Loading a Saved Dashboard

Click the *Load Saved Dashboard* button to display a list of existing dashboards. The saved dashboard selector includes
a text field to filter by dashboard name and a link to the Object Editor for managing your saved dashboards. You can
@@ -62,7 +64,7 @@ also access the Object Editor by clicking *Settings > Objects*.

[float]
[[sharing-dashboards]]
==== Sharing Dashboards
=== Sharing Dashboards

You can share dashboards with other users. You can share a direct link to the Kibana dashboard or embed the dashboard
in your web page.
@@ -77,27 +79,26 @@ embedding.

[float]
[[embedding-dashboards]]
==== Embedding Dashboards
=== Embedding Dashboards

To embed a dashboard, copy the embed code from the _Share_ display into your external web application.

[float]
[[customizing-your-dashboard]]
=== Customizing Dashboard Elements
== Customizing Dashboard Elements

The visualizations in your dashboard are stored in resizable _containers_ that you can arrange on the dashboard. This
section discusses customizing these containers.

[float]
[[moving-containers]]
==== Moving Containers
=== Moving Containers

Click and hold a container's header to move the container around the dashboard. Other containers will shift as needed
to make room for the moving container. Release the mouse button to confirm the container's new location.

[float]
[[resizing-containers]]
==== Resizing Containers
=== Resizing Containers

Move the cursor to the bottom right corner of the container until the cursor changes to point at the corner. After the
cursor changes, click and drag the corner of the container to change the container's size. Release the mouse button to
@@ -105,14 +106,14 @@ confirm the new container size.

[float]
[[removing-containers]]
==== Removing Containers
=== Removing Containers

Click the *x* icon at the top right corner of a container to remove that container from the dashboard. Removing a
container from a dashboard does not delete the saved visualization in that container.

[float]
[[viewing-detailed-information]]
==== Viewing Detailed Information
=== Viewing Detailed Information

To display the raw data behind the visualization, click the bar at the bottom of the container. Tabs with detailed
information about the raw data replace the visualization, as in this example:
@@ -140,13 +141,12 @@ To export the raw data behind the visualization as a comma-separated-values (CSV
*Raw* or *Formatted* links at the bottom of any of the detailed information tabs. A raw export contains the data as it
is stored in Elasticsearch. A formatted export contains the results of any applicable Kibana field formatters.

[float]
[[changing-the-visualization]]
=== Changing the Visualization
== Changing the Visualization

Click the _Edit_ button image:images/EditVis.png[Pencil button] at the top right of a container to open the
visualization in the <<visualize,Visualize>> page.

[float]
[[dashboard-filters]]
include::filter-pinning.asciidoc[]
include::discover/filter-pinning.asciidoc[]

@@ -1,5 +1,8 @@
[[discover]]
== Discover
= Discover

[partintro]
--
You can interactively explore your data from the Discover page. You have access to every document in every index that
matches the selected index pattern. You can submit search queries, filter the search results, and view document data.
You can also see the number of documents that match the search query and get field value statistics. If a time field is
@@ -7,226 +10,18 @@ configured for the selected index pattern, the distribution of documents over ti
top of the page.

image::images/Discover-Start-Annotated.jpg[Discover Page]
--

[float]
[[set-time-filter]]
=== Setting the Time Filter
The Time Filter restricts the search results to a specific time period. You can set a time filter if your index
contains time-based events and a time field is configured for the selected index pattern.
include::discover/set-time-filter.asciidoc[]

By default the time filter is set to the last 15 minutes. You can use the Time Picker to change the time filter
or select a specific time interval or time range in the histogram at the top of the page.

To set a time filter with the Time Picker:

. Click the Time Filter displayed in the upper right corner of the menu bar to open the Time Picker.
. To set a quick filter, simply click one of the shortcut links.
. To specify a relative Time Filter, click *Relative* and enter the relative start time. You can specify
the relative start time as any number of seconds, minutes, hours, days, months, or years ago.
. To specify an absolute Time Filter, click *Absolute* and enter the start date in the *From* field and the end date in
the *To* field.
. Click the caret at the bottom of the Time Picker to hide it.

To set a Time Filter from the histogram, do one of the following:

* Click the bar that represents the time interval you want to zoom in on.
* Click and drag to view a specific timespan. You must start the selection with the cursor over the background of the
chart--the cursor changes to a plus sign when you hover over a valid start point.

You can use the browser Back button to undo your changes.

The histogram lists the time range you're currently exploring, as well as the interval that range is currently using.
To change the interval, click the link and select an interval from the drop-down. The default behavior automatically
sets an interval based on the time range.

[float]
[[search]]
=== Searching Your Data
You can search the indices that match the current index pattern by submitting a search from the Discover page.
You can enter simple query strings, use the
Lucene https://lucene.apache.org/core/2_9_4/queryparsersyntax.html[query syntax], or use the full JSON-based
{ref}/query-dsl.html[Elasticsearch Query DSL].

When you submit a search, the histogram, Documents table, and Fields list are updated to reflect
the search results. The total number of hits (matching documents) is shown in the upper right corner of the
histogram. The Documents table shows the first five hundred hits. By default, the hits are listed in reverse
chronological order, with the newest documents shown first. You can reverse the sort order by clicking on the Time
column header. You can also sort the table using the values in any indexed field. For more information, see
<<sorting,Sorting the Documents Table>>.

To search your data:

. Enter a query string in the Search field:
+
* To perform a free text search, simply enter a text string. For example, if you're searching web server logs, you
could enter `safari` to search all fields for the term `safari`.
+
* To search for a value in a specific field, prefix the value with the name of the field. For example, you could
enter `status:200` to limit the results to entries that contain the value `200` in the `status` field.
+
* To search for a range of values, you can use the bracketed range syntax, `[START_VALUE TO END_VALUE]`. For example,
to find entries that have 4xx status codes, you could enter `status:[400 TO 499]`.
+
* To specify more complex search criteria, you can use the Boolean operators `AND`, `OR`, and `NOT`. For example,
to find entries that have 4xx status codes and have an extension of `php` or `html`, you could enter `status:[400 TO
499] AND (extension:php OR extension:html)`.
+
NOTE: These examples use the Lucene query syntax. You can also submit queries using the Elasticsearch Query DSL. For
examples, see {ref}/query-dsl-query-string-query.html#query-string-syntax[query string syntax] in the Elasticsearch
Reference.
+
. Press *Enter* or click the *Search* button to submit your search query.
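As a sketch of the Query DSL alternative mentioned in the note above, the last Lucene example can be wrapped in a `query_string` query (the `status` and `extension` field names come from that example):

[source,js]
--------
GET _search
{
  "query": {
    "query_string": {
      "query": "status:[400 TO 499] AND (extension:php OR extension:html)"
    }
  }
}
--------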

[float]
[[new-search]]
==== Starting a New Search
To clear the current search and start a new search, click the *New* button in the Discover toolbar.

[float]
[[save-search]]
==== Saving a Search
You can reload saved searches on the Discover page and use them as the basis of <<visualize, visualizations>>.
Saving a search saves both the search query string and the currently selected index pattern.

To save the current search:

. Click the *Save* button in the Discover toolbar.
. Enter a name for the search and click *Save*.

[float]
[[load-search]]
==== Opening a Saved Search
To load a saved search:

. Click the *Open* button in the Discover toolbar.
. Select the search you want to open.

If the saved search is associated with a different index pattern than is currently selected, opening the saved search
also changes the selected index pattern.

[float]
[[select-pattern]]
==== Changing Which Indices You're Searching
When you submit a search request, the indices that match the currently selected index pattern are searched. The current
index pattern is shown below the search field. To change which indices you are searching, click the name of the current
index pattern to display a list of the configured index patterns and select a different index pattern.

For more information about index patterns, see <<settings-create-pattern, Creating an Index Pattern>>.
include::discover/search.asciidoc[]

[float]
[[auto-refresh]]
include::discover/autorefresh.asciidoc[]

include::autorefresh.asciidoc[]
include::discover/field-filter.asciidoc[]

[float]
[[field-filter]]
=== Filtering by Field
You can filter the search results to display only those documents that contain a particular value in a field. You can
also create negative filters that exclude documents that contain the specified field value.
include::discover/document-data.asciidoc[]

You can add filters from the Fields list or from the Documents table. When you add a filter, it is displayed in the
filter bar below the search query. From the filter bar, you can enable or disable a filter, invert the filter (change
it from a positive filter to a negative filter and vice versa), toggle the filter on or off, or remove it entirely.
Click the small left-facing arrow to the right of the index pattern selection drop-down to collapse the Fields list.

To add a filter from the Fields list:

. Click the name of the field you want to filter on. This displays the top five values for that field. To the right of
each value, there are two magnifying glass buttons--one for adding a regular (positive) filter, and
one for adding a negative filter.
. To add a positive filter, click the *Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button].
This filters out documents that don't contain that value in the field.
. To add a negative filter, click the *Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button].
This excludes documents that contain that value in the field.

To add a filter from the Documents table:

. Expand a document in the Documents table by clicking the *Expand* button image:images/ExpandButton.jpg[Expand Button]
to the left of the document's entry in the first column (the first column is usually Time). To the right of each field
name, there are two magnifying glass buttons--one for adding a regular (positive) filter, and one for adding a negative
filter.
. To add a positive filter based on the document's value in a field, click the
*Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button]. This filters out documents that don't
contain the specified value in that field.
. To add a negative filter based on the document's value in a field, click the
*Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button]. This excludes documents that contain
the specified value in that field.

[float]
[[discover-filters]]
include::filter-pinning.asciidoc[]

[float]
[[document-data]]
=== Viewing Document Data
When you submit a search query, the 500 most recent documents that match the query are listed in the Documents table.
You can configure the number of documents shown in the table by setting the `discover:sampleSize` property in
<<advanced-options,Advanced Settings>>. By default, the table shows the localized version of the time field specified
in the selected index pattern and the document `_source`. You can <<adding-columns, add fields to the Documents table>>
from the Fields list. You can <<sorting, sort the listed documents>> by any indexed field that's included in the table.

To view a document's field data, click the *Expand* button image:images/ExpandButton.jpg[Expand Button] to the left of
the document's entry in the first column (the first column is usually Time). Kibana reads the document data from
Elasticsearch and displays the document fields in a table. The table contains a row for each field; each row contains
the field name, buttons for adding filters, and the field value.

image::images/Expanded-Document.png[]

. To view the original JSON document (pretty-printed), click the *JSON* tab.
. To view the document data as a separate page, click the link. You can bookmark and share this link to provide direct
access to a particular document.
. To collapse the document details, click the *Collapse* button image:images/CollapseButton.jpg[Collapse Button].
. To toggle a particular field's column in the Documents table, click the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.

[float]
[[sorting]]
==== Sorting the Document List
You can sort the documents in the Documents table by the values in any indexed field. Documents in index patterns that
are configured with time fields are sorted in reverse chronological order by default.

To change the sort order, click the name of the field you want to sort by. The fields you can use for sorting have a
sort button to the right of the field name. Clicking the field name a second time reverses the sort order.

[float]
[[adding-columns]]
==== Adding Field Columns to the Documents Table
By default, the Documents table shows the localized version of the time field specified in the selected index pattern
and the document `_source`. You can add fields to the table from the Fields list or from a document's expanded view.

To add field columns to the Documents table:

. Mouse over a field in the Fields list and click its *add* button image:images/AddFieldButton.jpg[Add Field Button].
. Repeat until you've added all the fields you want to display in the Documents table.
. Alternatively, add a field column directly from a document's expanded view by clicking the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.

The added field columns replace the `_source` column in the Documents table. The added fields are also
listed in the *Selected Fields* section at the top of the field list.

To rearrange the field columns in the table, mouse over the header of the column you want to move and click the *Move*
button.

image:images/Discover-MoveColumn.jpg[Move Column]

[float]
[[removing-columns]]
==== Removing Field Columns from the Documents Table
To remove field columns from the Documents table:

. Mouse over the field you want to remove in the *Selected Fields* section of the Fields list and click its *remove*
button image:images/RemoveFieldButton.jpg[Remove Field Button].
. Repeat until you've removed all the fields you want to drop from the Documents table.

[float]
[[viewing-field-stats]]
=== Viewing Field Data Statistics
From the field list, you can see how many documents in the Documents table contain a particular field, what the top 5
values are, and what percentage of documents contain each value.

To view field data statistics, click the name of a field in the Fields list. The field can be anywhere in the Fields
list.

image:images/Discover-FieldStats.jpg[Field Statistics]

TIP: To create a visualization based on the field, click the *Visualize* button below the field statistics.
include::discover/viewing-field-stats.asciidoc[]
61
docs/discover/document-data.asciidoc
Normal file
|
@ -0,0 +1,61 @@
|
|||
[[document-data]]
== Viewing Document Data

When you submit a search query, the 500 most recent documents that match the query are listed in the Documents table.
You can configure the number of documents shown in the table by setting the `discover:sampleSize` property in
<<advanced-options,Advanced Settings>>. By default, the table shows the localized version of the time field specified
in the selected index pattern and the document `_source`. You can <<adding-columns, add fields to the Documents table>>
from the Fields list. You can <<sorting, sort the listed documents>> by any indexed field that's included in the table.

To view a document's field data, click the *Expand* button image:images/ExpandButton.jpg[Expand Button] to the left of
the document's entry in the first column (the first column is usually Time). Kibana reads the document data from
Elasticsearch and displays the document fields in a table. Each row of the table contains the field's name, buttons
for adding filters on that field, and the field's value.

image::images/Expanded-Document.png[]

. To view the original JSON document (pretty-printed), click the *JSON* tab.
. To view the document data as a separate page, click the link. You can bookmark and share this link to provide direct
access to a particular document.
. To collapse the document details, click the *Collapse* button image:images/CollapseButton.jpg[Collapse Button].
. To toggle a particular field's column in the Documents table, click the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.

[float]
[[sorting]]
=== Sorting the Document List
You can sort the documents in the Documents table by the values in any indexed field. Documents in index patterns that
are configured with time fields are sorted in reverse chronological order by default.

To change the sort order, click the name of the field you want to sort by. The fields you can use for sorting have a
sort button to the right of the field name. Clicking the field name a second time reverses the sort order.

[float]
[[adding-columns]]
=== Adding Field Columns to the Documents Table
By default, the Documents table shows the localized version of the time field specified in the selected index pattern
and the document `_source`. You can add fields to the table from the Fields list or from a document's expanded view.

To add field columns to the Documents table:

. Mouse over a field in the Fields list and click its *add* button image:images/AddFieldButton.jpg[Add Field Button].
. Repeat until you've added all the fields you want to display in the Documents table.
. Alternately, add a field column directly from a document's expanded view by clicking the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.

The added field columns replace the `_source` column in the Documents table. The added fields are also
listed in the *Selected Fields* section at the top of the field list.

To rearrange the field columns in the table, mouse over the header of the column you want to move and click the *Move*
button.

image:images/Discover-MoveColumn.jpg[Move Column]

[float]
[[removing-columns]]
=== Removing Field Columns from the Documents Table
To remove field columns from the Documents table:

. Mouse over the field you want to remove in the *Selected Fields* section of the Fields list and click its *remove*
button image:images/RemoveFieldButton.jpg[Remove Field Button].
. Repeat until you've removed all the fields you want to drop from the Documents table.
docs/discover/field-filter.asciidoc (new file, 36 lines)
@ -0,0 +1,36 @@
[[field-filter]]
== Filtering by Field
You can filter the search results to display only those documents that contain a particular value in a field. You can
also create negative filters that exclude documents that contain the specified field value.

You can add filters from the Fields list or from the Documents table. When you add a filter, it is displayed in the
filter bar below the search query. From the filter bar, you can enable or disable a filter, invert it (change it from
a positive filter to a negative filter and vice versa), or remove it entirely.
Click the small left-facing arrow to the right of the index pattern selection drop-down to collapse the Fields list.

To add a filter from the Fields list:

. Click the name of the field you want to filter on. This displays the top five values for that field. To the right of
each value, there are two magnifying glass buttons--one for adding a regular (positive) filter, and
one for adding a negative filter.
. To add a positive filter, click the *Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button].
This filters out documents that don't contain that value in the field.
. To add a negative filter, click the *Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button].
This excludes documents that contain that value in the field.

To add a filter from the Documents table:

. Expand a document in the Documents table by clicking the *Expand* button image:images/ExpandButton.jpg[Expand Button]
to the left of the document's entry in the first column (the first column is usually Time). To the right of each field
name, there are two magnifying glass buttons--one for adding a regular (positive) filter, and one for adding a negative
filter.
. To add a positive filter based on the document's value in a field, click the
*Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button]. This filters out documents that don't
contain the specified value in that field.
. To add a negative filter based on the document's value in a field, click the
*Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button]. This excludes documents that contain
the specified value in that field.
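Conceptually, these UI filters become query clauses in the Elasticsearch request. As a hedged sketch (the field names
and values here are hypothetical examples, and the exact request Kibana builds may differ), a positive filter
contributes a clause under `must` and a negative filter a clause under `must_not` of a `bool` query:

```json
{
  "query": {
    "bool": {
      "must": [
        { "match_phrase": { "extension": "php" } }
      ],
      "must_not": [
        { "match_phrase": { "status": "404" } }
      ]
    }
  }
}
```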

[float]
[[discover-filters]]
include::filter-pinning.asciidoc[]
docs/discover/search.asciidoc (new file, 72 lines)
@ -0,0 +1,72 @@
[[search]]
== Searching Your Data
You can search the indices that match the current index pattern by submitting a search from the Discover page.
You can enter simple query strings, use the
Lucene https://lucene.apache.org/core/2_9_4/queryparsersyntax.html[query syntax], or use the full JSON-based
{es-ref}query-dsl.html[Elasticsearch Query DSL].

When you submit a search, the histogram, Documents table, and Fields list are updated to reflect
the search results. The total number of hits (matching documents) is shown in the upper right corner of the
histogram. The Documents table shows the first five hundred hits. By default, the hits are listed in reverse
chronological order, with the newest documents shown first. You can reverse the sort order by clicking the Time
column header. You can also sort the table by the values in any indexed field. For more information, see
<<sorting,Sorting the Documents Table>>.

To search your data:

. Enter a query string in the Search field:
+
* To perform a free text search, simply enter a text string. For example, if you're searching web server logs, you
could enter `safari` to search all fields for the term `safari`.
+
* To search for a value in a specific field, prefix the value with the name of the field. For example, you could
enter `status:200` to limit the results to entries that contain the value `200` in the `status` field.
+
* To search for a range of values, you can use the bracketed range syntax, `[START_VALUE TO END_VALUE]`. For example,
to find entries that have 4xx status codes, you could enter `status:[400 TO 499]`.
+
* To specify more complex search criteria, you can use the Boolean operators `AND`, `OR`, and `NOT`. For example,
to find entries that have 4xx status codes and have an extension of `php` or `html`, you could enter `status:[400 TO
499] AND (extension:php OR extension:html)`.
+
NOTE: These examples use the Lucene query syntax. You can also submit queries using the Elasticsearch Query DSL. For
examples, see {es-ref}query-dsl-query-string-query.html#query-string-syntax[query string syntax] in the Elasticsearch
Reference.

. Press *Enter* or click the *Search* button to submit your search query.
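For instance, the last Lucene expression above can also be submitted through the Query DSL unchanged, wrapped in a
`query_string` query. A minimal sketch of one equivalent request body:

```json
{
  "query": {
    "query_string": {
      "query": "status:[400 TO 499] AND (extension:php OR extension:html)"
    }
  }
}
```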

[float]
[[new-search]]
=== Starting a New Search
To clear the current search and start a new search, click the *New* button in the Discover toolbar.

[float]
[[save-search]]
=== Saving a Search
You can reload saved searches on the Discover page and use them as the basis of <<visualize, visualizations>>.
Saving a search saves both the search query string and the currently selected index pattern.

To save the current search:

. Click the *Save* button in the Discover toolbar.
. Enter a name for the search and click *Save*.

[float]
[[load-search]]
=== Opening a Saved Search
To load a saved search:

. Click the *Open* button in the Discover toolbar.
. Select the search you want to open.

If the saved search is associated with a different index pattern than is currently selected, opening the saved search
also changes the selected index pattern.

[float]
[[select-pattern]]
=== Changing Which Indices You're Searching
When you submit a search request, the indices that match the currently selected index pattern are searched. The current
index pattern is shown below the search field. To change which indices you are searching, click the name of the current
index pattern to display a list of the configured index patterns and select a different index pattern.

For more information about index patterns, see <<settings-create-pattern, Creating an Index Pattern>>.
docs/discover/set-time-filter.asciidoc (new file, 29 lines)
@ -0,0 +1,29 @@
[[set-time-filter]]
== Setting the Time Filter
The Time Filter restricts the search results to a specific time period. You can set a time filter if your index
contains time-based events and a time field is configured for the selected index pattern.

By default, the time filter is set to the last 15 minutes. You can use the Time Picker to change the time filter,
or select a specific time interval or time range in the histogram at the top of the page.

To set a time filter with the Time Picker:

. Click the Time Filter displayed in the upper right corner of the menu bar to open the Time Picker.
. To set a quick filter, simply click one of the shortcut links.
. To specify a relative time filter, click *Relative* and enter the relative start time. You can specify
the relative start time as any number of seconds, minutes, hours, days, months, or years ago.
. To specify an absolute time filter, click *Absolute* and enter the start date in the *From* field and the end date in
the *To* field.
. Click the caret at the bottom of the Time Picker to hide it.

To set a time filter from the histogram, do one of the following:

* Click the bar that represents the time interval you want to zoom in on.
* Click and drag to view a specific timespan. You must start the selection with the cursor over the background of the
chart--the cursor changes to a plus sign when you hover over a valid start point.

You can use the browser Back button to undo your changes.

The histogram shows the time range you're currently exploring, as well as the interval that range is divided into.
To change the interval, click the link and select a different interval from the drop-down. The default behavior
automatically sets an interval based on the time range.
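Conceptually, the time filter is applied as a `range` query on the index pattern's time field. A hedged sketch of the
default last-15-minutes filter, assuming a time field named `@timestamp` (the request Kibana actually sends may be
structured differently):

```json
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-15m",
        "lte": "now"
      }
    }
  }
}
```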
docs/discover/viewing-field-stats.asciidoc (new file, 12 lines)
@ -0,0 +1,12 @@
[[viewing-field-stats]]
== Viewing Field Data Statistics

From the field list, you can see how many documents in the Documents table contain a particular field, what the top 5
values are, and what percentage of documents contain each value.

To view field data statistics, click the name of a field in the Fields list. The field can be anywhere in the Fields
list.

image:images/Discover-FieldStats.jpg[Field Statistics]

TIP: To create a visualization based on the field, click the *Visualize* button below the field statistics.
@ -1,6 +1,8 @@
[[getting-started]]
== Getting Started with Kibana
= Getting Started

[partintro]
--
Now that you have Kibana <<setup,installed>>, you can step through this tutorial to get fast hands-on experience with
key Kibana functionality. By the end of this tutorial, you will have:

@ -18,394 +20,16 @@ Video tutorials are also available:

* https://www.elastic.co/blog/kibana-4-video-tutorials-part-2[Data discovery, bar charts, and line charts]
* https://www.elastic.co/blog/kibana-4-video-tutorials-part-3[Tile maps]
* https://www.elastic.co/blog/kibana-4-video-tutorials-part-4[Embedding Kibana visualizations]
--

include::getting-started/tutorial-load-dataset.asciidoc[]
include::getting-started/tutorial-define-index.asciidoc[]
include::getting-started/tutorial-discovering.asciidoc[]
include::getting-started/tutorial-visualizing.asciidoc[]
include::getting-started/tutorial-dashboard.asciidoc[]

[float]
[[tutorial-load-dataset]]
=== Before You Start: Loading Sample Data

The tutorials in this section rely on the following data sets:

* The complete works of William Shakespeare, suitably parsed into fields. Download this data set by clicking here:
https://www.elastic.co/guide/en/kibana/3.0/snippets/shakespeare.json[shakespeare.json].
* A set of fictitious accounts with randomly generated data. Download this data set by clicking here:
https://github.com/bly2k/files/blob/master/accounts.zip?raw=true[accounts.zip]
* A set of randomly generated log files. Download this data set by clicking here:
https://download.elastic.co/demos/kibana/gettingstarted/logs.jsonl.gz[logs.jsonl.gz]

Two of the data sets are compressed. Use the following commands to extract the files:

[source,shell]
unzip accounts.zip
gunzip logs.jsonl.gz

The Shakespeare data set is organized in the following schema:

[source,json]
{
    "line_id": INT,
    "play_name": "String",
    "speech_number": INT,
    "line_number": "String",
    "speaker": "String",
    "text_entry": "String"
}

The accounts data set is organized in the following schema:

[source,json]
{
    "account_number": INT,
    "balance": INT,
    "firstname": "String",
    "lastname": "String",
    "age": INT,
    "gender": "M or F",
    "address": "String",
    "employer": "String",
    "email": "String",
    "city": "String",
    "state": "String"
}

The schema for the logs data set has dozens of different fields, but the notable ones used in this tutorial are:

[source,json]
{
    "memory": INT,
    "geo.coordinates": "geo_point",
    "@timestamp": "date"
}

Before we load the Shakespeare and logs data sets, we need to set up {ref}mapping.html[_mappings_] for the fields.
Mapping divides the documents in the index into logical groups and specifies a field's characteristics, such as the
field's searchability or whether or not it's _tokenized_, or broken up into separate words.

Use the following command to set up a mapping for the Shakespeare data set:

[source,shell]
curl -XPUT http://localhost:9200/shakespeare -d '
{
  "mappings": {
    "_default_": {
      "properties": {
        "speaker": { "type": "string", "index": "not_analyzed" },
        "play_name": { "type": "string", "index": "not_analyzed" },
        "line_id": { "type": "integer" },
        "speech_number": { "type": "integer" }
      }
    }
  }
}
';

This mapping specifies the following qualities for the data set:

* The _speaker_ field is a string that isn't analyzed. The string in this field is treated as a single unit, even if
there are multiple words in the field.
* The same applies to the _play_name_ field.
* The _line_id_ and _speech_number_ fields are integers.

The logs data set requires a mapping to label the latitude/longitude pairs in the logs as geographic locations by
applying the `geo_point` type to those fields.

Use the following commands to establish `geo_point` mapping for the logs:

[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.18 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
';

[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.19 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
';

[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.20 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
';
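Since the three commands above differ only in the index name, the same mapping can also be applied in a loop. A minimal
sketch (the `curl` line is commented out so the loop is safe to run without a live cluster; uncomment it against a real
Elasticsearch at `localhost:9200`):

```shell
# Shared geo_point mapping body, identical for all three daily indices.
mapping='{"mappings":{"log":{"properties":{"geo":{"properties":{"coordinates":{"type":"geo_point"}}}}}}}'

indices=""
for day in 18 19 20; do
  index="logstash-2015.05.${day}"
  indices="${indices} ${index}"
  # curl -XPUT "http://localhost:9200/${index}" -d "$mapping"
done

# Show which indices the loop would create.
echo "created:${indices}"
```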

The accounts data set doesn't require any mappings, so at this point we're ready to use the Elasticsearch
{ref}/docs-bulk.html[`bulk`] API to load the data sets with the following commands:

[source,shell]
curl -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
curl -XPOST 'localhost:9200/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
curl -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl

These commands may take some time to execute, depending on the computing resources available.

Verify successful loading with the following command:

[source,shell]
curl 'localhost:9200/_cat/indices?v'

You should see output similar to the following:

[source,shell]
health status index               pri rep docs.count docs.deleted store.size pri.store.size
yellow open   bank                  5   1       1000            0    418.2kb        418.2kb
yellow open   shakespeare           5   1     111396            0     17.6mb         17.6mb
yellow open   logstash-2015.05.18   5   1       4631            0     15.6mb         15.6mb
yellow open   logstash-2015.05.19   5   1       4624            0     15.7mb         15.7mb
yellow open   logstash-2015.05.20   5   1       4750            0     16.4mb         16.4mb

[[tutorial-define-index]]
=== Defining Your Index Patterns

Each set of data loaded to Elasticsearch has an <<settings-create-pattern,index pattern>>. In the previous section, the
Shakespeare data set has an index named `shakespeare`, and the accounts data set has an index named `bank`. An _index
pattern_ is a string with optional wildcards that can match multiple indices. For example, in the common logging use
case, a typical index name contains the date in YYYY.MM.DD format, and an index pattern for May would look something
like `logstash-2015.05*`.

For this tutorial, any pattern that matches the name of an index we've loaded will work. Open a browser and
navigate to `localhost:5601`. Click the *Settings* tab, then the *Indices* tab. Click *Add New* to define a new index
pattern. Two of the sample data sets, the Shakespeare plays and the financial accounts, don't contain time-series data.
Make sure the *Index contains time-based events* box is unchecked when you create index patterns for these data sets.
Specify `shakes*` as the index pattern for the Shakespeare data set and click *Create* to define the index pattern, then
define a second index pattern named `ba*`.

The Logstash data set does contain time-series data, so after clicking *Add New* to define the index pattern for this
data set, make sure the *Index contains time-based events* box is checked and select the `@timestamp` field from the
*Time-field name* drop-down.

NOTE: When you define an index pattern, indices that match that pattern must exist in Elasticsearch. Those indices must
contain data.

[float]
[[tutorial-discovering]]
=== Discovering Your Data

Click the *Discover* image:images/discover-compass.png[Compass icon] tab to display Kibana's data discovery functions:

image::images/tutorial-discover.png[]

Right under the tab itself, there is a search box where you can search your data. Searches use a specific
{ref}/query-dsl-query-string-query.html#query-string-syntax[query syntax] that enables you to create custom searches,
which you can save and load by clicking the buttons to the right of the search box.

Beneath the search box, the current index pattern is displayed in a drop-down. You can change the index pattern by
selecting a different pattern from the drop-down selector.

You can construct searches by using the field names and the values you're interested in. With numeric fields you can
use comparison operators such as greater than (>), less than (<), or equals (=). You can link elements with the
logical operators AND, OR, and NOT, all in uppercase.

Try selecting the `ba*` index pattern and putting the following search into the search box:

[source,text]
account_number:<100 AND balance:>47500

This search returns all account numbers between zero and 99 with balances in excess of 47,500.

If you're using the linked sample data set, this search returns 5 results: Account numbers 8, 32, 78, 85, and 97.

image::images/tutorial-discover-2.png[]
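The same search can also be expressed in the Query DSL. A hedged sketch using two `range` clauses inside a `bool` query
(one of several equivalent forms):

```json
{
  "query": {
    "bool": {
      "must": [
        { "range": { "account_number": { "lt": 100 } } },
        { "range": { "balance": { "gt": 47500 } } }
      ]
    }
  }
}
```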

To narrow the display to only the specific fields of interest, highlight each field in the list that displays under the
index pattern and click the *Add* button. Note how, in this example, adding the `account_number` field changes the
display from the full text of five records to a simple list of five account numbers:

image::images/tutorial-discover-3.png[]

[[tutorial-visualizing]]
=== Data Visualization: Beyond Discovery

The visualization tools available on the *Visualize* tab enable you to display aspects of your data sets in several
different ways.

Click on the *Visualize* image:images/visualize-icon.png[Bar chart icon] tab to start:

image::images/tutorial-visualize.png[]

Click on *Pie chart*, then *From a new search*. Select the `ba*` index pattern.

Visualizations depend on Elasticsearch {ref}/search-aggregations.html[aggregations] of two different types: _bucket_
aggregations and _metric_ aggregations. A bucket aggregation sorts your data according to criteria you specify. For
example, in our accounts data set, we can establish a set of account balance ranges, then display what proportion of
the total falls into each range.

The whole pie displays, since we haven't specified any buckets yet.

image::images/tutorial-visualize-pie-1.png[]

Select *Split Slices* from the *Select buckets type* list, then select *Range* from the *Aggregation* drop-down
selector. Select the *balance* field from the *Field* drop-down, then click on *Add Range* four times to bring the
total number of ranges to six. Enter the following ranges:

[source,text]
0 999
1000 2999
3000 6999
7000 14999
15000 30999
31000 50000
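Behind this UI, the buckets correspond roughly to an Elasticsearch `range` aggregation. A hedged sketch of the request
body this configuration implies (the aggregation name `balance_ranges` is arbitrary, and the exact boundaries Kibana
generates may differ slightly):

```json
{
  "size": 0,
  "aggs": {
    "balance_ranges": {
      "range": {
        "field": "balance",
        "ranges": [
          { "from": 0, "to": 999 },
          { "from": 1000, "to": 2999 },
          { "from": 3000, "to": 6999 },
          { "from": 7000, "to": 14999 },
          { "from": 15000, "to": 30999 },
          { "from": 31000, "to": 50000 }
        ]
      }
    }
  }
}
```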

Click the *Apply changes* button image:images/apply-changes-button.png[] to display the chart:

image::images/tutorial-visualize-pie-2.png[]

This shows you what proportion of the 1000 accounts fall in these balance ranges. To see another dimension of the data,
we're going to add another bucket aggregation. We can break down each of the balance ranges further by the account
holder's age.

Click *Add sub-buckets* at the bottom, then select *Split Slices*. Choose the *Terms* aggregation and the *age* field
from the drop-downs.
Click the *Apply changes* button image:images/apply-changes-button.png[] to add an external ring with the new
results.

image::images/tutorial-visualize-pie-3.png[]

Save this chart by clicking the *Save Visualization* button to the right of the search field. Name the visualization
_Pie Example_.

Next, we're going to make a bar chart. Click on *New Visualization*, then *Vertical bar chart*. Select *From a new
search* and the `shakes*` index pattern. You'll see a single big bar, since we haven't defined any buckets yet:

image::images/tutorial-visualize-bar-1.png[]

For the Y-axis metrics aggregation, select *Unique Count*, with *speaker* as the field. For Shakespeare plays, it might
be useful to know which plays have the lowest number of distinct speaking parts, if your theater company is short on
actors. For the X-Axis buckets, select the *Terms* aggregation with the *play_name* field. For the *Order*, select
*Ascending*, leaving the *Size* at 5. Write a description for the axes in the *Custom Label* fields.
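This configuration corresponds roughly to a `terms` aggregation ordered by a `cardinality` sub-aggregation. A hedged
sketch (the aggregation names `plays` and `speaker_count` are arbitrary, and Kibana's generated request may differ):

```json
{
  "size": 0,
  "aggs": {
    "plays": {
      "terms": {
        "field": "play_name",
        "size": 5,
        "order": { "speaker_count": "asc" }
      },
      "aggs": {
        "speaker_count": { "cardinality": { "field": "speaker" } }
      }
    }
  }
}
```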

Leave the other elements at their default values and click the *Apply changes* button
image:images/apply-changes-button.png[]. Your chart should now look like this:

image::images/tutorial-visualize-bar-2.png[]

Notice how the individual play names show up as whole phrases, instead of being broken down into individual words. This
is the result of the mapping we did at the beginning of the tutorial, when we marked the *play_name* field as 'not
analyzed'.

Hovering on each bar shows you the number of speaking parts for each play as a tooltip. You can turn this behavior off,
as well as change many other options for your visualizations, by clicking the *Options* tab in the top left.

Now that you have a list of the smallest casts for Shakespeare plays, you might also be curious to see which of these
plays makes the greatest demands on an individual actor by showing the maximum number of speeches for a given part. Add
a Y-axis aggregation with the *Add metrics* button, then choose the *Max* aggregation for the *speech_number* field. In
the *Options* tab, change the *Bar Mode* drop-down to *grouped*, then click the *Apply changes* button
image:images/apply-changes-button.png[]. Your chart should now look like this:

image::images/tutorial-visualize-bar-3.png[]

As you can see, _Love's Labours Lost_ has an unusually high maximum speech number, compared to the other plays, and
might therefore make more demands on an actor's memory.

Note how the *Number of speaking parts* Y-axis starts at zero, but the bars don't begin to differentiate until 18. To
make the differences stand out by starting the Y-axis at a value closer to the minimum, check the
*Scale Y-Axis to data bounds* box in the *Options* tab.

Save this chart with the name _Bar Example_.

Next, we're going to make a tile map chart to visualize some geographic data. Click on *New Visualization*, then
*Tile map*. Select *From a new search* and the `logstash-*` index pattern. Define the time window for the events
we're exploring by clicking the time selector at the top right of the Kibana interface. Click on *Absolute*, then set
the start time to May 18, 2015 and the end time for the range to May 20, 2015:

image::images/tutorial-timepicker.png[]

Once you've got the time range set up, click the *Go* button, then close the time picker by clicking the small up arrow
at the bottom. You'll see a map of the world, since we haven't defined any buckets yet:

image::images/tutorial-visualize-map-1.png[]

Select *Geo Coordinates* as the bucket, then click the *Apply changes* button image:images/apply-changes-button.png[].
Your chart should now look like this:

image::images/tutorial-visualize-map-2.png[]

You can navigate the map by clicking and dragging, zoom with the image:images/viz-zoom.png[] buttons, or hit the *Fit
Data Bounds* image:images/viz-fit-bounds.png[] button to zoom to the lowest level that includes all the points. You can
also create a filter to define a rectangle on the map, either to include or exclude, by clicking the
*Latitude/Longitude Filter* image:images/viz-lat-long-filter.png[] button and drawing a bounding box on the map.
A green oval with the filter definition displays right under the query box:

image::images/tutorial-visualize-map-3.png[]

Hover on the filter to display the controls to toggle, pin, invert, or delete the filter. Save this chart with the name
_Map Example_.

Finally, we're going to define a sample Markdown widget to display on our dashboard. Click on *New Visualization*, then
*Markdown widget*, to display a very simple Markdown entry field:

image::images/tutorial-visualize-md-1.png[]

Write the following text in the field:

[source,markdown]
# This is a tutorial dashboard!
The Markdown widget uses **markdown** syntax.
> Blockquotes in Markdown use the > character.

Click the *Apply changes* button image:images/apply-changes-button.png[] to display the rendered Markdown in the
preview pane:

image::images/tutorial-visualize-md-2.png[]

Save this visualization with the name _Markdown Example_.

[[tutorial-dashboard]]
=== Putting it all Together with Dashboards

A Kibana dashboard is a collection of visualizations that you can arrange and share. To get started, click the
*Dashboard* tab, then the *Add Visualization* button at the far right of the search box to display the list of saved
visualizations. Select _Markdown Example_, _Pie Example_, _Bar Example_, and _Map Example_, then close the list of
visualizations by clicking the small up-arrow at the bottom of the list. You can move the containers for each
visualization by clicking and dragging the title bar. Resize the containers by dragging the lower right corner of a
visualization's container. Your sample dashboard should end up looking roughly like this:

image::images/tutorial-dashboard.png[]

Click the *Save Dashboard* button, then name the dashboard _Tutorial Dashboard_. You can share a saved dashboard by
clicking the *Share* button to display HTML embedding code as well as a direct link.

[float]
[[wrapping-up]]
=== Wrapping Up

Now that you've handled the basic aspects of Kibana's functionality, you're ready to explore Kibana in further detail.
Take a look at the rest of the documentation for more details!
include::getting-started/wrapping-up.asciidoc[]
|
||||
|
|
docs/getting-started/tutorial-dashboard.asciidoc (new file, 14 lines)
@ -0,0 +1,14 @@
[[tutorial-dashboard]]
== Putting it all Together with Dashboards

A Kibana dashboard is a collection of visualizations that you can arrange and share. To get started, click the
*Dashboard* tab, then the *Add Visualization* button at the far right of the search box to display the list of saved
visualizations. Select _Markdown Example_, _Pie Example_, _Bar Example_, and _Map Example_, then close the list of
visualizations by clicking the small up-arrow at the bottom of the list. You can move the containers for each
visualization by clicking and dragging the title bar. Resize the containers by dragging the lower right corner of a
visualization's container. Your sample dashboard should end up looking roughly like this:

image::images/tutorial-dashboard.png[]

Click the *Save Dashboard* button, then name the dashboard _Tutorial Dashboard_. You can share a saved dashboard by
clicking the *Share* button to display HTML embedding code as well as a direct link.
docs/getting-started/tutorial-define-index.asciidoc (new file, 22 lines)
@ -0,0 +1,22 @@
[[tutorial-define-index]]
== Defining Your Index Patterns

Each set of data loaded into Elasticsearch has an index pattern. In the previous section, the Shakespeare data set has
an index named `shakespeare`, and the accounts data set has an index named `bank`. An _index pattern_ is a string with
optional wildcards that can match multiple indices. For example, in the common logging use case, a typical index name
contains the date in YYYY.MM.DD format, and an index pattern for May would look something like `logstash-2015.05*`.

For this tutorial, any pattern that matches the name of an index we've loaded will work. Open a browser and
navigate to `localhost:5601`. Click the *Settings* tab, then the *Indices* tab. Click *Add New* to define a new index
pattern. Two of the sample data sets, the Shakespeare plays and the financial accounts, don't contain time-series data.
Make sure the *Index contains time-based events* box is unchecked when you create index patterns for these data sets.
Specify `shakes*` as the index pattern for the Shakespeare data set and click *Create* to define the index pattern, then
define a second index pattern named `ba*`.

The Logstash data set does contain time-series data, so after clicking *Add New* to define the index pattern for this
data set, make sure the *Index contains time-based events* box is checked and select the `@timestamp` field from the
*Time-field name* drop-down.

NOTE: When you define an index pattern, indices that match that pattern must exist in Elasticsearch, and those indices
must contain data.

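As an illustrative aside (not part of Kibana itself), index patterns follow simple shell-style wildcard semantics, so Python's `fnmatch` is a quick way to check which index names a pattern would match:

```python
from fnmatch import fnmatch

# Index names loaded in the previous section.
indices = ["shakespeare", "bank",
           "logstash-2015.05.18", "logstash-2015.05.19", "logstash-2015.05.20"]

# The three patterns defined in this tutorial.
for pattern in ["shakes*", "ba*", "logstash-2015.05*"]:
    matches = [name for name in indices if fnmatch(name, pattern)]
    print(pattern, "->", matches)
```

Note that `ba*` matches `bank` here, but a pattern this broad could also match unrelated indices in a real cluster.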
docs/getting-started/tutorial-discovering.asciidoc (new file, 34 lines)
@ -0,0 +1,34 @@
[[tutorial-discovering]]
== Discovering Your Data

Click the *Discover* image:images/discover-compass.png[Compass icon] tab to display Kibana's data discovery functions:

image::images/tutorial-discover.png[]

Right under the tab itself is a search box where you can search your data. Searches use a specific
{es-ref}query-dsl-query-string-query.html#query-string-syntax[query syntax] that enables you to create custom searches,
which you can save and load by clicking the buttons to the right of the search box.

Beneath the search box, the current index pattern is displayed in a drop-down. You can change the index pattern by
selecting a different pattern from the drop-down selector.

You can construct searches by using the field names and the values you're interested in. With numeric fields, you can
use comparison operators such as greater than (>), less than (<), or equals (=). You can link elements with the
logical operators AND, OR, and NOT, all in uppercase.

Try selecting the `ba*` index pattern and entering the following search in the search box:

[source,text]
account_number:<100 AND balance:>47500

This search returns all accounts with account numbers between zero and 99 and balances in excess of 47,500.

If you're using the linked sample data set, this search returns 5 results: account numbers 8, 32, 78, 85, and 97.

image::images/tutorial-discover-2.png[]

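The query's semantics read as a plain predicate evaluated against each document. A minimal sketch in Python, using hypothetical account records (the real data set has 1,000):

```python
# Hypothetical account documents, just to show the shape of the data.
accounts = [
    {"account_number": 8,   "balance": 48868},
    {"account_number": 97,  "balance": 49671},
    {"account_number": 150, "balance": 49500},  # excluded: account_number >= 100
    {"account_number": 32,  "balance": 10000},  # excluded: balance <= 47500
]

# account_number:<100 AND balance:>47500 expressed as a predicate:
hits = [a for a in accounts
        if a["account_number"] < 100 and a["balance"] > 47500]
print([a["account_number"] for a in hits])
```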
To narrow the display to only the specific fields of interest, highlight each field in the list that displays under the
index pattern and click the *Add* button. Note how, in this example, adding the `account_number` field changes the
display from the full text of five records to a simple list of five account numbers:

image::images/tutorial-discover-3.png[]
docs/getting-started/tutorial-load-dataset.asciidoc (new file, 171 lines)
@ -0,0 +1,171 @@
[[tutorial-load-dataset]]
== Loading Sample Data

The tutorials in this section rely on the following data sets:

* The complete works of William Shakespeare, suitably parsed into fields. Download this data set by clicking here:
https://www.elastic.co/guide/en/kibana/3.0/snippets/shakespeare.json[shakespeare.json].
* A set of fictitious accounts with randomly generated data. Download this data set by clicking here:
https://github.com/bly2k/files/blob/master/accounts.zip?raw=true[accounts.zip]
* A set of randomly generated log files. Download this data set by clicking here:
https://download.elastic.co/demos/kibana/gettingstarted/logs.jsonl.gz[logs.jsonl.gz]

Two of the data sets are compressed. Use the following commands to extract the files:

[source,shell]
unzip accounts.zip
gunzip logs.jsonl.gz

The Shakespeare data set is organized in the following schema:

[source,json]
{
    "line_id": INT,
    "play_name": "String",
    "speech_number": INT,
    "line_number": "String",
    "speaker": "String",
    "text_entry": "String"
}

The accounts data set is organized in the following schema:

[source,json]
{
    "account_number": INT,
    "balance": INT,
    "firstname": "String",
    "lastname": "String",
    "age": INT,
    "gender": "M or F",
    "address": "String",
    "employer": "String",
    "email": "String",
    "city": "String",
    "state": "String"
}

The schema for the logs data set has dozens of different fields, but the notable ones used in this tutorial are:

[source,json]
{
    "memory": INT,
    "geo.coordinates": "geo_point",
    "@timestamp": "date"
}

Before we load the Shakespeare and logs data sets, we need to set up {es-ref}mapping.html[_mappings_] for the fields.
Mapping divides the documents in the index into logical groups and specifies a field's characteristics, such as the
field's searchability or whether or not it's _tokenized_, or broken up into separate words.

Use the following command to set up a mapping for the Shakespeare data set:

[source,shell]
curl -XPUT http://localhost:9200/shakespeare -d '
{
  "mappings" : {
    "_default_" : {
      "properties" : {
        "speaker" : { "type": "string", "index" : "not_analyzed" },
        "play_name" : { "type": "string", "index" : "not_analyzed" },
        "line_id" : { "type" : "integer" },
        "speech_number" : { "type" : "integer" }
      }
    }
  }
}
';

This mapping specifies the following qualities for the data set:

* The _speaker_ field is a string that isn't analyzed. The string in this field is treated as a single unit, even if
there are multiple words in the field.
* The same applies to the _play_name_ field.
* The _line_id_ and _speech_number_ fields are integers.
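To see why `not_analyzed` matters, compare a rough approximation of standard analysis with storing the value as a single term. This is an illustrative sketch only; Elasticsearch's actual standard analyzer is more sophisticated:

```python
import re

def rough_standard_tokenize(text):
    # Very rough approximation of Elasticsearch's standard analyzer:
    # lowercase the text and split on non-word characters.
    return [t for t in re.split(r"\W+", text.lower()) if t]

play = "The Tragedy of Hamlet"
print(rough_standard_tokenize(play))  # analyzed: one term per word
print([play])                         # not_analyzed: the whole phrase as one term
```

An analyzed `play_name` would match searches for any single word of the title; a `not_analyzed` one matches (and aggregates on) the whole title only.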

The logs data set requires a mapping to label the latitude/longitude pairs in the logs as geographic locations by
applying the `geo_point` type to those fields.

Use the following commands to establish `geo_point` mapping for the logs:

[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.18 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
';

[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.19 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
';

[source,shell]
curl -XPUT http://localhost:9200/logstash-2015.05.20 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
';
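The three commands above differ only in the index name, one daily index per day of sample logs. A small illustrative sketch of generating those names and the shared mapping programmatically (the mappings themselves are still applied with `curl` as shown):

```python
import json

# The geo_point mapping shared by all three daily Logstash indices.
mapping = {
    "mappings": {
        "log": {
            "properties": {
                "geo": {
                    "properties": {
                        "coordinates": {"type": "geo_point"}
                    }
                }
            }
        }
    }
}

# One index name per day of sample logs.
for day in (18, 19, 20):
    index = "logstash-2015.05.%02d" % day
    # A real loader would PUT json.dumps(mapping) to http://localhost:9200/<index>.
    print(index, "->", json.dumps(mapping)[:40], "...")
```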

The accounts data set doesn't require any mappings, so at this point we're ready to use the Elasticsearch
{es-ref}docs-bulk.html[`bulk`] API to load the data sets with the following commands:

[source,shell]
curl -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
curl -XPOST 'localhost:9200/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
curl -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl

These commands may take some time to execute, depending on the computing resources available.

Verify successful loading with the following command:

[source,shell]
curl 'localhost:9200/_cat/indices?v'

You should see output similar to the following:

[source,shell]
health status index               pri rep docs.count docs.deleted store.size pri.store.size
yellow open   bank                  5   1       1000            0    418.2kb        418.2kb
yellow open   shakespeare           5   1     111396            0     17.6mb         17.6mb
yellow open   logstash-2015.05.18   5   1       4631            0     15.6mb         15.6mb
yellow open   logstash-2015.05.19   5   1       4624            0     15.7mb         15.7mb
yellow open   logstash-2015.05.20   5   1       4750            0     16.4mb         16.4mb
docs/getting-started/tutorial-visualizing.asciidoc (new file, 136 lines)
@ -0,0 +1,136 @@
[[tutorial-visualizing]]
== Data Visualization: Beyond Discovery

The visualization tools available on the *Visualize* tab enable you to display aspects of your data sets in several
different ways.

Click the *Visualize* image:images/visualize-icon.png[Bar chart icon] tab to start:

image::images/tutorial-visualize.png[]

Click *Pie chart*, then *From a new search*. Select the `ba*` index pattern.

Visualizations depend on Elasticsearch {es-ref}search-aggregations.html[aggregations] of two different types: _bucket_
aggregations and _metric_ aggregations. A bucket aggregation sorts your data according to criteria you specify. For
example, in our accounts data set, we can establish a series of account balance ranges, then display what proportion of
the total falls into each range.

The whole pie displays as a single slice, since we haven't specified any buckets yet:

image::images/tutorial-visualize-pie-1.png[]

Select *Split Slices* from the *Select buckets type* list, then select *Range* from the *Aggregation* drop-down
selector. Select the *balance* field from the *Field* drop-down, then click *Add Range* four times to bring the
total number of ranges to six. Enter the following ranges:

[source,text]
0 999
1000 2999
3000 6999
7000 14999
15000 30999
31000 50000

Click the *Apply changes* button image:images/apply-changes-button.png[] to display the chart:

image::images/tutorial-visualize-pie-2.png[]
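Conceptually, the range bucket aggregation assigns each account to the band its balance falls in and counts the members of each band. A plain-Python sketch of that bucketing, with the bands as entered above treated as inclusive for simplicity:

```python
# Balance bands as entered in the tutorial, treated as inclusive.
bands = [(0, 999), (1000, 2999), (3000, 6999),
         (7000, 14999), (15000, 30999), (31000, 50000)]

def band_of(balance):
    for low, high in bands:
        if low <= balance <= high:
            return (low, high)
    return None  # balance falls outside all bands

print(band_of(500))    # -> (0, 999)
print(band_of(48000))  # -> (31000, 50000)
```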

This shows you what proportion of the 1000 accounts fall into these balance ranges. To see another dimension of the
data, we're going to add another bucket aggregation. We can break down each of the balance ranges further by the
account holder's age.

Click *Add sub-buckets* at the bottom, then select *Split Slices*. Choose the *Terms* aggregation and the *age* field
from the drop-downs. Click the *Apply changes* button image:images/apply-changes-button.png[] to add an outer ring
with the new results:

image::images/tutorial-visualize-pie-3.png[]

Save this chart by clicking the *Save Visualization* button to the right of the search field. Name the visualization
_Pie Example_.

Next, we're going to make a bar chart. Click *New Visualization*, then *Vertical bar chart*. Select *From a new
search* and the `shakes*` index pattern. You'll see a single big bar, since we haven't defined any buckets yet:

image::images/tutorial-visualize-bar-1.png[]

For the Y-axis metrics aggregation, select *Unique Count*, with *speaker* as the field. For Shakespeare plays, it might
be useful to know which plays have the lowest number of distinct speaking parts, if your theater company is short on
actors. For the X-axis buckets, select the *Terms* aggregation with the *play_name* field. For the *Order*, select
*Ascending*, leaving the *Size* at 5. Write a description for the axes in the *Custom Label* fields.

Leave the other elements at their default values and click the *Apply changes* button
image:images/apply-changes-button.png[]. Your chart should now look like this:

image::images/tutorial-visualize-bar-2.png[]

Notice how the individual play names show up as whole phrases, instead of being broken down into individual words. This
is the result of the mapping we did at the beginning of the tutorial, when we marked the *play_name* field as 'not
analyzed'.

Hovering over each bar shows you the number of speaking parts for each play as a tooltip. You can turn this behavior
off, as well as change many other options for your visualizations, by clicking the *Options* tab in the top left.

Now that you have a list of the smallest casts for Shakespeare plays, you might also be curious to see which of these
plays makes the greatest demands on an individual actor by showing the maximum number of speeches for a given part. Add
a Y-axis aggregation with the *Add metrics* button, then choose the *Max* aggregation for the *speech_number* field. In
the *Options* tab, change the *Bar Mode* drop-down to *grouped*, then click the *Apply changes* button
image:images/apply-changes-button.png[]. Your chart should now look like this:

image::images/tutorial-visualize-bar-3.png[]

As you can see, _Love's Labours Lost_ has an unusually high maximum speech number compared to the other plays, and
might therefore make more demands on an actor's memory.

Note how the *Number of speaking parts* Y-axis starts at zero, but the bars don't begin to differentiate until 18. To
make the differences stand out, starting the Y-axis at a value closer to the minimum, check the
*Scale Y-Axis to data bounds* box in the *Options* tab.

Save this chart with the name _Bar Example_.

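The *Unique Count* metric above is a distinct-count of `speaker` values within each `play_name` bucket. In plain Python, with a few hypothetical rows standing in for documents from the `shakespeare` index:

```python
from collections import defaultdict

# Hypothetical (play_name, speaker) pairs from the shakespeare index.
rows = [
    ("Hamlet", "HAMLET"), ("Hamlet", "HORATIO"), ("Hamlet", "HAMLET"),
    ("Macbeth", "MACBETH"), ("Macbeth", "LADY MACBETH"),
]

# Bucket by play, then count distinct speakers per bucket.
speakers_by_play = defaultdict(set)
for play, speaker in rows:
    speakers_by_play[play].add(speaker)

unique_counts = {play: len(s) for play, s in speakers_by_play.items()}
print(unique_counts)  # duplicate HAMLET lines count only once
```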
Next, we're going to make a tile map chart to visualize some geographic data. Click *New Visualization*, then
*Tile map*. Select *From a new search* and the `logstash-*` index pattern. Define the time window for the events
we're exploring by clicking the time picker at the top right of the Kibana interface. Click *Absolute*, then set
the start time to May 18, 2015 and the end time for the range to May 20, 2015:

image::images/tutorial-timepicker.png[]

Once you've got the time range set up, click the *Go* button, then close the time picker by clicking the small up arrow
at the bottom. You'll see a map of the world, since we haven't defined any buckets yet:

image::images/tutorial-visualize-map-1.png[]

Select *Geo Coordinates* as the bucket, then click the *Apply changes* button image:images/apply-changes-button.png[].
Your chart should now look like this:

image::images/tutorial-visualize-map-2.png[]

You can navigate the map by clicking and dragging, zoom with the image:images/viz-zoom.png[] buttons, or click the *Fit
Data Bounds* image:images/viz-fit-bounds.png[] button to zoom to the lowest level that includes all the points. You can
also create a filter to define a rectangle on the map, either to include or exclude, by clicking the
*Latitude/Longitude Filter* image:images/viz-lat-long-filter.png[] button and drawing a bounding box on the map.
A green oval with the filter definition displays right under the query box:

image::images/tutorial-visualize-map-3.png[]

Hover over the filter to display the controls to toggle, pin, invert, or delete the filter. Save this chart with the
name _Map Example_.

Finally, we're going to define a sample Markdown widget to display on our dashboard. Click *New Visualization*, then
*Markdown widget*, to display a very simple Markdown entry field:

image::images/tutorial-visualize-md-1.png[]

Write the following text in the field:

[source,markdown]
# This is a tutorial dashboard!
The Markdown widget uses **markdown** syntax.
> Blockquotes in Markdown use the > character.

Click the *Apply changes* button image:images/apply-changes-button.png[] to display the rendered Markdown in the
preview pane:

image::images/tutorial-visualize-md-2.png[]

Save this visualization with the name _Markdown Example_.
docs/getting-started/wrapping-up.asciidoc (new file, 5 lines)
@ -0,0 +1,5 @@
[[wrapping-up]]
== Wrapping Up

Now that you've handled the basic aspects of Kibana's functionality, you're ready to explore Kibana in further detail.
Take a look at the rest of the documentation for more details!

@ -1,27 +1,28 @@
[[kibana-guide]]
= Kibana User Guide

:ref: http://www.elastic.co/guide/en/elasticsearch/reference/5.0/
:xpack: https://www.elastic.co/guide/en/x-pack/5.0/
:scyld: X-Pack Security
:k4issue: https://github.com/elastic/kibana/issues/
:k4pull: https://github.com/elastic/kibana/pull/
:version: 5.0.0-rc1
:esversion: 5.0.0-rc1
:packageversion: 5.0-rc
:version: 5.1.0
:major-version: 5.x

//////////
release-state can be: released | prerelease | unreleased
//////////

:release-state: unreleased
:es-ref: https://www.elastic.co/guide/en/elasticsearch/reference/5.x/
:xpack-ref: https://www.elastic.co/guide/en/x-pack/current/
:issue: https://github.com/elastic/elasticsearch/issues/
:pull: https://github.com/elastic/elasticsearch/pull/


include::introduction.asciidoc[]

include::setup.asciidoc[]

include::migration.asciidoc[]

include::getting-started.asciidoc[]

include::migration/index.asciidoc[]

include::plugins.asciidoc[]

include::access.asciidoc[]

include::discover.asciidoc[]

include::visualize.asciidoc[]

@ -30,8 +31,6 @@ include::dashboard.asciidoc[]

include::console.asciidoc[]

include::settings.asciidoc[]
include::management.asciidoc[]

include::production.asciidoc[]

include::releasenotes.asciidoc[]
include::plugins.asciidoc[]

@ -10,47 +10,3 @@ create and share dynamic dashboards that display changes to Elasticsearch queries

Setting up Kibana is a snap. You can install Kibana and start exploring your Elasticsearch indices in minutes -- no
code, no additional infrastructure required.

For more information about creating and sharing visualizations and dashboards, see the <<visualize, Visualize>>
and <<dashboard, Dashboard>> topics. A complete <<getting-started,tutorial>> covering several aspects of Kibana's
functionality is also available.

NOTE: This guide describes how to use Kibana {version}. For information about what's new in Kibana {version}, see
the <<releasenotes, release notes>>.

////
[float]
[[data-discovery]]
=== Data Discovery and Visualization

Let's take a look at how you might use Kibana to explore and visualize data.
We've indexed some data from Transport for London (TFL) that shows one week
of transit (Oyster) card usage.

From Kibana's Discover page, we can submit search queries, filter the results, and
examine the data in the returned documents. For example, we can get all trips
completed by the Tube during the week by excluding incomplete trips and trips by bus:

image:images/TFL-CompletedTrips.jpg[Discover]

Right away, we can see the peaks for the morning and afternoon commute hours in the
histogram. By default, the Discover page also shows the first 500 entries that match the
search criteria. You can change the time filter, interact with the histogram to drill
down into the data, and view the details of particular documents. For more
information about exploring your data from the Discover page, see <<discover, Discover>>.

You can construct visualizations of your search results from the Visualization page.
Each visualization is associated with a search. For example, we can create a histogram
that shows the weekly London commute traffic via the Tube using our previous search.
The Y-axis shows the number of trips. The X-axis shows
the day and time. By adding a sub-aggregation, we can see the top 3 end stations during
each hour:

image:images/TFL-CommuteHistogram.jpg[Visualize]

You can save and share visualizations and combine them into dashboards to make it easy
to correlate related information. For example, we could create a dashboard
that displays several visualizations of the TFL data:

image:images/TFL-Dashboard.jpg[Dashboard]
////


@ -1,114 +0,0 @@
[[setup-repositories]]
=== Installing Kibana with apt and yum

Binary packages for Kibana are available for Unix distributions that support the `apt` and `yum` tools.
We also have repositories available for APT- and YUM-based distributions.

NOTE: Since the packages are created as part of the Kibana build, source packages are not available.

Packages are signed with the PGP key http://pgp.mit.edu/pks/lookup?op=vindex&search=0xD27D666CD88E42B4[D88E42B4], which
has the following fingerprint:

    4609 5ACC 8548 582C 1A26 99A9 D27D 666C D88E 42B4

[float]
[[kibana-apt]]
===== Installing Kibana with apt-get

. Download and install the public signing key:
+
[source,sh]
--------------------------------------------------
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
--------------------------------------------------
+
. Add the repository definition to your `/etc/apt/sources.list.d/kibana.list` file:
+
["source","sh",subs="attributes"]
--------------------------------------------------
echo "deb https://artifacts.elastic.co/packages/5.x-prerelease/apt stable main" | sudo tee -a /etc/apt/sources.list.d/kibana.list
--------------------------------------------------
+
[WARNING]
==================================================
Use the `echo` method described above to add the Kibana repository. Do not use `add-apt-repository`, as that command
adds a `deb-src` entry with no corresponding source package.

When the `deb-src` entry is present, the commands in this procedure generate an error similar to the following:

    Unable to find expected entry 'main/source/Sources' in Release file (Wrong sources.list entry or malformed file)

Delete the `deb-src` entry from the `/etc/apt/sources.list.d/kibana.list` file to clear the error.
==================================================
+
. Run `apt-get update` to ready the repository, then install Kibana with the following command:
+
[source,sh]
--------------------------------------------------
sudo apt-get update && sudo apt-get install kibana
--------------------------------------------------
+
. Configure Kibana to start automatically during bootup. If your distribution is using the System V version of `init`,
run the following command:
+
[source,sh]
--------------------------------------------------
sudo update-rc.d kibana defaults 95 10
--------------------------------------------------
+
. If your distribution is using `systemd`, run the following commands instead:
+
[source,sh]
--------------------------------------------------
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
--------------------------------------------------

[float]
[[kibana-yum]]
===== Installing Kibana with yum

WARNING: The repositories set up in this procedure are not compatible with distributions that use version 3 of `rpm`,
such as CentOS version 5.

. Download and install the public signing key:
+
[source,sh]
--------------------------------------------------
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
--------------------------------------------------
+
. Create a file named `kibana.repo` in the `/etc/yum.repos.d/` directory with the following contents:
+
["source","sh",subs="attributes"]
--------------------------------------------------
[kibana-{packageversion}]
name=Kibana repository for {packageversion} packages
baseurl=https://artifacts.elastic.co/packages/5.x-prerelease/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
--------------------------------------------------
+
. Install Kibana by running the following command:
+
[source,sh]
--------------------------------------------------
yum install kibana
--------------------------------------------------
+
. Configure Kibana to start automatically during bootup. If your distribution is using the System V version of `init`
(check with `ps -p 1`), run the following command:
+
[source,sh]
--------------------------------------------------
chkconfig --add kibana
--------------------------------------------------
+
. If your distribution is using `systemd`, run the following commands instead:
+
[source,sh]
--------------------------------------------------
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
--------------------------------------------------
docs/management.asciidoc (new file, 22 lines)
@ -0,0 +1,22 @@
[[management]]
= Management

[partintro]
--
The Management application is where you perform your runtime configuration of
Kibana, including both the initial setup and ongoing configuration of index
patterns, advanced settings that tweak the behavior of Kibana itself, and
the various "objects" that you can save throughout Kibana, such as searches,
visualizations, and dashboards.

This section is pluggable, so in addition to the out-of-the-box capabilities,
packs such as X-Pack can add management capabilities to Kibana.
--

include::management/index-patterns.asciidoc[]

include::management/managing-fields.asciidoc[]

include::management/advanced-options.asciidoc[]

include::management/managing-saved-objects.asciidoc[]

@ -1,3 +1,18 @@
[[advanced-options]]
== Setting Advanced Options

The *Advanced Settings* page enables you to directly edit settings that control the behavior of the Kibana application.
For example, you can change the format used to display dates, specify the default index pattern, and set the precision
for displayed decimal values.

To set advanced options:

. Go to *Settings > Advanced*.
. Click the *Edit* button for the option you want to modify.
. Enter a new value for the option.
. Click the *Save* button.

[float]
[[kibana-settings-reference]]

WARNING: Modifying the following settings can significantly affect Kibana's performance and cause problems that are
@ -7,7 +22,7 @@ compatible with other configuration settings. Deleting a custom setting removes
|
|||
.Kibana Settings Reference
|
||||
[horizontal]
|
||||
`query:queryString:options`:: Options for the Lucene query string parser.
|
||||
`sort:options`:: Options for the Elasticsearch {ref}/search-request-sort.html[sort] parameter.
|
||||
`sort:options`:: Options for the Elasticsearch {es-ref}search-request-sort.html[sort] parameter.
|
||||
`dateFormat`:: The format to use for displaying pretty-formatted dates.
|
||||
`dateFormat:tz`:: The timezone that Kibana uses. The default value of `Browser` uses the timezone detected by the browser.
|
||||
`dateFormat:scaled`:: These values define the format used to render ordered time-based data. Formatted timestamps must
|
||||
|
@ -28,7 +43,7 @@ increase request processing time.
|
|||
`histogram:maxBars`:: Date histograms are not generated with more bars than the value of this property, scaling values
|
||||
when necessary.
|
||||
`visualization:tileMap:maxPrecision`:: The maximum geoHash precision displayed on tile maps: 7 is high, 10 is very high,
|
||||
12 is the maximum. {ref}/search-aggregations-bucket-geohashgrid-aggregation.html#_cell_dimensions_at_the_equator[Explanation of cell dimensions].
|
||||
12 is the maximum. {es-ref}search-aggregations-bucket-geohashgrid-aggregation.html#_cell_dimensions_at_the_equator[Explanation of cell dimensions].
|
||||
`visualization:tileMap:WMSdefaults`:: Default properties for the WMS map server support in the tile map.
|
||||
`visualization:colorMapping`:: Maps values to specified colors within visualizations.
|
||||
`visualization:loadingDelay`:: Time to wait before dimming visualizations during query.
|
146 docs/management/index-patterns.asciidoc Normal file
@@ -0,0 +1,146 @@
[[index-patterns]]
== Index Patterns

To use Kibana, you have to tell it about the Elasticsearch indices that you want to explore by configuring one or more
index patterns. You can also:

* Create scripted fields that are computed on the fly from your data. You can browse and visualize scripted fields, but
you cannot search them.
* Set advanced options such as the number of rows to show in a table and how many of the most popular fields to show.
Use caution when modifying advanced options, as it's possible to set values that are incompatible with one another.
* Configure Kibana for a production environment.

[float]
[[settings-create-pattern]]
== Creating an Index Pattern to Connect to Elasticsearch
An _index pattern_ identifies one or more Elasticsearch indices that you want to explore with Kibana. Kibana looks for
index names that match the specified pattern.
An asterisk (*) in the pattern matches zero or more characters. For example, the pattern `myindex-*` matches all
indices whose names start with `myindex-`, such as `myindex-1` and `myindex-2`.

An index pattern can also simply be the name of a single index.

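The wildcard matching described above behaves like standard shell-style globbing. As a quick sketch, the following uses Python's `fnmatch` as an analog (the index names are hypothetical, for illustration only):

```python
from fnmatch import fnmatch

# Hypothetical index names in an Elasticsearch cluster
indices = ["myindex-1", "myindex-2", "otherindex", "myindex"]

# '*' matches zero or more characters, so `myindex-*` matches
# only names that start with the literal prefix `myindex-`
matching = [name for name in indices if fnmatch(name, "myindex-*")]
print(matching)  # ['myindex-1', 'myindex-2']
```

Note that `myindex` itself does not match `myindex-*` because the pattern requires the literal hyphen, while `myindex-` alone would match since `*` can expand to zero characters.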
To create an index pattern to connect to Elasticsearch:

. Go to the *Settings > Indices* tab.
. Specify an index pattern that matches the name of one or more of your Elasticsearch indices. By default, Kibana
guesses that you're working with log data being fed into Elasticsearch by Logstash.
+
NOTE: When you switch between top-level tabs, Kibana remembers where you were. For example, if you view a particular
index pattern from the Settings tab, switch to the Discover tab, and then go back to the Settings tab, Kibana displays
the index pattern you last looked at. To get to the create pattern form, click the *Add* button in the Index Patterns
list.

. If your index contains a timestamp field that you want to use to perform time-based comparisons, select the *Index
contains time-based events* option and select the index field that contains the timestamp. Kibana reads the index
mapping to list all of the fields that contain a timestamp.

. By default, Kibana restricts wildcard expansion of time-based index patterns to indices with data within the currently
selected time range. Click *Do not expand index pattern when searching* to disable this behavior.

. Click *Create* to add the index pattern.

. To designate the new pattern as the default pattern to load when you view the Discover tab, click the *favorite*
button.

NOTE: When you define an index pattern, indices that match that pattern must exist in Elasticsearch. Those indices must
contain data.

To use an event time in an index name, enclose the static text in the pattern and specify the date format using the
tokens described in the following table.

For example, `[logstash-]YYYY.MM.DD` matches all indices whose names have a timestamp of the form `YYYY.MM.DD` appended
to the prefix `logstash-`, such as `logstash-2015.01.31` and `logstash-2015.02.01`.
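The bracketed-prefix convention can be sketched with Python's `strftime`, whose `%Y.%m.%d` directive corresponds to the `YYYY.MM.DD` tokens in the table below (the `logstash-` prefix is taken from the example above; the helper function is illustrative):

```python
from datetime import date

# `[logstash-]YYYY.MM.DD`: bracketed text is literal static text, the rest
# is a date format. %Y.%m.%d is the strftime analog of YYYY.MM.DD.
def index_for(day: date, prefix: str = "logstash-") -> str:
    return prefix + day.strftime("%Y.%m.%d")

print(index_for(date(2015, 1, 31)))  # logstash-2015.01.31
```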

[float]
[[date-format-tokens]]
.Date Format Tokens
[horizontal]
`M`:: Month - cardinal: 1 2 3 ... 12
`Mo`:: Month - ordinal: 1st 2nd 3rd ... 12th
`MM`:: Month - two digit: 01 02 03 ... 12
`MMM`:: Month - abbreviation: Jan Feb Mar ... Dec
`MMMM`:: Month - full: January February March ... December
`Q`:: Quarter: 1 2 3 4
`D`:: Day of Month - cardinal: 1 2 3 ... 31
`Do`:: Day of Month - ordinal: 1st 2nd 3rd ... 31st
`DD`:: Day of Month - two digit: 01 02 03 ... 31
`DDD`:: Day of Year - cardinal: 1 2 3 ... 365
`DDDo`:: Day of Year - ordinal: 1st 2nd 3rd ... 365th
`DDDD`:: Day of Year - three digit: 001 002 ... 364 365
`d`:: Day of Week - cardinal: 0 1 2 ... 6
`do`:: Day of Week - ordinal: 0th 1st 2nd ... 6th
`dd`:: Day of Week - 2-letter abbreviation: Su Mo Tu ... Sa
`ddd`:: Day of Week - 3-letter abbreviation: Sun Mon Tue ... Sat
`dddd`:: Day of Week - full: Sunday Monday Tuesday ... Saturday
`e`:: Day of Week (locale): 0 1 2 ... 6
`E`:: Day of Week (ISO): 1 2 3 ... 7
`w`:: Week of Year - cardinal (locale): 1 2 3 ... 53
`wo`:: Week of Year - ordinal (locale): 1st 2nd 3rd ... 53rd
`ww`:: Week of Year - two digit (locale): 01 02 03 ... 53
`W`:: Week of Year - cardinal (ISO): 1 2 3 ... 53
`Wo`:: Week of Year - ordinal (ISO): 1st 2nd 3rd ... 53rd
`WW`:: Week of Year - two digit (ISO): 01 02 03 ... 53
`YY`:: Year - two digit: 70 71 72 ... 30
`YYYY`:: Year - four digit: 1970 1971 1972 ... 2030
`gg`:: Week Year - two digit (locale): 70 71 72 ... 30
`gggg`:: Week Year - four digit (locale): 1970 1971 1972 ... 2030
`GG`:: Week Year - two digit (ISO): 70 71 72 ... 30
`GGGG`:: Week Year - four digit (ISO): 1970 1971 1972 ... 2030
`A`:: AM/PM: AM PM
`a`:: am/pm: am pm
`H`:: Hour: 0 1 2 ... 23
`HH`:: Hour - two digit: 00 01 02 ... 23
`h`:: Hour - 12-hour clock: 1 2 3 ... 12
`hh`:: Hour - 12-hour clock, two digit: 01 02 03 ... 12
`m`:: Minute: 0 1 2 ... 59
`mm`:: Minute - two digit: 00 01 02 ... 59
`s`:: Second: 0 1 2 ... 59
`ss`:: Second - two digit: 00 01 02 ... 59
`S`:: Fractional Second - 10ths: 0 1 2 ... 9
`SS`:: Fractional Second - 100ths: 0 1 ... 98 99
`SSS`:: Fractional Second - 1000ths: 0 1 ... 998 999
`Z`:: Timezone - UTC offset (hh:mm format): -07:00 -06:00 -05:00 ... +07:00
`ZZ`:: Timezone - UTC offset (hhmm format): -0700 -0600 -0500 ... +0700
`X`:: Unix Timestamp: 1360013296
`x`:: Unix Millisecond Timestamp: 1360013296123

[float]
[[set-default-pattern]]
== Setting the Default Index Pattern
The default index pattern is loaded automatically when you view the *Discover* tab. Kibana displays a star to the
left of the name of the default pattern in the Index Patterns list on the *Settings > Indices* tab. The first pattern
you create is automatically designated as the default pattern.

To set a different pattern as the default index pattern:

. Go to the *Settings > Indices* tab.
. Select the pattern you want to set as the default in the Index Patterns list.
. Click the pattern's *Favorite* button.

NOTE: You can also manually set the default index pattern in *Advanced > Settings*.

[float]
[[reload-fields]]
== Reloading the Index Fields List
When you add an index mapping, Kibana automatically scans the indices that match the pattern to display a list of the
index fields. You can reload the index fields list to pick up any newly added fields.

Reloading the index fields list also resets Kibana's popularity counters for the fields. The popularity counters keep
track of the fields you've used most often within Kibana and are used to sort fields within lists.

To reload the index fields list:

. Go to the *Settings > Indices* tab.
. Select an index pattern from the Index Patterns list.
. Click the pattern's *Reload* button.

[float]
[[delete-pattern]]
== Deleting an Index Pattern
To delete an index pattern:

. Go to the *Settings > Indices* tab.
. Select the pattern you want to remove in the Index Patterns list.
. Click the pattern's *Delete* button.
. Confirm that you want to remove the index pattern.
123 docs/management/managing-fields.asciidoc Normal file
@@ -0,0 +1,123 @@
[[managing-fields]]
== Managing Fields

The fields for the index pattern are listed in a table. Click a column header to sort the table by that column. Click
the *Controls* button in the rightmost column for a given field to edit the field's properties. You can manually set
the field's format from the *Format* drop-down. Format options vary based on the field's type.

You can also set the field's popularity value in the *Popularity* text entry box to any desired value. Click the
*Update Field* button to confirm your changes or *Cancel* to return to the list of fields.

Kibana has field formatters for the following field types:

* <<field-formatters-string, Strings>>
* <<field-formatters-date, Dates>>
* <<field-formatters-geopoint, Geopoints>>
* <<field-formatters-numeric, Numbers>>

[[field-formatters-string]]
=== String Field Formatters

String fields support the `String` and `Url` formatters.

include::field-formatters/string-formatter.asciidoc[]

include::field-formatters/url-formatter.asciidoc[]

[[field-formatters-date]]
=== Date Field Formatters

Date fields support the `Date`, `Url`, and `String` formatters.

The `Date` formatter enables you to choose the display format of date stamps using the https://momentjs.com/[Moment.js]
standard format definitions.

include::field-formatters/string-formatter.asciidoc[]

include::field-formatters/url-formatter.asciidoc[]

[[field-formatters-geopoint]]
=== Geographic Point Field Formatters

Geographic point fields support the `String` formatter.

include::field-formatters/string-formatter.asciidoc[]

[[field-formatters-numeric]]
=== Numeric Field Formatters

Numeric fields support the `Url`, `Bytes`, `Duration`, `Number`, `Percentage`, `String`, and `Color` formatters.

include::field-formatters/url-formatter.asciidoc[]

include::field-formatters/string-formatter.asciidoc[]

include::field-formatters/duration-formatter.asciidoc[]

include::field-formatters/color-formatter.asciidoc[]

The `Bytes`, `Number`, and `Percentage` formatters enable you to choose the display formats of numbers in this field using
the https://adamwdraper.github.io/Numeral-js/[numeral.js] standard format definitions.

[[scripted-fields]]
=== Scripted Fields

Scripted fields compute data on the fly from the data in your Elasticsearch indices. Scripted field data is shown on
the Discover tab as part of the document data, and you can use scripted fields in your visualizations.
Scripted field values are computed at query time, so they aren't indexed and cannot be searched.

NOTE: Kibana cannot query scripted fields.

WARNING: Computing data on the fly with scripted fields can be very resource intensive and can have a direct impact on
Kibana's performance. Keep in mind that there's no built-in validation of a scripted field. If your scripts are
buggy, you'll get exceptions whenever you try to view the dynamically generated data.

Scripted fields use the Lucene expression syntax. For more information,
see {es-ref}modules-scripting-expression.html[Lucene Expressions Scripts].

You can reference any single value numeric field in your expressions, for example:

----
doc['field_name'].value
----
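Conceptually, a scripted field evaluates an expression like the one above against each matching document at query time, without ever storing the result in the index. A minimal Python sketch of that per-document evaluation (the `bytes` field and kilobyte conversion are hypothetical, for illustration only):

```python
# Hypothetical documents, as if returned from an Elasticsearch query
docs = [
    {"host": "web-1", "bytes": 2048},
    {"host": "web-2", "bytes": 5120},
]

# Analog of a scripted field expression such as doc['bytes'].value / 1024:
# computed per document at query time, never indexed and never searchable.
def scripted_kb(doc: dict) -> float:
    return doc["bytes"] / 1024

for doc in docs:
    doc["bytes_kb"] = scripted_kb(doc)

print([d["bytes_kb"] for d in docs])  # [2.0, 5.0]
```

This is also why the warning above matters: the expression runs once per matching document on every query, so an expensive or buggy expression is paid for repeatedly.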

[float]
[[create-scripted-field]]
=== Creating a Scripted Field
To create a scripted field:

. Go to *Settings > Indices*.
. Select the index pattern you want to add a scripted field to.
. Go to the pattern's *Scripted Fields* tab.
. Click *Add Scripted Field*.
. Enter a name for the scripted field.
. Enter the expression that you want to use to compute a value on the fly from your index data.
. Click *Save Scripted Field*.

For more information about scripted fields in Elasticsearch, see
{es-ref}modules-scripting.html[Scripting].

NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you to enable
{es-ref}modules-scripting.html[dynamic Groovy scripting].

[float]
[[update-scripted-field]]
=== Updating a Scripted Field
To modify a scripted field:

. Go to *Settings > Indices*.
. Click the *Edit* button for the scripted field you want to change.
. Make your changes and then click *Save Scripted Field* to update the field.

WARNING: Keep in mind that there's no built-in validation of a scripted field. If your scripts are buggy, you'll get
exceptions whenever you try to view the dynamically generated data.

[float]
[[delete-scripted-field]]
=== Deleting a Scripted Field
To delete a scripted field:

. Go to *Settings > Indices*.
. Click the *Delete* button for the scripted field you want to remove.
. Confirm that you really want to delete the field.
58 docs/management/managing-saved-objects.asciidoc Normal file
@@ -0,0 +1,58 @@
[[managing-saved-objects]]
== Managing Saved Searches, Visualizations, and Dashboards

You can view, edit, and delete saved searches, visualizations, and dashboards from *Settings > Objects*. You can also
export or import sets of searches, visualizations, and dashboards.

Viewing a saved object displays the selected item in the *Discover*, *Visualize*, or *Dashboard* page. To view a saved
object:

. Go to *Settings > Objects*.
. Select the object you want to view.
. Click the *View* button.

Editing a saved object enables you to directly modify the object definition. You can change the name of the object, add
a description, and modify the JSON that defines the object's properties.

If you attempt to access an object whose index has been deleted, Kibana displays its Edit Object page. You can:

* Recreate the index so you can continue using the object.
* Delete the object and recreate it using a different index.
* Change the index name referenced in the object's `kibanaSavedObjectMeta.searchSourceJSON` to point to an existing
index pattern. This is useful if the index you were working with has been renamed.

WARNING: No validation is performed for object properties. Submitting invalid changes will render the object unusable.
Generally, you should use the *Discover*, *Visualize*, or *Dashboard* pages to create new objects instead of directly
editing existing ones.

To edit a saved object:

. Go to *Settings > Objects*.
. Select the object you want to edit.
. Click the *Edit* button.
. Make your changes to the object definition.
. Click the *Save Object* button.
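Repointing a saved object at a renamed index amounts to editing the JSON it stores under `kibanaSavedObjectMeta.searchSourceJSON`, which is itself a JSON-encoded string. A minimal Python sketch (only the `kibanaSavedObjectMeta.searchSourceJSON` key comes from the documentation above; the object layout and index names are hypothetical):

```python
import json

# Hypothetical saved object; searchSourceJSON is a JSON string inside JSON
saved_object = {
    "title": "My search",
    "kibanaSavedObjectMeta": {
        "searchSourceJSON": json.dumps(
            {"index": "old-index-*", "query": {"match_all": {}}}
        )
    },
}

def repoint_index(obj: dict, new_index: str) -> dict:
    """Rewrite the index referenced by a saved object's search source."""
    source = json.loads(obj["kibanaSavedObjectMeta"]["searchSourceJSON"])
    source["index"] = new_index
    obj["kibanaSavedObjectMeta"]["searchSourceJSON"] = json.dumps(source)
    return obj

repoint_index(saved_object, "new-index-*")
```

As the warning above notes, no validation is performed on these edits, so a malformed `searchSourceJSON` string renders the object unusable.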

To delete a saved object:

. Go to *Settings > Objects*.
. Select the object you want to delete.
. Click the *Delete* button.
. Confirm that you really want to delete the object.

To export a set of objects:

. Go to *Settings > Objects*.
. Select the type of object you want to export. You can export a set of dashboards, searches, or visualizations.
. Click the selection box for the objects you want to export, or click the *Select All* box.
. Click *Export* to select a location to write the exported JSON.

WARNING: Exported dashboards do not include their associated index patterns. Re-create the index patterns manually before
importing saved dashboards to a Kibana instance running on another Elasticsearch cluster.

To import a set of objects:

. Go to *Settings > Objects*.
. Click *Import* to navigate to the JSON file representing the set of objects to import.
. Click *Open* after selecting the JSON file.
. If any objects in the set would overwrite objects already present in Kibana, confirm the overwrite.

@@ -1,8 +1,10 @@
[[breaking-changes]]
== Breaking Changes
= Breaking changes

[partintro]
--
This section discusses the changes that you need to be aware of when migrating
your application from one version of Kibana to another.
--

include::migrate_5_0.asciidoc[]

include::migration/migrate_5_0.asciidoc[]
@@ -1,16 +1,21 @@
[[breaking-changes-5.0]]
=== Breaking changes in 5.0
== Breaking changes in 5.0

==== Kibana binds to localhost by default
{k4pull}8013[Pull Request 8013]
This section discusses the changes that you need to be aware of when migrating
your application to Kibana 5.0.

[float]
=== Kibana binds to localhost by default
{pull}8013[Pull Request 8013]

*Details:* Kibana (like Elasticsearch) now binds to localhost for security purposes instead of 0.0.0.0 (all addresses). Previous binding to 0.0.0.0 also caused issues for Windows users.

*Impact:* If you are running Kibana inside a container/environment that does not allow localhost binding, Kibana will not start up unless `server.host` is configured in `kibana.yml` to a valid IP address or hostname.

==== Markdown headers
[float]
=== Markdown headers

{k4pull}7855[Pull Request 7855]
{pull}7855[Pull Request 7855]

*Details:* As part of addressing the security issue https://www.elastic.co/community/security[ESA-2016-03] (CVE-2016-1000220) in the Kibana product, the markdown version has been bumped.

@@ -28,23 +33,26 @@ It should now be defined as follows (with a space between ### and the title):
[Dashboard](/#/dashboard/Packetbeat-Dashboard)
[Web transactions](/#/dashboard/HTTP)

==== Linux package install directories
[float]
=== Linux package install directories

{k4pull}7308[Pull Request 7308]
{pull}7308[Pull Request 7308]

*Details:* To align with the Elasticsearch packages, Kibana now installs binaries under `/usr/share/kibana` and configuration files under `/etc/kibana`. Previously they were both located under `/opt/kibana`.

*Impact:* Apart from learning the new location of Kibana binaries and configuration files, you may have to update your automation scripts as needed.

==== The plugin installer now has its own executable
[float]
=== The plugin installer now has its own executable

{k4pull}6402[Pull Request 6402]
{pull}6402[Pull Request 6402]

*Details:* The new installer can be found at `/bin/kibana-plugin`. When installing/removing Kibana plugins, you will now call `kibana-plugin` instead of the main kibana script.

*Impact:* You may have to update your automation scripts.

==== Dashboards created before 5.0
[float]
=== Dashboards created before 5.0

*Details:* Loading a 4.x dashboard in Kibana 5.0 results in an internal change
to the dashboard's metadata, which you can persist by saving the dashboard.

@@ -52,7 +60,8 @@ to the dashboard's metadata, which you can persist by saving the dashboard.
*Impact:* This change will not affect the functionality of the dashboard itself,
but you must save the dashboard before using certain features such as X-Pack reporting.

==== Saved objects with previously deprecated Elasticsearch features
[float]
=== Saved objects with previously deprecated Elasticsearch features

*Details:* Since Kibana 4.3, users have been able to arbitrarily modify filters
via a generic JSON editor. If users took advantage of any deprecated Elasticsearch
|
|||
[[kibana-plugins]]
|
||||
== Kibana Plugins
|
||||
= Kibana Plugins
|
||||
|
||||
[partintro]
|
||||
--
|
||||
Add-on functionality for Kibana is implemented with plug-in modules. You can use the `bin/kibana-plugin`
|
||||
command to manage these modules. You can also install a plugin manually by moving the plugin file to the
|
||||
`plugins` directory and unpacking the plugin files into a new directory.
|
||||
--
|
||||
|
||||
A list of existing Kibana plugins is available on https://github.com/elastic/kibana/wiki/Known-Plugins[GitHub].
|
||||
|
||||
[float]
|
||||
=== Installing Plugins
|
||||
== Installing Plugins
|
||||
|
||||
Use the following command to install a plugin:
|
||||
|
||||
[source,shell]
|
||||
bin/kibana-plugin install <package name or URL>
|
||||
|
||||
When you specify a plugin name without a URL, the plugin tool attempts to download the plugin from `download.elastic.co`.
|
||||
When you specify a plugin name without a URL, the plugin tool attempts to download an official Elastic plugin, such as:
|
||||
|
||||
["source","shell",subs="attributes"]
|
||||
$ bin/kibana-plugin install x-pack
|
||||
|
||||
|
||||
[float]
|
||||
==== Installing Plugins from an Arbitrary URL
|
||||
=== Installing Plugins from an Arbitrary URL
|
||||
|
||||
You can specify a URL to a specific plugin, as in the following example:
|
||||
|
||||
["source","shell",subs="attributes"]
|
||||
$ bin/kibana-plugin install https://download.elastic.co/kibana/x-pack/x-pack-{version}.zip
|
||||
Attempting to transfer from https://download.elastic.co/kibana/x-pack/x-pack-{version}.zip
|
||||
Transferring <some number> bytes....................
|
||||
Transfer complete
|
||||
Retrieving metadata from plugin archive
|
||||
Extracting plugin archive
|
||||
Extraction complete
|
||||
Optimizing and caching browser bundles...
|
||||
Plugin installation complete
|
||||
|
||||
You can specify URLs that use the HTTP, HTTPS, or `file` protocols.
|
||||
|
||||
|
@@ -43,40 +39,36 @@ example:

[source,shell]
$ bin/kibana-plugin install file:///some/local/path/x-pack.zip -d path/to/directory
Installing sample-plugin
Attempting to transfer from file:///some/local/path/x-pack.zip
Transferring <some number> bytes....................
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
Extraction complete
Optimizing and caching browser bundles...
Plugin installation complete

NOTE: This command creates the specified directory if it does not already exist.

[float]
=== Removing Plugins

Use the `remove` command to remove a plugin, including any configuration information, as in the following example:

[source,shell]
$ bin/kibana-plugin remove timelion

You can also remove a plugin manually by deleting the plugin's subdirectory under the `plugins/` directory.

[float]
=== Listing Installed Plugins

Use the `list` command to list the currently installed plugins.

[float]
=== Updating Plugins
== Updating & Removing Plugins

To update a plugin, remove the current version and reinstall the plugin.

[float]
=== Configuring the Plugin Manager
To remove a plugin, use the `remove` command, as in the following example:

[source,shell]
$ bin/kibana-plugin remove x-pack

You can also remove a plugin manually by deleting the plugin's subdirectory under the `plugins/` directory.

NOTE: Removing a plugin will result in an "optimize" run, which will delay the next start of Kibana.

== Disabling Plugins

Use the following command to disable a plugin:

[source,shell]
-----------
./bin/kibana --<plugin ID>.enabled=false <1>
-----------

NOTE: Disabling or enabling a plugin will result in an "optimize" run, which will delay the start of Kibana.

<1> You can find a plugin's plugin ID as the value of the `name` property in the plugin's `package.json` file.
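The plugin ID used in the `--<plugin ID>.enabled=false` flag is the `name` property from the plugin's `package.json`. A small sketch of assembling that flag (the plugin name `sample-plugin` and the file contents are hypothetical):

```python
import json

# A plugin's package.json carries its plugin ID in the "name" property
# (contents here are hypothetical, for illustration only)
package_json = '{"name": "sample-plugin", "version": "1.0.0"}'

plugin_id = json.loads(package_json)["name"]
print(f"./bin/kibana --{plugin_id}.enabled=false")
# ./bin/kibana --sample-plugin.enabled=false
```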

== Configuring the Plugin Manager

By default, the plugin manager provides you with feedback on the status of the activity you've asked the plugin manager
to perform. You can control the level of feedback for the `install` and `remove` commands with the `--quiet` and

@@ -95,7 +87,7 @@ bin/kibana-plugin install --timeout 30s sample-plugin
bin/kibana-plugin install --timeout 1m sample-plugin

[float]
==== Plugins and Custom Kibana Configurations
=== Plugins and Custom Kibana Configurations

Use the `-c` or `--config` options with the `install` and `remove` commands to specify the path to the configuration file
used to start Kibana. By default, Kibana uses the configuration file `config/kibana.yml`. When you change your installed

@@ -110,22 +102,3 @@ you must specify the path to that configuration file each time you use the `bin/
64:: Unknown command or incorrect option parameter
74:: I/O error
70:: Other error

[float]
[[plugin-switcher]]
== Switching Plugin Functionality

The Kibana UI serves as a framework that can contain several different plugins. You can switch between these
plugins by clicking the icons for your desired plugins in the left-hand navigation bar.

[float]
=== Disabling Plugins

Use the following command to disable a plugin:

[source,shell]
-----------
./bin/kibana --<plugin ID>.enabled=false <1>
-----------

<1> You can find a plugin's plugin ID as the value of the `name` property in the plugin's `package.json` file.
@@ -1,44 +0,0 @@
[[releasenotes]]
== Kibana {version} Release Notes

The {version} release of Kibana requires Elasticsearch {esversion} or later.

[float]
[[enhancements]]
== Enhancements

* {k4pull}6682[Pull Request 6682]: Renames Sense to Console, and adds the project to Kibana core.
* {k4issue}6913[Issue 6913]: Adds Console support for Elasticsearch 5.0 APIs.
* {k4pull}6896[Pull Request 6896]: Adds a configurable whitelist of headers for Elasticsearch requests.
* {k4pull}6796[Pull Request 6796]: Adds millisecond durations for intervals.
* {k4issue}1855[Issue 1855]: Adds advanced setting to configure the starting day of the week.
* {k4issue}6378[Issue 6378]: Adds persistent UUIDs to distinguish multiple instances within a cluster.
* {k4issue}6531[Issue 6531]: Improved warning for URL lengths that approach browser limits.
* {k4issue}6602[Issue 6602]: Improves dark theme support.
* {k4issue}6791[Issue 6791]: Enables composition of custom user toast notifications in Advanced Settings.
* {k4pull}8014[Pull Request 8014]: Changes the UUID config setting from `uuid` to `server.uuid`, and puts UUID storage into data file instead of Elasticsearch. added[5.0.0-beta1]

[float]
[[bugfixes]]
== Bug Fixes

* {k4pull}6953[Pull Request 6953]: The `defaultRoute` configuration parameter now honors the value of `basePath` and requires a leading slash (`/`).
* {k4issue}6794[Issue 6794]: Fixes extraneous bounds when drawing a bounding box on a tilemap visualization.
* {k4issue}6246[Issue 6246]: Custom labels display on percentile and median metrics.
* {k4issue}6407[Issue 6407]: Custom labels display on standard deviation metrics.
* {k4issue}7003[Issue 7003]: Median visualizations no longer only show `?` as the value.
* {k4issue}7006[Issue 7006]: The URL shortener now honors custom configuration values for `kibana.index`.
* {k4issue}6785[Issue 6785]: Fixes an intermittent issue that prevented installing plugins by name.
* {k4issue}6714[Issue 6714]: Removes unsupported flag functionality.
* {k4issue}6760[Issue 6760]: Removed directory listings for static assets.
* {k4issue}6762[Issue 6762]: Stopped Kibana logo from randomly disappearing in some situations.
* {k4issue}6735[Issue 6735]: Clearer error message when trying to start Kibana while it is already running.

[float]
[[plugins-apis]]
== Plugins, APIs, and Development Infrastructure

NOTE: The items in this section are not a complete list of the internal changes relating to development in Kibana. Plugin
framework and APIs are not formally documented and not guaranteed to be backward compatible from release to release.

* {k4pull}7069[Pull Request 7069]: Adds `preInit` functionality.
@ -1,476 +0,0 @@
|
|||
[[settings]]
|
||||
== Settings
|
||||
|
||||
To use Kibana, you have to tell it about the Elasticsearch indices that you want to explore by configuring one or more
|
||||
index patterns. You can also:
|
||||
|
||||
* Create scripted fields that are computed on the fly from your data. You can browse and visualize scripted fields, but
|
||||
you cannot search them.
|
||||
* Set advanced options such as the number of rows to show in a table and how many of the most popular fields to show.
|
||||
Use caution when modifying advanced options, as it's possible to set values that are incompatible with one another.
|
||||
* Configure Kibana for a production environment
|
||||
|
||||
[float]
[[settings-create-pattern]]
=== Creating an Index Pattern to Connect to Elasticsearch
An _index pattern_ identifies one or more Elasticsearch indices that you want to explore with Kibana. Kibana looks for index names that match the specified pattern.

An asterisk (*) in the pattern matches zero or more characters. For example, the pattern `myindex-*` matches all indices whose names start with `myindex-`, such as `myindex-1` and `myindex-2`.

An index pattern can also simply be the name of a single index.
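Before creating a pattern, you can check which indices it would match by asking Elasticsearch directly with the cat indices API. This is a sketch that assumes a node on the default local port and the illustrative `myindex-*` pattern from above:

[source,shell]
----
# List the indices that the pattern myindex-* would match
curl 'http://localhost:9200/_cat/indices/myindex-*?v'
----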

To create an index pattern to connect to Elasticsearch:

. Go to the *Settings > Indices* tab.
. Specify an index pattern that matches the name of one or more of your Elasticsearch indices. By default, Kibana guesses that you're working with log data being fed into Elasticsearch by Logstash.
+
NOTE: When you switch between top-level tabs, Kibana remembers where you were. For example, if you view a particular index pattern from the Settings tab, switch to the Discover tab, and then go back to the Settings tab, Kibana displays the index pattern you last looked at. To get to the create pattern form, click the *Add* button in the Index Patterns list.

. If your index contains a timestamp field that you want to use to perform time-based comparisons, select the *Index contains time-based events* option and select the index field that contains the timestamp. Kibana reads the index mapping to list all of the fields that contain a timestamp.
. By default, Kibana restricts wildcard expansion of time-based index patterns to indices with data within the currently selected time range. Click *Do not expand index pattern when searching* to disable this behavior.
. Click *Create* to add the index pattern.
. To designate the new pattern as the default pattern to load when you view the Discover tab, click the *favorite* button.

NOTE: When you define an index pattern, indices that match that pattern must exist in Elasticsearch, and those indices must contain data.

To use an event time in an index name, enclose the static text in the pattern and specify the date format using the tokens described in the following table.

For example, `[logstash-]YYYY.MM.DD` matches all indices whose names have a timestamp of the form `YYYY.MM.DD` appended to the prefix `logstash-`, such as `logstash-2015.01.31` and `logstash-2015.02.01`.

[float]
[[date-format-tokens]]
.Date Format Tokens
[horizontal]
`M`:: Month - cardinal: 1 2 3 ... 12
`Mo`:: Month - ordinal: 1st 2nd 3rd ... 12th
`MM`:: Month - two-digit: 01 02 03 ... 12
`MMM`:: Month - abbreviation: Jan Feb Mar ... Dec
`MMMM`:: Month - full: January February March ... December
`Q`:: Quarter: 1 2 3 4
`D`:: Day of Month - cardinal: 1 2 3 ... 31
`Do`:: Day of Month - ordinal: 1st 2nd 3rd ... 31st
`DD`:: Day of Month - two-digit: 01 02 03 ... 31
`DDD`:: Day of Year - cardinal: 1 2 3 ... 365
`DDDo`:: Day of Year - ordinal: 1st 2nd 3rd ... 365th
`DDDD`:: Day of Year - three-digit: 001 002 ... 364 365
`d`:: Day of Week - cardinal: 0 1 2 ... 6
`do`:: Day of Week - ordinal: 0th 1st 2nd ... 6th
`dd`:: Day of Week - two-letter abbreviation: Su Mo Tu ... Sa
`ddd`:: Day of Week - three-letter abbreviation: Sun Mon Tue ... Sat
`dddd`:: Day of Week - full: Sunday Monday Tuesday ... Saturday
`e`:: Day of Week (locale): 0 1 2 ... 6
`E`:: Day of Week (ISO): 1 2 3 ... 7
`w`:: Week of Year - cardinal (locale): 1 2 3 ... 53
`wo`:: Week of Year - ordinal (locale): 1st 2nd 3rd ... 53rd
`ww`:: Week of Year - two-digit (locale): 01 02 03 ... 53
`W`:: Week of Year - cardinal (ISO): 1 2 3 ... 53
`Wo`:: Week of Year - ordinal (ISO): 1st 2nd 3rd ... 53rd
`WW`:: Week of Year - two-digit (ISO): 01 02 03 ... 53
`YY`:: Year - two-digit: 70 71 72 ... 30
`YYYY`:: Year - four-digit: 1970 1971 1972 ... 2030
`gg`:: Week Year - two-digit (locale): 70 71 72 ... 30
`gggg`:: Week Year - four-digit (locale): 1970 1971 1972 ... 2030
`GG`:: Week Year - two-digit (ISO): 70 71 72 ... 30
`GGGG`:: Week Year - four-digit (ISO): 1970 1971 1972 ... 2030
`A`:: AM/PM: AM PM
`a`:: am/pm: am pm
`H`:: Hour: 0 1 2 ... 23
`HH`:: Hour - two-digit: 00 01 02 ... 23
`h`:: Hour - 12-hour clock: 1 2 3 ... 12
`hh`:: Hour - 12-hour clock, two-digit: 01 02 03 ... 12
`m`:: Minute: 0 1 2 ... 59
`mm`:: Minute - two-digit: 00 01 02 ... 59
`s`:: Second: 0 1 2 ... 59
`ss`:: Second - two-digit: 00 01 02 ... 59
`S`:: Fractional Second - 10ths: 0 1 2 ... 9
`SS`:: Fractional Second - 100ths: 0 1 ... 98 99
`SSS`:: Fractional Second - 1000ths: 0 1 ... 998 999
`Z`:: Timezone - UTC offset (hh:mm format): -07:00 -06:00 -05:00 ... +07:00
`ZZ`:: Timezone - UTC offset (hhmm format): -0700 -0600 -0500 ... +0700
`X`:: Unix Timestamp: 1360013296
`x`:: Unix Millisecond Timestamp: 1360013296123
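These tokens follow the moment.js format syntax rather than `strftime`, but the common date tokens map directly. As a sketch (assuming a Unix-like system with the standard `date` command), you can preview the concrete index name that a daily pattern such as `[logstash-]YYYY.MM.DD` resolves to for today:

```shell
# moment.js YYYY.MM.DD corresponds to strftime %Y.%m.%d
echo "logstash-$(date +%Y.%m.%d)"
```

The bracketed `[logstash-]` portion is literal text, so only the token portion varies per day.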

[float]
[[set-default-pattern]]
=== Setting the Default Index Pattern
The default index pattern is loaded automatically when you view the *Discover* tab. Kibana displays a star to the left of the name of the default pattern in the Index Patterns list on the *Settings > Indices* tab. The first pattern you create is automatically designated as the default pattern.

To set a different pattern as the default index pattern:

. Go to the *Settings > Indices* tab.
. Select the pattern you want to set as the default in the Index Patterns list.
. Click the pattern's *Favorite* button.

NOTE: You can also manually set the default index pattern in *Advanced > Settings*.

[float]
[[reload-fields]]
=== Reloading the Index Fields List
When you add an index mapping, Kibana automatically scans the indices that match the pattern to display a list of the index fields. You can reload the index fields list to pick up any newly added fields.

Reloading the index fields list also resets Kibana's popularity counters for the fields. The popularity counters keep track of the fields you've used most often within Kibana and are used to sort fields within lists.

To reload the index fields list:

. Go to the *Settings > Indices* tab.
. Select an index pattern from the Index Patterns list.
. Click the pattern's *Reload* button.

[float]
[[delete-pattern]]
=== Deleting an Index Pattern
To delete an index pattern:

. Go to the *Settings > Indices* tab.
. Select the pattern you want to remove in the Index Patterns list.
. Click the pattern's *Delete* button.
. Confirm that you want to remove the index pattern.

[[managing-fields]]
=== Managing Fields
The fields for the index pattern are listed in a table. Click a column header to sort the table by that column. Click the *Controls* button in the rightmost column for a given field to edit the field's properties. You can manually set the field's format from the *Format* drop-down. Format options vary based on the field's type.

You can also set the field's popularity value in the *Popularity* text entry box to any desired value. Click the *Update Field* button to confirm your changes or *Cancel* to return to the list of fields.

Kibana has https://www.elastic.co/blog/kibana-4-1-field-formatters[field formatters] for the following field types:

==== String Field Formatters

String fields support the `String` and `Url` formatters.

include::string-formatter.asciidoc[]

include::url-formatter.asciidoc[]

==== Date Field Formatters

Date fields support the `Date`, `Url`, and `String` formatters.

The `Date` formatter enables you to choose the display format of date stamps using the http://momentjs.com/[moment.js] standard format definitions.

include::string-formatter.asciidoc[]

include::url-formatter.asciidoc[]

==== Geographic Point Field Formatters

Geographic point fields support the `String` formatter.

include::string-formatter.asciidoc[]

==== Numeric Field Formatters

Numeric fields support the `Url`, `Bytes`, `Duration`, `Number`, `Percentage`, `String`, and `Color` formatters.

include::url-formatter.asciidoc[]

include::string-formatter.asciidoc[]

include::duration-formatter.asciidoc[]

include::color-formatter.asciidoc[]

The `Bytes`, `Number`, and `Percentage` formatters enable you to choose the display formats of numbers in this field using the https://adamwdraper.github.io/Numeral-js/[numeral.js] standard format definitions.
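As a brief, hedged illustration of numeral.js format strings (the exact set available in the formatter UI may differ by Kibana version): `0,0` adds thousands separators, and `0.00%` renders a ratio as a percentage with two decimal places.

----
0,0      applied to 1234567 -> 1,234,567
0.00%    applied to 0.123   -> 12.30%
----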

[float]
[[create-scripted-field]]
=== Creating a Scripted Field
Scripted fields compute data on the fly from the data in your Elasticsearch indices. Scripted field data is shown on the Discover tab as part of the document data, and you can use scripted fields in your visualizations. Scripted field values are computed at query time, so they aren't indexed and cannot be searched.

NOTE: Kibana cannot query scripted fields.

WARNING: Computing data on the fly with scripted fields can be very resource intensive and can have a direct impact on Kibana's performance. Keep in mind that there's no built-in validation of a scripted field. If your scripts are buggy, you'll get exceptions whenever you try to view the dynamically generated data.

Scripted fields use the Lucene expression syntax. For more information, see {ref}/modules-scripting-expression.html[Lucene Expressions Scripts].

You can reference any single-value numeric field in your expressions, for example:

----
doc['field_name'].value
----

To create a scripted field:

. Go to *Settings > Indices*.
. Select the index pattern you want to add a scripted field to.
. Go to the pattern's *Scripted Fields* tab.
. Click *Add Scripted Field*.
. Enter a name for the scripted field.
. Enter the expression that you want to use to compute a value on the fly from your index data.
. Click *Save Scripted Field*.
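For example, assuming your documents have a single-value numeric field named `bytes` (a hypothetical field used here purely for illustration), an expression that derives a kilobyte value on the fly might look like:

----
doc['bytes'].value / 1024
----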

For more information about scripted fields in Elasticsearch, see {ref}/modules-scripting.html[Scripting].

NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you to enable {ref}/modules-scripting.html[dynamic Groovy scripting].

[float]
[[update-scripted-field]]
=== Updating a Scripted Field
To modify a scripted field:

. Go to *Settings > Indices*.
. Click the *Edit* button for the scripted field you want to change.
. Make your changes and then click *Save Scripted Field* to update the field.

WARNING: Keep in mind that there's no built-in validation of a scripted field. If your scripts are buggy, you'll get exceptions whenever you try to view the dynamically generated data.

[float]
[[delete-scripted-field]]
=== Deleting a Scripted Field
To delete a scripted field:

. Go to *Settings > Indices*.
. Click the *Delete* button for the scripted field you want to remove.
. Confirm that you really want to delete the field.

[[advanced-options]]
=== Setting Advanced Options
The *Advanced Settings* page enables you to directly edit settings that control the behavior of the Kibana application. For example, you can change the format used to display dates, specify the default index pattern, and set the precision for displayed decimal values.

To set advanced options:

. Go to *Settings > Advanced*.
. Click the *Edit* button for the option you want to modify.
. Enter a new value for the option.
. Click the *Save* button.

include::advanced-settings.asciidoc[]

[[kibana-server-properties]]
=== Setting Kibana Server Properties

The Kibana server reads properties from the `kibana.yml` file on startup. The default settings configure Kibana to run on `localhost:5601`. To change the host or port number, or connect to Elasticsearch running on a different machine, you'll need to update your `kibana.yml` file. You can also enable SSL and set a variety of other options.
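For example, a minimal sketch of `kibana.yml` overrides (the host and URL values here are illustrative, not required defaults):

[source,yaml]
----
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://elasticsearch.example.com:9200"
----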

include::kibana-yml.asciidoc[]

////
deprecated[4.2, The names of several Kibana server properties changed in the 4.2 release of Kibana. The previous names remain as functional aliases, but are now deprecated and will be removed in a future release of Kibana]

[horizontal]
.Kibana Server Properties Changed in the 4.2 Release
`server.port` added[4.2]:: The port that the Kibana server runs on.
+
*alias*: `port` deprecated[4.2]
+
*default*: `5601`

`server.host` added[4.2]:: The host to bind the Kibana server to.
+
*alias*: `host` deprecated[4.2]
+
*default*: `"localhost"`

`elasticsearch.url` added[4.2]:: The Elasticsearch instance where the indices you want to query reside.
+
*alias*: `elasticsearch_url` deprecated[4.2]
+
*default*: `"http://localhost:9200"`

`elasticsearch.preserveHost` added[4.2]:: By default, the host specified in the incoming request from the browser is specified as the host in the corresponding request Kibana sends to Elasticsearch. If you set this option to `false`, Kibana uses the host specified in `elasticsearch_url`.
+
*alias*: `elasticsearch_preserve_host` deprecated[4.2]
+
*default*: `true`

`elasticsearch.ssl.cert` added[4.2]:: The path to the SSL certificate for Elasticsearch instances that require a client certificate.
+
*alias*: `kibana_elasticsearch_client_crt` deprecated[4.2]

`elasticsearch.ssl.key` added[4.2]:: The path to the SSL key for Elasticsearch instances that require a client key.
+
*alias*: `kibana_elasticsearch_client_key` deprecated[4.2]

`elasticsearch.password` added[4.2]:: The password for Elasticsearch instances that use HTTP basic authentication. Kibana users still need to authenticate with Elasticsearch, which is proxied through the Kibana server.
+
*alias*: `kibana_elasticsearch_password` deprecated[4.2]

`elasticsearch.username` added[4.2]:: The username for Elasticsearch instances that use HTTP basic authentication. Kibana users still need to authenticate with Elasticsearch, which is proxied through the Kibana server.
+
*alias*: `kibana_elasticsearch_username` deprecated[4.2]

`elasticsearch.pingTimeout` added[4.2]:: The maximum wait time in milliseconds for ping responses by Elasticsearch.
+
*alias*: `ping_timeout` deprecated[4.2]
+
*default*: `1500`

`elasticsearch.startupTimeout` added[4.2]:: The maximum wait time in milliseconds for Elasticsearch discovery at Kibana startup. Kibana repeats attempts to discover an Elasticsearch cluster after the specified time elapses.
+
*alias*: `startup_timeout` deprecated[4.2]
+
*default*: `5000`

`kibana.index` added[4.2]:: The name of the index where saved searches, visualizations, and dashboards will be stored.
+
*alias*: `kibana_index` deprecated[4.2]
+
*default*: `.kibana`

`kibana.defaultAppId` added[4.2]:: The page that will be displayed when you launch Kibana: `discover`, `visualize`, `dashboard`, or `settings`.
+
*alias*: `default_app_id` deprecated[4.2]
+
*default*: `"discover"`

`logging.silent` added[4.2]:: Set this value to `true` to suppress all logging output.
+
*default*: `false`

`logging.quiet` added[4.2]:: Set this value to `true` to suppress all logging output except for log messages tagged `error`, `fatal`, or Hapi.js errors.
+
*default*: `false`

`logging.verbose` added[4.2]:: Set this value to `true` to log all events, including system usage information and all requests.
+
*default*: `false`

`logging.events` added[4.2]:: You can specify a map of log types to output tags for this parameter to create a customized set of loggable events, as in the following example:
+
[source,json]
{
  log: ['info', 'warning', 'error', 'fatal'],
  response: '*',
  error: '*'
}

`elasticsearch.requestTimeout` added[4.2]:: How long to wait for responses from the Kibana backend or Elasticsearch, in milliseconds.
+
*alias*: `request_timeout` deprecated[4.2]
+
*default*: `500000`

`elasticsearch.requestHeadersWhitelist` added[5.0]:: List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side headers, set this value to `[]` (an empty list).
+
*default*: `[ 'authorization' ]`

`elasticsearch.shardTimeout` added[4.2]:: How long Elasticsearch should wait for responses from shards. Set to `0` to disable.
+
*alias*: `shard_timeout` deprecated[4.2]
+
*default*: `0`

`elasticsearch.ssl.verify` added[4.2]:: Indicates whether or not to validate the Elasticsearch SSL certificate. Set to `false` to disable SSL verification.
+
*alias*: `verify_ssl` deprecated[4.2]
+
*default*: `true`

`elasticsearch.ssl.ca`:: An array of paths to the CA certificates for your Elasticsearch instance. Specify these paths if you are using a self-signed certificate so the certificate can be verified; otherwise, disable `elasticsearch.ssl.verify`.
+
*alias*: `ca` deprecated[4.2]

`server.ssl.key` added[4.2]:: The path to your Kibana server's key file. Must be set to encrypt communications between the browser and Kibana.
+
*alias*: `ssl_key_file` deprecated[4.2]

`server.ssl.cert` added[4.2]:: The path to your Kibana server's certificate file. Must be set to encrypt communications between the browser and Kibana.
+
*alias*: `ssl_cert_file` deprecated[4.2]

`pid.file` added[4.2]:: The location where you want to store the process ID file.
+
*alias*: `pid_file` deprecated[4.2]
+
*default*: `/var/run/kibana.pid`

`logging.dest` added[4.2]:: The location where you want to store Kibana's log output. If not specified, log output is written to standard output and not stored. Specifying a log file suppresses log writes to standard output.
+
*alias*: `log_file` deprecated[4.2]
////

[[managing-saved-objects]]
=== Managing Saved Searches, Visualizations, and Dashboards

You can view, edit, and delete saved searches, visualizations, and dashboards from *Settings > Objects*. You can also export or import sets of searches, visualizations, and dashboards.

Viewing a saved object displays the selected item in the *Discover*, *Visualize*, or *Dashboard* page. To view a saved object:

. Go to *Settings > Objects*.
. Select the object you want to view.
. Click the *View* button.

Editing a saved object enables you to directly modify the object definition. You can change the name of the object, add a description, and modify the JSON that defines the object's properties.

If you attempt to access an object whose index has been deleted, Kibana displays its Edit Object page. You can:

* Recreate the index so you can continue using the object.
* Delete the object and recreate it using a different index.
* Change the index name referenced in the object's `kibanaSavedObjectMeta.searchSourceJSON` to point to an existing index pattern. This is useful if the index you were working with has been renamed.

WARNING: No validation is performed for object properties. Submitting invalid changes will render the object unusable. Generally, you should use the *Discover*, *Visualize*, or *Dashboard* pages to create new objects instead of directly editing existing ones.

To edit a saved object:

. Go to *Settings > Objects*.
. Select the object you want to edit.
. Click the *Edit* button.
. Make your changes to the object definition.
. Click the *Save Object* button.

To delete a saved object:

. Go to *Settings > Objects*.
. Select the object you want to delete.
. Click the *Delete* button.
. Confirm that you really want to delete the object.

To export a set of objects:

. Go to *Settings > Objects*.
. Select the type of object you want to export. You can export a set of dashboards, searches, or visualizations.
. Click the selection box for the objects you want to export, or click the *Select All* box.
. Click *Export* to select a location to write the exported JSON.

WARNING: Exported dashboards do not include their associated index patterns. Re-create the index patterns manually before importing saved dashboards to a Kibana instance running on another Elasticsearch cluster.

To import a set of objects:

. Go to *Settings > Objects*.
. Click *Import* to navigate to the JSON file representing the set of objects to import.
. Click *Open* after selecting the JSON file.
. If any objects in the set would overwrite objects already present in Kibana, confirm the overwrite.
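Saved searches, visualizations, and dashboards are stored as documents in the `.kibana` index (or whatever `kibana.index` is configured to). As a sketch, assuming a local Elasticsearch node, you can inspect these objects directly:

[source,shell]
----
# List the saved objects stored in the .kibana index
curl 'http://localhost:9200/.kibana/_search?pretty'
----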

@ -1,142 +1,52 @@
[[setup]]
== Setting Up Kibana
You can install Kibana and start exploring your Elasticsearch indices in minutes. All you need is:

= Setup Kibana

* Elasticsearch {esversion}
* A modern web browser - https://www.elastic.co/support/matrix#show_browsers[Supported Browsers].
* Information about your Elasticsearch installation:
** URL of the Elasticsearch instance you want to connect to.
** Which Elasticsearch indices you want to search.

[partintro]
--
This section includes information on how to set up Kibana and get it running, including:

* Downloading
* Installing
* Starting
* Configuring
* Upgrading

[[supported-platforms]]
[float]
== Supported platforms

Packages of Kibana are provided for and tested against Linux, Darwin, and Windows. Since Kibana runs on Node.js, we include the necessary Node.js binaries for these platforms. Running Kibana against a separately maintained version of Node.js is not supported.

[float]
[[install]]
=== Install Kibana
To install and start Kibana:

[[elasticsearch-version]]
== Elasticsearch version

. Download the https://www.elastic.co/downloads/kibana[Kibana 4 binary package] for your platform.
. Extract the `.zip` or `tar.gz` archive file.
. If you're upgrading, migrate any configuration changes from the previous `kibana.yml` to the new version.
. Install Kibana plugins (optional).
. Run Kibana from the install directory: `bin/kibana` (Linux/MacOSX) or `bin\kibana.bat` (Windows).

Kibana should be configured to run against an Elasticsearch node of the same version.

That's it! Kibana is now running on port 5601.

Running different major version releases of Kibana and Elasticsearch (e.g. Kibana 5.x and Elasticsearch 2.x) is not supported, nor is running a minor version of Kibana that is newer than the version of Elasticsearch (e.g. Kibana 5.1 and Elasticsearch 5.0).

On Unix, you can also install Kibana using the package manager suited for your distribution. For more information, see <<setup-repositories, Installing Kibana with apt and yum>>.

Running different patch version releases of Kibana and Elasticsearch (e.g. Kibana 5.0.0 and Elasticsearch 5.0.1) is generally supported, though we encourage users to run the same versions of Kibana and Elasticsearch down to the patch version.
--

IMPORTANT: If your Elasticsearch installation is protected by {xpack}/xpack-security.html[{scyld}], see {xpack}/kibana.html[Using Kibana with X-Pack Security] for additional setup instructions.

include::setup/install.asciidoc[]

[float]
[[connect]]
=== Connect Kibana with Elasticsearch
Before you can start using Kibana, you need to tell it which Elasticsearch indices you want to explore. The first time you access Kibana, you are prompted to define an _index pattern_ that matches the name of one or more of your indices. That's it. That's all you need to configure to start using Kibana. You can add index patterns at any time from the <<settings-create-pattern,Settings tab>>.

include::setup/settings.asciidoc[]

TIP: By default, Kibana connects to the Elasticsearch instance running on `localhost`. To connect to a different Elasticsearch instance, modify the Elasticsearch URL in the `kibana.yml` configuration file and restart Kibana. For information about using Kibana with your production nodes, see <<production>>.

include::setup/access.asciidoc[]

To configure the Elasticsearch indices you want to access with Kibana:

include::setup/connect-to-elasticsearch.asciidoc[]

. Point your browser at port 5601 to access the Kibana UI. For example, `localhost:5601` or `http://YOURDOMAIN.com:5601`.
+
image:images/Start-Page.png[Kibana start page]
+
. Specify an index pattern that matches the name of one or more of your Elasticsearch indices. By default, Kibana guesses that you're working with data being fed into Elasticsearch by Logstash. If that's the case, you can use the default `logstash-*` as your index pattern. The asterisk (*) matches zero or more characters in an index's name. If your Elasticsearch indices follow some other naming convention, enter an appropriate pattern. The "pattern" can also simply be the name of a single index.
. Select the index field that contains the timestamp that you want to use to perform time-based comparisons. Kibana reads the index mapping to list all of the fields that contain a timestamp. If your index doesn't have time-based data, disable the *Index contains time-based events* option.
+
WARNING: Using event times to create index names is *deprecated* in this release of Kibana. Support for this functionality will be removed entirely in the next major Kibana release. Elasticsearch 2.1 includes sophisticated date parsing APIs that Kibana uses to determine date information, removing the need to specify dates in the index pattern name.
+
. Click *Create* to add the index pattern. This first pattern is automatically configured as the default. When you have more than one index pattern, you can designate which one to use as the default from *Settings > Indices*.

include::setup/production.asciidoc[]

All done! Kibana is now connected to your Elasticsearch data. Kibana displays a read-only list of fields configured for the matching index.

NOTE: Kibana relies on dynamic mapping to use fields in visualizations and manage the `.kibana` index. If you have disabled dynamic mapping, you need to manually provide mappings for the fields that Kibana uses to create visualizations. For more information, see <<kibana-dynamic-mapping, Kibana and Elasticsearch Dynamic Mapping>>.

[float]
[[explore]]
=== Start Exploring your Data!
You're ready to dive into your data:

* Search and browse your data interactively from the <<discover, Discover>> page.
* Chart and map your data from the <<visualize, Visualize>> page.
* Create and view custom dashboards from the <<dashboard, Dashboard>> page.

For a step-by-step introduction to these core Kibana concepts, see the <<getting-started, Getting Started>> tutorial.

[float]
[[kibana-dynamic-mapping]]
=== Kibana and Elasticsearch Dynamic Mapping
By default, Elasticsearch enables {ref}/dynamic-mapping.html[dynamic mapping] for fields. Kibana needs dynamic mapping to use fields in visualizations correctly, as well as to manage the `.kibana` index where saved searches, visualizations, and dashboards are stored.

If your Elasticsearch use case requires you to disable dynamic mapping, you need to manually provide mappings for fields that Kibana uses to create visualizations. You also need to manually enable dynamic mapping for the `.kibana` index.

The following procedure assumes that the `.kibana` index does not already exist in Elasticsearch and that the `index.mapper.dynamic` setting in `elasticsearch.yml` is set to `false`:

. Start Elasticsearch.
. Create the `.kibana` index with dynamic mapping enabled just for that index:
+
[source,shell]
PUT .kibana
{
  "index.mapper.dynamic": true
}
+
. Start Kibana, navigate to the web UI, and verify that there are no error messages related to dynamic mapping.
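The index creation in step 2 can also be issued from the command line; a sketch assuming a local node on the default port:

[source,shell]
----
curl -XPUT 'http://localhost:9200/.kibana' -d '{
  "index.mapper.dynamic": true
}'
----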

include::kibana-repositories.asciidoc[]
|
||||
|
||||
[[upgrading-kibana]]
|
||||
=== Upgrading Kibana
|
||||
|
||||
Your existing Kibana version is generally compatible with the next minor version release of Elasticsearch.
|
||||
This means you should upgrade your Elasticsearch cluster(s) before or at the same time as Kibana.
|
||||
We cannot guarantee compatibility between major version releases so in those cases both Elasticsearch and
|
||||
Kibana must be upgraded together.
|
||||
|
||||
To upgrade Kibana:
|
||||
|
||||
. Create a {ref}/modules-snapshots.html[snapshot]
|
||||
of the existing `.kibana` index.
|
||||
. Back up the `kibana.yml` configuration file.
|
||||
. Take note of the Kibana plugins that are installed:
|
||||
* `bin/kibana plugin --list` on 4.x versions of Kibana.
|
||||
* `bin/kibana-plugin list` on 5.x versions of Kibana.
|
||||
. To upgrade from an Archive File:
|
||||
.. Extract the new version of Kibana into a different directory. See steps below.
|
||||
.. Migrate any custom configuration from your old kibana.yml to your new one
|
||||
.. Follow other steps below to complete the new installation.
|
||||
.. Once the new version is fully configured and working with required plugins, remove the previous version
|
||||
of Kibana
|
||||
. To upgrade using a Linux Package Manager:
|
||||
.. Uninstall the existing Kibana package: `apt-get remove kibana` or `yum remove kibana`
|
||||
.. Install the new Kibana package. There have been some installer issues between various version of
|
||||
Kibana so the uninstall and install process is safer than an upgrade.
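When migrating configuration from an archive install, it can help to script the comparison between the old and new `kibana.yml`. The sketch below is purely illustrative (the directories and the `server.port` setting are stand-ins, not real install paths) and appends any custom lines from the old file that are missing from the new one:

[source,sh]
--------------------------------------------
# Illustrative only: stand-in directories and a single custom setting.
mkdir -p /tmp/kibana-old/config /tmp/kibana-new/config
echo 'server.port: 5602' > /tmp/kibana-old/config/kibana.yml
touch /tmp/kibana-new/config/kibana.yml
# Append lines from the old config that are absent from the new config.
grep -vxFf /tmp/kibana-new/config/kibana.yml \
  /tmp/kibana-old/config/kibana.yml >> /tmp/kibana-new/config/kibana.yml
cat /tmp/kibana-new/config/kibana.yml
--------------------------------------------

Review the result by hand afterwards; setting names can change between major versions.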

include::setup/upgrade.asciidoc[]

@ -11,6 +11,7 @@ time filter is set to the last 15 minutes and the search query is set to match-a

If you don't see any documents, try setting the time filter to a wider time range.
If you still don't see any results, it's possible that you don't *have* any documents.

[float]
[[status]]
=== Checking Kibana Status

docs/setup/connect-to-elasticsearch.asciidoc (new file, 82 lines)
@ -0,0 +1,82 @@
[[connect-to-elasticsearch]]
== Connect Kibana with Elasticsearch

Before you can start using Kibana, you need to tell it which Elasticsearch indices you want to explore.
The first time you access Kibana, you are prompted to define an _index pattern_ that matches the name of
one or more of your indices. That's it. That's all you need to configure to start using Kibana. You can
add index patterns at any time from the <<settings-create-pattern,Management tab>>.

TIP: By default, Kibana connects to the Elasticsearch instance running on `localhost`. To connect to a
different Elasticsearch instance, modify the Elasticsearch URL in the `kibana.yml` configuration file and
restart Kibana. For information about using Kibana with your production nodes, see <<production>>.

To configure the Elasticsearch indices you want to access with Kibana:

. Point your browser at port 5601 to access the Kibana UI. For example, `localhost:5601` or
`http://YOURDOMAIN.com:5601`.
+
image:images/Start-Page.png[Kibana start page]
+
. Specify an index pattern that matches the name of one or more of your Elasticsearch indices. By default,
Kibana guesses that you're working with data being fed into Elasticsearch by Logstash. If that's the case,
you can use the default `logstash-*` as your index pattern. The asterisk (*) matches zero or more
characters in an index's name. If your Elasticsearch indices follow some other naming convention, enter
an appropriate pattern. The pattern can also simply be the name of a single index.
. Select the index field that contains the timestamp that you want to use to perform time-based
comparisons. Kibana reads the index mapping to list all of the fields that contain a timestamp. If your
index doesn't have time-based data, disable the *Index contains time-based events* option.
+
WARNING: Using event times to create index names is *deprecated* in this release of Kibana. Support for
this functionality will be removed entirely in the next major Kibana release. Elasticsearch 2.1 includes
sophisticated date parsing APIs that Kibana uses to determine date information, removing the need to
specify dates in the index pattern name.
+
. Click *Create* to add the index pattern. This first pattern is automatically configured as the default.
When you have more than one index pattern, you can designate which one to use as the default from
*Settings > Indices*.
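The `*` in an index pattern behaves like the `*` of a shell glob, which makes it easy to experiment with locally. A small illustration (the index names here are made up):

[source,sh]
--------------------------------------------
# * matches zero or more characters, so logstash-* matches any index
# whose name starts with "logstash-" (including "logstash-" itself).
for index in logstash-2016.10.01 logstash-2016.10.02 myapp-2016.10.01; do
  case "$index" in
    logstash-*) echo "$index: matches logstash-*" ;;
    *)          echo "$index: no match" ;;
  esac
done
--------------------------------------------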

All done! Kibana is now connected to your Elasticsearch data. Kibana displays a read-only list of fields
configured for the matching index.

NOTE: Kibana relies on dynamic mapping to use fields in visualizations and manage the
`.kibana` index. If you have disabled dynamic mapping, you need to manually provide
mappings for the fields that Kibana uses to create visualizations. For more information, see
<<kibana-dynamic-mapping, Kibana and Elasticsearch Dynamic Mapping>>.

[float]
[[explore]]
=== Start Exploring your Data!

You're ready to dive into your data:

* Search and browse your data interactively from the <<discover, Discover>> page.
* Chart and map your data from the <<visualize, Visualize>> page.
* Create and view custom dashboards from the <<dashboard, Dashboard>> page.

For a step-by-step introduction to these core Kibana concepts, see the <<getting-started,
Getting Started>> tutorial.

[float]
[[kibana-dynamic-mapping]]
=== Kibana and Elasticsearch Dynamic Mapping

By default, Elasticsearch enables {es-ref}dynamic-mapping.html[dynamic mapping] for fields. Kibana needs
dynamic mapping to use fields in visualizations correctly, as well as to manage the `.kibana` index
where saved searches, visualizations, and dashboards are stored.

If your Elasticsearch use case requires you to disable dynamic mapping, you need to manually provide
mappings for fields that Kibana uses to create visualizations. You also need to manually enable dynamic
mapping for the `.kibana` index.

The following procedure assumes that the `.kibana` index does not already exist in Elasticsearch and
that the `index.mapper.dynamic` setting in `elasticsearch.yml` is set to `false`:

. Start Elasticsearch.
. Create the `.kibana` index with dynamic mapping enabled just for that index:
+
[source,shell]
--------------------------------------------
PUT .kibana
{
  "index.mapper.dynamic": true
}
--------------------------------------------
+
. Start Kibana, navigate to the web UI, and verify that there are no error messages related to dynamic
mapping.

docs/setup/install.asciidoc (new file, 43 lines)
@ -0,0 +1,43 @@
[[install]]
== Installing Kibana

Kibana is provided in the following package formats:

[horizontal]
`tar.gz`/`zip`::

The `tar.gz` packages are provided for installation on Linux and Darwin and are
the easiest choice for getting started with Kibana.
+
The `zip` package is the only supported package for Windows.
+
<<targz>> or <<windows>>

`deb`::

The `deb` package is suitable for Debian, Ubuntu, and other Debian-based
systems. Debian packages may be downloaded from the Elastic website or from
our Debian repository.
+
<<deb>>

`rpm`::

The `rpm` package is suitable for installation on Red Hat, CentOS, SLES,
OpenSuSE, and other RPM-based systems. RPMs may be downloaded from the
Elastic website or from our RPM repository.
+
<<rpm>>

IMPORTANT: If your Elasticsearch installation is protected by {xpack-ref}xpack-security.html[X-Pack Security],
see {xpack-ref}kibana.html[Using Kibana with X-Pack Security] for additional setup
instructions.

include::install/targz.asciidoc[]

include::install/deb.asciidoc[]

include::install/rpm.asciidoc[]

include::install/windows.asciidoc[]

docs/setup/install/deb.asciidoc (new file, 196 lines)
@ -0,0 +1,196 @@
[[deb]]
=== Install Kibana with Debian Package

The Debian package for Kibana can be <<install-deb,downloaded from our website>>
or from our <<deb-repo,APT repository>>. It can be used to install
Kibana on any Debian-based system such as Debian and Ubuntu.

The latest stable version of Kibana can be found on the
link:/downloads/kibana[Download Kibana] page. Other versions can
be found on the link:/downloads/past-releases[Past Releases page].

[[deb-key]]
==== Import the Elastic PGP Key

include::key.asciidoc[]

[source,sh]
-------------------------
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
-------------------------

[[deb-repo]]
==== Installing from the APT repository

ifeval::["{release-state}"=="unreleased"]

Version {version} of Kibana has not yet been released.

endif::[]

ifeval::["{release-state}"!="unreleased"]

You may need to install the `apt-transport-https` package on Debian before proceeding:

[source,sh]
--------------------------------------------------
sudo apt-get install apt-transport-https
--------------------------------------------------

Save the repository definition to +/etc/apt/sources.list.d/elastic-{major-version}.list+:

ifeval::["{release-state}"=="released"]

["source","sh",subs="attributes,callouts"]
--------------------------------------------------
echo "deb https://artifacts.elastic.co/packages/{major-version}/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-{major-version}.list
--------------------------------------------------

endif::[]

ifeval::["{release-state}"=="prerelease"]

["source","sh",subs="attributes,callouts"]
--------------------------------------------------
echo "deb https://artifacts.elastic.co/packages/{major-version}-prerelease/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-{major-version}.list
--------------------------------------------------

endif::[]

[WARNING]
==================================================
Do not use `add-apt-repository` as it will add a `deb-src` entry as well, but
we do not provide a source package. If you have added the `deb-src` entry, you
will see an error like the following:

    Unable to find expected entry 'main/source/Sources' in Release file
    (Wrong sources.list entry or malformed file)

Delete the `deb-src` entry from the `/etc/apt/sources.list` file and the
installation should work as expected.
==================================================
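Removing the offending entry can also be scripted. The following sketch operates on a throwaway copy of a sources list (the `5.x` path is sample data; adapt the file path and run with `sudo` before using it on a real `/etc/apt/sources.list`):

[source,sh]
--------------------------------------------
# Build a sample sources list containing an unwanted deb-src entry.
printf '%s\n' \
  'deb https://artifacts.elastic.co/packages/5.x/apt stable main' \
  'deb-src https://artifacts.elastic.co/packages/5.x/apt stable main' \
  > /tmp/sources.list
# Delete deb-src lines that point at the Elastic repository.
sed -i '/^deb-src .*artifacts\.elastic\.co/d' /tmp/sources.list
cat /tmp/sources.list
--------------------------------------------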

You can install the Kibana Debian package with:

[source,sh]
--------------------------------------------------
sudo apt-get update && sudo apt-get install kibana
--------------------------------------------------

[WARNING]
==================================================
If two entries exist for the same Kibana repository, you will see an error like this during `apt-get update`:

["literal",subs="attributes,callouts"]

    Duplicate sources.list entry https://artifacts.elastic.co/packages/{major-version}/apt/ ...

Examine +/etc/apt/sources.list.d/elastic-{major-version}.list+ for the duplicate entry, or locate the duplicate entry amongst the files in `/etc/apt/sources.list.d/` and the `/etc/apt/sources.list` file.
==================================================

endif::[]

[[install-deb]]
==== Download and install the Debian package manually

ifeval::["{release-state}"=="unreleased"]

Version {version} of Kibana has not yet been released.

endif::[]

ifeval::["{release-state}"!="unreleased"]

The Debian package for Kibana v{version} can be downloaded from the website and installed as follows:

["source","sh",subs="attributes"]
--------------------------------------------
wget https://artifacts.elastic.co/downloads/kibana/kibana-{version}.deb
sha1sum kibana-{version}.deb <1>
sudo dpkg -i kibana-{version}.deb
--------------------------------------------
<1> Compare the SHA produced by `sha1sum` or `shasum` with the
https://artifacts.elastic.co/downloads/kibana/kibana-{version}.deb.sha1[published SHA].
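The checksum comparison from the callout above can be scripted. This sketch demonstrates the idea on a locally generated file; substitute the real package and its published `.sha1` file:

[source,sh]
--------------------------------------------
# Stand-in for the downloaded package and its published checksum file.
echo 'example package contents' > /tmp/kibana.deb
sha1sum /tmp/kibana.deb | awk '{print $1}' > /tmp/kibana.deb.sha1
# Fail loudly if the computed SHA does not match the published one.
computed=$(sha1sum /tmp/kibana.deb | awk '{print $1}')
published=$(cat /tmp/kibana.deb.sha1)
[ "$computed" = "$published" ] && echo "checksum OK" || echo "checksum MISMATCH"
--------------------------------------------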

endif::[]

include::init-systemd.asciidoc[]

[[deb-running-init]]
==== Running Kibana with SysV `init`

Use the `update-rc.d` command to configure Kibana to start automatically
when the system boots up:

[source,sh]
--------------------------------------------------
sudo update-rc.d kibana defaults 95 10
--------------------------------------------------

Kibana can be started and stopped using the `service` command:

[source,sh]
--------------------------------------------
sudo -i service kibana start
sudo -i service kibana stop
--------------------------------------------

If Kibana fails to start for any reason, it will print the reason for
failure to STDOUT. Log files can be found in `/var/log/kibana/`.

[[deb-running-systemd]]
include::systemd.asciidoc[]

[[deb-configuring]]
==== Configuring Kibana

Kibana loads its configuration from the `/etc/kibana/kibana.yml`
file by default. The format of this config file is explained in
<<settings>>.

[[deb-layout]]
==== Directory layout of Debian package

The Debian package places config files, logs, and the data directory in the appropriate
locations for a Debian-based system:

[cols="<h,<,<m,<m",options="header",]
|=======================================================================
| Type | Description | Default Location | Setting

| home
| Kibana home directory or `$KIBANA_HOME`
| /usr/share/kibana
d|

| bin
| Binary scripts including `kibana` to start the Kibana server
and `kibana-plugin` to install plugins
| /usr/share/kibana/bin
d|

| config
| Configuration files including `kibana.yml`
| /etc/kibana
d|

| data
| The location of the data files written to disk by Kibana and its plugins
| /var/lib/kibana
d|

| optimize
| Transpiled source code. Certain administrative actions (e.g. plugin install)
result in the source code being retranspiled on the fly.
| /usr/share/kibana/optimize
d|

| plugins
| Plugin files location. Each plugin will be contained in a subdirectory.
| /usr/share/kibana/plugins
d|

|=======================================================================

docs/setup/install/init-systemd.asciidoc (new file, 11 lines)
@ -0,0 +1,11 @@
==== SysV `init` vs `systemd`

Kibana is not started automatically after installation. How to start
and stop Kibana depends on whether your system uses SysV `init` or
`systemd` (used by newer distributions). You can tell which is being used by
running this command:

[source,sh]
--------------------------------------------
ps -p 1
--------------------------------------------
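The answer can be narrowed to just the process name of PID 1 (the exact output depends on your distribution; `systemd` indicates systemd, while `init` indicates a SysV-style system):

[source,sh]
--------------------------------------------
# Print only the command name of PID 1, without a header line.
ps -p 1 -o comm=
--------------------------------------------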

docs/setup/install/key.asciidoc (new file, 7 lines)
@ -0,0 +1,7 @@
We sign all of our packages with the Elastic Signing Key (PGP key
https://pgp.mit.edu/pks/lookup?op=vindex&search=0xD27D666CD88E42B4[D88E42B4],
available from https://pgp.mit.edu) with fingerprint:

    4609 5ACC 8548 582C 1A26 99A9 D27D 666C D88E 42B4

Download and install the public signing key:

docs/setup/install/rpm.asciidoc (new file, 188 lines)
@ -0,0 +1,188 @@
[[rpm]]
=== Install Kibana with RPM

The RPM for Kibana can be <<install-rpm,downloaded from our website>>
or from our <<rpm-repo,RPM repository>>. It can be used to install
Kibana on any RPM-based system such as OpenSuSE, SLES, CentOS, Red Hat,
and Oracle Enterprise.

NOTE: RPM install is not supported on distributions with old versions of RPM,
such as SLES 11 and CentOS 5. Please see <<targz>> instead.

The latest stable version of Kibana can be found on the
link:/downloads/kibana[Download Kibana] page. Other versions can
be found on the link:/downloads/past-releases[Past Releases page].

[[rpm-key]]
==== Import the Elastic PGP Key

include::key.asciidoc[]

[source,sh]
-------------------------
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
-------------------------

[[rpm-repo]]
==== Installing from the RPM repository

ifeval::["{release-state}"=="unreleased"]

Version {version} of Kibana has not yet been released.

endif::[]

ifeval::["{release-state}"!="unreleased"]

Create a file called `kibana.repo` in the `/etc/yum.repos.d/` directory
for Red Hat based distributions, or in the `/etc/zypp/repos.d/` directory for
OpenSuSE based distributions, containing:

ifeval::["{release-state}"=="released"]

["source","sh",subs="attributes,callouts"]
--------------------------------------------------
[kibana-{major-version}]
name=Kibana repository for {major-version} packages
baseurl=https://artifacts.elastic.co/packages/{major-version}/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
--------------------------------------------------

endif::[]

ifeval::["{release-state}"=="prerelease"]

["source","sh",subs="attributes,callouts"]
--------------------------------------------------
[kibana-{major-version}]
name=Kibana repository for {major-version} packages
baseurl=https://artifacts.elastic.co/packages/{major-version}-prerelease/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
--------------------------------------------------

endif::[]

Your repository is now ready for use. You can install Kibana with one of the following commands:

[source,sh]
--------------------------------------------------
sudo yum install kibana <1>
sudo dnf install kibana <2>
sudo zypper install kibana <3>
--------------------------------------------------
<1> Use `yum` on CentOS and older Red Hat based distributions.
<2> Use `dnf` on Fedora and other newer Red Hat distributions.
<3> Use `zypper` on OpenSUSE based distributions.

endif::[]

[[install-rpm]]
==== Download and install the RPM manually

ifeval::["{release-state}"=="unreleased"]

Version {version} of Kibana has not yet been released.

endif::[]

ifeval::["{release-state}"!="unreleased"]

The RPM for Kibana v{version} can be downloaded from the website and installed as follows:

["source","sh",subs="attributes"]
--------------------------------------------
wget https://artifacts.elastic.co/downloads/kibana/kibana-{version}.rpm
sha1sum kibana-{version}.rpm <1>
sudo rpm --install kibana-{version}.rpm
--------------------------------------------
<1> Compare the SHA produced by `sha1sum` or `shasum` with the
https://artifacts.elastic.co/downloads/kibana/kibana-{version}.rpm.sha1[published SHA].

endif::[]

include::init-systemd.asciidoc[]

[[rpm-running-init]]
==== Running Kibana with SysV `init`

Use the `chkconfig` command to configure Kibana to start automatically
when the system boots up:

[source,sh]
--------------------------------------------------
sudo chkconfig --add kibana
--------------------------------------------------

Kibana can be started and stopped using the `service` command:

[source,sh]
--------------------------------------------
sudo -i service kibana start
sudo -i service kibana stop
--------------------------------------------

If Kibana fails to start for any reason, it will print the reason for
failure to STDOUT. Log files can be found in `/var/log/kibana/`.

[[rpm-running-systemd]]
include::systemd.asciidoc[]

[[rpm-configuring]]
==== Configuring Kibana

Kibana loads its configuration from the `/etc/kibana/kibana.yml`
file by default. The format of this config file is explained in
<<settings>>.

[[rpm-layout]]
==== Directory layout of RPM

The RPM places config files, logs, and the data directory in the appropriate
locations for an RPM-based system:

[cols="<h,<,<m,<m",options="header",]
|=======================================================================
| Type | Description | Default Location | Setting

| home
| Kibana home directory or `$KIBANA_HOME`
| /usr/share/kibana
d|

| bin
| Binary scripts including `kibana` to start the Kibana server
and `kibana-plugin` to install plugins
| /usr/share/kibana/bin
d|

| config
| Configuration files including `kibana.yml`
| /etc/kibana
d|

| data
| The location of the data files written to disk by Kibana and its plugins
| /var/lib/kibana
d|

| optimize
| Transpiled source code. Certain administrative actions (e.g. plugin install)
result in the source code being retranspiled on the fly.
| /usr/share/kibana/optimize
d|

| plugins
| Plugin files location. Each plugin will be contained in a subdirectory.
| /usr/share/kibana/plugins
d|

|=======================================================================

docs/setup/install/systemd.asciidoc (new file, 22 lines)
@ -0,0 +1,22 @@
==== Running Kibana with `systemd`

To configure Kibana to start automatically when the system boots up,
run the following commands:

[source,sh]
--------------------------------------------------
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
--------------------------------------------------

Kibana can be started and stopped as follows:

[source,sh]
--------------------------------------------
sudo systemctl start kibana.service
sudo systemctl stop kibana.service
--------------------------------------------

These commands provide no feedback as to whether Kibana was started
successfully or not. Instead, this information will be written to the log
files located in `/var/log/kibana/`.

docs/setup/install/targz.asciidoc (new file, 165 lines)
@ -0,0 +1,165 @@
[[targz]]
=== Install Kibana with `.tar.gz`

Kibana is provided for Linux and Darwin as a `.tar.gz` package. These packages
are the easiest formats to use when trying out Kibana.

The latest stable version of Kibana can be found on the
link:/downloads/kibana[Download Kibana] page.
Other versions can be found on the
link:/downloads/past-releases[Past Releases page].

[[install-linux64]]
==== Download and install the Linux 64-bit package

ifeval::["{release-state}"=="unreleased"]

Version {version} of Kibana has not yet been released.

endif::[]

ifeval::["{release-state}"!="unreleased"]

The 64-bit Linux archive for Kibana v{version} can be downloaded and installed as follows:

["source","sh",subs="attributes"]
--------------------------------------------
wget https://artifacts.elastic.co/downloads/kibana/kibana-{version}-linux-x86_64.tar.gz
sha1sum kibana-{version}-linux-x86_64.tar.gz <1>
tar -xzf kibana-{version}-linux-x86_64.tar.gz
cd kibana/ <2>
--------------------------------------------
<1> Compare the SHA produced by `sha1sum` or `shasum` with the
https://artifacts.elastic.co/downloads/kibana/kibana-{version}-linux-x86_64.tar.gz.sha1[published SHA].
<2> This directory is known as `$KIBANA_HOME`.

endif::[]

[[install-linux32]]
==== Download and install the Linux 32-bit package

ifeval::["{release-state}"=="unreleased"]

Version {version} of Kibana has not yet been released.

endif::[]

ifeval::["{release-state}"!="unreleased"]

The 32-bit Linux archive for Kibana v{version} can be downloaded and installed as follows:

["source","sh",subs="attributes"]
--------------------------------------------
wget https://artifacts.elastic.co/downloads/kibana/kibana-{version}-linux-x86.tar.gz
sha1sum kibana-{version}-linux-x86.tar.gz <1>
tar -xzf kibana-{version}-linux-x86.tar.gz
cd kibana/ <2>
--------------------------------------------
<1> Compare the SHA produced by `sha1sum` or `shasum` with the
https://artifacts.elastic.co/downloads/kibana/kibana-{version}-linux-x86.tar.gz.sha1[published SHA].
<2> This directory is known as `$KIBANA_HOME`.

endif::[]

[[install-darwin64]]
==== Download and install the Darwin package

ifeval::["{release-state}"=="unreleased"]

Version {version} of Kibana has not yet been released.

endif::[]

ifeval::["{release-state}"!="unreleased"]

The Darwin archive for Kibana v{version} can be downloaded and installed as follows:

["source","sh",subs="attributes"]
--------------------------------------------
wget https://artifacts.elastic.co/downloads/kibana/kibana-{version}-darwin-x86_64.tar.gz
sha1sum kibana-{version}-darwin-x86_64.tar.gz <1>
tar -xzf kibana-{version}-darwin-x86_64.tar.gz
cd kibana/ <2>
--------------------------------------------
<1> Compare the SHA produced by `sha1sum` or `shasum` with the
https://artifacts.elastic.co/downloads/kibana/kibana-{version}-darwin-x86_64.tar.gz.sha1[published SHA].
<2> This directory is known as `$KIBANA_HOME`.

endif::[]

[[targz-running]]
==== Running Kibana from the command line

Kibana can be started from the command line as follows:

[source,sh]
--------------------------------------------
./bin/kibana
--------------------------------------------

By default, Kibana runs in the foreground, prints its logs to the
standard output (`stdout`), and can be stopped by pressing `Ctrl-C`.

[[targz-configuring]]
==== Configuring Kibana on the command line

Kibana loads its configuration from the `$KIBANA_HOME/config/kibana.yml`
file by default. The format of this config file is explained in
<<settings>>.
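As a concrete illustration, a minimal `kibana.yml` might override a few of the defaults as follows (these keys exist in the 5.x settings reference; treat the values as placeholders for your environment):

[source,yaml]
--------------------------------------------
# Port and host the Kibana server binds to.
server.port: 5601
server.host: "0.0.0.0"
# URL of the Elasticsearch instance Kibana should query.
elasticsearch.url: "http://localhost:9200"
--------------------------------------------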

[[targz-layout]]
==== Directory layout of `.tar.gz` archives

The `.tar.gz` packages are entirely self-contained. All files and directories
are, by default, contained within `$KIBANA_HOME` -- the directory created when
unpacking the archive.

This is very convenient because you don't have to create any directories to
start using Kibana, and uninstalling Kibana is as easy as removing the
`$KIBANA_HOME` directory. However, it is advisable to change the default
locations of the config and data directories so that you do not delete
important data later on.

[cols="<h,<,<m,<m",options="header",]
|=======================================================================
| Type | Description | Default Location | Setting

| home
| Kibana home directory or `$KIBANA_HOME`
d| Directory created by unpacking the archive
d|

| bin
| Binary scripts including `kibana` to start the Kibana server
and `kibana-plugin` to install plugins
| $KIBANA_HOME/bin
d|

| config
| Configuration files including `kibana.yml`
| $KIBANA_HOME/config
d|

| data
| The location of the data files written to disk by Kibana and its plugins
| $KIBANA_HOME/data
d|

| optimize
| Transpiled source code. Certain administrative actions (e.g. plugin install)
result in the source code being retranspiled on the fly.
| $KIBANA_HOME/optimize
d|

| plugins
| Plugin files location. Each plugin will be contained in a subdirectory.
| $KIBANA_HOME/plugins
d|

|=======================================================================

docs/setup/install/windows.asciidoc (new file, 106 lines)
@ -0,0 +1,106 @@
[[windows]]
=== Install Kibana on Windows

Kibana can be installed on Windows using the `.zip` package.

The latest stable version of Kibana can be found on the
link:/downloads/kibana[Download Kibana] page.
Other versions can be found on the
link:/downloads/past-releases[Past Releases page].

[[install-windows]]
==== Download and install the `.zip` package

ifeval::["{release-state}"=="unreleased"]

Version {version} of Kibana has not yet been released.

endif::[]

ifeval::["{release-state}"!="unreleased"]

Download the `.zip` Windows archive for Kibana v{version} from
https://artifacts.elastic.co/downloads/kibana/kibana-{version}-windows-x86.zip

Unzip it with your favourite unzip tool. This will create a folder called
kibana-{version}-windows-x86, which we will refer to as `%KIBANA_HOME%`. In a
terminal window, `CD` to the `%KIBANA_HOME%` directory, for instance:

["source","sh",subs="attributes"]
----------------------------
CD c:\kibana-{version}-windows-x86
----------------------------

endif::[]

[[windows-running]]
==== Running Kibana from the command line

Kibana can be started from the command line as follows:

[source,sh]
--------------------------------------------
.\bin\kibana
--------------------------------------------

By default, Kibana runs in the foreground, prints its logs to `STDOUT`,
and can be stopped by pressing `Ctrl-C`.

[[windows-configuring]]
==== Configuring Kibana on the command line

Kibana loads its configuration from the `%KIBANA_HOME%\config\kibana.yml`
file by default. The format of this config file is explained in
<<settings>>.

[[windows-layout]]
==== Directory layout of `.zip` archive

The `.zip` package is entirely self-contained. All files and directories are,
by default, contained within `%KIBANA_HOME%` -- the directory created when
unpacking the archive.

This is very convenient because you don't have to create any directories to
start using Kibana, and uninstalling Kibana is as easy as removing the
`%KIBANA_HOME%` directory. However, it is advisable to change the default
locations of the config and data directories so that you do not delete
important data later on.

[cols="<h,<,<m,<m",options="header",]
|=======================================================================
| Type | Description | Default Location | Setting

| home
| Kibana home directory or `%KIBANA_HOME%`
d| Directory created by unpacking the archive
d|

| bin
| Binary scripts including `kibana` to start the Kibana server
and `kibana-plugin` to install plugins
| %KIBANA_HOME%\bin
d|

| config
| Configuration files including `kibana.yml`
| %KIBANA_HOME%\config
d|

| data
| The location of the data files written to disk by Kibana and its plugins
| %KIBANA_HOME%\data
d|

| optimize
| Transpiled source code. Certain administrative actions (e.g. plugin install)
result in the source code being retranspiled on the fly.
| %KIBANA_HOME%\optimize
d|

| plugins
| Plugin files location. Each plugin will be contained in a subdirectory.
| %KIBANA_HOME%\plugins
d|

|=======================================================================
[[production]]
== Using Kibana in a Production Environment

* <<configuring-kibana-shield, Using Kibana with X-Pack>>
* <<enabling-ssl, Enabling SSL>>
* <<load-balancing, Load Balancing Across Multiple Elasticsearch Nodes>>
* <<kibana-tribe, Kibana and Tribe Nodes>>

How you deploy Kibana largely depends on your use case. If you are the only user,
you can run Kibana on your local machine and configure it to point to whatever
and an Elasticsearch client node on the same machine. For more information, see

[[configuring-kibana-shield]]
=== Using Kibana with X-Pack

You can use {xpack-ref}xpack-security.html[X-Pack Security] to control what
Elasticsearch data users can access through Kibana.

When you install X-Pack, Kibana users have to log in. They need to
will be working with in Kibana.

If a user loads a Kibana dashboard that accesses data in an index that they
are not authorized to view, they get an error that indicates the index does
not exist. X-Pack Security does not currently provide a way to control which
users can load which dashboards.

For information about setting up Kibana users and how to configure Kibana
server.ssl.key: /path/to/your/server.key
server.ssl.cert: /path/to/your/server.crt
----

If you are using X-Pack Security or a proxy that provides an HTTPS endpoint for Elasticsearch,
you can configure Kibana to access Elasticsearch via HTTPS so communications between
the Kibana server and Elasticsearch are encrypted.
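As a sketch, the Kibana-side settings for that setup might look like the following `kibana.yml` fragment. The URL and certificate path are placeholders for your environment, and the `elasticsearch.ssl.ca` key is an assumption based on the 5.0 settings list, not something specified in this section:

[source,yaml]
----
# Hypothetical values -- replace with your own endpoint and CA bundle.
elasticsearch.url: "https://elasticsearch.example.com:9200"
# CA certificate used to verify the Elasticsearch server's certificate.
elasticsearch.ssl.ca: /path/to/your/ca.crt
----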
across the nodes is to run an Elasticsearch _client_ node on the same machine as

Elasticsearch client nodes are essentially smart load balancers that are part of the cluster. They
process incoming HTTP requests, redirect operations to the other nodes in the cluster as needed, and
gather and return the results. For more information, see
{es-ref}modules-node.html[Node] in the Elasticsearch reference.

To use a local client node to load balance Kibana requests:
[[settings]]
== Configuring Kibana

The Kibana server reads properties from the `kibana.yml` file on startup. The default settings configure Kibana to run
on `localhost:5601`. To change the host or port number, or connect to Elasticsearch running on a different machine,
you'll need to update your `kibana.yml` file. You can also enable SSL and set a variety of other options.

.Kibana Configuration Settings
[horizontal]
`server.port:`:: *Default: 5601* Kibana is served by a back end server. This setting specifies the port to use.
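For example, a minimal `kibana.yml` that changes the listen address and points at a remote Elasticsearch node might look like this. The host names are placeholders; only `server.port` is documented above, and the other two keys are assumptions drawn from the full settings list:

[source,yaml]
----
server.port: 5601
server.host: "0.0.0.0"    # listen on all interfaces instead of localhost
elasticsearch.url: "http://es-node-1.example.com:9200"
----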
[[upgrade]]
== Upgrading Kibana

Your existing Kibana version is generally compatible with the next minor
version release of Elasticsearch. This means you should upgrade your
Elasticsearch cluster(s) before or at the same time as Kibana. We cannot
guarantee compatibility between major version releases, so in those cases both
Elasticsearch and Kibana must be upgraded together.

To upgrade Kibana:

. Create a {es-ref}modules-snapshots.html[snapshot]
of the existing `.kibana` index.
. Back up the `kibana.yml` configuration file.
. Take note of the Kibana plugins that are installed:
* `bin/kibana plugin --list` on 4.x versions of Kibana.
* `bin/kibana-plugin list` on 5.x versions of Kibana.
. To upgrade from an archive file:
.. Extract the new version of Kibana into a different directory. See the steps below.
.. Migrate any custom configuration from your old `kibana.yml` to your new one.
.. Follow the other steps below to complete the new installation.
.. Once the new version is fully configured and working with the required plugins, remove the previous version
of Kibana.
. To upgrade using a Linux package manager:
.. Uninstall the existing Kibana package: `apt-get remove kibana` or `yum remove kibana`.
.. Install the new Kibana package. There have been some installer issues between various versions of
Kibana, so the uninstall and install process is safer than an upgrade.
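The backup steps above can be sketched as a short shell script. `KIBANA_HOME` and the backup location are placeholders for your own setup, and the snapshot of the `.kibana` index (step 1) happens through the Elasticsearch snapshot API, so it is not shown here. The script only copies what it finds, so it is safe to run as a dry run:

[source,sh]
----
# Sketch of the pre-upgrade backup steps; adjust KIBANA_HOME for your install.
KIBANA_HOME=${KIBANA_HOME:-/opt/kibana}
BACKUP_DIR=${BACKUP_DIR:-$HOME/kibana-upgrade-backup}
mkdir -p "$BACKUP_DIR"

# Step 2: back up the configuration file, if present.
[ -f "$KIBANA_HOME/config/kibana.yml" ] &&
  cp "$KIBANA_HOME/config/kibana.yml" "$BACKUP_DIR/kibana.yml.bak"

# Step 3: record installed plugins (5.x command; 4.x uses `bin/kibana plugin --list`).
[ -x "$KIBANA_HOME/bin/kibana-plugin" ] &&
  "$KIBANA_HOME/bin/kibana-plugin" list > "$BACKUP_DIR/plugins.txt"

echo "Backups written to $BACKUP_DIR"
----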
[[visualize]]
= Visualize

[partintro]
--
You can use the _Visualize_ page to design data visualizations. You can save these visualizations, use them
individually, or combine visualizations into a _dashboard_. A visualization can be based on one of the following
data source types:

* A saved search
* An existing saved visualization

Visualizations are based on the {es-ref}search-aggregations.html[aggregation] feature of Elasticsearch.
--

[float]
[[createvis]]
== Creating a New Visualization

Click on the *Visualize* image:images/visualize-icon.png[chart icon] tab in the left-hand navigation bar. If you are
already creating a visualization, you can click the *New* button in the toolbar. To set up your visualization, follow
these steps:

[float]
[[newvis01]]
=== Step 1: Choose the Visualization Type

Choose a visualization type when you start the New Visualization wizard:

selection.

[float]
[[newvis02]]
=== Step 2: Choose a Data Source

You can choose a new or saved search to serve as the data source for your visualization. Searches are associated with
an index or a set of indexes. When you select _new search_ on a system with multiple indices configured, select an

When you make changes to the search that is linked to the visualization, the vis

[float]
[[visualization-editor]]
=== Step 3: The Visualization Editor

The visualization editor enables you to configure and edit visualizations. The visualization editor has the following
main elements:
image:images/VizEditor.jpg[]

[float]
[[viz-autorefresh]]
include::discover/autorefresh.asciidoc[]

[float]
[[toolbar-panel]]
==== Toolbar

The toolbar has a search field for interactive data searches, as well as controls to manage saving and loading
visualizations. For visualizations based on saved searches, the search bar is grayed out. To edit the search, replacing
the current visualization.

[float]
[[aggregation-builder]]
==== Aggregation Builder

Use the aggregation builder on the left of the page to configure the {es-ref}search-aggregations-metrics.html[metric] and {es-ref}search-aggregations-bucket.html[bucket] aggregations used in your
visualization. Buckets are analogous to SQL `GROUP BY` statements. For more information on aggregations, see the main
{es-ref}search-aggregations.html[Elasticsearch aggregations reference].

Bar, line, or area chart visualizations use _metrics_ for the y-axis and _buckets_ for the x-axis, segment bar
colors, and row/column splits. For pie charts, use the metric for the slice size and the bucket for the number of
slices.

Choose the metric aggregation for your visualization's Y axis, such as
{es-ref}search-aggregations-metrics-valuecount-aggregation.html[count],
{es-ref}search-aggregations-metrics-avg-aggregation.html[average],
{es-ref}search-aggregations-metrics-sum-aggregation.html[sum],
{es-ref}search-aggregations-metrics-min-aggregation.html[min],
{es-ref}search-aggregations-metrics-max-aggregation.html[max], or
{es-ref}search-aggregations-metrics-cardinality-aggregation.html[cardinality]
(unique count). Use bucket aggregations for the visualization's X axis, color slices, and row/column splits. Common
bucket aggregations include date histogram, range, terms, filters, and significant terms.
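As a rough sketch of what the builder produces, a bar chart with a terms bucket on the X axis and an average metric on the Y axis corresponds to a nested Elasticsearch aggregation of roughly this shape (the field and aggregation names here are hypothetical, chosen only for illustration):

[source,json]
----
{
  "size": 0,
  "aggs": {
    "x_axis_buckets": {
      "terms": { "field": "response_code" },
      "aggs": {
        "y_axis_metric": { "avg": { "field": "bytes" } }
      }
    }
  }
}
----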
https://www.elastic.co/blog/kibana-aggregation-execution-order-and-you[here].

[float]
[[visualize-filters]]
include::discover/filter-pinning.asciidoc[]

[float]
[[preview-canvas]]
==== Preview Canvas

The preview canvas displays a preview of the visualization you've defined in the aggregation builder. To refresh the
visualization preview, click the *Apply Changes* image:images/apply-changes-button.png[] button on the toolbar.

include::visualize/area.asciidoc[]

include::visualize/datatable.asciidoc[]

include::visualize/line.asciidoc[]

include::visualize/markdown.asciidoc[]

include::visualize/metric.asciidoc[]

include::visualize/pie.asciidoc[]

include::visualize/tilemap.asciidoc[]

include::visualize/vertbar.asciidoc[]
[[area-chart]]
== Area Charts

This chart's Y axis is the _metrics_ axis. The following aggregations are available for this axis:

*Count*:: The {es-ref}search-aggregations-metrics-valuecount-aggregation.html[_count_] aggregation returns a raw count of
the elements in the selected index pattern.
*Average*:: This aggregation returns the {es-ref}search-aggregations-metrics-avg-aggregation.html[_average_] of a numeric
field. Select a field from the drop-down.
*Sum*:: The {es-ref}search-aggregations-metrics-sum-aggregation.html[_sum_] aggregation returns the total sum of a numeric
field. Select a field from the drop-down.
*Min*:: The {es-ref}search-aggregations-metrics-min-aggregation.html[_min_] aggregation returns the minimum value of a
numeric field. Select a field from the drop-down.
*Max*:: The {es-ref}search-aggregations-metrics-max-aggregation.html[_max_] aggregation returns the maximum value of a
numeric field. Select a field from the drop-down.
*Unique Count*:: The {es-ref}search-aggregations-metrics-cardinality-aggregation.html[_cardinality_] aggregation returns
the number of unique values in a field. Select a field from the drop-down.
*Percentiles*:: The {es-ref}search-aggregations-metrics-percentile-aggregation.html[_percentile_] aggregation divides the
values in a numeric field into percentile bands that you specify. Select a field from the drop-down, then specify one
or more ranges in the *Percentiles* fields. Click the *X* to remove a percentile field. Click *+ Add* to add a
percentile field.
*Percentile Rank*:: The {es-ref}search-aggregations-metrics-percentile-rank-aggregation.html[_percentile ranks_]
aggregation returns the percentile rankings for the values in the numeric field you specify. Select a numeric field
from the drop-down, then specify one or more percentile rank values in the *Values* fields. Click the *X* to remove a
values field. Click *+ Add* to add a values field.
definition, as in the following example:

{ "script" : "doc['grade'].value * 1.2" }

NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you to enable
{es-ref}modules-scripting.html[dynamic Groovy scripting].

The availability of these options varies depending on the aggregation you choose.
@ -1,5 +1,5 @@
|
|||
[[data-table]]
|
||||
=== Data Table
|
||||
== Data Table
|
||||
|
||||
include::y-axis-aggs.asciidoc[]
|
||||
|
||||
|
the table into additional tables.

Each bucket type supports the following aggregations:

*Date Histogram*:: A {es-ref}search-aggregations-bucket-datehistogram-aggregation.html[_date histogram_] is built from a
numeric field and organized by date. You can specify a time frame for the intervals in seconds, minutes, hours, days,
weeks, months, or years. You can also specify a custom interval frame by selecting *Custom* as the interval and
specifying a number and a time unit in the text field. Custom interval time units are *s* for seconds, *m* for minutes,
*h* for hours, *d* for days, *w* for weeks, and *y* for years. Different units support different levels of precision,
down to one second.
*Histogram*:: A standard {es-ref}search-aggregations-bucket-histogram-aggregation.html[_histogram_] is built from a
numeric field. Specify an integer interval for this field. Select the *Show empty buckets* checkbox to include empty
intervals in the histogram.
*Range*:: With a {es-ref}search-aggregations-bucket-range-aggregation.html[_range_] aggregation, you can specify ranges
of values for a numeric field. Click *Add Range* to add a set of range endpoints. Click the red *(x)* symbol to remove
a range.
*Date Range*:: A {es-ref}search-aggregations-bucket-daterange-aggregation.html[_date range_] aggregation reports values
that are within a range of dates that you specify. You can specify the ranges for the dates using
{es-ref}common-options.html#date-math[_date math_] expressions. Click *Add Range* to add a set of range endpoints.
Click the red *(/)* symbol to remove a range.
*IPv4 Range*:: The {es-ref}search-aggregations-bucket-iprange-aggregation.html[_IPv4 range_] aggregation enables you to
specify ranges of IPv4 addresses. Click *Add Range* to add a set of range endpoints. Click the red *(/)* symbol to
remove a range.
*Terms*:: A {es-ref}search-aggregations-bucket-terms-aggregation.html[_terms_] aggregation enables you to specify the top
or bottom _n_ elements of a given field to display, ordered by count or a custom metric.
*Filters*:: You can specify a set of {es-ref}search-aggregations-bucket-filters-aggregation.html[_filters_] for the data.
You can specify a filter as a query string or in JSON format, just as in the Discover search bar. Click *Add Filter* to
add another filter. Click the image:images/labelbutton.png[] *label* button to open the label field, where you can type
in a name to display on the visualization.
*Significant Terms*:: Displays the results of the experimental
{es-ref}search-aggregations-bucket-significantterms-aggregation.html[_significant terms_] aggregation. The value of the
*Size* parameter defines the number of entries this aggregation returns.
*Geohash*:: The {es-ref}search-aggregations-bucket-geohashgrid-aggregation.html[_geohash_] aggregation displays points
based on the geohash coordinates.

Once you've specified a bucket type aggregation, you can define sub-buckets to refine the visualization. Click
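For instance, a *Date Range* entry using _date math_ might look like the fragment below. This is a sketch for illustration: `now-30d/d` means "30 days ago, rounded down to the start of the day", and `now` means the current moment:

[source,json]
----
{ "from": "now-30d/d", "to": "now" }
----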
definition, as in the following example:

{ "script" : "doc['grade'].value * 1.2" }

NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you to enable
{es-ref}modules-scripting.html[dynamic Groovy scripting].

The availability of these options varies depending on the aggregation you choose.
[[line-chart]]
== Line Charts

This chart's Y axis is the _metrics_ axis. The following aggregations are available for this axis:

definition, as in the following example:

{ "script" : "doc['grade'].value * 1.2" }

NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you to enable
{es-ref}modules-scripting.html[dynamic Groovy scripting].

The availability of these options varies depending on the aggregation you choose.
[[markdown-widget]]
== Markdown Widget

The Markdown widget is a text entry field that accepts GitHub-flavored Markdown text. Kibana renders the text you enter
in this field and displays the results on the dashboard. You can click the *Help* link to go to the
[[metric-chart]]
== Metric

A metric visualization displays a single number for each aggregation you select:

definition, as in the following example:

{ "script" : "doc['grade'].value * 1.2" }

NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you to enable
{es-ref}modules-scripting.html[dynamic Groovy scripting].

The availability of these options varies depending on the aggregation you choose.

Click the *Options* tab to display the font size slider.

[float]
[[metric-viewing-detailed-information]]
=== Viewing Detailed Information

include::visualization-raw-data.asciidoc[]
[[pie-chart]]
== Pie Charts

The slice size of a pie chart is determined by the _metrics_ aggregation. The following aggregations are available for
this axis:

*Count*:: The {es-ref}search-aggregations-metrics-valuecount-aggregation.html[_count_] aggregation returns a raw count of
the elements in the selected index pattern.
*Sum*:: The {es-ref}search-aggregations-metrics-sum-aggregation.html[_sum_] aggregation returns the total sum of a numeric
field. Select a field from the drop-down.
*Unique Count*:: The {es-ref}search-aggregations-metrics-cardinality-aggregation.html[_cardinality_] aggregation returns
the number of unique values in a field. Select a field from the drop-down.

Enter a string in the *Custom Label* field to change the display label.
if the splits are displayed in a row or a column by clicking the *Rows | Columns

You can specify any of the following bucket aggregations for your pie chart:

*Date Histogram*:: A {es-ref}search-aggregations-bucket-datehistogram-aggregation.html[_date histogram_] is built from a
numeric field and organized by date. You can specify a time frame for the intervals in seconds, minutes, hours, days,
weeks, months, or years. You can also specify a custom interval frame by selecting *Custom* as the interval and
specifying a number and a time unit in the text field. Custom interval time units are *s* for seconds, *m* for minutes,
*h* for hours, *d* for days, *w* for weeks, and *y* for years. Different units support different levels of precision,
down to one second.
*Histogram*:: A standard {es-ref}search-aggregations-bucket-histogram-aggregation.html[_histogram_] is built from a
numeric field. Specify an integer interval for this field. Select the *Show empty buckets* checkbox to include empty
intervals in the histogram.
*Range*:: With a {es-ref}search-aggregations-bucket-range-aggregation.html[_range_] aggregation, you can specify ranges
of values for a numeric field. Click *Add Range* to add a set of range endpoints. Click the red *(x)* symbol to remove
a range.
*Date Range*:: A {es-ref}search-aggregations-bucket-daterange-aggregation.html[_date range_] aggregation reports values
that are within a range of dates that you specify. You can specify the ranges for the dates using
{es-ref}common-options.html#date-math[_date math_] expressions. Click *Add Range* to add a set of range endpoints.
Click the red *(/)* symbol to remove a range.
*IPv4 Range*:: The {es-ref}search-aggregations-bucket-iprange-aggregation.html[_IPv4 range_] aggregation enables you to
specify ranges of IPv4 addresses. Click *Add Range* to add a set of range endpoints. Click the red *(/)* symbol to
remove a range.
*Terms*:: A {es-ref}search-aggregations-bucket-terms-aggregation.html[_terms_] aggregation enables you to specify the top
or bottom _n_ elements of a given field to display, ordered by count or a custom metric.
*Filters*:: You can specify a set of {es-ref}search-aggregations-bucket-filters-aggregation.html[_filters_] for the data.
You can specify a filter as a query string or in JSON format, just as in the Discover search bar. Click *Add Filter* to
add another filter. Click the image:images/labelbutton.png[] *label* button to open the label field, where you can type
in a name to display on the visualization.
*Significant Terms*:: Displays the results of the experimental
{es-ref}search-aggregations-bucket-significantterms-aggregation.html[_significant terms_] aggregation. The value of the
*Size* parameter defines the number of entries this aggregation returns.

After defining an initial bucket aggregation, you can define sub-buckets to refine the visualization. Click *+ Add
definition, as in the following example:

{ "script" : "doc['grade'].value * 1.2" }

NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you to enable
{es-ref}modules-scripting.html[dynamic Groovy scripting].

The availability of these options varies depending on the aggregation you choose.
[[tilemap]]
== Tile Maps

A tile map displays a geographic area overlaid with circles keyed to the data determined by the buckets you specify.
in `kibana.yml`.

The default _metrics_ aggregation for a tile map is the *Count* aggregation. You can select any of the following
aggregations as the metrics aggregation:

*Count*:: The {es-ref}search-aggregations-metrics-valuecount-aggregation.html[_count_] aggregation returns a raw count of
the elements in the selected index pattern.
*Average*:: This aggregation returns the {es-ref}search-aggregations-metrics-avg-aggregation.html[_average_] of a numeric
field. Select a field from the drop-down.
*Sum*:: The {es-ref}search-aggregations-metrics-sum-aggregation.html[_sum_] aggregation returns the total sum of a numeric
field. Select a field from the drop-down.
*Min*:: The {es-ref}search-aggregations-metrics-min-aggregation.html[_min_] aggregation returns the minimum value of a
numeric field. Select a field from the drop-down.
*Max*:: The {es-ref}search-aggregations-metrics-max-aggregation.html[_max_] aggregation returns the maximum value of a
numeric field. Select a field from the drop-down.
*Unique Count*:: The {es-ref}search-aggregations-metrics-cardinality-aggregation.html[_cardinality_] aggregation returns
the number of unique values in a field. Select a field from the drop-down.

Enter a string in the *Custom Label* field to change the display label.
@ -32,7 +32,7 @@ Coordinates* on a single chart. A multiple chart split must run before any other
|
|||
|
||||
Tile maps use the *Geohash* aggregation as their initial aggregation. Select a field, typically coordinates, from the
|
||||
drop-down. The *Precision* slider determines the granularity of the results displayed on the map. See the documentation
|
||||
for the {ref}/search-aggregations-bucket-geohashgrid-aggregation.html#_cell_dimensions_at_the_equator[geohash grid]
|
||||
for the {es-ref}search-aggregations-bucket-geohashgrid-aggregation.html#_cell_dimensions_at_the_equator[geohash grid]
|
||||
aggregation for details on the area specified by each precision level. Kibana supports a maximum geohash length of 7.

NOTE: Higher precisions increase memory usage for the browser displaying Kibana as well as for the underlying
Elasticsearch cluster.

Once you've specified a buckets aggregation, you can define sub-aggregations to refine the visualization. Tile maps
only support sub-aggregations as split charts. Click *+ Add Sub Aggregation*, then *Split Chart* to select a
sub-aggregation from the list of types:

*Date Histogram*:: A {es-ref}search-aggregations-bucket-datehistogram-aggregation.html[_date histogram_] is built from a
numeric field and organized by date. You can specify a time frame for the intervals in seconds, minutes, hours, days,
weeks, months, or years. You can also specify a custom interval frame by selecting *Custom* as the interval and
specifying a number and a time unit in the text field. Custom interval time units are *s* for seconds, *m* for minutes,
*h* for hours, *d* for days, *w* for weeks, and *y* for years. Different units support different levels of precision,
down to one second.
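
For example, a custom interval of three hours entered as *3h* corresponds roughly to a date histogram defined like
this (the `@timestamp` field name is a placeholder):

[source,json]
----
{
  "aggs": {
    "docs_over_time": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "3h"
      }
    }
  }
}
----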
*Histogram*:: A standard {es-ref}search-aggregations-bucket-histogram-aggregation.html[_histogram_] is built from a
numeric field. Specify an integer interval for this field. Select the *Show empty buckets* checkbox to include empty
intervals in the histogram.
*Range*:: With a {es-ref}search-aggregations-bucket-range-aggregation.html[_range_] aggregation, you can specify ranges
of values for a numeric field. Click *Add Range* to add a set of range endpoints. Click the red *(x)* symbol to remove
a range.
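
As a sketch, two range endpoints entered in the UI translate to `ranges` entries like these (the `bytes` field name
is a placeholder):

[source,json]
----
{
  "aggs": {
    "byte_ranges": {
      "range": {
        "field": "bytes",
        "ranges": [
          { "from": 0, "to": 1024 },
          { "from": 1024 }
        ]
      }
    }
  }
}
----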
After changing options, click the *Apply changes* button to update your visualization, or the grey *Discard
changes* button to keep your visualization in its current state.
*Date Range*:: A {es-ref}search-aggregations-bucket-daterange-aggregation.html[_date range_] aggregation reports values
that are within a range of dates that you specify. You can specify the ranges for the dates using
{es-ref}common-options.html#date-math[_date math_] expressions. Click *Add Range* to add a set of range endpoints.
Click the red *(x)* symbol to remove a range.
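
A date range built from date math expressions can be sketched as follows (the field name and range values are
illustrative):

[source,json]
----
{
  "aggs": {
    "last_month": {
      "date_range": {
        "field": "@timestamp",
        "ranges": [
          { "from": "now-1M/M", "to": "now" }
        ]
      }
    }
  }
}
----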
*IPv4 Range*:: The {es-ref}search-aggregations-bucket-iprange-aggregation.html[_IPv4 range_] aggregation enables you to
specify ranges of IPv4 addresses. Click *Add Range* to add a set of range endpoints. Click the red *(x)* symbol to
remove a range.
*Terms*:: A {es-ref}search-aggregations-bucket-terms-aggregation.html[_terms_] aggregation enables you to specify the top
or bottom _n_ elements of a given field to display, ordered by count or a custom metric.
*Filters*:: You can specify a set of {es-ref}search-aggregations-bucket-filters-aggregation.html[_filters_] for the data.
You can specify a filter as a query string or in JSON format, just as in the Discover search bar. Click *Add Filter* to
add another filter. Click the image:images/labelbutton.png[Label button icon] *label* button to open the label field,
where you can type in a name to display on the visualization.
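
For instance, two query-string filters entered in the UI correspond to a filters aggregation along these lines (the
`status` field and the filter labels are hypothetical):

[source,json]
----
{
  "aggs": {
    "responses": {
      "filters": {
        "filters": {
          "ok":     { "query_string": { "query": "status:[200 TO 299]" } },
          "errors": { "query_string": { "query": "status:[400 TO 599]" } }
        }
      }
    }
  }
}
----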
*Significant Terms*:: Displays the results of the experimental
{es-ref}search-aggregations-bucket-significantterms-aggregation.html[_significant terms_] aggregation. The value of the
*Size* parameter defines the number of entries this aggregation returns.
*Geohash*:: The {es-ref}search-aggregations-bucket-geohashgrid-aggregation.html[_geohash_] aggregation displays points
based on the geohash coordinates.

NOTE: By default, the *Change precision on map zoom* box is checked. Uncheck the box to disable this behavior.

*JSON Input*:: A text field where you can add specific JSON-formatted properties to merge with the aggregation
definition, as in the following example:

{ "script" : "doc['grade'].value * 1.2" }

NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you to enable
{es-ref}modules-scripting.html[dynamic Groovy scripting].

The availability of these options varies depending on the aggregation you choose.

Heatmaps have the following options:

* *Radius*: Sets the size of the individual heatmap dots.
* *Blur*: Sets the amount of blurring for the heatmap dots.
* *Maximum zoom*: Tilemaps in Kibana support 18 zoom levels. This slider defines the maximum zoom level at which the
heatmap dots appear at full intensity.
* *Minimum opacity*: Sets the opacity cutoff for the dots.
* *Show Tooltip*: Check this box to display a tooltip with the values for a given dot when the cursor is over that dot.

[[vertical-bar-chart]]
== Vertical Bar Charts

This chart's Y axis is the _metrics_ axis. The following aggregations are available for this axis:

*Count*:: The {es-ref}search-aggregations-metrics-valuecount-aggregation.html[_count_] aggregation returns a raw count of
the elements in the selected index pattern.
*Average*:: This aggregation returns the {es-ref}search-aggregations-metrics-avg-aggregation.html[_average_] of a numeric
field. Select a field from the drop-down.
*Sum*:: The {es-ref}search-aggregations-metrics-sum-aggregation.html[_sum_] aggregation returns the total sum of a numeric
field. Select a field from the drop-down.
*Min*:: The {es-ref}search-aggregations-metrics-min-aggregation.html[_min_] aggregation returns the minimum value of a
numeric field. Select a field from the drop-down.
*Max*:: The {es-ref}search-aggregations-metrics-max-aggregation.html[_max_] aggregation returns the maximum value of a
numeric field. Select a field from the drop-down.
*Unique Count*:: The {es-ref}search-aggregations-metrics-cardinality-aggregation.html[_cardinality_] aggregation returns
the number of unique values in a field. Select a field from the drop-down.
*Percentiles*:: The {es-ref}search-aggregations-metrics-percentile-aggregation.html[_percentile_] aggregation divides the
values in a numeric field into percentile bands that you specify. Select a field from the drop-down, then specify one
or more ranges in the *Percentiles* fields. Click the *X* to remove a percentile field. Click *+ Add* to add a
percentile field.
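
For example, percentile bands of 50, 95, and 99 on a hypothetical `load_time` field correspond to:

[source,json]
----
{
  "aggs": {
    "load_time_outliers": {
      "percentiles": {
        "field": "load_time",
        "percents": [ 50, 95, 99 ]
      }
    }
  }
}
----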
*Percentile Rank*:: The {es-ref}search-aggregations-metrics-percentile-rank-aggregation.html[_percentile ranks_]
aggregation returns the percentile rankings for the values in the numeric field you specify. Select a numeric field
from the drop-down, then specify one or more percentile rank values in the *Values* fields. Click the *X* to remove a
values field. Click *+ Add* to add a values field.

*JSON Input*:: A text field where you can add specific JSON-formatted properties to merge with the aggregation
definition, as in the following example:

{ "script" : "doc['grade'].value * 1.2" }

NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you to enable
{es-ref}modules-scripting.html[dynamic Groovy scripting].

The availability of these options varies depending on the aggregation you choose.

You can define buckets for the X axis, for a split area on the chart, or for split charts.

This chart's X axis supports the following aggregations. Click the linked name of each aggregation to visit the main
Elasticsearch documentation for that aggregation.

*Date Histogram*:: A {es-ref}search-aggregations-bucket-datehistogram-aggregation.html[_date histogram_] is built from a
numeric field and organized by date. You can specify a time frame for the intervals in seconds, minutes, hours, days,
weeks, months, or years. You can also specify a custom interval frame by selecting *Custom* as the interval and
specifying a number and a time unit in the text field. Custom interval time units are *s* for seconds, *m* for minutes,
*h* for hours, *d* for days, *w* for weeks, and *y* for years. Different units support different levels of precision,
down to one second.

*Histogram*:: A standard {es-ref}search-aggregations-bucket-histogram-aggregation.html[_histogram_] is built from a
numeric field. Specify an integer interval for this field. Select the *Show empty buckets* checkbox to include empty
intervals in the histogram.
*Range*:: With a {es-ref}search-aggregations-bucket-range-aggregation.html[_range_] aggregation, you can specify ranges
of values for a numeric field. Click *Add Range* to add a set of range endpoints. Click the red *(x)* symbol to remove
a range.
*Date Range*:: A {es-ref}search-aggregations-bucket-daterange-aggregation.html[_date range_] aggregation reports values
that are within a range of dates that you specify. You can specify the ranges for the dates using
{es-ref}common-options.html#date-math[_date math_] expressions. Click *Add Range* to add a set of range endpoints.
Click the red *(x)* symbol to remove a range.
*IPv4 Range*:: The {es-ref}search-aggregations-bucket-iprange-aggregation.html[_IPv4 range_] aggregation enables you to
specify ranges of IPv4 addresses. Click *Add Range* to add a set of range endpoints. Click the red *(x)* symbol to
remove a range.
*Terms*:: A {es-ref}search-aggregations-bucket-terms-aggregation.html[_terms_] aggregation enables you to specify the top
or bottom _n_ elements of a given field to display, ordered by count or a custom metric.
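
A top-five terms aggregation ordered by count can be sketched as follows (the `response` field name is a placeholder):

[source,json]
----
{
  "aggs": {
    "top_responses": {
      "terms": {
        "field": "response",
        "size": 5,
        "order": { "_count": "desc" }
      }
    }
  }
}
----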
*Filters*:: You can specify a set of {es-ref}search-aggregations-bucket-filters-aggregation.html[_filters_] for the data.
You can specify a filter as a query string or in JSON format, just as in the Discover search bar. Click *Add Filter* to
add another filter. Click the image:images/labelbutton.png[Label button icon] *label* button to open the label field, where
you can type in a name to display on the visualization.
*Significant Terms*:: Displays the results of the experimental
{es-ref}search-aggregations-bucket-significantterms-aggregation.html[_significant terms_] aggregation.

Once you've specified an X axis aggregation, you can define sub-aggregations to refine the visualization. Click *+ Add
Sub Aggregation* to define a sub-aggregation, then choose *Split Area* or *Split Chart*, then select a sub-aggregation
from the list of types.

*Count*:: The {es-ref}search-aggregations-metrics-valuecount-aggregation.html[_count_] aggregation returns a raw count of
the elements in the selected index pattern.
*Average*:: This aggregation returns the {es-ref}search-aggregations-metrics-avg-aggregation.html[_average_] of a numeric
field. Select a field from the drop-down.
*Sum*:: The {es-ref}search-aggregations-metrics-sum-aggregation.html[_sum_] aggregation returns the total sum of a numeric
field. Select a field from the drop-down.
*Min*:: The {es-ref}search-aggregations-metrics-min-aggregation.html[_min_] aggregation returns the minimum value of a
numeric field. Select a field from the drop-down.
*Max*:: The {es-ref}search-aggregations-metrics-max-aggregation.html[_max_] aggregation returns the maximum value of a
numeric field. Select a field from the drop-down.
*Unique Count*:: The {es-ref}search-aggregations-metrics-cardinality-aggregation.html[_cardinality_] aggregation returns
the number of unique values in a field. Select a field from the drop-down.
*Standard Deviation*:: The {es-ref}search-aggregations-metrics-extendedstats-aggregation.html[_extended stats_]
aggregation returns the standard deviation of data in a numeric field. Select a field from the drop-down.
*Percentiles*:: The {es-ref}search-aggregations-metrics-percentile-aggregation.html[_percentile_] aggregation divides the
values in a numeric field into percentile bands that you specify. Select a field from the drop-down, then specify one
or more ranges in the *Percentiles* fields. Click the *X* to remove a percentile field. Click *+ Add* to add a
percentile field.
*Percentile Rank*:: The {es-ref}search-aggregations-metrics-percentile-rank-aggregation.html[_percentile ranks_]
aggregation returns the percentile rankings for the values in the numeric field you specify. Select a numeric field
from the drop-down, then specify one or more percentile rank values in the *Values* fields. Click the *X* to remove a
values field. Click *+ Add* to add a values field.
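
As an illustration, two rank values on a hypothetical `load_time` field correspond to:

[source,json]
----
{
  "aggs": {
    "load_time_ranks": {
      "percentile_ranks": {
        "field": "load_time",
        "values": [ 500, 1000 ]
      }
    }
  }
}
----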
|