merged with latest master

This commit is contained in:
Khalah Jones-Golden 2015-06-19 17:28:16 -04:00
commit dd220a783c
405 changed files with 6926 additions and 3928 deletions


@@ -8,6 +8,7 @@
"define": true,
"require": true,
"console": false,
"-event": true
"-event": true,
"-name": true
}
}
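For context, this hunk appears to edit a JSHint configuration: in JSHint's `globals` map, prefixing a name with a minus sign removes a predefined global from scope rather than declaring one, so `-event` and `-name` blacklist the browser's implicit `event` and `name` globals. A hypothetical reconstruction of the resulting file (the nesting under `"globals"` is inferred, not shown in the hunk):

```json
{
  "globals": {
    "define": true,
    "require": true,
    "console": false,
    "-event": true,
    "-name": true
  }
}
```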


@@ -1,16 +1,18 @@
language: node_js
node_js:
- '0.10'
install:
- npm install -g grunt-cli bower
- npm install
- bower install
script:
- npm test
node_js: '0.12'
install: npm install
script: npm test
sudo: false
cache:
directories:
- esvm
- node_modules
- src/kibana/bower_components
before_cache:
- rm -rf esvm/*/logs esvm/data_dir
notifications:
email:
- rashid.khan@elastic.co
- spencer.alger@elastic.co
hipchat:
rooms:
secure: UKrVR+5KztHarodQruQe97UJfwftutD6RNdXlVkr+oIr2GqccisDIIN9pAzS/kxl+eAnP1uT6VHzc9YI/jgbrmiSkz3DHViw+MwDwY2aIDgI8aHEbd/4B2ihtb15+OYTVbb+lytyz4+W8A8hSmbkTR/P/uFIJ+EYcBeYZfw1elo=


@@ -22,7 +22,7 @@ Please make sure you have signed the [Contributor License Agreement](http://www.
```sh
npm install -g grunt-cli bower
```
- Clone the kibana repo and move into it
```sh
@@ -74,4 +74,31 @@ Distributable, built packages can be found in `target/` after the build complete
Push your local changes to your forked copy of the repository and submit a pull request. In the pull request, describe what your changes do and mention the number of the issue where discussion has taken place, e.g. "Closes #123".
Then sit back and wait. There will probably be discussion about the pull request and, if any changes are needed, we would love to work with you to get your pull request merged into Kibana.
Always submit your pull against `master` unless the bug is only present in an older version. If the bug affects both `master` and another branch, say so in your pull.
Then sit back and wait. There will probably be discussion about the pull request and, if any changes are needed, we'll work with you to get your pull request merged into Kibana.
### The road to review
After a pull is submitted, it needs to get to review. If you have commit permission on the Kibana repo you will probably perform these steps while submitting your pull request. If not, a member of the elastic organization will do them for you, though you can help by suggesting a reviewer for your changes if you've interacted with someone while working on the issue.
1. Assign the `review` tag. This signals to the team that someone needs to give this attention.
1. Assign version tags. If the pull is related to an existing issue (and it should be!), that issue probably has a version tag (e.g. `4.0.1`) on it. Assign the same version tag to your pull. You may end up with 2 or more version tags if the changes require backporting.
1. Find someone to review your pull. Don't just pick any yahoo; pick the right person. The right person might be the original reporter of the issue, but it might also be the person most familiar with the code you've changed. If neither of those things apply, or your change is small in scope, try to find someone on the Kibana team without a ton of existing reviews on their plate. As a rule, most pulls will require 2 reviewers, but the first reviewer will pick the 2nd.
### Review engaged
So, you've been assigned a pull to review. What's that look like?
Remember, someone is blocked by a pull awaiting review; make it count. Be thorough: the more action items you catch in the first review, the less back and forth will be required, and the better chance the pull has of being successful. Don't you like success?
1. **Understand the issue** that is being fixed, or the feature being added. Check the description on the pull, and check out the related issue. If you don't understand something, ask the submitter for clarification.
1. **Reproduce the bug** (or the lack of feature I guess?) in the destination branch, usually `master`. The referenced issue will help you here. If you're unable to reproduce the issue, contact the issue submitter for clarification.
1. **Check out the pull** and test it. Is the issue fixed? Does it have nasty side effects? Try to create suspect inputs. If it operates on the value of a field try things like: strings (including an empty string), null, numbers, dates. Try to think of edge cases that might break the code.
1. **Read the code**. Understanding the changes will help you find additional things to test. Contact the submitter if you don't understand something.
1. **Go line-by-line**. Are there [style guide](https://github.com/elastic/kibana/blob/master/STYLEGUIDE.md) violations? Strangely named variables? Magic numbers? Do the abstractions make sense to you? Are things arranged in a testable way?
1. **Speaking of tests** Are they there? If a new function was added does it have tests? Do the tests, well, TEST anything? Do they just run the function or do they properly check the output?
1. **Suggest improvements** If there are changes needed, be explicit: comment on the lines in the code that you'd like changed. You might consider suggesting fixes. If you can't identify the problem, animated screenshots can help the reviewer understand what's going on.
1. **Hand it back** If you found issues, re-assign the pull to the submitter to address them. Repeat until mergeable.
1. **Hand it off** If you're the first reviewer and everything looks good but the changes are more than a few lines, hand the pull to someone else to take a second look. Again, try to find the right person to assign it to.
1. **Merge the code** When everything looks good, merge into the target branch. Check the labels on the pull to see if backporting is required, and perform the backport if so.


@@ -16,6 +16,7 @@ module.exports = function (grunt) {
nodeVersion: '0.10.35',
platforms: ['darwin-x64', 'linux-x64', 'linux-x86', 'windows'],
services: [ [ 'launchd', '10.9'], [ 'upstart', '1.5'], [ 'systemd', 'default'], [ 'sysv', 'lsb-3.1' ] ],
unitTestDir: __dirname + '/test/unit',
testUtilsDir: __dirname + '/test/utils',


@@ -1,4 +1,4 @@
# Kibana 4.1.0-snapshot
# Kibana 4.2.0-snapshot
[![Build Status](https://travis-ci.org/elastic/kibana.svg?branch=master)](https://travis-ci.org/elastic/kibana?branch=master)
@@ -39,7 +39,7 @@ For the daring, snapshot builds are available. These builds are created after ea
| platform | | |
| --- | --- | --- |
| OSX | [tar](http://download.elastic.co/kibana/kibana/kibana-4.1.0-snapshot-darwin-x64.tar.gz) | [zip](http://download.elastic.co/kibana/kibana/kibana-4.1.0-snapshot-darwin-x64.zip) |
| Linux x64 | [tar](http://download.elastic.co/kibana/kibana/kibana-4.1.0-snapshot-linux-x64.tar.gz) | [zip](http://download.elastic.co/kibana/kibana/kibana-4.1.0-snapshot-linux-x64.zip) |
| Linux x86 | [tar](http://download.elastic.co/kibana/kibana/kibana-4.1.0-snapshot-linux-x86.tar.gz) | [zip](http://download.elastic.co/kibana/kibana/kibana-4.1.0-snapshot-linux-x86.zip) |
| Windows | [tar](http://download.elastic.co/kibana/kibana/kibana-4.1.0-snapshot-windows.tar.gz) | [zip](http://download.elastic.co/kibana/kibana/kibana-4.1.0-snapshot-windows.zip) |
| OSX | [tar](http://download.elastic.co/kibana/kibana/kibana-4.2.0-snapshot-darwin-x64.tar.gz) | [zip](http://download.elastic.co/kibana/kibana/kibana-4.2.0-snapshot-darwin-x64.zip) |
| Linux x64 | [tar](http://download.elastic.co/kibana/kibana/kibana-4.2.0-snapshot-linux-x64.tar.gz) | [zip](http://download.elastic.co/kibana/kibana/kibana-4.2.0-snapshot-linux-x64.zip) |
| Linux x86 | [tar](http://download.elastic.co/kibana/kibana/kibana-4.2.0-snapshot-linux-x86.tar.gz) | [zip](http://download.elastic.co/kibana/kibana/kibana-4.2.0-snapshot-linux-x86.zip) |
| Windows | [tar](http://download.elastic.co/kibana/kibana/kibana-4.2.0-snapshot-windows.tar.gz) | [zip](http://download.elastic.co/kibana/kibana/kibana-4.2.0-snapshot-windows.zip) |


@@ -650,7 +650,7 @@ While you can do it with pure JS, a utility will remove a lot of boilerplate, an
```js
// uses a lodash inherits mixin
// inheritance is defined first - it's easier to read and the function will be hoisted
_(Square).inherits(Shape);
_.class(Square).inherits(Shape);
function Square(width, height) {
Square.Super.call(this);
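For readers unfamiliar with the pattern, here is a minimal, hypothetical sketch of an `inherits` helper comparable to the lodash `_.class(...).inherits(...)` mixin shown above (the helper's name and shape are invented for illustration, not Kibana's actual implementation):

```javascript
// Hypothetical standalone sketch of an inherits helper.
function inherits(Sub, Super) {
  // Wire the prototype chain without invoking Super's constructor.
  Sub.prototype = Object.create(Super.prototype);
  Sub.prototype.constructor = Sub;
  // Expose the parent so subclasses can call `Sub.Super.call(this)`.
  Sub.Super = Super;
  return Sub;
}

function Shape() {
  this.sides = 0;
}
Shape.prototype.describe = function () {
  return 'shape with ' + this.sides + ' sides';
};

// Because function declarations are hoisted, the inheritance wiring can
// appear before the subclass definition, as the style guide recommends.
inherits(Square, Shape);
function Square(width, height) {
  Square.Super.call(this);
  this.sides = 4;
  this.width = width;
  this.height = height;
}
```

With this wiring, `new Square(2, 2)` is an `instanceof Shape` and picks up `describe` from the parent prototype.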


@@ -1,6 +1,5 @@
{
"name": "kibana",
"version": "0.0.0",
"authors": [
"Spencer Alger <spencer@spenceralger.com>"
],
@@ -21,40 +20,37 @@
],
"dependencies": {
"angular": "1.2.28",
"angular-bindonce": "~0.3.1",
"angular-bootstrap": "~0.10.0",
"angular-elastic": "~2.3.3",
"angular-mocks": "~1.2.14",
"angular-route": "~1.2.14",
"angular-ui-ace": "~0.2.3",
"async": "~0.2.10",
"bluebird": "~2.1.3",
"bootstrap": "~3.3.1",
"d3": "~3.4.8",
"elasticsearch": "~3.1.1",
"Faker": "~1.1.0",
"FileSaver": "*",
"font-awesome": "~4.2.0",
"gridster": "~0.5.0",
"inflection": "~1.3.5",
"jquery": "~2.1.0",
"angular-bindonce": "0.3.3",
"angular-bootstrap": "0.10.0",
"angular-elastic": "2.4.2",
"angular-mocks": "1.2.28",
"angular-route": "1.2.28",
"angular-ui-ace": "0.2.3",
"bluebird": "~2.9.27",
"bootstrap": "3.3.4",
"d3": "3.5.5",
"elasticsearch": "~5.0.0",
"Faker": "1.1.0",
"FileSaver": "babc6d9d8f",
"font-awesome": "4.3.0",
"gridster": "0.5.6",
"jquery": "2.1.4",
"leaflet": "0.7.3",
"lesshat": "~3.0.2",
"lodash": "~2.4.1",
"moment": "~2.9.0",
"moment-timezone": "~0.0.3",
"ng-clip": "~0.2.4",
"require-css": "~0.1.2",
"requirejs": "~2.1.10",
"requirejs-text": "~2.0.10",
"lodash-deep": "spenceralger/lodash-deep#compat",
"marked": "~0.3.2",
"numeral": "~1.5.3",
"leaflet-draw": "~0.2.4",
"angular-nvd3": "https://github.com/krispo/angular-nvd3.git#1.0.0-beta"
"Leaflet.heat": "Leaflet/Leaflet.heat#627ede7c11bbe43",
"lesshat": "3.0.2",
"lodash": "3.9.3",
"moment": "2.10.3",
"moment-timezone": "0.4.0",
"ng-clip": "0.2.6",
"require-css": "0.1.8",
"requirejs": "2.1.18",
"requirejs-text": "2.0.14",
"marked": "0.3.3",
"numeral": "1.5.3",
"leaflet-draw": "0.2.4"
},
"devDependencies": {},
"resolutions": {
"d3": "~3.4.8"
"angular": "1.2.28"
}
}


@@ -1,13 +1,12 @@
[[access]]
== Accessing Kibana
Kibana is a web application that you access through port 5601. All you need to
do is point your web browser at the machine where Kibana is running and
specify the port number. For example, `localhost:5601` or `http://YOURDOMAIN.com:5601`.
Kibana is a web application that you access through port 5601. All you need to do is point your web browser at the
machine where Kibana is running and specify the port number. For example, `localhost:5601` or
`http://YOURDOMAIN.com:5601`.
When you access Kibana, the Discover page loads by default with the default index
pattern selected. The time filter is set to the last 15 minutes and the search
query is set to match-all (\*).
When you access Kibana, the Discover page loads by default with the default index pattern selected. The time filter is
set to the last 15 minutes and the search query is set to match-all (\*).
If you don't see any documents, try setting the time filter to a wider time range.
If you still don't see any results, it's possible that you don't *have* any documents.


@@ -15,9 +15,14 @@ numeric field. Select a field from the drop-down.
numeric field. Select a field from the drop-down.
*Unique Count*:: The {ref}/search-aggregations-metrics-cardinality-aggregation.html[_cardinality_] aggregation returns
the number of unique values in a field. Select a field from the drop-down.
*Percentile*:: The {ref}/search-aggregations-metrics-percentile-rank-aggregation.html[_percentile_] aggregation returns
the percentile rank of values in a numeric field. Select a field from the drop-down, then specify a range in the
*Percentiles* fields. Click the *X* to remove a percentile field. Click *+ Add Percent* to add a percentile field.
*Percentiles*:: The {ref}/search-aggregations-metrics-percentile-aggregation.html[_percentile_] aggregation divides the
values in a numeric field into percentile bands that you specify. Select a field from the drop-down, then specify one
or more ranges in the *Percentiles* fields. Click the *X* to remove a percentile field. Click *+ Add* to add a
percentile field.
*Percentile Rank*:: The {ref}/search-aggregations-metrics-percentile-rank-aggregation.html[_percentile ranks_]
aggregation returns the percentile rankings for the values in the numeric field you specify. Select a numeric field
from the drop-down, then specify one or more percentile rank values in the *Values* fields. Click the *X* to remove a
values field. Click *+Add* to add a values field.
You can add an aggregation by clicking the *+ Add Aggregation* button.
@@ -43,7 +48,7 @@ NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you
The availability of these options varies depending on the aggregation you choose.
Select *view options* to change the following aspects of the table:
Select the *Options* tab to change the following aspects of the chart:
*Chart Mode*:: When you have multiple Y-axis aggregations defined for your chart, you can use this drop-down to affect
how the aggregations display on the chart:
@@ -56,7 +61,10 @@ _silhouette_:: Displays each aggregation as variance from a central line.
Checkboxes are available to enable and disable the following behaviors:
*Smooth Lines*:: Check this box to curve the top boundary of the area from point to point.
*Set Y-Axis Extents*:: Check this box and enter values in the *y-max* and *y-min* fields to set the Y axis to specific
values.
*Scale Y-Axis to Data Bounds*:: The default Y axis bounds are zero and the maximum value returned in the data. Check
this box to change both upper and lower bounds to match the values returned in the data.
*Show Tooltip*:: Check this box to enable the display of tooltips.
*Show Legend*:: Check this box to enable the display of a legend next to the chart.
*Scale Y-Axis to Data Bounds*:: The default Y axis bounds are zero and the maximum value returned in the data. Check
this box to change both upper and lower bounds to match the values returned in the data.

docs/autorefresh.asciidoc Normal file

@@ -0,0 +1,20 @@
=== Automatically Refreshing the Page
You can configure a refresh interval to automatically refresh the page with the latest index data. This periodically
resubmits the search query.
When a refresh interval is set, it is displayed to the left of the Time Filter in the menu bar.
To set the refresh interval:
. Click the *Time Filter* image:images/TimeFilter.jpg[Time
Filter] in the upper right corner of the menu bar.
. Click the *Refresh Interval* tab.
. Choose a refresh interval from the list.
To automatically refresh the data, click the image:images/autorefresh.png[] *Auto-refresh* button and select an
autorefresh interval:
image::images/autorefresh-intervals.png[]
When auto-refresh is enabled, Kibana's top bar displays a pause button and the auto-refresh interval:
image:images/autorefresh-pause.png[]. Click the *Pause* button to pause auto-refresh.


@@ -8,7 +8,7 @@ dashboard to share or reload at a later time.
image:images/NYCTA-Dashboard.jpg[Example dashboard]
[float]
[[getting-started]]
[[dashboard-getting-started]]
=== Getting Started
You need at least one saved <<visualize, visualization>> to use a dashboard.
@@ -23,6 +23,10 @@ image:images/NewDashboard.jpg[New Dashboard screen]
Build your dashboard by adding visualizations.
[float]
[[dash-autorefresh]]
include::autorefresh.asciidoc[]
[float]
[[adding-visualizations-to-a-dashboard]]
==== Adding Visualizations to a Dashboard
@@ -41,7 +45,9 @@ container>>.
==== Saving Dashboards
To save the dashboard, click the *Save Dashboard* button in the toolbar panel, enter a name for the dashboard in the
*Save As* field, and click the *Save* button.
*Save As* field, and click the *Save* button. By default, dashboards store the time period specified in the time filter
when you save a dashboard. To disable this behavior, clear the *Store time with dashboard* box before clicking the
*Save* button.
[float]
[[loading-a-saved-dashboard]]
@@ -133,3 +139,7 @@ image:images/NYCTA-Statistics.jpg[]
Click the _Edit_ button image:images/EditVis.png[Pencil button] at the top right of a container to open the
visualization in the <<visualize,Visualize>> page.
[float]
[[dashboard-filters]]
include::filter-pinning.asciidoc[]


@@ -10,13 +10,23 @@ Each bucket type supports the following aggregations:
*Date Histogram*:: A {ref}/search-aggregations-bucket-datehistogram-aggregation.html[_date histogram_] is built from a
numeric field and organized by date. You can specify a time frame for the intervals in seconds, minutes, hours, days,
weeks, months, or years.
weeks, months, or years. You can also specify a custom interval frame by selecting *Custom* as the interval and
specifying a number and a time unit in the text field. Custom interval time units are *s* for seconds, *m* for minutes,
*h* for hours, *d* for days, *w* for weeks, and *y* for years. Different units support different levels of precision,
down to one second.
*Histogram*:: A standard {ref}/search-aggregations-bucket-histogram-aggregation.html[_histogram_] is built from a
numeric field. Specify an integer interval for this field. Select the *Show empty buckets* checkbox to include empty
intervals in the histogram.
*Range*:: With a {ref}/search-aggregations-bucket-range-aggregation.html[_range_] aggregation, you can specify ranges
of values for a numeric field. Click *Add Range* to add a set of range endpoints. Click the red *(x)* symbol to remove
a range.
*Date Range*:: A {ref}/search-aggregations-bucket-daterange-aggregation.html[_date range_] aggregation reports values
that are within a range of dates that you specify. You can specify the ranges for the dates using
{ref}/mapping-date-format.html#date-math[_date math_] expressions. Click *Add Range* to add a set of range endpoints.
Click the red *(/)* symbol to remove a range.
*IPv4 Range*:: The {ref}/search-aggregations-bucket-iprange-aggregation.html[_IPv4 range_] aggregation enables you to
specify ranges of IPv4 addresses. Click *Add Range* to add a set of range endpoints. Click the red *(/)* symbol to
remove a range.
*Terms*:: A {ref}/search-aggregations-bucket-terms-aggregation.html[_terms_] aggregation enables you to specify the top
or bottom _n_ elements of a given field to display, ordered by count or a custom metric.
*Filters*:: You can specify a set of {ref}/search-aggregations-bucket-filters-aggregation.html[_filters_] for the data.
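The custom Date Histogram interval units listed above (*s*, *m*, *h*, *d*, *w*, *y*) can be illustrated with a small parser. This is a hypothetical sketch for illustration only; `intervalToSeconds` and `UNIT_SECONDS` are invented names, not Kibana code:

```javascript
// Seconds per custom-interval unit letter (year approximated as 365 days).
var UNIT_SECONDS = {
  s: 1,          // seconds
  m: 60,         // minutes
  h: 3600,       // hours
  d: 86400,      // days
  w: 604800,     // weeks
  y: 31536000    // years
};

// Convert an expression like "30s" or "2h" into a number of seconds.
function intervalToSeconds(expr) {
  var match = /^(\d+)\s*([smhdwy])$/.exec(expr);
  if (!match) throw new Error('invalid interval: ' + expr);
  return Number(match[1]) * UNIT_SECONDS[match[2]];
}
```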
@@ -51,7 +61,7 @@ NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you
The availability of these options varies depending on the aggregation you choose.
Select *view options* to change the following aspects of the table:
Select the *Options* tab to change the following aspects of the table:
*Per Page*:: This field controls the pagination of the table. The default value is ten rows per page.
@@ -60,4 +70,4 @@ Checkboxes are available to enable and disable the following behaviors:
*Show metrics for every bucket/level*:: Check this box to display the intermediate results for each bucket aggregation.
*Show partial rows*:: Check this box to display a row even when there is no result.
NOTE: Enabling these behaviors may have a substantial effect on performance.


@@ -1,13 +1,18 @@
[[discover]]
== Discover
You can interactively explore your data from the Discover page. You have access to every document in every index that matches the selected index pattern. You can submit search queries, filter the search results, and view document data. You can also see the number of documents that match the search query and get field value statistics. If a time field is configured for the selected index pattern, the distribution of documents over time is displayed in a histogram at the top of the page.
You can interactively explore your data from the Discover page. You have access to every document in every index that
matches the selected index pattern. You can submit search queries, filter the search results, and view document data.
You can also see the number of documents that match the search query and get field value statistics. If a time field is
configured for the selected index pattern, the distribution of documents over time is displayed in a histogram at the
top of the page.
image:images/Discover-Start-Annotated.jpg[Discover Page]
[float]
[[set-time-filter]]
=== Setting the Time Filter
The Time Filter restricts the search results to a specific time period. You can set a time filter if your index contains time-based events and a time-field is configured for the selected index pattern.
The Time Filter restricts the search results to a specific time period. You can set a time filter if your index
contains time-based events and a time-field is configured for the selected index pattern.
By default the time filter is set to the last 15 minutes. You can use the Time Picker to change the time filter
or select a specific time interval or time range in the histogram at the top of the page.
@@ -18,40 +23,56 @@ To set a time filter with the Time Picker:
. To set a quick filter, simply click one of the shortcut links.
. To specify a relative Time Filter, click *Relative* and enter the relative start time. You can specify
the relative start time as any number of seconds, minutes, hours, days, months, or years ago.
. To specify an absolute Time Filter, click *Absolute* and enter the start date in the *From* field and the end date in the *To* field.
. To specify an absolute Time Filter, click *Absolute* and enter the start date in the *From* field and the end date in
the *To* field.
. Click the caret at the bottom of the Time Picker to hide it.
To set a Time Filter from the histogram, do one of the following:
* Click the bar that represents the time interval you want to zoom in on.
* Click and drag to view a specific timespan. You must start the selection with the cursor over the background of the chart--the cursor changes to a plus sign when you hover over a valid start point.
* Click and drag to view a specific timespan. You must start the selection with the cursor over the background of the
chart--the cursor changes to a plus sign when you hover over a valid start point.
You can use the browser Back button to undo your changes.
The histogram lists the time range you're currently exploring, as well as the intervals that range is currently using.
To change the intervals, click the link and select an interval from the drop-down. The default behavior automatically
sets an interval based on the time range.
[float]
[[search]]
=== Searching Your Data
You can search the indices that match the current index pattern by submitting a search from the Discover page.
You can enter simple query strings, use the Lucene https://lucene.apache.org/core/2_9_4/queryparsersyntax.html[query syntax], or use the full JSON-based http://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html[Elasticsearch Query DSL].
You can enter simple query strings, use the Lucene https://lucene.apache.org/core/2_9_4/queryparsersyntax.html[query
syntax], or use the full JSON-based {ref}/query-dsl.html[Elasticsearch Query DSL].
When you submit a search, the histogram, Documents table, and Fields list are updated to reflect
the search results. The total number of hits (matching documents) is shown in the upper right corner of the
histogram. The Documents table shows the first five hundred hits. By default, the hits are listed in reverse chronological order, with the newest documents shown first. You can reverse the sort order by clicking on the Time column header. You can also sort the table using the values in any indexed field. For more information, see <<sorting, Sorting the Documents Table>>.
histogram. The Documents table shows the first five hundred hits. By default, the hits are listed in reverse
chronological order, with the newest documents shown first. You can reverse the sort order by clicking on the Time
column header. You can also sort the table using the values in any indexed field. For more information, see <<sorting,
Sorting the Documents Table>>.
To search your data:
. Enter a query string in the Search field:
+
* To perform a free text search, simply enter a text string. For example, if you're searching web server logs, you could enter `safari` to search all fields for the term `safari`.
* To perform a free text search, simply enter a text string. For example, if you're searching web server logs, you
could enter `safari` to search all fields for the term `safari`.
+
* To search for a value in a specific field, you prefix the value with the name of the field. For example, you could enter `status:200` to limit the results to entries that contain the value `200` in the `status` field.
* To search for a value in a specific field, you prefix the value with the name of the field. For example, you could
enter `status:200` to limit the results to entries that contain the value `200` in the `status` field.
+
* To search for a range of values, you can use the bracketed range syntax, `[START_VALUE TO END_VALUE]`. For example, to find entries that have 4xx status codes, you could enter `status:[400 TO 499]`.
* To search for a range of values, you can use the bracketed range syntax, `[START_VALUE TO END_VALUE]`. For example,
to find entries that have 4xx status codes, you could enter `status:[400 TO 499]`.
+
* To specify more complex search criteria, you can use the Boolean operators `AND`, `OR`, and `NOT`. For example,
to find entries that have 4xx status codes and have an extension of `php` or `html`, you could enter `status:[400 TO 499] AND (extension:php OR extension:html)`.
to find entries that have 4xx status codes and have an extension of `php` or `html`, you could enter `status:[400 TO
499] AND (extension:php OR extension:html)`.
+
NOTE: These examples use the Lucene query syntax. You can also submit queries using the Elasticsearch Query DSL. For examples, see http://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#query-string-syntax[query string syntax] in the Elasticsearch Reference.
NOTE: These examples use the Lucene query syntax. You can also submit queries using the Elasticsearch Query DSL. For
examples, see {ref}/query-dsl-query-string-query.html#query-string-syntax[query string syntax] in the Elasticsearch
Reference.
+
. Press *Enter* or click the *Search* button to submit your search query.
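As a sketch of the Query DSL alternative mentioned in the note above, the last Lucene example could be submitted as a `query_string` query. The request-body shape below follows the standard Elasticsearch reference, not this document:

```json
{
  "query": {
    "query_string": {
      "query": "status:[400 TO 499] AND (extension:php OR extension:html)"
    }
  }
}
```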
@@ -82,91 +103,113 @@ To load a saved search:
button] in the Discover toolbar.
. Select the search you want to load.
If the saved search is associated with a different index pattern than is currently selected, loading the saved search also changes the selected index pattern.
If the saved search is associated with a different index pattern than is currently selected, loading the saved search
also changes the selected index pattern.
[float]
[[select-pattern]]
==== Changing Which Indices You're Searching
When you submit a search request, the indices that match the currently-selected index pattern are searched. The current index pattern is shown below the search field. To change which indices you are searching, select a different index pattern.
To select a different index pattern:
. Click the *Settings* button image:images/SettingsButton.jpg[Settings
button] in the Discover toolbar.
. Select the pattern you want to use from the Index Pattern list.
When you submit a search request, the indices that match the currently-selected index pattern are searched. The current
index pattern is shown below the search field. To change which indices you are searching, click the name of the current
index pattern to display a list of the configured index patterns and select a different index pattern.
For more information about index patterns, see <<settings-create-pattern, Creating an Index Pattern>>.
[float]
[[auto-refresh]]
=== Automatically Refreshing the Page
You can configure a refresh interval to automatically refresh the Discover page with the latest
index data. This periodically resubmits the search query.
When a refresh interval is set, it is displayed to the left of the Time Filter in the menu bar.
To set the refresh interval:
. Click the *Time Filter* image:images/TimeFilter.jpg[Time
Filter] in the upper right corner of the menu bar.
. Click the *Refresh Interval* tab.
. Choose a refresh interval from the list.
include::autorefresh.asciidoc[]
[float]
[[field-filter]]
=== Filtering by Field
You can filter the search results to display only those documents that contain a particular value in a field. You can also create negative filters that exclude documents that contain the specified field value.
You can filter the search results to display only those documents that contain a particular value in a field. You can
also create negative filters that exclude documents that contain the specified field value.
You can add filters from the Fields list or from the Documents table. When you add a filter, it is displayed in the filter bar below the search query. From the filter bar, you can enable or disable a filter, invert the filter (change it from a positive filter to a negative filter and vice-versa), toggle the filter on or off, or remove it entirely.
You can add filters from the Fields list or from the Documents table. When you add a filter, it is displayed in the
filter bar below the search query. From the filter bar, you can enable or disable a filter, invert the filter (change
it from a positive filter to a negative filter and vice-versa), toggle the filter on or off, or remove it entirely.
Click the small left-facing arrow to the right of the index pattern selection drop-down to collapse the Fields list.
To add a filter from the Fields list:
. Click the name of the field you want to filter on. This displays the top five values for that field. To the right of each value, there are two magnifying glass buttons--one for adding a regular (positive) filter, and
. Click the name of the field you want to filter on. This displays the top five values for that field. To the right of
each value, there are two magnifying glass buttons--one for adding a regular (positive) filter, and
one for adding a negative filter.
. To add a positive filter, click the *Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button]. This filters out documents that don't contain that value in the field.
. To add a negative filter, click the *Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button]. This excludes documents that contain that value in the field.
. To add a positive filter, click the *Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button].
This filters out documents that don't contain that value in the field.
. To add a negative filter, click the *Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button].
This excludes documents that contain that value in the field.
To add a filter from the Documents table:
. Expand a document in the Documents table by clicking the *Expand* button image:images/ExpandButton.jpg[Expand Button] to the left of the document's entry in the first column (the first column is usually Time). To the right of each field name, there are two magnifying glass buttons--one for adding a regular (positive) filter, and one for adding a negative filter.
. To add a positive filter based on the document's value in a field, click the *Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button]. This filters out documents that don't contain the specified value in that field.
. To add a negative filter based on the document's value in a field, click the *Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button]. This excludes documents that contain the specified value in that field.
. Expand a document in the Documents table by clicking the *Expand* button image:images/ExpandButton.jpg[Expand Button]
to the left of the document's entry in the first column (the first column is usually Time). To the right of each field
name, there are two magnifying glass buttons--one for adding a regular (positive) filter, and one for adding a negative
filter.
. To add a positive filter based on the document's value in a field, click the
*Positive Filter* button image:images/PositiveFilter.jpg[Positive Filter Button]. This filters out documents that don't
contain the specified value in that field.
. To add a negative filter based on the document's value in a field, click the
*Negative Filter* button image:images/NegativeFilter.jpg[Negative Filter Button]. This excludes documents that contain
the specified value in that field.
[float]
[[discover-filters]]
include::filter-pinning.asciidoc[]
[float]
[[document-data]]
=== Viewing Document Data
When you submit a search query, the 500 most recent documents that match the query are listed in the Documents table.
You can configure the number of documents shown in the table by setting the `discover:sampleSize` property in
<<advanced-options,Advanced Settings>>. By default, the table shows the localized version of the time field specified
in the selected index pattern and the document `_source`. You can <<adding-columns, add fields to the Documents table>>
from the Fields list. You can <<sorting, sort the listed documents>> by any indexed field that's included in the table.
To view a document's field data, click the *Expand* button image:images/ExpandButton.jpg[Expand Button] to the left of
the document's entry in the first column (the first column is usually Time). Kibana reads the document data from
Elasticsearch and displays the document fields in a table. Each row in the table shows the field's name, buttons for
adding filters, and the field's value.
image::images/Expanded-Document.png[]
. To view the original JSON document (pretty-printed), click the *JSON* tab.
. To view the document data as a separate page, click the link. You can bookmark and share this link to provide direct
access to a particular document.
. To collapse the document details, click the *Collapse* button image:images/CollapseButton.jpg[Collapse Button].
. To toggle a particular field's column in the Documents table, click the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.
[float]
[[sorting]]
==== Sorting the Document List
You can sort the documents in the Documents table by the values in any indexed field. If a time field is configured for
the selected index pattern, by default the documents are sorted in reverse chronological order.
To change the sort order:
* Click the name of the field you want to sort by. The fields you can use for sorting have a sort button to the right
of the field name. Clicking the field name a second time reverses the sort order.
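In Elasticsearch terms, clicking a sort button adds a `sort` clause to the search request. A minimal sketch, assuming a
time field named `@timestamp`; descending order gives the default reverse chronological listing:

```json
{
  "sort": [ { "@timestamp": { "order": "desc" } } ]
}
```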
[float]
[[adding-columns]]
==== Adding Field Columns to the Documents Table
By default, the Documents table shows the localized version of the time field specified in the selected index pattern
and the document `_source`. You can add fields to the table from the Fields list or from a document's expanded view.
To add field columns to the Documents table:
. Mouse over a field in the Fields list and click its *add* button image:images/AddFieldButton.jpg[Add Field Button].
. Repeat until you've added all the fields you want to display in the Documents table.
. Alternatively, add a field column directly from a document's expanded view by clicking the
image:images/add-column-button.png[Add Column] *Toggle column in table* button.
The added field columns replace the `_source` column in the Documents table. The added fields are also
listed in the *Selected Fields* section at the top of the field list.
To rearrange the field columns in the table, mouse over the header of the column you want to move and click the *Move*
button.
image:images/Discover-MoveColumn.jpg[Move Column]
==== Removing Field Columns from the Documents Table
To remove field columns from the Documents table:
. Mouse over the field you want to remove in the *Selected Fields* section of the Fields list and click its *remove*
button image:images/RemoveFieldButton.jpg[Remove Field Button].
. Repeat until you've removed all the fields you want to drop from the Documents table.
[float]
[[viewing-field-stats]]
=== Viewing Field Data Statistics
From the field list, you can see how many documents in the Documents table contain a particular field, what the top 5
values are, and what percentage of documents contain each value.
To view field data statistics:
* Click the name of a field in the Fields list. The field can be anywhere in the Fields list--Selected Fields, Popular
Fields, or the list of other fields.
image:images/Discover-FieldStats.jpg[Field Statistics]
TIP: To create a visualization based on the field, click the *Visualize* button below the field statistics.

=== Working with Filters
When you create a filter anywhere in Kibana, the filter conditions display in a green oval under the search text
entry box:
image::images/filter-sample.png[]
Hovering on the filter oval displays the following icons:
image::images/filter-allbuttons.png[]
Enable Filter image:images/filter-enable.png[]:: Click this icon to disable the filter without removing it. You can
enable the filter again later by clicking the icon again. Disabled filters display a striped shaded color, green for
inclusion filters and red for exclusion filters.
Pin Filter image:images/filter-pin.png[]:: Click this icon to _pin_ a filter. Pinned filters persist across Kibana tabs.
You can pin filters from the _Visualize_ tab, click on the _Discover_ or _Dashboard_ tabs, and those filters remain in
place.
NOTE: If you have a pinned filter and you're not seeing any query results, check that your current tab's index pattern
is one that the filter applies to.
Toggle Filter image:images/filter-toggle.png[]:: Click this icon to _toggle_ a filter. By default, filters are inclusion
filters, and display in green. Only elements that match the filter are displayed. To change this to an exclusion
filter, which displays only elements that _don't_ match, toggle the filter. Exclusion filters display in red.
Remove Filter image:images/filter-delete.png[]:: Click this icon to remove a filter entirely.
To apply any of the filter actions to all the filters currently in place, click the image:images/filter-actions.png[]
*Global Filter Actions* button and select an action.

[[getting-started]]
== Getting Started with Kibana
Now that you have Kibana <<setup,installed>>, you can step through this tutorial to get fast hands-on experience with
key Kibana functionality. By the end of this tutorial, you will have:
* Loaded a sample data set into your Elasticsearch installation
* Defined at least one index pattern
* Used the <<discover, Discover>> functionality to explore your data
* Set up some <<visualize,_visualizations_>> to graphically represent your data
* Assembled visualizations into a <<dashboard,Dashboard>>
The material in this section assumes you have a working Kibana install connected to a working Elasticsearch install.
[float]
[[tutorial-load-dataset]]
=== Before You Start: Loading Sample Data
The tutorials in this section rely on the following data sets:
* The complete works of William Shakespeare, suitably parsed into fields. Download this data set by clicking here:
https://www.elastic.co/guide/en/kibana/3.0/snippets/shakespeare.json[shakespeare.json].
* A set of fictitious accounts with randomly generated data. Download this data set by clicking here:
https://github.com/bly2k/files/blob/master/accounts.zip?raw=true[accounts.zip]
* A set of randomly generated log files. Download this data set by clicking here:
https://download.elastic.co/demos/kibana/gettingstarted/logs.jsonl.gz[logstash.jsonl.gz]
Two of the data sets are compressed. Use the following commands to extract the files:
[source,shell]
unzip accounts.zip
gunzip logstash.jsonl.gz
The Shakespeare data set is organized in the following schema:
[source,json]
{
"line_id": INT,
"play_name": "String",
"speech_number": INT,
"line_number": "String",
"speaker": "String",
"text_entry": "String",
}
The accounts data set is organized in the following schema:
[source,json]
{
"account_number": INT,
"balance": INT,
"firstname": "String",
"lastname": "String",
"age": INT,
"gender": "M or F",
"address": "String",
"employer": "String",
"email": "String",
"city": "String",
"state": "String"
}
The schema for the logs data set has dozens of different fields, but the notable ones used in this tutorial are:
[source,json]
{
"memory": INT,
"geo.coordinates": "geo_point"
"@timestamp": "date"
}
Before we load the Shakespeare data set, we need to set up a {ref}/mapping.html[_mapping_] for the fields. Mapping
divides the documents in the index into logical groups and specifies a field's characteristics, such as the field's
searchability or whether or not it's _tokenized_, or broken up into separate words.
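Tokenization is easiest to see with a small illustration. This is not how Elasticsearch's analyzer actually works, but
a plain whitespace split approximates what an analyzed string field does to a multi-word value, while a `not_analyzed`
field keeps the value whole:

```shell
value="Henry IV"
# Analyzed (approximation): the value is broken into separate tokens.
echo "$value" | tr ' ' '\n'
# not_analyzed: the whole value is indexed as a single term.
echo "$value"
```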
Use the following command to set up a mapping for the Shakespeare data set:
[source,shell]
curl -XPUT http://localhost:9200/shakespeare -d '
{
"mappings" : {
"_default_" : {
"properties" : {
"speaker" : {"type": "string", "index" : "not_analyzed" },
"play_name" : {"type": "string", "index" : "not_analyzed" },
"line_id" : { "type" : "integer" },
"speech_number" : { "type" : "integer" }
}
}
}
}
';
This mapping specifies the following qualities for the data set:
* The _speaker_ field is a string that isn't analyzed. The string in this field is treated as a single unit, even if
there are multiple words in the field.
* The same applies to the _play_name_ field.
* The _line_id_ and _speech_number_ fields are integers.
The accounts and logstash data sets don't require any mappings, so at this point we're ready to load the data sets into
Elasticsearch with the following commands:
[source,shell]
curl -XPOST 'localhost:9200/bank/_bulk?pretty' --data-binary @accounts.json
curl -XPOST 'localhost:9200/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
curl -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logstash.json
These commands may take some time to execute, depending on the computing resources available.
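These files follow the {ref}/docs-bulk.html[bulk API] convention of pairing an action line with a document line. A
sketch of what two such lines might look like in a file like `accounts.json`; the field values here are illustrative:

```json
{"index":{"_id":"1"}}
{"account_number":1,"balance":39225,"firstname":"Amber","lastname":"Duke","age":32,"state":"IL"}
```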
Verify successful loading with the following command:
[source,shell]
curl 'localhost:9200/_cat/indices?v'
You should see output similar to the following:
[source,shell]
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open bank 5 1 1000 0 418.2kb 418.2kb
yellow open shakespeare 5 1 111396 0 17.6mb 17.6mb
yellow open logstash-2015.05.18 5 1 4631 0 15.6mb 15.6mb
yellow open logstash-2015.05.19 5 1 4624 0 15.7mb 15.7mb
yellow open logstash-2015.05.20 5 1 4750 0 16.4mb 16.4mb
[[tutorial-define-index]]
=== Defining Your Index Patterns
Each data set you load is stored in an Elasticsearch index. In the previous section, the Shakespeare data set was
loaded into an index named `shakespeare`, and the accounts data set into an index named `bank`. An
https://www.elastic.co/guide/en/kibana/current/settings.html#settings-create-pattern[_index pattern_] is a string with
optional wildcards that can match multiple indices. For example, in the common logging use case, a typical index name
contains the date in MM-DD-YYYY format, and an index pattern for May would look something like `logstash-2015.05*`.
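As an illustration of how the wildcard matches, shell globbing behaves similarly; the index names below are
hypothetical:

```shell
for index in logstash-2015.04.30 logstash-2015.05.18 logstash-2015.05.19 shakespeare; do
  case "$index" in
    logstash-2015.05*) echo "$index matches logstash-2015.05*" ;;
    *)                 echo "$index does not match" ;;
  esac
done
```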
For this tutorial, any pattern that matches either of the two indices we've loaded will work. Open a browser and
navigate to `localhost:5601`. Click the *Settings* tab, then the *Indices* tab. Click *Add New* to define a new index
pattern. Since these data sets don't contain time-series data, make sure the *Index contains time-based events* box is
unchecked. Specify `shakes*` as the index pattern for the Shakespeare data set and click *Create* to define the index
pattern, then define a second index pattern named `ba*`.
[float]
[[tutorial-discovering]]
=== Discovering Your Data
Click the *Discover* tab to display Kibana's data discovery functions:
image::images/tutorial-discover.png[]
Right under the tab itself, there is a search box where you can search your data. Searches use a specific
{ref}/query-dsl-query-string-query.html#query-string-syntax[query syntax] that enables you to create custom searches,
which you can save and load by clicking the buttons to the right of the search box.
Beneath the search box, the current index pattern is displayed in a drop-down. You can change the index pattern by
selecting a different pattern from the drop-down selector.
You can construct searches by using the field names and the values you're interested in. With numeric fields you can
use comparison operators such as greater than (>), less than (<), or equals (=). You can link elements with the
logical operators AND, OR, and NOT, all in uppercase.
Try selecting the `ba*` index pattern and putting the following search into the search box:
[source,text]
account_number:<100 AND balance:>47500
This search returns all account numbers between zero and 99 with balances in excess of 47,500.
If you're using the linked sample data set, this search returns 5 results: Account numbers 8, 32, 78, 85, and 97.
image::images/tutorial-discover-2.png[]
To narrow the display to only the specific fields of interest, highlight each field in the list that displays under the
index pattern and click the *Add* button. Note how, in this example, adding the `account_number` field changes the
display from the full text of five records to a simple list of five account numbers:
image::images/tutorial-discover-3.png[]
[[tutorial-visualizing]]
=== Data Visualization: Beyond Discovery
The visualization tools available on the *Visualize* tab enable you to display aspects of your data sets in several
different ways.
Click on the *Visualize* tab to start:
image::images/tutorial-visualize.png[]
Click on *Pie chart*, then *From a new search*. Select the `ba*` index pattern.
Visualizations depend on two types of Elasticsearch {ref}/search-aggregations.html[aggregations]: _bucket_
aggregations and _metric_ aggregations. A bucket aggregation sorts your data according to criteria you specify. For
example, in our accounts data set, we can establish a range of account balances, then display what proportion of the
total falls into each balance range.
The whole pie displays, since we haven't specified any buckets yet.
image::images/tutorial-visualize-pie-1.png[]
Select *Split Slices* from the *Select buckets type* list, then select *Range* from the *Aggregation* drop-down
selector. Select the *balance* field from the *Field* drop-down, then click on *Add Range* four times to bring the
total number of ranges to six. Enter the following ranges:
[source,text]
0 1000
1000 3000
3000 7000
7000 15000
15000 31000
31000 50000
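These settings correspond roughly to an Elasticsearch range aggregation. A sketch of the aggregation fragment Kibana
might build from them; the aggregation name `balance_ranges` is made up for illustration:

```json
{
  "aggs": {
    "balance_ranges": {
      "range": {
        "field": "balance",
        "ranges": [
          { "from": 0, "to": 1000 },
          { "from": 1000, "to": 3000 },
          { "from": 3000, "to": 7000 },
          { "from": 7000, "to": 15000 },
          { "from": 15000, "to": 31000 },
          { "from": 31000, "to": 50000 }
        ]
      }
    }
  }
}
```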
Click the green *Apply changes* button to display the chart:
image::images/tutorial-visualize-pie-2.png[]
This shows you what proportion of the 1000 accounts fall in these balance ranges. To see another dimension of the data,
we're going to add another bucket aggregation. We can break down each of the balance ranges further by the account
holder's age.
Click *Add sub-buckets* at the bottom, then select the *Terms* aggregation and the *age* field from the drop-downs.
Click the green *Apply changes* button to add an external ring with the new results.
image::images/tutorial-visualize-pie-3.png[]
Save this chart by clicking the *Save Visualization* button to the right of the search field. Name the visualization
_Pie Example_.
Next, we're going to make a bar chart. Click on *New Visualization*, then *Vertical bar chart*. Select *From a new
search* and the `shakes*` index pattern. You'll see a single big bar, since we haven't defined any buckets yet:
image::images/tutorial-visualize-bar-1.png[]
For the Y-axis metrics aggregation, select *Unique Count*, with *speaker* as the field. For Shakespeare plays, it might
be useful to know which plays have the lowest number of distinct speaking parts, if your theater company is short on
actors. For the X-Axis buckets, select the *Terms* aggregation with the *play_name* field. For the *Order*, select
*Bottom*, leaving the *Size* at 5.
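In aggregation terms, this chart is roughly a terms bucket on `play_name` ordered ascending by a cardinality (Unique
Count) metric on `speaker`. A hedged sketch of such a request fragment; the aggregation names are illustrative:

```json
{
  "aggs": {
    "plays": {
      "terms": {
        "field": "play_name",
        "size": 5,
        "order": { "distinct_speakers": "asc" }
      },
      "aggs": {
        "distinct_speakers": {
          "cardinality": { "field": "speaker" }
        }
      }
    }
  }
}
```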
Leave the other elements at their default values and click the green *Apply changes* button. Your chart should now look
like this:
image::images/tutorial-visualize-bar-2.png[]
Notice how the individual play names show up as whole phrases, instead of being broken down into individual words. This
is the result of the mapping we did at the beginning of the tutorial, when we marked the *play_name* field as 'not
analyzed'.
Hovering on each bar shows you the number of speaking parts for each play as a tooltip. You can turn this behavior off,
as well as change many other options for your visualizations, by clicking the *Options* tab in the top left.
Now that you have a list of the smallest casts for Shakespeare plays, you might also be curious to see which of these
plays makes the greatest demands on an individual actor by showing the maximum number of speeches for a given part. Add
a Y-axis aggregation with the *Add metrics* button, then choose the *Max* aggregation for the *speech_number* field. In
the *Options* tab, change the *Bar Mode* drop-down to *grouped*, then click the green *Apply changes* button. Your
chart should now look like this:
image::images/tutorial-visualize-bar-3.png[]
As you can see, _Love's Labours Lost_ has an unusually high maximum speech number, compared to the other plays, and
might therefore make more demands on an actor's memory.
Save this chart with the name _Bar Example_.
Next, we're going to make a tile map chart to visualize some geographic data. Click on *New Visualization*, then
*Tile map*. Select *From a new search* and the `logstash-*` index pattern. Define the time window for the events we're
exploring by clicking the time selector at the top right of the Kibana interface. Click on *Absolute*, then set the
end time for the range to May 20, 2015 and the start time to May 18, 2015:
image::images/tutorial-timepicker.png[]
Once you've got the time range set up, click the *Go* button, then close the time picker by clicking the small up arrow
at the bottom. You'll see a map of the world, since we haven't defined any buckets yet:
image::images/tutorial-visualize-map-1.png[]
Select *Geo Coordinates* as the bucket, then click the green *Apply changes* button. Your chart should now look like
this:
image::images/tutorial-visualize-map-2.png[]
You can navigate the map by clicking and dragging, zoom with the image:images/viz-zoom.png[] buttons, or hit the *Fit
Data Bounds* image:images/viz-fit-bounds.png[] button to zoom to the lowest level that includes all the points. You can
also create a filter to define a rectangle on the map, either to include or exclude, by clicking the
*Latitude/Longitude Filter* image:images/viz-lat-long-filter.png[] button and drawing a bounding box on the map.
A green oval with the filter definition displays right under the query box:
image::images/tutorial-visualize-map-3.png[]
Hover on the filter to display the controls to toggle, pin, invert, or delete the filter. Save this chart with the name
_Map Example_.
Finally, we're going to define a sample Markdown widget to display on our dashboard. Click on *New Visualization*, then
*Markdown widget*, to display a very simple Markdown entry field:
image::images/tutorial-visualize-md-1.png[]
Write the following text in the field:
[source,markdown]
# This is a tutorial dashboard!
The Markdown widget uses **markdown** syntax.
> Blockquotes in Markdown use the > character.
Click the green *Apply changes* button to display the rendered Markdown in the preview pane:
image::images/tutorial-visualize-md-2.png[]
Save this visualization with the name _Markdown Example_.
[[tutorial-dashboard]]
=== Putting it all Together with Dashboards
A Kibana dashboard is a collection of visualizations that you can arrange and share. To get started, click the
*Dashboard* tab, then the *Add Visualization* button at the far right of the search box to display the list of saved
visualizations. Select _Markdown Example_, _Pie Example_, _Bar Example_, and _Map Example_, then close the list of
visualizations by clicking the small up-arrow at the bottom of the list. You can move the containers for each
visualization by clicking and dragging the title bar. Resize the containers by dragging the lower right corner of a
visualization's container. Your sample dashboard should end up looking roughly like this:
image::images/tutorial-dashboard.png[]
Click the *Save Dashboard* button, then name the dashboard _Tutorial Dashboard_. You can share a saved dashboard by
clicking the *Share* button to display HTML embedding code as well as a direct link.
[float]
[[wrapping-up]]
=== Wrapping Up
Now that you've handled the basic aspects of Kibana's functionality, you're ready to explore Kibana in further detail.
Take a look at the rest of the documentation for more details!

[[kibana-guide]]
= Kibana User Guide
:ref: http://www.elastic.co/guide/en/elasticsearch/reference/current/
:shield: https://www.elastic.co/guide/en/shield/current
:k4pull: https://github.com/elastic/kibana/pull/
include::introduction.asciidoc[]
include::setup.asciidoc[]
include::getting-started.asciidoc[]
include::access.asciidoc[]
include::discover.asciidoc[]
include::settings.asciidoc[]
include::production.asciidoc[]
include::whats-new.asciidoc[]

browser-based interface enables you to quickly create and share dynamic
dashboards that display changes to Elasticsearch queries in real time.
Setting up Kibana is a snap. You can install Kibana and start exploring your
Elasticsearch indices in minutes -- no code, no additional infrastructure required.
NOTE: This guide describes how to use Kibana 4.1. For information about what's new
in Kibana 4.1, see <<whats-new>>. For earlier versions of Kibana 4, see the
http://www.elastic.co/guide/en/kibana/4.0/index.html[Kibana 4 User Guide]. For information about Kibana 3, see the
http://www.elastic.co/guide/en/kibana/3.0/index.html[Kibana 3 User Guide].
[float]
[[data-discovery]]
that displays several visualizations of the TFL data:
image:images/TFL-Dashboard.jpg[Dashboard]
For more information about creating and sharing visualizations and dashboards, see the <<visualize, Visualize>>
and <<dashboard, Dashboard>> topics. A complete <<getting-started,tutorial>> covering several aspects of Kibana's
functionality is also available.

[[setup-repositories]]
=== Kibana Repositories
Binary packages for Kibana are available for Unix distributions that support the `apt` and `yum` tools. We also
maintain repositories for APT-based and YUM-based distributions.
NOTE: Since the packages are created as part of the Kibana build, source packages are not available.
Packages are signed with the PGP key http://pgp.mit.edu/pks/lookup?op=vindex&search=0xD27D666CD88E42B4[D88E42B4], which
has the following fingerprint:
4609 5ACC 8548 582C 1A26 99A9 D27D 666C D88E 42B4
[float]
[[kibana-apt]]
===== Installing Kibana with apt-get
. Download and install the Public Signing Key:
+
[source,sh]
--------------------------------------------------
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
--------------------------------------------------
+
. Add the repository definition to your `/etc/apt/sources.list` file:
+
[source, sh]
--------------------------------------------------
echo "deb http://packages.elastic.co/kibana/{branch}/debian stable main" | sudo tee -a /etc/apt/sources.list
--------------------------------------------------
+
[WARNING]
==================================================
Use the `echo` method described above to add the Kibana repository. Do not use `add-apt-repository`, as that command
adds a `deb-src` entry with no corresponding source package.
When the `deb-src` entry is present, the commands in this procedure generate an error similar to the following:
Unable to find expected entry 'main/source/Sources' in Release file (Wrong sources.list entry or malformed file)
Delete the `deb-src` entry from the `/etc/apt/sources.list` file to clear the error.
==================================================
+
. Run `apt-get update` and the repository is ready for use. Install Kibana with the following command:
+
[source,sh]
--------------------------------------------------
sudo apt-get update && sudo apt-get install kibana
--------------------------------------------------
+
. Configure Kibana to automatically start during bootup. If your distribution is using the System V version of `init`,
run the following command:
+
[source,sh]
--------------------------------------------------
sudo update-rc.d kibana defaults 95 10
--------------------------------------------------
+
. If your distribution is using `systemd`, run the following commands instead:
+
[source,sh]
--------------------------------------------------
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
--------------------------------------------------
[float]
[[kibana-yum]]
===== Installing Kibana with yum
WARNING: The repositories set up in this procedure are not compatible with distributions using version 3 of `rpm`, such
as CentOS version 5.
. Download and install the public signing key:
+
[source,sh]
--------------------------------------------------
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
--------------------------------------------------
+
. Create a file named `kibana.repo` in the `/etc/yum.repos.d/` directory with the following contents:
+
[source,sh]
--------------------------------------------------
[kibana-{branch}]
name=Kibana repository for {branch}.x packages
baseurl=http://packages.elastic.co/kibana/{branch}/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
--------------------------------------------------
+
. Install Kibana by running the following command:
+
[source,sh]
--------------------------------------------------
yum install kibana
--------------------------------------------------
+
. Configure Kibana to automatically start during bootup. If your distribution is using the System V version of `init`,
run the following command:
+
[source,sh]
--------------------------------------------------
chkconfig --add kibana
--------------------------------------------------
+
. If your distribution is using `systemd`, run the following commands instead:
+
[source,sh]
--------------------------------------------------
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
--------------------------------------------------
@ -27,9 +27,36 @@ NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you
{ref}/modules-scripting.html[dynamic Groovy scripting].
The availability of these options varies depending on the aggregation you choose.
Select the *Options* tab to change the following aspects of the chart:
*Y-Axis Scale*:: You can select *linear*, *log*, or *square root* scales for the chart's Y axis. You can use a log
scale to display data that varies exponentially, such as a compounding interest chart, or a square root scale to
regularize the display of data sets with variabilities that are themselves highly variable. This kind of data, where
the variability is itself variable over the domain being examined, is known as _heteroscedastic_ data. For example, if
a data set of height versus weight has a relatively narrow range of variability at the short end of height, but a wider
range at the taller end, the data set is heteroscedastic.
*Smooth Lines*:: Check this box to curve the line from point to point. Bear in mind that smoothed lines necessarily
affect the representation of your data and create a potential for ambiguity.
*Show Connecting Lines*:: Check this box to draw lines between the points on the chart.
*Show Circles*:: Check this box to draw each data point on the chart as a small circle.
*Current time marker*:: For charts of time-series data, check this box to draw a red line on the current time.
*Set Y-Axis Extents*:: Check this box and enter values in the *y-max* and *y-min* fields to set the Y axis to specific
values.
*Show Tooltip*:: Check this box to enable the display of tooltips.
*Show Legend*:: Check this box to enable the display of a legend next to the chart.
*Scale Y-Axis to Data Bounds*:: The default Y-axis bounds are zero and the maximum value returned in the data. Check
this box to change both upper and lower bounds to match the values returned in the data.
After changing options, click the green *Apply changes* button to update your visualization, or the grey *Discard
changes* button to keep your visualization in its current state.
[float]
[[bubble-chart]]
=== Bubble Charts
You can convert a line chart visualization to a bubble chart by performing the following steps:
. Click *Add Metrics* for the visualization's Y axis, then select *Dot Size*.
. Select a metric aggregation from the drop-down list.
. In the *Options* tab, uncheck the *Show Connecting Lines* box.
. Click the *Apply changes* button.
@ -4,4 +4,4 @@
The Markdown widget is a text entry field that accepts GitHub-flavored Markdown text. Kibana renders the text you enter
in this field and displays the results on the dashboard. You can click the *Help* link to go to the
https://help.github.com/articles/github-flavored-markdown/[help page] for GitHub flavored Markdown. Click *Apply* to
display the rendered text in the Preview pane or *Discard* to revert to a previous version.
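For instance, a snippet like the following (purely illustrative) combines a heading, emphasis, a list, and a link:

```markdown
# Server status

The cluster is **healthy**.

- [Dashboard](https://example.com/dashboard)
- Uptime: 42 days
```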
@ -6,10 +6,6 @@ A metric visualization displays a single number for each aggregation you select:
include::y-axis-aggs.asciidoc[]
You can click the *Advanced* link to display more customization options:
*Exclude Pattern*:: Specify a pattern in this field to exclude from the results.
*Exclude Pattern Flags*:: A standard set of Java flags for the exclusion pattern.
*Include Pattern*:: Specify a pattern in this field to include in the results.
*Include Pattern Flags*:: A standard set of Java flags for the inclusion pattern.
*JSON Input*:: A text field where you can add specific JSON-formatted properties to merge with the aggregation
definition, as in the following example:
@ -21,4 +17,4 @@ NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you
The availability of these options varies depending on the aggregation you choose.
Click the *Options* tab to change the font used to display the metrics.
@ -21,13 +21,23 @@ You can specify any of the following bucket aggregations for your pie chart:
*Date Histogram*:: A {ref}/search-aggregations-bucket-datehistogram-aggregation.html[_date histogram_] is built from a
numeric field and organized by date. You can specify a time frame for the intervals in seconds, minutes, hours, days,
weeks, months, or years. You can also specify a custom interval frame by selecting *Custom* as the interval and
specifying a number and a time unit in the text field. Custom interval time units are *s* for seconds, *m* for minutes,
*h* for hours, *d* for days, *w* for weeks, and *y* for years. Different units support different levels of precision,
down to one second.
*Histogram*:: A standard {ref}/search-aggregations-bucket-histogram-aggregation.html[_histogram_] is built from a
numeric field. Specify an integer interval for this field. Select the *Show empty buckets* checkbox to include empty
intervals in the histogram.
*Range*:: With a {ref}/search-aggregations-bucket-range-aggregation.html[_range_] aggregation, you can specify ranges
of values for a numeric field. Click *Add Range* to add a set of range endpoints. Click the red *(x)* symbol to remove
a range.
*Date Range*:: A {ref}/search-aggregations-bucket-daterange-aggregation.html[_date range_] aggregation reports values
that are within a range of dates that you specify. You can specify the ranges for the dates using
{ref}/mapping-date-format.html#date-math[_date math_] expressions. Click *Add Range* to add a set of range endpoints.
Click the red *(/)* symbol to remove a range.
*IPv4 Range*:: The {ref}/search-aggregations-bucket-iprange-aggregation.html[_IPv4 range_] aggregation enables you to
specify ranges of IPv4 addresses. Click *Add Range* to add a set of range endpoints. Click the red *(/)* symbol to
remove a range.
*Terms*:: A {ref}/search-aggregations-bucket-terms-aggregation.html[_terms_] aggregation enables you to specify the top
or bottom _n_ elements of a given field to display, ordered by count or a custom metric.
*Filters*:: You can specify a set of {ref}/search-aggregations-bucket-filters-aggregation.html[_filters_] for the data.
@ -61,8 +71,11 @@ NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you
The availability of these options varies depending on the aggregation you choose.
Select the *Options* tab to change the following aspects of the table:
*Donut*:: Display the chart as a sliced ring instead of a sliced pie.
*Show Tooltip*:: Check this box to enable the display of tooltips.
*Show Legend*:: Check this box to enable the display of a legend next to the chart.
After changing options, click the green *Apply changes* button to update your visualization, or the grey *Discard
changes* button to keep your visualization in its current state.
@ -42,7 +42,11 @@ kibana_elasticsearch_password: kibana4-password
----
Kibana 4 users also need access to the `.kibana` index so they can save and load searches, visualizations, and dashboards.
For more information, see {shield}/_shield_with_kibana_4.html#kibana4-roles[Configuring Roles for Kibana 4 Users] in
the Shield documentation.
TIP: See <<kibana-dynamic-mapping, Kibana and Elasticsearch Dynamic Mapping>> for important information on Kibana and
the dynamic mapping feature in Elasticsearch.
[float]
[[enabling-ssl]]
@ -50,7 +54,8 @@ For more information, see {shield}/_shield_with_kibana_4.html#kibana4-roles[Conf
Kibana supports SSL encryption for both client requests and the requests the Kibana server
sends to Elasticsearch.
To encrypt communications between the browser and the Kibana server, you configure the `ssl_key_file` and
`ssl_cert_file` properties in `kibana.yml`:
[source,text]
----
@ -101,7 +106,8 @@ If you have multiple nodes in your Elasticsearch cluster, the easiest way to dis
across the nodes is to run an Elasticsearch _client_ node on the same machine as Kibana.
Elasticsearch client nodes are essentially smart load balancers that are part of the cluster. They
process incoming HTTP requests, redirect operations to the other nodes in the cluster as needed, and
gather and return the results. For more information, see
http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html[Node] in the Elasticsearch reference.
To use a local client node to load balance Kibana requests:
@ -128,4 +134,4 @@ cluster.name: "my_cluster"
--------
# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://localhost:9200"
--------
@ -156,6 +156,15 @@ To delete an index pattern:
. Click the pattern's *Delete* button.
. Confirm that you want to remove the index pattern.
[[managing-fields]]
=== Managing Fields
The fields for the index pattern are listed in a table. Click a column header to sort the table by that column. Click
the *Controls* button in the rightmost column for a given field to edit the field's properties. You can manually set
the field's format from the *Format* drop-down. Format options vary based on the field's type.
You can also set the field's popularity value in the *Popularity* text entry box to any desired value. Click the
*Update Field* button to confirm your changes or *Cancel* to return to the list of fields.
[float]
[[create-scripted-field]]
=== Creating a Scripted Field
@ -220,7 +229,6 @@ To delete a scripted field:
. Click the *Delete* button for the scripted field you want to remove.
. Confirm that you really want to delete the field.
[float]
[[advanced-options]]
=== Setting Advanced Options
The Advanced Settings page enables you to directly edit settings that control
@ -238,9 +246,12 @@ To set advanced options:
. Enter a new value for the option.
. Click the *Save* button.
[float]
[[managing-saved-objects]]
=== Managing Saved Searches, Visualizations, and Dashboards
You can view, edit, and delete saved searches, visualizations, and dashboards from *Settings > Objects*. You can also
export or import sets of searches, visualizations, and dashboards.
Viewing a saved object displays the selected item in the *Discover*, *Visualize*,
or *Dashboard* page. To view a saved object:
@ -282,11 +293,27 @@ To delete a saved object:
. Click the *Delete* button.
. Confirm that you really want to delete the object.
To export a set of objects:
. Go to *Settings > Objects*.
. Select the type of object you want to export. You can export a set of dashboards, searches, or visualizations.
. Click the selection box for the objects you want to export, or click the *Select All* box.
. Click *Export* to select a location to write the exported JSON.
To import a set of objects:
. Go to *Settings > Objects*.
. Click *Import* to navigate to the JSON file representing the set of objects to import.
. Click *Open* after selecting the JSON file.
. If any objects in the set would overwrite objects already present in Kibana, confirm the overwrite.
[[kibana-server-properties]]
=== Setting Kibana Server Properties
The Kibana server reads properties from the `kibana.yml` file on startup. The default
settings configure Kibana to run on `localhost:5601`. To change the host or port number, or
connect to Elasticsearch running on a different machine, you'll need to update your `kibana.yml` file. You can also
enable SSL and set a variety of other options.
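As a sketch, a minimal `kibana.yml` override might look like the following; the property names match the table below, and the host and URL values are placeholders:

```yaml
# Serve Kibana on all interfaces instead of localhost
host: "0.0.0.0"
port: 5601
# Point at a remote Elasticsearch node (placeholder address)
elasticsearch_url: "http://es-master.example.com:9200"
```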
.Kibana Server Properties
|===
@ -305,13 +332,15 @@ connect to Elasticsearch running on a different machine, you'll need to update y
|`elasticsearch_preserve_host`
|By default, the host specified in the incoming request from the browser is specified as the host in the
corresponding request Kibana sends to Elasticsearch. If you set this option to `false`, Kibana uses the host
specified in `elasticsearch_url`. You probably don't need to worry about this setting--just use the default.
Default: `elasticsearch_preserve_host: true`.
|`kibana_index`
|The name of the index where saved searches, visualizations, and dashboards will be stored. Default: `kibana_index: .kibana`.
|`default_app_id`
|The page that will be displayed when you launch Kibana: `discover`, `visualize`, `dashboard`, or `settings`. Default:
`default_app_id: "discover"`.
|`request_timeout`
|How long to wait for responses from the Kibana backend or Elasticsearch, in milliseconds. Default: `request_timeout: 500000`.
@ -320,7 +349,8 @@ specified in `elasticsearch_url`. You probably don't need to worry about this se
|How long Elasticsearch should wait for responses from shards. Set to 0 to disable. Default: `shard_timeout: 0`.
|`verify_ssl`
|Indicates whether or not to validate the Elasticsearch SSL certificate. Set to false to disable SSL verification.
Default: `verify_ssl: true`.
|`ca`
|The path to the CA certificate for your Elasticsearch instance. Specify if you are using a self-signed certificate
@ -333,6 +363,11 @@ so the certificate can be verified. (Otherwise, you have to disable `verify_ssl`
|The path to your Kibana server's certificate file. Must be set to encrypt communications between the browser and Kibana. Default: none.
|`pid_file`
|The location where you want to store the process ID file. If not specified, the PID file is stored in
`/var/run/kibana.pid`. Default: none.
|`log_file`
|The location where you want to store Kibana's log output. If not specified, log output is written to standard
output and not stored. Specifying a log file suppresses log writes to standard output. Default: none.
|===
@ -9,25 +9,63 @@ All you need is:
** URL of the Elasticsearch instance you want to connect to.
** Which Elasticsearch indices you want to search.
NOTE: If your Elasticsearch installation is protected by http://www.elastic.co/overview/shield/[Shield], see
{shield}/_shield_with_kibana_4.html[Shield with Kibana 4] for additional setup instructions.
[float]
[[install]]
=== Install and Start Kibana
To get Kibana up and running:
. Download the https://www.elastic.co/downloads/kibana[Kibana 4 binary package] for your platform.
. Extract the `.zip` or `tar.gz` archive file.
// On Unix, you can instead run the package manager suited for your distribution.
//
// [float]
// include::kibana-repositories.asciidoc[]
//
After installing, run Kibana from the install directory: `bin/kibana` (Linux/MacOSX) or `bin\kibana.bat` (Windows).
That's it! Kibana is now running on port 5601.
[float]
[[kibana-dynamic-mapping]]
==== Kibana and Elasticsearch Dynamic Mapping
By default, Elasticsearch enables {ref}/mapping-dynamic-mapping.html[dynamic mapping] for fields. Kibana needs dynamic mapping
to use fields in visualizations correctly, as well as to manage the `.kibana` index where saved searches,
visualizations, and dashboards are stored.
If your Elasticsearch use case requires you to disable dynamic mapping, you need to manually provide mappings for
fields that Kibana uses to create visualizations. You also need to manually enable dynamic mapping for the `.kibana`
index.
The following procedure assumes that the `.kibana` index does not already exist in Elasticsearch and that the
`index.mapper.dynamic` setting in `elasticsearch.yml` is set to `false`:
. Start Elasticsearch.
. Create the `.kibana` index with dynamic mapping enabled just for that index:
+
[source,shell]
PUT .kibana
{
"index.mapper.dynamic": true
}
+
. Start Kibana and navigate to the web UI and verify that there are no error messages related to dynamic mapping.
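To double-check the result of step 2, you can also inspect the index settings directly; this assumes Elasticsearch is listening on its default `localhost:9200` address:

```sh
# Should report index.mapper.dynamic (possibly nested) as "true" for .kibana
curl -s "localhost:9200/.kibana/_settings?pretty"
```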
[float]
[[connect]]
=== Connect Kibana with Elasticsearch
Before you can start using Kibana, you need to tell it which Elasticsearch indices you want to explore. The first time
you access Kibana, you are prompted to define an _index pattern_ that matches the name of one or more of your indices.
That's it. That's all you need to configure to start using Kibana. You can add index patterns at any time from the
<<settings-create-pattern,Settings tab>>.
TIP: By default, Kibana connects to the Elasticsearch instance running on `localhost`. To connect to a different
Elasticsearch instance, modify the Elasticsearch URL in the `kibana.yml` configuration file and restart Kibana. For
information about using Kibana with your production nodes, see <<production>>.
To configure the Elasticsearch indices you want to access with Kibana:
@ -35,18 +73,32 @@ To configure the Elasticsearch indices you want to access with Kibana:
+
image:images/Start-Page.jpg[Kibana start page]
+
. Specify an index pattern that matches the name of one or more of your Elasticsearch indices. By default, Kibana
guesses that you're working with data being fed into Elasticsearch by Logstash. If that's the case, you can use the
default `logstash-*` as your index pattern. The asterisk (*) matches zero or more characters in an index's name. If
your Elasticsearch indices follow some other naming convention, enter an appropriate pattern. The "pattern" can also
simply be the name of a single index.
. Select the index field that contains the timestamp that you want to use to perform time-based comparisons. Kibana
reads the index mapping to list all of the fields that contain a timestamp. If your index doesn't have time-based data,
disable the *Index contains time-based events* option.
. If new indices are generated periodically and have a timestamp appended to the name, select the *Use event times to
create index names* option and select the *Index pattern interval*. This improves search performance by enabling Kibana
to search only those indices that could contain data in the time range you specify. This is primarily applicable if you
are using Logstash to feed data into Elasticsearch.
. Click *Create* to add the index pattern. This first pattern is automatically configured as the default.
When you have more than one index pattern, you can designate which one to use as the default from *Settings > Indices*.
Voila! Kibana is now connected to your Elasticsearch data. Kibana displays a read-only list of fields configured for
the matching index.
[float]
[[explore]]
=== Start Exploring your Data!
You're ready to dive in to your data:
* Search and browse your data interactively from the <<discover, Discover>> page.
* Chart and map your data from the <<visualize, Visualize>> page.
* Create and view custom dashboards from the <<dashboard, Dashboard>> page.
For a brief tutorial that explores these core Kibana concepts, take a look at the <<getting-started, Getting
Started>> page.
@ -25,7 +25,13 @@ Before you choose a buckets aggregation, specify if you are splitting the chart
Coordinates* on a single chart. A multiple chart split must run before any other aggregations.
Tile maps use the *Geohash* aggregation as their initial aggregation. Select a field, typically coordinates, from the
drop-down. The *Precision* slider determines the granularity of the results displayed on the map. See the documentation
for the {ref}/search-aggregations-bucket-geohashgrid-aggregation.html#_cell_dimensions_at_the_equator[geohash grid]
aggregation for details on the area specified by each precision level. As of the 4.1 release, Kibana supports a maximum
geohash length of 7.
NOTE: Higher precisions increase memory usage for the browser displaying Kibana as well as for the underlying
Elasticsearch cluster.
Once you've specified a buckets aggregation, you can define sub-aggregations to refine the visualization. Tile maps
only support sub-aggregations as split charts. Click *+ Add Sub Aggregation*, then *Split Chart* to select a
@ -33,13 +39,25 @@ sub-aggregation from the list of types:
*Date Histogram*:: A {ref}/search-aggregations-bucket-datehistogram-aggregation.html[_date histogram_] is built from a
numeric field and organized by date. You can specify a time frame for the intervals in seconds, minutes, hours, days,
weeks, months, or years. You can also specify a custom interval frame by selecting *Custom* as the interval and
specifying a number and a time unit in the text field. Custom interval time units are *s* for seconds, *m* for minutes,
*h* for hours, *d* for days, *w* for weeks, and *y* for years. Different units support different levels of precision,
down to one second.
*Histogram*:: A standard {ref}/search-aggregations-bucket-histogram-aggregation.html[_histogram_] is built from a
numeric field. Specify an integer interval for this field. Select the *Show empty buckets* checkbox to include empty
intervals in the histogram.
*Range*:: With a {ref}/search-aggregations-bucket-range-aggregation.html[_range_] aggregation, you can specify ranges
of values for a numeric field. Click *Add Range* to add a set of range endpoints. Click the red *(x)* symbol to remove
a range.
*Date Range*:: A {ref}/search-aggregations-bucket-daterange-aggregation.html[_date range_] aggregation reports values
that are within a range of dates that you specify. You can specify the ranges for the dates using
{ref}/mapping-date-format.html#date-math[_date math_] expressions. Click *Add Range* to add a set of range endpoints.
Click the red *(/)* symbol to remove a range.
*IPv4 Range*:: The {ref}/search-aggregations-bucket-iprange-aggregation.html[_IPv4 range_] aggregation enables you to
specify ranges of IPv4 addresses. Click *Add Range* to add a set of range endpoints. Click the red *(/)* symbol to
remove a range.
*Terms*:: A {ref}/search-aggregations-bucket-terms-aggregation.html[_terms_] aggregation enables you to specify the top
or bottom _n_ elements of a given field to display, ordered by count or a custom metric.
*Filters*:: You can specify a set of {ref}/search-aggregations-bucket-filters-aggregation.html[_filters_] for the data.
@ -51,6 +69,8 @@ add another filter.
*Geohash*:: The {ref}/search-aggregations-bucket-geohashgrid-aggregation.html[_geohash_] aggregation displays points
based on the geohash coordinates.
NOTE: By default, the *Change precision on map zoom* box is checked. Uncheck the box to disable this behavior.
You can click the *Advanced* link to display more customization options for your metrics or bucket aggregation:
*Exclude Pattern*:: Specify a pattern in this field to exclude from the results.
@ -68,8 +88,37 @@ NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you
The availability of these options varies depending on the aggregation you choose.
Select the *Options* tab to change the following aspects of the chart:
*Map type*:: Select one of the following options from the drop-down.
*_Scaled Circle Markers_*:: Scale the size of the markers based on the metric aggregation's value.
*_Shaded Circle Markers_*:: Displays the markers with different shades based on the metric aggregation's value.
*_Shaded Geohash Grid_*:: Displays the rectangular cells of the geohash grid instead of circular markers, with different
shades based on the metric aggregation's value.
*_Heatmap_*:: A heat map applies blurring to the circle markers and applies shading based on the amount of overlap.
Heatmaps have the following options:
* *Radius*: Sets the size of the individual heatmap dots.
* *Blur*: Sets the amount of blurring for the heatmap dots.
* *Maximum zoom*: Tilemaps in Kibana support 18 zoom levels. This slider defines the maximum zoom level at which the
heatmap dots appear at full intensity.
* *Minimum opacity*: Sets the opacity cutoff for the dots.
* *Show Tooltip*: Check this box to have a tooltip with the values for a given dot when the cursor is on that dot.
*Desaturate map tiles*:: Desaturate the map's color in order to make the markers stand out more clearly.
After changing options, click the green *Apply changes* button to update your visualization, or the grey *Discard
changes* button to keep your visualization in its current state.
[float]
[[navigating-map]]
==== Navigating the Map
Once your tilemap visualization is ready, you can explore the map in several ways:
* Click and hold anywhere on the map and move the cursor to move the map center. Hold Shift and drag a bounding box
across the map to zoom in on the selection.
* Click the *Zoom In/Out* image:images/viz-zoom.png[] buttons to change the zoom level manually.
* Click the *Fit Data Bounds* image:images/viz-fit-bounds.png[] button to automatically crop the map boundaries to the
geohash buckets that have at least one result.
* Click the *Latitude/Longitude Filter* image:images/viz-lat-long-filter.png[] button, then drag a bounding box across the
map to create a filter for the box coordinates.
@ -30,7 +30,7 @@ NOTE: In Elasticsearch releases 1.4.3 and later, this functionality requires you
The availability of these options varies depending on the aggregation you choose.
Select the *Options* tab to change the following aspects of the table:
*Bar Mode*:: When you have multiple Y-axis aggregations defined for your chart, you can use this drop-down to affect
how the aggregations display on the chart:
@ -44,4 +44,4 @@ Checkboxes are available to enable and disable the following behaviors:
*Show Tooltip*:: Check this box to enable the display of tooltips.
*Show Legend*:: Check this box to enable the display of a legend next to the chart.
*Scale Y-Axis to Data Bounds*:: The default Y axis bounds are zero and the maximum value returned in the data. Check
this box to change both upper and lower bounds to match the values returned in the data.
@ -1,20 +0,0 @@
[[vizconf]]
== Visualization Configuration
This section deals with the configuration options for visualizations in Kibana.
include::area.asciidoc[]
include::datatable.asciidoc[]
include::line.asciidoc[]
include::markdown.asciidoc[]
include::metric.asciidoc[]
include::pie.asciidoc[]
include::tilemap.asciidoc[]
include::vertbar.asciidoc[]
@ -53,8 +53,6 @@ You can choose a new or saved search to serve as the data source for your visual
an index or a set of indexes. When you select _new search_ on a system with multiple indices configured, select an
index pattern from the drop-down to bring up the visualization editor.
// How is this drop-down populated? Is it just a list of all indices in the cluster? Can I configure the contents?
When you create a visualization from a saved search and save the visualization, the search is tied to the visualization.
When you make changes to the search that is linked to the visualization, the visualization updates automatically.
@ -71,6 +69,10 @@ main elements:
image:images/VizEditor.jpg[]
[float]
[[viz-autorefresh]]
include::autorefresh.asciidoc[]
[float]
[[toolbar-panel]]
===== Toolbar
@ -91,7 +93,7 @@ Use the aggregation builder on the left of the page to configure the {ref}/searc
visualization. Buckets are analogous to SQL `GROUP BY` statements. For more information on aggregations, see the main
{ref}/search-aggregations.html[Elasticsearch aggregations reference].
In bar or line chart visualizations, use _metrics_ for the y-axis and _buckets_ are used for the x-axis, segment bar
Bar, line, or area chart visualizations use _metrics_ for the y-axis and _buckets_ for the x-axis, segment bar
colors, and row/column splits. For pie charts, use the metric for the slice size and the bucket for the number of
slices.
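The SQL `GROUP BY` analogy can be made concrete with a request body. This is a hypothetical example (the `extension` and `bytes` field names are invented for illustration, not taken from the text): the bucket aggregation plays the role of `GROUP BY extension`, and the metric is computed inside each group.

```javascript
// Hypothetical Elasticsearch request body: one bucket aggregation
// (terms on `extension`, like SQL GROUP BY extension) with one
// metric aggregation (avg of `bytes`) nested inside each bucket.
var requestBody = {
  size: 0,
  aggs: {
    per_extension: {              // bucket: one group per extension value
      terms: { field: 'extension' },
      aggs: {
        avg_bytes: {              // metric: computed within each bucket
          avg: { field: 'bytes' }
        }
      }
    }
  }
};

console.log(JSON.stringify(requestBody, null, 2));
```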
@ -126,7 +128,12 @@ inside each bucket, which in this example is a one-hour interval.
NOTE: Remember, each subsequent bucket slices the data from the previous bucket.
To render the visualization on the _preview canvas_, click the *Apply* button at the bottom of the aggregation builder.
To render the visualization on the _preview canvas_, click the green *Apply Changes* button at the top right of the
Aggregation Builder.
[float]
[[visualize-filters]]
include::filter-pinning.asciidoc[]
[float]
[[preview-canvas]]
@ -149,4 +156,4 @@ include::pie.asciidoc[]
include::tilemap.asciidoc[]
include::vertbar.asciidoc[]
include::vertbar.asciidoc[]

View file

@ -1,58 +1,35 @@
[[whats-new]]
== What's New in Kibana 4
Kibana 4 provides dozens of new features that enable you to compose questions,
get answers, and solve problems like never before. It has a brand-new look and
feel and improved workflows for discovering and visualizing your data and
building and sharing dashboards.
== What's New in Kibana 4.1
[float]
[[key-features]]
=== Key Features
* New data search and discovery interface
* Unified visualization builder for your favorite visualizations and some brand
new ones:
** Area Chart
** Data Table
** Line Chart
** Markdown Text Widget
** Pie Chart (including "doughnut" charts)
** Raw Document Widget
** Single Metric Widget
** Tile Map
** Vertical Bar Chart
* Drag and drop dashboard builder that enables you to quickly add, rearrange,
resize, and remove visualizations
* Advanced aggregation-based analytics capabilities, including support for:
** Unique counts (cardinality)
** Non-date histograms
** Ranges
** Significant terms
** Percentiles
* Expressions-based scripted fields enable you to perform ad-hoc analysis by
performing computations on the fly
* {k4pull}2518[Pull Request 2518]: You can pin filters to make the filter persist across Kibana functionality, from
the Visualize tab to Discover to a Dashboard.
* {k4pull}2731[Pull Request 2731]: Field formatting options now supported in Settings.
* {k4pull}3154[Pull Request 3154]: New chart: Bubble chart, derived from the basic line chart.
* {k4pull}3212[Pull Request 3212]: You can now install Kibana on Linux with a package manager such as `yum` or
`apt-get`.
* {k4pull}3271[Pull Request 3271] and {k4pull}3262[3262]: New aggregations: IPv4 and Date range aggregations enable
you to specify buckets for these qualities.
* {k4pull}3290[Pull Request 3290]: You can select a time interval for the Discover display of time series data.
* {k4pull}3470[Pull Request 3470]: New metric: Percentile ranks.
* {k4pull}3573[Pull Request 3573]: Kibana objects (visualizations, dashboards, and searches) can be imported and
exported.
* {k4pull}3830[Pull Request 3830]: New chart: Heatmap, a tile map display variant.
[float]
[[improvements]]
=== Improvements
* Ability to save searches and visualizations enables you to link
searches to visualizations and add the same visualization to multiple dashboards
* Visualizations support an unlimited number of nested aggregations so you can
display new types of visualizations, such as "doughnut" charts
* New URL format eliminates the need for templated and scripted dashboards
* Better mobile experience
* Faster dashboard loading due to a reduction in the number of HTTP calls needed to load the page
* SSL encryption for client requests as well as requests to and from Elasticsearch
* Search result highlighting
* Easy to access and export the data behind any visualization:
** View in a table or view as JSON
** Export in CSV format
** See the Elasticsearch request and response
* Share and embed individual visualizations as well as dashboards
[float]
[[nuts-bolts]]
=== Nuts and Bolts
* Ships with its own webserver and uses Node.js on the backend--installation
binaries are provided for Linux, Windows, and Mac OS
* Uses the D3 framework to display visualizations
* {k4pull}3164[Pull Request 3164]: You can now store a specific time range with a dashboard.
* {k4pull}3233[Pull Request 3233]: New Y axis scale options, log scale and square root scale.
* {k4pull}3237[Pull Request 3237]: Date Histogram bucket aggregation now supports custom intervals, from seconds to
years.
* {k4pull}3273[Pull Request 3273]: Line smoothing for line and area charts.
* {k4pull}3464[Pull Request 3464]: You can now specify the extent of the Y axis for charts.
* {k4pull}3526[Pull Request 3526]: You can add columns to Discover's list of results directly from an entry's table of
fields.
* {k4pull}3671[Pull Request 3671]: Tile maps now support latitude/longitude filtering.
* {k4pull}3800[Pull Request 3800]: You can now pause auto-refresh on a dashboard.

View file

@ -6,13 +6,24 @@ Elasticsearch documentation for that aggregation.
*Date Histogram*:: A {ref}/search-aggregations-bucket-datehistogram-aggregation.html[_date histogram_] is built from a
numeric field and organized by date. You can specify a time frame for the intervals in seconds, minutes, hours, days,
weeks, months, or years.
weeks, months, or years. You can also specify a custom interval frame by selecting *Custom* as the interval and
specifying a number and a time unit in the text field. Custom interval time units are *s* for seconds, *m* for minutes,
*h* for hours, *d* for days, *w* for weeks, and *y* for years. Different units support different levels of precision,
down to one second.
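The accepted custom-interval format above (a number followed by one of the unit letters) can be sketched as a small parser. The parser itself is an illustration, not Kibana's implementation; only the unit letters come from the docs.

```javascript
// Sketch: split a custom interval such as "30s" or "2w" into its
// numeric value and unit letter. Accepted units per the docs above:
// s, m, h, d, w, y.
function parseCustomInterval(text) {
  var match = /^(\d+)\s*(s|m|h|d|w|y)$/.exec(String(text).trim());
  if (!match) return null;                 // reject anything else
  return { value: Number(match[1]), unit: match[2] };
}

console.log(parseCustomInterval('30s')); // { value: 30, unit: 's' }
console.log(parseCustomInterval('2w'));  // { value: 2, unit: 'w' }
```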
*Histogram*:: A standard {ref}/search-aggregations-bucket-histogram-aggregation.html[_histogram_] is built from a
numeric field. Specify an integer interval for this field. Select the *Show empty buckets* checkbox to include empty
intervals in the histogram.
*Range*:: With a {ref}/search-aggregations-bucket-range-aggregation.html[_range_] aggregation, you can specify ranges
of values for a numeric field. Click *Add Range* to add a set of range endpoints. Click the red *(x)* symbol to remove
a range.
*Date Range*:: A {ref}/search-aggregations-bucket-daterange-aggregation.html[_date range_] aggregation reports values
that are within a range of dates that you specify. You can specify the ranges for the dates using
{ref}/mapping-date-format.html#date-math[_date math_] expressions. Click *Add Range* to add a set of range endpoints.
Click the red *(x)* symbol to remove a range.
*IPv4 Range*:: The {ref}/search-aggregations-bucket-iprange-aggregation.html[_IPv4 range_] aggregation enables you to
specify ranges of IPv4 addresses. Click *Add Range* to add a set of range endpoints. Click the red *(x)* symbol to
remove a range.
*Terms*:: A {ref}/search-aggregations-bucket-terms-aggregation.html[_terms_] aggregation enables you to specify the top
or bottom _n_ elements of a given field to display, ordered by count or a custom metric.
*Filters*:: You can specify a set of {ref}/search-aggregations-bucket-filters-aggregation.html[_filters_] for the data.
@ -26,4 +37,4 @@ Sub Aggregation* to define a sub-aggregation, then choose *Split Area* or *Split
from the list of types.
When multiple aggregations are defined on a chart's axis, you can use the up or down arrows to the right of the
aggregation's type to change the aggregation's priority.
aggregation's type to change the aggregation's priority.

View file

@ -12,7 +12,13 @@ numeric field. Select a field from the drop-down.
the number of unique values in a field. Select a field from the drop-down.
*Standard Deviation*:: The {ref}/search-aggregations-metrics-extendedstats-aggregation.html[_extended stats_]
aggregation returns the standard deviation of data in a numeric field. Select a field from the drop-down.
*Percentile*:: The {ref}/search-aggregations-metrics-percentile-rank-aggregation.html[_percentile_] aggregation returns
the percentile rank of values in a numeric field. Select a field from the drop-down, then specify a range in the *Percentiles* fields. Click the *X* to remove a percentile field. Click *+ Add Percent* to add a percentile field.
*Percentiles*:: The {ref}/search-aggregations-metrics-percentile-aggregation.html[_percentile_] aggregation divides the
values in a numeric field into percentile bands that you specify. Select a field from the drop-down, then specify one
or more ranges in the *Percentiles* fields. Click the *X* to remove a percentile field. Click *+ Add* to add a
percentile field.
*Percentile Rank*:: The {ref}/search-aggregations-metrics-percentile-rank-aggregation.html[_percentile ranks_]
aggregation returns the percentile rankings for the values in the numeric field you specify. Select a numeric field
from the drop-down, then specify one or more percentile rank values in the *Values* fields. Click the *X* to remove a
values field. Click *+Add* to add a values field.
You can add an aggregation by clicking the *+ Add Aggregation* button.
You can add an aggregation by clicking the *+ Add Aggregation* button.
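The distinction between the two metrics described above can be illustrated with hypothetical request bodies (the `load_time` field name is an assumption). Percentiles asks "what value sits at the 95th percentile?"; percentile ranks asks the inverse, "what percentile does the value 500 sit at?".

```javascript
// Percentiles: divide load_time values into the specified bands.
var percentiles = {
  aggs: {
    load_pct: {
      percentiles: { field: 'load_time', percents: [50, 95] }
    }
  }
};

// Percentile ranks: report the ranking of the specified values.
var percentileRanks = {
  aggs: {
    load_rank: {
      percentile_ranks: { field: 'load_time', values: [500] }
    }
  }
};
```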

View file

@ -11,13 +11,13 @@
"dashboarding"
],
"private": false,
"version": "4.1.0-snapshot",
"version": "4.2.0-snapshot",
"main": "src/server/index.js",
"homepage": "https://www.elastic.co/products/kibana",
"bugs": {
"url": "http://github.com/elastic/kibana/issues"
},
"license": "Apache 2.0",
"license": "Apache-2.0",
"author": "Rashid Khan <rashid.khan@elastic.co>",
"contributors": [
"Spencer Alger <spencer.alger@elastic.co>",
@ -30,8 +30,8 @@
],
"scripts": {
"test": "grunt test",
"start": "node ./src/server/index.js",
"server": "node ./src/server/bin/kibana.js",
"start": "node ./src/server/bin/kibana.js",
"postinstall": "bower install && grunt licenses --check-validity",
"precommit": "grunt lintStagedFiles"
},
"repository": {
@ -40,82 +40,82 @@
},
"dependencies": {
"ansicolors": "^0.3.2",
"bluebird": "~2.0.7",
"body-parser": "~1.10.1",
"bluebird": "^2.9.27",
"body-parser": "^1.10.1",
"bunyan": "^1.2.3",
"commander": "^2.6.0",
"compression": "^1.3.0",
"cookie-parser": "~1.3.3",
"debug": "~2.1.1",
"elasticsearch": "^3.1.1",
"express": "~4.10.6",
"cookie-parser": "^1.3.3",
"debug": "^2.1.1",
"elasticsearch": "^5.0.0",
"express": "^4.10.6",
"glob": "^4.3.2",
"good": "^5.1.2",
"good-console": "^4.1.0",
"good-file": "^4.0.2",
"good-reporter": "^3.0.1",
"hapi": "^8.4.0",
"good-reporter": "^3.1.0",
"hapi": "^8.6.1",
"http-auth": "^2.2.5",
"jade": "~1.8.2",
"joi": "^6.2.0",
"joi": "^6.4.3",
"js-yaml": "^3.2.5",
"json-stringify-safe": "^5.0.0",
"less-middleware": "1.0.x",
"lodash": "^2.4.1",
"lodash-deep": "^1.6.0",
"moment": "^2.9.0",
"morgan": "~1.5.1",
"lodash": "^3.9.3",
"json-stringify-safe": "^5.0.1",
"moment": "^2.10.3",
"numeral": "^1.5.3",
"request": "^2.40.0",
"requirefrom": "^0.2.0",
"semver": "^4.2.0",
"serve-favicon": "~2.2.0",
"semver": "^4.3.6",
"serve-favicon": "^2.2.0",
"through": "^2.3.6"
},
"devDependencies": {
"connect": "~2.19.5",
"event-stream": "~3.1.5",
"expect.js": "~0.3.1",
"glob": "~4.1.3",
"grunt": "~0.4.5",
"grunt-contrib-clean": "~0.5.0",
"grunt-contrib-compress": "~0.9.1",
"grunt-contrib-copy": "~0.5.0",
"grunt-contrib-jade": "~0.10.0",
"bower": "^1.4.1",
"bower-license": "^0.2.6",
"event-stream": "^3.1.5",
"expect.js": "^0.3.1",
"grunt": "^0.4.5",
"grunt-cli": "0.1.13",
"grunt-contrib-clean": "^0.6.0",
"grunt-contrib-compress": "^0.13.0",
"grunt-contrib-copy": "^0.8.0",
"grunt-contrib-jade": "^0.14.0",
"grunt-contrib-jshint": "^0.11",
"grunt-contrib-less": "~0.10.0",
"grunt-contrib-requirejs": "~0.4.4",
"grunt-contrib-watch": "~0.5.3",
"grunt-esvm": "~0.3.2",
"grunt-jscs": "git://github.com/spalger/grunt-jscs.git#addFix",
"grunt-mocha": "~0.4.10",
"grunt-contrib-less": "^1.0.1",
"grunt-contrib-requirejs": "^0.4.4",
"grunt-contrib-watch": "^0.6.1",
"grunt-esvm": "^1.0.1",
"grunt-jscs": "^1.8.0",
"grunt-mocha": "^0.4.10",
"grunt-replace": "^0.7.9",
"grunt-run": "^0.2.3",
"grunt-s3": "~0.2.0-alpha.3",
"grunt-saucelabs": "~8.3.2",
"grunt-run": "^0.3.0",
"grunt-s3": "^0.2.0-alpha.3",
"grunt-simple-mocha": "^0.4.0",
"html-entities": "^1.1.1",
"http-proxy": "~1.8.1",
"husky": "~0.6.0",
"istanbul": "~0.2.4",
"libesvm": "0.0.12",
"load-grunt-config": "~0.7.0",
"lodash": "~2.4.1",
"marked": "^0.3.2",
"http-proxy": "^1.8.1",
"husky": "^0.8.1",
"istanbul": "^0.3.15",
"jade": "^1.8.2",
"license-checker": "3.0.3",
"libesvm": "^1.0.1",
"load-grunt-config": "^0.7.0",
"marked": "^0.3.3",
"marked-text-renderer": "^0.1.0",
"mkdirp": "^0.5.0",
"mocha": "~1.20.1",
"mocha-screencast-reporter": "~0.1.4",
"mocha": "^2.2.5",
"nock": "^1.6.0",
"opn": "~1.0.0",
"npm": "^2.11.0",
"opn": "^1.0.0",
"path-browserify": "0.0.0",
"portscanner": "^1.0.0",
"progress": "^1.1.8",
"requirejs": "~2.1.14",
"requirejs": "^2.1.14",
"rjs-build-analysis": "0.0.3",
"simple-git": "^0.11.0",
"simple-git": "^1.3.0",
"sinon": "^1.12.2",
"sinon-as-promised": "^2.0.3",
"tar": "^1.0.1"
"tar": "^2.1.1"
},
"engines": {
"node": "~0.10 || ~0.12",
"iojs": ">=1.5"
}
}

View file

@ -1,51 +0,0 @@
define(function (require) {
var decodeGeoHash = require('utils/decode_geo_hash');
var _ = require('lodash');
function readRows(table, agg, index, chart) {
var geoJson = chart.geoJson;
var props = geoJson.properties;
var metricLabel = agg.metric.makeLabel();
props.length = table.rows.length;
props.min = null;
props.max = null;
props.agg = agg;
table.rows.forEach(function (row) {
var geohash = row[index.geo].value;
var valResult = row[index.metric];
var val = valResult.value;
if (props.min === null || val < props.min) props.min = val;
if (props.max === null || val > props.max) props.max = val;
var location = decodeGeoHash(geohash);
var center = [location.longitude[2], location.latitude[2]];
var rectangle = [
[location.longitude[0], location.latitude[0]],
[location.longitude[1], location.latitude[0]],
[location.longitude[1], location.latitude[1]],
[location.longitude[0], location.latitude[1]]
];
geoJson.features.push({
type: 'Feature',
geometry: {
type: 'Point',
coordinates: center
},
properties: {
valueLabel: metricLabel,
count: val,
geohash: geohash,
center: center,
aggConfigResult: valResult,
rectangle: rectangle
}
});
});
}
return readRows;
});

View file

@ -0,0 +1,8 @@
<table>
<tbody>
<tr ng-repeat="detail in details" >
<td><b>{{detail.label}}</b></td>
<td>{{detail.value}}</td>
</tr>
</tbody>
</table>

View file

@ -0,0 +1,41 @@
define(function (require) {
return function TileMapTooltipFormatter($compile, $rootScope, Private) {
var $ = require('jquery');
var _ = require('lodash');
var fieldFormats = Private(require('registry/field_formats'));
var $tooltipScope = $rootScope.$new();
var $el = $('<div>').html(require('text!components/agg_response/geo_json/_tooltip.html'));
$compile($el)($tooltipScope);
return function tooltipFormatter(feature) {
if (!feature) return '';
var value = feature.properties.value;
var acr = feature.properties.aggConfigResult;
var vis = acr.aggConfig.vis;
var metricAgg = acr.aggConfig;
var geoFormat = _.get(vis.aggs, 'byTypeName.geohash_grid[0].format');
if (!geoFormat) geoFormat = fieldFormats.getDefaultInstance('geo_point');
$tooltipScope.details = [
{
label: metricAgg.makeLabel(),
value: metricAgg.fieldFormatter()(value)
},
{
label: 'Center',
value: geoFormat.convert({
lat: feature.geometry.coordinates[1],
lon: feature.geometry.coordinates[0]
})
}
];
$tooltipScope.$apply();
return $el.html();
};
};
});

View file

@ -2,55 +2,43 @@ define(function (require) {
return function TileMapConverterFn(Private, timefilter, $compile, $rootScope) {
var _ = require('lodash');
var readRows = require('components/agg_response/geo_json/_read_rows');
function findCol(table, name) {
return _.findIndex(table.columns, function (col) {
return col.aggConfig.schema.name === name;
});
}
var rowsToFeatures = require('components/agg_response/geo_json/rowsToFeatures');
var tooltipFormatter = Private(require('components/agg_response/geo_json/_tooltip_formatter'));
function createGeoJson(vis, table) {
var index = {
geo: findCol(table, 'segment'),
metric: findCol(table, 'metric')
};
return function (vis, table) {
var col = {
geo: table.columns[index.geo],
metric: table.columns[index.metric],
};
var agg = _.mapValues(col, function (col) {
return col && col.aggConfig;
});
var chart = {};
var geoJson = chart.geoJson = {
type: 'FeatureCollection',
features: []
};
var props = geoJson.properties = {
label: table.title(),
length: 0,
min: 0,
max: 0
};
// set precision from the bucketting column, if we have one
if (agg.geo) {
props.precision = _.parseInt(agg.geo.params.precision);
function columnIndex(schema) {
return _.findIndex(table.columns, function (col) {
return col.aggConfig.schema.name === schema;
});
}
// we're all done if there are no columns
if (!col.geo || !col.metric || !table.rows.length) return chart;
var geoI = columnIndex('segment');
var metricI = columnIndex('metric');
var geoAgg = _.get(table.columns, [geoI, 'aggConfig']);
var metricAgg = _.get(table.columns, [metricI, 'aggConfig']);
// read the rows into the geoJson features list
readRows(table, agg, index, chart);
var features = rowsToFeatures(table, geoI, metricI);
var values = features.map(function (feature) {
return feature.properties.value;
});
return chart;
}
return createGeoJson;
return {
title: table.title(),
valueFormatter: metricAgg && metricAgg.fieldFormatter(),
tooltipFormatter: tooltipFormatter,
geohashGridAgg: geoAgg,
geoJson: {
type: 'FeatureCollection',
features: features,
properties: {
min: _.min(values),
max: _.max(values),
zoom: _.get(geoAgg, 'params.mapZoom'),
center: _.get(geoAgg, 'params.mapCenter')
}
}
};
};
};
});

View file

@ -0,0 +1,50 @@
define(function (require) {
var decodeGeoHash = require('utils/decode_geo_hash');
var AggConfigResult = require('components/vis/_agg_config_result');
var _ = require('lodash');
function getAcr(val) {
return val instanceof AggConfigResult ? val : null;
}
function unwrap(val) {
return getAcr(val) ? val.value : val;
}
function convertRowsToFeatures(table, geoI, metricI) {
return _.transform(table.rows, function (features, row) {
var geohash = unwrap(row[geoI]);
if (!geohash) return;
var location = decodeGeoHash(geohash);
var center = [
location.longitude[2],
location.latitude[2]
];
var rectangle = [
[location.longitude[0], location.latitude[0]],
[location.longitude[1], location.latitude[0]],
[location.longitude[1], location.latitude[1]],
[location.longitude[0], location.latitude[1]]
];
features.push({
type: 'Feature',
geometry: {
type: 'Point',
coordinates: center
},
properties: {
geohash: geohash,
value: unwrap(row[metricI]),
aggConfigResult: getAcr(row[metricI]),
center: center,
rectangle: rectangle
}
});
}, []);
}
return convertRowsToFeatures;
});
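The rectangle construction in `convertRowsToFeatures` above relies on `decodeGeoHash` returning `[min, max, center]` triples for latitude and longitude; the four corners are the min/max combinations. A sketch with an invented `location` object standing in for a real decode result:

```javascript
// Stand-in for a decodeGeoHash result: [min, max, center] per axis.
// The numbers are invented for illustration.
var location = {
  latitude: [45.0, 45.04, 45.02],
  longitude: [-93.3, -93.25, -93.275]
};

// Corners follow the same [lon, lat] ordering as the source code.
var rectangle = [
  [location.longitude[0], location.latitude[0]], // SW corner
  [location.longitude[1], location.latitude[0]], // SE corner
  [location.longitude[1], location.latitude[1]], // NE corner
  [location.longitude[0], location.latitude[1]]  // NW corner
];

console.log(rectangle);
```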

View file

@ -13,19 +13,12 @@ define(function () {
|| (col && col.label)
|| ('level ' + item.depth);
// Set the bucket name, and use the converter to format the field if
// the field exists.
var bucket = item.name;
if (col) {
bucket = col.fieldFormatter()(bucket);
}
// Add the row to the tooltipScope.rows
memo.unshift({
aggConfig: col,
depth: depth,
field: field,
bucket: bucket,
bucket: item.name,
metric: item.size,
item: item
});

View file

@ -4,9 +4,9 @@ define(function (require) {
var nextChildren = _.pluck(children, 'children');
var keys = _.pluck(children, 'name');
return _(nextChildren)
.map(collectKeys)
.flatten()
.union(keys)
.value();
.map(collectKeys)
.flattenDeep()
.union(keys)
.value();
};
});
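The `_.flatten` → `_.flattenDeep` change above tracks lodash 3.x, where `_.flatten` became single-level by default and `_.flattenDeep` restores the old recursive behavior. Native `Array.prototype.flat` behaves the same way and makes the difference easy to see:

```javascript
// In lodash 2.x, _.flatten recursed all the way down; in 3.x it
// flattens one level and _.flattenDeep goes all the way. Native
// Array.prototype.flat mirrors this: depth 1 vs Infinity.
var nested = [1, [2, [3, [4]]]];

var shallow = nested.flat();        // one level, like 3.x _.flatten
var deep = nested.flat(Infinity);   // fully flat, like _.flattenDeep

console.log(shallow); // [1, 2, [3, [4]]]
console.log(deep);    // [1, 2, 3, 4]
```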

View file

@ -21,7 +21,9 @@ define(function (require) {
}
// Create the columns
results.columns = _(aggs).flatten().map(function (agg) {
results.columns = _(aggs)
.flattenDeep()
.map(function (agg) {
return {
categoryName: agg.schema.name,
id: agg.id,
@ -30,7 +32,8 @@ define(function (require) {
field: agg.params.field,
label: agg.type.makeLabel(agg)
};
}).value();
})
.value();
// if there are no buckets then we need to just set the value and return
@ -58,7 +61,7 @@ define(function (require) {
// iterate through all the buckets
_.each(extractBuckets(data[agg.id]), function (bucket) {
var _record = _.flatten([record, bucket.key]);
var _record = _.flattenDeep([record, bucket.key]);
_.each(metrics, function (metric) {
var value = bucket.doc_count;
if (bucket[metric.id] && !_.isUndefined(bucket[metric.id].value)) {

View file

@ -16,19 +16,19 @@ define(function (require) {
// Collect the current leaf and parents into an array of values
$tooltipScope.rows = collectBranch(datum);
var metricCol = $tooltipScope.metricCol = _.find(columns, { categoryName: 'metric' });
// Map those values to what the tooltipSource.rows format.
_.forEachRight($tooltipScope.rows, function (row, i, rows) {
row.spacer = $sce.trustAsHtml(_.repeat('&nbsp;', row.depth));
var percent;
if (i > 0) {
var parentMetric = rows[i - 1].metric;
percent = row.metric / parentMetric;
}
else if (row.item.percentOfGroup != null) {
if (row.item.percentOfGroup != null) {
percent = row.item.percentOfGroup;
}
row.metric = metricCol.aggConfig.fieldFormatter()(row.metric);
if (percent != null) {
row.metric += ' (' + numeral(percent).format('0.[00]%') + ')';
}
@ -36,8 +36,6 @@ define(function (require) {
return row;
});
$tooltipScope.metricCol = _.find(columns, { categoryName: 'metric' });
$tooltipScope.$apply();
return $tooltip[0].outerHTML;
};

View file

@ -5,19 +5,16 @@ define(function (require) {
var AggConfigResult = require('components/vis/_agg_config_result');
return function transformAggregation(agg, metric, aggData, parent) {
return _.map(extractBuckets(aggData), function (bucket) {
// Pick the appropriate value, if the metric doesn't exist then we just
// use the count.
var value = bucket.doc_count;
if (bucket[metric.id] && !_.isUndefined(bucket[metric.id].value)) {
value = bucket[metric.id].value;
}
var aggConfigResult = new AggConfigResult(
agg,
parent && parent.aggConfigResult,
metric.getValue(bucket),
agg.getKey(bucket)
);
// Create the new branch record
var $parent = parent && parent.aggConfigResult;
var aggConfigResult = new AggConfigResult(agg, $parent, value, agg.getKey(bucket));
var branch = {
name: bucket.key,
size: value,
name: agg.fieldFormatter()(bucket.key),
size: aggConfigResult.value,
aggConfig: agg,
aggConfigResult: aggConfigResult
};

View file

@ -7,7 +7,7 @@ define(function (require) {
var AggConfigResult = require('components/vis/_agg_config_result');
_(SplitAcr).inherits(AggConfigResult);
_.class(SplitAcr).inherits(AggConfigResult);
function SplitAcr(agg, parent, key) {
SplitAcr.Super.call(this, agg, parent, key, key);
}

View file

@ -26,7 +26,7 @@ define(function (require) {
* @extends IndexedArray
* @param {object[]} params - array of params that get new-ed up as AggParam objects as described above
*/
_(AggParams).inherits(IndexedArray);
_.class(AggParams).inherits(IndexedArray);
function AggParams(params) {
AggParams.Super.call(this, {
index: ['name'],

View file

@ -3,7 +3,7 @@ define(function (require) {
var _ = require('lodash');
var AggType = Private(require('components/agg_types/_agg_type'));
_(BucketAggType).inherits(AggType);
_.class(BucketAggType).inherits(AggType);
function BucketAggType(config) {
BucketAggType.Super.call(this, config);

View file

@ -4,7 +4,7 @@ define(function (require) {
return function CreateFilterFiltersProvider(Private) {
return function (aggConfig, key) {
// have the aggConfig write agg dsl params
var dslFilters = _.deepGet(aggConfig.toDsl(), 'filters.filters');
var dslFilters = _.get(aggConfig.toDsl(), 'filters.filters');
var filter = dslFilters[key];
if (filter) {

View file

@ -5,6 +5,7 @@ define(function (require) {
var BucketAggType = Private(require('components/agg_types/buckets/_bucket_agg_type'));
var TimeBuckets = Private(require('components/time_buckets/time_buckets'));
var createFilter = Private(require('components/agg_types/buckets/create_filter/date_histogram'));
var intervalOptions = Private(require('components/agg_types/buckets/_interval_options'));
var tzOffset = moment().format('Z');
@ -21,6 +22,7 @@ define(function (require) {
}
require('filters/field_type');
require('components/validateDateInterval');
return new BucketAggType({
name: 'date_histogram',
@ -59,6 +61,10 @@ define(function (require) {
return agg.vis.indexPattern.timeFieldName;
},
onChange: function (agg) {
if (_.get(agg, 'params.interval.val') === 'auto' && !agg.fieldIsTimeField()) {
delete agg.params.interval;
}
setBounds(agg, true);
}
},
@ -66,8 +72,16 @@ define(function (require) {
{
name: 'interval',
type: 'optioned',
deserialize: function (state) {
var interval = _.find(intervalOptions, {val: state});
return interval || _.find(intervalOptions, function (option) {
// For upgrading from 4.0.x to 4.1.x - intervals are now stored as 'y' instead of 'year',
// but this maps the old values to the new values
return Number(moment.duration(1, state)) === Number(moment.duration(1, option.val));
});
},
default: 'auto',
options: Private(require('components/agg_types/buckets/_interval_options')),
options: intervalOptions,
editor: require('text!components/agg_types/controls/interval.html'),
onRequest: function (agg) {
setBounds(agg, true);
@ -85,7 +99,7 @@ define(function (require) {
var scaleMetrics = interval.scaled && interval.scale < 1;
if (scaleMetrics) {
scaleMetrics = _.every(agg.vis.aggs.bySchemaGroup.metrics, function (agg) {
return agg.type.name === 'count' || agg.type.name === 'sum';
return agg.type && (agg.type.name === 'count' || agg.type.name === 'sum');
});
}
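The `deserialize` hook in the hunk above maps pre-4.1 saved intervals (long unit names like `'year'`) to the new short codes by comparing the duration each name represents. A sketch of that idea without moment — the millisecond table is an assumption standing in for `moment.duration`, which treats `'year'` and `'y'` as the same unit:

```javascript
// Duration, in milliseconds, for old and new interval names.
// Invented stand-in for Number(moment.duration(1, unit)).
var MS = {
  s: 1000, second: 1000,
  m: 60000, minute: 60000,
  h: 3600000, hour: 3600000,
  d: 86400000, day: 86400000,
  w: 604800000, week: 604800000,
  y: 31536000000, year: 31536000000
};

function upgradeInterval(stored, options) {
  // exact match first, then fall back to matching by duration
  var exact = options.filter(function (o) { return o.val === stored; })[0];
  return exact || options.filter(function (o) {
    return MS[stored] === MS[o.val];
  })[0];
}

var options = [{ val: 'y' }, { val: 'w' }, { val: 'd' }];
console.log(upgradeInterval('year', options)); // { val: 'y' }
```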

View file

@ -15,7 +15,7 @@ define(function (require) {
{
name: 'filters',
editor: require('text!components/agg_types/controls/filters.html'),
default: [ {input: {}} ],
default: [ {input: {}, label: ''} ],
write: function (aggConfig, output) {
var inFilters = aggConfig.params.filters;
if (!_.size(inFilters)) return;
@ -29,7 +29,7 @@ define(function (require) {
decorateQuery(query);
var label = _.deepGet(query, 'query_string.query') || angular.toJson(query);
var label = filter.label || _.get(query, 'query_string.query') || angular.toJson(query);
filters[label] = input;
}, {});

View file

@ -3,7 +3,7 @@ define(function (require) {
var _ = require('lodash');
var moment = require('moment');
var BucketAggType = Private(require('components/agg_types/buckets/_bucket_agg_type'));
var defaultPrecision = 3;
var defaultPrecision = 2;
function getPrecision(precision) {
var maxPrecision = _.parseInt(config.get('visualization:tileMap:maxPrecision'));
@ -29,10 +29,31 @@ define(function (require) {
name: 'field',
filterFieldTypes: 'geo_point'
},
{
name: 'autoPrecision',
default: true,
write: _.noop
},
{
name: 'mapZoom',
write: _.noop
},
{
name: 'mapCenter',
write: _.noop
},
{
name: 'precision',
default: defaultPrecision,
editor: require('text!components/agg_types/controls/precision.html'),
controller: function ($scope) {
$scope.$watchMulti([
'agg.params.autoPrecision',
'outputAgg.params.precision'
], function (cur, prev) {
if (cur[1]) $scope.agg.params.precision = cur[1];
});
},
deserialize: getPrecision,
write: function (aggConfig, output) {
output.params.precision = getPrecision(aggConfig.params.precision);
@ -41,4 +62,4 @@ define(function (require) {
]
});
};
});
});
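The precision handling above parses user input as an integer, falls back to the default (now 2), and caps the result at a configured maximum — Kibana reads that maximum from the `visualization:tileMap:maxPrecision` setting. A minimal sketch, with the maximum passed in as a parameter (the value 7 below is an assumed setting, not from the source):

```javascript
var defaultPrecision = 2;

// Sketch of getPrecision: integer-parse, default on bad input,
// clamp to the configured maximum.
function getPrecision(precision, maxPrecision) {
  precision = parseInt(precision, 10);
  if (isNaN(precision)) precision = defaultPrecision;
  return precision > maxPrecision ? maxPrecision : precision;
}

console.log(getPrecision('12', 7)); // 7 (clamped to the maximum)
console.log(getPrecision('abc', 7)); // 2 (falls back to the default)
```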

View file

@ -5,6 +5,8 @@ define(function (require) {
var BucketAggType = Private(require('components/agg_types/buckets/_bucket_agg_type'));
var createFilter = Private(require('components/agg_types/buckets/create_filter/histogram'));
require('components/validateDateInterval');
return new BucketAggType({
name: 'histogram',
title: 'Histogram',

View file

@ -1,7 +1,5 @@
<div>
<small><a target="_window" href="http://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-date-format.html#date-math">Accepted Date Formats <i class="fa-link fa"></i></a></small>
<table class="vis-editor-agg-editor-ranges form-group">
<table class="vis-editor-agg-editor-ranges form-group" ng-show="agg.params.ranges.length">
<tr>
<th>
<label>From</label>
@ -37,8 +35,23 @@
</button>
</td>
</tr>
<tr>
<td colspan="3">
<small>
<a target="_window" href="http://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-date-format.html#date-math">Accepted Date Formats <i class="fa-link fa"></i></a>
</small>
</td>
</tr>
</table>
<input ng-model="agg.params.ranges.length" name="rangeLength" required min="1" type="number" class="ng-hide" />
<div class="hintbox" ng-show="aggForm.rangeLength.$invalid">
<p>
<i class="fa fa-danger text-danger"></i>
<strong>Required:</strong> You must specify at least one date range.
</p>
</div>
<div
ng-click="agg.params.ranges.push({})"
class="sidebar-item-button primary">

View file

@ -22,7 +22,7 @@
name="field"
required
ng-model="agg.params.field"
ng-if="indexedFields.length"
ng-show="indexedFields.length"
auto-select-if-only-one="indexedFields"
ng-options="field as field.displayName group by field.type for field in indexedFields"
ng-change="aggParam.onChange(agg)">

View file

@ -1,28 +1,52 @@
<div class="form-group">
<div ng-repeat="filter in agg.params.filters">
<label>Filter {{$index + 1}}</label>
<div class="form-group vis-editor-agg-form-row">
<div class="vis-editor-agg-header">
<label>
Filter {{$index + 1}}
<span ng-if="filter.label">- {{ filter.label }}</span>
</label>
<div class="btn-group">
<button
ng-click="showConfig = !showConfig"
type="button"
class="btn btn-default btn-xs">
<i class="fa fa-tag"></i>
</button>
<button
type="button"
ng-click="agg.params.filters.splice($index, 1)"
class="btn btn-danger btn-xs">
<i class="fa fa-times"></i>
</button>
</div>
</div>
<div class="form-group">
<input validate-query
ng-model="filter.input.query"
type="text"
class="form-control"
name="filter{{$index}}">
</div>
<button
type="button"
ng-click="agg.params.filters.splice($index, 1)"
class="btn btn-danger btn-xs">
<i class="fa fa-times"></i>
</button>
<div class="form-group" ng-show="showConfig">
<label>Filter {{$index + 1}} label</label>
<input
ng-model="filter.label"
placeholder="Label"
type="text"
class="form-control"
name="label{{$index}}">
</div>
</div>
</div>
<input ng-model="agg.params.filters.length" name="filterLength" required min="1" type="number" class="ng-hide">
<input ng-model="agg.params.filters.length" name="filterLength" required min="1" type="number" class="ng-hide" />
<div class="hintbox" ng-show="aggForm.filterLength.$invalid">
<p>
<i class="fa fa-danger text-danger"></i>
<strong>Required:</strong> You must specify at least one filter
<strong>Required:</strong> You must specify at least one filter.
</p>
</div>

View file

@ -20,8 +20,10 @@
</select>
<input
type="text"
name="customInterval"
ng-model="agg.params.customInterval"
ng-change="agg.write()"
validate-date-interval
ng-change="aggForm.customInterval.$valid && agg.write()"
ng-if="agg.params.interval.val == 'custom'"
class="form-control"
required />

View file

@ -5,7 +5,7 @@
</p>
<div ng-show="agg.params.ipRangeType != 'mask'">
<table class="vis-editor-agg-editor-ranges form-group">
<table class="vis-editor-agg-editor-ranges form-group" ng-show="agg.params.ranges.fromTo.length">
<tr>
<th>
<label>From</label>
@ -43,6 +43,14 @@
</tr>
</table>
<input ng-if="agg.params.ipRangeType != 'mask'" ng-model="agg.params.ranges.fromTo.length" name="rangeLength" required min="1" type="number" class="ng-hide" />
<div class="hintbox" ng-show="aggForm.rangeLength.$invalid">
<p>
<i class="fa fa-danger text-danger"></i>
<strong>Required:</strong> You must specify at least one IP range.
</p>
</div>
<div
ng-click="agg.params.ranges.fromTo.push({})"
class="sidebar-item-button primary">
@ -51,7 +59,7 @@
</div>
<div ng-show="agg.params.ipRangeType == 'mask'">
<table class="vis-editor-agg-editor-ranges form-group" ng-show="agg.params.ranges.mask.length">
<tr>
<th>
<label>Mask</label>
@ -79,6 +87,14 @@
</tr>
</table>
<input ng-if="agg.params.ipRangeType == 'mask'" ng-model="agg.params.ranges.mask.length" name="rangeLength" required min="1" type="number" class="ng-hide" />
<div class="hintbox" ng-show="aggForm.rangeLength.$invalid">
<p>
<i class="fa fa-danger text-danger"></i>
<strong>Required:</strong> You must specify at least one IP range.
</p>
</div>
<div
ng-click="agg.params.ranges.mask.push({})"
class="sidebar-item-button primary">
@ -1,5 +1,5 @@
<div class="vis-editor-agg-form-row" ng-controller="agg.type.params.byName.precision.controller">
<div ng-if="!agg.params.autoPrecision" class="form-group">
<label>Precision</label>
<div class="vis-editor-agg-form-row">
<input
@ -16,4 +16,14 @@
</div>
</div>
</div>
</div>
<div class="vis-option-item">
<label>
<input type="checkbox"
name="autoPrecision"
ng-model="agg.params.autoPrecision">
Change precision on map zoom
</label>
</div>
@ -1,4 +1,4 @@
<table class="vis-editor-agg-editor-ranges form-group" ng-show="agg.params.ranges.length">
<tr>
<th>
<label>From</label>
@ -37,8 +37,16 @@
</tr>
</table>
<input ng-model="agg.params.ranges.length" name="rangeLength" required min="1" type="number" class="ng-hide" />
<div class="hintbox" ng-show="aggForm.rangeLength.$invalid">
<p>
<i class="fa fa-danger text-danger"></i>
<strong>Required:</strong> You must specify at least one range.
</p>
</div>
<div
ng-click="agg.params.ranges.push({})"
class="sidebar-item-button primary">
Add Range
</div>
</div>
@ -4,7 +4,7 @@ define(function (require) {
var AggType = Private(require('components/agg_types/_agg_type'));
var fieldFormats = Private(require('registry/field_formats'));
_.class(MetricAggType).inherits(AggType);
function MetricAggType(config) {
MetricAggType.Super.call(this, config);
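The `_.class(Sub).inherits(Base)` calls introduced across these files come from a Kibana-specific lodash mixin that replaces the older `_(Sub).inherits(Base)` wrapper form. Assuming the mixin simply wires up the prototype chain and exposes the parent constructor as `Sub.Super` (which the `MetricAggType.Super.call(this, config)` line relies on), a plain-JS sketch of the pattern might look like this; the `inherits` helper and the `name` field are illustrative stand-ins, not Kibana's actual implementation:

```javascript
// Illustrative stand-in for Kibana's _.class(Sub).inherits(Base) mixin.
function inherits(Sub, Base) {
  Sub.prototype = Object.create(Base.prototype);
  Sub.prototype.constructor = Sub;
  Sub.Super = Base; // lets subclass constructors write Sub.Super.call(this, config)
  return Sub;
}

// Hypothetical base/subclass pair mirroring AggType / MetricAggType.
function AggType(config) {
  this.name = config && config.name;
}
function MetricAggType(config) {
  MetricAggType.Super.call(this, config);
}
inherits(MetricAggType, AggType);

var metric = new MetricAggType({ name: 'avg' });
// metric instanceof AggType → true; metric.name → 'avg'
```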
@ -6,7 +6,7 @@ define(function (require) {
var BaseAggParam = Private(require('components/agg_types/param_types/base'));
var SavedObjectNotFound = require('errors').SavedObjectNotFound;
_.class(FieldAggParam).inherits(BaseAggParam);
function FieldAggParam(config) {
FieldAggParam.Super.call(this, config);
}
@ -5,7 +5,7 @@ define(function (require) {
var IndexedArray = require('utils/indexed_array/index');
var BaseAggParam = Private(require('components/agg_types/param_types/base'));
_.class(OptionedAggParam).inherits(BaseAggParam);
function OptionedAggParam(config) {
OptionedAggParam.Super.call(this, config);
@ -5,7 +5,7 @@ define(function (require) {
var BaseAggParam = Private(require('components/agg_types/param_types/base'));
var editorHtml = require('text!components/agg_types/controls/raw_json.html');
_.class(RawJSONAggParam).inherits(BaseAggParam);
function RawJSONAggParam(config) {
// force name override
config = _.defaults(config, { name: 'json' });
@ -39,10 +39,40 @@ define(function (require) {
return;
}
function filteredCombine(srcA, srcB) {
function mergeObjs(a, b) {
return _(a)
.keys()
.union(_.keys(b))
.transform(function (dest, key) {
var val = compare(a[key], b[key]);
if (val !== undefined) dest[key] = val;
}, {})
.value();
}
function mergeArrays(a, b) {
// attempt to merge each value
return _.times(Math.max(a.length, b.length), function (i) {
return compare(a[i], b[i]);
});
}
function compare(a, b) {
if (_.isPlainObject(a) && _.isPlainObject(b)) return mergeObjs(a, b);
if (_.isArray(a) && _.isArray(b)) return mergeArrays(a, b);
if (b === null) return undefined; // an explicit null in the custom JSON removes the param
if (b !== undefined) return b; // any other value in the custom JSON wins
return a; // no override provided; keep the generated value
}
return compare(srcA, srcB);
}
output.params = filteredCombine(output.params, paramJSON);
return;
};
return RawJSONAggParam;
};
});
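The `filteredCombine` helper above deep-merges the agg's generated params with the user-supplied raw JSON: objects merge over the union of their keys, arrays merge index by index up to the longer length, an explicit `null` in the JSON deletes a key, and `undefined` leaves the generated value in place. A dependency-free sketch of the same semantics (lodash swapped for plain JS; the example param values are hypothetical, not from Kibana):

```javascript
// Sketch of the raw-JSON merge semantics, without lodash.
function filteredCombine(srcA, srcB) {
  function isPlainObj(v) {
    return v !== null && typeof v === 'object' && !Array.isArray(v);
  }
  function compare(a, b) {
    if (isPlainObj(a) && isPlainObj(b)) {
      // merge over the union of keys; undefined results are dropped
      var dest = {};
      Object.keys(a).concat(Object.keys(b)).forEach(function (key) {
        var val = compare(a[key], b[key]);
        if (val !== undefined) dest[key] = val;
      });
      return dest;
    }
    if (Array.isArray(a) && Array.isArray(b)) {
      // merge index-by-index up to the longer array
      var out = [];
      for (var i = 0; i < Math.max(a.length, b.length); i++) {
        out.push(compare(a[i], b[i]));
      }
      return out;
    }
    if (b === null) return undefined; // explicit null deletes the key
    if (b !== undefined) return b;    // any other override wins
    return a;                         // no override: keep the original
  }
  return compare(srcA, srcB);
}

// Hypothetical params: the JSON overrides size, deletes script, extends nested.
var merged = filteredCombine(
  { size: 10, script: 'doc.value', nested: { path: 'a' } },
  { size: 20, script: null, nested: { depth: 2 } }
);
// merged → { size: 20, nested: { path: 'a', depth: 2 } }
```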