Mirror of https://github.com/elastic/kibana.git, synced 2025-04-23 09:19:04 -04:00
[Reporting/CSV Export] Add setting to use PIT or Scroll API (#174980)
## Summary

Closes https://github.com/elastic/kibana-team/issues/715

This adds the `scroll` search API method as an option for CSV export. To enable it, administrators can update `kibana.yml` with:

```
xpack.reporting.csv.scroll.strategy: scroll
```

The valid options for this setting are `scroll` and `pit`. The default is `pit`.

### Design

The strategy option is only customizable in `kibana.yml` settings. It cannot be set on a per-report basis without changing the YML file and restarting Kibana.

1. User sends a request to the Server to generate a CSV report.
2. Server creates a payload and adds a "strategy" setting from the system configuration to the payload.
3. The Server stores the payload in the Queue.
4. The Queuing System triggers an action with the payload.
5. The system reads the "strategy" setting from the payload.
6. The system begins to export data using a method based on the strategy.

```
User ⎯Request→ Server
                 ↓
   Task payload (with strategy added)
                 ↓
Kibana Task Manager ⎯Action→ CSV Generator
```

### Other changes

1. Reorganize source files in the kbn-generate-csv package.

### Checklist

Delete any items that are not applicable to this PR.

- [x] Update "Inspect search in Dev Tools" for scroll option
- [x] [Documentation](https://www.elastic.co/guide/en/kibana/master/development-documentation.html) was added for features that require explanation or tutorials
- [x] [Unit or functional tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html) were updated or added to match the most common scenarios
- [ ] [Flaky Test Runner](https://ci-stats.kibana.dev/trigger_flaky_test_runner/1) was used on any tests changed
- [x] If a plugin configuration key changed, check if it needs to be allowlisted in the cloud and added to the [docker list](https://github.com/elastic/kibana/blob/main/src/dev/build/tasks/os_packages/docker_generator/resources/base/bin/kibana-docker)

---------

Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
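To make the design flow above concrete, here is a minimal TypeScript sketch of how the configured strategy could travel inside the task payload and be dispatched by the generator. This is illustrative only, not the actual Kibana implementation: the `CsvPagingStrategy` union (`'pit' | 'scroll'`) matches the type exported by this PR, while the function and field names below are hypothetical.

```ts
// Illustrative sketch only; names other than CsvPagingStrategy are hypothetical.
type CsvPagingStrategy = 'pit' | 'scroll';

interface CsvTaskPayload {
  title: string;
  searchSource: Record<string, unknown>;
  // Copied from xpack.reporting.csv.scroll.strategy when the job is created,
  // so the report keeps the strategy it was queued with even if the YML changes later.
  pagingStrategy: CsvPagingStrategy;
}

// Server side: stamp the configured strategy onto the payload before queueing.
function buildTaskPayload(
  params: { title: string; searchSource: Record<string, unknown> },
  configuredStrategy: CsvPagingStrategy
): CsvTaskPayload {
  return { ...params, pagingStrategy: configuredStrategy };
}

// Task side: the generator branches on the strategy carried in the payload.
function describePagingMethod(payload: CsvTaskPayload): string {
  switch (payload.pagingStrategy) {
    case 'scroll':
      return 'page with the Elasticsearch scroll API';
    case 'pit':
    default:
      return 'page with a point-in-time (PIT) plus search_after';
  }
}
```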
Commit 1c292409e1 (parent 707b423389)

31 changed files with 1791 additions and 937 deletions

@@ -6,8 +6,8 @@
++++

:frontmatter-description: A reference of the reporting settings administrators configure in kibana.yml.
:frontmatter-tags-products: [kibana]
:frontmatter-tags-content-type: [reference]
:frontmatter-tags-user-goals: [configure]

You can configure `xpack.reporting` settings in your `kibana.yml` to:

@@ -216,7 +216,7 @@ smaller reports, or extract the data you need directly from {es}.
The following deployment configurations may lead to failed report jobs or incomplete reports:

* Any shard needed for search is unavailable.
* Data is stored on slow storage tiers.
* Network latency between nodes is high.
* {ccs-cap} is used.

@@ -232,12 +232,21 @@ You may need to lower this setting if the default number of documents creates a
============

`xpack.reporting.csv.scroll.duration`::
Amount of {time-units}[time] allowed before {kib} cleans the scroll context during a CSV export. Defaults to `30s`.
[NOTE]
============
If search latency in {es} is sufficiently high, such as if you are using {ccs}, you may need to increase the setting.
============

`xpack.reporting.csv.scroll.strategy`::
Choose the API method used to page through data during CSV export. Valid options are `scroll` and `pit`. Defaults to `pit`.
[NOTE]
============
Each method has its own unique limitations which are important to understand.

* Scroll API: Search is limited to 500 shards at the very most. In cases where data shards are unavailable or time out, the export may return partial data.
* PIT API: Permissions to read data aliases alone will not work: the permissions are needed on the underlying indices or datastreams. In cases where data shards are unavailable or time out, the export will be empty rather than returning partial data.
============

`xpack.reporting.csv.checkForFormulas`::
Enables a check that warns you when there's a potential formula included in the output (=, -, +, and @ chars). See OWASP: https://www.owasp.org/index.php/CSV_Injection. Defaults to `true`.

@@ -245,7 +254,7 @@ Enables a check that warns you when there's a potential formula included in the
Escape formula values in cells with a `'`. See OWASP: https://www.owasp.org/index.php/CSV_Injection. Defaults to `true`.

`xpack.reporting.csv.enablePanelActionDownload`::
deprecated:[7.9.0,This setting has no effect.] Enables CSV export from a saved search on a dashboard. This action is available in the dashboard panel menu for the saved search.
[NOTE]
============
This setting exists for backwards compatibility, and is hardcoded to `true`. CSV export from a saved search on a dashboard is enabled when Reporting is enabled.

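The settings excerpted above map onto a small config object in the reporting code. The sketch below is illustrative only: the field names mirror the `getMockConfig` helper used in the tests later in this diff, and the default values shown (`size: 500`, `duration: '30s'`, `strategy: 'pit'`) are taken from that mock rather than from the production config schema.

```ts
// Illustrative shape of the CSV reporting config; based on the mock config in
// the tests below, not on the actual Kibana config-schema definition.
interface CsvScrollConfig {
  size: number;               // documents fetched per page
  duration: string;           // xpack.reporting.csv.scroll.duration, e.g. '30s'
  strategy: 'pit' | 'scroll'; // xpack.reporting.csv.scroll.strategy
}

interface CsvConfig {
  checkForFormulas: boolean;    // xpack.reporting.csv.checkForFormulas
  escapeFormulaValues: boolean; // xpack.reporting.csv.escapeFormulaValues
  maxSizeBytes: number;
  useByteOrderMarkEncoding: boolean;
  maxConcurrentShardRequests: number;
  scroll: CsvScrollConfig;
}

// Defaults mirroring the test mock; any field can be overridden per call.
function withDefaults(overrides: Partial<CsvConfig> = {}): CsvConfig {
  return {
    checkForFormulas: true,
    escapeFormulaValues: true,
    maxSizeBytes: 180000,
    useByteOrderMarkEncoding: false,
    maxConcurrentShardRequests: 5,
    scroll: { size: 500, duration: '30s', strategy: 'pit' },
    ...overrides,
  };
}
```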
@@ -8,3 +8,4 @@

export { CsvGenerator } from './src/generate_csv';
export { CsvESQLGenerator, type JobParamsCsvESQL } from './src/generate_csv_esql';
export type { CsvPagingStrategy } from './types';

@@ -1,5 +1,231 @@
|
|||
// Jest Snapshot v1, https://goo.gl/fbAQLP
|
||||
|
||||
exports[`CsvGenerator PIT strategy keeps order of the columns during the scroll 1`] = `
|
||||
"\\"_id\\",\\"_index\\",\\"_score\\",a,b
|
||||
\\"'-\\",\\"'-\\",\\"'-\\",a1,b1
|
||||
\\"'-\\",\\"'-\\",\\"'-\\",\\"'-\\",b2
|
||||
\\"'-\\",\\"'-\\",\\"'-\\",a3,\\"'-\\",c3
|
||||
"
|
||||
`;
|
||||
|
||||
exports[`CsvGenerator PIT strategy uses the pit ID to page all the data 1`] = `
|
||||
"date,ip,message
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
"
|
||||
`;
|
||||
|
||||
exports[`CsvGenerator Scroll strategy keeps order of the columns during the scroll 1`] = `
|
||||
"\\"_id\\",\\"_index\\",\\"_score\\",a,b
|
||||
\\"'-\\",\\"'-\\",\\"'-\\",a1,b1
|
||||
\\"'-\\",\\"'-\\",\\"'-\\",\\"'-\\",b2
|
||||
\\"'-\\",\\"'-\\",\\"'-\\",a3,\\"'-\\",c3
|
||||
"
|
||||
`;
|
||||
|
||||
exports[`CsvGenerator Scroll strategy uses the scroll context to page all the data 1`] = `
|
||||
"date,ip,message
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
"
|
||||
`;
|
||||
|
||||
exports[`CsvGenerator fields from job.columns (7.13+ generated) cells can be multi-value 1`] = `
|
||||
"product,category
|
||||
coconut,\\"cool, rad\\"
|
||||
|
@@ -64,197 +290,3 @@ exports[`CsvGenerator formulas escapes formula values in a header, doesn't warn
|
|||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"This is great data\\"
|
||||
"
|
||||
`;
|
||||
|
||||
exports[`CsvGenerator keeps order of the columns during the scroll 1`] = `
|
||||
Array [
|
||||
Array [
|
||||
"Requesting PIT for: [logstash-*]...",
|
||||
],
|
||||
Array [
|
||||
"Opened PIT ID: oju9fs3698s3[39 bytes]",
|
||||
],
|
||||
Array [
|
||||
"Executing search request with PIT ID: [oju9fs3698s3[39 bytes]]",
|
||||
],
|
||||
Array [
|
||||
"Received total hits: 3. Accuracy: unknown.",
|
||||
],
|
||||
Array [
|
||||
"Result details: {\\"rawResponse\\":{\\"took\\":1,\\"timed_out\\":false,\\"_shards\\":{\\"total\\":1,\\"successful\\":1,\\"failed\\":0,\\"skipped\\":0},\\"hits\\":{\\"total\\":3,\\"max_score\\":0},\\"pit_id\\":\\"oju9fs3698s3[39 bytes]\\"}}",
|
||||
],
|
||||
Array [
|
||||
"Received PIT ID: [oju9fs3698s3[39 bytes]]",
|
||||
],
|
||||
Array [
|
||||
"Received search_after: [undefined]",
|
||||
],
|
||||
Array [
|
||||
"Building CSV header row",
|
||||
],
|
||||
Array [
|
||||
"Building 1 CSV data rows",
|
||||
],
|
||||
Array [
|
||||
"Executing search request with PIT ID: [oju9fs3698s3[39 bytes]]",
|
||||
],
|
||||
Array [
|
||||
"Received total hits: 3. Accuracy: unknown.",
|
||||
],
|
||||
Array [
|
||||
"Result details: {\\"rawResponse\\":{\\"took\\":1,\\"timed_out\\":false,\\"_shards\\":{\\"total\\":1,\\"successful\\":1,\\"failed\\":0,\\"skipped\\":0},\\"hits\\":{\\"total\\":3,\\"max_score\\":0},\\"pit_id\\":\\"oju9fs3698s3[39 bytes]\\"}}",
|
||||
],
|
||||
Array [
|
||||
"Received PIT ID: [oju9fs3698s3[39 bytes]]",
|
||||
],
|
||||
Array [
|
||||
"Received search_after: [undefined]",
|
||||
],
|
||||
Array [
|
||||
"Building 1 CSV data rows",
|
||||
],
|
||||
Array [
|
||||
"Executing search request with PIT ID: [oju9fs3698s3[39 bytes]]",
|
||||
],
|
||||
Array [
|
||||
"Received total hits: 3. Accuracy: unknown.",
|
||||
],
|
||||
Array [
|
||||
"Result details: {\\"rawResponse\\":{\\"took\\":1,\\"timed_out\\":false,\\"_shards\\":{\\"total\\":1,\\"successful\\":1,\\"failed\\":0,\\"skipped\\":0},\\"hits\\":{\\"total\\":3,\\"max_score\\":0},\\"pit_id\\":\\"oju9fs3698s3[39 bytes]\\"}}",
|
||||
],
|
||||
Array [
|
||||
"Received PIT ID: [oju9fs3698s3[39 bytes]]",
|
||||
],
|
||||
Array [
|
||||
"Received search_after: [undefined]",
|
||||
],
|
||||
Array [
|
||||
"Building 1 CSV data rows",
|
||||
],
|
||||
Array [
|
||||
"Closing PIT oju9fs3698s3[39 bytes]",
|
||||
],
|
||||
]
|
||||
`;
|
||||
|
||||
exports[`CsvGenerator keeps order of the columns during the scroll 2`] = `
|
||||
"\\"_id\\",\\"_index\\",\\"_score\\",a,b
|
||||
\\"'-\\",\\"'-\\",\\"'-\\",a1,b1
|
||||
\\"'-\\",\\"'-\\",\\"'-\\",\\"'-\\",b2
|
||||
\\"'-\\",\\"'-\\",\\"'-\\",a3,\\"'-\\",c3
|
||||
"
|
||||
`;
|
||||
|
||||
exports[`CsvGenerator uses the pit ID to page all the data 1`] = `
|
||||
"date,ip,message
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from the initial search\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"hit from a subsequent scroll\\"
|
||||
"
|
||||
`;
|
||||
|
||||
exports[`CsvGenerator warns if max size was reached 1`] = `
|
||||
"date,ip,message
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"super cali fragile istic XPLA docious\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"super cali fragile istic XPLA docious\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"super cali fragile istic XPLA docious\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"super cali fragile istic XPLA docious\\"
|
||||
\\"2020-12-31T00:14:28.000Z\\",\\"110.135.176.89\\",\\"super cali fragile istic XPLA docious\\"
|
||||
"
|
||||
`;
|
||||
|
|
|
@@ -19,9 +19,8 @@ import {
|
|||
savedObjectsClientMock,
|
||||
uiSettingsServiceMock,
|
||||
} from '@kbn/core/server/mocks';
|
||||
import { ISearchStartSearchSource } from '@kbn/data-plugin/common';
|
||||
import { ISearchClient, ISearchStartSearchSource } from '@kbn/data-plugin/common';
|
||||
import { searchSourceInstanceMock } from '@kbn/data-plugin/common/search/search_source/mocks';
|
||||
import { IScopedSearchClient } from '@kbn/data-plugin/server';
|
||||
import { dataPluginMock } from '@kbn/data-plugin/server/mocks';
|
||||
import { FieldFormatsRegistry } from '@kbn/field-formats-plugin/common';
|
||||
import { CancellationToken } from '@kbn/reporting-common';
|
||||
|
@@ -31,18 +30,32 @@ import {
|
|||
UI_SETTINGS_CSV_QUOTE_VALUES,
|
||||
UI_SETTINGS_CSV_SEPARATOR,
|
||||
UI_SETTINGS_DATEFORMAT_TZ,
|
||||
} from './constants';
|
||||
} from '../constants';
|
||||
import { CsvGenerator } from './generate_csv';
|
||||
|
||||
const createMockJob = (baseObj: any = {}): JobParamsCSV => ({
|
||||
...baseObj,
|
||||
type CsvConfigType = ReportingConfigType['csv'];
|
||||
|
||||
const getMockConfig = (opts: Partial<CsvConfigType> = {}): CsvConfigType => ({
|
||||
checkForFormulas: true,
|
||||
escapeFormulaValues: true,
|
||||
maxSizeBytes: 180000,
|
||||
useByteOrderMarkEncoding: false,
|
||||
scroll: { size: 500, duration: '30s', strategy: 'pit' },
|
||||
enablePanelActionDownload: true,
|
||||
maxConcurrentShardRequests: 5,
|
||||
...opts,
|
||||
});
|
||||
|
||||
const createMockJob = (baseObj: Partial<JobParamsCSV> = {}): JobParamsCSV =>
|
||||
({
|
||||
...baseObj,
|
||||
} as JobParamsCSV);
|
||||
const mockTaskInstanceFields = { startedAt: null, retryAt: null };
|
||||
|
||||
describe('CsvGenerator', () => {
|
||||
let mockEsClient: IScopedClusterClient;
|
||||
let mockDataClient: IScopedSearchClient;
|
||||
let mockConfig: ReportingConfigType['csv'];
|
||||
let mockDataClient: ISearchClient;
|
||||
let mockConfig: CsvConfigType;
|
||||
let mockLogger: jest.Mocked<Logger>;
|
||||
let uiSettingsClient: IUiSettingsClient;
|
||||
let stream: jest.Mocked<Writable>;
|
||||
|
@@ -62,7 +75,7 @@ describe('CsvGenerator', () => {
|
|||
getAllMigrations: jest.fn(),
|
||||
};
|
||||
|
||||
const mockPitId = 'oju9fs3698s3902f02-8qg3-u9w36oiewiuyew6';
|
||||
const mockCursorId = 'oju9fs3698s3902f02-8qg3-u9w36oiewiuyew6';
|
||||
|
||||
const getMockRawResponse = (
|
||||
hits: Array<estypes.SearchHit<unknown>> = [],
|
||||
|
@@ -70,7 +83,7 @@ describe('CsvGenerator', () => {
|
|||
) => ({
|
||||
took: 1,
|
||||
timed_out: false,
|
||||
pit_id: mockPitId,
|
||||
pit_id: mockCursorId,
|
||||
_shards: { total: 1, successful: 1, failed: 0, skipped: 0 },
|
||||
hits: { hits, total, max_score: 0 },
|
||||
});
|
||||
|
@@ -96,7 +109,9 @@ describe('CsvGenerator', () => {
|
|||
mockDataClient = dataPluginMock.createStartContract().search.asScoped({} as any);
|
||||
mockDataClient.search = mockDataClientSearchDefault;
|
||||
|
||||
mockEsClient.asCurrentUser.openPointInTime = jest.fn().mockResolvedValueOnce({ id: mockPitId });
|
||||
mockEsClient.asCurrentUser.openPointInTime = jest
|
||||
.fn()
|
||||
.mockResolvedValueOnce({ id: mockCursorId });
|
||||
|
||||
uiSettingsClient = uiSettingsServiceMock
|
||||
.createStartContract()
|
||||
|
@@ -112,20 +127,12 @@ describe('CsvGenerator', () => {
|
|||
}
|
||||
});
|
||||
|
||||
mockConfig = {
|
||||
checkForFormulas: true,
|
||||
escapeFormulaValues: true,
|
||||
maxSizeBytes: 180000,
|
||||
useByteOrderMarkEncoding: false,
|
||||
scroll: { size: 500, duration: '30s' },
|
||||
enablePanelActionDownload: true,
|
||||
maxConcurrentShardRequests: 5,
|
||||
};
|
||||
mockConfig = getMockConfig();
|
||||
|
||||
searchSourceMock.getField = jest.fn((key: string) => {
|
||||
switch (key) {
|
||||
case 'pit':
|
||||
return { id: mockPitId };
|
||||
return { id: mockCursorId };
|
||||
case 'index':
|
||||
return {
|
||||
fields: {
|
||||
|
@@ -241,88 +248,28 @@ describe('CsvGenerator', () => {
|
|||
expect(csvResult.warnings).toEqual([]);
|
||||
});
|
||||
|
||||
it('warns if max size was reached', async () => {
|
||||
const TEST_MAX_SIZE = 500;
|
||||
mockConfig = {
|
||||
checkForFormulas: true,
|
||||
escapeFormulaValues: true,
|
||||
maxSizeBytes: TEST_MAX_SIZE,
|
||||
useByteOrderMarkEncoding: false,
|
||||
scroll: { size: 500, duration: '30s' },
|
||||
enablePanelActionDownload: true,
|
||||
maxConcurrentShardRequests: 5,
|
||||
};
|
||||
describe('PIT strategy', () => {
|
||||
const mockJobUsingPitPaging = createMockJob({
|
||||
columns: ['date', 'ip', 'message'],
|
||||
pagingStrategy: 'pit',
|
||||
});
|
||||
|
||||
mockDataClient.search = jest.fn().mockImplementation(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
range(0, HITS_TOTAL).map(
|
||||
() =>
|
||||
({
|
||||
fields: {
|
||||
date: ['2020-12-31T00:14:28.000Z'],
|
||||
ip: ['110.135.176.89'],
|
||||
message: ['super cali fragile istic XPLA docious'],
|
||||
},
|
||||
} as unknown as estypes.SearchHit)
|
||||
)
|
||||
),
|
||||
})
|
||||
);
|
||||
it('warns if max size was reached', async () => {
|
||||
const TEST_MAX_SIZE = 500;
|
||||
mockConfig = getMockConfig({
|
||||
maxSizeBytes: TEST_MAX_SIZE,
|
||||
});
|
||||
|
||||
const generateCsv = new CsvGenerator(
|
||||
createMockJob({ columns: ['date', 'ip', 'message'] }),
|
||||
mockConfig,
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
const csvResult = await generateCsv.generateData();
|
||||
expect(csvResult.max_size_reached).toBe(true);
|
||||
expect(csvResult.warnings).toEqual([]);
|
||||
expect(content).toMatchSnapshot();
|
||||
});
|
||||
|
||||
it('uses the pit ID to page all the data', async () => {
|
||||
mockDataClient.search = jest
|
||||
.fn()
|
||||
.mockImplementationOnce(() =>
|
||||
mockDataClient.search = jest.fn().mockImplementation(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
range(0, HITS_TOTAL / 10).map(
|
||||
range(0, HITS_TOTAL).map(
|
||||
() =>
|
||||
({
|
||||
fields: {
|
||||
date: ['2020-12-31T00:14:28.000Z'],
|
||||
ip: ['110.135.176.89'],
|
||||
message: ['hit from the initial search'],
|
||||
},
|
||||
} as unknown as estypes.SearchHit)
|
||||
),
|
||||
HITS_TOTAL
|
||||
),
|
||||
})
|
||||
)
|
||||
.mockImplementation(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
range(0, HITS_TOTAL / 10).map(
|
||||
() =>
|
||||
({
|
||||
fields: {
|
||||
date: ['2020-12-31T00:14:28.000Z'],
|
||||
ip: ['110.135.176.89'],
|
||||
message: ['hit from a subsequent scroll'],
|
||||
message: ['super cali fragile istic XPLA docious'],
|
||||
},
|
||||
} as unknown as estypes.SearchHit)
|
||||
)
|
||||
|
@@ -330,101 +277,505 @@ describe('CsvGenerator', () => {
|
|||
})
|
||||
);
|
||||
|
||||
const generateCsv = new CsvGenerator(
|
||||
createMockJob({ columns: ['date', 'ip', 'message'] }),
|
||||
mockConfig,
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
const csvResult = await generateCsv.generateData();
|
||||
expect(csvResult.warnings).toEqual([]);
|
||||
expect(content).toMatchSnapshot();
|
||||
|
||||
expect(mockDataClient.search).toHaveBeenCalledTimes(10);
|
||||
expect(mockDataClient.search).toBeCalledWith(
|
||||
{ params: { body: {}, ignore_throttled: undefined, max_concurrent_shard_requests: 5 } },
|
||||
{ strategy: 'es', transport: { maxRetries: 0, requestTimeout: '30s' } }
|
||||
);
|
||||
|
||||
expect(mockEsClient.asCurrentUser.openPointInTime).toHaveBeenCalledTimes(1);
|
||||
expect(mockEsClient.asCurrentUser.openPointInTime).toHaveBeenCalledWith(
|
||||
{
|
||||
ignore_unavailable: true,
|
||||
index: 'logstash-*',
|
||||
keep_alive: '30s',
|
||||
},
|
||||
{ maxConcurrentShardRequests: 5, maxRetries: 0, requestTimeout: '30s' }
|
||||
);
|
||||
|
||||
expect(mockEsClient.asCurrentUser.closePointInTime).toHaveBeenCalledTimes(1);
|
||||
expect(mockEsClient.asCurrentUser.closePointInTime).toHaveBeenCalledWith({
|
||||
body: { id: mockPitId },
|
||||
const generateCsv = new CsvGenerator(
|
||||
mockJobUsingPitPaging,
|
||||
mockConfig,
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
const csvResult = await generateCsv.generateData();
|
||||
expect(csvResult.max_size_reached).toBe(true);
|
||||
expect(csvResult.warnings).toEqual([]);
|
||||
});
|
||||
});
|
||||
|
||||
it('keeps order of the columns during the scroll', async () => {
|
||||
mockDataClient.search = jest
|
||||
.fn()
|
||||
.mockImplementationOnce(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
[{ fields: { a: ['a1'], b: ['b1'] } } as unknown as estypes.SearchHit],
|
||||
3
|
||||
),
|
||||
})
|
||||
)
|
||||
.mockImplementationOnce(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
[{ fields: { b: ['b2'] } } as unknown as estypes.SearchHit],
|
||||
3
|
||||
),
|
||||
})
|
||||
)
|
||||
.mockImplementationOnce(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
[{ fields: { a: ['a3'], c: ['c3'] } } as unknown as estypes.SearchHit],
|
||||
3
|
||||
),
|
||||
it('uses the pit ID to page all the data', async () => {
|
||||
mockDataClient.search = jest
|
||||
.fn()
|
||||
.mockImplementationOnce(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
range(0, HITS_TOTAL / 10).map(
|
||||
() =>
|
||||
({
|
||||
fields: {
|
||||
date: ['2020-12-31T00:14:28.000Z'],
|
||||
ip: ['110.135.176.89'],
|
||||
message: ['hit from the initial search'],
|
||||
},
|
||||
} as unknown as estypes.SearchHit)
|
||||
),
|
||||
HITS_TOTAL
|
||||
),
|
||||
})
|
||||
)
|
||||
.mockImplementation(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
range(0, HITS_TOTAL / 10).map(
|
||||
() =>
|
||||
({
|
||||
fields: {
|
||||
date: ['2020-12-31T00:14:28.000Z'],
|
||||
ip: ['110.135.176.89'],
|
||||
message: ['hit from a subsequent scroll'],
|
||||
},
|
||||
} as unknown as estypes.SearchHit)
|
||||
)
|
||||
),
|
||||
})
|
||||
);
|
||||
|
||||
const generateCsv = new CsvGenerator(
|
||||
mockJobUsingPitPaging,
|
||||
mockConfig,
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
const csvResult = await generateCsv.generateData();
|
||||
expect(csvResult.warnings).toEqual([]);
|
||||
expect(content).toMatchSnapshot();
|
||||
|
||||
expect(mockDataClient.search).toHaveBeenCalledTimes(10);
|
||||
expect(mockDataClient.search).toBeCalledWith(
|
||||
{ params: { body: {}, ignore_throttled: undefined, max_concurrent_shard_requests: 5 } },
|
||||
{ strategy: 'es', transport: { maxRetries: 0, requestTimeout: '30s' } }
|
||||
);
|
||||
|
||||
expect(mockEsClient.asCurrentUser.openPointInTime).toHaveBeenCalledTimes(1);
|
||||
expect(mockEsClient.asCurrentUser.openPointInTime).toHaveBeenCalledWith(
|
||||
{
|
||||
ignore_unavailable: true,
|
||||
index: 'logstash-*',
|
||||
keep_alive: '30s',
|
||||
},
|
||||
{ maxConcurrentShardRequests: 5, maxRetries: 0, requestTimeout: '30s' }
|
||||
);
|
||||
|
||||
expect(mockEsClient.asCurrentUser.closePointInTime).toHaveBeenCalledTimes(1);
|
||||
expect(mockEsClient.asCurrentUser.closePointInTime).toHaveBeenCalledWith({
|
||||
body: { id: mockCursorId },
|
||||
});
|
||||
});
|
||||
|
||||
it('keeps order of the columns during the scroll', async () => {
|
||||
mockDataClient.search = jest
|
||||
.fn()
|
||||
.mockImplementationOnce(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
[{ fields: { a: ['a1'], b: ['b1'] } } as unknown as estypes.SearchHit],
|
||||
3
|
||||
),
|
||||
})
|
||||
)
|
||||
.mockImplementationOnce(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
[{ fields: { b: ['b2'] } } as unknown as estypes.SearchHit],
|
||||
3
|
||||
),
|
||||
})
|
||||
)
|
||||
.mockImplementationOnce(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
[{ fields: { a: ['a3'], c: ['c3'] } } as unknown as estypes.SearchHit],
|
||||
3
|
||||
),
|
||||
})
|
||||
);
|
||||
|
||||
const generateCsv = new CsvGenerator(
|
||||
createMockJob({ searchSource: {}, columns: [], pagingStrategy: 'pit' }),
|
||||
mockConfig,
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
await generateCsv.generateData();
|
||||
expect(content).toMatchSnapshot();
|
||||
});
|
||||
|
||||
it('adds a warning if export was unable to close the PIT', async () => {
|
||||
mockEsClient.asCurrentUser.closePointInTime = jest.fn().mockRejectedValueOnce(
|
||||
new esErrors.ResponseError({
|
||||
statusCode: 419,
|
||||
warnings: [],
|
||||
meta: { context: 'test' } as any,
|
||||
})
|
||||
);
|
||||
|
||||
const debugLogSpy = jest.spyOn(mockLogger, 'debug');
|
||||
const generateCsv = new CsvGenerator(
|
||||
mockJobUsingPitPaging,
|
||||
mockConfig,
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
|
||||
const generateCsv = new CsvGenerator(
|
||||
createMockJob({ searchSource: {}, columns: [] }),
|
||||
mockConfig,
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
await generateCsv.generateData();
|
||||
await expect(generateCsv.generateData()).resolves.toMatchInlineSnapshot(`
|
||||
Object {
|
||||
"content_type": "text/csv",
|
||||
"csv_contains_formulas": false,
|
||||
"error_code": undefined,
|
||||
"max_size_reached": false,
|
||||
"metrics": Object {
|
||||
"csv": Object {
|
||||
"rows": 0,
|
||||
},
|
||||
},
|
||||
"warnings": Array [
|
||||
"Unable to close the Point-In-Time used for search. Check the Kibana server logs.",
|
||||
],
|
||||
}
|
||||
`);
|
||||
});
|
||||
|
||||
expect(debugLogSpy.mock.calls).toMatchSnapshot();
|
||||
describe('debug logging', () => {
|
||||
it('logs the total hits relation if relation is provided', async () => {
|
||||
mockDataClient.search = jest.fn().mockImplementation(() =>
|
||||
Rx.of({
|
||||
rawResponse: {
|
||||
took: 1,
|
||||
timed_out: false,
|
||||
pit_id: mockCursorId,
|
||||
_shards: { total: 1, successful: 1, failed: 0, skipped: 0 },
|
||||
hits: { hits: [], total: { relation: 'eq', value: 12345 }, max_score: 0 },
|
||||
},
|
||||
})
|
||||
);
|
||||
|
||||
expect(content).toMatchSnapshot();
|
||||
const debugLogSpy = jest.spyOn(mockLogger, 'debug');
|
||||
const generateCsv = new CsvGenerator(
|
||||
mockJobUsingPitPaging,
|
||||
mockConfig,
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
|
||||
await generateCsv.generateData();
|
||||
expect(debugLogSpy).toHaveBeenCalledWith('Received total hits: 12345. Accuracy: eq.');
|
||||
});
|
||||
|
||||
it('logs the total hits relation as "unknown" if relation is not provided', async () => {
|
||||
mockDataClient.search = jest.fn().mockImplementation(() =>
|
||||
Rx.of({
|
||||
rawResponse: {
|
||||
took: 1,
|
||||
timed_out: false,
|
||||
pit_id: mockCursorId,
|
||||
_shards: { total: 1, successful: 1, failed: 0, skipped: 0 },
|
||||
hits: { hits: [], total: 12345, max_score: 0 },
|
||||
},
|
||||
})
|
||||
);
|
||||
|
||||
const debugLogSpy = jest.spyOn(mockLogger, 'debug');
|
||||
const generateCsv = new CsvGenerator(
|
||||
mockJobUsingPitPaging,
|
||||
mockConfig,
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
|
||||
await generateCsv.generateData();
|
||||
expect(debugLogSpy).toHaveBeenCalledWith('Received total hits: 12345. Accuracy: unknown.');
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('Scroll strategy', () => {
|
||||
const mockJobUsingScrollPaging = createMockJob({
|
||||
columns: ['date', 'ip', 'message'],
|
||||
pagingStrategy: 'scroll',
|
||||
});
|
||||
|
||||
beforeEach(() => {
|
||||
mockDataClient.search = jest
|
||||
.fn()
|
||||
.mockImplementationOnce(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
range(0, HITS_TOTAL / 10).map(
|
||||
() =>
|
||||
({
|
||||
fields: {
|
||||
date: ['2020-12-31T00:14:28.000Z'],
|
||||
ip: ['110.135.176.89'],
|
||||
message: ['hit from the initial search'],
|
||||
},
|
||||
} as unknown as estypes.SearchHit)
|
||||
),
|
||||
HITS_TOTAL
|
||||
),
|
||||
})
|
||||
)
|
||||
.mockImplementation(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
range(0, HITS_TOTAL / 10).map(
|
||||
() =>
|
||||
({
|
||||
fields: {
|
||||
date: ['2020-12-31T00:14:28.000Z'],
|
||||
ip: ['110.135.176.89'],
|
||||
message: ['hit from a subsequent scroll'],
|
||||
},
|
||||
} as unknown as estypes.SearchHit)
|
||||
)
|
||||
),
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
it('warns if max size was reached', async () => {
|
||||
const TEST_MAX_SIZE = 500;
|
||||
|
||||
const generateCsv = new CsvGenerator(
|
||||
mockJobUsingScrollPaging,
|
||||
getMockConfig({
|
||||
maxSizeBytes: TEST_MAX_SIZE,
|
||||
scroll: { size: 500, duration: '30s', strategy: 'scroll' },
|
||||
}),
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
const csvResult = await generateCsv.generateData();
|
||||
expect(csvResult.max_size_reached).toBe(true);
|
||||
expect(csvResult.warnings).toEqual([]);
|
||||
});
|
||||
|
||||
it('uses the scroll context to page all the data', async () => {
|
||||
const generateCsv = new CsvGenerator(
|
||||
mockJobUsingScrollPaging,
|
||||
getMockConfig({
|
||||
scroll: { size: 500, duration: '30s', strategy: 'scroll' },
|
||||
}),
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
const csvResult = await generateCsv.generateData();
|
||||
expect(csvResult.warnings).toEqual([]);
|
||||
expect(content).toMatchSnapshot();
|
||||
|
||||
expect(mockDataClient.search).toHaveBeenCalledTimes(10);
|
||||
expect(mockDataClient.search).toBeCalledWith(
|
||||
{
|
||||
params: expect.objectContaining({
|
||||
index: 'logstash-*',
|
||||
scroll: '30s',
|
||||
size: 500,
|
||||
max_concurrent_shard_requests: 5,
|
||||
}),
|
||||
},
|
||||
{ strategy: 'es', transport: { maxRetries: 0, requestTimeout: '30s' } }
|
||||
);
|
||||
|
||||
expect(mockEsClient.asCurrentUser.openPointInTime).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('keeps order of the columns during the scroll', async () => {
|
||||
mockDataClient.search = jest
|
||||
.fn()
|
||||
.mockImplementationOnce(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
[{ fields: { a: ['a1'], b: ['b1'] } } as unknown as estypes.SearchHit],
|
||||
3
|
||||
),
|
||||
})
|
||||
)
|
||||
.mockImplementationOnce(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
[{ fields: { b: ['b2'] } } as unknown as estypes.SearchHit],
|
||||
3
|
||||
),
|
||||
})
|
||||
)
|
||||
.mockImplementationOnce(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse(
|
||||
[{ fields: { a: ['a3'], c: ['c3'] } } as unknown as estypes.SearchHit],
|
||||
3
|
||||
),
|
||||
})
|
||||
);
|
||||
|
||||
const generateCsv = new CsvGenerator(
|
||||
createMockJob({ searchSource: {}, columns: [], pagingStrategy: 'scroll' }),
|
||||
getMockConfig({
|
||||
scroll: { size: 500, duration: '30s', strategy: 'scroll' },
|
||||
}),
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
await generateCsv.generateData();
|
||||
expect(content).toMatchSnapshot();
|
||||
});
|
||||
|
||||
describe('debug logging', () => {
|
||||
it('logs the total hits relation if relation is provided', async () => {
|
||||
mockDataClient.search = jest.fn().mockImplementation(() =>
|
||||
Rx.of({
|
||||
rawResponse: {
|
||||
took: 1,
|
||||
timed_out: false,
|
||||
_scroll_id: mockCursorId,
|
||||
_shards: { total: 1, successful: 1, failed: 0, skipped: 0 },
|
||||
hits: { hits: [], total: { relation: 'eq', value: 100 }, max_score: 0 },
|
||||
},
|
||||
})
|
||||
);
|
||||
|
||||
const debugLogSpy = jest.spyOn(mockLogger, 'debug');
|
||||
const generateCsv = new CsvGenerator(
|
||||
mockJobUsingScrollPaging,
|
||||
getMockConfig({
|
||||
scroll: { size: 500, duration: '30s', strategy: 'scroll' },
|
||||
}),
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
|
||||
await generateCsv.generateData();
|
||||
expect(debugLogSpy).toHaveBeenCalledWith('Received total hits: 100. Accuracy: eq.');
|
||||
});
|
||||
|
||||
it('logs the total hits relation as "unknown" if relation is not provided', async () => {
|
||||
const debugLogSpy = jest.spyOn(mockLogger, 'debug');
|
||||
const generateCsv = new CsvGenerator(
|
||||
mockJobUsingScrollPaging,
|
||||
getMockConfig({
|
||||
scroll: { size: 500, duration: '30s', strategy: 'scroll' },
|
||||
}),
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
|
||||
await generateCsv.generateData();
|
||||
expect(debugLogSpy).toHaveBeenCalledWith('Received total hits: 100. Accuracy: unknown.');
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('fields from job.searchSource.getFields() (7.12 generated)', () => {
|
||||
|
@ -487,7 +838,7 @@ describe('CsvGenerator', () => {
|
|||
createMockJob({
|
||||
searchSource: {
|
||||
query: { query: '', language: 'kuery' },
|
||||
sort: [{ '@date': 'desc' }],
|
||||
sort: [{ '@date': 'desc' as any }],
|
||||
index: '93f4bc50-6662-11eb-98bc-f550e2308366',
|
||||
fields: ['_id', '_index', '@date', 'message'],
|
||||
filter: [],
|
||||
|
@ -543,7 +894,7 @@ describe('CsvGenerator', () => {
|
|||
createMockJob({
|
||||
searchSource: {
|
||||
query: { query: '', language: 'kuery' },
|
||||
sort: [{ '@date': 'desc' }],
|
||||
sort: [{ '@date': 'desc' as any }],
|
||||
index: '93f4bc50-6662-11eb-98bc-f550e2308366',
|
||||
fields: ['*'],
|
||||
filter: [],
|
||||
|
@ -773,15 +1124,11 @@ describe('CsvGenerator', () => {
|
|||
});
|
||||
|
||||
it('can check for formulas, without escaping them', async () => {
|
||||
mockConfig = {
|
||||
mockConfig = getMockConfig({
|
||||
checkForFormulas: true,
|
||||
escapeFormulaValues: false,
|
||||
maxSizeBytes: 180000,
|
||||
useByteOrderMarkEncoding: false,
|
||||
scroll: { size: 500, duration: '30s' },
|
||||
enablePanelActionDownload: true,
|
||||
maxConcurrentShardRequests: 5,
|
||||
};
|
||||
});
|
||||
|
||||
mockDataClient.search = jest.fn().mockImplementation(() =>
|
||||
Rx.of({
|
||||
rawResponse: getMockRawResponse([
|
||||
|
@ -877,129 +1224,6 @@ describe('CsvGenerator', () => {
|
|||
);
|
||||
});
|
||||
|
||||
it('adds a warning if export was unable to close the PIT', async () => {
|
||||
mockEsClient.asCurrentUser.closePointInTime = jest.fn().mockRejectedValueOnce(
|
||||
new esErrors.ResponseError({
|
||||
statusCode: 419,
|
||||
warnings: [],
|
||||
meta: { context: 'test' } as any,
|
||||
})
|
||||
);
|
||||
|
||||
const generateCsv = new CsvGenerator(
|
||||
createMockJob({ columns: ['date', 'ip', 'message'] }),
|
||||
mockConfig,
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
|
||||
await expect(generateCsv.generateData()).resolves.toMatchInlineSnapshot(`
|
||||
Object {
|
||||
"content_type": "text/csv",
|
||||
"csv_contains_formulas": false,
|
||||
"error_code": undefined,
|
||||
"max_size_reached": false,
|
||||
"metrics": Object {
|
||||
"csv": Object {
|
||||
"rows": 0,
|
||||
},
|
||||
},
|
||||
"warnings": Array [
|
||||
"Unable to close the Point-In-Time used for search. Check the Kibana server logs.",
|
||||
],
|
||||
}
|
||||
`);
|
||||
});
|
||||
|
||||
describe('debug logging', () => {
|
||||
it('logs the total hits relation if relation is provided', async () => {
|
||||
mockDataClient.search = jest.fn().mockImplementation(() =>
|
||||
Rx.of({
|
||||
rawResponse: {
|
||||
took: 1,
|
||||
timed_out: false,
|
||||
pit_id: mockPitId,
|
||||
_shards: { total: 1, successful: 1, failed: 0, skipped: 0 },
|
||||
hits: { hits: [], total: { relation: 'eq', value: 12345 }, max_score: 0 },
|
||||
},
|
||||
})
|
||||
);
|
||||
|
||||
const debugLogSpy = jest.spyOn(mockLogger, 'debug');
|
||||
|
||||
const generateCsv = new CsvGenerator(
|
||||
createMockJob({ columns: ['date', 'ip', 'message'] }),
|
||||
mockConfig,
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
|
||||
await generateCsv.generateData();
|
||||
|
||||
expect(debugLogSpy).toHaveBeenCalledWith('Received total hits: 12345. Accuracy: eq.');
|
||||
});
|
||||
|
||||
it('logs the total hits relation as "unknown" if relation is not provided', async () => {
|
||||
mockDataClient.search = jest.fn().mockImplementation(() =>
|
||||
Rx.of({
|
||||
rawResponse: {
|
||||
took: 1,
|
||||
timed_out: false,
|
||||
pit_id: mockPitId,
|
||||
_shards: { total: 1, successful: 1, failed: 0, skipped: 0 },
|
||||
hits: { hits: [], total: 12345, max_score: 0 },
|
||||
},
|
||||
})
|
||||
);
|
||||
|
||||
const debugLogSpy = jest.spyOn(mockLogger, 'debug');
|
||||
|
||||
const generateCsv = new CsvGenerator(
|
||||
createMockJob({ columns: ['date', 'ip', 'message'] }),
|
||||
mockConfig,
|
||||
mockTaskInstanceFields,
|
||||
{
|
||||
es: mockEsClient,
|
||||
data: mockDataClient,
|
||||
uiSettings: uiSettingsClient,
|
||||
},
|
||||
{
|
||||
searchSourceStart: mockSearchSourceService,
|
||||
fieldFormatsRegistry: mockFieldFormatsRegistry,
|
||||
},
|
||||
new CancellationToken(),
|
||||
mockLogger,
|
||||
stream
|
||||
);
|
||||
|
||||
await generateCsv.generateData();
|
||||
|
||||
expect(debugLogSpy).toHaveBeenCalledWith('Received total hits: 12345. Accuracy: unknown.');
|
||||
});
|
||||
});
|
||||
|
||||
it('will return partial data if the scroll or search fails', async () => {
|
||||
mockDataClient.search = jest.fn().mockImplementation(() => {
|
||||
throw new esErrors.ResponseError({
|
||||
|
|
|
@ -7,19 +7,12 @@
|
|||
*/
|
||||
|
||||
import moment from 'moment';
|
||||
import { lastValueFrom } from 'rxjs';
|
||||
import type { Writable } from 'stream';
|
||||
|
||||
import { errors as esErrors, estypes } from '@elastic/elasticsearch';
|
||||
import type { IScopedClusterClient, IUiSettingsClient, Logger } from '@kbn/core/server';
|
||||
import type {
|
||||
IEsSearchRequest,
|
||||
IKibanaSearchResponse,
|
||||
ISearchSource,
|
||||
ISearchStartSearchSource,
|
||||
} from '@kbn/data-plugin/common';
|
||||
import { ES_SEARCH_STRATEGY, cellHasFormulas, tabifyDocs } from '@kbn/data-plugin/common';
|
||||
import type { IScopedSearchClient } from '@kbn/data-plugin/server';
|
||||
import type { ISearchClient, ISearchStartSearchSource } from '@kbn/data-plugin/common';
|
||||
import { cellHasFormulas, tabifyDocs } from '@kbn/data-plugin/common';
|
||||
import type { Datatable } from '@kbn/expressions-plugin/server';
|
||||
import type {
|
||||
FieldFormat,
|
||||
|
@ -35,15 +28,18 @@ import {
|
|||
import type { TaskInstanceFields, TaskRunResult } from '@kbn/reporting-common/types';
|
||||
import type { ReportingConfigType } from '@kbn/reporting-server';
|
||||
|
||||
import { CONTENT_TYPE_CSV } from './constants';
|
||||
import { CsvExportSettings, getExportSettings } from './get_export_settings';
|
||||
import { i18nTexts } from './i18n_texts';
|
||||
import { MaxSizeStringBuilder } from './max_size_string_builder';
|
||||
import { JobParamsCSV } from '../types';
|
||||
import { CONTENT_TYPE_CSV } from '../constants';
|
||||
import type { JobParamsCSV } from '../types';
|
||||
import { getExportSettings, type CsvExportSettings } from './lib/get_export_settings';
|
||||
import { i18nTexts } from './lib/i18n_texts';
|
||||
import { MaxSizeStringBuilder } from './lib/max_size_string_builder';
|
||||
import type { SearchCursor } from './lib/search_cursor';
|
||||
import { SearchCursorPit } from './lib/search_cursor_pit';
|
||||
import { SearchCursorScroll } from './lib/search_cursor_scroll';
|
||||
|
||||
interface Clients {
|
||||
es: IScopedClusterClient;
|
||||
data: IScopedSearchClient;
|
||||
data: ISearchClient;
|
||||
uiSettings: IUiSettingsClient;
|
||||
}
|
||||
|
||||
|
@ -68,127 +64,6 @@ export class CsvGenerator {
|
|||
private stream: Writable
|
||||
) {}
|
||||
|
||||
private async openPointInTime(indexPatternTitle: string, settings: CsvExportSettings) {
|
||||
const {
|
||||
includeFrozen,
|
||||
maxConcurrentShardRequests,
|
||||
scroll: { duration },
|
||||
} = settings;
|
||||
let pitId: string | undefined;
|
||||
this.logger.debug(`Requesting PIT for: [${indexPatternTitle}]...`);
|
||||
try {
|
||||
// NOTE: if ES is overloaded, this request could time out
|
||||
const response = await this.clients.es.asCurrentUser.openPointInTime(
|
||||
{
|
||||
index: indexPatternTitle,
|
||||
keep_alive: duration,
|
||||
ignore_unavailable: true,
|
||||
// @ts-expect-error ignore_throttled is not in the type definition, but it is accepted by es
|
||||
ignore_throttled: includeFrozen ? false : undefined, // "true" will cause deprecation warnings logged in ES
|
||||
},
|
||||
{
|
||||
requestTimeout: duration,
|
||||
maxRetries: 0,
|
||||
maxConcurrentShardRequests,
|
||||
}
|
||||
);
|
||||
pitId = response.id;
|
||||
} catch (err) {
|
||||
this.logger.error(err);
|
||||
}
|
||||
|
||||
if (!pitId) {
|
||||
throw new Error(`Could not receive a PIT ID!`);
|
||||
}
|
||||
|
||||
this.logger.debug(`Opened PIT ID: ${this.formatPit(pitId)}`);
|
||||
|
||||
return pitId;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param clientDetails: Details from the data.search client
|
||||
* @param results: Raw data from ES
|
||||
*/
|
||||
private logResults(
|
||||
clientDetails: Omit<IKibanaSearchResponse<unknown>, 'rawResponse'>,
|
||||
results: estypes.SearchResponse<unknown>
|
||||
) {
|
||||
const { hits: resultsHits, ...headerWithPit } = results;
|
||||
const { hits, ...hitsMeta } = resultsHits;
|
||||
const trackedTotal = resultsHits.total as estypes.SearchTotalHits;
|
||||
const currentTotal = trackedTotal?.value ?? resultsHits.total;
|
||||
|
||||
const totalAccuracy = trackedTotal?.relation ?? 'unknown';
|
||||
this.logger.debug(`Received total hits: ${currentTotal}. Accuracy: ${totalAccuracy}.`);
|
||||
|
||||
// reconstruct the data.search response (w/out the data) for logging
|
||||
const { pit_id: newPitId, ...header } = headerWithPit;
|
||||
const logInfo = {
|
||||
...clientDetails,
|
||||
rawResponse: {
|
||||
...header,
|
||||
hits: hitsMeta,
|
||||
pit_id: `${this.formatPit(newPitId)}`,
|
||||
},
|
||||
};
|
||||
this.logger.debug(`Result details: ${JSON.stringify(logInfo)}`);
|
||||
|
||||
// use the most recently received id for the next search request
|
||||
this.logger.debug(`Received PIT ID: [${this.formatPit(results.pit_id)}]`);
|
||||
}
|
||||
|
||||
private async doSearch(
|
||||
searchSource: ISearchSource,
|
||||
settings: CsvExportSettings,
|
||||
searchAfter?: estypes.SortResults
|
||||
) {
|
||||
const { scroll: scrollSettings, maxConcurrentShardRequests } = settings;
|
||||
searchSource.setField('size', scrollSettings.size);
|
||||
|
||||
if (searchAfter) {
|
||||
searchSource.setField('searchAfter', searchAfter);
|
||||
}
|
||||
|
||||
const pitId = searchSource.getField('pit')?.id;
|
||||
this.logger.debug(
|
||||
`Executing search request with PIT ID: [${this.formatPit(pitId)}]` +
|
||||
(searchAfter ? ` search_after: [${searchAfter}]` : '')
|
||||
);
|
||||
|
||||
const searchBody: estypes.SearchRequest = searchSource.getSearchRequestBody();
|
||||
if (searchBody == null) {
|
||||
throw new Error('Could not retrieve the search body!');
|
||||
}
|
||||
|
||||
const searchParams: IEsSearchRequest = {
|
||||
params: {
|
||||
body: searchBody,
|
||||
max_concurrent_shard_requests: maxConcurrentShardRequests,
|
||||
},
|
||||
};
|
||||
|
||||
let results: estypes.SearchResponse<unknown> | undefined;
|
||||
try {
|
||||
const { rawResponse, ...rawDetails } = await lastValueFrom(
|
||||
this.clients.data.search(searchParams, {
|
||||
strategy: ES_SEARCH_STRATEGY,
|
||||
transport: {
|
||||
maxRetries: 0, // retrying reporting jobs is handled in the task manager scheduling logic
|
||||
requestTimeout: settings.scroll.duration,
|
||||
},
|
||||
})
|
||||
);
|
||||
results = rawResponse;
|
||||
this.logResults(rawDetails, rawResponse);
|
||||
} catch (err) {
|
||||
this.logger.error(`CSV export search error: ${err}`);
|
||||
throw err;
|
||||
}
|
||||
|
||||
return results;
|
||||
}
|
||||
|
||||
/*
|
||||
* Load field formats for each field in the list
|
||||
*/
|
||||
|
@ -306,7 +181,7 @@ export class CsvGenerator {
|
|||
/*
|
||||
* Intrinsically, generating the rows is a synchronous process. Awaiting
|
||||
* on a setImmediate call here partitions what could be a very long and
|
||||
* CPU-intensive synchronous process into an asynchronous process. This
|
||||
* CPU-intensive synchronous process into asynchronous processes. This
|
||||
* gives NodeJS a chance to process other asynchronous events that wait on the Event
|
||||
* Loop.
|
||||
*
|
||||
|
@ -375,10 +250,20 @@ export class CsvGenerator {
|
|||
let first = true;
|
||||
let currentRecord = -1;
|
||||
let totalRecords: number | undefined;
|
||||
let searchAfter: estypes.SortResults | undefined;
|
||||
|
||||
let reportingError: undefined | ReportingError;
|
||||
let pitId = await this.openPointInTime(indexPatternTitle, settings);
|
||||
|
||||
// use a class to internalize the paging strategy
|
||||
let cursor: SearchCursor;
|
||||
if (this.job.pagingStrategy === 'scroll') {
|
||||
// Optional strategy: scan-and-scroll
|
||||
cursor = new SearchCursorScroll(indexPatternTitle, settings, this.clients, this.logger);
|
||||
logger.debug('Using search strategy: scroll');
|
||||
} else {
|
||||
// Default strategy: point-in-time
|
||||
cursor = new SearchCursorPit(indexPatternTitle, settings, this.clients, this.logger);
|
||||
logger.debug('Using search strategy: pit');
|
||||
}
|
||||
await cursor.initialize();
|
||||
|
||||
// apply timezone from the job to all date field formatters
|
||||
try {
|
||||
|
@ -404,17 +289,22 @@ export class CsvGenerator {
|
|||
if (this.cancellationToken.isCancelled()) {
|
||||
break;
|
||||
}
|
||||
// set the latest pit, which could be different from the last request
|
||||
searchSource.setField('pit', { id: pitId, keep_alive: settings.scroll.duration });
|
||||
searchSource.setField('size', settings.scroll.size);
|
||||
|
||||
let results: estypes.SearchResponse<unknown> | undefined;
|
||||
try {
|
||||
results = await cursor.getPage(searchSource);
|
||||
} catch (err) {
|
||||
this.logger.error(`CSV export search error: ${err}`);
|
||||
throw err;
|
||||
}
|
||||
|
||||
const results = await this.doSearch(searchSource, settings, searchAfter);
|
||||
if (!results) {
|
||||
logger.warn(`Search results are undefined!`);
|
||||
break;
|
||||
}
|
||||
|
||||
const { hits: resultsHits } = results;
|
||||
const { hits, total } = resultsHits;
|
||||
const { total } = results.hits;
|
||||
const trackedTotal = total as estypes.SearchTotalHits;
|
||||
const currentTotal = trackedTotal?.value ?? total;
|
||||
|
||||
|
@ -423,13 +313,8 @@ export class CsvGenerator {
|
|||
totalRecords = currentTotal;
|
||||
}
|
||||
|
||||
// use the most recently received id for the next search request
|
||||
pitId = results.pit_id ?? pitId;
|
||||
|
||||
// Update last sort results for next query. PIT is used, so the sort results
|
||||
// automatically include _shard_doc as a tiebreaker
|
||||
searchAfter = hits[hits.length - 1]?.sort as estypes.SortResults | undefined;
|
||||
logger.debug(`Received search_after: [${searchAfter}]`);
|
||||
// use the most recently received cursor id for the next search request
|
||||
cursor.updateIdFromResults(results);
|
||||
|
||||
// check for shard failures, log them and add a warning if found
|
||||
const { _shards: shards } = results;
|
||||
|
@ -489,27 +374,24 @@ export class CsvGenerator {
|
|||
} else {
|
||||
warnings.push(i18nTexts.unknownError(err?.message ?? err));
|
||||
}
|
||||
}
|
||||
|
||||
try {
|
||||
if (pitId) {
|
||||
logger.debug(`Closing PIT ${this.formatPit(pitId)}`);
|
||||
await this.clients.es.asCurrentUser.closePointInTime({ body: { id: pitId } });
|
||||
} else {
|
||||
logger.warn(`No PIT ID to clear!`);
|
||||
} finally {
|
||||
try {
|
||||
await cursor.closeCursor();
|
||||
} catch (err) {
|
||||
logger.error(err);
|
||||
warnings.push(cursor.getUnableToCloseCursorMessage());
|
||||
}
|
||||
} catch (err) {
|
||||
logger.error(err);
|
||||
warnings.push(i18nTexts.csvUnableToClosePit());
|
||||
}
|
||||
|
||||
logger.info(`Finished generating. Row count: ${this.csvRowCount}.`);
|
||||
|
||||
if (!this.maxSizeReached && this.csvRowCount !== totalRecords) {
|
||||
logger.warn(
|
||||
`ES scroll returned fewer total hits than expected! ` +
|
||||
`Search result total hits: ${totalRecords}. Row count: ${this.csvRowCount}`
|
||||
`ES scroll returned ` +
|
||||
`${this.csvRowCount > (totalRecords ?? 0) ? 'more' : 'fewer'} total hits than expected!`
|
||||
);
|
||||
logger.warn(`Search result total hits: ${totalRecords}. Row count: ${this.csvRowCount}`);
|
||||
|
||||
if (totalRecords || totalRecords === 0) {
|
||||
warnings.push(
|
||||
i18nTexts.csvRowCountError({ expected: totalRecords, received: this.csvRowCount })
|
||||
|
@ -530,12 +412,4 @@ export class CsvGenerator {
|
|||
error_code: reportingError?.code,
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Method to avoid logging the entire PIT: it could be megabytes long
|
||||
*/
|
||||
private formatPit(pitId: string | undefined) {
|
||||
const byteSize = pitId ? Buffer.byteLength(pitId, 'utf-8') : 0;
|
||||
return pitId?.substring(0, 12) + `[${byteSize} bytes]`;
|
||||
}
|
||||
}
|
||||
|
|
|
@ -20,15 +20,15 @@ import {
|
|||
import { IKibanaSearchResponse } from '@kbn/data-plugin/common';
|
||||
import { IScopedSearchClient } from '@kbn/data-plugin/server';
|
||||
import { dataPluginMock } from '@kbn/data-plugin/server/mocks';
|
||||
import type { ESQLSearchReponse } from '@kbn/es-types';
|
||||
import { CancellationToken } from '@kbn/reporting-common';
|
||||
import type { ReportingConfigType } from '@kbn/reporting-server';
|
||||
import {
|
||||
UI_SETTINGS_CSV_QUOTE_VALUES,
|
||||
UI_SETTINGS_CSV_SEPARATOR,
|
||||
UI_SETTINGS_DATEFORMAT_TZ,
|
||||
} from './constants';
|
||||
} from '../constants';
|
||||
import { CsvESQLGenerator, JobParamsCsvESQL } from './generate_csv_esql';
|
||||
import type { ESQLSearchReponse } from '@kbn/es-types';
|
||||
|
||||
const createMockJob = (
|
||||
params: Partial<JobParamsCsvESQL> = { query: { esql: '' } }
|
||||
|
@ -93,7 +93,7 @@ describe('CsvESQLGenerator', () => {
|
|||
escapeFormulaValues: true,
|
||||
maxSizeBytes: 180000,
|
||||
useByteOrderMarkEncoding: false,
|
||||
scroll: { size: 500, duration: '30s' },
|
||||
scroll: { size: 500, duration: '30s', strategy: 'pit' },
|
||||
enablePanelActionDownload: true,
|
||||
maxConcurrentShardRequests: 5,
|
||||
};
|
||||
|
@ -373,7 +373,7 @@ describe('CsvESQLGenerator', () => {
|
|||
escapeFormulaValues: false,
|
||||
maxSizeBytes: 180000,
|
||||
useByteOrderMarkEncoding: false,
|
||||
scroll: { size: 500, duration: '30s' },
|
||||
scroll: { size: 500, duration: '30s', strategy: 'pit' },
|
||||
enablePanelActionDownload: true,
|
||||
maxConcurrentShardRequests: 5,
|
||||
};
|
||||
|
|
|
@ -11,31 +11,31 @@ import type { Writable } from 'stream';
|
|||
|
||||
import { errors as esErrors } from '@elastic/elasticsearch';
|
||||
import type { IScopedClusterClient, IUiSettingsClient, Logger } from '@kbn/core/server';
|
||||
import type { ESQLSearchParams, ESQLSearchReponse } from '@kbn/es-types';
|
||||
import {
|
||||
cellHasFormulas,
|
||||
ESQL_SEARCH_STRATEGY,
|
||||
type IKibanaSearchRequest,
|
||||
type IKibanaSearchResponse,
|
||||
cellHasFormulas,
|
||||
getEsQueryConfig,
|
||||
IKibanaSearchRequest,
|
||||
IKibanaSearchResponse,
|
||||
} from '@kbn/data-plugin/common';
|
||||
import type { IScopedSearchClient } from '@kbn/data-plugin/server';
|
||||
import { type Filter, buildEsQuery } from '@kbn/es-query';
|
||||
import type { ESQLSearchParams, ESQLSearchReponse } from '@kbn/es-types';
|
||||
import { i18n } from '@kbn/i18n';
|
||||
import {
|
||||
AuthenticationExpiredError,
|
||||
byteSizeValueToNumber,
|
||||
CancellationToken,
|
||||
ReportingError,
|
||||
byteSizeValueToNumber,
|
||||
} from '@kbn/reporting-common';
|
||||
import type { TaskRunResult } from '@kbn/reporting-common/types';
|
||||
import type { ReportingConfigType } from '@kbn/reporting-server';
|
||||
import { buildEsQuery, Filter } from '@kbn/es-query';
|
||||
import { zipObject } from 'lodash';
|
||||
import { i18n } from '@kbn/i18n';
|
||||
|
||||
import { CONTENT_TYPE_CSV } from './constants';
|
||||
import { CsvExportSettings, getExportSettings } from './get_export_settings';
|
||||
import { i18nTexts } from './i18n_texts';
|
||||
import { MaxSizeStringBuilder } from './max_size_string_builder';
|
||||
import { CONTENT_TYPE_CSV } from '../constants';
|
||||
import { type CsvExportSettings, getExportSettings } from './lib/get_export_settings';
|
||||
import { i18nTexts } from './lib/i18n_texts';
|
||||
import { MaxSizeStringBuilder } from './lib/max_size_string_builder';
|
||||
|
||||
export interface JobParamsCsvESQL {
|
||||
query: { esql: string };
|
||||
|
|
|
@ -19,23 +19,25 @@ import {
|
|||
UI_SETTINGS_CSV_SEPARATOR,
|
||||
UI_SETTINGS_DATEFORMAT_TZ,
|
||||
UI_SETTINGS_SEARCH_INCLUDE_FROZEN,
|
||||
} from './constants';
|
||||
} from '../../constants';
|
||||
import { getExportSettings } from './get_export_settings';
|
||||
|
||||
describe('getExportSettings', () => {
|
||||
let uiSettingsClient: IUiSettingsClient;
|
||||
const config: ReportingConfigType['csv'] = {
|
||||
checkForFormulas: true,
|
||||
escapeFormulaValues: false,
|
||||
maxSizeBytes: 180000,
|
||||
scroll: { size: 500, duration: '30s' },
|
||||
useByteOrderMarkEncoding: false,
|
||||
maxConcurrentShardRequests: 5,
|
||||
enablePanelActionDownload: true,
|
||||
};
|
||||
let config: ReportingConfigType['csv'];
|
||||
const logger = loggingSystemMock.createLogger();
|
||||
|
||||
beforeEach(() => {
|
||||
config = {
|
||||
checkForFormulas: true,
|
||||
escapeFormulaValues: false,
|
||||
maxSizeBytes: 180000,
|
||||
scroll: { size: 500, duration: '30s', strategy: 'pit' },
|
||||
useByteOrderMarkEncoding: false,
|
||||
maxConcurrentShardRequests: 5,
|
||||
enablePanelActionDownload: true,
|
||||
};
|
||||
|
||||
uiSettingsClient = uiSettingsServiceMock
|
||||
.createStartContract()
|
||||
.asScopedToClient(savedObjectsClientMock.create());
|
||||
|
@ -56,23 +58,48 @@ describe('getExportSettings', () => {
|
|||
});
|
||||
|
||||
test('getExportSettings: returns the expected result', async () => {
|
||||
expect(await getExportSettings(uiSettingsClient, config, '', logger)).toMatchInlineSnapshot(`
|
||||
Object {
|
||||
"bom": "",
|
||||
"checkForFormulas": true,
|
||||
"escapeFormulaValues": false,
|
||||
"escapeValue": [Function],
|
||||
"includeFrozen": false,
|
||||
"maxConcurrentShardRequests": 5,
|
||||
"maxSizeBytes": 180000,
|
||||
"scroll": Object {
|
||||
"duration": "30s",
|
||||
"size": 500,
|
||||
},
|
||||
"separator": ",",
|
||||
"timezone": "UTC",
|
||||
}
|
||||
`);
|
||||
expect(await getExportSettings(uiSettingsClient, config, '', logger)).toMatchObject({
|
||||
bom: '',
|
||||
checkForFormulas: true,
|
||||
escapeFormulaValues: false,
|
||||
includeFrozen: false,
|
||||
maxConcurrentShardRequests: 5,
|
||||
maxSizeBytes: 180000,
|
||||
scroll: {
|
||||
duration: '30s',
|
||||
size: 500,
|
||||
},
|
||||
separator: ',',
|
||||
timezone: 'UTC',
|
||||
});
|
||||
});
|
||||
|
||||
test('does not add a default scroll strategy', async () => {
|
||||
// @ts-expect-error undefined isn't allowed
|
||||
config = { ...config, scroll: { strategy: undefined } };
|
||||
expect(await getExportSettings(uiSettingsClient, config, '', logger)).toMatchObject(
|
||||
expect.objectContaining({ scroll: expect.objectContaining({ strategy: undefined }) })
|
||||
);
|
||||
});
|
||||
|
||||
test('passes the scroll=pit strategy through', async () => {
|
||||
config = { ...config, scroll: { ...config.scroll, strategy: 'pit' } };
|
||||
|
||||
expect(await getExportSettings(uiSettingsClient, config, '', logger)).toMatchObject(
|
||||
expect.objectContaining({ scroll: expect.objectContaining({ strategy: 'pit' }) })
|
||||
);
|
||||
});
|
||||
|
||||
test('passes the scroll=scroll strategy through', async () => {
|
||||
config = { ...config, scroll: { ...config.scroll, strategy: 'scroll' } };
|
||||
|
||||
expect(await getExportSettings(uiSettingsClient, config, '', logger)).toMatchObject(
|
||||
expect.objectContaining({
|
||||
scroll: expect.objectContaining({
|
||||
strategy: 'scroll',
|
||||
}),
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
test('escapeValue function', async () => {
|
|
@ -10,17 +10,20 @@ import type { ByteSizeValue } from '@kbn/config-schema';
|
|||
import type { IUiSettingsClient, Logger } from '@kbn/core/server';
|
||||
import { createEscapeValue } from '@kbn/data-plugin/common';
|
||||
import type { ReportingConfigType } from '@kbn/reporting-server';
|
||||
|
||||
import {
|
||||
CSV_BOM_CHARS,
|
||||
UI_SETTINGS_CSV_QUOTE_VALUES,
|
||||
UI_SETTINGS_CSV_SEPARATOR,
|
||||
UI_SETTINGS_DATEFORMAT_TZ,
|
||||
UI_SETTINGS_SEARCH_INCLUDE_FROZEN,
|
||||
} from './constants';
|
||||
} from '../../constants';
|
||||
import { CsvPagingStrategy } from '../../types';
|
||||
|
||||
export interface CsvExportSettings {
|
||||
timezone: string;
|
||||
scroll: {
|
||||
strategy?: CsvPagingStrategy;
|
||||
size: number;
|
||||
duration: string;
|
||||
};
|
||||
|
@ -73,6 +76,7 @@ export const getExportSettings = async (
|
|||
return {
|
||||
timezone: setTimezone,
|
||||
scroll: {
|
||||
strategy: config.scroll.strategy as CsvPagingStrategy,
|
||||
size: config.scroll.size,
|
||||
duration: config.scroll.duration,
|
||||
},
|
|
@ -48,4 +48,9 @@ export const i18nTexts = {
|
|||
defaultMessage:
|
||||
'Unable to close the Point-In-Time used for search. Check the Kibana server logs.',
|
||||
}),
|
||||
csvUnableToCloseScroll: () =>
|
||||
i18n.translate('generateCsv.csvUnableToCloseScroll', {
|
||||
defaultMessage:
|
||||
'Unable to close the scroll context used for search. Check the Kibana server logs.',
|
||||
}),
|
||||
};
|
packages/kbn-generate-csv/src/lib/search_cursor.ts (new file, 91 lines)
@ -0,0 +1,91 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License
 * 2.0 and the Server Side Public License, v 1; you may not use this file except
 * in compliance with, at your election, the Elastic License 2.0 or the Server
 * Side Public License, v 1.
 */

import type { estypes } from '@elastic/elasticsearch';
import type { IScopedClusterClient, Logger } from '@kbn/core/server';
import type {
  IEsSearchResponse,
  IKibanaSearchResponse,
  ISearchClient,
  ISearchSource,
} from '@kbn/data-plugin/common';
import type { CsvExportSettings } from './get_export_settings';

export interface SearchCursorClients {
  data: ISearchClient;
  es: IScopedClusterClient;
}

export type SearchCursorSettings = Pick<
  CsvExportSettings,
  'scroll' | 'includeFrozen' | 'maxConcurrentShardRequests'
>;

export abstract class SearchCursor {
  protected cursorId: string | undefined;

  constructor(
    protected indexPatternTitle: string,
    protected settings: SearchCursorSettings,
    protected clients: SearchCursorClients,
    protected logger: Logger
  ) {}

  public abstract initialize(): Promise<void>;

  public abstract getPage(
    searchSource: ISearchSource
  ): Promise<IEsSearchResponse['rawResponse'] | undefined>;

  public abstract updateIdFromResults(
    results: Pick<estypes.SearchResponse<unknown>, '_scroll_id' | 'pit_id' | 'hits'>
  ): void;

  public abstract closeCursor(): Promise<void>;

  public abstract getUnableToCloseCursorMessage(): string;

  /**
   * Safely logs debugging meta info from search results
   * @param clientDetails: Details from the data.search client
   * @param results: Raw data from ES
   */
  protected logSearchResults(
    clientDetails: Omit<IKibanaSearchResponse<unknown>, 'rawResponse'>,
    results: estypes.SearchResponse<unknown>
  ) {
    const { hits: resultsHits, ...headerWithCursor } = results;
    const { hits, ...hitsMeta } = resultsHits;
    const trackedTotal = resultsHits.total as estypes.SearchTotalHits;
    const currentTotal = trackedTotal?.value ?? resultsHits.total;

    const totalAccuracy = trackedTotal?.relation ?? 'unknown';
    this.logger.debug(`Received total hits: ${currentTotal}. Accuracy: ${totalAccuracy}.`);

    // reconstruct the data.search response (w/out the data) for logging
    const { pit_id: newPitId, _scroll_id: newScrollId, ...header } = headerWithCursor;
    const logInfo = {
      ...clientDetails,
      rawResponse: {
        ...header,
        hits: hitsMeta,
        pit_id: newPitId ? `${this.formatCursorId(newPitId)}` : undefined,
        _scroll_id: newScrollId ? `${this.formatCursorId(newScrollId)}` : undefined,
      },
    };
    this.logger.debug(`Result details: ${JSON.stringify(logInfo)}`);
  }

  /**
   * Method to avoid logging the entire PIT: it could be megabytes long
   */
  protected formatCursorId(cursorId: string | undefined) {
    const byteSize = cursorId ? Buffer.byteLength(cursorId, 'utf-8') : 0;
    return cursorId?.substring(0, 12) + `[${byteSize} bytes]`;
  }
}
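The abstract `SearchCursor` above defines the paging lifecycle that both strategies share. As a rough, hypothetical sketch (not part of this diff), a consumer could drive either concrete cursor as below; `consumeAllPages` and `onPage` are invented names, and the loop mirrors the initialize, getPage, updateIdFromResults, closeCursor flow that the CSV generator follows:

```ts
import type { estypes } from '@elastic/elasticsearch';
import type { ISearchSource } from '@kbn/data-plugin/common';
import type { SearchCursor } from './search_cursor';

// Hypothetical driver: page through all results with any SearchCursor implementation.
async function consumeAllPages(
  cursor: SearchCursor,
  searchSource: ISearchSource,
  onPage: (hits: Array<estypes.SearchHit<unknown>>) => void
): Promise<string[]> {
  await cursor.initialize(); // PIT: opens the point-in-time; scroll: no-op until the first search
  const warnings: string[] = [];
  try {
    while (true) {
      const results = await cursor.getPage(searchSource);
      if (!results || results.hits.hits.length === 0) {
        break; // no more pages
      }
      onPage(results.hits.hits);
      // carry the newest pit_id / _scroll_id (and search_after, for PIT) into the next request
      cursor.updateIdFromResults(results);
    }
  } finally {
    try {
      await cursor.closeCursor();
    } catch (err) {
      warnings.push(cursor.getUnableToCloseCursorMessage());
    }
  }
  return warnings;
}
```

Because the generator only depends on this abstract interface, either `SearchCursorPit` or `SearchCursorScroll` can be passed in without the paging loop changing.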
|
packages/kbn-generate-csv/src/lib/search_cursor_pit.test.ts (new file, 82 lines)
|
@ -0,0 +1,82 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License
|
||||
* 2.0 and the Server Side Public License, v 1; you may not use this file except
|
||||
* in compliance with, at your election, the Elastic License 2.0 or the Server
|
||||
* Side Public License, v 1.
|
||||
*/
|
||||
|
||||
import type { IScopedClusterClient, Logger } from '@kbn/core/server';
|
||||
import { elasticsearchServiceMock, loggingSystemMock } from '@kbn/core/server/mocks';
|
||||
import type { ISearchClient } from '@kbn/data-plugin/common';
|
||||
import { createSearchSourceMock } from '@kbn/data-plugin/common/search/search_source/mocks';
|
||||
import { createSearchRequestHandlerContext } from '@kbn/data-plugin/server/search/mocks';
|
||||
import type { SearchCursor, SearchCursorSettings } from './search_cursor';
|
||||
import { SearchCursorPit } from './search_cursor_pit';
|
||||
|
||||
describe('CSV Export Search Cursor', () => {
|
||||
let settings: SearchCursorSettings;
|
||||
let es: IScopedClusterClient;
|
||||
let data: ISearchClient;
|
||||
let logger: Logger;
|
||||
let cursor: SearchCursor;
|
||||
|
||||
beforeEach(async () => {
|
||||
settings = {
|
||||
scroll: {
|
||||
duration: '10m',
|
||||
size: 500,
|
||||
},
|
||||
includeFrozen: false,
|
||||
maxConcurrentShardRequests: 5,
|
||||
};
|
||||
|
||||
es = elasticsearchServiceMock.createScopedClusterClient();
|
||||
data = createSearchRequestHandlerContext();
|
||||
jest.spyOn(es.asCurrentUser, 'openPointInTime').mockResolvedValue({ id: 'somewhat-pit-id' });
|
||||
|
||||
logger = loggingSystemMock.createLogger();
|
||||
|
||||
cursor = new SearchCursorPit('test-index-pattern-string', settings, { data, es }, logger);
|
||||
|
||||
const openPointInTimeSpy = jest
|
||||
// @ts-expect-error create spy on private method
|
||||
.spyOn(cursor, 'openPointInTime');
|
||||
|
||||
await cursor.initialize();
|
||||
|
||||
expect(openPointInTimeSpy).toBeCalledTimes(1);
|
||||
});
|
||||
|
||||
it('supports point-in-time', async () => {
|
||||
const searchWithPitSpy = jest
|
||||
// @ts-expect-error create spy on private method
|
||||
.spyOn(cursor, 'searchWithPit')
|
||||
// @ts-expect-error mock resolved value for spy on private method
|
||||
.mockResolvedValueOnce({ rawResponse: { hits: [] } });
|
||||
|
||||
const searchSource = createSearchSourceMock();
|
||||
await cursor.getPage(searchSource);
|
||||
expect(searchWithPitSpy).toBeCalledTimes(1);
|
||||
});
|
||||
|
||||
it('can update internal cursor ID', () => {
|
||||
cursor.updateIdFromResults({ pit_id: 'very-typical-pit-id', hits: { hits: [] } });
|
||||
// @ts-expect-error private field
|
||||
expect(cursor.cursorId).toBe('very-typical-pit-id');
|
||||
});
|
||||
|
||||
it('manages search_after', () => {
|
||||
// @ts-expect-error access private method
|
||||
cursor.setSearchAfter([
|
||||
{
|
||||
_index: 'test-index',
|
||||
_id: 'test-doc-id',
|
||||
sort: ['Wed Jan 17 15:35:47 MST 2024', 42],
|
||||
},
|
||||
]);
|
||||
|
||||
// @ts-expect-error access private method
|
||||
expect(cursor.getSearchAfter()).toEqual(['Wed Jan 17 15:35:47 MST 2024', 42]);
|
||||
});
|
||||
});
|
packages/kbn-generate-csv/src/lib/search_cursor_pit.ts (new file, 170 lines)
|
@ -0,0 +1,170 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License
|
||||
* 2.0 and the Server Side Public License, v 1; you may not use this file except
|
||||
* in compliance with, at your election, the Elastic License 2.0 or the Server
|
||||
* Side Public License, v 1.
|
||||
*/
|
||||
|
||||
import type { estypes } from '@elastic/elasticsearch';
|
||||
import type { Logger } from '@kbn/core/server';
|
||||
import { lastValueFrom } from 'rxjs';
|
||||
import {
|
||||
ES_SEARCH_STRATEGY,
|
||||
type ISearchSource,
|
||||
type SearchRequest,
|
||||
} from '@kbn/data-plugin/common';
|
||||
import { SearchCursor, type SearchCursorClients, type SearchCursorSettings } from './search_cursor';
|
||||
import { i18nTexts } from './i18n_texts';
|
||||
|
||||
export class SearchCursorPit extends SearchCursor {
|
||||
private searchAfter: estypes.SortResults | undefined;
|
||||
|
||||
constructor(
|
||||
indexPatternTitle: string,
|
||||
settings: SearchCursorSettings,
|
||||
clients: SearchCursorClients,
|
||||
logger: Logger
|
||||
) {
|
||||
super(indexPatternTitle, settings, clients, logger);
|
||||
}
|
||||
|
||||
/**
|
||||
* When point-in-time strategy is used, the first step is to open a PIT ID for search context.
|
||||
*/
|
||||
public async initialize() {
|
||||
this.cursorId = await this.openPointInTime();
|
||||
}
|
||||
|
||||
private async openPointInTime() {
|
||||
const { includeFrozen, maxConcurrentShardRequests, scroll } = this.settings;
|
||||
|
||||
let pitId: string | undefined;
|
||||
|
||||
this.logger.debug(`Requesting PIT for: [${this.indexPatternTitle}]...`);
|
||||
try {
|
||||
// NOTE: if ES is overloaded, this request could time out
|
||||
const response = await this.clients.es.asCurrentUser.openPointInTime(
|
||||
{
|
||||
index: this.indexPatternTitle,
|
||||
keep_alive: scroll.duration,
|
||||
ignore_unavailable: true,
|
||||
// @ts-expect-error ignore_throttled is not in the type definition, but it is accepted by es
|
||||
ignore_throttled: includeFrozen ? false : undefined, // "true" will cause deprecation warnings logged in ES
|
||||
},
|
||||
{
|
||||
requestTimeout: scroll.duration,
|
||||
maxRetries: 0,
|
||||
maxConcurrentShardRequests,
|
||||
}
|
||||
);
|
||||
pitId = response.id;
|
||||
} catch (err) {
|
||||
this.logger.error(err);
|
||||
}
|
||||
|
||||
if (!pitId) {
|
||||
throw new Error(`Could not receive a PIT ID!`);
|
||||
}
|
||||
|
||||
this.logger.debug(`Opened PIT ID: ${this.formatCursorId(pitId)}`);
|
||||
|
||||
return pitId;
|
||||
}
|
||||
|
||||
private async searchWithPit(searchBody: SearchRequest) {
|
||||
const { maxConcurrentShardRequests, scroll } = this.settings;
|
||||
|
||||
const searchParamsPit = {
|
||||
params: {
|
||||
body: searchBody,
|
||||
max_concurrent_shard_requests: maxConcurrentShardRequests,
|
||||
},
|
||||
};
|
||||
|
||||
return await lastValueFrom(
|
||||
this.clients.data.search(searchParamsPit, {
|
||||
strategy: ES_SEARCH_STRATEGY,
|
||||
transport: {
|
||||
maxRetries: 0, // retrying reporting jobs is handled in the task manager scheduling logic
|
||||
requestTimeout: scroll.duration,
|
||||
},
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
public async getPage(searchSource: ISearchSource) {
|
||||
if (!this.cursorId) {
|
||||
throw new Error(`No access to valid PIT ID!`);
|
||||
}
|
||||
|
||||
searchSource.setField('pit', {
|
||||
id: this.cursorId,
|
||||
keep_alive: this.settings.scroll.duration,
|
||||
});
|
||||
|
||||
const searchAfter = this.getSearchAfter();
|
||||
if (searchAfter) {
|
||||
searchSource.setField('searchAfter', searchAfter);
|
||||
}
|
||||
|
||||
this.logger.debug(
|
||||
`Executing search request with PIT ID: [${this.formatCursorId(this.cursorId)}]` +
|
||||
(searchAfter ? ` search_after: [${searchAfter}]` : '')
|
||||
);
|
||||
|
||||
const searchBody: estypes.SearchRequest = searchSource.getSearchRequestBody();
|
||||
if (searchBody == null) {
|
||||
throw new Error('Could not retrieve the search body!');
|
||||
}
|
||||
|
||||
const response = await this.searchWithPit(searchBody);
|
||||
|
||||
if (!response) {
|
||||
throw new Error(`Response could not be retrieved!`);
|
||||
}
|
||||
|
||||
const { rawResponse, ...rawDetails } = response;
|
||||
|
||||
this.logSearchResults(rawDetails, rawResponse);
|
||||
this.logger.debug(`Received PIT ID: [${this.formatCursorId(rawResponse.pit_id)}]`);
|
||||
|
||||
return rawResponse;
|
||||
}
|
||||
|
||||
public updateIdFromResults(results: Pick<estypes.SearchResponse<unknown>, 'pit_id' | 'hits'>) {
|
||||
const cursorId = results.pit_id;
|
||||
this.cursorId = cursorId ?? this.cursorId;
|
||||
|
||||
// track the beginning of the next page of search results
|
||||
const { hits } = results.hits;
|
||||
this.setSearchAfter(hits); // for pit only
|
||||
}
|
||||
|
||||
private getSearchAfter() {
|
||||
return this.searchAfter;
|
||||
}
|
||||
|
||||
/**
|
||||
* For managing the search_after parameter, needed for paging using point-in-time
|
||||
*/
|
||||
private setSearchAfter(hits: Array<estypes.SearchHit<unknown>>) {
|
||||
// Update last sort results for next query. PIT is used, so the sort results
|
||||
// automatically include _shard_doc as a tiebreaker
|
||||
this.searchAfter = hits[hits.length - 1]?.sort as estypes.SortResults | undefined;
|
||||
this.logger.debug(`Received search_after: [${this.searchAfter}]`);
|
||||
}
|
||||
|
||||
public async closeCursor() {
|
||||
if (this.cursorId) {
|
||||
this.logger.debug(`Executing close PIT on ${this.formatCursorId(this.cursorId)}`);
|
||||
await this.clients.es.asCurrentUser.closePointInTime({ body: { id: this.cursorId } });
|
||||
} else {
|
||||
this.logger.warn(`No PIT Id to clear!`);
|
||||
}
|
||||
}
|
||||
|
||||
public getUnableToCloseCursorMessage() {
|
||||
return i18nTexts.csvUnableToClosePit();
|
||||
}
|
||||
}
|
|
@ -0,0 +1,61 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License
|
||||
* 2.0 and the Server Side Public License, v 1; you may not use this file except
|
||||
* in compliance with, at your election, the Elastic License 2.0 or the Server
|
||||
* Side Public License, v 1.
|
||||
*/
|
||||
|
||||
import type { IScopedClusterClient, Logger } from '@kbn/core/server';
|
||||
import { elasticsearchServiceMock, loggingSystemMock } from '@kbn/core/server/mocks';
|
||||
import type { ISearchClient } from '@kbn/data-plugin/common';
|
||||
import { createSearchSourceMock } from '@kbn/data-plugin/common/search/search_source/mocks';
|
||||
import { createSearchRequestHandlerContext } from '@kbn/data-plugin/server/search/mocks';
|
||||
import type { SearchCursor, SearchCursorSettings } from './search_cursor';
|
||||
import { SearchCursorScroll } from './search_cursor_scroll';
|
||||
|
||||
describe('CSV Export Search Cursor', () => {
|
||||
let settings: SearchCursorSettings;
|
||||
let es: IScopedClusterClient;
|
||||
let data: ISearchClient;
|
||||
let logger: Logger;
|
||||
let cursor: SearchCursor;
|
||||
|
||||
beforeEach(async () => {
|
||||
settings = {
|
||||
scroll: {
|
||||
duration: '10m',
|
||||
size: 500,
|
||||
},
|
||||
includeFrozen: false,
|
||||
maxConcurrentShardRequests: 5,
|
||||
};
|
||||
|
||||
es = elasticsearchServiceMock.createScopedClusterClient();
|
||||
data = createSearchRequestHandlerContext();
|
||||
jest.spyOn(es.asCurrentUser, 'openPointInTime').mockResolvedValue({ id: 'simply-scroll-id' });
|
||||
|
||||
logger = loggingSystemMock.createLogger();
|
||||
|
||||
cursor = new SearchCursorScroll('test-index-pattern-string', settings, { data, es }, logger);
|
||||
await cursor.initialize();
|
||||
});
|
||||
|
||||
it('supports scan/scroll', async () => {
|
||||
const scanSpy = jest
|
||||
// @ts-expect-error create spy on private method
|
||||
.spyOn(cursor, 'scan')
|
||||
// @ts-expect-error mock resolved value for spy on private method
|
||||
.mockResolvedValueOnce({ rawResponse: { hits: [] } });
|
||||
|
||||
const searchSource = createSearchSourceMock();
|
||||
await cursor.getPage(searchSource);
|
||||
expect(scanSpy).toBeCalledTimes(1);
|
||||
});
|
||||
|
||||
it('can update internal cursor ID', () => {
|
||||
cursor.updateIdFromResults({ _scroll_id: 'not-unusual-scroll-id', hits: { hits: [] } });
|
||||
// @ts-expect-error private field
|
||||
expect(cursor.cursorId).toBe('not-unusual-scroll-id');
|
||||
});
|
||||
});
|
packages/kbn-generate-csv/src/lib/search_cursor_scroll.ts (new file, 120 lines)
|
@ -0,0 +1,120 @@
|
|||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License
|
||||
* 2.0 and the Server Side Public License, v 1; you may not use this file except
|
||||
* in compliance with, at your election, the Elastic License 2.0 or the Server
|
||||
* Side Public License, v 1.
|
||||
*/
|
||||
|
||||
import type { estypes } from '@elastic/elasticsearch';
|
||||
import { lastValueFrom } from 'rxjs';
|
||||
import type { Logger } from '@kbn/core/server';
|
||||
import {
|
||||
ES_SEARCH_STRATEGY,
|
||||
type IEsSearchResponse,
|
||||
type ISearchSource,
|
||||
type SearchRequest,
|
||||
} from '@kbn/data-plugin/common';
|
||||
import { SearchCursor, type SearchCursorClients, type SearchCursorSettings } from './search_cursor';
|
||||
import { i18nTexts } from './i18n_texts';
|
||||
|
||||
export class SearchCursorScroll extends SearchCursor {
|
||||
constructor(
|
||||
indexPatternTitle: string,
|
||||
settings: SearchCursorSettings,
|
||||
clients: SearchCursorClients,
|
||||
logger: Logger
|
||||
) {
|
||||
super(indexPatternTitle, settings, clients, logger);
|
||||
}
|
||||
|
||||
// The first search query begins the scroll context in ES
|
||||
public async initialize() {}
|
||||
|
||||
private async scan(searchBody: SearchRequest) {
|
||||
const { includeFrozen, maxConcurrentShardRequests, scroll } = this.settings;
|
||||
|
||||
const searchParamsScan = {
|
||||
params: {
|
||||
body: searchBody,
|
||||
index: this.indexPatternTitle,
|
||||
scroll: scroll.duration,
|
||||
size: scroll.size,
|
||||
ignore_throttled: includeFrozen ? false : undefined, // "true" will cause deprecation warnings logged in ES
|
||||
max_concurrent_shard_requests: maxConcurrentShardRequests,
|
||||
},
|
||||
};
|
||||
|
||||
return await lastValueFrom(
|
||||
this.clients.data.search(searchParamsScan, {
|
||||
strategy: ES_SEARCH_STRATEGY,
|
||||
transport: {
|
||||
maxRetries: 0, // retrying reporting jobs is handled in the task manager scheduling logic
|
||||
requestTimeout: scroll.duration,
|
||||
},
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
private async scroll() {
|
||||
const { duration } = this.settings.scroll;
|
||||
return await this.clients.es.asCurrentUser.scroll(
|
||||
{ scroll: duration, scroll_id: this.cursorId },
|
||||
{
|
||||
maxRetries: 0, // retrying reporting jobs is handled in the task manager scheduling logic
|
||||
requestTimeout: duration,
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
public async getPage(searchSource: ISearchSource) {
|
||||
if (this.cursorId) {
|
||||
this.logger.debug(
|
||||
`Executing search request with scroll ID [${this.formatCursorId(this.cursorId)}]`
|
||||
);
|
||||
} else {
|
||||
this.logger.debug(`Executing search for initial scan request.`);
|
||||
}
|
||||
|
||||
let response: IEsSearchResponse | undefined;
|
||||
|
||||
const searchBody: estypes.SearchRequest = searchSource.getSearchRequestBody();
|
||||
if (searchBody == null) {
|
||||
throw new Error('Could not retrieve the search body!');
|
||||
}
|
||||
|
||||
if (this.cursorId == null) {
|
||||
response = await this.scan(searchBody);
|
||||
} else {
|
||||
response = { rawResponse: await this.scroll() };
|
||||
}
|
||||
|
||||
if (!response) {
|
||||
throw new Error(`Response could not be retrieved!`);
|
||||
}
|
||||
|
||||
const { rawResponse, ...rawDetails } = response;
|
||||
|
||||
this.logSearchResults(rawDetails, rawResponse);
|
||||
this.logger.debug(`Received Scroll ID: [${this.formatCursorId(rawResponse._scroll_id)}]`);
|
||||
|
||||
return rawResponse;
|
||||
}
|
||||
|
||||
public updateIdFromResults(results: Pick<estypes.SearchResponse<unknown>, '_scroll_id'>) {
|
||||
this.cursorId = results._scroll_id ?? this.cursorId;
|
||||
}
|
||||
|
||||
public async closeCursor() {
|
||||
if (this.cursorId) {
|
||||
this.logger.debug(`Executing clearScroll on ${this.formatCursorId(this.cursorId)}`);
|
||||
await this.clients.es.asCurrentUser.clearScroll({ scroll_id: [this.cursorId] });
|
||||
} else {
|
||||
this.logger.warn(`No Scroll Id to clear!`);
|
||||
}
|
||||
}
|
||||
|
||||
public getUnableToCloseCursorMessage() {
|
||||
return i18nTexts.csvUnableToCloseScroll();
|
||||
}
|
||||
}
|
|
@ -11,8 +11,10 @@ import type { SerializedSearchSourceFields } from '@kbn/data-plugin/public';
|
|||
/**
|
||||
* Duplicated from @kbn/reporting-export-types-csv-common to reduce dependencies
|
||||
*/
|
||||
export type CsvPagingStrategy = 'pit' | 'scroll';
|
||||
export interface JobParamsCSV {
|
||||
browserTimezone?: string;
|
||||
searchSource: SerializedSearchSourceFields;
|
||||
columns?: string[];
|
||||
pagingStrategy?: CsvPagingStrategy;
|
||||
}
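For illustration only, a serialized CSV job that carries the optional paging strategy might look roughly like the sketch below; the values are hypothetical, and in practice `createJob` fills `pagingStrategy` in from the `xpack.reporting.csv.scroll.strategy` setting rather than from the request:

```ts
import type { JobParamsCSV } from '../types'; // path assumed for this sketch

// Hypothetical job payload requesting the scroll strategy.
const exampleJobParams: JobParamsCSV = {
  browserTimezone: 'UTC',
  searchSource: {}, // a SerializedSearchSourceFields object built on the client
  columns: ['date', 'ip', 'message'],
  pagingStrategy: 'scroll',
};
```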
|
||||
|
|
|
@ -6,11 +6,11 @@
|
|||
* Side Public License, v 1.
|
||||
*/
|
||||
|
||||
import type { ConcreteTaskInstance } from '@kbn/task-manager-plugin/server';
|
||||
import type {
|
||||
LayoutParams,
|
||||
PerformanceMetrics as ScreenshotMetrics,
|
||||
} from '@kbn/screenshotting-plugin/common';
|
||||
import type { ConcreteTaskInstance } from '@kbn/task-manager-plugin/server';
|
||||
import { JOB_STATUS } from './constants';
|
||||
import type { LocatorParams } from './url';
|
||||
|
||||
|
@ -59,11 +59,12 @@ export interface ReportOutput extends TaskRunResult {
|
|||
* @deprecated
|
||||
*/
|
||||
export interface BaseParams {
|
||||
layout?: LayoutParams;
|
||||
browserTimezone: string; // to format dates in the user's time zone
|
||||
objectType: string;
|
||||
title: string;
|
||||
browserTimezone: string; // to format dates in the user's time zone
|
||||
version: string; // to handle any state migrations
|
||||
layout?: LayoutParams; // png & pdf only
|
||||
pagingStrategy?: 'pit' | 'scroll'; // csv only
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
|
@ -10,7 +10,7 @@ import { Writable } from 'stream';
|
|||
|
||||
import type { DataPluginStart } from '@kbn/data-plugin/server/plugin';
|
||||
import type { DiscoverServerPluginStart } from '@kbn/discover-plugin/server';
|
||||
import { CsvGenerator } from '@kbn/generate-csv';
|
||||
import { CsvGenerator, type CsvPagingStrategy } from '@kbn/generate-csv';
|
||||
import {
|
||||
CancellationToken,
|
||||
LICENSE_TYPE_BASIC,
|
||||
|
@ -67,7 +67,8 @@ export class CsvSearchSourceExportType extends ExportType<
|
|||
}
|
||||
|
||||
public createJob = async (jobParams: JobParamsCSV) => {
|
||||
return { ...jobParams };
|
||||
const pagingStrategy = this.config.csv.scroll.strategy as CsvPagingStrategy;
|
||||
return { pagingStrategy, ...jobParams };
|
||||
};
|
||||
|
||||
public runTask = async (
|
||||
|
|
|
@ -16,6 +16,7 @@ Object {
|
|||
"scroll": Object {
|
||||
"duration": "30s",
|
||||
"size": 500,
|
||||
"strategy": "pit",
|
||||
},
|
||||
"useByteOrderMarkEncoding": false,
|
||||
},
|
||||
|
@ -78,6 +79,7 @@ Object {
|
|||
"scroll": Object {
|
||||
"duration": "30s",
|
||||
"size": 500,
|
||||
"strategy": "pit",
|
||||
},
|
||||
"useByteOrderMarkEncoding": false,
|
||||
},
|
||||
|
|
|
@ -65,6 +65,14 @@ const CsvSchema = schema.object({
|
|||
}),
|
||||
useByteOrderMarkEncoding: schema.boolean({ defaultValue: false }),
|
||||
scroll: schema.object({
|
||||
strategy: schema.oneOf(
|
||||
[
|
||||
// point-in-time API or scroll API is supported
|
||||
schema.literal('pit'),
|
||||
schema.literal('scroll'),
|
||||
],
|
||||
{ defaultValue: 'pit' }
|
||||
),
|
||||
duration: schema.string({
|
||||
defaultValue: '30s', // this value is passed directly to ES, so string only format is preferred
|
||||
validate(value) {
|
||||
|
|
|
@ -337,6 +337,7 @@ kibana_vars=(
|
|||
xpack.reporting.csv.maxSizeBytes
|
||||
xpack.reporting.csv.scroll.duration
|
||||
xpack.reporting.csv.scroll.size
|
||||
xpack.reporting.csv.scroll.strategy
|
||||
xpack.reporting.csv.useByteOrderMarkEncoding
|
||||
xpack.reporting.enabled
|
||||
xpack.reporting.encryptionKey
|
||||
|
|
|
@@ -42,7 +42,8 @@ export class Job {
  public readonly isDeprecated: ReportPayload['isDeprecated'];
  public readonly spaceId: ReportPayload['spaceId'];
  public readonly browserTimezone?: ReportPayload['browserTimezone'];
  public readonly layout: ReportPayload['layout'];
  public readonly layout: ReportPayload['layout']; // png & pdf only
  public readonly pagingStrategy: ReportPayload['pagingStrategy']; // csv only
  public readonly version: ReportPayload['version'];

  public readonly jobtype: ReportSource['jobtype'];

@@ -81,6 +82,7 @@ export class Job {
    this.objectType = report.payload.objectType;
    this.title = report.payload.title;
    this.layout = report.payload.layout;
    this.pagingStrategy = report.payload.pagingStrategy;
    this.version = report.payload.version;
    this.created_by = report.created_by;
    this.created_at = report.created_at;
@@ -1,203 +0,0 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License
 * 2.0; you may not use this file except in compliance with the Elastic License
 * 2.0.
 */

import { compressToEncodedURIComponent } from 'lz-string';
import React, { useCallback } from 'react';

import { EuiContextMenuItem } from '@elastic/eui';
import type { ISearchStartSearchSource } from '@kbn/data-plugin/common';
import { i18n } from '@kbn/i18n';
import { useKibana } from '@kbn/kibana-react-plugin/public';
import {
  getQueryFromCsvJob,
  QueryInspection,
  TaskPayloadCSV,
} from '@kbn/reporting-export-types-csv-common';
import type { ClientConfigType } from '@kbn/reporting-public';
import type { LocatorClient } from '@kbn/share-plugin/common/url_service';

import type { Job } from '../../lib/job';
import type { KibanaContext } from '../../types';

interface PropsUI {
  job: Job;
  csvConfig: ClientConfigType['csv'];
  searchSourceStart: ISearchStartSearchSource;
  locators: LocatorClient;
}

const InspectInConsoleButtonUi: React.FC<PropsUI> = (props) => {
  const { csvConfig, job, searchSourceStart, locators } = props;

  const { title: jobTitle } = job;
  const serializedSearchSource = (job.payload as TaskPayloadCSV).searchSource;

  const handleDevToolsLinkClick = useCallback(async () => {
    const searchSource = await searchSourceStart.create(serializedSearchSource);
    const index = searchSource.getField('index');
    if (!index) {
      throw new Error(`The search must have a reference to an index pattern!`);
    }
    const indexPatternTitle = index.getIndexPattern();
    const examplePitId = i18n.translate(
      'xpack.reporting.reportInfoFlyout.devToolsContent.examplePitId',
      {
        defaultMessage: `[ID returned from first request]`,
        description: `This gets used in place of an ID string that is sent in a request body.`,
      }
    );
    const queryInfo = getQueryFromCsvJob(searchSource, csvConfig, examplePitId);
    const queryUri = compressToEncodedURIComponent(
      getTextForConsole(jobTitle, indexPatternTitle, queryInfo, csvConfig)
    );
    const consoleLocator = locators.get('CONSOLE_APP_LOCATOR');
    consoleLocator?.navigate({
      loadFrom: `data:text/plain,${queryUri}`,
    });
  }, [searchSourceStart, serializedSearchSource, jobTitle, csvConfig, locators]);

  return (
    <EuiContextMenuItem
      data-test-subj="reportInfoFlyoutOpenInConsoleButton"
      key="download"
      icon="wrench"
      onClick={handleDevToolsLinkClick}
    >
      {i18n.translate('xpack.reporting.reportInfoFlyout.openInConsole', {
        defaultMessage: 'Inspect query in Console',
        description: 'An option in a menu of actions.',
      })}
    </EuiContextMenuItem>
  );
};

interface Props {
  job: Job;
  config: ClientConfigType;
}

export const InspectInConsoleButton: React.FC<Props> = (props) => {
  const { config, job } = props;
  const { services } = useKibana<KibanaContext>();
  const { application, data, share } = services;
  const { capabilities } = application;
  const { locators } = share.url;

  // To show the Console button,
  // check if job object type is search,
  // and if both the Dev Tools UI and the Console UI are enabled.
  const canShowDevTools = job.objectType === 'search' && capabilities.dev_tools.show;
  if (!canShowDevTools) {
    return null;
  }

  return (
    <InspectInConsoleButtonUi
      searchSourceStart={data.search.searchSource}
      locators={locators}
      job={job}
      csvConfig={config.csv}
    />
  );
};

const getTextForConsole = (
  jobTitle: string,
  indexPattern: string,
  queryInfo: QueryInspection,
  csvConfig: ClientConfigType['csv']
) => {
  const { requestBody } = queryInfo;
  const queryAsString = JSON.stringify(requestBody, null, ' ');

  const pitRequest =
    `POST /${indexPattern}/_pit?keep_alive=${csvConfig.scroll.duration}` +
    `&ignore_unavailable=true`;
  const queryRequest = `POST /_search`;
  const closePitRequest = `DELETE /_pit\n${JSON.stringify({
    id: `[ID returned from latest request]`,
  })}`;

  const introText = i18n.translate(
    // intro to the content
    'xpack.reporting.reportInfoFlyout.devToolsContent.introText',
    {
      description: `Script used in the Console app`,
      defaultMessage: `
# Report title: {jobTitle}
# These are the queries used when exporting data for
# the CSV report.
#
# For reference about the Elasticsearch Point-In-Time
# API, see
# https://www.elastic.co/guide/en/elasticsearch/reference/current/point-in-time-api.html

# The first query opens a Point-In-Time (PIT) context
# and receive back the ID reference. The "keep_alive"
# value is taken from the
# "xpack.reporting.csv.scroll.duration" setting.
#
# The response will include an "id" value, which is
# needed for the second query.
{pitRequest}`,
      values: {
        jobTitle,
        pitRequest,
      },
    }
  );

  const queryText = i18n.translate(
    // query with the request path and body
    'xpack.reporting.reportInfoFlyout.devToolsContent.queryText',
    {
      description: `Script used in the Console app`,
      defaultMessage: `
# The second query executes a search using the PIT ID.
# The "keep_alive" and "size" values come from the
# "xpack.reporting.csv.scroll.duration" and
# "xpack.reporting.csv.scroll.size" settings in
# kibana.yml.
#
# The reponse will include new a PIT ID, which might
# not be the same as the ID returned from the first
# query.
{queryRequest}
{queryAsString}`,
      values: { queryRequest, queryAsString },
    }
  );

  const pagingText = i18n.translate(
    // info about querying further pages, and link to documentation
    'xpack.reporting.reportInfoFlyout.devToolsContent.pagingText',
    {
      description: `Script used in the Console app`,
      defaultMessage: `# The first request retrieves the first page of search
# results. If you want to retrieve more hits, use PIT
# with search_after. Always use the PIT ID from the
# latest search response. See
# https://www.elastic.co/guide/en/elasticsearch/reference/current/paginate-search-results.html`,
    }
  );

  const closingText = i18n.translate(
    // reminder to close the point-in-time context
    'xpack.reporting.reportInfoFlyout.devToolsContent.closingText',
    {
      description: `Script used in the Console app`,
      defaultMessage: `
# Finally, release the resources held in Elasticsearch
# memory by clearing the PIT.
{closePitRequest}
`,
      values: { closePitRequest },
    }
  );

  return (introText + queryText + pagingText + closingText).trim();
};
@@ -0,0 +1,122 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License
 * 2.0; you may not use this file except in compliance with the Elastic License
 * 2.0.
 */

import { ISearchSource } from '@kbn/data-plugin/common';
import { i18n } from '@kbn/i18n';
import { getQueryFromCsvJob } from '@kbn/reporting-export-types-csv-common';
import type { ClientConfigType } from '@kbn/reporting-public';

export const getPitApiTextForConsole = (
  jobTitle: string,
  indexPattern: string,
  searchSource: ISearchSource,
  csvConfig: ClientConfigType['csv']
) => {
  const examplePitId = i18n.translate(
    'xpack.reporting.reportInfoFlyout.devToolsContent.examplePitId',
    {
      defaultMessage: `[ID returned from first request]`,
      description: `This gets used in place of an ID string that is sent in a request body.`,
    }
  );
  const queryInfo = getQueryFromCsvJob(searchSource, csvConfig, examplePitId);

  // Part 1
  const pitRequest =
    `POST /${indexPattern}/_pit?keep_alive=${csvConfig.scroll.duration}` +
    `&ignore_unavailable=true`;
  const queryRequest = `POST /_search`;
  const closePitRequest = `DELETE /_pit\n${JSON.stringify(
    { id: `[ID returned from latest request]` },
    null,
    ' '
  )}`;
  const introText = i18n.translate(
    // intro to the content
    'xpack.reporting.reportInfoFlyout.devToolsContent.introText.pit',
    {
      description: `Script used in the Console app`,
      defaultMessage: `
# Report title: {jobTitle}
# These are the queries used when exporting data for
# the CSV report.
#
# For reference about the Elasticsearch Point-In-Time
# API, see
# https://www.elastic.co/guide/en/elasticsearch/reference/current/point-in-time-api.html

# The first query opens a Point-In-Time (PIT) context
# and receives back the ID reference. The "keep_alive"
# value is taken from the
# "xpack.reporting.csv.scroll.duration" setting.
#
# The response will include an "id" value, which is
# needed for the second query.
{pitRequest}`,
      values: {
        jobTitle,
        pitRequest,
      },
    }
  );

  // Part 2
  const { requestBody } = queryInfo;
  const queryAsString = JSON.stringify(requestBody, null, ' ');
  const queryText = i18n.translate(
    // query with the request path and body
    'xpack.reporting.reportInfoFlyout.devToolsContent.queryText.pit',
    {
      description: `Script used in the Console app`,
      defaultMessage: `
# The second query executes a search using the PIT ID.
# The "keep_alive" and "size" values come from the
# "xpack.reporting.csv.scroll.duration" and
# "xpack.reporting.csv.scroll.size" settings in
# kibana.yml.
#
# The response will include a new PIT ID, which might not
# be the same as the ID returned from the first query.
# When paging through the data, always use the PIT ID from
# the latest search response. See
# https://www.elastic.co/guide/en/elasticsearch/reference/current/paginate-search-results.html
{queryRequest}
{queryAsString}`,
      values: { queryRequest, queryAsString },
    }
  );

  // Part 3
  const pagingText = i18n.translate(
    // info about querying further pages, and link to documentation
    'xpack.reporting.reportInfoFlyout.devToolsContent.pagingText.pit',
    {
      description: `Script used in the Console app`,
      defaultMessage: `
# The first request retrieved the first page of search
# results. If you want to retrieve more hits, use the
# latest PIT ID with search_after.`,
    }
  );

  // Part 4
  const closingText = i18n.translate(
    // reminder to close the point-in-time context
    'xpack.reporting.reportInfoFlyout.devToolsContent.closingText.pit',
    {
      description: `Script used in the Console app`,
      defaultMessage: `
# Finally, release the resources held in Elasticsearch
# memory by closing the PIT.
{closePitRequest}`,
      values: { closePitRequest },
    }
  );

  // End
  return `${introText}\n${queryText}\n${pagingText}\n${closingText}`.trim();
};
@@ -0,0 +1,113 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License
 * 2.0; you may not use this file except in compliance with the Elastic License
 * 2.0.
 */

import { ISearchSource } from '@kbn/data-plugin/common';
import { i18n } from '@kbn/i18n';
import { getQueryFromCsvJob } from '@kbn/reporting-export-types-csv-common';
import type { ClientConfigType } from '@kbn/reporting-public';

export const getScrollApiTextForConsole = (
  jobTitle: string,
  indexPattern: string,
  searchSource: ISearchSource,
  csvConfig: ClientConfigType['csv']
) => {
  const queryInfo = getQueryFromCsvJob(searchSource, csvConfig);

  // Part 1
  const scanRequest =
    `POST /${indexPattern}/_search?scroll=${csvConfig.scroll.duration}` +
    `&ignore_unavailable=true`;
  const scanBody = JSON.stringify(queryInfo.requestBody, null, ' ');
  const introText = i18n.translate(
    // intro to the content
    'xpack.reporting.reportInfoFlyout.devToolsContent.introText.scroll',
    {
      description: `Script used in the Console app`,
      defaultMessage: `
# Report title: {jobTitle}
# These are the queries used when exporting data for
# the CSV report.
#
# For reference about the Elasticsearch Scroll API, see
# https://www.elastic.co/guide/en/elasticsearch/reference/current/paginate-search-results.html#scroll-search-results

# The first query opens a scroll context and receives back
# the ID reference. The "scroll" value is taken from the
# "xpack.reporting.csv.scroll.duration" setting.
#
# The response will include an "_scroll_id" value, which is
# needed for the second query.
{scanRequest}
{scanBody}`,
      values: { jobTitle, scanRequest, scanBody },
    }
  );

  // Part 2
  const pagingRequest = `POST /_search/scroll`;
  const pagingBody = JSON.stringify(
    { scroll: csvConfig.scroll.duration, scroll_id: `[ID returned from latest request]` },
    null,
    ' '
  );
  const queryText = i18n.translate(
    // query with the request path and body
    'xpack.reporting.reportInfoFlyout.devToolsContent.queryText.scroll',
    {
      description: `Script used in the Console app`,
      defaultMessage: `
# The second query executes a search using the scroll ID.
# The "scroll" value comes from the
# "xpack.reporting.csv.scroll.duration" setting in
# kibana.yml.
#
# The response will include a new scroll ID, which might
# not be the same as the ID returned from the first query.
# When paging through the data, always use the scroll ID
# from the latest search response.
{pagingRequest}
{pagingBody}`,
      values: { pagingRequest, pagingBody },
    }
  );

  // Part 3
  const pagingText = i18n.translate(
    // info about querying further pages, and link to documentation
    'xpack.reporting.reportInfoFlyout.devToolsContent.pagingText.scroll',
    {
      description: `Script used in the Console app`,
      defaultMessage: `
# The first request retrieved the first page of search
# results. If you want to retrieve more hits, keep calling
# the search API with the scroll ID.`,
    }
  );

  // Part 4
  const clearScrollRequest = `DELETE /_search/scroll\n${JSON.stringify(
    { scroll_id: `[ID returned from latest request]` },
    null,
    ' '
  )}`;
  const closingText = i18n.translate(
    // reminder to clear the scroll context
    'xpack.reporting.reportInfoFlyout.devToolsContent.closingText.scroll',
    {
      description: `Script used in the Console app`,
      defaultMessage: `
# Finally, release the resources held in Elasticsearch
# memory by clearing the scroll context.
{clearScrollRequest}`,
      values: { clearScrollRequest },
    }
  );

  // End
  return `${introText}\n${queryText}\n${pagingText}\n${closingText}`.trim();
};
@@ -0,0 +1,98 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License
 * 2.0; you may not use this file except in compliance with the Elastic License
 * 2.0.
 */

import { compressToEncodedURIComponent } from 'lz-string';
import React, { useCallback } from 'react';

import { EuiContextMenuItem } from '@elastic/eui';
import type { ISearchStartSearchSource } from '@kbn/data-plugin/common';
import { i18n } from '@kbn/i18n';
import { useKibana } from '@kbn/kibana-react-plugin/public';
import { type TaskPayloadCSV } from '@kbn/reporting-export-types-csv-common';
import type { ClientConfigType } from '@kbn/reporting-public';
import type { LocatorClient } from '@kbn/share-plugin/common/url_service';

import type { Job } from '../../../lib/job';
import type { KibanaContext } from '../../../types';
import { getPitApiTextForConsole } from './get_console_text_pit';
import { getScrollApiTextForConsole } from './get_console_text_scroll';

interface PropsUI {
  job: Job;
  csvConfig: ClientConfigType['csv'];
  searchSourceStart: ISearchStartSearchSource;
  locators: LocatorClient;
}

const InspectInConsoleButtonUi: React.FC<PropsUI> = (props) => {
  const { csvConfig, job, searchSourceStart, locators } = props;

  const { title: jobTitle, pagingStrategy } = job;
  const serializedSearchSource = (job.payload as TaskPayloadCSV).searchSource;

  const handleDevToolsLinkClick = useCallback(async () => {
    const searchSource = await searchSourceStart.create(serializedSearchSource);
    const index = searchSource.getField('index');
    if (!index) {
      throw new Error(`The search must have a reference to an index pattern!`);
    }
    const indexPatternTitle = index.getIndexPattern();
    const queryUri = compressToEncodedURIComponent(
      pagingStrategy === 'scroll'
        ? getScrollApiTextForConsole(jobTitle, indexPatternTitle, searchSource, csvConfig)
        : getPitApiTextForConsole(jobTitle, indexPatternTitle, searchSource, csvConfig)
    );
    const consoleLocator = locators.get('CONSOLE_APP_LOCATOR');
    consoleLocator?.navigate({
      loadFrom: `data:text/plain,${queryUri}`,
    });
  }, [searchSourceStart, serializedSearchSource, jobTitle, pagingStrategy, csvConfig, locators]);

  return (
    <EuiContextMenuItem
      data-test-subj="reportInfoFlyoutOpenInConsoleButton"
      key="download"
      icon="wrench"
      onClick={handleDevToolsLinkClick}
    >
      {i18n.translate('xpack.reporting.reportInfoFlyout.openInConsole', {
        defaultMessage: 'Inspect query in Console',
        description: 'An option in a menu of actions.',
      })}
    </EuiContextMenuItem>
  );
};

interface Props {
  job: Job;
  config: ClientConfigType;
}

export const InspectInConsoleButton: React.FC<Props> = (props) => {
  const { config, job } = props;
  const { services } = useKibana<KibanaContext>();
  const { application, data, share } = services;
  const { capabilities } = application;
  const { locators } = share.url;

  // To show the Console button,
  // check if job object type is search,
  // and if both the Dev Tools UI and the Console UI are enabled.
  const canShowDevTools = job.objectType === 'search' && capabilities.dev_tools.show;
  if (!canShowDevTools) {
    return null;
  }

  return (
    <InspectInConsoleButtonUi
      searchSourceStart={data.search.searchSource}
      locators={locators}
      job={job}
      csvConfig={config.csv}
    />
  );
};
@@ -29,7 +29,7 @@ import type { ClientConfigType } from '@kbn/reporting-public';

import type { Job } from '../../lib/job';
import { useInternalApiClient } from '../../lib/reporting_api_client';
import { InspectInConsoleButton } from './inspect_in_console_button';
import { InspectInConsoleButton } from './inspect_in_console_button/inspect_in_console_button';
import { ReportInfoFlyoutContent } from './report_info_flyout_content';

interface Props {
@@ -64,6 +64,7 @@ export const ReportInfoFlyoutContent: FunctionComponent<Props> = ({ info }) => {
  const memoryInMegabytes =
    info.metrics?.pdf?.memoryInMegabytes ?? info.metrics?.png?.memoryInMegabytes;
  const hasCsvRows = info.metrics?.csv?.rows != null;
  const hasPagingStrategy = info.pagingStrategy != null;
  const hasScreenshot = USES_HEADLESS_JOB_TYPES.includes(info.jobtype);
  const hasPdfPagesMetric = info.metrics?.pdf?.pages != null;

@@ -115,6 +116,12 @@ export const ReportInfoFlyoutContent: FunctionComponent<Props> = ({ info }) => {
      }),
      description: info.metrics?.csv?.rows?.toString() || NA,
    },
    hasPagingStrategy && {
      title: i18n.translate('xpack.reporting.listing.infoPanel.csvSearchStrategy', {
        defaultMessage: 'Search strategy',
      }),
      description: info.pagingStrategy || NA,
    },

    hasScreenshot && {
      title: i18n.translate('xpack.reporting.listing.infoPanel.dimensionsInfoHeight', {