mirror of
https://github.com/elastic/kibana.git
synced 2025-04-23 17:28:26 -04:00
[Serverless/Reporting] use 3m reporting poll interval for report job cleanup (#170787)
Monitoring of task consumption has shown that the `reports:monitor` task takes an aggressively high share of cycles from the Kibana Task Manager, which impacts the throughput of alerts. This first step is a serverless-only config change that gives an immediate increase in overall task throughput. It is safe to lower this polling frequency: **the impact is limited to the responsiveness of retries** when a report job is found to have timed out. Long-term, the plan is to tune other parts of the code: https://github.com/elastic/kibana/issues/170462

## Testing

1. Adjust the Dev mode settings to match a value set in production. Add this to `config/kibana.dev.yml`:
   ```
   xpack.reporting.capture.maxAttempts: 3 # usually in Dev mode, this is set to 1
   ```
2. Start the scripts in separate terminal windows to run the Elasticsearch and Kibana dev servers:
   ```
   yarn es serverless
   ```
   ```
   yarn serverless
   ```
3. Open a search in Discover that covers about 4,000 hits and request a CSV export using the Share menu.
4. Monitor the Kibana server logs and wait until the background job begins. Restart the server while the job is executing (saving a file under `packages/` or `server/` will trigger a restart).
5. Around 3 minutes after the restart, the report job gets a re-attempt.

Compare this behavior with non-serverless, where a report job gets a re-attempt around 3 seconds after the restart.
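The trade-off described above can be sketched in a few lines: with a fixed poll interval, the worst-case delay before a timed-out report job is picked up again is roughly one full interval. This is a minimal illustrative sketch, not Kibana's actual implementation; `parseInterval` is a hypothetical helper for the `3s`/`3m` duration shorthand seen in the config.

```typescript
// Hypothetical helper: parse the "Ns"/"Nm" duration shorthand used in the
// config above into milliseconds.
function parseInterval(s: string): number {
  const match = /^(\d+)([sm])$/.exec(s);
  if (!match) throw new Error(`unsupported interval: ${s}`);
  const n = Number(match[1]);
  return match[2] === "s" ? n * 1000 : n * 60_000;
}

// Worst case: a job times out just after a poll completes, so the retry
// only happens on the next poll, one full interval later.
const serverlessRetryDelayMs = parseInterval("3m"); // 180000 ms
const statefulRetryDelayMs = parseInterval("3s"); // 3000 ms
```

This is why the change only affects retry responsiveness: the report job still runs and completes as before; only the sweep that notices a timed-out job happens less often.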
This commit is contained in:
parent d95426af35
commit 2f038e1ba7

1 changed file with 1 addition and 0 deletions
```diff
@@ -146,6 +146,7 @@ xpack.task_manager.requeue_invalid_tasks.enabled: true

 # Reporting feature
 xpack.screenshotting.enabled: false
+xpack.reporting.queue.pollInterval: 3m
 xpack.reporting.roles.enabled: false
 xpack.reporting.statefulSettings.enabled: false
```