# Backport
This will backport the following commits from `main` to `8.16`:
- [Set refresh according to stateful vs stateless when indexing alert
documents (#201209)](https://github.com/elastic/kibana/pull/201209)
### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sqren/backport)
# Backport
This will backport the following commits from `main` to `8.x`:
- [Execution type field
(#195884)](https://github.com/elastic/kibana/pull/195884)
### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sqren/backport)
Co-authored-by: Khristinin Nikita <nikita.khristinin@elastic.co>
# Backport
This will backport the following commits from `main` to `8.x`:
- [[Response Ops][Alerting] Only load maintenance windows when there are
alerts during rule execution and caching loaded maintenance windows
(#192573)](https://github.com/elastic/kibana/pull/192573)
### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sqren/backport)
# Backport
This will backport the following commits from `main` to `8.x`:
- [Use the same date for set alert timestamp
(#192668)](https://github.com/elastic/kibana/pull/192668)
### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sorenlouv/backport)
# Backport
This will backport the following commits from `main` to `8.x`:
- [[ResponseOps][Alerting] Explicitly set access to all API routes of
actions, connectors, rules, alerts, and cases plugins
(#193520)](https://github.com/elastic/kibana/pull/193520)
### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sqren/backport)
Co-authored-by: Janki Salvi <117571355+js-jankisalvi@users.noreply.github.com>
## Add new field to alert
Adds the optional `kibana.alert.intended_timestamp` field. For scheduled
rules it has the same value as ALERT_RULE_EXECUTION_TIMESTAMP
(`kibana.alert.rule.execution.timestamp`); for manual rule runs
(backfill) it gets the overridden start time (`startedAtOverridden`).
For example, if there is an event at 14:30 and we run a manual rule run
over 14:00-15:00, the alert will have `kibana.alert.intended_timestamp`
set to 15:00.
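A minimal sketch of that selection logic, with assumed names (not the actual Kibana implementation):
```ts
// Hedged sketch; names are illustrative, not the actual Kibana implementation.
function getIntendedTimestamp(
  ruleExecutionTimestamp: string,
  startedAtOverridden?: Date // set only for manual (backfill) runs
): string {
  return startedAtOverridden != null
    ? startedAtOverridden.toISOString() // date from the manual run's range
    : ruleExecutionTimestamp; // same as kibana.alert.rule.execution.timestamp
}
```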
---------
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
We saw a stateful project with a 46-minute pause between when Kibana
started and when startup completed. It appears ES was inaccessible
during that time and the rule registry was not able to install the
alerting resources. This PR uses an observable from Core to wait until
ES is ready before installing the alerting resources.
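A rough sketch of the waiting pattern, assuming a simplified boolean "ES ready" observable (Core's real status observable has a richer shape):
```ts
import { filter, firstValueFrom, Observable } from 'rxjs';

// Defer resource installation until the first "ready" signal is emitted.
async function installWhenEsReady(
  esReady$: Observable<boolean>,
  installResources: () => Promise<void>
): Promise<void> {
  await firstValueFrom(esReady$.pipe(filter((ready) => ready)));
  await installResources();
}
```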
### Checklist
- [ ] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
### To verify
- Start Kibana without starting ES
```
yarn start --logging.root.level=debug
```
- Verify that the alerting resources are not installed; you should not
see the debug log `Initializing resources for AlertsService`
- Start ES and verify that the resources are installed and that you see
the log above
Closes #190995
## Summary
This PR adds grouping functionality to the alerts page alert table based
on @umbopepato's implementation in this [draft
PR](https://github.com/elastic/kibana/pull/183114) (basically, he
implemented the feature and I adjusted a bit for our use case :D).
For now, we only added **rule** and **source** as default groupings, and
I will create a ticket to add tags as well. The challenge with tags is
that, since it is an array, the alert's value is joined by commas to
form the group key, which does not match what we want for tags.

Here is how we show the rules that don't have a group by field selected
for them: (We used "ungrouped" similar to what we have in SLOs)

---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: DeDe Morton <dede.morton@elastic.co>
Co-authored-by: Shahzad <shahzad31comp@gmail.com>
## Summary
Forwards `featureIds` from `AlertsClient.find()` to
`AlertsClient.searchAlerts()`
### References
Fixes #190424
### Checklist
Delete any items that are not applicable to this PR.
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
Resolves https://github.com/elastic/kibana/issues/184322
## Summary
This PR updates `getExecutorServices` to allow alerting rules to load
the `dataViews` and `wrappedSearchSourceClient` services only when
needed. I updated the rule types dependent on `dataViews` and/or
`wrappedSearchSourceClient`.
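A minimal sketch of the lazy-loading idea, with assumed names (the PR's actual API may differ):
```ts
// Memoize a factory so the service is created at most once per execution.
function lazy<T>(create: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= create());
}

// Factories are assumed to be injected by the framework; they only run if a
// rule executor actually asks for the corresponding service.
function getExecutorServices(deps: {
  createDataViews: () => Promise<unknown>;
  createWrappedSearchSourceClient: () => Promise<unknown>;
}) {
  return {
    getDataViews: lazy(deps.createDataViews),
    getWrappedSearchSourceClient: lazy(deps.createWrappedSearchSourceClient),
  };
}
```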
### Checklist
- [ ] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
### To verify
- Verify that the data views and search source services are only loaded
when needed. I think the best way to verify this is to create an Index
threshold rule and make sure that the `dataViews` and
`searchSourceClient` services are not created.
- Verify that the updated rules work correctly. I updated the following
rule types:
  - Custom threshold
  - SLO burn rate
  - ES query
  - Indicator match
## Summary
- Adds null-value bucket detection to server-side alerts aggregations
and marks those groups with a `--` key and `isNullGroup = true` (see the
sketch below).
- Improves alerts grouping types with default aggregations.
- Improves documentation.
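A conceptual sketch of the null-group marking (the `--` key and `isNullGroup` flag come from the PR; the surrounding shape is assumed):
```ts
interface AlertsGroup {
  key: string;
  isNullGroup?: boolean;
}

// Buckets with no value for the grouping field become the `--` null group.
function toAlertsGroup(bucketKey: string | null | undefined): AlertsGroup {
  return bucketKey == null ? { key: '--', isNullGroup: true } : { key: bucketKey };
}
```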
## To verify
1. Temporarily merge
[#189958](https://github.com/elastic/kibana/pull/189958) into this
branch
2. Create a rule that fires alerts in Observability > Alerts (e.g.
Custom Threshold, ES Query, ...)
3. Once you start to see some alerts in the Alerts page, toggle the
grouped alerts view using the dropdown at the top-right of the table
(`Group alerts by: ...`), selecting a custom field that doesn't have a
value in alert documents (to find one, open the alert flyout and look at
the fields table)
4. Check that the group based on the empty field shows `--` as a title
5. Check that the alerts table in the expanded group panel is filtered
correctly
### References
Refs [#189958](https://github.com/elastic/kibana/pull/189958)
### Checklist
Delete any items that are not applicable to this PR.
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
## Revert override alert timestamp
Previously we added an override of the alert timestamp for manual rule
runs. It was later decided that the timestamp for manual rule runs
should behave the same as for regular alerts and represent the time when
the alert was generated.
---------
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
Migrates the `useFetchBrowserFieldCapabilities` hook (renamed to
`useFetchAlertsFieldsQuery`) to TanStack Query, following [this
organizational
logic](https://github.com/elastic/kibana/issues/186448#issuecomment-2228853337).
This PR focuses mainly on the fetching logic itself, leaving the
surrounding API surface mostly unchanged since it will likely be
addressed in subsequent PRs.
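A hedged sketch of the TanStack Query shape described above; the fetch function is injected and the query key is illustrative, not the actual Kibana implementation:
```ts
import { useQuery } from '@tanstack/react-query';

export function useFetchAlertsFieldsQuery(
  fetchAlertsFields: (featureIds: string[]) => Promise<unknown>,
  featureIds: string[]
) {
  return useQuery({
    queryKey: ['alertsFields', featureIds],
    queryFn: () => fetchAlertsFields(featureIds),
    enabled: featureIds.length > 0, // skip the request until feature ids are known
  });
}
```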
## To verify
1. Create rules that fire alerts in different solutions
2. Check that the alerts table usages work correctly ({O11y, Security,
Stack} alerts and rule details pages, ...)
1. Check that the alerts displayed in the table are coherent with the
solution, KQL query, time filter, pagination
2. Check that pagination changes are reflected in the table
3. Check that changing the query when in pages > 0 resets the pagination
to the first page
4. Check that the fields browser shows and works correctly (`Fields`
button in the alerts table header)
Closes point 2 of https://github.com/elastic/kibana/issues/186448
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Implements a new `useSearchAlertsQuery` hook based on TanStack Query to
replace the `useFetchAlerts` hook, following [this organizational
logic](https://github.com/elastic/kibana/issues/186448#issuecomment-2228853337).
This PR focuses mainly on the fetching logic itself, leaving the
surrounding API surface mostly unchanged since it will likely be
addressed in subsequent PRs.
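A tiny sketch of the pagination-reset behavior mentioned in the verification steps below (assumed state shape, illustrative only):
```ts
import { useEffect, useState } from 'react';

// Illustrative only: jump back to the first page whenever the query changes.
function usePaginationReset(query: unknown) {
  const [pageIndex, setPageIndex] = useState(0);
  useEffect(() => {
    setPageIndex(0);
  }, [query]);
  return { pageIndex, setPageIndex };
}
```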
## To verify
1. Create rules that fire alerts in different solutions
2. Check that the alerts table usages work correctly ({O11y, Security,
Stack} alerts and rule details pages, ...)
1. Check that the alerts displayed in the table are coherent with the
solution, KQL query, time filter, pagination
2. Check that pagination changes are reflected in the table
3. Check that changing the query when in pages > 0 resets the pagination
to the first page
Closes point 1 of https://github.com/elastic/kibana/issues/186448
Should fix https://github.com/elastic/kibana/issues/171738
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
- https://github.com/elastic/kibana/issues/187630
- https://github.com/elastic/kibana/issues/187768
These changes fix the error on saving the alert
> An error occurred during rule execution: message: "[1:6952] failed to
parse field [event.original] of type [keyword] in document with id
'330b17dc2ac382dbdd2f2577c28e83b42c5dc66eaf95e857ec0f222abfc486fa'..."
The issue happens when the source index has a non-ECS-compliant text
field which is expected to be a keyword. If the text value is longer
than 32766 bytes and the keyword field does not have the `ignore_above`
parameter set, then trying to store the text value in the keyword field
hits Lucene's term byte-length limit (for more details see [this
page](https://www.elastic.co/guide/en/elasticsearch/reference/current/ignore-above.html)).
See the main ticket for steps to reproduce the issue.
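One way to avoid such failures, assuming the source mapping can be changed, is to set `ignore_above` on the keyword field; a hedged example using the ES JS client (index and field names are hypothetical):
```ts
import { Client } from '@elastic/elasticsearch';

// Values longer than 1024 characters are kept in _source but not indexed,
// staying safely under Lucene's 32766-byte term limit.
async function capKeywordLength(client: Client): Promise<void> {
  await client.indices.putMapping({
    index: 'my-source-index', // hypothetical index name
    properties: {
      event: {
        properties: {
          original: { type: 'keyword', ignore_above: 1024 },
        },
      },
    },
  });
}
```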
---------
Co-authored-by: Vitalii Dmyterko <92328789+vitaliidm@users.noreply.github.com>
## Summary
Adds solution-agnostic components to create hierarchical alerts grouping
UIs, adapting the original implementation from Security Solution.
Closes #184398
## To Verify
For existing usages of the `@kbn/grouping` package: verify that the
grouped UIs work correctly (Security Alerts, Cloud Security Posture).
New alerting UI components: check out
https://github.com/elastic/kibana/pull/183114 (PoC PR), where the
updated `@kbn/grouping` package and these new components are used in
Observability's main Alerts page.
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Gerard Soldevila <gerard.soldevila@elastic.co>
Co-authored-by: Vadim Kibana <82822460+vadimkibana@users.noreply.github.com>
Co-authored-by: Alex Szabo <alex.szabo@elastic.co>
Co-authored-by: Tre <wayne.seymour@elastic.co>
## Summary
Adds an endpoint dedicated to fetching alerts group aggregations to
avoid adding runtime mappings and client-side controlled scripts to the
`internal/rac/alerts/find` endpoint.
The new endpoint injects a `groupByField` runtime field used to
normalize the values of the grouping field, accounting for null and
multi-element arrays (sketched below).
#184635 depends on this
Closes #186383
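A conceptual TypeScript equivalent of that normalization (the actual runtime field is implemented server-side; this sketch is illustrative):
```ts
// null/missing values fall into the `--` null group; array values are kept
// as separate elements so each one can be grouped on individually.
function normalizeGroupByField(value: unknown): string[] {
  if (value == null) return ['--'];
  return Array.isArray(value) ? value.map(String) : [String(value)];
}
```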
## To verify
Review the added
[tests](x-pack/plugins/rule_registry/server/routes/get_alerts_group_aggregations.test.ts).
Use the Kibana Dev Console to test various body params and aggregations:
1. Create any type of rule that fires alerts
2. Wait for the alerts to be created
3. Call the `_group_aggregations` endpoint, using the feature id(s) that
cover the type of rules you used:
```
POST kbn:internal/rac/alerts/_group_aggregations
{
  "featureIds": [...],
  ...
}
```
See
[here](https://github.com/elastic/kibana/pull/186475/files#diff-0780f60b57fdaa96eda1ab2853064033477617430a17cdb87750cef42c6e8668R22)
and
[here](https://github.com/elastic/kibana/pull/186475/files#diff-0780f60b57fdaa96eda1ab2853064033477617430a17cdb87750cef42c6e8668R37)
for the available params and pre-defined aggregations.
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
## Summary
* Adds optional severity levels to the action group definition
* Determines whether alert severity "is improving" based on the alert's
previous and current action groups, if severity levels are defined for
the action groups (sketched below)
* Persists this as `kibana.alert.severity_improving` inside the alert
document
* Persists `kibana.alert.previous_action_group` inside the alert
document
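A hedged sketch of the severity comparison (shapes and the higher-is-more-severe convention are assumed):
```ts
interface ActionGroupSeverity {
  level: number; // higher level = more severe (assumed convention)
}

// Undefined when severity levels are not defined or the level is unchanged.
function getSeverityImproving(
  previous?: ActionGroupSeverity,
  current?: ActionGroupSeverity
): boolean | undefined {
  if (previous == null || current == null || previous.level === current.level) {
    return undefined;
  }
  return current.level < previous.level;
}
```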
## To Verify
I've created a verification branch:
https://github.com/elastic/kibana/pull/184523 that updates the metric
threshold rule type to have action group severity levels. Verification
instructions in that PR summary.
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Reopening https://github.com/elastic/kibana/pull/186326 with my
account; non-internal PRs are just terrible to work with.
---------
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Tiago Costa <tiago.costa@elastic.co>
Co-authored-by: Aleh Zasypkin <aleh.zasypkin@elastic.co>
## Summary
It fixes #179633
Observability created a Comparator type/enum while ResponseOps already
exports one that other rules use.
The only difference is the wording of `not in between` (I put the two
types side by side to compare, see below).
Currently, we import the one in `triggers-actions-ui-plugin` and then
update the `not in between` wording to match our Comparator.
### Comparing the two enums:

## For reviewers 🧪
- Everything should work as expected: Alert flyout, Alert reason
message, Rule creation flyout, etc.
- I kept the `outside` comparator (replaced by `NOT BETWEEN`) for
backward compatibility
Resolves https://github.com/elastic/kibana/issues/182888
## Summary
This PR resolves a bug where rules would run the recovery actions for a
delayed active alert.
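An illustration of the guard implied by the fix, under assumed names: recovery actions only run for alerts that actually became active, i.e. reached the alert-delay threshold:
```ts
// Skip recovery actions for alerts that recovered while still "delayed",
// i.e. before their active count reached the configured threshold.
function shouldScheduleRecoveryActions(
  activeCount: number,
  alertDelayThreshold: number
): boolean {
  return activeCount >= alertDelayThreshold;
}
```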
### Checklist
- [ ] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
### To verify
- Create a rule and set the `alert after x consecutive matches` to be
greater than 1. It may be helpful for testing to include recovered and
active actions.
- Make sure the delayed active alert recovers before hitting the
consecutive matches threshold.
- Verify that the rule does not send a recovery action and does not show
a recovered alert in the rule run table.
Closes https://github.com/elastic/kibana/issues/178704
- Adds `_index` to the `get` alert response so that it can be used to
attach an alert to a case from the alert details page.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Revives this https://github.com/elastic/kibana/pull/181969
To do so, I had to create a new package `search-types` and move the
types I need there.
The Discovery team can take it from here.
Note: it also cleans up the types I moved; some of them were declared
twice.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
- addresses https://github.com/elastic/security-team/issues/8824
- adds alert suppression for the New Terms rule type
- fixes the `getOpenAlerts` test function, which returned closed alerts
as well
### UI
<img width="2294" alt="Screenshot 2024-04-02 at 12 53 26"
src="8398fba4-a06c-464b-87ef-1c5d5a18e37f">
<img width="1651" alt="Screenshot 2024-04-02 at 12 53 46"
src="971ec0da-c1d9-4c96-a4af-7cc8dfae52a4">
### Checklist
- [x] Functional changes are hidden behind a feature flag
Feature flag `alertSuppressionForNewTermsRuleEnabled`
- [x] Functional changes are covered with a test plan and automated
tests.
Test plan: https://github.com/elastic/security-team/pull/9045
- [x] Stability of new and changed tests is verified using the [Flaky
Test Runner](https://ci-stats.kibana.dev/trigger_flaky_test_runner).
Cypress ESS:
https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/5547
Cypress Serverless:
https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/5548
FTR ESS:
https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/5596
FTR Serverless:
https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/5597
- [ ] Comprehensive manual testing is done by two engineers: the PR
author and one of the PR reviewers. Changes are tested in both ESS and
Serverless.
- [x] Mapping changes are accompanied by a technical design document. It
can be a GitHub issue or an RFC explaining the changes. The design
document is shared with and approved by the appropriate teams and
individual stakeholders.
The existing AlertSuppression schema field is used for the New Terms
rule, the same one that is used for the Query and IM rules.
```yml
alert_suppression:
  $ref: './common_attributes.schema.yaml#/components/schemas/AlertSuppression'
```
where
```yml
AlertSuppression:
  type: object
  properties:
    group_by:
      $ref: '#/components/schemas/AlertSuppressionGroupBy'
    duration:
      $ref: '#/components/schemas/AlertSuppressionDuration'
    missing_fields_strategy:
      $ref: '#/components/schemas/AlertSuppressionMissingFieldsStrategy'
  required:
    - group_by
```
- [x] Functional changes are communicated to the Docs team. A ticket or PR is opened in https://github.com/elastic/security-docs. The following information is included: any feature flags used, affected environments (Serverless, ESS, or both).
https://github.com/elastic/security-docs/issues/5030
## Summary
Update core's router (not versioned router!) registrars to also accept:
```ts
router[method]({
  validate: {
    request: { body: schema.object({ ... }) },
    response: {
      200: {
        body: schema.object({ ... })
      }
    }
  },
  ...
})
```
## Notes
* expect this to be a relatively non-invasive change, but will affect
any code that introspects router routes
* added a public utility to extract the request validation to ease some
of this
* `response` will not be used for anything in this PR, the intention is
to enable future work for generating OAS from router definitions
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Resolves https://github.com/elastic/kibana/issues/175998
## Summary
Follow-on work from the alert creation delay feature. This PR adds
`consecutive_matches`, the count of active alerts used to determine the
alert delay, to the AAD doc and to the action variables.
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
### To verify
- Create a new rule with an alert delay
- Add the new `alert.consecutiveMatches` action variable to the action
message. Verify that when the alert fires the action variable is
populated in the message.
- To verify that the alert docs are as expected, go to [Dev
Tools](http://localhost:5601/app/dev_tools#/console) and run the
following `GET .internal.alerts-*/_search`
- Go back to the rule alerts table, and add the
`kibana.alert.consecutive_matches` field to the table. Verify that it is
populated and looks as expected.
## Summary
Fix https://github.com/elastic/kibana/issues/175919
Fix https://github.com/elastic/kibana/issues/176007
Bump `@elastic/elasticsearch` from `8.9.1-canary.1` to `8.12.2`.
## Notable changes
### `IngestPipeline._meta`
I was forced to introduce a lot of new `@ts-expect-error` because the
`_meta` property was introduced to `IngestPipeline` as mandatory instead
of optional (which feels like a type error to me)
**8.9**
```ts
export interface IngestPipeline {
  description?: string
  on_failure?: IngestProcessorContainer[]
  processors?: IngestProcessorContainer[]
  version?: VersionNumber
}
```
**8.12**
```ts
export interface IngestPipeline {
  description?: string;
  on_failure?: IngestProcessorContainer[];
  processors?: IngestProcessorContainer[];
  version?: VersionNumber;
  _meta: Metadata; // <= not defined as optional...
}
```
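For reference, a typical suppression might look like this (illustrative; the exact call sites differ):
```ts
import type { IngestPipeline } from '@elastic/elasticsearch/lib/api/types';

// @ts-expect-error the 8.12 types require _meta, but it is optional at runtime
const pipeline: IngestPipeline = {
  description: 'example pipeline', // hypothetical content
  processors: [],
};
```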
I opened
https://github.com/elastic/elasticsearch-specification/issues/2434 in
the specification repo to address the problem, but it likely won't be
done for any `8.12.x` versions of the client.
Resolves https://github.com/elastic/kibana/issues/173009
## Summary
This PR:
- Changes the field name from `notification_delay` to `alert_delay`
- Updates the alerts client and rule registry to index new alert docs on
a delay
- Updates the framework code to delay the creation of an alert
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
### To verify
- Use [Dev Tools](http://localhost:5601/app/dev_tools#/console) to
create a rule with the `alertDelay`
```
POST kbn:/api/alerting/rule
{
  "params": {
    "searchType": "esQuery",
    "timeWindowSize": 5,
    "timeWindowUnit": "m",
    "threshold": [
      -1
    ],
    "thresholdComparator": ">",
    "size": 100,
    "esQuery": """{
      "query": {
        "match_all": {}
      }
    }""",
    "aggType": "count",
    "groupBy": "all",
    "termSize": 5,
    "excludeHitsFromPreviousRun": false,
    "sourceFields": [],
    "index": [
      ".kibana-event-log*"
    ],
    "timeField": "@timestamp"
  },
  "consumer": "stackAlerts",
  "schedule": {
    "interval": "1m"
  },
  "tags": [],
  "name": "test",
  "rule_type_id": ".es-query",
  "actions": [
    {
      "group": "query matched",
      "id": "${ACTION_ID}",
      "params": {
        "level": "info",
        "message": """Elasticsearch query rule '{{rule.name}}' is active:
- Value: {{context.value}}
- Conditions Met: {{context.conditions}} over {{rule.params.timeWindowSize}}{{rule.params.timeWindowUnit}}
- Timestamp: {{context.date}}
- Link: {{context.link}}"""
      },
      "frequency": {
        "notify_when": "onActionGroupChange",
        "throttle": null,
        "summary": false
      }
    }
  ],
  "alert_delay": {
    "active": 3
  }
}
```
- Verify that the alert will not be created until it has matched the
delay threshold.
- Verify that the delay does not affect recovered alerts
## Summary
- addresses https://github.com/elastic/security-team/issues/7773 Epic
- addresses https://github.com/elastic/security-team/issues/8360
Alert suppression for this rule type is hidden behind a feature flag.
Implemented in this PR:
- schema changes: allowing the `alert_suppression` object in the
Indicator Match rule type; `alert_suppression` is identical to the
existing one for the query rule
- UI changes
- Cypress tests
- BE implementation
- FTR tests
Enabled feature flag:
- `alertSuppressionForIndicatorMatchRuleEnabled`
### Tech implementation details
Alert candidates for the IM rule are deduplicated first by searching
existing alerts for matching ids.
Once retrieved, alert candidates are filtered further to determine
whether they have already been suppressed.
This is done by checking each alert candidate's suppression time
boundaries: if a candidate's suppression ends earlier than that of an
existing alert with the same instance id, the candidate is removed.
The remaining alert candidates are suppressed in memory, and either new
alerts are created or existing ones are updated.
The max limit of created and suppressed alerts is set to `5 *
max_signals`, which allows capturing additional threats should a rule
execution's alert count reach `max_signals`.
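A rough sketch of the candidate-filtering step described above (names and shapes assumed):
```ts
interface SuppressionCandidate {
  instanceId: string;
  suppressionEnd: number; // epoch millis
}

// Drop candidates whose suppression window ends no later than that of the
// existing suppressed alert with the same instance id.
function filterAlreadySuppressed(
  candidates: SuppressionCandidate[],
  existing: Map<string, { suppressionEnd: number }>
): SuppressionCandidate[] {
  return candidates.filter((candidate) => {
    const prior = existing.get(candidate.instanceId);
    return prior == null || candidate.suppressionEnd > prior.suppressionEnd;
  });
}
```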
### UI changes
Suppression components in the IM rule are identical to Custom Query's.

### Checklist
- [x] Functional changes are hidden behind a feature flag
Feature flag `alertSuppressionForThresholdRuleEnabled`
- [x] Functional changes are covered with a test plan and automated
tests.
[Test plan PR](https://github.com/elastic/security-team/pull/8390)
- [x] Stability of new and changed tests is verified using the [Flaky
Test Runner](https://ci-stats.kibana.dev/trigger_flaky_test_runner).
[FTR ESS & Serverless tests]
https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/4972
[Cypress ESS]
https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/4970
[Cypress Serverless]
https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/4971
- [ ] Comprehensive manual testing is done by two engineers: the PR
author and one of the PR reviewers. Changes are tested in both ESS and
Serverless.
- [x] Mapping changes are accompanied by a technical design document. It
can be a GitHub issue or an RFC explaining the changes. The design
document is shared with and approved by the appropriate teams and
individual stakeholders.
The existing AlertSuppression schema field is used for the IM rule, the
same one that is used for the Query rule.
```yml
alert_suppression:
  $ref: './common_attributes.schema.yaml#/components/schemas/AlertSuppression'
```
where
```yml
AlertSuppression:
  type: object
  properties:
    group_by:
      $ref: '#/components/schemas/AlertSuppressionGroupBy'
    duration:
      $ref: '#/components/schemas/AlertSuppressionDuration'
    missing_fields_strategy:
      $ref: '#/components/schemas/AlertSuppressionMissingFieldsStrategy'
  required:
    - group_by
```
- [x] Functional changes are communicated to the Docs team. A ticket or PR is opened in https://github.com/elastic/security-docs. The following information is included: any feature flags used, affected environments (Serverless, ESS, or both).
https://github.com/elastic/security-docs/issues/4715
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Wafaa Nasr <wafaa.nasr@elastic.co>
Co-authored-by: Ievgen Sorokopud <ievgen.sorokopud@elastic.co>
Towards https://github.com/elastic/response-ops-team/issues/164
Resolves https://github.com/elastic/kibana/issues/171795
## Summary
* Switches this rule type to use `alertsClient` from the alerting
framework in favor of the deprecated `alertFactory`
* Defines the `default` alert config for these rule types so
framework-level fields will be written out into the
`.alerts-default.alerts-default` index with no rule-type-specific fields
* Updated some terminology from `alert` to `rule`
## To Verify
* Follow the instructions in [this
PR](https://github.com/elastic/kibana/pull/112869) to add a legacy
notification to a detection rule.
* Verify the notification fires as expected
* Verify an alert document is written to
`.alerts-default.alerts-default` that looks like:
```
{
  "kibana.alert.rule.category": "Security Solution notification (Legacy)",
  "kibana.alert.rule.consumer": "siem",
  "kibana.alert.rule.execution.uuid": "cbad59ec-2a6e-4791-81c3-ae0fefd3d48a",
  "kibana.alert.rule.name": "Legacy notification with one action",
  "kibana.alert.rule.parameters": {
    "ruleAlertId": "9c07db42-b5fa-4ef9-8d7e-48d5688fd88e"
  },
  "kibana.alert.rule.producer": "siem",
  "kibana.alert.rule.rule_type_id": "siem.notifications",
  "kibana.alert.rule.tags": [],
  "kibana.alert.rule.uuid": "1869763e-c6e7-47fd-8275-0c9568127d84",
  "kibana.space_ids": [
    "default"
  ],
  "@timestamp": "2024-01-10T18:12:02.433Z",
  "event.action": "close",
  "event.kind": "signal",
  "kibana.alert.action_group": "recovered",
  "kibana.alert.flapping_history": [
    true,
    true,
    false,
    false
  ],
  "kibana.alert.instance.id": "1869763e-c6e7-47fd-8275-0c9568127d84",
  "kibana.alert.maintenance_window_ids": [],
  "kibana.alert.status": "recovered",
  "kibana.alert.uuid": "119269e0-a767-43c9-b383-a8840b4dddd5",
  "kibana.alert.workflow_status": "open",
  "kibana.alert.start": "2024-01-10T18:08:53.373Z",
  "kibana.alert.time_range": {
    "gte": "2024-01-10T18:08:53.373Z",
    "lte": "2024-01-10T18:09:56.367Z"
  },
  "kibana.version": "8.13.0",
  "tags": [],
  "kibana.alert.duration.us": 62994000,
  "kibana.alert.end": "2024-01-10T18:09:56.367Z",
  "kibana.alert.rule.revision": 0,
  "kibana.alert.flapping": false
}
```
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
This PR populates the existing `kibana.alert.workflow_user` field in the
alerts-as-data mappings with the `profile_uid` of the last user to
modify the status of the alert. It also adds a new field,
`kibana.alert.workflow_status_updated_at`, to track the last time the
workflow status was updated and populates it with a timestamp.
Similar to the alert assignment PR, `workflow_user` renders in the table
with a user avatar instead of the raw `profile_uid` value stored in the
alert. The filter in/out buttons on the row cell automatically add a
filter that uses the raw value so that filtering works correctly.
Due to limitations of Kibana's user profile implementation,
`workflow_user` is only populated if a user changes the alert status
using the alert status route (`POST
/api/detection_engine/signals/status`) within an interactive session,
i.e. logs in rather than passes credentials with each API request
([related issue](https://github.com/elastic/kibana/issues/167459)).
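Conceptually, the status route applies an update like the following (field names from this PR; the surrounding code is illustrative):
```ts
// Fields written when an interactive user updates an alert's workflow status.
function buildWorkflowStatusUpdate(status: string, profileUid?: string) {
  return {
    'kibana.alert.workflow_status': status,
    'kibana.alert.workflow_status_updated_at': new Date().toISOString(),
    // profile_uid is only available for interactive sessions (see above).
    ...(profileUid ? { 'kibana.alert.workflow_user': profileUid } : {}),
  };
}
```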
## Alerts table

## Alert details

### Checklist
- [ ] Functional changes are hidden behind a feature flag. If not
hidden, the PR explains why these changes are being implemented in a
long-living feature branch.
- [x] Functional changes are covered with a test plan and automated
tests.
- [x] Stability of new and changed tests is verified using the [Flaky
Test Runner](https://ci-stats.kibana.dev/trigger_flaky_test_runner).
- Flaky test run:
https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/4130
- [ ] Comprehensive manual testing is done by two engineers: the PR
author and one of the PR reviewers. Changes are tested in both ESS and
Serverless.
- [x] Mapping changes are accompanied by a technical design document. It
can be a GitHub issue or an RFC explaining the changes. The design
document is shared with and approved by the appropriate teams and
individual stakeholders.
- https://github.com/elastic/security-team/issues/4820
- [x] Functional changes are communicated to the Docs team. A ticket or
PR is opened in https://github.com/elastic/security-docs. The following
information is included: any feature flags used, affected environments
(Serverless, ESS, or both).
- https://github.com/elastic/security-docs/issues/4325
## Summary
With this PR we introduce a new Alert User Assignment feature:
- It is possible to assign one or more users to one or more alerts
- There is a new "Assignees" column in the alerts table which displays
avatars of assigned users
- There is a bulk action to update assignees for multiple alerts
- It is possible to see and update assignees inside the alert details
flyout component
- There is an "Assignees" filter button on the Alerts page which allows
to filter alerts by assignees
We decided to develop this feature on a separate branch. This gives us
the ability to make sure it is thoroughly tested and that we did not
break anything in production. Since data schema changes are involved,
we decided this was the better approach. cc @yctercero
## Testing notes
In order to test assignments you need to create a few users. Then, for
users to appear in the user profiles dropdown menu, you need to activate
them by logging into those accounts at least once.
Main ticket https://github.com/elastic/security-team/issues/2504
## Bugfixes
- [x] https://github.com/elastic/security-team/issues/8028
- [x] https://github.com/elastic/security-team/issues/8034
- [x] https://github.com/elastic/security-team/issues/8006
- [x] https://github.com/elastic/security-team/issues/8025
## Enhancements
- [x] https://github.com/elastic/security-team/issues/8033
### Checklist
- [x] Functional changes are hidden behind a feature flag. If not
hidden, the PR explains why these changes are being implemented in a
long-living feature branch.
- [x] Functional changes are covered with a test plan and automated
tests.
- [x] https://github.com/elastic/kibana/issues/171306
- [x] https://github.com/elastic/kibana/issues/171307
- [x] Stability of new and changed tests is verified using the [Flaky
Test Runner](https://ci-stats.kibana.dev/trigger_flaky_test_runner).
- [x]
https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/4091
- [x] Comprehensive manual testing is done by two engineers: the PR
author and one of the PR reviewers. Changes are tested in both ESS and
Serverless.
- [x] Mapping changes are accompanied by a technical design document. It
can be a GitHub issue or an RFC explaining the changes. The design
document is shared with and approved by the appropriate teams and
individual stakeholders.
* https://github.com/elastic/security-team/issues/7647
- [x] Functional changes are communicated to the Docs team. A ticket or
PR is opened in https://github.com/elastic/security-docs. The following
information is included: any feature flags used, affected environments
(Serverless, ESS, or both). **NOTE: as discussed we will wait until docs
are ready to merge this PR**.
* https://github.com/elastic/security-docs/issues/4226
* https://github.com/elastic/staging-serverless-security-docs/pull/232
---------
Co-authored-by: Marshall Main <marshall.main@elastic.co>
Co-authored-by: Xavier Mouligneau <xavier.mouligneau@elastic.co>
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Sergi Massaneda <sergi.massaneda@gmail.com>
## Summary
When alerts are bulk indexed in the rule registry and the alerts client,
indexing errors may be returned where the entire field value that failed
to be indexed is echoed in the reason. This can cause unnecessarily
verbose logging, so we want to sanitize the field value.
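A hedged sketch of one sanitization approach; ES mapper errors append the offending value after a `Preview of field's value:` marker, so truncating there removes the echoed value (the PR's exact logic may differ):
```ts
// Truncate a bulk-index failure reason so the full field value is not logged.
function sanitizeBulkErrorReason(reason: string): string {
  const marker = "Preview of field's value:";
  const idx = reason.indexOf(marker);
  return idx === -1 ? reason : `${reason.slice(0, idx).trim()} ${marker} <redacted>`;
}
```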
## Summary
Brings back the alert search bar functionality for Security Solution.
<img width="899" alt="image"
src="13100bd3-4ba9-4cba-9702-d657ee781a4a">
<img width="911" alt="image"
src="0c586d2c-67be-4b37-8fe5-cd483e6def16">
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
Updates new teams as codeowners for Observability team changes.
Also took the opportunity to:
- Delete some paths that no longer exist
- Split infra code ownership between teams (from #168992)
Resolves https://github.com/elastic/response-ops-team/issues/148
## Summary
This PR makes the following changes:
* Skips the initialization of the `AlertsService` for migrator nodes
(see the sketch below). This bypasses the installation of
alerts-as-data assets on migrator nodes, which are short-lived nodes
used on Serverless
* Changes error logs to debug logs when alert resource installation is
cut short due to Kibana shutdown.
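A minimal sketch of the role check (the config shape is assumed):
```ts
// Skip alerts-as-data resource installation on short-lived migrator nodes.
function shouldInitializeAlertsService(nodeRoles: string[]): boolean {
  return !(nodeRoles.length === 1 && nodeRoles[0] === 'migrator');
}
```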
## To Verify
Run Kibana with the following configs:
* no config for `node.roles` - Kibana should start up and install
alerts-as-data resources
* `node.roles: ["background_tasks"]` - Kibana should start up and
install alerts-as-data resources
* `node.roles: ["ui"]` - Kibana should start up and install
alerts-as-data resources
* `node.roles: ["migrator"]` - Kibana should start up and not install
alerts-as-data resources. No errors related to premature shutdown should
be visible in the logs.
## Summary
Resolves: https://github.com/elastic/kibana/issues/166301
Adds support for solution/category filtering to maintenance windows by
adding a new property: `category_ids`. Selecting one or more solutions
when creating/updating a maintenance window will cause the maintenance
window to only suppress rule types belonging to said solutions. In order
to achieve filtering by solution/category, we are adding a new field to
the rule types schema called `category`. This field should map to the
feature category that the rule type belongs to (`observability`,
`securitySolution` or `management`).
Our initial plan was to use feature IDs or rule type IDs to accomplish
this filtering. We decided against rule type IDs because if a new rule
type gets added, we would need to change the API to support it. We
decided against feature IDs because it's a very anti-serverless way of
accomplishing this feature, as we don't want to expose feature IDs to
APIs. We decided on app categories because they work well with
serverless and should be much easier to maintain if new rule types are
added in the future.
This means the `rule_types` API has to be changed to include this new
field, although it shouldn't be a breaking change since we're just
adding a new field. No migrations are needed since rule types are in
memory and maintenance windows are backwards compatible.
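Conceptually, the suppression check becomes the following (illustrative shapes, not the actual implementation):
```ts
interface MaintenanceWindow {
  categoryIds?: string[]; // e.g. ['observability', 'securitySolution', 'management']
}

// A window with no category filter applies to every rule type; otherwise it
// only suppresses rule types whose category is selected.
function windowAppliesToRuleType(
  window: MaintenanceWindow,
  ruleTypeCategory: string
): boolean {
  return window.categoryIds == null || window.categoryIds.includes(ruleTypeCategory);
}
```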

### Error state:

### Checklist
Delete any items that are not applicable to this PR.
- [x] Any text added follows [EUI's writing
guidelines](https://elastic.github.io/eui/#/guidelines/writing), uses
sentence case text and includes [i18n
support](https://github.com/elastic/kibana/blob/main/packages/kbn-i18n/README.md)
- [x]
[Documentation](https://www.elastic.co/guide/en/kibana/master/development-documentation.html)
was added for features that require explanation or tutorials
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Dima Arnautov <arnautov.dima@gmail.com>
Resolves https://github.com/elastic/kibana/issues/163953
## Summary
Changes `refresh='wait_for'` to `refresh=true` when bulk indexing alerts
from the alerting framework and the rule registry. For persistence
alerts, `refresh=false` is used when the rule execution is a preview.
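A minimal sketch of the refresh selection described above (names assumed):
```ts
type Refresh = boolean | 'wait_for';

// Preview (persistence) executions skip the refresh entirely; all other
// executions now use refresh: true instead of 'wait_for'.
function getAlertsBulkRefresh(isPreview: boolean): Refresh {
  return isPreview ? false : true;
}
```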
## Notes
I deployed this image to the serverless QA environment and compared
execution times between this branch and the default QA version with an
index threshold rule that creates an active alert each run.
Default QA version:
* avg: 8.16 seconds
* P99: 15.5 seconds
QA using this image:
* avg: 0.6 seconds
* P99: 1.7 seconds
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Ryland Herrick <ryalnd@gmail.com>
Resolves https://github.com/elastic/kibana/issues/162630
## Summary
Adds checks for invalid alert index names in two places:
- before updating the underlying settings/mappings of a concrete index;
attempting these updates would typically throw an error and prevent
alert resource installation from successfully completing, causing
subsequent writes to fail; in this PR ,we check for unexpected index
names and log a warning
- before updating an alert document; there is a scenario where an
existing active alert document could be in a partial or restored alert
index, and trying to update that alert document would fail. To prevent
these failures, we skip the update and log a warning (see the sketch
after this list). We expect this case to be rare, as frozen indices
usually contain only old alerts, so the likelihood of one containing a
currently active alert should be low.
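A rough sketch of the index-name guard; partial and restored searchable-snapshot indices carry `partial-`/`restored-` prefixes, so a prefix check like this (names assumed) can flag them:
```ts
// Concrete alert indices are expected to start with '.internal.alerts-';
// partial/restored searchable-snapshot indices carry an extra prefix
// (e.g. 'partial-.internal.alerts-stack.alerts-default-...') and fail this check.
function isExpectedAlertIndex(indexName: string): boolean {
  return indexName.startsWith('.internal.alerts-');
}
```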
## To Verify
- Run ES with these options: `yarn es snapshot --license trial --ssl -E
path.data=../data_partial_alerts -E path.repo=<snapshot folder> -E
xpack.searchable.snapshot.shared_cache.size=100b -E
indices.lifecycle.poll_interval=1m`
- Start Kibana
- Create a snapshot repository here:
https://localhost:5601/app/management/data/snapshot_restore/add_repository.
Use `Shared File System` and use the same path as you used for
`path.repo` when running ES
- Modify the `.alerts-ilm-policy` to roll over in the hot phase with max
age of 3 minutes. Add a frozen phase that moves data into the frozen
phase after 5 minutes.
- Create some rules that generate alerts. I did both metric threshold
(uses lifecycle executor) and index threshold (uses framework).
- Wait for ILM to run and move indices to frozen. This will take a few
ILM cycles but eventually you should be able to do a `GET
.internal.alerts-stack.alerts-default-*/_alias/.alerts-stack.alerts-default`
and see a partial index name in the results
- Restart Kibana. You should see warnings logged related to the partial
indices but Kibana should successfully start and rule execution should
succeed.
## Notes
I tested what would happen if we added a bunch of new fields to a
component template and increased the total fields limit in the presence
of partial indices. Here, it works in our favor that we only allow
additive changes to our mappings, so the existing partial indices keep
the old mappings and don't need a field limit update because their
mappings don't change. Searching against both the alerts alias (that
targets partial and normal indices) works as expected and searching
directly against the partial index works as expected.
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
## Summary
This PR updates Discover's rule to be created under the `stackAlerts`
consumer, and we created a [breaking change
issue](https://github.com/elastic/dev/issues/2344) to explain the
consequences of this update.
We also fix the rule's consumer for all rule types created under the
observability rule management to use their producer instead of `alerts`.
Also, we add the ability for the ES Query and new Generic Threshold rule
types to pick the consumer associated with the rule. The
`ensureAuthorized` and `filter` functions have been modified and
simplified to support this use case; please check the newest unit tests
added in
`x-pack/plugins/alerting/server/authorization/alerting_authorization.test.ts`.
There is now a dropdown in the rule form to prompt the user when
creating ES Query/Generic threshold rules to select the consumer based
on their authorized consumers (we can no longer use `alerts` for these).
If there is only 1 option, then the dropdown will not be shown and the
option will be chosen automatically.
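A small sketch of that auto-selection behavior (assumed shape):
```ts
// Hide the consumer dropdown and auto-select when only one option exists.
function getPreselectedConsumer(authorizedConsumers: string[]): string | undefined {
  return authorizedConsumers.length === 1 ? authorizedConsumers[0] : undefined;
}
```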
Generic threshold rules will have the following possible consumers:
- infrastructure
- logs
ES query rules will have the following possible consumers:
- infrastructure
- logs
- stackAlerts (only from the stack management rule page)
## To Test:
### Single Consumer:
1. Create a user with only `logs` feature enabled (ensuring
`stackAlerts` is not enabled).
2. Navigate to the O11Y rule management page
3. Click the create rule button
4. Assert that both ES query and generic threshold rules are available
5. Click ES query and fill out the relevant information and create the
rule
6. Assert that the rule created has `logs` set in the `consumer` field
7. Repeat 5-6 for the generic threshold rule
8. Repeat 2-7 but on the Stack Management rules page
9. Repeat 1-8 for the `infrastructure` feature.
### Multiple Consumers:
1. Create a user with `logs`, `infrastructure` and `apm` features
enabled (ensuring `stackAlerts` is not enabled).
2. Navigate to the O11Y rule management page
3. Click the create rule button
4. Assert that both ES query and generic threshold rules are available
5. Click ES query and fill out the relevant information and create the
rule
6. A dropdown should prompt the user to select between 1 of the 3
consumers, select 1
7. Assert that the rule was created with the selected consumer
8. Repeat 5-7 for the generic threshold rule
9. Repeat 2-8 but on the Stack Management rules page


### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
---------
Co-authored-by: Jiawei Wu <74562234+JiaweiWu@users.noreply.github.com>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>