Reverts elastic/kibana#157121
We need to revert because, after talking to @kobelb, we realized we were
introducing a new bug where users would always need to be superusers to
access the fields from the alert index, since only a superuser can
access the Kibana index.
# Backport
This will backport the following commits from `main` to `8.8`:
- [[RAM] [PERF] Remove endpoint browserFields
(#156869)](https://github.com/elastic/kibana/pull/156869)
### Questions?
Please refer to the [Backport tool
documentation](https://github.com/sqren/backport)
<!--BACKPORT [{"author":{"name":"Xavier
Mouligneau","email":"xavier.mouligneau@elastic.co"},"sourceCommit":{"committedDate":"2023-05-09T03:23:20Z","message":"[RAM]
[PERF] Remove endpoint browserFields (#156869)\n\n## Summary\r\n\r\nFix
https://github.com/elastic/kibana/issues/155000, @dgieselaar
thank\r\nyou so much for finding that!!! lot of love from our part. And,
we find\r\na good solution around this API... We are deleting it!!!
LOL\r\n\r\n### Checklist\r\n\r\nDelete any items that are not applicable
to this PR.\r\n\r\n- [ ] [Unit or
functional\r\ntests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)\r\nwere
updated or added to match the most common
scenarios\r\n\r\n---------\r\n\r\nCo-authored-by: Julian Gernun
<17549662+jcger@users.noreply.github.com>","sha":"967b88710d55f395065dd8150817281764dbc468","branchLabelMapping":{"^v8.9.0$":"main","^v(\\d+).(\\d+).\\d+$":"$1.$2"}},"sourcePullRequest":{"labels":["release_note:skip","Team:ResponseOps","v8.8.0","v8.7.2","v8.9.0"],"number":156869,"url":"https://github.com/elastic/kibana/pull/156869","mergeCommit":{"message":"[RAM]
[PERF] Remove endpoint browserFields (#156869)\n\n## Summary\r\n\r\nFix
https://github.com/elastic/kibana/issues/155000, @dgieselaar
thank\r\nyou so much for finding that!!! lot of love from our part. And,
we find\r\na good solution around this API... We are deleting it!!!
LOL\r\n\r\n### Checklist\r\n\r\nDelete any items that are not applicable
to this PR.\r\n\r\n- [ ] [Unit or
functional\r\ntests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)\r\nwere
updated or added to match the most common
scenarios\r\n\r\n---------\r\n\r\nCo-authored-by: Julian Gernun
<17549662+jcger@users.noreply.github.com>","sha":"967b88710d55f395065dd8150817281764dbc468"}},"sourceBranch":"main","suggestedTargetBranches":["8.8","8.7"],"targetPullRequestStates":[{"branch":"8.8","label":"v8.8.0","labelRegex":"^v(\\d+).(\\d+).\\d+$","isSourceBranch":false,"state":"NOT_CREATED"},{"branch":"8.7","label":"v8.7.2","labelRegex":"^v(\\d+).(\\d+).\\d+$","isSourceBranch":false,"state":"NOT_CREATED"},{"branch":"main","label":"v8.9.0","labelRegex":"^v8.9.0$","isSourceBranch":true,"state":"MERGED","url":"https://github.com/elastic/kibana/pull/156869","number":156869,"mergeCommit":{"message":"[RAM]
[PERF] Remove endpoint browserFields (#156869)\n\n## Summary\r\n\r\nFix
https://github.com/elastic/kibana/issues/155000, @dgieselaar
thank\r\nyou so much for finding that!!! lot of love from our part. And,
we find\r\na good solution around this API... We are deleting it!!!
LOL\r\n\r\n### Checklist\r\n\r\nDelete any items that are not applicable
to this PR.\r\n\r\n- [ ] [Unit or
functional\r\ntests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)\r\nwere
updated or added to match the most common
scenarios\r\n\r\n---------\r\n\r\nCo-authored-by: Julian Gernun
<17549662+jcger@users.noreply.github.com>","sha":"967b88710d55f395065dd8150817281764dbc468"}}]}]
BACKPORT-->
Co-authored-by: Xavier Mouligneau <xavier.mouligneau@elastic.co>
# Backport
This will backport the following commits from `main` to `8.8`:
- [[RAM][Maintenance Window][8.8]Fix window maintenance workflow
(#156427)](https://github.com/elastic/kibana/pull/156427)
### Questions?
Please refer to the [Backport tool
documentation](https://github.com/sqren/backport)
<!--BACKPORT [{"author":{"name":"Jiawei
Wu","email":"74562234+JiaweiWu@users.noreply.github.com"},"sourceCommit":{"committedDate":"2023-05-05T00:11:26Z","message":"[RAM][Maintenance
Window][8.8]Fix window maintenance workflow (#156427)\n\n##
Summary\r\n\r\nThe way that we canceled every notification for our alert
life cycle\r\nduring an active maintenance window was not close enough
to what our\r\ncustomers were expecting. For our persisted security
solution alerts, we\r\ndid not have to change the logic because it will
always be a new alert.\r\nTherefore, @shanisagiv1, @mdefazio, @JiaweiWu,
and @XavierM had a\r\ndiscussion about this problem and we decided
this:\r\n\r\nTo summarize, we will only keep the notification during a
maintenance\r\nwindow if an alert has been created/active outside of
window\r\nmaintenance. We created three different scenarios to explain
the new\r\nlogic and we will make the assumption that our alert has an
action per\r\nstatus change. For you to understand the different
scenarios, I created\r\nthis legend below:\r\n<img width=\"223\"
alt=\"image\"\r\nsrc=\"https://user-images.githubusercontent.com/189600/236045974-f4fa379b-db5e-41f8-91a8-2689b9f24dab.png\">\r\n\r\n###
Scenario I\r\nIf an alert is active/created before a maintenance window
and recovered\r\ninside of the maintenance window then we will send
notifications\r\n<img width=\"463\"
alt=\"image\"\r\nsrc=\"https://user-images.githubusercontent.com/189600/236046473-d04df836-d3e6-42d8-97be-8b4f1544cc1a.png\">\r\n\r\n###
Scenario II\r\nIf an alert is active/created and recovered inside of
window maintenance\r\nthen we will NOT send notifications\r\n<img
width=\"407\"
alt=\"image\"\r\nsrc=\"https://user-images.githubusercontent.com/189600/236046913-c2f77131-9ff1-4864-9dab-89c4c429152e.png\">\r\n\r\n###
Scenario III\r\nif an alert is active/created in a maintenance window
and recovered\r\noutside of the maintenance window then we will not send
notifications\r\n<img width=\"496\"
alt=\"image\"\r\nsrc=\"https://user-images.githubusercontent.com/189600/236047613-e63efe52-87fa-419e-9e0e-965b1d10ae18.png\">\r\n\r\n\r\n###
Checklist\r\n- [ ] [Unit or
functional\r\ntests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)\r\nwere
updated or added to match the most common
scenarios\r\n\r\n---------\r\n\r\nCo-authored-by: Xavier Mouligneau
<xavier.mouligneau@elastic.co>\r\nCo-authored-by: Kibana Machine
<42973632+kibanamachine@users.noreply.github.com>","sha":"ea407983bbd6a364f23f6780ff1049f679f53488","branchLabelMapping":{"^v8.9.0$":"main","^v(\\d+).(\\d+).\\d+$":"$1.$2"}},"sourcePullRequest":{"labels":["bug","backport","release_note:skip","Team:ResponseOps","Feature:Alerting/RulesManagement","v8.8.0","v8.9.0"],"number":156427,"url":"https://github.com/elastic/kibana/pull/156427","mergeCommit":{"message":"[RAM][Maintenance
Window][8.8]Fix window maintenance workflow (#156427)\n\n##
Summary\r\n\r\nThe way that we canceled every notification for our alert
life cycle\r\nduring an active maintenance window was not close enough
to what our\r\ncustomers were expecting. For our persisted security
solution alerts, we\r\ndid not have to change the logic because it will
always be a new alert.\r\nTherefore, @shanisagiv1, @mdefazio, @JiaweiWu,
and @XavierM had a\r\ndiscussion about this problem and we decided
this:\r\n\r\nTo summarize, we will only keep the notification during a
maintenance\r\nwindow if an alert has been created/active outside of
window\r\nmaintenance. We created three different scenarios to explain
the new\r\nlogic and we will make the assumption that our alert has an
action per\r\nstatus change. For you to understand the different
scenarios, I created\r\nthis legend below:\r\n<img width=\"223\"
alt=\"image\"\r\nsrc=\"https://user-images.githubusercontent.com/189600/236045974-f4fa379b-db5e-41f8-91a8-2689b9f24dab.png\">\r\n\r\n###
Scenario I\r\nIf an alert is active/created before a maintenance window
and recovered\r\ninside of the maintenance window then we will send
notifications\r\n<img width=\"463\"
alt=\"image\"\r\nsrc=\"https://user-images.githubusercontent.com/189600/236046473-d04df836-d3e6-42d8-97be-8b4f1544cc1a.png\">\r\n\r\n###
Scenario II\r\nIf an alert is active/created and recovered inside of
window maintenance\r\nthen we will NOT send notifications\r\n<img
width=\"407\"
alt=\"image\"\r\nsrc=\"https://user-images.githubusercontent.com/189600/236046913-c2f77131-9ff1-4864-9dab-89c4c429152e.png\">\r\n\r\n###
Scenario III\r\nif an alert is active/created in a maintenance window
and recovered\r\noutside of the maintenance window then we will not send
notifications\r\n<img width=\"496\"
alt=\"image\"\r\nsrc=\"https://user-images.githubusercontent.com/189600/236047613-e63efe52-87fa-419e-9e0e-965b1d10ae18.png\">\r\n\r\n\r\n###
Checklist\r\n- [ ] [Unit or
functional\r\ntests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)\r\nwere
updated or added to match the most common
scenarios\r\n\r\n---------\r\n\r\nCo-authored-by: Xavier Mouligneau
<xavier.mouligneau@elastic.co>\r\nCo-authored-by: Kibana Machine
<42973632+kibanamachine@users.noreply.github.com>","sha":"ea407983bbd6a364f23f6780ff1049f679f53488"}},"sourceBranch":"main","suggestedTargetBranches":["8.8"],"targetPullRequestStates":[{"branch":"8.8","label":"v8.8.0","labelRegex":"^v(\\d+).(\\d+).\\d+$","isSourceBranch":false,"state":"NOT_CREATED"},{"branch":"main","label":"v8.9.0","labelRegex":"^v8.9.0$","isSourceBranch":true,"state":"MERGED","url":"https://github.com/elastic/kibana/pull/156427","number":156427,"mergeCommit":{"message":"[RAM][Maintenance
Window][8.8]Fix window maintenance workflow (#156427)\n\n##
Summary\r\n\r\nThe way that we canceled every notification for our alert
life cycle\r\nduring an active maintenance window was not close enough
to what our\r\ncustomers were expecting. For our persisted security
solution alerts, we\r\ndid not have to change the logic because it will
always be a new alert.\r\nTherefore, @shanisagiv1, @mdefazio, @JiaweiWu,
and @XavierM had a\r\ndiscussion about this problem and we decided
this:\r\n\r\nTo summarize, we will only keep the notification during a
maintenance\r\nwindow if an alert has been created/active outside of
window\r\nmaintenance. We created three different scenarios to explain
the new\r\nlogic and we will make the assumption that our alert has an
action per\r\nstatus change. For you to understand the different
scenarios, I created\r\nthis legend below:\r\n<img width=\"223\"
alt=\"image\"\r\nsrc=\"https://user-images.githubusercontent.com/189600/236045974-f4fa379b-db5e-41f8-91a8-2689b9f24dab.png\">\r\n\r\n###
Scenario I\r\nIf an alert is active/created before a maintenance window
and recovered\r\ninside of the maintenance window then we will send
notifications\r\n<img width=\"463\"
alt=\"image\"\r\nsrc=\"https://user-images.githubusercontent.com/189600/236046473-d04df836-d3e6-42d8-97be-8b4f1544cc1a.png\">\r\n\r\n###
Scenario II\r\nIf an alert is active/created and recovered inside of
window maintenance\r\nthen we will NOT send notifications\r\n<img
width=\"407\"
alt=\"image\"\r\nsrc=\"https://user-images.githubusercontent.com/189600/236046913-c2f77131-9ff1-4864-9dab-89c4c429152e.png\">\r\n\r\n###
Scenario III\r\nif an alert is active/created in a maintenance window
and recovered\r\noutside of the maintenance window then we will not send
notifications\r\n<img width=\"496\"
alt=\"image\"\r\nsrc=\"https://user-images.githubusercontent.com/189600/236047613-e63efe52-87fa-419e-9e0e-965b1d10ae18.png\">\r\n\r\n\r\n###
Checklist\r\n- [ ] [Unit or
functional\r\ntests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)\r\nwere
updated or added to match the most common
scenarios\r\n\r\n---------\r\n\r\nCo-authored-by: Xavier Mouligneau
<xavier.mouligneau@elastic.co>\r\nCo-authored-by: Kibana Machine
<42973632+kibanamachine@users.noreply.github.com>","sha":"ea407983bbd6a364f23f6780ff1049f679f53488"}}]}]
BACKPORT-->
Co-authored-by: Jiawei Wu <74562234+JiaweiWu@users.noreply.github.com>
## Summary
Setting `xpack.alerting.enableFrameworkAlerts` to true by default. This
causes alerts-as-data resource installation to be handled by the
alerting plugin instead of the rule registry. We're keeping the feature
flag in case we run into issues, but eventually we'll remove it and
clean up the rule registry code that relies on it. Changing this default
early will allow us to identify issues before the 8.8 feature freeze,
when we can still revert if needed.
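For reference, a minimal sketch of what the flipped default amounts to, assuming the flag lives in the alerting plugin's `@kbn/config-schema` definition (the exact shape of the real schema is elided here):
```
import { schema } from '@kbn/config-schema';

// Sketch only: flipping the default from false to true means framework
// alerts-as-data resources are installed unless a deployment opts out.
const alertingConfigSchema = schema.object({
  enableFrameworkAlerts: schema.boolean({ defaultValue: true }),
});
```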
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
resolves https://github.com/elastic/kibana/issues/142874
The alerting framework now generates an alert UUID for every alert it
creates. The UUID will be reused for alerts which continue to be active
on subsequent runs, until the alert recovers. When the same alert (alert
instance id) becomes active again, a new UUID will be generated. These
UUIDs then identify a "span" of events for a single alert.
The rule registry plugin was already adding these UUIDs to its own
alerts-as-data indices, and that code has now been changed to make use
of the new UUID the alerting framework generates.
- adds property in the rule task state
`alertInstances[alertInstanceId].meta.uuid`; this is where the alert
UUID is persisted across runs
- adds a new `Alert` method `getUuid(): string` that can be used by rule
executors to obtain the UUID of the alert they just retrieved from the
factory; the rule registry uses this to get the UUID generated by the
alerting framework (see the sketch after this list)
- for the event log, adds the property `kibana.alert.uuid` to
`*-instance` event log events; this is the same field the rule registry
writes into the alerts-as-data indices
- various changes to tests to accommodate new UUID data / methods
- migrates the UUID previously stored with lifecycle alerts in the alert
state, via the rule registry, *into* the new `meta.uuid` field in the
existing alert state.
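As a hedged sketch (names other than `getUuid()` are illustrative, not the exact framework types), a rule executor could read the UUID like this:
```
// Inside a rule executor: create/report an alert via the factory, then
// read back the framework-generated UUID with getUuid().
executor: async ({ services }) => {
  const alert = services.alertFactory.create('host-1'); // alert instance id
  const uuid = alert.getUuid(); // stable across runs until the alert recovers

  // e.g. the rule registry persists this value as kibana.alert.uuid
  return { state: {} };
};
```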
**Relates to:** https://github.com/elastic/kibana/pull/152900
## Summary
This PR adds the ability to wait for a rule's status by its rule id in functional tests. It is the result of splitting https://github.com/elastic/kibana/pull/150553 into isolated parts.
## Details
Depending on which kind of id is used (SO id or rule id), the behaviour under the hood differs: SO id lookups use the ES Get API, while rule id lookups use the ES Search API. Since the Get API is real-time but the Search API only sees refreshed data, a test may need an extra delay for ES to refresh when the rule status was awaited via SO id (Get API) but the logic under test uses the Search API. This PR removes such a delay from the rule export functional tests.
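To illustrate the difference (a sketch using the Elasticsearch JS client; the index and field names are hypothetical):
```
import { Client } from '@elastic/elasticsearch';

const es = new Client({ node: 'http://localhost:9200' });

async function waitThenSearch(soId: string, ruleId: string) {
  // GET by _id is real-time: it sees the latest write immediately.
  await es.get({ index: '.kibana', id: `alert:${soId}` });

  // _search only sees refreshed segments, so when the status wait went
  // through the Get API, refresh before asserting via the Search API.
  await es.indices.refresh({ index: '.kibana' });
  await es.search({
    index: '.kibana',
    query: { term: { 'alert.params.ruleId': ruleId } },
  });
}
```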
Resolves https://github.com/elastic/kibana/issues/151697
## Summary
In a previous [PR](https://github.com/elastic/kibana/pull/145581) we
started installing context-specific component templates, index
templates, and concrete write indices for framework alerts as data when
the `xpack.alerting.enableFrameworkAlerts` config flag is set to true.
In that PR we used a different naming pattern than what is used by the
rule registry for those resources. In this PR, we are aligning the
naming of these resources with the rule registry and installing these
resources on alerting plugin setup when `enableFrameworkAlerts: true`.
If the flag is set to false, the rule registry will continue to handle
this resource installation.
In this PR we are doing the following (the naming scheme is summarized in a short sketch after this list):
* Registering all rules currently registered with the rule registry with
the alerting framework. This registration allows the alerting framework
to build context specific component templates. Because this PR only
addresses resource installation, rules will continue to be registered
with the rule registry.
* When `enableFrameworkAlerts: true`:
* The framework installs the context specific component template with
the following naming convention: `.alerts-{context}.alerts-mappings`.
This matches what the rule registry currently installs, so the
transition should be seamless.
* The framework installs the context specific index template for the
`default` space with the following name:
`.alerts-{context}.alerts-default-index-template`. Space awareness will
be addressed in a followup PR. This matches the current rule registry
naming. This index template will reference
(1) ECS component template (if `useEcs: true`),
(2) context-specific component template,
(3) legacy alert component template and
(4) framework component template
where the legacy alert component template + framework component template
= technical component template (from the rule registry).
* The framework creates or updates the concrete write index for the
`default` space with the naming convention:
`.internal.alerts-{context}.alerts-default-000001`. Space awareness will
be addressed in a followup PR. This matches the current rule registry
naming.
* The installation of the index template & write index differs from the
rule registry in that it occurs on alerting plugin start vs the first
rule run.
* We modified the rule registry resource installer to skip installation
of these resources when `enableFrameworkAlerts: true`. In addition, it
will wait for the alerting resource installation promise so if a rule
runs before its resources are fully initialized, it will wait for
initialization to complete before writing.
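The naming scheme above can be summarized with a couple of illustrative helpers (a sketch, not the actual Kibana code):
```
// Resource names per rule context, as described in this PR.
const componentTemplateName = (context: string) =>
  `.alerts-${context}.alerts-mappings`;

const indexTemplateName = (context: string, space = 'default') =>
  `.alerts-${context}.alerts-${space}-index-template`;

const concreteWriteIndexName = (context: string, space = 'default') =>
  `.internal.alerts-${context}.alerts-${space}-000001`;

componentTemplateName('security');
// => .alerts-security.alerts-mappings
concreteWriteIndexName('observability.metrics');
// => .internal.alerts-observability.metrics.alerts-default-000001
```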
## To Verify
The following rule registry contexts are affected:
`observability.apm`
`observability.logs`
`observability.metrics`
`observability.slo`
`observability.uptime`
`security`
For each context, we should verify the following:
Note that if your rule context references the ECS mappings, there may
be differences in those mappings between `main` and this branch,
depending on whether you're running `main` with `enableFrameworkAlerts`
true or false. These differences are explained in the summary of this
prior PR: https://github.com/elastic/kibana/pull/150384, but essentially
we're aligning with the latest ECS fields. In the instructions, I
suggest running `main` with `enableFrameworkAlerts: true` to minimize
the differences caused by ECS changes.
**While running `main` with `enableFrameworkAlerts: true`:**
1. Get the context specific component template `GET
_component_template/.alerts-{context}.alerts-mappings`
2. Create a rule for this context that generates an alert, then:
3. Get the index template `GET
_index_template/.alerts-{context}.alerts-default-index-template`
4. Get the index mapping for the concrete index: `GET
.internal.alerts-{context}.alerts-default-000001/_mapping`
**While running this branch with `xpack.alerting.enableFrameworkAlerts:
true` (with a fresh ES instance):**
5. Get the context specific component template `GET
_component_template/.alerts-{context}.alerts-mappings`
6. Get the index template `GET
_index_template/.alerts-{context}.alerts-default-index-template`
7. Get the index mapping for the concrete index: `GET
.internal.alerts-{context}.alerts-default-000001/_mapping`
Note that you should not have to create a rule that generates alerts
before seeing these resources installed.
**Compare the component templates**
Compare 1 and 5. The difference should be:
* component template from this branch should have `_meta.managed: true`.
This is a flag indicating to the user that these templates are system
managed and should not be manually modified.
**Compare the index templates**
Compare 3 and 6. The differences should be:
* index template from this branch should have `managed: true` in the
`_meta` fields
* index template from this branch should not have a `priority` field.
This will be addressed in a followup PR
* index template from this branch should be composed of
`.alerts-legacy-alert-mappings` and `.alerts-framework-mappings` instead
of `.alerts-technical-mappings` but under the hood, these mappings are
equivalent.
**Compare the index mappings**
Compare 4 and 7. The difference should be:
* index mappings from this branch should have `_meta.managed: true`.
### Verify that the installed resource templates work as expected
1. Run this branch on a fresh ES install with
`xpack.alerting.enableFrameworkAlerts: true`.
2. Create a rule in your context that generates alerts.
3. Verify that there are no errors during rule execution.
4. Verify that the alerts show up in your alerts table as expected.
5. (For detection rules only): Run this branch with
`xpack.alerting.enableFrameworkAlerts: true` and verify rules in a
non-default space continue to create resources on first rule run and run
as expected.
6. (For detection rules only): Run this branch with
`xpack.alerting.enableFrameworkAlerts: true` and verify rule preview
continues to work as expected.
### Verify that the installed resource templates work with existing rule
registry resources
1. Run `main` or a previous version and create a rule in your context
that generates alerts.
2. Using the same ES data, switch to this branch with
`xpack.alerting.enableFrameworkAlerts: false` and verify Kibana starts
with no rule registry errors and the rule continues to run as expected.
3. Using the same ES data, switch to this branch with
`xpack.alerting.enableFrameworkAlerts: true` and verify Kibana starts
with no alerting or rule registry errors and the rule continues to run
as expected.
4. Verify the alerts show up on the alerts table as expected.
5. (For detection rules only): Run this branch with
`xpack.alerting.enableFrameworkAlerts: true` and verify rules in a
non-default space continue to create resources on first rule run and run
as expected.
6. (For detection rules only): Run this branch with
`xpack.alerting.enableFrameworkAlerts: true` and verify rule preview
continues to work as expected.
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Resolves https://github.com/elastic/kibana/issues/150358
## Summary
In a previous [PR](https://github.com/elastic/kibana/pull/145581) we
started installing a common component template for framework alerts as
data when the `xpack.alerting.enableFrameworkAlerts` config flag is set
to true. In that PR we used a different naming pattern than what is used
by the rule registry for its component templates.
In this PR we are doing the following:
* Renaming the installed `alerts-common-component-template` to
`.alerts-framework-mappings`.
* Creating and installing `.alerts-legacy-alert-mappings` component
template when `enableFrameworkAlerts: true` on alerting plugin setup
* The combination of the two component templates creates the same set of
mappings as the rule registry technical component template
* Creating and installing `.alerts-ecs-mappings` component template when
`enableFrameworkAlerts: true` on alerting plugin setup (when
`enableFrameworkAlerts: false`, the rule registry continues to install
this component template)
* Using the `@kbn/ecs` package provided by core to generate the ECS
field map. The rule registry will continue to install the existing ECS
field map, which is actually a subset of ECS fields
* Adding `useLegacy` and `useEcs` flags that allow rule types to specify
whether to include the legacy alerts component template and the ECS
component template when registering with framework alerts-as-data.
* Moved some common functions to alerting framework from the rule
registry
## Things to note
* When generating the ECS field map, we are now including the
`ignore_above` setting from the `@kbn/ecs` package. This changes the ECS
component template to include those settings. I tested updating an index
with just `"type":"keyword"` mappings to add the `ignore_above` field to
the mapping and had no issues, so this seems like an additive change
that will hopefully prevent problems in the future (a small sketch
follows the table below).
* The rule registry ECS component template also includes the technical
fields, which is redundant because the technical component template is
automatically installed for all index templates, so the framework ECS
component template only contains ECS fields.
| Previous mapping | Updated mapping |
| ----------- | ----------- |
| `{ "organization": { "type": "keyword" } }` | `{ "organization": {
"type": "keyword", "ignore_above": 1024 } }` |
## To Verify
### Verify that the generated component templates are as expected:
Fetch the following templates and mappings:
**While running `main`:**
1. Get the ECS component template `GET
_component_template/.alerts-ecs-mappings`
2. Get the technical component template `GET
_component_template/.alerts-technical-mappings`
3. Create a detection rule that creates an alert and then get the index
mapping for the concrete security alert index `GET
.internal.alerts-security.alerts-default-000001/_mapping`
**While running this branch with `xpack.alerting.enableFrameworkAlerts:
false`:**
4. Get the ECS component template `GET
_component_template/.alerts-ecs-mappings`
5. Get the technical component template `GET
_component_template/.alerts-technical-mappings`
6. Create a detection rule that creates an alert and then get the index
mapping for the concrete security alert index `GET
.internal.alerts-security.alerts-default-000001/_mapping`
**While running this branch with `xpack.alerting.enableFrameworkAlerts:
true`:**
7. Get the ECS component template `GET
_component_template/.alerts-ecs-mappings`
8. Get the technical component template `GET
_component_template/.alerts-technical-mappings`
9. Create a detection rule that creates an alert and then get the index
mapping for the concrete security alert index `GET
.internal.alerts-security.alerts-default-000001/_mapping`
10. Verify that component templates exist for
`.alerts-framework-mappings` and `.alerts-legacy-alert-mappings`
**Compare the ECS component templates**
Compare 1 and 4 (ECS component template from `main` and installed by
rule registry in this branch). The difference should be:
* no difference in ECS fields
* because the rule registry ECS component template also includes
technical fields, you will see the 2 new technical fields in this branch
Compare 4 and 7 (ECS component template from rule registry & alerting
framework in this branch).
* some new ECS fields for alerting installed template
* each `keyword` mapped field for alerting installed template should
have `ignore_above` setting
* no `kibana.*` fields in the alerting installed template
**Compare the technical component templates**
Compare 2 and 5 (technical component template from `main` and installed
by rule registry in this branch). The difference should be:
* 2 new `kibana.alert` fields (`flapping_history` and `last_detected`)
Compare 5 and 8 (technical component template from rule registry &
alerting framework in this branch).
* there should be no difference!
**Compare the index mappings**
Compare 3 and 6 (index mapping from `main` and installed by rule
registry in this branch). The difference should be:
* 2 new `kibana.alert` fields (`flapping_history` and `last_detected`)
Compare 6 and 9 (index mapping from rule registry & alerting framework
in this branch).
* some new ECS fields
* each `keyword` mapped ECS field should have `ignore_above` setting
### Verify that the generated component templates work with existing
rule registry index templates & indices:
1. Run `main` or a previous version and create a rule that uses both ECS
component templates & technical component templates (detection rules use
both). Let it run a few times.
2. Using the same ES data, switch to this branch with
`xpack.alerting.enableFrameworkAlerts: false` and verify Kibana starts
with no rule registry errors and the rule continues to run as expected.
3. Using the same ES data, switch to this branch with
`xpack.alerting.enableFrameworkAlerts: true` and verify Kibana starts
with no alerting or rule registry errors and the rule continues to run
as expected. Verify that the mapping on the existing
`.internal.alerts-security.alerts-default-000001` has been updated to
include the latest ECS mappings and the two new technical fields.
### Checklist
Delete any items that are not applicable to this PR.
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Mike Côté <mikecote@users.noreply.github.com>
Resolves https://github.com/elastic/kibana/issues/150331
## Summary
In a previous [PR](https://github.com/elastic/kibana/pull/145581) we
started installing an ILM policy for framework alerts as data when the
`xpack.alerting.enableFrameworkAlerts` config flag is set to true. In
that PR we used a different name than what is used by the rule registry
even though the policy bodies were the same.
In this PR, we are consolidating the naming of the two ILM policies so
that we are only ever installing 1 policy. The
`xpack.alerting.enableFrameworkAlerts` config is used to determine which
plugin is responsible for installing the policy. When set to true, the
alerting plugin installs the policy; when set to false, the rule
registry installs it. This is an incremental step toward the alerting
framework absorbing all of the resource installation functionality of
the rule registry.
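Conceptually (a sketch under the assumption that both plugins share one install helper; none of these names are the actual source):
```
import type { ElasticsearchClient } from '@kbn/core/server';

const ILM_POLICY_NAME = '.alerts-ilm-policy';

// Whichever plugin owns installation per the config flag calls this;
// the policy body is identical either way, so ES ends up with a single
// shared policy regardless of the flag's value.
async function installAlertsIlmPolicy(es: ElasticsearchClient, policy: object) {
  await es.ilm.putLifecycle({ name: ILM_POLICY_NAME, policy });
}
```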
## To Verify
A few things to verify:
1. Verify that the alerting plugin installs the policy when
`xpack.alerting.enableFrameworkAlerts=true`
* Set `xpack.alerting.enableFrameworkAlerts: true` in your Kibana config
* Start a fresh ES and Kibana instance
* Verify that an ILM policy with name `.alerts-ilm-policy` is installed
* Create a metric threshold rule that creates an alert
* Verify that there is an index template called
`.alerts-observability.metrics.alerts-default-index-template` that uses
the `.alerts-ilm-policy` policy
2. Verify that the rule registry installs the policy when
`xpack.alerting.enableFrameworkAlerts=false`
* Set `xpack.alerting.enableFrameworkAlerts: false` in your Kibana
config
* Start a fresh ES and Kibana instance
* Verify that an ILM policy with name `.alerts-ilm-policy` is installed
* Create a metric threshold rule that creates an alert
* Verify that there is an index template called
`.alerts-observability.metrics.alerts-default-index-template` that uses
the `.alerts-ilm-policy` policy
3. Verify that we can switch between configurations
* Set `xpack.alerting.enableFrameworkAlerts: false` in your Kibana
config
* Start a fresh ES and Kibana instance
* Verify that an ILM policy with name `.alerts-ilm-policy` is installed
* Create a metric threshold rule that creates an alert
* Verify that there is an index template called
`.alerts-observability.metrics.alerts-default-index-template` that uses
the `.alerts-ilm-policy` policy
* Change `xpack.alerting.enableFrameworkAlerts: true`
* Restart Kibana
* Verify there are no errors, and the rule can still write alerts
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Fixes https://github.com/elastic/kibana/issues/149344
This PR migrates all plugins to packages automatically. It does this
using `node scripts/lint_packages` to migrate `kibana.json` files to
`kibana.jsonc` files. By doing this automatically we can simplify many
build and testing procedures to support only packages, rather than both
"packages" and "synthetic packages" (basically pointers to plugins).
The majority of changes are in operations-related code, so we'll be
having operations review this before marking it ready for review. The
vast majority of the code owners are simply pinged because we deleted
all `kibana.json` files and replaced them with `kibana.jsonc` files, so
we plan on leaving the PR ready-for-review for about 24 hours before
merging (after feature freeze), assuming we don't have any blockers
(especially from @elastic/kibana-core since there are a few core
specific changes, though the majority were handled in #149370).
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Resolves https://github.com/elastic/kibana/issues/145929
## Summary
Updates previous flapping tests to use the new flapping settings
configs.
Updates the flapping logic to use the flapping configs instead of
hardcoded values. Calls the flapping API on every rule execution, then
passes the flapping settings into the rule executors so they can be used
by the rule registry.
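A hedged sketch of the settings-driven check that replaces the hardcoded values (the shapes are illustrative, not the exact Kibana types):
```
interface RulesSettingsFlapping {
  enabled: boolean;
  lookBackWindow: number;        // number of recent runs to consider
  statusChangeThreshold: number; // status flips needed to count as flapping
}

// flappingHistory holds one boolean per recent run: true where the
// alert flipped between active and recovered on that run.
const isFlapping = (
  settings: RulesSettingsFlapping,
  flappingHistory: boolean[]
): boolean => {
  if (!settings.enabled) return false;
  const recent = flappingHistory.slice(-settings.lookBackWindow);
  return recent.filter(Boolean).length >= settings.statusChangeThreshold;
};
```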
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
### To verify
I think it's helpful to hide whitespace changes when reviewing this PR.
- The flapping logic should remain the same, and all previous tests
should pass. I only updated them to pass in the flapping settings.
- Create rules, and set flapping settings in the ui and see the flapping
behavior change for your rules.
- Verify that the
`x-pack/test/alerting_api_integration/spaces_only/tests/alerting/event_log.ts`
tests run with the new flapping configs and output the results we would
expect
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
This PR upgrades uuid to its latest version, `9.0.0`.
The previously used default version, `v4`, was kept where it was already
in use, and places using `v1` or `v5` continue to use those versions.
This latest version removed the deep import feature, and since we are
not using tree shaking, this increased our bundles by a significant
size. As such, I've moved this dependency into the `ui-shared-deps-npm`
bundle.
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
The location of plugins was previously somewhat irrelevant, but as we
move into packages it's more important that we can find all plugins in
the repository, and we would like to do that without maintaining a
manifest somewhere. To make this possible we plan to find any
plugin/package by spotting all kibana.json files which are not
"fixtures". This allows plugin-like code (but not actual plugin code) to
exist for testing purposes, but it must be within some form of
"fixtures" directory; any plugin that isn't in a fixtures directory will
be automatically pulled into the system (though test plugins, examples,
etc. will still only be loaded when the plugin's path is passed via
`--plugin-path`, the system will know about them and use that knowledge
for other things).
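Roughly, the discovery could look like this (a sketch assuming `globby` is available; not the actual implementation):
```
import globby from 'globby';

// Find every kibana.json manifest, then drop anything under a
// "fixtures" directory: that is plugin-like test code, not a plugin.
async function findPluginManifests(repoRoot: string): Promise<string[]> {
  const manifests = await globby('**/kibana.json', {
    cwd: repoRoot,
    absolute: true,
  });
  return manifests.filter((p) => !p.split('/').includes('fixtures'));
}
```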
Since this is just a rename Operations will review and merge by EOD Jan
12th unless someone has a blocking concern.
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
In this PR, I'm changing the return type of rule executors from `return
state;` to `return { state };`.
This change had to touch all rule type executors so they return `state`
as a key. In the future, the framework could accept more than `state`
in the object, such as warnings.
**Before:**
```
executor: async (...) {
const state = {...};
return state;
}
```
**After:**
```
executor: async (...) {
const state = {...};
return { state };
}
```
**Future:**
```
executor: async (...) {
return {
state: {...},
warnings: [...],
metrics: {...},
...
};
}
```
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Resolves https://github.com/elastic/kibana/issues/147531.
In this PR, I'm making the alert summary actions exclude muted alerts.
## To verify
**Scenario 1 (summary per rule run)**
1. Install sample web logs by visiting
`/app/home#/tutorial_directory/sampleData`, clicking `Other sample data
sets` and clicking `Add data` for `Sample web logs`
2. Add `kibana_sample_data_logs` index pattern to O11y settings by
visiting `/app/metrics/explorer`, clicking `Settings` on the top right,
appending `,kibana_sample_data_logs` (with leading comma) to the
`Metrics indices` field and clicking `Apply`.
3. Create a metric threshold rule that generates multiple alerts by
using the following curl command (adjust the URL for your setup)
```
curl -XPOST 'http://elastic:changeme@localhost:5601/api/alerting/rule' \
  -H "Content-type: application/json" -H "kbn-xsrf: foo" \
  -d '{"params":{"criteria":[{"metric":"bytes","comparator":">","threshold":[0],"timeSize":1,"timeUnit":"h","aggType":"avg"}],"sourceId":"default","alertOnNoData":true,"alertOnGroupDisappear":true,"groupBy":["agent.keyword"]},"consumer":"infrastructure","schedule":{"interval":"10s"},"tags":[],"name":"test","rule_type_id":"metrics.alert.threshold","actions":[{"frequency":{"summary":true,"notify_when":"onActiveAlert"},"group":"metrics.threshold.fired","id":"preconfigured-server-log","params":{"level":"info","message":"Found {{alerts.all.count}} alerts. {{alerts.new.count}} new, {{alerts.ongoing.count}} ongoing, {{alerts.recovered.count}} recovered."}}]}'
```
4. Observe 3 alerts in the summary (new then ongoing)
5. Mute one of the alerts by using the rule details page alerts tab
6. Observe only 2 alerts are now in the summary
**Scenario 2 (summary over a time span)**
Same steps as above except for step 3, use the following curl command
(summaries will generate every 30s)
```
curl -XPOST 'http://elastic:changeme@localhost:5601/api/alerting/rule' \
  -H "Content-type: application/json" -H "kbn-xsrf: foo" \
  -d '{"params":{"criteria":[{"metric":"bytes","comparator":">","threshold":[0],"timeSize":1,"timeUnit":"h","aggType":"avg"}],"sourceId":"default","alertOnNoData":true,"alertOnGroupDisappear":true,"groupBy":["agent.keyword"]},"consumer":"infrastructure","schedule":{"interval":"10s"},"tags":[],"name":"test","rule_type_id":"metrics.alert.threshold","actions":[{"frequency":{"summary":true,"notify_when":"onThrottleInterval","throttle":"30s"},"group":"metrics.threshold.fired","id":"preconfigured-server-log","params":{"level":"info","message":"Found {{alerts.all.count}} alerts. {{alerts.new.count}} new, {{alerts.ongoing.count}} ongoing, {{alerts.recovered.count}} recovered."}}]}'
```
Resolves https://github.com/elastic/kibana/issues/143443
## Summary
Added processing to track flapping state, and detect when an alert is
flapping.
When an alert is determined to be flapping, we will log a message for
now.
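For illustration, the per-alert bookkeeping might look like this (the names are assumptions; the window length became configurable in the later flapping-settings PR above):
```
// Append whether this run flipped the alert between active and
// recovered, keeping only the most recent window of runs.
function updateFlappingHistory(
  history: boolean[],
  statusChanged: boolean,
  maxRuns = 20
): boolean[] {
  return [...history, statusChanged].slice(-maxRuns);
}
```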
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
### To verify
- Create a rule that will change between active and recovered. I like to
create a rule like `index threshold` where I can force the state to
change.
- Look at the task manager document for this rule and verify that the
`flappingHistory` in the `state` is being updated properly.
- Let the rule run and verify that, if the alert is flapping, this is
reflected in AAD, the Event Log, and the alert summary.
Co-authored-by: Patrick Mueller <patrick.mueller@elastic.co>
Co-authored-by: Mike Côté <mikecote@users.noreply.github.com>
* Register rule data client with alerting
* wip
* get summarized alerts
* cleanup
* Adding queries
* Adding unit tests
* Adding condition to queries in order to limit number of alerts returned
* Fixing runtime mapping script
* Removing runtime mappings
* Adding function to persistence and lifecycle wrappers
* Adding functional test
* Updating README
* Adding comments
* lte to lt
* Revert "lte to lt"
This reverts commit bbc2604a00.
* lte to lt
* Fixing test
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
* Separate rule execution logic tests and move bulk of the tests to preview for speed
* Remove bad dependency
* Update unit test snapshot
* Fix flaky test
* Fix another flaky test
* Fix more imports
* Remove superfluous return type
**Partially addresses:** https://github.com/elastic/kibana/issues/138600, https://github.com/elastic/kibana/issues/92169, https://github.com/elastic/kibana/issues/138606
**Addresses:** https://github.com/elastic/kibana/issues/136957, https://github.com/elastic/kibana/issues/136962, https://github.com/elastic/kibana/issues/138614
## Summary
In this PR we are:
- Splitting the Detection Engine into subdomains ([ticket](https://github.com/elastic/kibana/issues/138600)). Every subdomain got its own folder under `detection_engine`, and we moved some (not all) code into them. More on that is below. New subdomains introduced:
- `fleet_integrations`
- `prebuilt_rules`
- `rule_actions_legacy`
- `rule_exceptions`
- `rule_management`
- `rule_preview`
- `rule_schema`
- `rule_creation_ui`
- `rule_details_ui`
- `rule_management_ui`
- `rule_exceptions_ui`
- Updating the CODEOWNERS file accordingly.
- Refactoring the Rule Management page and the Rules table. Our main focus was on the way how we communicate with the API endpoints, how we cache and invalidate the fetched data, and how this code is organized in the codebase. More on that is below.
- Increasing the bundle size limit. This is going to be decreased back in a follow-up PR ([ticket](https://github.com/elastic/kibana/issues/143532))
## Restructuring folders into subdomains
For the background and problem statement, please refer to https://github.com/elastic/kibana/issues/138600
We were focusing on code that is closely related to the Rules area: either owned by us de facto (we work on it) or owned by us de jure (according to the CODEOWNERS file). Our goal was to explicitly extract code that we don't own de facto into separate subdomains, transfer ownership to other area teams, and reflect this in the CODEOWNERS file. On the other hand, we wanted the code that we own to also be organized in clear subdomains that we could easily own via CODEOWNERS. We didn't touch the code that is already explicitly owned by other area teams, e.g. `x-pack/plugins/security_solution/server/lib/detection_engine/rule_types`.
This is a draft "domain map" - an architectural diagram that shows how the Detection Engine _could_ be split into subdomains. It's more a TO-BE idea/aspiration rather than an AS-IS statement. Any feedback, critiques, and suggestions would be extremely appreciated!
<img width="2592" alt="Screenshot 2022-10-18 at 16 08 40" src="https://user-images.githubusercontent.com/7359339/196453965-b65f5b49-9a33-4d90-bb48-1347e9576223.png">
It shows the flow of dependencies between subdomains and proposes some rules:
- The whole graph of dependencies between all subdomains should be a DAG. There should not be bi-directional or circular dependencies between them.
- **Generic subdomains** represent some general knowledge that can be used/applied outside of the Detection Engine.
- Can depend on some generic kbn packages, npm packages or utils.
- Can't depend on any other Detection Engine subdomains.
- **Crosscutting subdomains** represent some code that can be common to / shared between many other subdomains. This could be some very common domain models and API schemas.
- Can depend on generic subdomains.
- Can depend on other crosscutting subdomains (dependencies between them must form a DAG).
- Can't depend on core or UI subdomains.
- **Core subdomains** contain most of the "meat" of the Detection Engine: domain models, server-side and client-side business logic, server-side API endpoints, client-side UI (potentially shareable between several pages).
- Can depend on crosscutting and generic subdomains.
- Can depend on other core subdomains (dependencies between them must form a DAG).
- Can't depend on UI subdomains.
- **UI subdomains** contain the implementation of pages related to the Detection Engine. Every page can easily depend on several core subdomains, so these subdomains are on top of everything.
- Can depend on any other subdomains. Dependencies must form a DAG.
Dashed lines show some existing dependencies that we think should be eliminated.
Ownership TO-BE is color-coded. We updated the CODEOWNERS file according to the new folders.
The folder restructuring is not 100% finished but we did a big part of it. Most of the FE code continues to live in legacy folders, e.g. see `x-pack/plugins/security_solution/public/detections`. So this work is to be continued...
## Refactoring of Rule Management FE
- [x] https://github.com/elastic/kibana/issues/136957 For effective HTTP request caching and deduplication, we've migrated all data fetching logic to `useQuery` and `useMutation` hooks from `react-query`. That allowed us to introduce the following improvements to our codebase:
* All outgoing HTTP requests are now automatically deduplicated. That means that data fetching hooks like `useRule` could be used on any level in the component tree to access response data directly. So, no need to put the hook on the top level anymore and use prop-drilling to make the response data available to all children components that require it.
* All HTTP responses are now cached with the default TTL of 5 minutes—no more redundant requests. With a hot cache, transitions to some pages now happen immediately.
- [x] https://github.com/elastic/kibana/issues/136962 Data fetching hooks of the Rules Area are now organized in one place. `security_solution/public/detection_engine/rule_management/api/hooks` contains an abstraction layer on top of Kibana's HTTP client (a sketch of the pattern follows this list). All data fetching should happen exclusively through that layer to ensure that:
* Mutation queries automatically invalidate associated cache entries.
* Optimistic updates or updates from mutation responses could be implemented centrally where possible.
- [x] https://github.com/elastic/kibana/issues/92169 From some of the Rule Management components, logic was extracted to hooks located in `security_solution/public/detection_engine/rule_management/logic`.
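A minimal sketch of that abstraction layer, using react-query's v3-style positional signatures (the hook names beyond `useRule` and all endpoint details are assumptions):
```
import { useQuery, useMutation, useQueryClient } from 'react-query';

declare function fetchRule(id: string): Promise<{ id: string }>;
declare function patchRule(rule: { id: string }): Promise<{ id: string }>;

// Reads are cached and deduplicated, so any component may call useRule
// directly instead of relying on prop-drilling from the page level.
const useRule = (ruleId: string) =>
  useQuery(['detectionEngine', 'rule', ruleId], () => fetchRule(ruleId), {
    staleTime: 5 * 60 * 1000, // the 5 minute TTL mentioned above
  });

// Mutations invalidate the associated cache entries on success.
const useUpdateRule = () => {
  const queryClient = useQueryClient();
  return useMutation(patchRule, {
    onSuccess: (rule) =>
      queryClient.invalidateQueries(['detectionEngine', 'rule', rule.id]),
  });
};
```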
### Checklist
- [x] [Unit or functional tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html) were updated or added to match the most common scenarios
* Remove sending index for fetching data
* Fix test
* Revert "Remove sending index for fetching data"
This reverts commit 29caa5ab09.
* Revert "Fix test"
This reverts commit e2b03b1ac5.
* Change getting alert index based on spacename logic
* Fix APM Alert test
* Rename DEFAULT_SPACE to INDEX_ALIAS
* [CI] Auto-commit changed files from 'node scripts/precommit_hook.js --ref HEAD~1..HEAD --fix'
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
* wip I
* add alert table state in case
* [CI] Auto-commit changed files from 'node scripts/eslint --no-cache --fix'
* add new API to get FeatureID from registrationContext and update UI to use this new API
* rm dead code
* [CI] Auto-commit changed files from 'node scripts/eslint --no-cache --fix'
* remove unnecessary memo
* adds tests for case view helpers
* Move http call to API and add tests for getFeatureIds
* fix type + unit test
* add unit tests + cleanup
* add new api integration test for _feature_ids
* [CI] Auto-commit changed files from 'node scripts/eslint --no-cache --fix'
* Fix small type creating typescript slowness
* remove console log
* use import type for validfeatureId
* force any to improve typescript performance
* Update APM (#132270)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
* [ResponseOps][Docs] Updating ServiceNow docs with OAuth setup instructions (#131344)
* Updating ServiceNow docs. Need screenshots
* Adding screenshots
* Fix nested screenshots and lists
* Tweaks and screenshots
* Updates
* blergh
* Apply suggestions from code review
Co-authored-by: Lisa Cawley <lcawley@elastic.co>
* Apply suggestions from code review
Co-authored-by: Mike Côté <mikecote@users.noreply.github.com>
Co-authored-by: lcawl <lcawley@elastic.co>
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Mike Côté <mikecote@users.noreply.github.com>
* Show polling options when 'Data streams' option is selected in the Console Settings modal. (#132277)
* [Osquery] Make Osquery All with All base privilege (#130523)
* [XY] Add normalizeTable function to work correctly with esdocs (#131917)
* Add normalizeTable function to work correctly with esdocs
* Fix types
* Fix types
* Fix CI
* Fix CI
* Some fixes
* Remove fallback with min/max value for domain
* Added tests
* Some refactoring
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Yaroslav Kuznietsov <kuznetsov.yaroslav.yk@gmail.com>
* [Osquery] Add default osquery_saved_query objects (#129461)
* [Unified Search] Show error message for invalid date filter value (#131290)
* feat: added show error message for invalid date
* refact: move logic in HOC
* feat: refactoring code and added translation
* refact show error
* refact: show error message
* refact: remove translation
* refactor: changed menu for show FilterEdit
* fix: open/close popover
* feat: field.type => KBN_FIELD_TYPES
* feat: remove extra code with with input check and refactored filter item
* feat: added tests and refactoring code
* refact: getFieldValidityAndErrorMessage
* feat: return isInvalid checking in valur input type for string, ip
* Update navigation landing pages to use appLinks config (#132027)
* Update navigation landing pages to use appLinks config
* Please code review
* align app links changes
* Update links descriptions
* Rollback title changes
* Fix wrong links descriptions
* Fix unit tests
* Fix description
Co-authored-by: semd <sergi.massaneda@elastic.co>
* [Cloud Posture] add resource findings page flyout (#132243)
* [Discover] Add a tour for Document Explorer (#131125)
* [Discover] Add "Take a tour" button to the Document Explorer callout
* [Discover] Tmp
* [Discover] Add a first Document Explorer tour step
* [Discover] Add other Document Explorer tour steps
* [Discover] Update tour steps positioning
* [Discover] Add gifs to tour steps
* [Discover] Refactor how tour steps are registered
* [Discover] Add new step to the tour. Update tour steps text.
* [Discover] Improve steps positioning
* [Discover] Fix positioning for Add field step
* [Discover] Add icons to tour steps
* [Discover] Reorganize components
* [Discover] Skip Columns step when it's not available
* [Discover] Rename components
* [Discover] Add some tests
* [Discover] Fix positioning
* [Discover] Fix props
* [Discover] Render steps only if the tour is active
* [Discover] Update gifs
* [Discover] Add image alt text for gifs
* [Discover] Tag the Take tour button
* [Discover] Update text and tests
* [Discover] Add more tests
* [Discover] Rename assets directory
* [Discover] Fix tour in mobile view. Improve steps positioning and animation.
* [Discover] Update text in tour steps
* [Discover] Update sort.gif
* [Discover] Update image width
* Update src/plugins/discover/public/components/discover_tour/discover_tour_provider.tsx
Co-authored-by: gchaps <33642766+gchaps@users.noreply.github.com>
* Update src/plugins/discover/public/components/discover_tour/discover_tour_provider.tsx
Co-authored-by: gchaps <33642766+gchaps@users.noreply.github.com>
* [Discover] Update sort.gif
* [Discover] Fix code style
Co-authored-by: gchaps <33642766+gchaps@users.noreply.github.com>
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
* [XY] Add `minTimeBarInterval` arg (#128726)
* Added `xAxisInterval` arg
* Add validation
* Add tests
* Rename xAxisInterval to minTimeBarInterval and add validation
* Fix imports
* Add tests to validation
* Fix conflicts
* [CI] Auto-commit changed files from 'node scripts/precommit_hook.js --ref HEAD~1..HEAD --fix'
* Fix tests
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
* do not use barrel imports
* do not use barrel import
* do not use barrel import
* do not use barrel imports
* do not use barrel import
* import types
* Add tests
* Fix cases bundle size
* Add more tests
* [Fleet] Add new API to get current upgrades (#132276)
* Add support of Data View switching for Agg-Based visualizations (#132184)
* Add support of Data View switching for Agg-Based visualizations
* fix CI
* add use_date_view_updates
* implement sync with state
* cleanup
* cleanup
* cleanup
* Update index.ts
* fix PR comments
* Update use_data_view_updates.ts
* Update use_data_view_updates.ts
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
* [Security Solution] Responsive styling fixes (#131951)
* [Discover] Add Analytics No Data Page (#131965)
* [Discover] Add Analytics No Data Page
* Make showEmptyPrompt parameter optional
* Remove unused import
* Remove unnecessary test
* Fix test
* Update failing test?
* Update failing test
* Changing the order of functional tests
* Fix error handling
* Addressing PR comments
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
* Remove barrel export from public index file
* remove barrel export
* Re-export missing exports
* Turn off feature flag
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Esteban Beltran <esteban.beltran@elastic.co>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Renovate Bot <bot@renovateapp.com>
Co-authored-by: Ying Mao <ying.mao@elastic.co>
Co-authored-by: lcawl <lcawley@elastic.co>
Co-authored-by: Mike Côté <mikecote@users.noreply.github.com>
Co-authored-by: CJ Cenizal <cj.cenizal@elastic.co>
Co-authored-by: Tomasz Ciecierski <ciecierskitomek@gmail.com>
Co-authored-by: Uladzislau Lasitsa <Uladzislau_Lasitsa@epam.com>
Co-authored-by: Yaroslav Kuznietsov <kuznetsov.yaroslav.yk@gmail.com>
Co-authored-by: Nodir Latipov <nodir.latypov@gmail.com>
Co-authored-by: Pablo Machado <pablo.nevesmachado@elastic.co>
Co-authored-by: semd <sergi.massaneda@elastic.co>
Co-authored-by: Or Ouziel <or.ouziel@elastic.co>
Co-authored-by: Julia Rechkunova <julia.rechkunova@elastic.co>
Co-authored-by: gchaps <33642766+gchaps@users.noreply.github.com>
Co-authored-by: Christos Nasikas <christos.nasikas@elastic.co>
Co-authored-by: Nicolas Chaulet <nicolas.chaulet@elastic.co>
Co-authored-by: Alexey Antonov <alexwizp@gmail.com>
Co-authored-by: Steph Milovic <stephanie.milovic@elastic.co>
Co-authored-by: Maja Grubic <maja.grubic@elastic.co>
* sort and cursor plumbing for alertsclient
* process events route will now grab alerts for the page of events being requested. range / cursor support added to alerts client.
* handling of missing event.action in some edge case process events
* fixed fake session leader overwriting the original event it was based on
* deduping added for children, alerts. fix to alerts route
* fake process creation cleaned up; will now try to create fake parents from widened event context. This reduces the number of potentially orphaned processes in the tree
* fixed infinite loop regression :)
* tests fixed
* tweaks to inline alert details and test fixes
* type fix
* added test for new "sort" property in AlertsClient.find
* [CI] Auto-commit changed files from 'node scripts/eslint --no-cache --fix'
* pagination added to alerts tab
* test fixes
* addressed awp team comments
* || -> ??
* e2e tests added for sort and search_after options
* fixed test / type check
* fixed import issue
* restored whitespace
Co-authored-by: mitodrummer <karlgodard@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
* [ftr] automatically determine config run order
* split lens config into two groups
* support ftr configs always running against CI
* Split detection_engine_api_integration rule exception list tests
* Add configs from previous commit
* [ftr] remove testMetadata and maintain a unique lifecycle instance per run
* Revert "[ftr] remove testMetadata and maintain a unique lifecycle instance per run"
This reverts commit d2b4fdb824.
* Split alerting_api_integration/security_and_spaces tests
* Add groups to yaml
* Revert "Revert "[ftr] remove testMetadata and maintain a unique lifecycle instance per run""
This reverts commit 56232eea68.
* stop ES more forcefully and fix timeout
* only cleanup lifecycle phases when the cleanup is totally complete
* only use kill when cleaning up an esTestInstance
* fix broken import
* fix runOptions.alwaysUseSource implementation
* fix config access
* fix x-pack/ccs config
* fix ml import file paths
* update kibana build id
* revert array.concat() change
* fix baseConfig usage
* fix pie chart data
* split up maps tests
* pull in all of group5 so that es archives are loaded correctly
* add to ftr configs.yml
* fix pie chart data without breaking legacy version
* fix more pie_chart stuff in new vis lib
* restore normal PR tasks
* bump kibana-buildkite-library
* remove ciGroup validation
* remove the script which is no longer called from checks.sh
* [CI] Auto-commit changed files from 'yarn kbn run build -i @kbn/pm'
* adapt flaky test runner scripts to handle ftrConfig paths
* fix types in alerting_api_integration
* improve flaky config parsing and use non-local var name for passing explicit configs to ftr_configs.sh
* Split xpack dashboard tests
* Add configs
* [flaky] remove key from ftr-config steps
* [CI] Auto-commit changed files from 'node scripts/eslint --no-cache --fix'
* restore cypress builds
* remove ciGroups from FTR config files
* fixup some docs
* add temporary script to hunt for FTR config files
* use config.base.js naming for clarity
* use script to power ftr_configs.yml
* remove usage of removed x-pack/scripts/functional_tests
* fix test names in dashboard snapshots
* bump kibana-buildkite-library
* Try retrying only failed configs
* be a little quieter about trying to get testStats from configs with testRunners defined
* Remove test code
* bump kibana-buildkite-library
* update es_snapshot and on_merge jobs too
* track duration and exit code for each config and print it at the end of the script
* store results in order, rather than by key, in case there are duplicates in $config
* bash is hard
* fix env source and use +e rather than disabling e for whole file
* bash sucks
* print config summary in jest jobs too
* define results in jest_parallel.sh
* simplify config summary print, format times a little better
* fix reference to unbound time variable, use better variable name
* skip the newline between each result
* finish with the nitpicking
* sync changes with ftr_configs.sh
* refuse to execute config files which aren't listed in the .buildkite/ftr_configs.yml
* fix config.edge.js base config import paths
* fix some readmes
* resolve paths from ftr_configs manifest
* fix readConfigFile tests
* just allow __fixtures__ configs
* list a few more cypress config files
* install the main branch of kibana-buildkite-library
* split up lens group1
* move ml data_visualizer tests to their own config
* fix import paths
* fix more imports
* install specific commit of buildkite-pipeline-library
* sort configs in ftr_configs.yml
* bump kibana-buildkite-library
* remove temporary script
* fix env var for limiting config types
* Update docs/developer/contributing/development-functional-tests.asciidoc
Co-authored-by: Christiane (Tina) Heiligers <christiane.heiligers@elastic.co>
* produce a JUnit report for saved objects field count
* apply standard concurrency limits from flaky test runner
* support customizing FTR concurrency via the env
Co-authored-by: Brian Seeders <brian.seeders@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Christiane (Tina) Heiligers <christiane.heiligers@elastic.co>
* fix bug
* fix unit test
* bring tests back alive after a long CPR
* fix test and bring back recursive aggs
* I need to do an intersection and not a union
* fix last integration test
* Privatize
* Add test
* Fix types
* debug for ci
* try fetching version
* Use this
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
* Changing structure of minimumScheduleInterval config
* Updating rules client logic to follow enforce flag
* Updating UI to use enforce value
* Updating config key in functional tests
* Fixes
* Fixes
* Updating help text
* Wording suggestions from PR review
* Log warning instead of throwing an error if rule has default interval less than minimum
* Updating default interval to be minimum if minimum is greater than hardcoded default
* Fixing checks
* Fixing tests
* Fixing tests
* Fixing config
* Fixing checks
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
* Removing minimumScheduleInterval on rule type registration
* Adding minimumScheduleInterval to config and enforcing in rule type registry
* Validating interval on create and update
* Fixing types and tests
* Fixing types and tests
* Fixing types and tests
* Passing config to client and using to validate on rule creation
* Fixing small bug and tests
* Fixing tests
* Fixing tests
* Fixing tests
* Updating interval in docs
* Updating interval in docs
* Updating UI copy
* Fixing types and tests
* Fixing i18n
* Fixing tests from bad merge
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
* Initial code for search strategy in rule registry for use in triggers actions ui
* WIP
* More
* Bump this up
* Add a couple basic tests
* More separation
* Some api tests
* Fix types
* fix type
* Remove tests
* add this back in, not sure why this happened
* Remove test code
* PR feedback
* Fix typing
* Fix unit tests
* Skip this test due to errors
* Add more tests
* Use fields api
* Add issue link
* PR feedback
* Fix types and test
* Use nested key TS definition
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>