Closes #199315
## Summary
This PR changes the Maintenance Window UI to respect the date format
configured in Kibana's advanced settings.
Three places needed changes:
- Maintenance window list.
- Maintenance window creation page.
- Event popover in the maintenance window list (for recurring MWs).
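A minimal sketch of the pattern, assuming the format is read from the
`dateFormat` advanced setting through the ui settings client (the helper
name is hypothetical):

```typescript
import moment from 'moment';
import type { IUiSettingsClient } from '@kbn/core/public';

// Hypothetical helper: format a maintenance window timestamp using the
// 'dateFormat' advanced setting instead of a hard-coded format string.
export const formatMaintenanceWindowDate = (
  uiSettings: IUiSettingsClient,
  timestamp: string
): string => {
  const dateFormat = uiSettings.get<string>('dateFormat', 'MMM D, YYYY @ HH:mm:ss.SSS');
  return moment(timestamp).format(dateFormat);
};
```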
## Summary
As part of the Expandable Findings flyout work, we need to move some
constants, types, functions, and components into the Security Solution
plugin or a shared package.
This PR is phase 2 for Findings (Misconfiguration flyout), which
includes moving functions into a shared package or the Security Solution
plugin.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## 📓 Summary
Closes https://github.com/elastic/streams-program/issues/102
Closes https://github.com/elastic/streams-program/issues/159
This rework of the enrichment state management introduces XState as the
state library, to prepare for scaling the enrichment part to more
processors and to improve performance by reducing unnecessary side effects.
## 🤓 Reviewers note
**There is a lot to digest in this PR; I'm open to any suggestion, and I
left some notes around to guide the review.
This is also far from perfect, as there is margin for other minor DX
improvements for consuming the state machines, but that will all come in
follow-up work after we resolve prioritized work such as integrating the
Schema Editor.**
Most of the changes in this PR are about the state management for the
stream enrichment, but it also touches some other areas to integrate the
event-based flow.
### Stream enrichment machine
This machine handles the complexity around updating/promoting/deleting
processors and the available simulation states.
It's a root-level machine that spawns and manages its child machines:
one for the **simulation** behaviour and one for each **processor**
instantiated.
<img width="950" alt="Screenshot 2025-02-27 at 17 10 03"
src="https://github.com/user-attachments/assets/756a6668-600d-4863-965e-4fc8ccd3a69f"
/>
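A minimal sketch of the spawning pattern, assuming XState v5; the
machine, context, and event names are illustrative, not the actual ones:

```typescript
import { setup, assign, type ActorRefFrom } from 'xstate';
// Hypothetical child machines standing in for the real ones.
import { simulationMachine } from './simulation_machine';
import { processorMachine } from './processor_machine';

export const streamEnrichmentMachine = setup({
  types: {} as {
    context: {
      simulatorRef?: ActorRefFrom<typeof simulationMachine>;
      processorRefs: Array<ActorRefFrom<typeof processorMachine>>;
    };
    events: { type: 'processor.add' };
  },
  actors: { simulationMachine, processorMachine },
}).createMachine({
  id: 'streamEnrichment',
  context: { processorRefs: [] },
  entry: assign({
    // One long-lived simulation actor is spawned when the root starts.
    simulatorRef: ({ spawn }) => spawn('simulationMachine'),
  }),
  on: {
    'processor.add': {
      // Each processor gets its own actor so it tracks state independently.
      actions: assign({
        processorRefs: ({ context, spawn }) => [
          ...context.processorRefs,
          spawn('processorMachine'),
        ],
      }),
    },
  },
});
```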
### Simulation machine
This machine handles the flow from sampling to simulating, handling
debouncing and determining when a simulation can run or should refresh.
It also spawns a child date range machine to react to time range
changes and reloads.
It also derives all the required table configurations (columns, filters,
documents), centralizing the parsing and reducing the cases for
re-computing, since we no longer rely on the previous live
processors copy.
<img width="1652" alt="Screenshot 2025-02-27 at 17 33 40"
src="https://github.com/user-attachments/assets/fc1fa089-acb2-4ec5-84bc-f27f81cc6abe"
/>
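A minimal sketch of the debounce flow, again assuming XState v5; the
state names and the 500ms delay are illustrative:

```typescript
import { setup, fromPromise } from 'xstate';

// Placeholder for the actual simulation call.
const runSimulation = fromPromise(async () => {
  /* call the simulate API here */
});

export const simulationMachine = setup({
  actors: { runSimulation },
}).createMachine({
  id: 'simulation',
  initial: 'idle',
  states: {
    idle: {
      on: { 'processors.change': 'debouncing' },
    },
    debouncing: {
      // Re-entering this state on each change restarts the timer,
      // so the simulation only runs after edits settle.
      on: { 'processors.change': { target: 'debouncing', reenter: true } },
      after: { 500: 'running' },
    },
    running: {
      invoke: {
        src: 'runSimulation',
        onDone: 'idle',
        onError: 'idle',
      },
    },
  },
});
```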
### Processor machine
A processor can be in different states depending on the changes, so
this tracks each of them independently and sends events to the parent
machine, which reacts accordingly. It provides a boost in performance
compared to the previous approach, as we don't have to re-render the
whole page tree: the changes are encapsulated in the machine state.
<img width="1204" alt="Screenshot 2025-03-04 at 11 34 01"
src="https://github.com/user-attachments/assets/0e6b8854-b7c9-4ee8-a721-f4222354d382"
/>
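A minimal sketch of the parent notification, assuming XState v5; the
states and events are illustrative:

```typescript
import { createMachine, sendParent } from 'xstate';

// Sketch of a per-processor machine. Local edits stay inside this
// machine, so the page tree doesn't re-render on every change.
export const processorMachine = createMachine({
  id: 'processor',
  initial: 'configured',
  states: {
    configured: {
      on: { 'processor.edit': 'draft' },
    },
    draft: {
      // The parent only hears a coarse-grained "changed" event, not
      // every keystroke inside the processor form.
      entry: sendParent({ type: 'processor.change' }),
      on: {
        'processor.save': 'configured',
        'processor.cancel': 'configured',
      },
    },
  },
});
```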
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
When enabling the entity store as a non-superuser with all required
privileges, it returns the following errors:

To fix it, we need to disable security for the saved object client.
While this change sounds scary (exclude security??), there are three
reasons I believe this is the appropriate fix:
* [It's what rules management/alerting/detections does for creating
their hidden/encrypted saved objects.](https://github.com/elastic/kibana/blob/main/x-pack/platform/plugins/shared/alerting/server/rules_client_factory.ts#L140)
I view that as the canonical example for doing this kind of work.
* Even with this change, we actually still require the user to have
Saved Object Management capabilities, both in the UI (as a privilege
check) and in the init/enable routes, upstream of where we create the
saved object. You can try this out yourself: the init route will fail
without that privilege.
* We only use that particular Saved Object client in that particular
spot, not throughout the rest of our Saved Object usages.
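Concretely, the pattern looks like this (a sketch: the hidden type name
is hypothetical, but `excludedExtensions` with `SECURITY_EXTENSION_ID`
is the same mechanism the alerting rules client factory uses):

```typescript
import { SECURITY_EXTENSION_ID } from '@kbn/core-saved-objects-server';
import type { CoreStart, KibanaRequest } from '@kbn/core/server';

// Scope a Saved Objects client without the security extension so a
// non-superuser with the right Kibana privileges can write the hidden
// entity store saved object. The privilege check still happens upstream
// in the UI and the init/enable routes.
export const getEntityStoreSoClient = (core: CoreStart, request: KibanaRequest) =>
  core.savedObjects.getScopedClient(request, {
    excludedExtensions: [SECURITY_EXTENSION_ID],
    includedHiddenTypes: ['entity-engine-status'], // hypothetical type name
  });
```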
### How to reproduce it
* On main branch
* With an empty cluster
* Generate data with doc generator
* Log in with the 'elastic' user and create a test role and user with
the following privileges:
  * cluster: all
  * indices: all
  * Kibana: all spaces, all
* Open an incognito tab and log in with the test user
* Enable the entity store with the test user
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
Implements controls to provide more visibility into errors, especially
in the initialization phase (populating the ELSER indices).
### Changes
- Added timeout to the initialization phase (20 minutes).
- Added concurrency control for initialization tasks: only the first
concurrent migration triggers the initialization, and the rest await it
(see the sketch after this list).
- Added proper error handling for the ES bulk index operations of
integrations and prebuilt rules ELSER indices.
- Added timeout for individual agent invocations (3 minutes).
- Added `migrationsLastError` server state to store the errors (not
ideal; this should be moved to the migration index when we implement it,
but it's fine for now).
- Added the `last_error` in the _/stats_ API response.
- The UI displays the `last_error` if it's defined.
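A minimal sketch of that concurrency control, assuming a single Kibana
process; names are illustrative:

```typescript
// "First caller triggers, the rest await" via a shared in-flight promise.
let inFlightInit: Promise<void> | undefined;

export const ensureInitialized = (init: () => Promise<void>): Promise<void> => {
  if (!inFlightInit) {
    // The first concurrent migration kicks off the real work...
    inFlightInit = init().finally(() => {
      // ...and clears the handle so a later failure can be retried.
      inFlightInit = undefined;
    });
  }
  // Every other concurrent caller awaits the same promise.
  return inFlightInit;
};
```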
### Screenshots
Onboarding error:

Rules page error:

---------
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Use search after for finding gaps
Issue: https://github.com/elastic/security-team/issues/11860
To be able to process more than 10,000 gaps per rule in one update
cycle, we need to implement a search_after loop for all gaps.
For the API I kept the from/size method, as it's much simpler for
clients to use.
<img width="1250" alt="Screenshot 2025-02-17 at 15 25 27"
src="https://github.com/user-attachments/assets/806b2245-8aad-4960-84f4-d2a2818a4a12"
/>
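A minimal sketch of the search_after loop, assuming an event-log-style
index; the index name and query fields are illustrative:

```typescript
import type { Client } from '@elastic/elasticsearch';

export async function fetchAllGaps(esClient: Client, ruleId: string) {
  const gaps: unknown[] = [];
  // A point-in-time keeps the result set stable while we page through it.
  const pit = await esClient.openPointInTime({
    index: '.kibana-event-log-*',
    keep_alive: '1m',
  });
  let searchAfter: Array<string | number> | undefined;

  try {
    while (true) {
      const response = await esClient.search({
        size: 1000,
        pit: { id: pit.id, keep_alive: '1m' },
        // _shard_doc is a cheap, always-unique tiebreaker for PIT searches,
        // which is what lets search_after page past the 10,000-result window.
        sort: ['_shard_doc'],
        query: { term: { 'rule.id': ruleId } },
        search_after: searchAfter,
      });
      const hits = response.hits.hits;
      if (hits.length === 0) break;
      gaps.push(...hits.map((hit) => hit._source));
      searchAfter = hits[hits.length - 1].sort as Array<string | number>;
    }
  } finally {
    await esClient.closePointInTime({ id: pit.id });
  }
  return gaps;
}
```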
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
resolves https://github.com/elastic/kibana/issues/198944
## Summary
Currently, the redux store can become out of sync with the state in the
UI, leading to the selected dataview not being preserved in the store,
and thereby not being saved when the timeline is saved. This PR sets the
selected dataview and patterns at the point of saving to ensure that
they are set and not overridden.
For additional background, see referenced issues.
## Summary
Relates https://github.com/elastic/ingest-dev/issues/4720
This PR adds retry logic to the task that handles automatic agent
upgrades originally implemented in
https://github.com/elastic/kibana/pull/211019.
Complementary fleet-server change, which sets the agent's
`upgrade_attempts` to `null` once the upgrade is complete:
https://github.com/elastic/fleet-server/pull/4528
### Approach
- A new `upgrade_attempts` property is added to agents and stored in the
agent doc (ES mapping update in
https://github.com/elastic/elasticsearch/pull/123256).
- When a bulk upgrade action is sent from the automatic upgrade task, it
pushes the timestamp of the upgrade to the affected agents'
`upgrade_attempts`.
- The default retry delays are `['30m', '1h', '2h', '4h', '8h', '16h',
'24h']` and can be overridden with the new
`xpack.fleet.autoUpgrades.retryDelays` setting.
- On every run, the automatic upgrade task will first process retries
and then query more agents if necessary (cf.
https://github.com/elastic/ingest-dev/issues/4720#issuecomment-2671660795).
- Once an agent has exhausted the max retries defined by the retry
delays array, it is no longer retried (a sketch of the decision follows).
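A minimal sketch of that retry decision, assuming `upgrade_attempts`
holds ISO timestamps of previous attempts (most recent first); the names
and the simplified delay parsing are illustrative:

```typescript
const DEFAULT_RETRY_DELAYS = ['30m', '1h', '2h', '4h', '8h', '16h', '24h'];

// Simplified parser: only handles the 'm' and 'h' units used above.
const toMs = (delay: string): number => {
  const value = parseInt(delay, 10);
  return delay.endsWith('h') ? value * 60 * 60 * 1000 : value * 60 * 1000;
};

export function shouldRetryUpgrade(
  upgradeAttempts: string[],
  retryDelays: string[] = DEFAULT_RETRY_DELAYS,
  now: Date = new Date()
): boolean {
  // Max retries exhausted: stop retrying this agent.
  if (upgradeAttempts.length > retryDelays.length) return false;
  // Wait until the delay for the current attempt number has elapsed.
  const delayMs = toMs(retryDelays[upgradeAttempts.length - 1]);
  const lastAttempt = new Date(upgradeAttempts[0]).getTime();
  return now.getTime() - lastAttempt >= delayMs;
}
```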
### Testing
The ES query for fetching agents with existing `upgrade_attempts` needs
the updated mappings, so it might be necessary to pull the latest `main`
in the `elasticsearch` repo and run `yarn es source` instead of `yarn es
snapshot` (requires an up-to-date Java environment, currently 23).
In order to test that `upgrade_attempts` is set to `null` when the
upgrade is complete, fleet-server should be run in dev using the change
in https://github.com/elastic/fleet-server/pull/4528.
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
- [x] The PR description includes the appropriate Release Notes section,
and the correct `release_note:*` label is applied per the
[guidelines](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)
### Identify risks
Low probability risk of incorrectly triggering agent upgrades. This
feature is currently behind the `enableAutomaticAgentUpgrades` feature
flag.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Julia Bardi <90178898+juliaElastic@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
**Partially addresses: https://github.com/elastic/kibana/issues/210358**
## Summary
### Editing of prebuilt rules with missing base versions
**When the base version** of a currently installed prebuilt rule **is missing** among the `security-rule` asset saved objects, and the user edits this rule:
- We should mark the rule as customized only if the new rule settings are different from the current rule settings.
- For example, adding a new tag should mark the rule as customized. Then, if the user removes this tag, the rule should remain marked as customized. This matches the current behavior.
- However, if the user saves the rule without making any changes to it, it should keep its `is_customized` field as is. This is different from the current behavior.
### Importing of prebuilt rules with missing base versions
**When the base version** of a prebuilt rule that is being imported **is missing** among the `security-rule` asset saved objects, and the user imports this rule:
- If this rule is not installed, it should be created with `is_customized` field set to `false`.
- If this rule is already installed, it should be updated.
- Its `is_customized` field should be set to `true` if the rule from the import payload is not equal to the installed rule.
- Its `is_customized` field should be kept unchanged (`false` or `true`) if the rule from the import payload is equal to the installed rule.
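A minimal sketch of this import decision; the comparison helper is
hypothetical and stands in for the full field-by-field diff:

```typescript
interface PrebuiltRule {
  rule_id: string;
  is_customized: boolean;
  [field: string]: unknown;
}

export function resolveIsCustomized(
  imported: PrebuiltRule,
  installed: PrebuiltRule | undefined,
  rulesAreEqual: (a: PrebuiltRule, b: PrebuiltRule) => boolean
): boolean {
  // Not installed yet: create with is_customized = false.
  if (!installed) return false;
  // Installed and the payload differs from it: mark as customized.
  if (!rulesAreEqual(imported, installed)) return true;
  // Payload equals the installed rule: keep the current flag unchanged.
  return installed.is_customized;
}
```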
Since we decided we don't want to provide routing for classic streams,
it doesn't make sense for it to live at the level of the ingest stream
in the API. This PR moves routing next to fields to make clear that
it's only supported for wired streams.
This PR adds a pendingRecoveredCount field to AAD as a step toward
making AAD the source of truth.
In the next step we can build alerts in the alerting task runner from
AAD rather than from task state.
## Summary
In an effort to make SLI charts visible more quickly on the SLO
overview page, remove SLO details that do not give users valuable
insight into key metrics and move them to a new tab. Retain some of the
SLO details above the tabs, like SLI value, tags, and description (see
the Figma for the inspiration):
https://www.figma.com/design/91R0OtRZHy5xvaE8dGStBo/SLO%2FSLI-assets?node-id=4601-59103&t=K1vI6qtXbb48XPgr-1
<img width="1474" alt="Screenshot 2025-02-28 at 4 53 05 PM"
src="https://github.com/user-attachments/assets/3fdbe766-4047-45b5-a986-3a029c09bd1f"
/>

## Release Notes
SLO overview should give users a clear, immediate picture of key
objective data. Previously, the user would have had to scroll past
static data that describes the SLO definition before seeing valuable
information about their SLIs. This static data has been moved to a
separate tab, making charts more easily accessible.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
Allow wildcard filters in SLO queries when both a KQL filter and a DSL
filter are used.
For the KQL filter alone, `allowLeadingWildcards` was already true by
default; this PR introduces the ability to use wildcard filters in SLO
queries when DSL filters are also used.
### Changes Made
1. **Updated `getElasticsearchQueryOrThrow` function:**
- Added support for `dataView` parameter in the `toElasticsearchQuery`
function.
- Included additional options for `allowLeadingWildcards`.
- Enhanced error handling to differentiate between invalid KQL and KQL
queries with invalid filters (see the sketch after this list).
2. **Test Coverage:**
- Added new test cases to cover scenarios with wildcard queries and
filters.
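A minimal sketch of the updated conversion, assuming the `@kbn/es-query`
helpers (`fromKueryExpression` accepts `allowLeadingWildcards` as a
parse option); the error handling is simplified:

```typescript
import { fromKueryExpression, toElasticsearchQuery } from '@kbn/es-query';
import type { DataViewBase } from '@kbn/es-query';

// Parse the KQL with leading wildcards allowed, then convert it against
// the data view so field-aware wildcard filters work.
export function getElasticsearchQueryOrThrow(kql: string, dataView?: DataViewBase) {
  try {
    return toElasticsearchQuery(
      fromKueryExpression(kql, { allowLeadingWildcards: true }),
      dataView
    );
  } catch (err) {
    // A parse failure means invalid KQL; a conversion failure points at
    // invalid filters within otherwise valid KQL.
    throw new Error(`Invalid KQL or filters: ${kql}`);
  }
}
```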
## Summary
Background: https://github.com/elastic/kibana/pull/212173
Based on feedback on the work in the PRs listed in that issue,
additional performance improvements can be made to the cells rendered in
the alert table. The changes in this PR migrate shared context out to a
provider so certain hooks (some expensive, e.g. browserFieldsByName)
aren't run for every cell in the UI, but once, with the result passed
down to each cell.
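A minimal sketch of the provider pattern, with hypothetical names
(`browserFieldsByName` computed once and shared by all cells):

```tsx
import React, { createContext, useContext, useMemo } from 'react';

interface AlertsTableCellContext {
  browserFieldsByName: Record<string, unknown>;
}

const CellContext = createContext<AlertsTableCellContext | null>(null);

export const AlertsTableCellProvider: React.FC<
  React.PropsWithChildren<{ browserFields: Array<{ name: string }> }>
> = ({ browserFields, children }) => {
  // Computed once per table render instead of once per cell.
  const value = useMemo(
    () => ({
      browserFieldsByName: Object.fromEntries(browserFields.map((f) => [f.name, f])),
    }),
    [browserFields]
  );
  return <CellContext.Provider value={value}>{children}</CellContext.Provider>;
};

export const useAlertsTableCellContext = () => {
  const ctx = useContext(CellContext);
  if (!ctx) throw new Error('Must be rendered inside AlertsTableCellProvider');
  return ctx;
};
```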
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
Closes #211783
Part of https://github.com/elastic/kibana/issues/195857
## Summary
This PR expands the logic to get the dashboard files based on the agent.
We have many different ways to ingest data, so we want to add more
metrics dashboards to the APM metrics tab. The different ingest paths
we have:
Classic APM Agent --> APM Server --> ES
Vanilla OTel SDKs --> APM Server --> ES
EDOT OTel SDKs --> APM Server --> ES
Classic APM Agent --> EDOT Collector --> ES
Vanilla OTel SDKs --> EDOT Collector --> ES
EDOT OTel SDKs --> EDOT Collector --> ES
We agreed on a dashboard filename pattern to make showing the
correct dashboard easier, described
[here](https://github.com/elastic/kibana/issues/195857#issue-2580733648).
First, we determine whether the ingest path goes through the APM Server
or the EDOT Collector by checking the `telemetry.sdk` fields.
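A minimal sketch of the filename resolution under the agreed
`<ingestPath>-<sdkFamily>-<language>` pattern; the helper and the family
mapping are illustrative assumptions, not the exact implementation:

```typescript
interface TelemetrySdk {
  name?: string; // e.g. 'opentelemetry'
  language?: string; // e.g. 'nodejs'
}

export function getDashboardFileName(
  telemetrySdk: TelemetrySdk,
  ingestedViaApmServer: boolean
): string {
  // Matches the example case files below, e.g. 'otel_native-otel_other-nodejs'.
  const ingestPath = ingestedViaApmServer ? 'classic_apm' : 'otel_native';
  const sdkFamily = telemetrySdk.name === 'opentelemetry' ? 'otel_other' : 'apm';
  return `${ingestPath}-${sdkFamily}-${telemetrySdk.language ?? 'generic'}`;
}
```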
## TODOs / Reviewer notes
- [ ] Currently we have a fallback to metrics charts, which is valid
only if we have an APM agent, so this PR adds an empty state message
("Runtime metrics are not available for this Agent / SDK type.") in case
there is no dashboard for the service language. To be improved in
https://github.com/elastic/kibana/issues/211774 and will be updated in
this PR when ready - I will still open it for review as the other logic
can be reviewed
- The dashboards are to be updated (by the agent team, so not part of the
changes here)
## Testing:
- Using e2e PoC
- The available dashboard cases can be found in
[loadDashboardFile](91f169e19a/x-pack/solutions/observability/plugins/apm/public/components/app/metrics/static_dashboard/dashboards/dashboard_catalog.ts (L40))
- Cases to be checked:
- OTel native with Vanilla OTel SDKs with available dashboard (example
case file: `otel_native-otel_other-nodejs`, `...-java`, `...-dotnet`)
<img width="1903" alt="image"
src="https://github.com/user-attachments/assets/44d37b05-a8e7-4f14-a1de-2c631f1843bb"
/>
- APM server with Vanilla OTel SDKs service with available dashboard
(example case file: `classic_apm-otel_other-nodejs`, `...-java`,
`...-dotnet`)

- APM server with Classic APM Agent (example case file:
`classic_apm-apm-nodejs`, `...-java`)
<img width="962" alt="image"
src="https://github.com/user-attachments/assets/f9e96dce-55c8-467a-93f0-a09fa219597e"
/>
- OTel native with Vanilla OTel SDKs without available dashboard (empty
state case example: python service)

- APM server with Vanilla OTel SDKs service without available dashboard
(empty state)
<img width="1910" alt="image"
src="https://github.com/user-attachments/assets/5219cf94-5013-4874-aaea-e558cca69281"
/>
- APM server with Classic APM Agent without available dashboard (Current
metrics fallback)
<img width="1914" alt="image"
src="https://github.com/user-attachments/assets/66342f49-876c-4ad5-a4d1-1414c3abac75"
/>
- ⚠️ OTel native dashboards are still not available (at the time of
writing this description)
---------
Co-authored-by: Sergi Romeu <sergi.romeu@elastic.co>
Co-authored-by: Cauê Marcondes <55978943+cauemarcondes@users.noreply.github.com>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
# Backport
This will backport the following commits from `8.18` to `main`:
- [[SecuritySolution] Fix risk engine component template renaming
(#212853)](https://github.com/elastic/kibana/pull/212853)
<!--- Backport version: 9.6.6 -->
### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sorenlouv/backport)
## Summary
Fixes #212917
The root problem lies in the annotation layer logic that produces the
reference id for the persisted saved object.
In the previous logic, a new `uuid` was generated every time, leading to
a continuous flow of `setState` calls to update the "runtime" state of
the Lens object when inline editing: the fix is to produce a stable id
in the `extractReferences` logic to avoid the re-renders.
The logic has been tweaked a bit, with some extra inline explanations
to make it more understandable.
New tests have been added to smoke-test this scenario.
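A minimal sketch of the stable-id idea, with illustrative types:

```typescript
import { v4 as uuidv4 } from 'uuid';

interface AnnotationGroup {
  annotationGroupId?: string;
}

// Reuse the annotation group's existing id when building references
// instead of minting a new uuid on every extractReferences call, so
// repeated calls produce identical state and don't trigger
// setState/re-render loops.
export function getStableRefId(group: AnnotationGroup): string {
  if (!group.annotationGroupId) {
    // Only generate a uuid the first time; afterwards the id is stable.
    group.annotationGroupId = uuidv4();
  }
  return group.annotationGroupId;
}
```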
### Checklist
Check the PR satisfies following conditions.
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
---------
Co-authored-by: Nick Partridge <nick.ryan.partridge@gmail.com>
The component was replaced by an enablement dialog.
## Summary
Delete the obsolete "enable risk score redirect" test.
The redirect button was replaced by an enablement dialog.