Closes https://github.com/elastic/kibana/issues/213444
The problem is that setting the view while using the globe projection may
not set the view to the exact value. For example, setting zoom to 1.74 may
move the map to zoom 1.77. This PR resolves the problem by adding a margin
of error when comparing zoom values.
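A minimal sketch of the idea, assuming an illustrative tolerance value and helper name (not the exact code in this PR):

```ts
// Hypothetical tolerance; the actual value in the fix may differ.
const ZOOM_TOLERANCE = 0.05;

// Treat two zoom levels as equal if they differ by less than the
// tolerance, so a requested 1.74 that lands at 1.77 is not treated
// as a view change that needs to be re-applied.
function zoomsAreEqual(requested: number, actual: number): boolean {
  return Math.abs(requested - actual) < ZOOM_TOLERANCE;
}
```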
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
Closes #199315
## Summary
This PR changes the Maintenance Window UI to respect the date format
configured in Kibana's advanced settings.
3 places needed changing:
- Maintenance window list.
- Maintenance window creation page.
- Event popover in the maintenance window list (for recurring MWs).
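Roughly how the UI can resolve the configured format instead of a hard-coded one (a sketch; the helper name is illustrative):

```ts
import moment from 'moment';
import type { IUiSettingsClient } from '@kbn/core/public';

// Reads the 'dateFormat' advanced setting and applies it, so the
// maintenance window UI follows the user-configured format.
export function formatMaintenanceWindowDate(
  uiSettings: IUiSettingsClient,
  date: string
): string {
  const dateFormat = uiSettings.get<string>('dateFormat');
  return moment(date).format(dateFormat);
}
```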
## Summary
As part of the Expandable Findings flyout work, we will need to move some
constants, types, functions, and components into the Security Solution
plugin or a shared package.
This PR is phase 2 for Findings (Misconfiguration flyout), which includes
moving functions into the shared package or the Security Solution plugin.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## 📓 Summary
Closes https://github.com/elastic/streams-program/issues/102
Closes https://github.com/elastic/streams-program/issues/159
This rework of the enrichment state management introduces XState as the
state library, preparing the enrichment part to scale to more processors
and improving performance by reducing unnecessary side effects.
## 🤓 Reviewers note
**There is a lot to digest in this PR; I'm open to any suggestions, and I
left some notes around to guide the review.
This is also far from perfect, as there is room for other minor DX
improvements for consuming the state machines, but those will all come in
follow-up work after we resolve prioritized work such as integrating the
Schema Editor.**
Most of the changes in this PR are about the state management for the
stream enrichment, but it also touches some other areas to integrate the
event-based flow.
### Stream enrichment machine
This machine handles the complexity around updating/promoting/deleting
processors and the available simulation states.
It's a root-level machine that spawns and manages its child machines:
one for the **simulation** behaviour and one for each **processor**
instantiated. A sketch of the pattern follows the screenshot.
<img width="950" alt="Screenshot 2025-02-27 at 17 10 03"
src="https://github.com/user-attachments/assets/756a6668-600d-4863-965e-4fc8ccd3a69f"
/>
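A minimal XState v5-style sketch of the root machine, assuming illustrative machine, context, and event names:

```ts
import { assign, createMachine, setup, type ActorRefFrom } from 'xstate';

// Minimal child stubs so the sketch is self-contained; the real machines
// are far richer.
const simulationMachine = createMachine({ id: 'simulation' });
const processorMachine = createMachine({ id: 'processor' });

// Root machine: spawns the simulation child on creation and one
// processor child per 'processor.add' event.
const streamEnrichmentMachine = setup({
  types: {} as {
    context: {
      simulationRef: ActorRefFrom<typeof simulationMachine>;
      processorRefs: Array<ActorRefFrom<typeof processorMachine>>;
    };
    events: { type: 'processor.add' };
  },
  actors: { simulationMachine, processorMachine },
}).createMachine({
  context: ({ spawn }) => ({
    simulationRef: spawn('simulationMachine', { id: 'simulation' }),
    processorRefs: [],
  }),
  on: {
    'processor.add': {
      actions: assign({
        processorRefs: ({ context, spawn }) => [
          ...context.processorRefs,
          spawn('processorMachine'),
        ],
      }),
    },
  },
});
```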
### Simulation machine
This machine handles the flow around sampling -> simulating, handling
debouncing and determining when a simulation can run or should refresh.
It also spawns a child date range machine to react to the observable time
changes and reloads.
It also derives all the required table configurations (columns, filters,
documents), centralizing the parsing and reducing the cases for
re-computation, since we no longer rely on the previous live processors
copy. A sketch of the debounce flow follows the screenshot.
<img width="1652" alt="Screenshot 2025-02-27 at 17 33 40"
src="https://github.com/user-attachments/assets/fc1fa089-acb2-4ec5-84bc-f27f81cc6abe"
/>
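A sketch of the debounce flow, with illustrative delay and event names:

```ts
import { fromPromise, setup } from 'xstate';

// Every change re-enters 'debouncing', restarting the delayed transition;
// the simulation only runs once changes settle for 800ms.
const simulationMachine = setup({
  actors: {
    runSimulation: fromPromise(async () => {
      // call the simulate API here
    }),
  },
}).createMachine({
  initial: 'idle',
  states: {
    idle: { on: { 'processors.change': 'debouncing' } },
    debouncing: {
      on: { 'processors.change': { target: 'debouncing', reenter: true } },
      after: { 800: 'running' },
    },
    running: {
      invoke: { src: 'runSimulation', onDone: 'idle', onError: 'idle' },
    },
  },
});
```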
### Processor machine
A processor can be in different states depending on its changes; this
machine tracks each of them independently and sends events to the parent
machine so it can react accordingly. This provides a boost in performance
compared to the previous approach, as we don't have to re-render the whole
page tree, since the changes are encapsulated in the machine state. A
sketch follows the screenshot.
<img width="1204" alt="Screenshot 2025-03-04 at 11 34 01"
src="https://github.com/user-attachments/assets/0e6b8854-b7c9-4ee8-a721-f4222354d382"
/>
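A sketch of how a processor child can report changes upward, with illustrative state and event names:

```ts
import { createMachine, sendParent } from 'xstate';

// Each processor tracks its own lifecycle and notifies the parent via
// sendParent, instead of triggering a page-wide re-render.
const processorMachine = createMachine({
  id: 'processor',
  initial: 'configured',
  states: {
    configured: { on: { 'processor.edit': 'draft' } },
    draft: {
      entry: sendParent({ type: 'processor.change' }),
      on: {
        'processor.save': {
          target: 'configured',
          actions: sendParent({ type: 'processor.update' }),
        },
        'processor.cancel': 'configured',
      },
    },
  },
});
```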
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
When enabling the entity store as a non-superuser with all required
credentials, it returns the following errors:

To fix it, we need to disable security for the saved object client (see
the sketch after the list below).
While this change sounds scary (exclude security??), there are three
reasons I believe this is the appropriate fix:
* [It's what rules management/alerting/detections do for creating their
hidden/encrypted saved objects](https://github.com/elastic/kibana/blob/main/x-pack/platform/plugins/shared/alerting/server/rules_client_factory.ts#L140).
I view that as the canonical example for doing this kind of work.
* Even with this change, we actually still require the user to have
Saved Object Management capabilities, both in the UI (as a privilege
check) and in the init/enable routes, upstream of where we create the
saved object. You can try this out yourself: the init route will fail
without that privilege.
* We only use that particular Saved Object client in that particular
spot, not throughout the rest of our Saved Object usages.
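A sketch of the pattern, mirroring the alerting factory linked above (the hidden type name is illustrative):

```ts
import type { CoreStart, KibanaRequest } from '@kbn/core/server';
import { SECURITY_EXTENSION_ID } from '@kbn/core/server';

// Scoped client that skips only the security extension. Authorization is
// still enforced upstream by the Saved Object Management privilege check
// in the init/enable routes.
function getEntityStoreSoClient(core: CoreStart, request: KibanaRequest) {
  return core.savedObjects.getScopedClient(request, {
    excludedExtensions: [SECURITY_EXTENSION_ID],
    includedHiddenTypes: ['entity-engine-status'], // illustrative type name
  });
}
```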
### How to reproduce it
* On main branch
* With an empty cluster
* Generate data with doc generator
* Log in with the 'elastic' user and create a test role and user with the
following credentials:
  * cluster: all
  * indices: all
  * Kibana: all spaces, all
* Open an anonymous tab and log in with the test user
* Enable the entity store with the test user
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
Implements controls to gain more visibility into errors, especially in
the initialization phase (populating the ELSER indices).
### Changes
- Added a timeout to the initialization phase (20 minutes); see the
sketch after this list.
- Added concurrency control for initialization tasks: only the first
concurrent migration triggers it, and the rest await it.
- Added proper error handling for the ES bulk index operations of the
integrations and prebuilt rules ELSER indices.
- Added a timeout for individual agent invocations (3 minutes).
- Added `migrationsLastError` server state to store the errors (not
ideal; this should be moved to the migration index when we implement it,
but for now it's fine).
- Added the `last_error` field to the _/stats_ API response.
- The UI displays the `last_error` if it's defined.
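A sketch of a generic timeout guard like the ones described above (helper name illustrative; the PR applies 20 minutes to initialization and 3 minutes to agent invocations):

```ts
// Rejects with a labeled error if the wrapped promise doesn't settle
// within `ms` milliseconds.
function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Illustrative usage:
// await withTimeout(populateElserIndices(), 20 * 60 * 1000, 'SIEM migrations init');
```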
### Screenshots
Onboarding error:

Rules page error:

---------
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Use search after for finding gaps
Issue: https://github.com/elastic/security-team/issues/11860
To be able to process more than 10,000 gaps per rule in one update cycle,
we need to implement a search_after loop for all gaps. A sketch of the
loop follows the screenshot.
For the API I keep the from/size method, as it's much easier for clients
to use.
<img width="1250" alt="Screenshot 2025-02-17 at 15 25 27"
src="https://github.com/user-attachments/assets/806b2245-8aad-4960-84f4-d2a2818a4a12"
/>
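A sketch of the loop (index name, query, page size, and sort tiebreaker are illustrative, not the exact gaps implementation):

```ts
import type { ElasticsearchClient } from '@kbn/core/server';
import type { SortResults } from '@elastic/elasticsearch/lib/api/types';

// Pages through all gap documents for a rule using search_after, so we
// are not capped by the 10,000-hit from/size window.
async function* fetchAllGaps(esClient: ElasticsearchClient, ruleId: string) {
  let searchAfter: SortResults | undefined;
  while (true) {
    const response = await esClient.search({
      index: '.kibana-event-log-*',
      size: 1000,
      query: { term: { 'rule.id': ruleId } },
      // Stable sort with a tiebreaker so search_after can resume exactly.
      sort: [{ '@timestamp': 'asc' }, { 'event.id': 'asc' }],
      search_after: searchAfter,
    });
    const hits = response.hits.hits;
    if (hits.length === 0) break;
    yield hits;
    searchAfter = hits[hits.length - 1].sort;
  }
}
```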
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
resolves https://github.com/elastic/kibana/issues/198944
## Summary
Currently, the redux store can become out of sync with the state in the
UI, leading to the selected dataview not being preserved in the store,
and thereby not being saved when the timeline is saved. This PR sets the
selected dataview and patterns at the point of saving to ensure that
they are set and not overridden.
For additional background, see referenced issues.
## Summary
Relates https://github.com/elastic/ingest-dev/issues/4720
This PR adds retry logic to the task that handles automatic agent
upgrades originally implemented in
https://github.com/elastic/kibana/pull/211019.
Complementary fleet-server change, which sets the agent's
`upgrade_attempts` to `null` once the upgrade is complete:
https://github.com/elastic/fleet-server/pull/4528
### Approach
- A new `upgrade_attempts` property is added to agents and stored in the
agent doc (ES mapping update in
https://github.com/elastic/elasticsearch/pull/123256).
- When a bulk upgrade action is sent from the automatic upgrade task, it
pushes the timestamp of the upgrade to the affected agents'
`upgrade_attempts`.
- The default retry delays are `['30m', '1h', '2h', '4h', '8h', '16h',
'24h']` and can be overridden with the new
`xpack.fleet.autoUpgrades.retryDelays` setting (see the sketch after this
list).
- On every run, the automatic upgrade task will first process retries
and then query more agents if necessary (cf.
https://github.com/elastic/ingest-dev/issues/4720#issuecomment-2671660795).
- Once an agent has gone through and failed the max retries defined by
the retry delays array, it is no longer retried.
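A sketch of how the next retry time could be derived from the delay array (hypothetical helper, not the exact task code):

```ts
import moment from 'moment';

// Mirrors the default for xpack.fleet.autoUpgrades.retryDelays.
const DEFAULT_RETRY_DELAYS = ['30m', '1h', '2h', '4h', '8h', '16h', '24h'];

// Given the timestamps already pushed to the agent's upgrade_attempts,
// returns when the next retry is due, or null once retries are exhausted.
function getNextRetryTime(
  upgradeAttempts: string[],
  retryDelays: string[] = DEFAULT_RETRY_DELAYS
): Date | null {
  if (upgradeAttempts.length === 0) {
    return null; // no upgrade attempt yet, nothing to retry
  }
  if (upgradeAttempts.length > retryDelays.length) {
    return null; // max retries reached, stop retrying
  }
  const delay = retryDelays[upgradeAttempts.length - 1];
  const amount = parseInt(delay, 10);
  const unit = delay.endsWith('m') ? 'minutes' : 'hours';
  const lastAttempt = upgradeAttempts[upgradeAttempts.length - 1];
  return moment(lastAttempt).add(amount, unit).toDate();
}
```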
### Testing
The ES query for fetching agents with existing `upgrade_attempts` needs
the updated mappings, so it might be necessary to pull the latest `main`
in the `elasticsearch` repo and run `yarn es source` instead of `yarn es
snapshot` (requires an up-to-date Java environment, currently 23).
In order to test that `upgrade_attempts` is set to `null` when the
upgrade is complete, fleet-server should be run in dev using the change
in https://github.com/elastic/fleet-server/pull/4528.
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
- [x] The PR description includes the appropriate Release Notes section,
and the correct `release_note:*` label is applied per the
[guidelines](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)
### Identify risks
Low probability risk of incorrectly triggering agent upgrades. This
feature is currently behind the `enableAutomaticAgentUpgrades` feature
flag.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Julia Bardi <90178898+juliaElastic@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
**Partially addresses: https://github.com/elastic/kibana/issues/210358**
## Summary
### Editing of prebuilt rules with missing base versions
**When the base version** of a currently installed prebuilt rule **is missing** among the `security-rule` asset saved objects, and the user edits this rule:
- We should mark the rule as customized only if the new rule settings are different from the current rule settings.
- For example, adding a new tag should mark the rule as customized. Then, if the user removes this tag, the rule should remain marked as customized. This matches the current behavior.
- However, if the user saves the rule without making any changes to it, it should keep its `is_customized` field as is. This is different from the current behavior.
### Importing of prebuilt rules with missing base versions
**When the base version** of a prebuilt rule that is being imported **is missing** among the `security-rule` asset saved objects, and the user imports this rule:
- If this rule is not installed, it should be created with `is_customized` field set to `false`.
- If this rule is already installed, it should be updated.
- Its `is_customized` field should be set to `true` if the rule from the import payload is not equal to the installed rule.
- Its `is_customized` field should be kept unchanged (`false` or `true`) if the rule from the import payload is equal to the installed rule. A sketch of this decision follows below.
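A sketch of the import-time decision when the base version is missing (`PrebuiltRule` and `ruleFieldsAreEqual` are illustrative stand-ins for the actual types and field-by-field comparison):

```ts
type PrebuiltRule = Record<string, unknown>;

// Placeholder comparison; the real logic compares rule fields directly.
const ruleFieldsAreEqual = (a: PrebuiltRule, b: PrebuiltRule): boolean =>
  JSON.stringify(a) === JSON.stringify(b);

function calculateIsCustomizedOnImport(
  importedRule: PrebuiltRule,
  installedRule?: PrebuiltRule
): boolean {
  if (!installedRule) {
    return false; // rule not installed yet: create with is_customized: false
  }
  // Rule already installed: customized only if the payload differs from it.
  return !ruleFieldsAreEqual(importedRule, installedRule);
}
```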
Since we decided we don't want to provide routing for classic streams,
it doesn't make sense for routing to live at the ingest stream level in
the API. This PR moves routing next to fields to make it clear that it's
only supported for wired streams.
This PR adds the `pendingRecoveredCount` field to AAD as a step toward
making AAD the source of truth.
In the next step we can build alerts in the alerting task runner from
AAD rather than task state.
## Summary
In an effort to make SLI charts more quickly visible on the SLO overview
page, this PR removes SLO details that do not give users valuable insight
into key metrics and adds them to a new tab. It retains some of the SLO
details above the tabs, like SLI value, tags, and description (see the
Figma for inspiration):
https://www.figma.com/design/91R0OtRZHy5xvaE8dGStBo/SLO%2FSLI-assets?node-id=4601-59103&t=K1vI6qtXbb48XPgr-1
<img width="1474" alt="Screenshot 2025-02-28 at 4 53 05 PM"
src="https://github.com/user-attachments/assets/3fdbe766-4047-45b5-a986-3a029c09bd1f"
/>

## Release Notes
SLO overview should give users a clear, immediate picture of key
objective data. Previously, the user would have had to scroll past
static data that describes the SLO definition before seeing valuable
information about their SLIs. This static data has been moved to a
separate tab, making charts more easily accessible.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
Allow SLO query wildcard filters when both a KQL filter and a DSL filter
are used.
For the KQL filter alone, `allowLeadingWildcards` was already true by
default; this PR introduces the ability to use wildcard filters in SLO
queries when DSL filters are also used.
### Changes Made
1. **Updated `getElasticsearchQueryOrThrow` function** (see the sketch
after this list):
- Added support for a `dataView` parameter in the `toElasticsearchQuery`
function.
- Included additional options for `allowLeadingWildcards`.
- Enhanced error handling to differentiate between invalid KQL and KQL
queries with invalid filters.
2. **Test Coverage:**
- Added new test cases to cover scenarios with wildcard queries and
filters.
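A rough sketch of the updated helper's shape (simplified; the real error handling distinguishes invalid KQL from valid KQL with invalid filters):

```ts
import { fromKueryExpression, toElasticsearchQuery } from '@kbn/es-query';
import type { DataView } from '@kbn/data-views-plugin/common';

function getElasticsearchQueryOrThrow(kuery: string, dataView?: DataView) {
  try {
    // allowLeadingWildcards lets queries like `service.name: *foo*` parse.
    return toElasticsearchQuery(
      fromKueryExpression(kuery, { allowLeadingWildcards: true }),
      dataView
    );
  } catch (err) {
    throw new Error(`Invalid KQL: ${kuery}`);
  }
}
```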
## Summary
Background: https://github.com/elastic/kibana/pull/212173
Based on feedback on the work in the PRs listed in that issue,
additional performance improvements can be made to the cells rendered in
the alert table. The changes in this PR involve migrating shared context
out to a provider so that certain hooks (some expensive, e.g.
`browserFieldsByName`) aren't run for every cell in the UI, but once,
with the result passed down to each cell. A sketch of the pattern:
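The names below are illustrative (`useBrowserFields` stands in for the real expensive hook):

```tsx
import React, { createContext, useContext, useMemo } from 'react';

// Stand-in for the expensive hook this PR hoists out of the cells.
declare function useBrowserFields(): Record<string, unknown>;

interface CellContextValue {
  browserFieldsByName: Record<string, unknown>;
}

const CellContext = createContext<CellContextValue | null>(null);

// Computes the shared values once and makes them available to every cell.
export const AlertTableCellContextProvider: React.FC<React.PropsWithChildren> = ({
  children,
}) => {
  const browserFieldsByName = useBrowserFields(); // runs once, not per cell
  const value = useMemo(() => ({ browserFieldsByName }), [browserFieldsByName]);
  return <CellContext.Provider value={value}>{children}</CellContext.Provider>;
};

// Each cell reads from context instead of recomputing.
export const useAlertCellContext = (): CellContextValue => {
  const ctx = useContext(CellContext);
  if (!ctx) {
    throw new Error('useAlertCellContext must be used within AlertTableCellContextProvider');
  }
  return ctx;
};
```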
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios