## Use search_after for finding gaps
Issue: https://github.com/elastic/security-team/issues/11860
To be able to process more than 10,000 gaps per rule in one update cycle,
we need to implement a `search_after` loop for all gaps.
For the API, I keep the from/size method, as it's much simpler for clients
to use. A sketch of the loop follows the screenshot below.
<img width="1250" alt="Screenshot 2025-02-17 at 15 25 27"
src="https://github.com/user-attachments/assets/806b2245-8aad-4960-84f4-d2a2818a4a12"
/>
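A minimal sketch of such a `search_after` loop, assuming the Elasticsearch JS client and hypothetical index/field names (not the actual implementation):

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Pages through all gap documents for a rule instead of stopping at the
// first 10,000. Index and field names here are illustrative.
async function fetchAllGaps(ruleId: string): Promise<unknown[]> {
  const gaps: unknown[] = [];
  let searchAfter: any[] | undefined;

  while (true) {
    const response = await client.search({
      index: 'gaps-index', // hypothetical index name
      size: 10000,
      query: { term: { 'rule.id': ruleId } }, // hypothetical field
      // Production code would use a point-in-time plus a tiebreaker sort;
      // a timestamp sort keeps the sketch simple.
      sort: [{ '@timestamp': 'asc' }, { _doc: 'asc' }],
      search_after: searchAfter,
    });

    const hits = response.hits.hits;
    if (hits.length === 0) break;

    gaps.push(...hits.map((hit) => hit._source));
    searchAfter = hits[hits.length - 1].sort; // resume after the last hit
  }

  return gaps;
}
```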
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
resolves https://github.com/elastic/kibana/issues/198944
## Summary
Currently, the redux store can become out of sync with the state in the
UI, leading to the selected dataview not being preserved in the store,
and thereby not being saved when the timeline is saved. This PR sets the
selected dataview and patterns at the point of saving to ensure that
they are set and not overridden.
For additional background, see referenced issues.
## Summary
Relates https://github.com/elastic/ingest-dev/issues/4720
This PR adds retry logic to the task that handles automatic agent
upgrades originally implemented in
https://github.com/elastic/kibana/pull/211019.
Complementary fleet-server change, which sets the agent's
`upgrade_attempts` to `null` once the upgrade is complete:
https://github.com/elastic/fleet-server/pull/4528
### Approach
- A new `upgrade_attempts` property is added to agents and stored in the
agent doc (ES mapping update in
https://github.com/elastic/elasticsearch/pull/123256).
- When a bulk upgrade action is sent from the automatic upgrade task, it
pushes the timestamp of the upgrade to the affected agents'
`upgrade_attempts`.
- The default retry delays are `['30m', '1h', '2h', '4h', '8h', '16h',
'24h']` and can be overridden with the new
`xpack.fleet.autoUpgrades.retryDelays` setting.
- On every run, the automatic upgrade task will first process retries
and then query more agents if necessary (cf.
https://github.com/elastic/ingest-dev/issues/4720#issuecomment-2671660795).
- Once an agent has exhausted the maximum number of retries defined by the
retry delays array, it is no longer retried (a sketch of this gating
follows the list).
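A minimal sketch of the retry gating described above, assuming `upgrade_attempts` holds ISO timestamps of previous attempts ordered most recent first (helper names are illustrative, not the actual task code):

```ts
// Default retry delays; overridable via xpack.fleet.autoUpgrades.retryDelays.
const RETRY_DELAYS = ['30m', '1h', '2h', '4h', '8h', '16h', '24h'];

// Simplified parser for the '30m' / '1h' style durations used above.
const toMs = (delay: string): number => {
  const value = parseInt(delay, 10);
  return delay.endsWith('h') ? value * 60 * 60 * 1000 : value * 60 * 1000;
};

// Decide whether an agent is due for a retry on this task run.
function isDueForRetry(upgradeAttempts: string[], now = Date.now()): boolean {
  const attempts = upgradeAttempts.length;
  if (attempts === 0) return false;
  // Max retries exhausted: the agent is no longer retried.
  if (attempts > RETRY_DELAYS.length) return false;
  const lastAttempt = new Date(upgradeAttempts[0]).getTime();
  // Wait out the delay that corresponds to the number of attempts so far.
  return now - lastAttempt >= toMs(RETRY_DELAYS[attempts - 1]);
}
```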
### Testing
The ES query for fetching agents with existing `upgrade_attempts` needs
the updated mappings, so it might be necessary to pull the latest `main`
in the `elasticsearch` repo and run `yarn es source` instead of `yarn es
snapshot` (requires an up-to-date Java environment, currently 23).
In order to test that `upgrade_attempts` is set to `null` when the
upgrade is complete, fleet-server should be run in dev using the change
in https://github.com/elastic/fleet-server/pull/4528.
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
- [x] The PR description includes the appropriate Release Notes section,
and the correct `release_note:*` label is applied per the
[guidelines](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)
### Identify risks
Low probability risk of incorrectly triggering agent upgrades. This
feature is currently behind the `enableAutomaticAgentUpgrades` feature
flag.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Julia Bardi <90178898+juliaElastic@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
**Partially addresses: https://github.com/elastic/kibana/issues/210358**
## Summary
### Editing of prebuilt rules with missing base versions
**When the base version** of a currently installed prebuilt rule **is missing** among the `security-rule` asset saved objects, and the user edits this rule:
- We should mark the rule as customized only if the new rule settings are different from the current rule settings.
- For example, adding a new tag should mark the rule as customized. Then, if the user removes this tag, the rule should remain marked as customized. This matches the current behavior.
- However, if the user saves the rule without making any changes to it, it should keep its `is_customized` field as is. This is different from the current behavior.
### Importing of prebuilt rules with missing base versions
**When the base version** of a prebuilt rule that is being imported **is missing** among the `security-rule` asset saved objects, and the user imports this rule:
- If this rule is not installed, it should be created with the `is_customized` field set to `false`.
- If this rule is already installed, it should be updated (the decision logic is sketched below).
- Its `is_customized` field should be set to `true` if the rule from the import payload is not equal to the installed rule.
- Its `is_customized` field should be kept unchanged (`false` or `true`) if the rule from the import payload is equal to the installed rule.
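A sketch of the import decision described above, assuming a deep-equality comparison over the importable rule fields (names are illustrative, not the actual implementation):

```ts
import { isEqual } from 'lodash';

interface PrebuiltRuleFields {
  [field: string]: unknown;
}

// Decides the is_customized value for an imported prebuilt rule
// whose base version is missing.
function calculateIsCustomized(
  installedRule: { fields: PrebuiltRuleFields; is_customized: boolean } | undefined,
  importPayload: PrebuiltRuleFields
): boolean {
  if (installedRule === undefined) {
    // Not installed yet: create with is_customized: false.
    return false;
  }
  if (!isEqual(installedRule.fields, importPayload)) {
    // Import payload differs from the installed rule: customized.
    return true;
  }
  // Payload equals the installed rule: keep the flag unchanged.
  return installedRule.is_customized;
}
```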
Since we decided we don't want to provide routing for classic streams,
it doesn't make sense for it to live at the level of the ingest stream in
the API. This PR moves routing next to fields to make it clear that it's
only supported for wired streams, as sketched below.
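Roughly, the shape change looks like this (a hedged sketch of the API types, not the exact definitions):

```ts
// Before (roughly): routing lived at the shared ingest level, implying
// it applied to classic streams as well.
interface IngestBefore {
  processing: unknown[];
  routing: unknown[];
  wired?: { fields: Record<string, unknown> };
}

// After (roughly): routing sits next to fields under `wired`, making it
// explicit that only wired streams support it.
interface IngestAfter {
  processing: unknown[];
  wired?: {
    fields: Record<string, unknown>;
    routing: unknown[];
  };
}
```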
This PR adds a `pendingRecoveredCount` field to AAD (alerts-as-data) as a
step toward making AAD the source of truth.
In the next step we can build alerts in the alerting task runner from
AAD rather than from task state.
## Summary
In an effort to make SLI charts more quickly visible on the SLO overview
page, remove SLO details that do not give users valuable insight into
key metrics and add them to a new tab. Retain some of the SLO details
above the tabs, like SLI value, tags, and description (see Figma for
the inspiration):
https://www.figma.com/design/91R0OtRZHy5xvaE8dGStBo/SLO%2FSLI-assets?node-id=4601-59103&t=K1vI6qtXbb48XPgr-1
<img width="1474" alt="Screenshot 2025-02-28 at 4 53 05 PM"
src="https://github.com/user-attachments/assets/3fdbe766-4047-45b5-a986-3a029c09bd1f"
/>

## Release Notes
SLO overview should give users a clear, immediate picture of key
objective data. Previously, the user would have had to scroll past
static data that describes the SLO definition before seeing valuable
information about their SLIs. This static data has been moved to a
separate tab, making charts more easily accessible.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
Part of #203591
Our current logs are confusing, as it is unclear which worker a message
relates to. E.g.
```
proc [playwright] debg [scout] Creating new local SAML session for role 'admin'
proc [playwright] debg [scout] Creating new local SAML session for role 'admin'
```
it is not clear whether the messages are coming from the same worker or
two different ones.
Before:
```
[chromium] › error_handling.spec.ts:30:12 › Discover app - errors › should render invalid scripted field error @ess
debg [scout] applying update to kibana config: {"timepicker:timeDefaults":"{ \"from\": \"2015-09-19T06:31:44.000Z\", \"to\": \"2015-09-23T18:31:44.000Z\"}"}
debg [scout] Requesting url (redacted): [http://localhost:5620/s/test-space-0/internal/kibana/settings]
debg [scout] scoutSpace:test-space-0 'uiSettings.setDefaultTime' took 116.66ms
...
debg [scout] [service] browserAuth
debg [scout] Creating new local SAML session for role 'viewer'
[chromium] › saved_searches.spec.ts:66:14 › Discover app - saved searches › should customize time range on dashboards @ess @svlSearch @svlOblt
succ [scout] import success
debg [scout] scoutSpace:test-space-1 'savedObjects.load' took 1028.17ms
...
debg [scout] [service] browserAuth
debg [scout] Creating new local SAML session for role 'editor'
debg [scout] [service] scoutPage:test-space-1
debg [scout] [service] pageObjects:test-space-1
```
After:
```
[chromium] › error_handling.spec.ts:30:12 › Discover app - errors › should render invalid scripted field error @ess
debg [scout-worker-1] applying update to kibana config: {"timepicker:timeDefaults":"{ \"from\": \"2015-09-19T06:31:44.000Z\", \"to\": \"2015-09-23T18:31:44.000Z\"}"}
debg [scout-worker-1] Requesting url (redacted): [http://localhost:5620/s/test-space-1/internal/kibana/settings]
debg [scout-worker-1] test-space-1: 'uiSettings.setDefaultTime' took 131.30ms
...
debg [scout-worker-1] [browserAuth] loaded
debg [scout-worker-1] Creating new local SAML session for role 'viewer'
[chromium] › saved_searches.spec.ts:66:14 › Discover app - saved searches › should customize time range on dashboards @ess @svlSearch @svlOblt
debg [scout-worker-2] test-space-2: 'savedObjects.load' took 1005.91ms
...
debg [scout-worker-2] [browserAuth] loaded
debg [scout-worker-2] Creating new local SAML session for role 'editor'
debg [scout-worker-2] [scoutPage] loaded
debg [scout-worker-2] [pageObjects] loaded
```
**Note**: a single-threaded run will log under the `[scout-worker]` context.
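For reference, a per-worker context like this could be derived from a Playwright worker-scoped fixture (a hypothetical sketch; the actual Scout implementation differs):

```ts
import { test as base } from '@playwright/test';

// Hypothetical sketch: derive the log context from Playwright's worker info.
export const test = base.extend<{}, { logContext: string }>({
  logContext: [
    async ({}, use, workerInfo) => {
      // parallelIndex is 0-based, so workers log as scout-worker-1,
      // scout-worker-2, ... (a single-thread run would drop the suffix).
      await use(`scout-worker-${workerInfo.parallelIndex + 1}`);
    },
    { scope: 'worker' },
  ],
});
```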
How to verify:
1) Start servers: `node scripts/scout.js start-server --stateful`
2) Run tests: `npx playwright test --config
x-pack/platform/plugins/private/discover_enhanced/ui_tests/parallel.playwright.config.ts`
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
This PR contains the following updates:
| Package | Update | Change |
|---|---|---|
| docker.elastic.co/wolfi/chainguard-base | digest | `15a4191` -> `6dcddd8` |
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR has been generated by [Renovate
Bot](https://redirect.github.com/renovatebot/renovate).
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
## Summary
Allow SLO query wildcard filters when both a KQL filter and a DSL filter
are used.
For the KQL filter alone, `allowLeadingWildcards` was already true by
default; this PR introduces the ability to use wildcard filters in SLO
queries when DSL filters are also used (a sketch follows the list of
changes below).
### Changes Made
1. **Updated `getElasticsearchQueryOrThrow` function:**
- Added support for `dataView` parameter in the `toElasticsearchQuery`
function.
- Included additional options for `allowLeadingWildcards`.
- Enhanced error handling to differentiate between invalid KQL and KQL
queries with invalid filters.
2. **Test Coverage:**
- Added new test cases to cover scenarios with wildcard queries and
filters.
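A minimal sketch of the kind of change described in (1), using `@kbn/es-query` (the exact SLO call site and options differ):

```ts
import { fromKueryExpression, toElasticsearchQuery } from '@kbn/es-query';
import type { DataViewBase } from '@kbn/es-query';

// Illustrative sketch: parse the KQL filter with leading wildcards
// allowed (e.g. "service.name: *front*") and compile it against the
// data view.
function getElasticsearchQueryOrThrow(kuery: string, dataView?: DataViewBase) {
  try {
    const ast = fromKueryExpression(kuery, { allowLeadingWildcards: true });
    return toElasticsearchQuery(ast, dataView);
  } catch (err) {
    // Enhanced error handling would differentiate invalid KQL from
    // KQL with invalid filters here.
    throw new Error(`Invalid KQL: ${kuery}`);
  }
}
```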
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [msw](https://mswjs.io) ([source](https://redirect.github.com/mswjs/msw)) | devDependencies | patch | [`~2.7.2` -> `~2.7.3`](https://renovatebot.com/diffs/npm/msw/2.7.3/2.7.3) |
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR has been generated by [Renovate
Bot](https://redirect.github.com/renovatebot/renovate).
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
## Summary
Background: https://github.com/elastic/kibana/pull/212173
Based on feedback on the work in the PRs listed in that issue,
additional performance improvements can be made to the cells rendered in
the alert table. This PR migrates shared context out to a provider so
that certain hooks (some expensive, e.g. `browserFieldsByName`) aren't
invoked for every cell in the UI, but once, with the result passed down
to each cell accordingly.
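A simplified sketch of the pattern (names are illustrative; the actual alerts-table provider and hooks differ):

```tsx
import React, { createContext, useContext, useMemo } from 'react';

interface AlertsTableContextValue {
  // Expensive derived data, computed once per table instead of per cell.
  browserFieldsByName: Record<string, unknown>;
}

const AlertsTableContext = createContext<AlertsTableContextValue | null>(null);

// Wraps the table once; every cell reads the shared value from context.
export const AlertsTableContextProvider: React.FC<
  React.PropsWithChildren<{ browserFields: Record<string, unknown> }>
> = ({ browserFields, children }) => {
  const value = useMemo(() => ({ browserFieldsByName: browserFields }), [browserFields]);
  return <AlertsTableContext.Provider value={value}>{children}</AlertsTableContext.Provider>;
};

export const useAlertsTableContext = (): AlertsTableContextValue => {
  const ctx = useContext(AlertsTableContext);
  if (!ctx) throw new Error('useAlertsTableContext must be used within AlertsTableContextProvider');
  return ctx;
};
```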
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
Closes #211783
Part of https://github.com/elastic/kibana/issues/195857
## Summary
This PR expands the logic to get the dashboard files based on the agent.
We have many different ways to ingest data, so we want to add more
metrics dashboards to the APM metrics tab. The different ingest paths we
have:
- Classic APM Agent --> APM Server --> ES
- Vanilla OTel SDKs --> APM Server --> ES
- EDOT OTel SDKs --> APM Server --> ES
- Classic APM Agent --> EDOT Collector --> ES
- Vanilla OTel SDKs --> EDOT Collector --> ES
- EDOT OTel SDKs --> EDOT Collector --> ES

We agreed on a dashboard filename pattern to make showing the correct
dashboard easier, as described
[here](https://github.com/elastic/kibana/issues/195857#issue-2580733648).
First, we determine whether the ingest path goes through the APM Server
or the EDOT Collector by checking the `telemetry.sdk` fields. A sketch of
the filename derivation follows.
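A hedged sketch of the filename derivation implied by that pattern (parameter names are assumptions; the real logic inspects more fields):

```ts
// Illustrative sketch of the agreed filename pattern
// `<ingest-path>-<sdk-flavor>-<language>`, e.g. classic_apm-apm-nodejs
// or otel_native-otel_other-java.
function getDashboardFileName({
  viaApmServer, // assumption: derived from telemetry.sdk / agent fields
  isOtelSdk, // e.g. telemetry.sdk.name === 'opentelemetry'
  language, // e.g. 'nodejs', 'java', 'dotnet'
}: {
  viaApmServer: boolean;
  isOtelSdk: boolean;
  language: string;
}): string {
  const ingestPath = viaApmServer ? 'classic_apm' : 'otel_native';
  const sdkFlavor = isOtelSdk ? 'otel_other' : 'apm';
  return `${ingestPath}-${sdkFlavor}-${language}`;
}
```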
## TODOs / Reviewer notes
- [ ] Currently, we have a fallback to metrics charts which is valid
only if we have an APM agent, so this PR adds an empty state message
("Runtime metrics are not available for this Agent / SDK type.") in case
there is no dashboard for the service language. This is to be improved in
https://github.com/elastic/kibana/issues/211774 and will be updated in
this PR when ready. I will still open it for review, as the other logic
can be reviewed.
- The dashboards are to be updated by the agent team, so they are not
part of the changes here.
## Testing
- Using e2e PoC
- The available dashboard cases can be found in
[loadDashboardFile](91f169e19a/x-pack/solutions/observability/plugins/apm/public/components/app/metrics/static_dashboard/dashboards/dashboard_catalog.ts (L40))
- Cases to be checked:
- OTel native with Vanilla OTel SDKs with available dashboard (example
case file: `otel_native-otel_other-nodejs`, `...-java`, `...-dotnet`)
<img width="1903" alt="image"
src="https://github.com/user-attachments/assets/44d37b05-a8e7-4f14-a1de-2c631f1843bb"
/>
- APM server with Vanilla OTel SDKs service with available dashboard
(example case file: `classic_apm-otel_other-nodejs`, `...-java`,
`...-dotnet`)

- APM server with Classic APM Agent (example case file:
`classic_apm-apm-nodejs`, `...-java`)
<img width="962" alt="image"
src="https://github.com/user-attachments/assets/f9e96dce-55c8-467a-93f0-a09fa219597e"
/>
- OTel native with Vanilla OTel SDKs without available dashboard (empty
state case example: python service)

- APM server with Vanilla OTel SDKs service without available dashboard
(empty state)
<img width="1910" alt="image"
src="https://github.com/user-attachments/assets/5219cf94-5013-4874-aaea-e558cca69281"
/>
- APM server with Classic APM Agent without available dashboard (Current
metrics fallback)
<img width="1914" alt="image"
src="https://github.com/user-attachments/assets/66342f49-876c-4ad5-a4d1-1414c3abac75"
/>
- ⚠️ OTel native dashboards are still not available (at the time of
writing this description)
---------
Co-authored-by: Sergi Romeu <sergi.romeu@elastic.co>
Co-authored-by: Cauê Marcondes <55978943+cauemarcondes@users.noreply.github.com>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
`99.4.0-borealis.0` ⏩ `100.0.0`
[Questions? Please see our Kibana upgrade
FAQ.](https://github.com/elastic/eui/blob/main/wiki/eui-team-processes/upgrading-kibana.md#faq-for-kibana-teams)
---
First of all, 💯🎉!
> [!Warning]
> Please note that the [public changelog for EUI
v100.0.0](https://github.com/elastic/eui/releases/tag/v100.0.0) is
longer than what's included below.
>
> Kibana has been using Borealis-specific builds of EUI since November
last year (suffixed with `-borealis.X`), which were built from a
just-merged EUI feature branch.
> Since that feature branch just got merged and released with EUI
v100.0.0, **the public changelog differs from what Kibana should be
concerned about** due to updating from a custom Borealis-enabled version
of EUI.
>
> You can find the list of all (one 👀) changes made between version
`99.4.0-borealis.0` and `100.0.0` below.
## [`v100.0.0`](https://github.com/elastic/eui/releases/v100.0.0)
**Bug fixes**
- Fixed `EuiComboBox` by cleaning duplicated values when a `delimiter`
prop is set. ([#8335](https://github.com/elastic/eui/pull/8335))
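For context, the fix affects `EuiComboBox` usages like the following, where pasting delimited text (e.g. `a,b,a`) could previously create duplicate options (a minimal usage sketch):

```tsx
import React, { useState } from 'react';
import { EuiComboBox, EuiComboBoxOptionOption } from '@elastic/eui';

export const TagsInput = () => {
  const [selected, setSelected] = useState<Array<EuiComboBoxOptionOption<string>>>([]);
  return (
    <EuiComboBox
      aria-label="Tags"
      delimiter=","
      noSuggestions
      selectedOptions={selected}
      // After the fix, pasting "a,b,a" results in options a and b only.
      onCreateOption={(value) => setSelected((prev) => [...prev, { label: value }])}
      onChange={(options) => setSelected(options)}
    />
  );
};
```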
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
# Backport
This will backport the following commits from `8.18` to `main`:
- [[SecuritySolution] Fix risk engine component template renaming
(#212853)](https://github.com/elastic/kibana/pull/212853)
<!--- Backport version: 9.6.6 -->
### Questions?
Please refer to the [Backport tool
documentation](https://github.com/sorenlouv/backport)
## Summary
Fixes #212917
The root problem belongs to the annotation layer logic that produces
the reference id for the persisted saved object.
In the previous logic, a new `uuid` was generated every time, leading to
a continuous flow of `setState` calls to update the "runtime" state of
the Lens object when inline editing: the fix was to produce a stable id
in the `extractReferences` logic to avoid the re-renders (see the sketch
below).
The logic has been tweaked a bit, with some extra inline explanations to
make it more understandable.
New tests have been added to smoke-test this scenario.
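In spirit, the fix looks like the following (a simplified sketch with illustrative names, not the actual Lens code):

```ts
import { v4 as uuidv4 } from 'uuid';

interface AnnotationLayer {
  layerId: string;
  annotationGroupId: string; // id of the persisted annotation group (illustrative)
  annotationGroupRef?: string; // previously-extracted reference name, if any
}

// Reuse the existing reference name and only mint a uuid when the layer
// never had one, so repeated extractReferences calls produce identical
// output and stop triggering setState loops during inline editing.
function extractReferences(layer: AnnotationLayer) {
  const refName = layer.annotationGroupRef ?? uuidv4();
  return {
    layer: { ...layer, annotationGroupRef: refName },
    references: [
      { name: refName, type: 'event-annotation-group', id: layer.annotationGroupId },
    ],
  };
}
```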
### Checklist
Check the PR satisfies following conditions.
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
---------
Co-authored-by: Nick Partridge <nick.ryan.partridge@gmail.com>