## Summary
This PR removes the `isDraggable` prop throughout Security Solution.
Unless I'm mistaken, this property isn't necessary anymore, as we do not
use those draggable elements anymore. From what I could see, we had its
value set to `false` everywhere.
This led to a lot of impacted files, but most of them only have a couple of lines changed. In some files though, removing the
`isDraggable` prop allowed removing additional code that became obsolete.
**No UI changes should have been introduced in this PR!**
### What this PR does
- removes `isDraggable` everywhere
- performs the extra small cleanup when obvious
- updates all corresponding unit and e2e tests
### What this PR does NOT do
- rename files or component names to limit the already extensive impact
of the code change
This PR switches the endpoint used for the `chat_completion` task type to
`_stream`. Only the URL changes; the request and response formats stay
the same. The `_stream` URL was introduced a couple of versions ago and is
the preferred route for interacting with `chat_completion`.
### Testing
Set up a pre-configured connector for security by adding this to your
`config/kibana.dev.yml`:
```
xpack.actions.preconfigured:
  my-inference-open-ai:
    name: Inference Preconfig Jon
    actionTypeId: .inference
    exposeConfig: true
    config:
      provider: 'openai'
      taskType: 'chat_completion'
      inferenceId: 'openai-chat_completion-123'
      providerConfig:
        rate_limit:
          requests_per_minute: 80000
        model_id: 'gpt-4o'
        url: https://api.openai.com/v1/chat/completions
    secrets:
      providerSecrets:
        api_key: '<api key>'
```
Then via the Connectors page, create an AI connector with the inference
endpoint id set to `openai-chat_completion-123`
https://github.com/user-attachments/assets/29d56d58-cd96-432f-9d13-460446d204a1
## Summary
This PR renames the `enterprise_search` config path from
`enterpriseSearch` to `xpack.search`. This is to migrate away from
customer facing usage of enterprise search and align with other search
plugin config paths like `xpack.serverless.search`.
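For anyone carrying the old path in their `kibana.yml`, the rename looks like this (the `host` setting is used purely as an illustrative example):

```
# Before
enterpriseSearch:
  host: 'http://localhost:3002'

# After
xpack.search:
  host: 'http://localhost:3002'
```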
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
Implements the access controls for SIEM rule migrations.
## API changes
- All API routes have been secured with "SIEM Migration" feature checks
- The start migration API route now checks whether the user has privileges to
use the received connector ID
## UI changes
### Onboarding SIEM migrations
- AI Connector selection
- Actions & Connectors: Read -> This privilege allows reading and
selecting a connector
Otherwise, we show a callout with the missing privileges:

- Create a migration
- Security All -> Main Security read & write access
- Siem Migrations All -> new feature under the Security catalog
- Actions & Connectors: Read -> This privilege allows connector
execution for LLM calls
Otherwise, we show a callout with the missing privileges:

### Rule Translations page
- Minimum privileges to make the page accessible (read access):
- Security Read -> Main Security read access
- Siem Migrations All -> new feature under the Security catalog
Otherwise, we hide the link in the navigation and display the generic
empty state if accessed:

- To successfully install rules the following privileges are also
required (write access):
- Security All -> Main Security read & write access
- Index privileges for `.alerts*` pattern: _read, write,
view_index_metadata, manage_
- Index privileges for `lookup_*` pattern: _read_
Otherwise, we show a callout at the top of the page; this callout is
consistent with the one displayed on the Detection Rules page
(`/app/security/rules`)

- To retry rule translations (upload missing macros/lookups or retry
errors)
- Actions & Connectors: Read -> This privilege allows connector
execution for LLM calls
Otherwise, when attempted, we show a toast with the missing privilege.

## Other changes
- Technical preview label

- No connector selected toast
https://github.com/user-attachments/assets/e4900129-ae9c-413f-9a41-f7dca452e71d
## Fixes
- [Fixed] Not possible to select a connector when no connector is
selected:

---------
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Resolves #209261
## Summary
Removes the code used to render Logs Explorer. This does not result in
any functional changes.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
When we run Scout tests in parallel, we call SAML authentication in
parallel too, and since the `.security-profile-8` index does not exist by
default, we periodically get a 503 response:
```
proc [kibana] [2025-01-29T11:13:10.420+01:00][ERROR][plugins.security.user-profile]
Failed to activate user profile: {"error":{"root_cause":[{"type":"unavailable_shards_exception","reason":
"at least one search shard for the index [.security-profile-8] is unavailable"}],
"type":"unavailable_shards_exception","reason":"at least one search shard
for the index [.security-profile-8] is unavailable"},"status":503}. {"service":{"node":
{"roles":["background_tasks","ui"]}}}
```
The solution is to retry the SAML callback, assuming the index will be
created and the issue will resolve itself.
We agreed with Kibana-Security to retry only **5xx** errors, because for
**4xx** we most likely have to restart the authentication from the beginning.
For reviewers: it is not 100% reproducible, so I added unit tests to
verify the retry logic only kicks in for 5xx responses. Please let me
know if I missed something.
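The retry behavior can be sketched as follows (a simplified illustration with hypothetical names, not the actual Scout implementation):

```typescript
// Retry the SAML callback only on 5xx responses; a 4xx means the auth flow
// must be restarted from scratch, so we bail out immediately.
async function samlCallbackWithRetry(
  doCallback: () => Promise<{ status: number }>,
  maxAttempts = 3,
  delayMs = 500
): Promise<{ status: number }> {
  let response = await doCallback();
  for (let attempt = 1; attempt < maxAttempts && response.status >= 500; attempt++) {
    // Wait before retrying, giving ES time to allocate the index shards.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    response = await doCallback();
  }
  return response;
}
```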
Retry was verified locally; you might see this log output:
```
proc [kibana] [2025-01-30T18:40:41.348+01:00][ERROR][plugins.security.user-profile] Failed to activate user profile:
{"error":{"root_cause":[{"type":"unavailable_shards_exception","reason":"at least one search shard for the index
[.security-profile-8] is unavailable"}],"type":"unavailable_shards_exception","reason":"at least one search shard
for the index [.security-profile-8] is unavailable"},"status":503}. {"service":{"node":{"roles":["background_tasks","ui"]}}}
proc [kibana] [2025-01-30T18:40:41.349+01:00][ERROR][plugins.security.authentication] Login attempt with "saml"
provider failed due to unexpected error: {"error":{"root_cause":[{"type":"unavailable_shards_exception","reason":
"at least one search shard for the index [.security-profile-8] is unavailable"}],"type":"unavailable_shards_exception",
"reason":"at least one search shard for the index [.security-profile-8] is unavailable"},"status":503}
{"service":{"node":{"roles":["background_tasks","ui"]}}}
proc [kibana] [2025-01-30T18:40:41.349+01:00][ERROR][http] 500 Server Error {"http":{"response":{"status_code":500},"request":{"method":"post","path":"/api/security/saml/callback"}},"error":
{"message":"unavailable_shards_exception\n\tRoot causes:\n\t\tunavailable_shards_exception: at least one
search shard for the index [.security-profile-8] is
ERROR [scout] SAML callback failed: expected 302, got 500
Waiting 939 ms before the next attempt
proc [playwright]
info [o.e.c.r.a.AllocationService] [scout] current.health="GREEN" message="Cluster health status changed
from [YELLOW] to [GREEN] (reason: [shards started [[.security-profile-8][0]]])."
previous.health="YELLOW" reason="shards started [[.security-profile-8][0]]"
```
To reproduce:
```
node scripts/scout.js run-tests --stateful --config x-pack/platform/plugins/private/discover_enhanced/ui_tests/parallel.playwright.config.ts
```
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
- Fixes the Cypress `parallel.ts` runner to ensure a failure is reported
when creation of the test run environment fails
- Adds the `--version` CLI argument to the `run_sentinelone_host` and
`run_microsoft_defender_host` scripts
- Fixes `run_endpoint_host` script to ensure the `--version` (if
defined) is also used for running fleet-server
## Summary
Fixes the agent count shown in the warning modal when saving a Defend
package policy. It now uses the `active` field instead of `all`, the same
as the `AgentSummary` component.
Also, re-enables flaky unit test for `PolicySettingsLayout`:
closes: #179984
### Checklist
Check the PR satisfies following conditions.
Reviewers should verify this PR satisfies this list as well.
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
## Summary
This PR adds the following tests for Automatic Import:
- jest unit tests for the CEL generation flyout
- FTR tests for the `analyze_api` and `cel` graph endpoints (excluding
200 tests due to https://github.com/elastic/kibana/issues/204177 still
being open)
There is also some very minor cleanup of test mocking for the now
deprecated `generateCel` feature flag, and a small refactor moving a function
to a different file for consistency.
(Cypress tests coming in a separate PR)
## Summary
This extends the initial connector telemetry from
https://github.com/elastic/kibana/pull/186936.
The PR adds the following optional fields when instantiating a new
actionClient as part of its `subActionParams`:
```ts
{
  telemetryMetadata: {
    pluginId: "your plugin name or unique identifier",
    aggregateBy: "ID to aggregate on"
  }
}
```
Support is added to all AI connector models, for stream, non-stream, and
raw invocations.
The PR also adds token count usage for Bedrock `InvokeAIRaw`, as that
was previously not reported correctly.
Pierre also helped with adding a new optional metadata field for the `NL
to ESQL functions`, so that users can pass similar metadata for LLM
conversations using the InferenceClient.
`pluginId` is a field used to filter telemetry in whatever way the team
wants to implement it. It could be a team name, a plugin name, etc.,
depending on how the team wants to group and filter the telemetry
events.
`aggregateBy` is intended to group multiple LLM calls for aggregations
and stats, for example a conversationId that spans multiple LLM calls.
Both fields are optional, so they can simply be omitted when no
aggregation is wanted.
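To illustrate how `aggregateBy` could be used downstream (the event shape here is hypothetical, purely for the sketch), multiple LLM calls sharing the same id can be rolled up:

```typescript
// Hypothetical telemetry event shape; real events carry more fields.
interface TelemetryEvent {
  pluginId?: string;
  aggregateBy?: string;
  tokenCount: number;
}

// Sum token usage per aggregateBy key (e.g. a conversationId).
function totalTokensByAggregate(events: TelemetryEvent[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of events) {
    const key = e.aggregateBy ?? 'unaggregated';
    totals.set(key, (totals.get(key) ?? 0) + e.tokenCount);
  }
  return totals;
}
```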
### Checklist
Check the PR satisfies following conditions.
Reviewers should verify this PR satisfies this list as well.
- [x] Any text added follows [EUI's writing
guidelines](https://elastic.github.io/eui/#/guidelines/writing), uses
sentence case text and includes [i18n
support](https://github.com/elastic/kibana/blob/main/src/platform/packages/shared/kbn-i18n/README.md)
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
- [x] The PR description includes the appropriate Release Notes section,
and the correct `release_note:*` label is applied per the
[guidelines](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)
---------
Co-authored-by: pgayvallet <pierre.gayvallet@elastic.co>
## 📓 Summary
This change introduces a new recursive record type so that documents
used for sampling and simulation do not fail the overly strict keys
check.
```tsx
import { z } from 'zod';

// Any primitive value allowed for schema validation; excludes symbol and bigint
type Primitive = string | number | boolean | null;
const primitive = z.union([z.string(), z.number(), z.boolean(), z.null()]);

// Recursive object (sketch; the exact shapes in the PR may differ)
interface RecursiveRecord {
  [key: string]: Primitive | Primitive[] | RecursiveRecord;
}
const recursiveRecord: z.ZodType<RecursiveRecord> = z.lazy(() =>
  z.record(z.union([primitive, z.array(primitive), recursiveRecord]))
);
```
Implements an initial UI to manage the data retention of a stream.
The view displays information about the lifecycle configuration/origin
and also allows updating it to one of the available options.
Options depend on the type of stream and the deployment type.
These are the options that should currently be available (the API also
has guards):
| | stateful | serverless |
| -------- | ------- | ------ |
| root stream | dsl, ilm | dsl |
| wired stream | inherit, dsl, ilm | inherit, dsl |
| unwired stream* | inherit, dsl | inherit, dsl |
*An unwired stream's retention cannot be updated if it is currently using
ILM.
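The table above can be read as a small guard function. This is an illustrative sketch, not the actual API guard:

```typescript
type Deployment = 'stateful' | 'serverless';
type StreamType = 'root' | 'wired' | 'unwired';
type LifecycleOption = 'inherit' | 'dsl' | 'ilm';

// Mirrors the options table: root streams cannot inherit, ILM is
// stateful-only, and unwired streams never get the ILM option.
function availableLifecycleOptions(stream: StreamType, deployment: Deployment): LifecycleOption[] {
  const options: LifecycleOption[] = stream === 'root' ? ['dsl'] : ['inherit', 'dsl'];
  if (deployment === 'stateful' && stream !== 'unwired') {
    options.push('ilm');
  }
  return options;
}
```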
### Screenshots



---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Marco Antonio Ghiani <marcoantonio.ghiani01@gmail.com>
Closes https://github.com/elastic/kibana/issues/209308.
### Notes
- Stuck loading state was caused by the changes introduced in
https://github.com/elastic/kibana/pull/206758.
- The non-aggregatable bugs were long-running ones; since this is tricky
functionality to test, I believe they were always there
### 🎥 Demo
For this demo I went through the upgrade scenario: I first created a
cluster on 7.17.x, upgraded to the latest 8.18.x, and then performed a
manual rollover for `logs-synth.3-default`. Hence, what you can see in
the video is:
1. The loading state is no longer stuck in the dataset details page (e.g.
`logs-synth.2-default`)
2. The non-aggregatable status is calculated properly for
`logs-synth.3-default`
https://github.com/user-attachments/assets/fa097445-7f0a-4dcb-adae-27688e99bf3c
## Summary
Resolves #209159
Makes the `groupings` property in the SLO summary optional to fix schema
validation issues with SLOs without groups.
## Release Notes
Fixed a bug that caused issues when loading SLOs by status, SLI type, or
instance id.
## Testing
Create an SLO without an entry in the "group by" field. All SLOs should
still be able to be grouped despite this distinction.
Resolves https://github.com/elastic/kibana/issues/205949,
https://github.com/elastic/kibana/issues/191117
## Summary
Trying to fix a flaky integration test by performing a bulk create for the
test tasks instead of creating them one by one. After making this change,
I was able to run the integration test ~100 times without failure.
---------
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
PR https://github.com/elastic/kibana/pull/204034 fixed some issues
with timeline batching. It was not able to fix one of the issues with the
`Refetch` logic, which exists in `main` (resulting in a flaky test) and
causes some tests to fail in `8.16`, `8.17` and `8.x`.
## Issue Description
There are 2 issues shown in the videos below:
1. When the user updates the status of an alert, the `Refetch` only happens
on the first `batch`. This behaviour is currently flaky: even if the user
is on the nth batch, the table will fetch the 0th batch and reset the
user's page back to 1.
https://github.com/user-attachments/assets/eaf88a82-0e9b-4743-8b2d-60fd327a2443
2. When the user clicks `Refresh` manually, again only the first (0th)
`batch` is fetched, when all the present batches should rather be
fetched.
https://github.com/user-attachments/assets/8d578ce3-4f24-4e70-bc3a-ed6ba99167a0
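The expected behavior in both cases is that a refetch covers every batch the user has already loaded, not just batch 0. A minimal sketch of that idea (hypothetical names, not the timeline implementation):

```typescript
// On refetch, request the whole range the user has paged through so the
// table can be rebuilt in place instead of resetting to the first batch.
function refetchParams(loadedBatches: number, batchSize: number): { from: number; size: number } {
  return { from: 0, size: Math.max(1, loadedBatches) * batchSize };
}
```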
### Checklist
Check the PR satisfies following conditions.
Reviewers should verify this PR satisfies this list as well.
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios