Closes https://github.com/elastic/kibana/issues/205479
This filters out the `ChatCompletionTokenCountEvent` from the inference
plugin. This greatly simplifies handling ChatCompletion events in the
Obs AI Assistant.
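For context, a minimal sketch of what such filtering can look like (the event and helper names here are illustrative, not the actual inference plugin types):
```typescript
import { filter, type Observable } from 'rxjs';

// Illustrative event union; the real inference plugin types differ.
type ChatCompletionEvent =
  | { type: 'chatCompletionChunk'; content: string }
  | { type: 'chatCompletionTokenCount'; tokens: { total: number } };

// Drop token-count events so downstream consumers only handle content events.
function withoutTokenCountEvents(events$: Observable<ChatCompletionEvent>) {
  return events$.pipe(filter((event) => event.type !== 'chatCompletionTokenCount'));
}
```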
## Summary
Correctly forwards the selected rule type id to the actions form section
in the Security Solution rule creation/update flow.
Adds a functional test case to cover the bug.
## To verify
1. Navigate to `Security > Rules > Detection rules > Create new rule`
2. Fill in the first 3 steps
3. In the Actions step, select the Cases action
4. Check that the `Group by alert field` dropdown shows the correct
alert fields
5. Create the rule, then repeat point 4 in the rule editing UI
## References
Fixes #210209
## Summary
This PR implements 2 endpoints as a follow up to
https://github.com/elastic/kibana/pull/208126 for working directly with
the `group` object for `GroupStreamDefinition`:
- `PUT /api/streams/{id}/_group`
- `GET /api/streams/{id}/_group`
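A hypothetical usage sketch of the two endpoints (the request body shape is an assumption; `GroupStreamDefinition` is the authoritative type):
```typescript
const url = 'http://localhost:5601/api/streams/my-group-stream/_group';
const headers = { 'kbn-xsrf': 'true', 'Content-Type': 'application/json' };

// Read the group object of a GroupStreamDefinition.
const group = await fetch(url, { headers }).then((res) => res.json());
console.log(group);

// Upsert the group object directly.
await fetch(url, {
  method: 'PUT',
  headers,
  body: JSON.stringify({ group: { members: ['logs', 'metrics'] } }),
});
```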
---------
Co-authored-by: Joe Reuter <johannes.reuter@elastic.co>
## Summary
Fetch and render backend data upon opening the Asset Inventory page.
### Depends on
- https://github.com/elastic/security-team/issues/11270
- https://github.com/elastic/kibana/issues/201709
- https://github.com/elastic/kibana/issues/201710
- https://github.com/elastic/security-team/issues/11687
### Screenshots
<details><summary>No applied filters</summary>
<img width="1452" alt="Screenshot 2025-02-18 at 08 40 51"
src="https://github.com/user-attachments/assets/e8970f92-701f-4bcf-9c43-8c1ce3155ba2"
/>
</details>
<details><summary>Filtering through search bar with KQL</summary>
<img width="1448" alt="Screenshot 2025-02-18 at 08 40 38"
src="https://github.com/user-attachments/assets/fdffe535-bb76-44da-be43-096e3007e680"
/>
</details>
<details><summary>Filtering through filter dropdowns</summary>
<img width="1451" alt="Screenshot 2025-02-18 at 08 41 03"
src="https://github.com/user-attachments/assets/ec68d9e8-5b4f-4c70-ba90-9fb7e4ddf18b"
/>
</details>
<details><summary>Filtering through both search bar and filter dropdowns
- no results found in this case</summary>
<img width="1447" alt="Screenshot 2025-02-18 at 08 40 28"
src="https://github.com/user-attachments/assets/2b2347e1-86fe-4d67-b859-0f84108c58bc"
/>
</details>
<details><summary>Default empty state (no rows fetched)</summary>
<img width="1452" alt="Screenshot 2025-02-18 at 09 39 49"
src="https://github.com/user-attachments/assets/79876021-c09b-42a0-a776-5e5fde688994"
/>
</details>
### Definition of done
- [x] Asset Inventory page fetches data prepared by the data-view that
comes pre-installed with the "Cloud Asset Inventory" integration
- [x] Search bar
- [x] Filters
- [x] Data Grid
- [x] Empty state when number of fetched rows is zero
### How to test
1. Prepare cloud user
- Go to [users
page](https://keep-long-live-env-ess.kb.us-west2.gcp.elastic-cloud.com/app/management/security/users)
on Elastic Cloud
- Create a new user with a custom username and password
- Copy the same roles from the user called `paulo_remote_dev`
2. Start local env running these commands
- Run ES with `node scripts/es snapshot --license trial -E
path.data=../default -E
reindex.remote.whitelist=cb8e85476870428d8c796950e38a2eda.us-west2.gcp.elastic-cloud.com:443
-E xpack.security.authc.api_key.enabled=true`
- Run Kibana with `yarn start --no-base-path`
3. Go to Integrations page, switch on the "*Display beta integrations*"
control, then add the **Cloud Asset Inventory** integration on your
local environment. Postpone Elastic Agent addition.
4. Go to the Dev Tools page, click on the "config" tab and add the
following environment variables:
- `${ES_REMOTE_HOST}`:
[https://cb8e85476870428d8c796950e38a2eda.us-west2.gcp.elastic-cloud.com:443](https://cb8e85476870428d8c796950e38a2eda.us-west2.gcp.elastic-cloud.com/)
- `${ES_REMOTE_USER}`: (the username you set for your user in step 1)
- `${ES_REMOTE_PASS}`: (the password you set for your user in step 1)
5. Run the following script:
<details><summary>Script</summary>
```
POST _reindex?wait_for_completion=false
{
  "conflicts": "proceed",
  "source": {
    "remote": {
      "host": "${ES_REMOTE_HOST}",
      "username": "${ES_REMOTE_USER}",
      "password": "${ES_REMOTE_PASS}"
    },
    "index": "logs-cloud_asset_inventory*",
    "query": {
      "bool": {
        "must": [
          {
            "range": {
              "@timestamp": {
                "gte": "now-1d"
              }
            }
          }
        ]
      }
    }
  },
  "dest": {
    "op_type": "create",
    "index": "logs-cloud_asset_inventory.asset_inventory-default"
  },
  "script": {
    "source": """
      ctx._source['entity.category'] = ctx._source.asset.category;
      ctx._source['entity.name'] = ctx._source.asset.name;
      ctx._source['entity.type'] = ctx._source.asset.type;
      ctx._source['entity.sub_type'] = ctx._source.asset.sub_type;
      ctx._source['entity.sub_category'] = ctx._source.asset.sub_category;
    """
  }
}
```
</details>
Finally, open the Discover page and set the DataView filter in the
top-right corner to `logs-cloud_asset_inventory.asset_inventory-*`, as in
the screenshot below. If the grid is populated, you've got data and the
whole setup worked!
<details><summary>Discover page</summary>

</details>
### Checklist
- [ ] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
- [x] This was checked for breaking HTTP API changes, and any breaking
changes have been approved by the breaking-change committee. The
`release_note:breaking` label should be applied in these situations.
- [x] The PR description includes the appropriate Release Notes section,
and the correct `release_note:*` label is applied per the
[guidelines](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)
### Identify risks
No risks at all.
## Summary
Switches to the `recursiveRecord` schema so we don't get these in
console:
```
[2025-02-19T15:47:07.556+01:00][WARN ][plugins.streams] Warning for PUT /api/streams/{name}: schema ZodUnknown at body.stream.ingest.wired.fields is not inspectable and could lead to runtime exceptions, convert it to a supported schema
[2025-02-19T15:47:07.557+01:00][WARN ][plugins.streams] Warning for POST /api/streams/{name}/schema/fields_simulation: schema ZodUnknown at body.field_definitions is not inspectable and could lead to runtime exceptions, convert it to a supported schema
[2025-02-19T15:47:07.557+01:00][WARN ][plugins.streams] Warning for POST /api/streams/{name}/processing/_simulate: schema ZodUnknown at body.detected_fields is not inspectable and could lead to runtime exceptions, convert it to a supported schema
```
I had to move the schema definition / types into another file otherwise
a circular dependency was introduced with the `fields/index.ts` file,
causing a `Cannot read properties of undefined (reading '_parse')`
error.
As far as I can see the `recursiveRecord` schema should handle / cover
the ES `MappingProperty` type fine.
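For illustration, a minimal sketch of an inspectable recursive record schema in zod (names and exact shape are assumptions, not the actual Kibana definition):
```typescript
import { z } from 'zod';

type RecursiveRecord =
  | string
  | number
  | boolean
  | null
  | RecursiveRecord[]
  | { [key: string]: RecursiveRecord };

const primitive = z.union([z.string(), z.number(), z.boolean(), z.null()]);

// z.lazy defers evaluation so the schema can reference itself without a
// circular import, and unlike z.unknown() it stays inspectable by the router.
const recursiveRecord: z.ZodType<RecursiveRecord> = z.lazy(() =>
  z.union([primitive, z.array(recursiveRecord), z.record(recursiveRecord)])
);

recursiveRecord.parse({ properties: { message: { type: 'keyword' } } });
```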
## Summary
Closes #196319
I think I got the intention of the test wrong in
https://github.com/elastic/kibana/pull/196172.
Looking at the test we enable the risk engine and check everything is
happy. When the risk engine is enabled, the task should be healthy, so I
believe that `running` is a valid status here.
Latest flaky failure:
```
└- ✖ fail: Entity Analytics - Risk Engine @ess @serverless @serverlessQA init_and_status_apis status api should disable / enable risk engine
--
| │ Error: expected [ 'idle', 'claiming' ] to contain 'running'
| │ at Assertion.assert (expect.js:100:11)
| │ at Assertion.contain (expect.js:447:10)
| │ at expectTaskIsNotRunning (init_and_status_apis.ts:15:32)
| │ at Context.<anonymous> (init_and_status_apis.ts:781:9)
| │ at processTicksAndRejections (node:internal/process/task_queues:95:5)
| │ at Object.apply (wrap_function.js:74:16)
```
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Closes https://github.com/elastic/kibana/issues/211666
Allow any type in `PackageInfoSchema` and `KibanaAssetReferenceSchema`
so that new types of EPM packages are accepted without a Kibana change.
Covered with unit tests.
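As a rough illustration of the idea (the actual Fleet schemas are more involved; the names below are assumptions), relaxing a closed set of literals to an open string keeps validation forward compatible:
```typescript
import { schema } from '@kbn/config-schema';

// Before: only known asset types validate, so a new package type breaks.
const assetTypeStrict = schema.oneOf([
  schema.literal('dashboard'),
  schema.literal('index-pattern'),
]);

// After: any string validates, so new EPM package types need no Kibana change.
const assetTypeOpen = schema.string();
```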
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Depends on https://github.com/elastic/kibana/pull/209985
Add suggestions for grok processing:
<img width="594" alt="Screenshot 2025-02-05 at 10 31 27"
src="https://github.com/user-attachments/assets/4b717681-aa7d-4952-a4e0-9013d9b8aaf8"
/>
The logic for generating suggestions works like this:
* Take the current sample
* Split it into pattern groups using a simple regex-based normalization,
replacing runs of numbers and other variable parts with placeholders
(see the sketch after this list)
* For the top 5 found groups, pass a couple of messages to the LLM in
parallel to come up with a grok pattern
* Check whether the grok patterns actually match something and don't
break
* Report the patterns that have a positive match rate
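A rough sketch of the grouping step (illustrative only, not the actual implementation): messages are normalized by masking variable parts, counted, and the most frequent shapes are forwarded to the LLM.
```typescript
// Mask variable parts so messages with the same shape collapse into one group.
function toPatternKey(message: string): string {
  return message
    .replace(/\d+/g, '<NUM>') // runs of numbers become a placeholder
    .replace(/\s+/g, ' ')     // collapse whitespace
    .trim();
}

// Return the top N most frequent message shapes from the current sample.
function topGroups(samples: string[], limit = 5): string[] {
  const counts = new Map<string, number>();
  for (const sample of samples) {
    const key = toPatternKey(sample);
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([key]) => key);
}
```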
For the `Generate patterns` button to show in the UI, make sure a
connector is configured and the license level is above basic (trial
license is easiest to test with).
I did some light refactoring on the processing routes, moving the
simulation bits into a separate file - no changes in this area though.
---------
Co-authored-by: Marco Antonio Ghiani <marcoantonio.ghiani01@gmail.com>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Jean-Louis Leysens <jloleysens@gmail.com>
Closes https://github.com/elastic/kibana/issues/196659
## Summary
This PR adds a new setting schema field `solution` which is used in the
Advanced settings UI to decide whether to display the setting, depending
on the solution of the current space. If the `solution` is not set in
the setting definition, the setting will be displayed in all solutions.
Otherwise, the setting will only be displayed in the set solution.
The current agreement is that we want to display all settings in the
"Observability" settings category in the Oblt solution only and all
settings in the "Security Solution" settings category in the Security
solution only. Therefore, in this PR we set the `solution` field
accordingly in the corresponding setting definitions. Note: We decided
to add a new setting definition field `solution` rather than filtering
by the already existing `category` field so that this approach works in
the future if we want to hide other single settings outside of these two
categories.
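A minimal sketch of the visibility rule described above (an assumed helper, not the actual settings service code); the classic solution keeps showing everything:
```typescript
type SolutionView = 'es' | 'oblt' | 'security' | 'classic';

interface SettingDefinition {
  name: string;
  solution?: SolutionView; // the new optional field
}

function isSettingVisible(setting: SettingDefinition, spaceSolution: SolutionView): boolean {
  // No solution set, or a classic space: the setting is shown everywhere.
  if (setting.solution === undefined || spaceSolution === 'classic') {
    return true;
  }
  return setting.solution === spaceSolution;
}

// An Observability-only setting is hidden in a Security space.
isSettingVisible({ name: 'observability:someSetting', solution: 'oblt' }, 'security'); // false
```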
**How to test:**
Verify that in the classic solution, you can see all settings, and that
the solution-related settings mentioned above are only displayed in the
corresponding solution.
https://github.com/user-attachments/assets/398ef3e6-973a-4283-ae20-229bf6139d60
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
**Partially resolves: #203523**
## Summary
Fixing the issue of KQL query bar edit component not showing properly
long multiline KQL queries.
Currently the query isn't fully visible, and it's not possible to
navigate it with the Up/Down keyboard keys. It's also not possible to
scroll down, as the component doesn't allow inserting new-line symbols.
I am fixing the behavior by:
- setting `bubbleSubmitEvent={true}` so that the key press can
propagate to higher components and be handled properly. This fixes the
problem of not being able to enter new lines.
- not touching the broken behavior of the Up/Down arrow keys, which
intercept the event and, instead of moving the cursor, iterate the items
in the Suggestions panel, which is counterintuitive. A separate issue
will be created for the Kibana Visualization team.
- modifying one CSS style in Kibana Visualization to adjust the height
and adding a class to set proper alignment of the buttons.
# BEFORE
- Not possible to insert new lines.
- Arrow DOWN takes focus to Suggestions Panel, then together with Arrow
UP it is used to iterate the suggestions
- When textarea grows it gets hidden below the parent's panel
https://github.com/user-attachments/assets/d97b81e3-7409-4089-865d-89ee702744f9
# AFTER
- Possible to insert new lines
- Behavior of DOWN / UP Arrows stays the same
- When textarea grows the whole panel resizes
https://github.com/user-attachments/assets/3a59923b-0fb1-49e7-b11d-55474f465ca2
https://github.com/user-attachments/assets/48efd325-1c66-43ca-9936-69ef37b4ee7a
## Summary
This PR ensures that `definition.group.members` is a unique array of
strings. I introduced a new private function on the `StreamsClient` called
`parseDefinition` that parses the definition being upserted with the
runtime schemas to ensure it is properly formatted. This is also a
good extension point for doing any transformations we need.
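A sketch of what such a runtime check could look like with zod (an assumption about the shape, not the actual `parseDefinition` implementation):
```typescript
import { z } from 'zod';

const groupDefinitionSchema = z.object({
  group: z.object({
    members: z
      .array(z.string())
      .refine((members) => new Set(members).size === members.length, {
        message: 'group.members must be a unique array of strings',
      }),
  }),
});

groupDefinitionSchema.parse({ group: { members: ['logs', 'metrics'] } }); // ok
```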
## 📓 Summary
Revert a route config change introduced in [[Streams 🌊] Enrichment
simulation behaviour
improvements](https://github.com/elastic/kibana/pull/209985) that always
redirected to the overview page on refresh.
As part of https://github.com/elastic/kibana/pull/208180, the optional
`telemetryMetadata` field was added to the schema for the AI connectors;
however, it seems that one was missed, so this PR simply adds it in.
Similarly to the above PR, the feature cannot be used in the same week
it was added, to allow a grace period for serverless; this PR adds only
the schema update itself.
The storage adapter helper is a very generic package. This PR moves it
out of the observability server utils into a dedicated package to better
reflect this and to be able to use it from non-observability contexts.
The same applies to the observability es client. This PR moves it as
well and renames it to `TracedEsClient` in the same way.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## 📓 Summary
Part of https://github.com/elastic/streams-program/issues/127
Closes https://github.com/elastic/streams-program/issues/114
This update overhauls the internal logic of our processing simulation
endpoint. It now runs parallel simulations (pipeline and, conditionally,
ingest) to extract detailed document reports and processor metrics,
while also handling a host of edge cases.
The key improvements include:
- **Parallel Simulation Execution**
Executes both pipeline and ingest simulations concurrently. The pipeline
simulation always runs to extract per-document reports and metrics. The
ingest simulation runs conditionally when detected fields are provided,
enabling fast failures on mapping mismatches (see the concurrency sketch
after this list).
- **Document Reporting & Metrics**
Extracts granular differences between source and simulated documents.
Reports include:
- Field-level diffs indicating which processor added or updated fields.
- Detailed error messages (e.g., generic processor failure, generic
simulation failure, non-additive processor failure).
- Calculation of overall success and failure rates, as well as
per-processor metrics.
- **Sequential Processors & Field Overriding**
Supports multiple sequential processors. In cases where later processors
override fields produced by earlier ones, the logic bypasses
non-additive checks to accept the new value.
- **Robust Handling of Partial & Failed Simulations**
Simulations now correctly mark documents as:
- **Parsed** when all processors succeed.
- **Partially parsed** when some processors fail.
- **Failed** when none of the processors processing the document
succeed.
- **Mapping Validation & Non-Additive Detection**
The simulation verifies that the detected field mappings are compatible.
If a processor introduces non-additive changes—updating an existing
field rather than appending—the simulation flags the error and sets a
dedicated `is_non_additive_simulation` flag. Additionally, a failed
ingest simulation (e.g., due to incompatible mapping types) results in
an immediate failure.
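For illustration, a rough sketch of the concurrent execution described in the first item above (an assumption about structure, not the actual handler):
```typescript
async function runSimulations<TPipeline, TIngest>(params: {
  simulatePipeline: () => Promise<TPipeline>;
  simulateIngest: () => Promise<TIngest>;
  hasDetectedFields: boolean;
}) {
  // The pipeline simulation always runs; the ingest simulation only runs when
  // detected fields are provided, so mapping mismatches fail fast.
  const [pipeline, ingest] = await Promise.all([
    params.simulatePipeline(),
    params.hasDetectedFields ? params.simulateIngest() : Promise.resolve(undefined),
  ]);
  return { pipeline, ingest };
}
```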
The final returned API response adheres to the following TypeScript
type:
```typescript
interface SimulationResponse {
detected_fields: DetectedField[];
documents: SimulationDocReport[];
processors_metrics: Record<string, ProcessorMetrics>;
failure_rate: number;
success_rate: number;
is_non_additive_simulation: boolean;
}
```
## Updated tests
```
Processing Simulation
├── Successful simulations
│ ├── should simulate additive processing
│ ├── should simulate with detected fields
│ ├── should simulate multiple sequential processors
│ ├── should simulate partially parsed documents
│ ├── should return processor metrics
│ ├── should return accurate success/failure rates
│ ├── should allow overriding fields detected by previous simulation processors (skip non-additive check)
│ ├── should gracefully return the errors for each partially parsed or failed document
│ ├── should gracefully return failed simulation errors
│ ├── should gracefully return non-additive simulation errors
│ └── should return the is_non_additive_simulation simulation flag
└── Failed simulations
└── should fail with incompatible detected field mappings
```
## 🚨 API Failure Conditions & Handler Corner Cases
The simulation API handles and reports the following corner cases:
- **Pipeline Simulation Failures** _(Gracefully reported)_
- Syntax errors in processor configurations (e.g., malformed grok
patterns) trigger a pipeline-level failure with detailed error
information (processor ID, error type, and message).
- **Non-Additive Processor Behavior** _(Gracefully reported)_
- If a processor modifies fields already present in the source document
rather than strictly appending new fields, the simulation flags this as
a non-additive change.
- The error is recorded both at the document level (resulting in a
"partially_parsed" or "failed" status) and within per-processor metrics,
with the global flag `is_non_additive_simulation` set to true.
- **Partial Document Processing** _(Gracefully reported)_
- In scenarios with sequential processors where the first processor
succeeds (e.g., a dissect processor) and the subsequent grok processor
fails, documents are marked as "partially_parsed."
- These cases are reflected in the overall success/failure rates and
detailed per-document error lists.
- **Field Overriding**
- When a later processor intentionally overrides fields (for instance,
reassigning a previously calculated field), the simulation bypasses the
non-additive check, and detected fields are aggregated accordingly,
noting both the original and overridden values.
- **Mapping Inconsistencies** _(API failure bad request)_
- When the ingest simulation detects incompatibility between the
provided detected field mappings (such as defining a field as a boolean
when it should be a date) and the source document, it immediately fails.
- The failure response includes an error message explaining the
incompatibility.
## 🔜 Follow-up Work
- **Integrate Schema Editor**
Given the improved support for detected fields, a follow-up PR will
introduce the Schema Editor and allow mapping alongside the data
enrichment.
- **Granular filtering and reporting**
Having access to more granular details such as status, errors and
detected fields for each document, we could enhance the table with
additional information and better filters. cc @LucaWintergerst @patpscal
## 🎥 Demo recordings
https://github.com/user-attachments/assets/29f804eb-6dd4-4452-a798-9d48786cbb7f
---------
Co-authored-by: Jean-Louis Leysens <jloleysens@gmail.com>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
This PR aims at relocating some of the Kibana modules (plugins and
packages) into a new folder structure, according to the _Sustainable
Kibana Architecture_ initiative.
> [!IMPORTANT]
> * We kindly ask you to:
> * Manually fix the errors in the error section below (if there are
any).
> * Search for the `packages[\/\\]` and `plugins[\/\\]` patterns in the
source code (Babel and Eslint config files), and update them
appropriately.
> * Manually review
`.buildkite/scripts/pipelines/pull_request/pipeline.ts` to ensure that
any CI pipeline customizations continue to be correctly applied after
the path name changes.
> * Review all of the updated files, especially the `.ts` and `.js` files
listed in the sections below, as some of them contain relative paths
that have been updated.
> * Think of the potential impact of the move, including tooling and
configuration files that may be pointing to the relocated modules. E.g.:
> * customised eslint rules
> * docs pointing to source code
> [!NOTE]
> * This PR has been auto-generated.
> * Any manual contributions will be lost if the 'relocate' script is
re-run.
> * Try to obtain the missing reviews / approvals before applying manual
fixes, and/or keep your changes in a .patch / git stash.
> * Please use
[#sustainable_kibana_architecture](https://elastic.slack.com/archives/C07TCKTA22E)
Slack channel for feedback.
Are you trying to rebase this PR to solve merge conflicts? Please follow
the steps described
[here](https://elastic.slack.com/archives/C07TCKTA22E/p1734019532879269?thread_ts=1734019339.935419&cid=C07TCKTA22E).
#### 3 package(s) are going to be relocated:
| Id | Target folder |
| -- | ------------- |
| `@kbn/securitysolution-data-table` | `x-pack/solutions/security/packages/data-table` |
| `@kbn/ecs-data-quality-dashboard` | `x-pack/solutions/security/packages/ecs-data-quality-dashboard` |
| `@kbn/security-solution-side-nav` | `x-pack/solutions/security/packages/side-nav` |
<details >
<summary>Updated references</summary>
```
./.i18nrc.json
./package.json
./packages/kbn-ts-projects/config-paths.json
./src/platform/packages/private/kbn-repo-packages/package-map.json
./tsconfig.base.json
./tsconfig.base.type_check.json
./tsconfig.refs.json
./x-pack/solutions/security/packages/data-table/jest.config.js
./x-pack/solutions/security/packages/ecs-data-quality-dashboard/jest.config.js
./x-pack/solutions/security/packages/side-nav/jest.config.js
./yarn.lock
.github/CODEOWNERS
```
</details><details >
<summary>Updated relative paths</summary>
```
x-pack/solutions/security/packages/data-table/jest.config.js:11
x-pack/solutions/security/packages/data-table/tsconfig.json:2
x-pack/solutions/security/packages/ecs-data-quality-dashboard/jest.config.js:24
x-pack/solutions/security/packages/ecs-data-quality-dashboard/tsconfig.json:10
x-pack/solutions/security/packages/ecs-data-quality-dashboard/tsconfig.json:2
x-pack/solutions/security/packages/side-nav/jest.config.js:10
x-pack/solutions/security/packages/side-nav/tsconfig.json:2
```
</details>
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
After the recent changes in
https://github.com/elastic/kibana/pull/205699, if a deployment fails,
the error will be handled correctly at the single-deployment level;
however, the pipeline would break, so further deployments wouldn't
proceed.
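A minimal sketch of the intended behavior (assumed structure, not the actual pipeline code): handle each deployment's failure individually so one error no longer stops the remaining deployments.
```typescript
async function deployAll(deployments: Array<() => Promise<void>>): Promise<void> {
  const results = await Promise.allSettled(deployments.map((deploy) => deploy()));
  for (const result of results) {
    if (result.status === 'rejected') {
      // Log and continue; the remaining deployments still proceed.
      console.error('Deployment failed:', result.reason);
    }
  }
}
```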