This adds support for the new selector syntax to the log source profile
heuristics. It only matches when the index name expression exclusively
contains implicit or explicit `data` selectors.
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
Currently, the `semantic_text` field supports a default `inference_id`,
meaning users are not required to explicitly select an inference
endpoint during mapping. However, a bug has been identified: if the
`Select inference Id` popover is not opened, the `inference_id` field
property remains as an empty string. This causes Elasticsearch (ES) to
throw an error, as it requires a value to be present if the property is
defined.
To address this issue, the proposed solution is to remove the
`inference_id` property from the `semantic_text` field during field
mapping if its value is empty.
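As a minimal sketch of the idea (the helper name and types below are hypothetical, not the actual mappings-editor code), the field object is cleaned up before submission so Elasticsearch never receives an empty string for a defined property:

```ts
interface SemanticTextField {
  type: 'semantic_text';
  inference_id?: string;
}

// Drop an empty inference_id so ES falls back to the default inference endpoint.
const prepareSemanticTextField = (field: SemanticTextField): SemanticTextField => {
  if (field.inference_id === '') {
    const { inference_id, ...rest } = field;
    return rest;
  }
  return field;
};

// prepareSemanticTextField({ type: 'semantic_text', inference_id: '' })
// => { type: 'semantic_text' }
```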
### Screen Recording
https://github.com/user-attachments/assets/e8d8d471-7ff2-493e-8872-e42838579d44
---------
Co-authored-by: Matthew Kime <matt@mattki.me>
## Summary
resolves https://github.com/elastic/kibana/issues/105692
This PR adds a pre response handler that sets a warning header if the
requested endpoint is deprecated.
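A hedged sketch of the approach (the deprecation lookup on the route options is an assumption, not the exact implementation): a pre-response interceptor registered via `registerOnPreResponse` that appends a `warning` header when the resolved route is marked as deprecated.

```ts
import type { CoreSetup } from '@kbn/core/server';

export const registerDeprecationWarningHeader = (core: CoreSetup) => {
  core.http.registerOnPreResponse((request, preResponse, toolkit) => {
    // `deprecated` route metadata is assumed here for illustration.
    const isDeprecated = Boolean(
      (request.route.options as { deprecated?: unknown }).deprecated
    );
    if (!isDeprecated) {
      return toolkit.next();
    }
    // Attach a standard warning header without altering the response body.
    return toolkit.next({
      headers: {
        warning: `299 Kibana "Deprecated endpoint: ${request.route.method.toUpperCase()} ${request.route.path}"`,
      },
    });
  });
};
```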
### Checklist
Check that the PR satisfies the following conditions.
Reviewers should verify this PR satisfies this list as well.
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
- [x] The PR description includes the appropriate Release Notes section,
and the correct `release_note:*` label is applied per the
[guidelines](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)
### Identify risks
Does this PR introduce any risks? For example, consider risks like hard
to test bugs, performance regression, potential of data loss.
Describe the risk, its severity, and mitigation for each identified
risk. Invite stakeholders and evaluate how to proceed before merging.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
Added `createdBy` and `updatedBy` fields to summary documents.
This makes it easier to identify which user added an SLO and which user
last updated it. It's especially helpful when there are hundreds of SLOs
defined.
<img width="1728" alt="image"
src="https://github.com/user-attachments/assets/ee7bb4d4-a8ea-40c4-8d91-06c32c9b0ba6"
/>
---------
Co-authored-by: Kevin Delemme <kdelemme@gmail.com>
Co-authored-by: Kevin Delemme <kevin.delemme@elastic.co>
Fixes https://github.com/elastic/security-team/issues/11357
## Summary
In this PR we use the cases local storage to preserve the selected
ordering of the user activity on the case details page.
Initially, I was going to save the whole `UserActivityParams` object in
local storage but ultimately decided against it in order to preserve
defaults like the selected tab or page.
## Summary
This PR improves upon the Universal entity definition and entity store
work being done to support Asset Inventory by introducing a flag
`dynamic` to the definition.
The entity store uses an enrich policy in order to retain observed data
that falls outside of a `lookbackPeriod` used by the transform that runs
the aggregations on the source fields.
Normally, we have to specify a retention strategy for each field defined
in an entity definition. However, for universal entities, (some of) the
fields are dynamically generated based on the JSON extractor pipeline
processor, which means we cannot define which strategy to use in the
definition itself.
To account for this, when `dynamic` is set to `true`, we run an extra
ingest pipeline step to process _any field which does not show up in the
entity definition_ (i.e., has been dynamically generated). At the moment,
this pipeline step uses a strategy that always picks the latest value,
although in the future this might need to be configurable, mimicking
the ability to choose strategies for "static" fields.
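For illustration only (the property names below are assumptions and do not reflect the exact entity-definition schema), the flag roughly works like this:

```ts
// `dynamic: true` tells the engine to add an extra ingest pipeline step that
// keeps the latest value for any field NOT enumerated in `fields`.
const universalEntityDescription = {
  entityType: 'universal',
  identityField: 'related.entity',
  fields: [
    // "static" fields each declare their own retention strategy…
    { source: 'entities.keyword', retention: { operation: 'prefer_newest_value' } },
  ],
  // …while dynamically generated fields (from the JSON extractor pipeline)
  // cannot be listed here, so they fall back to a latest-value strategy.
  dynamic: true,
};
```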
See this
[doc](https://docs.google.com/document/d/1D8xDtn3HHP65i1Y3eIButacD6ZizyjZZRJB7mxlXzQY/edit?tab=t.0#heading=h.9fz3qtlfzjg7)
for more details and [this
Figma](https://www.figma.com/board/17dpxrztlM4O120p9qMcNw/Entity-descriptions?node-id=0-1&t=JLcB84l9NxCnudAs-1)
for information regarding Entity Store architecture.
## How to test:
### Setup
1. Ensure the default Security Data View exists by navigating to some
Security solution UI.
2. Set up the `entity.keyword` builder pipeline
* Add it to an index that matches any of the default index patterns in
the security data view (eg: `logs-store`)
* Make sure an ingested doc contains both `event.ingested` and
`@timestamp`.
* Easiest way is to add `set` processors to the builder pipeline.
3. Because of the async nature of the field retention process, it is
recommended to change some of the default values (explained below)
4. Enable `debugging` by adding
`xpack.securitySolution.entityAnalytics.entityStore.developer.pipelineDebugMode:
true` to your `kibana.dev.yml`
5. Enable the `assetInventoryStoreEnabled` FF:
```
xpack.securitySolution.enableExperimental:
- assetInventoryStoreEnabled
```
### Interacting with the store
In Kibana dev tools:
#### Phase 1
1. `POST` some of the example docs to the `logs-store` index
2. Confirm the `entity.keyword` field is being added by the builder
pipeline via `GET logs-store/_search`.
3. Initialise the universal entity engine via: `POST
kbn:/api/entity_store/engines/universal/init {}`
* In order to properly test field retention, it's advisable to reduce
the `lookbackPeriod` setting, which means some of the docs in the index
might fall out of the window if it takes too long to initialize the
engine. Any docs posted when the engine is running should be picked up.
* Note that using the UI does not work, as we've specifically removed
the Universal engine from the normal Entity Store workflow
4. Check the status of the store is `running` via `GET
kbn:/api/entity_store/status`
5. Check that the transform has run by querying the store index: `GET
.entities.v1.latest.security_universal*/_search`
* There should be one entity per `related.entity` found in the source
index
* The fields in the JSON string in `entities.keyword` should appear as
fields in the target documents
* There should also be a `debug` field and potentially a `historical`
field, if enough time has passed for the enrich policy to run. These are
normally hidden, but show up when in `debug mode`.
#### Phase 2
1. Wait some time (the `INTERVAL` constant) for the enrich policy to
populate the `.enrich` indices with the latest data from the store index
* Ideally, this will mean that any docs in the source index now fall
outside of `lookbackPeriod` of the transform.
* Alternatively, you can manually run the enrich policy via: `PUT
/_enrich/policy/entity_store_field_retention_universal_default_v1.0.0/_execute`.
* It's also possible to update the source docs' timestamps and
`event.ingested` to ensure they're outside the `lookbackPeriod`
2. `POST` a new doc to the source index (eg: `logs-store`)
* The new doc should either have a new, not yet observed property in
`entities.metadata`, or the same fields but with different, new values.
3. Query the store index again.
* The entity in question should now reflect the new changes _but
preserve the old data too!_
* Existing fields should have been updated to new values
* New fields should have been `recursively` merged, i.e., nested fields
should not be an issue.
* The `historical` field should show the "previous state" of the entity
doc. This is useful to confirm that a field's value is, in fact, the
"latest" value, whether that comes from a new doc that falls in the
lookback window of the transform, or from this `historical` "cache".
### Code
#### Default values:
* in
[`server/lib/entity_analytics/entity_store/entity_definition/universal.ts#L75-L76`](6686d57ce5/x-pack/solutions/security/plugins/security_solution/server/lib/entity_analytics/entity_store/entity_definitions/entity_descriptions/universal.ts (L75-L76)):
* Add the following fields to `settings`:
```ts
{ frequency: '2s', lookbackPeriod: '1m', syncDelay: '2s'}
```
* in
[`server/lib/entity_analytics/entity_store/task/constants.ts#L11-L13`](6686d57ce5/x-pack/solutions/security/plugins/security_solution/server/lib/entity_analytics/entity_store/task/constants.ts (L11-L13))
* Change the following defaults:
```ts
export const INTERVAL = '1m';
export const TIMEOUT = '30s';
```
#### Ingest pipeline
<details>
<summary>Pipeline</summary>
```js
PUT _ingest/pipeline/entities-keyword-builder
{
"description":"Serialize entities.metadata into a keyword field",
"processors":[
{
"set": {
"field": "event.ingested",
"value": "{{_ingest.timestamp}}"
}
},
{
"set": {
"field": "@timestamp",
"value": "{{_ingest.timestamp}}"
}
},
{
"script":{
"lang":"painless",
"source":"""
String jsonFromMap(Map map) {
StringBuilder json = new StringBuilder("{");
boolean first = true;
for (entry in map.entrySet()) {
if (!first) {
json.append(",");
}
first = false;
String key = entry.getKey().replace("\"", "\\\"");
Object value = entry.getValue();
json.append("\"").append(key).append("\":");
if (value instanceof String) {
String escapedValue = ((String) value).replace("\"", "\\\"").replace("=", ":");
json.append("\"").append(escapedValue).append("\"");
} else if (value instanceof Map) {
json.append(jsonFromMap((Map) value));
} else if (value instanceof List) {
json.append(jsonFromList((List) value));
} else if (value instanceof Boolean || value instanceof Number) {
json.append(value.toString());
} else {
// For other types, treat as string
String escapedValue = value.toString().replace("\"", "\\\"").replace("=", ":");
json.append("\"").append(escapedValue).append("\"");
}
}
json.append("}");
return json.toString();
}
String jsonFromList(List list) {
StringBuilder json = new StringBuilder("[");
boolean first = true;
for (item in list) {
if (!first) {
json.append(",");
}
first = false;
if (item instanceof String) {
String escapedItem = ((String) item).replace("\"", "\\\"").replace("=", ":");
json.append("\"").append(escapedItem).append("\"");
} else if (item instanceof Map) {
json.append(jsonFromMap((Map) item));
} else if (item instanceof List) {
json.append(jsonFromList((List) item));
} else if (item instanceof Boolean || item instanceof Number) {
json.append(item.toString());
} else {
// For other types, treat as string
String escapedItem = item.toString().replace("\"", "\\\"").replace("=", ":");
json.append("\"").append(escapedItem).append("\"");
}
}
json.append("]");
return json.toString();
}
def metadata = jsonFromMap(ctx['entities']['metadata']);
ctx['entities']['keyword'] = metadata;
"""
}
}
]
}
```
</details>
<details>
<summary>Index template</summary>
```js
PUT /_index_template/entity_store_index_template
{
"index_patterns":[
"logs-store"
],
"template":{
"settings":{
"index":{
"default_pipeline":"entities-keyword-builder"
}
},
"mappings":{
"properties":{
"@timestamp":{
"type":"date"
},
"message":{
"type":"text"
},
"event":{
"properties":{
"action":{
"type":"keyword"
},
"category":{
"type":"keyword"
},
"type":{
"type":"keyword"
},
"outcome":{
"type":"keyword"
},
"provider":{
"type":"keyword"
},
"ingested":{
"type": "date"
}
}
},
"related":{
"properties":{
"entity":{
"type":"keyword"
}
}
},
"entities":{
"properties":{
"metadata":{
"type":"flattened"
},
"keyword":{
"type":"keyword"
}
}
}
}
}
}
}
```
</details>
<details>
<summary>Example source docs</summary>
#### Phase 1:
```js
POST /logs-store/_doc/
{
"related":{
"entity":[
"test-id"
]
},
"entities":{
"metadata":{
"test-id":{
"okta":{
"foo": {
"baz": {
"qux": 1
}
}
},
"cloud": {
"super": 123
}
}
}
}
}
```
```js
POST /logs-store/_doc/
{
"related":{
"entity":[
"test-id"
]
},
"entities":{
"metadata":{
"test-id":{
"cloud":{
"host": "me"
}
}
}
}
}
```
#### Phase 2:
```js
POST /logs-store/_doc/
{
"related":{
"entity":[
"test-id"
]
},
"entities":{
"metadata":{
"test-id":{
"cloud":{
"host": "me",
"super": 1111111,
},
"okta":{
"foo": {
"baz": {
"qux": 99,
"hello": "world"
},
"hello": "world"
},
"hello": "world"
}
}
}
}
}
```
</details>
Closes #204116
## Summary
Fixes an o11y assistant error: when using the llama 3.2 model, the
stream gets closed in the middle and fails with an error related to
title generation.
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [oas](https://togithub.com/readmeio/oas)
([source](https://togithub.com/readmeio/oas/tree/HEAD/packages/oas)) |
dependencies | patch | [`^25.2.0` ->
`^25.2.1`](https://renovatebot.com/diffs/npm/oas/25.2.0/25.2.1) |
---
### Release Notes
<details>
<summary>readmeio/oas (oas)</summary>
###
[`v25.2.1`](https://togithub.com/readmeio/oas/compare/oas@25.2.0...oas@25.2.1)
[Compare
Source](https://togithub.com/readmeio/oas/compare/oas@25.2.0...oas@25.2.1)
</details>
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR has been generated by [Renovate
Bot](https://togithub.com/renovatebot/renovate).
<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzNy40MjUuMSIsInVwZGF0ZWRJblZlciI6IjM3LjQyNS4xIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6WyJUZWFtOlNlY3VyaXR5LVNjYWxhYmlsaXR5IiwiYmFja3BvcnQ6YWxsLW9wZW4iLCJyZWxlYXNlX25vdGU6c2tpcCJdfQ==-->
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
This PR contains the following updates:
| Package | Update | Change |
|---|---|---|
| docker.elastic.co/wolfi/chainguard-base | digest | `dd66bee` ->
`ea157dd` |
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR has been generated by [Renovate
Bot](https://togithub.com/renovatebot/renovate).
<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzNy40MjUuMSIsInVwZGF0ZWRJblZlciI6IjM3LjQyNS4xIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6WyJUZWFtOk9wZXJhdGlvbnMiLCJiYWNrcG9ydDpza2lwIiwicmVsZWFzZV9ub3RlOnNraXAiXX0=-->
Co-authored-by: elastic-renovate-prod[bot] <174716857+elastic-renovate-prod[bot]@users.noreply.github.com>
Part of https://github.com/elastic/kibana/issues/203664
## Summary
EUI added `behindText` vis colors to `euiTheme`. This replaces
`euiThemeVars` here with the new vis colors.
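A rough sketch of the replacement pattern (the hook name and the exact token key under `euiTheme.colors.vis` are assumptions):

```ts
import { useEuiTheme } from '@elastic/eui';

export const useSeverityBadgeColor = () => {
  const { euiTheme } = useEuiTheme();
  // before: euiThemeVars.euiColorVis0BehindText
  const visColors = (euiTheme.colors as unknown as { vis: Record<string, string> }).vis;
  return visColors.euiColorVisBehindText0;
};
```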
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Extends the Observability AI Assistant's evaluation framework to create
the first set of tests aimed at evaluating the performance of the
Investigation App's AI root cause analysis integration.
To execute tests, please consult the
[README](https://github.com/elastic/kibana/pull/204634/files#diff-4823a154e593051126d3d5822c88d72e89d07f41b8c07a5a69d18281c50b09adR1).
Note the prerequisites and the Kibana & Elasticsearch configuration.
Further evolution
--
This PR is the first MVP of the evaluation framework. A (somewhat light)
[meta issue](https://github.com/elastic/kibana/issues/205670) exists for
our continued work on this project, and will be added to over time.
Test data and fixture architecture
--
Logs, metrics, and traces are indexed to
[edge-rca](https://studious-disco-k66oojq.pages.github.io/edge-rca/).
Observability engineers can [create an oblt-cli
cluster](https://studious-disco-k66oojq.pages.github.io/user-guide/cluster-create-ccs/)
configured for cross cluster search against edge-rca as the remote
cluster.
When creating new testing fixtures, engineers will utilize their
oblt-cli cluster to create rules against the remote cluster data. Once
alerts are triggered in a failure scenario, the engineer can choose to
archive the alert data to utilize as a test fixture.
Test fixtures are added to the `investigate_app/scripts/load/fixtures`
directory for use in tests.
When executing tests, the fixtures are loaded into the engineer's oblt-cli
cluster, configured for cross cluster search against edge-rca. The local
alert fixture and the remote demo data are utilized together to replay
root cause analysis and execute the test evaluations.
Implementation
--
Creates a new directory `scripts`, to house scripts related to setting
up and running these tests. Here's what each directory does:
## scripts/evaluate
1. Extends the evaluation script from
`observability_ai_assistant_app/scripts/evaluation` by creating a
[custom Kibana
client](https://github.com/elastic/kibana/pull/204634/files#diff-ae05b2a20168ea08f452297fc1bd59310c69ac3ea4651da1f65cd9fa93bb8fe9R1)
with RCA specific methods. The custom client is [passed to the
Observability AI Assistant's
`runEvaluations`](https://github.com/elastic/kibana/pull/204634/files#diff-0f2d3662c01df8fbe7d1f19704fa071cbd6232fb5f732b313e8ba99012925d0bR14)
script an[d invoked instead of the default Kibana
Client](https://github.com/elastic/kibana/pull/204634/files#diff-98509a357e86ea5c5931b1b46abc72f76e5304439430358eee845f9ad57f63f1R54).
2. Defines a single, MVP test in `index.spec.ts`. This test finds a
specific alert fixture designated for that test, creates an
investigation for that alert with a specified time range, and calls the
root cause analysis API. Once the report is received back from the API,
a prompt is created for the evaluation framework with details of the
report. The evaluation framework then judges how well the root cause
analysis API performed against specified criteria (see the sketch after
this list).
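The flow of that MVP test, roughly (every identifier below is hypothetical and only mirrors the steps described above, not the actual RCA Kibana client API):

```ts
interface RcaClient {
  findAlertFixture(tag: string): Promise<{ alertId: string; start: string; end: string }>;
  createInvestigation(params: { alertId: string; start: string; end: string }): Promise<{ id: string }>;
  runRootCauseAnalysis(investigationId: string): Promise<{ report: string }>;
}

interface EvaluationClient {
  evaluate(params: { input: string; criteria: string[] }): Promise<void>;
}

export const runMvpRcaEvaluation = async (rca: RcaClient, judge: EvaluationClient) => {
  // 1. Load the archived alert fixture designated for this test.
  const fixture = await rca.findAlertFixture('rca-mvp');
  // 2. Create an investigation for that alert over the fixture's time range.
  const investigation = await rca.createInvestigation(fixture);
  // 3. Call the root cause analysis API and collect the report.
  const { report } = await rca.runRootCauseAnalysis(investigation.id);
  // 4. Hand the report to the evaluation framework to judge against criteria.
  await judge.evaluate({
    input: report,
    criteria: ['identifies the failing service', 'cites supporting evidence'],
  });
};
```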
## scripts/archive
1. Utilized when creating new test fixtures, this script easily archives
observability alert data for use as a fixture in a feature test.
## scripts/load
1. Loads created testing fixtures before running the test.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Dario Gieselaar <d.gieselaar@gmail.com>
Closes https://github.com/elastic/kibana/issues/204992
## Summary
Callout for single edit data retention (opened from data stream details
panel):
<img width="1446" alt="Screenshot 2025-01-15 at 13 29 29"
src="https://github.com/user-attachments/assets/c415e634-4b39-43d3-b1ae-8a1de55cb144"
/>
For reference, this is the callout for bulk edit data retention (exists
from before this PR):
<img width="1446" alt="Screenshot 2025-01-15 at 13 26 08"
src="https://github.com/user-attachments/assets/6d167f94-9882-4b48-b1f9-20d26e9bdea7"
/>
**How to test:**
1. Start ES and Kibana.
2. Go to Index Management -> Data streams and click on one of the data
streams.
3. Click on the "Manage" button and edit data retention.
4. Decrease the data retention period and verify that the callout
message is correct.
5. Also, verify that the callout message in the bulk edit data retention
modal is still the same.
This adds an e2e test for [the Ensemble
workflow](https://github.com/elastic/ensemble/actions/workflows/nightly.yml)
to cover the stack installation part of the OTel K8S quickstart flow.
Besides that, I've replaced the retry logic for the K8S EA and Auto
Detect flows with simple timeouts to work around the missing data issue
on the CTA pages (host details and k8s dashboard) after finishing the
onboarding flow. I've also simplified assertions on the CTA pages.
## Summary
[Internal link](https://github.com/elastic/security-team/issues/10820)
to the feature details
Set @elastic/security-threat-hunting as codeowners of the SIEM
Migrations integration tests folder.
> [!NOTE]
> This feature needs `siemMigrationsEnabled` experimental flag enabled
to work.
## Summary
This fixes an issue in Playground where the generated query used a
`multi_match`. This happened because the field now appears as a text
field, so Playground treated it as plain text and used it in a
`multi_match`.
The fix detects whether the field is declared as `semantic_text` in the
mappings API and which `model_id` it uses.
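A minimal sketch of the detection, assuming an Elasticsearch JS client (the helper itself is illustrative, not the Playground source; in `semantic_text` mappings the `inference_id` identifies the backing inference endpoint/model):

```ts
import type { Client } from '@elastic/elasticsearch';

export const getSemanticTextInfo = async (client: Client, index: string, field: string) => {
  const response = await client.indices.getFieldMapping({ index, fields: field });
  for (const indexMappings of Object.values(response)) {
    // The field mapping response nests the leaf property under `mapping`.
    const leaf = indexMappings.mappings[field]?.mapping ?? {};
    for (const def of Object.values(leaf) as Array<{ type?: string; inference_id?: string }>) {
      if (def.type === 'semantic_text') {
        return { isSemanticText: true, modelId: def.inference_id };
      }
    }
  }
  return { isSemanticText: false };
};
```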
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Extracted remaining easy backward-compatible unit test fixes that fail
with React@18 from https://github.com/elastic/kibana/pull/206411
The idea is that the tests should pass for both React@17 and React@18
This PR adds checks to verify whether the `signer_id` is present in the
file events stored in ES, which serve as the foundation for generating
endpoint insights. Previously, we relied solely on the executable path,
which caused issues when a single AV generated multiple paths.
With these changes:
* If the `signer_id` exists in the file event, it will be used for
generating insights alongside the path
* For cases where the `signer_id` is unavailable (e.g., Linux, which
lacks signers), the executable path will still be used as the only value
(see the sketch below).
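A sketch of that selection logic (the event shape and helper name are illustrative, not the actual endpoint insights code):

```ts
interface FileEventProcess {
  executable: string;
  code_signature?: { subject_name?: string };
}

export const buildInsightValue = (process: FileEventProcess): string => {
  const signerId = process.code_signature?.subject_name;
  // When a signer is present (Windows/macOS), pair it with the path so a single
  // AV producing many paths still resolves to one trusted-app entry; on Linux
  // (no signers) fall back to the path alone.
  return signerId ? `${process.executable} (signer: ${signerId})` : process.executable;
};
```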
https://github.com/user-attachments/assets/8965efef-e962-485a-b20f-d2730cffcf10
---------
Co-authored-by: Joey F. Poon <joey.poon@elastic.co>
## Summary
This moves the scss content from an initial bundle load to an async
bundle load for the dev console and index management.
For testing - make sure the mapping editor and the dev console render
correctly. It will be abundantly clear if they don't.
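For illustration, a minimal sketch of the pattern (paths and function name are made up, not the actual dev console or index management code): the stylesheet is imported from a lazily loaded module so it ships with the async chunk instead of the plugin's initial bundle.

```ts
export const loadDevConsole = async () => {
  // Loaded with the async chunk, not at plugin setup time.
  await import('./application/styles/index.scss');
  return import('./application');
};
```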
Part of: https://github.com/elastic/kibana/issues/201813
- [x] Memory Usage. Check ML entities are filtered according to the
project type.
- [x] Notifications page. Check ML entities are filtered according to
the project type.
## Summary
The Telemetry plugin now publishes the `isOptIn$` boolean Observable in
its start contract. The observable can then be used to subscribe to
changes in the global telemetry config.
In addition to that, the original `getIsOptedIn()` query function is
marked as deprecated.
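A sketch of how a consumer might use the new observable (the interface below is a local stand-in for the start contract; only `isOptIn$` and the deprecated `getIsOptedIn()` come from the description above):

```ts
import type { Observable } from 'rxjs';

interface TelemetryStart {
  isOptIn$: Observable<boolean>;
  /** @deprecated prefer subscribing to isOptIn$ */
  getIsOptedIn(): boolean;
}

export const watchTelemetryOptIn = (telemetry: TelemetryStart) => {
  // React to global telemetry config changes instead of polling getIsOptedIn().
  const subscription = telemetry.isOptIn$.subscribe((isOptedIn) => {
    console.log(`telemetry opt-in changed: ${isOptedIn}`);
  });
  return () => subscription.unsubscribe();
};
```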
### Checklist
Check that the PR satisfies the following conditions.
Reviewers should verify this PR satisfies this list as well.
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
## Summary
Include all data from the migration process in the translated rule
documents, so we are able to display the correct information in the
table, allowing us also to sort and filter by these fields.
The fields added are:
- `integration_ids` -> new field mapped in the index (from
`integration_id`), the field is set when we match a prebuilt rule too.
- `risk_score` -> new field mapped in the index, the field is set when
we match a prebuilt rule and set the default value otherwise.
- `severity` -> the field is set when we match a prebuilt rule too.
Defaults moved from the UI to the LLM graph result.
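For reference, an illustrative shape of the new fields on translated rule documents (the field names come from the list above; the surrounding interface is a sketch, not the actual index mapping):

```ts
interface TranslatedRuleMigrationFields {
  // Replaces the old singular integration_id; also set when a prebuilt rule matches.
  integration_ids: string[];
  // From the matched prebuilt rule, or the default value otherwise.
  risk_score: number;
  // From the matched prebuilt rule, or the default set by the LLM graph result.
  severity: 'low' | 'medium' | 'high' | 'critical';
}
```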
Next steps:
- Take the `risk_score` from the original rule for the custom translated
rules
- Infer `severity` from the original rule risk_score (and maybe other
parameters) for the custom translated rules
Other changes
- The RuleMigrationService has been refactored to take all dependencies
(clients, services) from the API context factory. This change makes all
dependencies always available within the Rule migration service so we
don't need to pass them as parameters in every single operation.
- The Prebuilt rule retriever now stores all the prebuilt rules data in
memory during the migration, so we can return all the prebuilt rule
information when we execute semantic searches. This was necessary to set
`rule_id`, `integration_ids`, `severity`, and `risk_score` fields
correctly.
## Screenshots

---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Since the time range of the main request in Discover is stable, we don't need to trigger a main fetch for all data when the histogram/chart is hidden or displayed, unless it's necessary to get the data (e.g. when the histogram/chart was hidden while a Discover session was being loaded).
## Summary
This improves the behavior described in
https://github.com/elastic/kibana/issues/206274 , where the loading
skeleton is shown even when the similar cases data is already in the
cache.
## Summary
In order to stop using `includeComments` to load the updated data
belonging to the comments/user actions on the case details page, we
implemented a new internal [`find user
actions`](https://github.com/elastic/kibana/pull/203455/files#diff-6b8d3c46675fe8f130e37afea148107012bb914a5f82eb277cb2448aba78de29)
API. This new API does the same as the public one plus an extra step:
fetching all the attachments by `commentId`, which gives us all updates
to previous comments, etc. The rest of the PR updates the case details
page to work with this new schema and fixes the related tests.
Closes https://github.com/elastic/kibana/issues/194290
---------
Co-authored-by: Christos Nasikas <christos.nasikas@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>