## Summary
Replace the `estimated_heap_memory_usage_bytes` property with `model_size_bytes` per the deprecation warning. I unzipped the fixture archives, replaced the property, and rezipped them.
## To test
Add the following to your `serverArgs` block in
`x-pack/test/fleet_api_integration/config.base.ts`
```
{
name: 'elasticsearch.debug',
level: 'debug',
appenders: ['default'],
},
```
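For orientation, this is roughly how the logger ends up in the FTR config, assuming `serverArgs` passes Kibana CLI flags and a `--logging.loggers` flag is used (a sketch; the actual shape of `config.base.ts` may differ):
```ts
// Hypothetical fragment: serverArgs is an array of Kibana CLI flags, so the logger
// object is serialized into the --logging.loggers flag as JSON.
const elasticsearchDebugLogger = {
  name: 'elasticsearch.debug',
  level: 'debug',
  appenders: ['default'],
};

const serverArgs: string[] = [
  // ...existing args from the base config...
  `--logging.loggers=${JSON.stringify([elasticsearchDebugLogger])}`,
];
```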
Run the EPM FTR tests e.g.
```
FLEET_PACKAGE_REGISTRY_PORT=12345 yarn test:ftr:server --config x-pack/test/fleet_api_integration/config.epm.ts
# in another terminal session
FLEET_PACKAGE_REGISTRY_PORT=12345 yarn test:ftr:runner --config x-pack/test/fleet_api_integration/config.epm.ts --grep "Assets tagging"
```
Check that the following deprecation notice does not appear in the `elasticsearch.debug` logs in your console:
```
x-pack/test/fleet_api_integration/apis/epm/bulk_get_assets.ts: Deprecated field estimated_heap_memory_usage_bytes used, expected model_size_bytes instead
```
Closes https://github.com/elastic/kibana/issues/207310
The deployment-agnostic tests were not running properly against MKI because they directly mess with system indices.
This PR fixes this by removing those parts of the streams tests, as they are already covered by the separate storage adapter tests.
It also extends the behavior of the "disable" streams API endpoint to also wipe the asset links and stream definitions for classic streams, leaving a clean state. To do this, I extended the storage adapter with a "clean" function, which deletes the index templates and all backing indices.
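For illustration, a minimal sketch of what such a `clean` function could look like on top of the Elasticsearch client (names and signature are illustrative, not the actual storage adapter interface):
```ts
import type { ElasticsearchClient } from '@kbn/core/server';

// Illustrative only: remove everything the storage adapter owns so that
// "disable" leaves a clean state behind.
async function clean(esClient: ElasticsearchClient, templateName: string, indexPattern: string) {
  // Delete all backing indices for this adapter.
  await esClient.indices.delete(
    { index: indexPattern, allow_no_indices: true },
    { ignore: [404] }
  );
  // Delete the index template so nothing recreates the indices implicitly.
  await esClient.indices.deleteIndexTemplate({ name: templateName }, { ignore: [404] });
}
```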
Refactors models to make it clearer what our data model is internally and what our API responses are. Also some small changes to make it more Elasticsearch-y:
- `isSchema` variants are now based on specific type narrowing instead of casting from `any` to the type, as the latter only gives runtime safety but does not add much in terms of type safety (see the sketch after this list)
- validation is now entirely encapsulated in the type; additional checks such as `isCompleteCondition` were removed
- the stored document puts all stream properties at the top level (currently only `ingest`, instead of `stream.ingest`)
- `condition` is renamed to `if`, and required everywhere
- `always` and `never` conditions were added
- `grok` and `dissect` processors are now similar to ES, where the
condition is a part of the processor config
- `GET /api/streams/{id}` returns `{ stream: ..., dashboards: ..., ...
}` instead of `{ ingest: ...., dashboards: ..., ... }`
- `PUT /api/streams/{id}` now requires `dashboards`, and `stream` is a
top-level property
- `PUT /api/streams/{id}/_ingest` was added to allow consumers to only
update the stream, and not its assets
- there are some legacy definitions (in `legacy.ts`) to minimize the number of changes in the UI; cleaning those up still needs to happen at some point but not in this PR
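As an illustration of the type-narrowing idea behind the `isSchema` variants, here is a minimal sketch assuming a Zod-style schema (the actual helpers in the streams packages may look different):
```ts
import { z } from 'zod';

// Hypothetical, simplified stream schema for illustration only.
const ingestStreamSchema = z.object({
  name: z.string(),
  ingest: z.object({
    processing: z.array(z.unknown()),
    routing: z.array(z.object({ destination: z.string(), if: z.unknown() })),
  }),
});

type IngestStream = z.infer<typeof ingestStreamSchema>;

// Narrow from `unknown` to the concrete type instead of asserting from `any`,
// so callers get compile-time guarantees after the check, not just runtime validation.
function isIngestStream(value: unknown): value is IngestStream {
  return ingestStreamSchema.safeParse(value).success;
}
```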
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Closes https://github.com/elastic/kibana/issues/207733
Addresses build failures like
https://buildkite.com/elastic/appex-qa-serverless-kibana-ftr-tests/builds/4033
by increasing the timeout from 2 min to 5 min
This is the test that was failing

    1) Serverless Observability - Deployment-agnostic API integration tests
       observability AI Assistant
       /internal/observability_ai_assistant/kb/status
       "before each" hook for "returns correct status after knowledge base is setup":

       Error: retry.try reached timeout 120000 ms
       Error: expected false to equal true
         at Assertion.assert (expect.js:100:11)
         at Assertion.apply (expect.js:227:8)
         at Assertion.be (expect.js:69:22)
         at helpers.ts:64:31
         at processTicksAndRejections (node:internal/process/task_queues:95:5)
         at runAttempt (retry_for_success.ts:30:15)
         at retryForSuccess (retry_for_success.ts:103:21)
         at RetryService.try (retry.ts:52:12)
         at waitForKnowledgeBaseReady (helpers.ts:58:3)
         at Context.<anonymous> (knowledge_base_status.spec.ts:31:7)
         at Object.apply (wrap_function.js:74:16)
         at onFailure (retry_for_success.ts:18:9)
         at retryForSuccess (retry_for_success.ts:86:7)
         at RetryService.try (retry.ts:52:12)
         at waitForKnowledgeBaseReady (helpers.ts:58:3)
         at Context.<anonymous> (knowledge_base_status.spec.ts:31:7)
         at Object.apply (wrap_function.js:74:16)
## Summary
This PR is a follow-up to https://github.com/elastic/kibana/pull/203503.
It adds a test to make sure that the sub-feature description remains accurate, and changes to hide the connector test tab and the create connector button when a user only has read access.
### Checklist
- [ ] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
### To verify
1. Create a new read only role and disable EDR connectors under the
Actions and Connectors privilege
2. Create a new user and assign that role to the user
3. Create a SentinelOne connector (it doesn't need to work, you can use fake values for the URL and token)
4. Log in as the new user and go to the Connectors page in Stack Management
5. Verify that the "Create connector" button is not visible
6. Click on the connector you created, verify that you can't see the
test tab
## Summary
This PR modifies the privilege-checking behavior during rule execution,
restricting the indices against which we verify `read` access to only
those that exist.
### Outstanding questions
- [x] Are there any backwards-compatibility/semver concerns with
changing this behavior?
* We discussed in which situations a user might reasonably be using the
existing behavior, and determined those to be borderline. If we end up
receiving feedback to the contrary, we can add back the old behavior as
configuration.
- [x] Is the `IndexPatternsFetcher` an appropriate implementation to use
for the existence checking?
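On the `IndexPatternsFetcher` question: as a rough illustration of the existence filtering (shown here directly against the resolve index API rather than the fetcher, so treat names and behavior as an approximation):
```ts
import type { ElasticsearchClient } from '@kbn/core/server';

// Illustrative: resolve the rule's patterns to concrete indices, aliases, and data
// streams that actually exist, and only check `read` privileges against those.
async function getExistingIndices(esClient: ElasticsearchClient, patterns: string[]): Promise<string[]> {
  const resolved = await esClient.indices.resolveIndex({
    name: patterns,
    expand_wildcards: 'open',
  });
  // Note: concrete (non-wildcard) names that don't exist may need extra handling.
  return [
    ...resolved.indices.map((index) => index.name),
    ...resolved.aliases.map((alias) => alias.name),
    ...resolved.data_streams.map((dataStream) => dataStream.name),
  ];
}
```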
### Steps to Review
1. Create a rule with a pattern including a non-existent index, e.g.
`auditbeat-*,does-not-exist`
2. Enable the rule, and observe no warning about e.g. missing read
privileges for `does-not-exist`
3. (optional) Remove read access to `auditbeat-*`, or extend the pattern
to include an existing index that the rule author cannot read
4. (optional) Observe a warning for the non-readable index
### Checklist
Delete any items that are not applicable to this PR.
- [ ]
[Documentation](https://www.elastic.co/guide/en/kibana/master/development-documentation.html)
was added for features that require explanation or tutorials
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
### For maintainers
- [ ] This was checked for breaking API changes and was [labeled
appropriately](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)
---------
Co-authored-by: Yara Tercero <yctercero@users.noreply.github.com>
Unifies the various `LibraryTransforms` interfaces, updates all by reference capable embeddables to use them in the same way, and migrates the clone functionality to use only serialized state.
## Summary
Closes https://github.com/elastic/kibana/issues/206664
This PR moves Profiling Cypress tests to be run on the main pipeline
instead of the unsupported one.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## 📓 Summary
Closes https://github.com/elastic/streams-program/issues/68
This work updates the way a simulation for processing is performed, running it against the `_ingest/_simulate` API.
This gives less specific feedback on the simulation failure (which
processor failed), but allows for a much more realistic simulation
against the index configuration.
This work also adds integration testing for this API.
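For reference, a rough sketch of the kind of request the simulation now makes (issued via a raw transport call here because, as noted in the reviewer notes below, the elasticsearch-js typings for this API are still incomplete; the body shape follows the ES simulate ingest API):
```ts
import type { ElasticsearchClient } from '@kbn/core/server';

// Illustrative: ask ES how the sample documents would be ingested into the target
// stream, using its real pipelines and mappings, without actually indexing anything.
async function simulateIngestion(
  esClient: ElasticsearchClient,
  index: string,
  docs: Array<Record<string, unknown>>
) {
  return esClient.transport.request({
    method: 'POST',
    path: '/_ingest/_simulate',
    body: {
      docs: docs.map((doc) => ({ _index: index, _source: doc })),
    },
  });
}
```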
## 📔 Reviewer notes
The API is poorly typed due to missing typings in the elasticsearch-js library. #204175 updates the library with those typings; as soon as it's merged I'll update the API.
## 🎥 Recordings
https://github.com/user-attachments/assets/36ce0d3c-b7de-44d2-bdc2-84ff67fb4b25
Part of https://github.com/elastic/kibana/issues/203716
## Summary
This PR creates the tests for LogsDB in the Snapshot and Restore plugin
* Verify that users can create snapshots from a LogsDB index
- Explanation: It should be possible to create a snapshot of an index with LogsDB mode from a regular repository. This test creates a repository and an index with LogsDB mode, creates a policy, runs the policy and verifies that the state of the snapshot is `Complete` and that it contains the LogsDB index.
* Verify that users can restore a LogsDB snapshot.
- Explanation: It should be possible to restore a snapshot of an index with LogsDB mode from a regular repository. This test takes the snapshot created in the previous test and restores it. It verifies that the snapshot has been restored and that the status is `Complete`.
* Verify that users can NOT create a source-only snapshot from a LogsDB index [Snapshot result would be "Partial"].
- Explanation: ES doesn't allow creating a snapshot in a source-only repository for an index with synthetic source. Under the hood, LogsDB uses synthetic source (there is no `_source`). So it is expected that, when creating a snapshot that includes a LogsDB index, the result will be partial since the snapshot of the LogsDB index can't be created. To test that, the test creates a source-only repository and an index with LogsDB mode, creates a policy, runs the policy and verifies that the state of the snapshot is `Partial`.
* Verify that users can NOT restore a source-only snapshot from a LogsDB index.
- Explanation: Since running the policy in the previous test didn't create the snapshot for the LogsDB index, the snapshot for that index can't be restored. To verify that, the test tries to restore the snapshot from the previous step and waits for the following error: `index [sourceonly-logsdb-index] wasn't fully snapshotted - cannot restore`
---------
Co-authored-by: Matthew Kime <matt@mattki.me>
Closes https://github.com/elastic/kibana/issues/206826
A bug in the access check meant that when updating the current user's
public user instruction, we'd instead retrieve the public user
instruction from any user - and then overwrite it.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Resolves https://github.com/elastic/response-ops-team/issues/251
## Note
This PR includes some saved object schema changes that I will pull out
into their own separate PR in order to perform an intermediate release.
I wanted to make sure all the schema changes made sense in the overall
context of the PR before opening those separate PRs.
Update: PR for intermediate release here:
https://github.com/elastic/kibana/pull/203184 (Merged)
## Summary
Adds the ability to run actions for backfill rule runs.
- Updates the schedule backfill API to accept a `run_actions` parameter to specify whether to run actions for the backfill.
- Schedule API accepts any action where `frequency.notifyWhen ===
'onActiveAlert'`. If a rule has multiple actions where some are
`onActiveAlert` and some are `onThrottleInterval`, the invalid actions
will be stripped and a warning returned in the schedule response but
valid actions will be scheduled.
- Connector IDs are extracted and stored as references in the ad hoc run params saved object (see the sketch after this list)
- Any actions that result from a backfill task run are scheduled as low
priority tasks
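Roughly, the reference extraction follows the usual alerting pattern of replacing connector IDs with named saved object references (a sketch with illustrative names, not the actual helpers from the PR):
```ts
interface BackfillAction {
  id: string; // connector id
  group: string;
  params: Record<string, unknown>;
}

// Illustrative: store each connector id as an `action_<i>` reference on the ad hoc
// run params saved object and keep only the reference name on the action itself.
function extractConnectorReferences(actions: BackfillAction[]) {
  const references = actions.map((action, index) => ({
    name: `action_${index}`,
    type: 'action',
    id: action.id,
  }));
  const actionsWithRefs = actions.map((action, index) => ({
    group: action.group,
    params: action.params,
    actionRef: `action_${index}`,
  }));
  return { references, actionsWithRefs };
}
```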
## To Verify
1. Create a detection rule. Make sure you have some past data that the
rule can run over in order to generate actions. Make sure you add
actions to the rule. For testing, I added some conditional actions so I
could see actions running only on backfill runs using
`kibana.alert.rule.execution.type: "manual"`. Create actions with and
without summaries.
2. Schedule a backfill either directly via the API or using the
detection UI. Verify that actions are run for the backfill runs that
generate alerts.
---------
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
**Fixes: https://github.com/elastic/kibana/issues/202715**
**Fixes: https://github.com/elastic/kibana/issues/204714**
## Summary
This PR makes an inconsistent/wrong rule look-back duration prominent to the user. It falls back to a default value of 1 minute in the rule upgrade workflow.
## Details
### Negative/wrong `lookback` problem
There is a difference between the rule schedule values stored in a saved object and the values presented to users:
- The saved object (and the rule management API) has `interval`, `from` and `to` fields representing the rule schedule. `interval` shows how often a rule runs in the task runner. `from` and `to`, stored in date math format like `now-10m`, represent the date-time range used to fetch source events. Task Manager strives to run rules exactly every `interval`, but that's not always possible due to factors like system load and various delays. To avoid gaps, the `from` point in time usually stands earlier than the current time minus `interval`; for example, if `interval` is `10 minutes` and `from` is `now-12m`, the rule will analyze events starting from 12 minutes ago. `to` represents the latest point in time for which source events will be analyzed.
- The diffable rule and the UI represent the rule schedule as `interval` and `lookback`, where `interval` is the same as above and `lookback` is a time duration before the current time minus `interval`. For example, if `interval` is `10 minutes` and `lookback` is `2 minutes`, the rule will analyze events starting from 12 minutes ago until the current moment in time.
Literally, `interval`, `from` and `to` mean a rule runs every `interval` and analyzes events starting from `from` until `to`. Technically, `from` and `to` may not have any correlation with `interval`; for example, a rule may analyze one-year-old events. While that's reasonable for manual rule runs and gap remediation, the same approach doesn't work well for the usual rule schedule. The transformation between `interval`/`from`/`to` and `interval`/`lookback` works only when `to` equals the current moment in time, i.e. `now`.
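In code terms, the conversion is only well defined when `to` is `now` (a sketch using minutes for readability; the real implementation parses date math strings):
```ts
// Illustrative conversion: from = now - (interval + lookback), so
// lookback = fromOffset - interval, and it only makes sense when `to` is `now`.
function toLookbackMinutes(intervalMinutes: number, fromOffsetMinutes: number, to: string): number | undefined {
  if (to !== 'now') {
    return undefined; // no faithful `lookback` representation exists
  }
  // A negative result indicates an inconsistent schedule (see the Okta example below).
  return fromOffsetMinutes - intervalMinutes;
}

toLookbackMinutes(10, 12, 'now'); // 2   -> interval 10m, from now-12m, lookback 2m
toLookbackMinutes(60, 30, 'now'); // -30 -> interval 60m, from now-30m, negative lookback
```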
The rule management APIs allow setting any `from` and `to` values, resulting in an inconsistent rule schedule. The transformed `interval`/`lookback` value then doesn't represent the real time range used to fetch source events for analysis. On top of that, a negative `lookback` value may puzzle users about the meaning of the negative sign.
### Prebuilt rules with `interval`/`from`/`to` resulting in negative
`lookback`
Some prebuilt rules have `interval`, `from` and `to` field values such that a negative `lookback` is expected, for example `Multiple Okta Sessions Detected for a Single User`. It runs every `60 minutes` but has `from` set to `now-30m` and `to` equal to `now`. In the end we get `lookback` = `to` - `from` - `interval` = `30 minutes` - `60 minutes` = `-30 minutes`.
Our UI doesn't handle negative `lookback` values. It simply discards the negative sign and substitutes the rest for editing. In the case above, `30 minutes` will be suggested for editing, and saving the form will result in changing `from` to `now-90m`.
<img width="1712" alt="image"
src="https://github.com/user-attachments/assets/05519743-9562-4874-8a73-5596eeccacf2"
/>
### Changes in this PR
This PR mitigates rule schedule inconsistencies caused by `to` fields not using the current point in time, i.e. `now`. The following was done:
- `DiffableRule`'s `rule_schedule` was changed to have `interval`, `from` and `to` fields instead of `interval` and `lookback`
- the `_perform` rule upgrade API endpoint was adapted to the new `DiffableRule`'s `rule_schedule`
- Rule upgrade flyout calculates and shows `interval` and `lookback` in the Diff View, readonly view and field form when `lookback` is non-negative and `to` equals `now`
- Rule upgrade flyout shows `interval`, `from` and `to` in the Diff View, readonly view and field form when `to` isn't equal to `now` or the calculated `lookback` is negative
- Rule upgrade flyout shows a warning when `to` isn't equal to `now` or the calculated `lookback` is negative
- Rule upgrade flyout's JSON Diff shows `interval` and `lookback` when `lookback` is non-negative and `to` equals `now`, and shows `interval`, `from` and `to` in any other case
- Rule details page shows `interval`, `from` and `to` in the Diff View, readonly view and field form when `to` isn't equal to `now` or the calculated `lookback` is negative
- `maxValue` was added to `ScheduleItemField` to provide the ability to restrict input to reasonable values
## Screenshots
- Rule upgrade workflow (negative look-back)
<img width="2558" alt="Screenshot 2025-01-02 at 13 16 59"
src="https://github.com/user-attachments/assets/b8bf727f-11ca-424f-892b-b024ba7f847a"
/>
<img width="2553" alt="Screenshot 2025-01-02 at 13 17 20"
src="https://github.com/user-attachments/assets/9f751ea4-0ce0-4a23-a3b7-0a16494d957e"
/>
<img width="2558" alt="Screenshot 2025-01-02 at 13 18 24"
src="https://github.com/user-attachments/assets/6908ab02-4011-4a6e-85ce-e60d5eac7993"
/>
- Rule upgrade workflow (positive look-back)
<img width="2555" alt="Screenshot 2025-01-02 at 13 19 12"
src="https://github.com/user-attachments/assets/06208210-c6cd-4842-8aef-6ade5d13bd36"
/>
<img width="2558" alt="Screenshot 2025-01-02 at 13 25 31"
src="https://github.com/user-attachments/assets/aed38bb0-ccfb-479a-bb3b-e5442c518e63"
/>
- JSON view
<img width="2559" alt="Screenshot 2025-01-02 at 13 31 37"
src="https://github.com/user-attachments/assets/07575a81-676f-418e-8b98-48eefe11ab00"
/>
- Rule details page
<img width="2555" alt="Screenshot 2025-01-02 at 13 13 16"
src="https://github.com/user-attachments/assets/e977b752-9d50-4049-917a-af2e8e3f0dfe"
/>
<img width="2558" alt="Screenshot 2025-01-02 at 13 14 10"
src="https://github.com/user-attachments/assets/06d6f477-5730-48ca-a240-b5e7592bf173"
/>
## How to test?
- Ensure the `prebuiltRulesCustomizationEnabled` feature flag is enabled
- Allow internal APIs by adding `server.restrictInternalApis: false` to `kibana.dev.yml`
- Clear Elasticsearch data
- Run Elasticsearch and Kibana locally (do not open Kibana in a web
browser)
- Install an outdated version of the `security_detection_engine` Fleet
package
```bash
curl -X POST --user elastic:changeme -H 'Content-Type: application/json' -H 'kbn-xsrf: 123' -H "elastic-api-version: 2023-10-31" -d '{"force":true}' http://localhost:5601/kbn/api/fleet/epm/packages/security_detection_engine/8.14.1
```
- Install prebuilt rules
```bash
curl -X POST --user elastic:changeme -H 'Content-Type: application/json' -H 'kbn-xsrf: 123' -H "elastic-api-version: 1" -d '{"mode":"ALL_RULES"}' http://localhost:5601/kbn/internal/detection_engine/prebuilt_rules/installation/_perform
```
- Set "inconsistent" rule schedule for `Suspicious File Creation via
Kworker` rule by running a query below
```bash
curl -X PATCH --user elastic:changeme -H "Content-Type: application/json" -H "elastic-api-version: 2023-10-31" -H "kbn-xsrf: 123" -d '{"rule_id":"ae343298-97bc-47bc-9ea2-5f2ad831c16e","interval":"10m","from":"now-5m","to":"now-2m"}' http://localhost:5601/kbn/api/detection_engine/rules
```
- Open rule upgrade flyout for `Suspicious File Creation via Kworker`
rule
---------
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
Epic: https://github.com/elastic/security-team/issues/7998
In this PR we're breaking out the `timeline` and `notes` features into their own feature privilege definitions. Previously, access to both features was granted implicitly through the `siem` feature. However, we found that this level of access control is not sufficient for all clients who want a more fine-grained way to grant access to parts of the security solution.
In order to break out `timeline` and `notes` from `siem`, we had to deprecate the `siem` feature privilege definition. That is why you'll find plenty of changes from `siem` to `siemV2` in this PR. We're making use of
the feature privilege's `replacedBy` functionality, allowing for a
seamless migration of deprecated roles.
This means that roles that previously granted `siem.all` are now granted
`siemV2.all`, `timeline.all` and `notes.all` (same for `*.read`).
Existing users are not impacted and should all still have the correct
access. We added tests to make sure this is working as expected.
Alongside the `ui` privileges, this PR also adds dedicated API tags.
Those tags have been added to the new and previous versions of the
privilege definitions to allow for a clean migration:
```mermaid
flowchart LR
subgraph v1
A(siem) --> Y(all)
A --> X(read)
Y -->|api| W(timeline_write / timeline_read / notes_read / notes_write)
X -->|api| V(timeline_read /notes_read)
end
subgraph v2
A-->|replacedBy| C[siemV2]
A-->|replacedBy| E[timeline]
A-->|replacedBy| G[notes]
E --> L(all)
E --> M(read)
L -->|api| N(timeline_write / timeline_read)
M -->|api| P(timeline_read)
G --> Q(all)
G --> I(read)
Q -->|api| R(notes_write / notes_read)
I -->|api| S(notes_read)
end
```
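In other words, the deprecated `siem` privileges map onto their replacements roughly like this (a simplified data sketch of the diagram above, not the literal feature registration code, whose exact `replacedBy` shape may differ):
```ts
// Simplified sketch of the deprecation mapping from `siem` to its replacement features.
const siemPrivilegeReplacements = {
  all: [
    { feature: 'siemV2', privileges: ['all'] },
    { feature: 'timeline', privileges: ['all'] },
    { feature: 'notes', privileges: ['all'] },
  ],
  read: [
    { feature: 'siemV2', privileges: ['read'] },
    { feature: 'timeline', privileges: ['read'] },
    { feature: 'notes', privileges: ['read'] },
  ],
};
```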
### Visual changes
#### Hidden/disabled elements
Most of the changes are happening under the hood and only become apparent when a user has a role with `timeline.none` or `notes.none`. This hides and/or disables elements that would usually allow them to interact with either timeline or the notes feature (within timeline or the event flyout currently).
As an example, this is how the hover actions look for a user with and
without timeline access:
| With timeline access | Without timeline access |
| --- | --- |
| <img width="616" alt="Screenshot 2024-12-18 at 17 22 49"
src="https://github.com/user-attachments/assets/a767fbb5-49c8-422a-817e-23e7fe1f0042"
/> | <img width="724" alt="Screenshot 2024-12-18 at 17 23 29"
src="https://github.com/user-attachments/assets/3490306a-d1c3-41aa-af5b-05a1dd804b47"
/> |
#### Roles
Another visible change of this PR is the addition of `Timeline` and
`Notes` in the edit-role screen:
| Before | After |
| ------- | ------ |
| <img width="746" alt="Screenshot 2024-12-12 at 16 31 43"
src="https://github.com/user-attachments/assets/20a80dd4-c214-48a5-8c6e-3dc19c0cbc43"
/> | <img width="738" alt="Screenshot 2024-12-12 at 16 32 53"
src="https://github.com/user-attachments/assets/afb1eab4-1729-4c4e-9f51-fddabc32b1dd"
/> |
We made sure that for migrated roles that had `security.all` selected, this screen correctly shows `security.all`, `timeline.all` and `notes.all` after the privilege migration.
#### Timeline toast
There are tons of places in security solution where `Investigate / Add
to timeline` are shown. We did our best to disable all of these actions
but there is no guarantee that this PR catches all the places where we
link to timeline (actions). One layer of extra protection is that the
API endpoints don't give access to timelines to users without the
correct privileges. Another one is a Redux middleware that makes sure
timelines cannot be shown in missed cases. The following toast will be
shown instead of the timeline:
<img width="354" alt="Screenshot 2024-12-19 at 10 34 23"
src="https://github.com/user-attachments/assets/1304005e-2753-4268-b6e7-bd7e22d8a1e3"
/>
### Changes to predefined security roles
All predefined security roles have been updated to grant the new
privileges (in ESS and serverless). In accordance with the migration,
all roles with `siem.all` have been assigned `siemV2.all`,
`timeline.all` and `notes.all` (and `*.read` respectively).
### Checklist
Check the PR satisfies following conditions.
Reviewers should verify this PR satisfies this list as well.
- [x] Any text added follows [EUI's writing
guidelines](https://elastic.github.io/eui/#/guidelines/writing), uses
sentence case text and includes [i18n
support](https://github.com/elastic/kibana/blob/main/packages/kbn-i18n/README.md)
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
- [x] This was checked for breaking HTTP API changes, and any breaking
changes have been approved by the breaking-change committee. The
`release_note:breaking` label should be applied in these situations.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: PhilippeOberti <philippe.oberti@elastic.co>
Co-authored-by: Steph Milovic <stephanie.milovic@elastic.co>
## Summary
Added `createdBy` and `updatedBy` fields to summary documents.
This makes it easier to identify which user added an SLO and which user last updated it. It's especially helpful when there are hundreds of SLOs defined.
<img width="1728" alt="image"
src="https://github.com/user-attachments/assets/ee7bb4d4-a8ea-40c4-8d91-06c32c9b0ba6"
/>
---------
Co-authored-by: Kevin Delemme <kdelemme@gmail.com>
Co-authored-by: Kevin Delemme <kevin.delemme@elastic.co>
## Summary
This PR improves upon the Universal entity definition and entity store
work being done to support Asset Inventory by introducing a flag
`dynamic` to the definition.
The entity store uses an enrich policy in order to retain observed data
that falls outside of a `lookbackPeriod` used by the transform that runs
the aggregations on the source fields.
Normally, we have to specify a retention strategy for each field defined in an entity definition. However, for universal entities, (some of) the fields are dynamically generated by the JSON extractor pipeline processor, which means we cannot define which strategy to use in the definition itself.
To account for this, when `dynamic` is set to `true`, we run an extra ingest pipeline step to process _any field which does not show up in the entity definition_ (i.e. has been dynamically generated). At the moment, this pipeline step uses a strategy that always picks the latest value, although in the future this might need to be configurable, mimicking the ability to choose strategies for "static" fields.
See this
[doc](https://docs.google.com/document/d/1D8xDtn3HHP65i1Y3eIButacD6ZizyjZZRJB7mxlXzQY/edit?tab=t.0#heading=h.9fz3qtlfzjg7)
for more details and [this
Figma](https://www.figma.com/board/17dpxrztlM4O120p9qMcNw/Entity-descriptions?node-id=0-1&t=JLcB84l9NxCnudAs-1)
for information regarding Entity Store architecture.
## How to test:
### Setup
1. Ensure the default Security Data View exists by navigating to some
Security solution UI.
2. Set up the `entity.keyword` builder pipeline
* Add it to an index that matches any of the default index patterns in
the security data view (eg: `logs-store`)
* Make sure an ingested doc contains both `event.ingested` and `@timestamp`.
* The easiest way is to add `set` processors to the builder pipeline.
3. Because of the async nature of the field retention process, it is
recommended to change some of the default values (explained below)
4. Enable `debugging` by adding
`xpack.securitySolution.entityAnalytics.entityStore.developer.pipelineDebugMode:
true` to your `kibana.dev.yml`
5. Enable the `assetInventoryStoreEnabled` FF:
```
xpack.securitySolution.enableExperimental:
- assetInventoryStoreEnabled
```
### Interacting with the store
In Kibana dev tools:
#### Phase 1
1. `POST` some of the example docs to the `logs-store` index
2. Confirm the `entity.keyword` field is being added by the builder
pipeline via `GET logs-store/_search`.
3. Initialise the universal entity engine via: `POST
kbn:/api/entity_store/engines/universal/init {}`
* In order to properly test field retention, it's advisable to reduce
the `lookbackPeriod` setting, which means some of the docs in the index
might fall out of the window if it takes too long to initialize the
engine. Any docs posted when the engine is running should be picked up.
* Note that using the UI does not work, as we've specifically removed
the Universal engine from the normal Entity Store workflow
4. Check the status of the store is `running` via `GET
kbn:/api/entity_store/status`
5. Check that the transform has run by querying the store index: `GET
.entities.v1.latest.security_universal*/_search`
* There should be one entity per `related.entity` found in the source
index
* The fields in the JSON string in `entities.keyword` should appear as
fields in the target documents
* There should also be a `debug` field and potentially a `historical`
field, if enough time has passed for the enrich policy to run. These are
normally hidden, but show up when in `debug mode`.
#### Phase 2
1. Wait some time (the `INTERVAL` constant) for the enrich policy to
populate the `.enrich` indices with the latest data from the store index
* Ideally, this will mean that any docs in the source index now fall
outside of `lookbackPeriod` of the transform.
* Alternatively, you can manually run the enrich policy via: `PUT
/_enrich/policy/entity_store_field_retention_universal_default_v1.0.0/_execute`.
* It's also possible to update the source docs' timestamps and
`event.ingested` to ensure they're outside the `lookbackPeriod`
2. `POST` a new doc to the source index (e.g. `logs-store`)
* The new doc should either have a new, not yet observed property in
`entities.metadata`, or the same fields but with different, new values.
3. Query the store index again.
* The entity in question should now reflect the new changes _but
preserve the old data too!_
* Existing fields should have been updated to new values
* New fields should have been `recursively` merged. Ie, nested fields
should not be an issue.
* The `historical` field should show the "previous state" of the entity
doc. This is useful to confirm that a field's value is, in fact, the
"latest" value, whether that comes from a new doc that falls in the
lookback window of the transform, or from this `historical` "cache".
### Code
#### Default values:
* in
[`server/lib/entity_analytics/entity_store/entity_definition/universal.ts#L75-L76`](6686d57ce5/x-pack/solutions/security/plugins/security_solution/server/lib/entity_analytics/entity_store/entity_definitions/entity_descriptions/universal.ts (L75-L76)):
* Add the following fields to `settings`:
```ts
{ frequency: '2s', lookbackPeriod: '1m', syncDelay: '2s'}
```
* in
[`server/lib/entity_analytics/entity_store/task/constants.ts#L11-L13`](6686d57ce5/x-pack/solutions/security/plugins/security_solution/server/lib/entity_analytics/entity_store/task/constants.ts (L11-L13))
* Change the following defaults:
```ts
export const INTERVAL = '1m';
export const TIMEOUT = '30s';
```
#### Ingest pipeline
<details>
<summary>Pipeline</summary>
```js
PUT _ingest/pipeline/entities-keyword-builder
{
"description":"Serialize entities.metadata into a keyword field",
"processors":[
{
"set": {
"field": "event.ingested",
"value": "{{_ingest.timestamp}}"
}
},
{
"set": {
"field": "@timestamp",
"value": "{{_ingest.timestamp}}"
}
},
{
"script":{
"lang":"painless",
"source":"""
String jsonFromMap(Map map) {
StringBuilder json = new StringBuilder("{");
boolean first = true;
for (entry in map.entrySet()) {
if (!first) {
json.append(",");
}
first = false;
String key = entry.getKey().replace("\"", "\\\"");
Object value = entry.getValue();
json.append("\"").append(key).append("\":");
if (value instanceof String) {
String escapedValue = ((String) value).replace("\"", "\\\"").replace("=", ":");
json.append("\"").append(escapedValue).append("\"");
} else if (value instanceof Map) {
json.append(jsonFromMap((Map) value));
} else if (value instanceof List) {
json.append(jsonFromList((List) value));
} else if (value instanceof Boolean || value instanceof Number) {
json.append(value.toString());
} else {
// For other types, treat as string
String escapedValue = value.toString().replace("\"", "\\\"").replace("=", ":");
json.append("\"").append(escapedValue).append("\"");
}
}
json.append("}");
return json.toString();
}
String jsonFromList(List list) {
StringBuilder json = new StringBuilder("[");
boolean first = true;
for (item in list) {
if (!first) {
json.append(",");
}
first = false;
if (item instanceof String) {
String escapedItem = ((String) item).replace("\"", "\\\"").replace("=", ":");
json.append("\"").append(escapedItem).append("\"");
} else if (item instanceof Map) {
json.append(jsonFromMap((Map) item));
} else if (item instanceof List) {
json.append(jsonFromList((List) item));
} else if (item instanceof Boolean || item instanceof Number) {
json.append(item.toString());
} else {
// For other types, treat as string
String escapedItem = item.toString().replace("\"", "\\\"").replace("=", ":");
json.append("\"").append(escapedItem).append("\"");
}
}
json.append("]");
return json.toString();
}
def metadata = jsonFromMap(ctx['entities']['metadata']);
ctx['entities']['keyword'] = metadata;
"""
}
}
]
}
```
</details>
<details>
<summary>Index template</summary>
```js
PUT /_index_template/entity_store_index_template
{
"index_patterns":[
"logs-store"
],
"template":{
"settings":{
"index":{
"default_pipeline":"entities-keyword-builder"
}
},
"mappings":{
"properties":{
"@timestamp":{
"type":"date"
},
"message":{
"type":"text"
},
"event":{
"properties":{
"action":{
"type":"keyword"
},
"category":{
"type":"keyword"
},
"type":{
"type":"keyword"
},
"outcome":{
"type":"keyword"
},
"provider":{
"type":"keyword"
},
"ingested":{
"type": "date"
}
}
},
"related":{
"properties":{
"entity":{
"type":"keyword"
}
}
},
"entities":{
"properties":{
"metadata":{
"type":"flattened"
},
"keyword":{
"type":"keyword"
}
}
}
}
}
}
}
```
</details>
<details>
<summary>Example source docs</summary>
#### Phase 1:
```js
POST /logs-store/_doc/
{
"related":{
"entity":[
"test-id"
]
},
"entities":{
"metadata":{
"test-id":{
"okta":{
"foo": {
"baz": {
"qux": 1
}
}
},
"cloud": {
"super": 123
}
}
}
}
}
```
```js
POST /logs-store/_doc/
{
"related":{
"entity":[
"test-id"
]
},
"entities":{
"metadata":{
"test-id":{
"cloud":{
"host": "me"
}
}
}
}
}
```
#### Phase 2:
```js
POST /logs-store/_doc/
{
"related":{
"entity":[
"test-id"
]
},
"entities":{
"metadata":{
"test-id":{
"cloud":{
"host": "me",
"super": 1111111,
},
"okta":{
"foo": {
"baz": {
"qux": 99,
"hello": "world"
},
"hello": "world"
},
"hello": "world"
}
}
}
}
}
```
</details>
Closes #204116
## Summary
fix: o11y assistant error. When using the model (llama 3.2), the stream gets closed in the middle and fails with an error related to the title generation.
Part of: https://github.com/elastic/kibana/issues/201813
- [x] Memory Usage. Check ML entities are filtered according to the
project type.
- [x] Notifications page. Check ML entities are filtered according to
the project type.
## Summary
In order to stop using `includeComments` to load the updated data belonging to the comments/user actions on the case details page, we implemented a new internal [`find user actions`](https://github.com/elastic/kibana/pull/203455/files#diff-6b8d3c46675fe8f130e37afea148107012bb914a5f82eb277cb2448aba78de29) API. This new API does the same as the public one plus an extra step: fetching all the attachments by `commentId`, which gives us all the updates to previous comments, etc. The rest of the PR updates the case details page to work with this new schema, plus test fixes.
Closes https://github.com/elastic/kibana/issues/194290
---------
Co-authored-by: Christos Nasikas <christos.nasikas@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Part of https://github.com/elastic/kibana/issues/203716
## Summary
This PR introduces a new test case for LogsDB in both Stateful and
Serverless:
> Verify that users can override LogsDB index settings including:
ignore_above, ignore_malformed, ignore_dynamic_beyond_limit, subobjects
and timestamp format.
Modifying `subobjects` and the `timestamp` format must be done from the Mappings tab. For `ignore_above`, `ignore_malformed` and `ignore_dynamic_beyond_limit`, the configuration is done in the Settings tab.
It also introduces a test case only for Stateful
(enableMappingsSourceFieldSection [is
disabled](9c6de6aabc/config/serverless.yml (L112))
for serverless)
> Verify that users cannot disable synthetic source for a LogsDB index.
# Summary
As part of the effort to add missing content for Security APIs, this PR
introduces a few missing request, response, and parameter examples for
Detection Engine Exception APIs.
## Summary
This is to support https://github.com/elastic/synthetics/issues/978
Increase the lightweight monitors project page size. The size of lightweight monitors is minimal, so having a small page size is more of a burden than an advantage since we do batch operations in Kibana.
### Why
Since the limit is mostly applicable to browser monitor size, for lightweight monitors we can safely do bulk operations on a large number of monitors without hitting memory or size issues.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Justin Kambic <jk@elastic.co>
## Summary
Skip asset criticality integration test on MKI
---------
Co-authored-by: Jared Burgett <147995946+jaredburgettelastic@users.noreply.github.com>