This PR adds a `/diff` endpoint that, given two time ranges, returns
which assets exist only in one of the two time ranges and which assets
exist in both.
### How to test
Start up a local ES and Kibana instance and run these commands to set up
the test data:
```curl
curl -X POST http://localhost:5601/ftw/api/asset-manager/assets/sample \
-u 'elastic:changeme' \
-H 'kbn-xsrf: xxx' \
-H 'Content-Type: application/json' \
-d '{"baseDateTime":"2022-02-07T00:00:00.000Z", "excludeEans": ["k8s.pod:pod-200wwc3","k8s.pod:pod-200naq4","k8s.pod:pod-200ohr5","k8s.pod:pod-200yyx6","k8s.pod:pod-200psd7","k8s.pod:pod-200wmc8","k8s.pod:pod-200ugg9"]}'
```
```curl
curl -X POST http://localhost:5601/ftw/api/asset-manager/assets/sample \
-u 'elastic:changeme' \
-H 'kbn-xsrf: xxx' \
-H 'Content-Type: application/json' \
-d '{"baseDateTime":"2022-02-07T01:30:00.000Z", "excludeEans": ["k8s.pod:pod-200wwc3","k8s.pod:pod-200naq4", "k8s.pod:pod-200xrg1","k8s.pod:pod-200dfp2"]}'
```
```curl
curl -X POST http://localhost:5601/ftw/api/asset-manager/assets/sample \
-u 'elastic:changeme' \
-H 'kbn-xsrf: xxx' \
-H 'Content-Type: application/json' \
-d '{"baseDateTime":"2022-02-07T03:00:00.000Z", "excludeEans": ["k8s.cluster:cluster-001","k8s.cluster:cluster-002","k8s.node:node-101","k8s.node:node-102","k8s.node:node-103","k8s.pod:pod-200xrg1","k8s.pod:pod-200dfp2"]}'
```
From there you can test based on the requests described in the
[documentation](063b730c7a/x-pack/plugins/asset_manager/docs/index.md (get-assetsdiff)).
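For reference, a request against the new endpoint might look like the following (the query parameter names here are assumptions for illustration; the linked documentation is authoritative):
```curl
# Hypothetical example: compare two one-hour windows (parameter names assumed)
curl -X GET "http://localhost:5601/ftw/api/asset-manager/assets/diff?aFrom=2022-02-07T00:00:00.000Z&aTo=2022-02-07T01:00:00.000Z&bFrom=2022-02-07T03:00:00.000Z&bTo=2022-02-07T04:00:00.000Z" \
 -u 'elastic:changeme' \
 -H 'kbn-xsrf: xxx'
```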
Closes #153489
## Summary
Relates to https://github.com/elastic/kibana/issues/142655
Resolves https://github.com/elastic/kibana/issues/142653
All monitor schedules in Uptime Monitor Management/Synthetics app apart
from the [supported
schedules](https://github.com/elastic/kibana/pull/154010/files#diff-6e5ef49468e646b5569e213b03876de143291ca3870a7092974793837f1ddc61R33)
have been deprecated.
The only allowed schedules are shown below:
<img width="1241" alt="Screen Shot 2023-04-02 at 10 28 20 PM"
src="https://user-images.githubusercontent.com/11356435/229397972-fe2fcaa2-d3c7-450b-9b40-f8c71e6c7dcf.png">
Adds a migration to transform unsupported schedules from Uptime Monitor
Management to supported Synthetics app schedules (see the sketch below).
Also adds validation for when an invalid schedule is used.
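The core idea of the migration can be sketched like this (the supported values and the function name are assumptions for the example, not the actual implementation; see the linked diff for the real list):
```ts
// Hypothetical sketch: snap an unsupported Uptime schedule (in minutes) to
// the closest supported Synthetics schedule. The list below is assumed.
const SUPPORTED_SCHEDULES_MINUTES = [1, 3, 5, 10, 15, 30, 60, 120, 240];

function toSupportedSchedule(minutes: number): number {
  return SUPPORTED_SCHEDULES_MINUTES.reduce((closest, candidate) =>
    Math.abs(candidate - minutes) < Math.abs(closest - minutes) ? candidate : closest
  );
}

// e.g. with these assumed values: toSupportedSchedule(8) === 10,
// toSupportedSchedule(16) === 15
```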
Also removes zip url fields from monitors. These fields were originally
included in the saved object spec anticipating a future zip url feature.
That feature has now been replaced by project monitors, removing the
need for zip url fields.
## Testing
⚠️ Note ⚠️
--
It's suggested that you use a fresh instance of ES to test this PR. This
can be done either by creating a brand new oblt cluster via oblt-cli or
by running `yarn es snapshot`. If you run this PR on an existing
oblt cluster and then switch back to `main` on that same cluster, you'll
break the cluster.
Instructions
--
1. Check out 8.7.0
2. Create Uptime monitors with invalid schedules. Ideally, create one of
each monitor type. Some example invalid schedules are 4, 8, 11, and 16.
3. Create at least one of each type of project monitor by pushing
monitors via the synthetics agent
4. Check out this branch
5. Navigate to Synthetics or Uptime once Kibana is done loading. Observe
that each one of the invalid schedules was transformed into a supported
schedule.
6. (Testing that decryption is still working after migration.) Navigate
to each one of the UI monitors' edit pages. Click save to resave each
monitor. Then, visit the edit page again. If you don't see any page-level
errors, decryption is still working successfully.
7. (Testing that decryption is still working after migration for project
monitors.) Change the global schedule of your project monitors and
repush. Change the global schedule one more time and repush again. If
both pushes are successful, decryption is still working after the
migration.
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Setting `xpack.alerting.enableFrameworkAlerts` to true by default. This
causes alerts-as-data resource installation to be handled by the
alerting plugin instead of the rule registry. We're keeping the feature
flag in case we run into issues, but eventually we'll remove it and
clean up the rule registry code that relies on it. Changing this default
early will allow us to identify issues before the 8.8 FF, where we can
revert if needed.
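For reference, while the flag still exists, reverting to the previous behavior is a one-line `kibana.yml` change:
```yaml
# Opt out of the new default and let the rule registry handle
# alerts-as-data resource installation again
xpack.alerting.enableFrameworkAlerts: false
```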
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Closes #153202, closes #153850
## Summary
This PR adds an alert start annotation and also uses a custom time range
for the alert details charts depending on the alert duration. The logic
to calculate the time range was added in a separate package so it can be
used in other use cases as well.

## 🧪 How to test
Create a metric threshold alert and go to the related alert details
page, verify:
- Alert start annotation
- The time range of the charts should start before the alert started
(1/8 of the alert duration is added to each side; see the sketch below)
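A minimal sketch of the padding logic (names assumed for illustration; the actual helper lives in the new package mentioned above):
```ts
// Hypothetical sketch: pad 1/8 of the alert duration on each side of the
// alert's time range; fall back to "now" for alerts that are still active.
function getPaddedAlertTimeRange(alertStart: string, alertEnd?: string) {
  const start = new Date(alertStart).getTime();
  const end = alertEnd ? new Date(alertEnd).getTime() : Date.now();
  const paddingMs = Math.round((end - start) / 8);
  return {
    from: new Date(start - paddingMs).toISOString(),
    to: new Date(end + paddingMs).toISOString(),
  };
}
```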
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
This PR updates how we start the headless browser for testing. The
current way of starting in headless mode is eventually going away, and
the new headless mode offers more capabilities and stability; see
https://www.selenium.dev/blog/2023/headless-is-going-away/ and
https://developer.chrome.com/articles/new-headless/.
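In practice, the switch boils down to the Chrome argument used when building the driver; a minimal sketch with the Selenium JS bindings (not Kibana's actual test harness setup):
```ts
import { Builder } from 'selenium-webdriver';
import { Options } from 'selenium-webdriver/chrome';

async function buildDriver() {
  // The old mode was opted into with '--headless';
  // the new mode uses '--headless=new'
  const options = new Options().addArguments('--headless=new');
  return new Builder().forBrowser('chrome').setChromeOptions(options).build();
}
```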
### Test adjustments
All the adjusted discover, dashboard, maps, and infra tests showed the
same pattern during failure investigation, which comes down to the new
headless mode being closer to the regular / non-headless mode:
* Tests passed with the old headless mode
* Tests failed in regular / non-headless mode the same way they failed
in new headless mode
* The failure reasons were mostly slightly different font rendering and
slightly different browser sizes
## Summary
- Added some tests that verify the functionality already added by
@jasonrhodes
- Made some small changes to the types and used `kbn.schema` in route
validation
- Swapped from the `term` to the `terms` filter for the `ean` filter
(see the sketch below)
- Added a check that throws a 400 if both `type` and `ean` are used at
the same time
- Updated the docs to show the new request responses
- Marked the `from` option as optional since it has a default value
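The `term` → `terms` change from the list above, sketched as query DSL (the field name is an assumption for illustration):
```ts
// `term` matches a single value only; `terms` accepts an array, which is
// what the `ean` filter needs when multiple EANs are passed.
const before = { term: { 'asset.ean': 'k8s.pod:pod-200xrg1' } };
const after = {
  terms: { 'asset.ean': ['k8s.pod:pod-200xrg1', 'k8s.pod:pod-200dfp2'] },
};
```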
Closes #153461
Closes #150907
## Summary
This PR adds the Processes tab to the single host flyout. The component
is already implemented in Inventory, so we can reuse it here.
## Testing
- Go to hosts view
- Open the flyout for any host to see the single host details
- Click on the processes tab
⚠️ If you want to see the processes summary on top (where the total
processes are displayed), you need to include the `process_summary`
metricset in your Metricbeat modules yml configuration, so your config
should include:
```
metricbeat.modules:
  - module: system
    metricsets:
      # ..... other metricsets .......
      - process          # Per process metrics
      - process_summary  # Process summary
      # ..... other metricsets .......
```
<img width="1913" alt="image"
src="https://user-images.githubusercontent.com/14139027/228534978-c38437e4-4279-4ad4-9fc8-5222cbd15c2e.png">
---------
Co-authored-by: Carlos Crespo <crespocarlos@users.noreply.github.com>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
This PR adds a new API for deleting a file within a case, given the file
id.
This API retrieves the file saved objects provided in the query and
performs an authorization check using each file's file kind. It also
retrieves all the attachments associated with the files and performs an
authorization check for each attachment. The API supports being called
with ids that only have the file saved objects and not the corresponding
attachments. For the deletion sub-privilege to work correctly, it must
have access to update the file saved objects. Therefore we also had to
give the delete sub-privilege all access to the file saved object types.
This PR does not contain the logic for deleting all files when a case is
deleted. That'll be completed in a separate PR.
Example request
```
POST /internal/cases/a58847c0-cccc-11ed-b071-4f11aa24310c/attachments/files/_bulk_delete
{
"ids": ["clfr5sdky0001n811gjot7tv5", "clfr5sgru0002n8112t54bave"]
}
```
Example response
```
204
```
Notable changes
- Refactored the delete-all-comments logic to leverage the bulk delete
API from the saved objects client
- Updated the names of the `api_integration` users and roles to avoid
clashing with the ones in `cases_api_integration`
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Fixes #154168
I'm having trouble recreating the error, but I suspect there could be an
issue with toggling the margins switch. The quick save button was still
disabled in the failing tests. Maybe the switch was already turned off
by some other test? Changing the description might be a better trigger
for unsaved changes.
[Flaky test runner x
200](https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/2066)
These tests rely on the custom logs package being an integration
package. Version 2.0.0, which was released today, is an input package,
so the index templates are not created on install.
Resolves https://github.com/elastic/kibana/issues/142874
The alerting framework now generates an alert UUID for every alert it
creates. The UUID will be reused for alerts which continue to be active
on subsequent runs, until the alert recovers. When the same alert (alert
instance id) becomes active again, a new UUID will be generated. These
UUIDs then identify a "span" of events for a single alert.
The rule registry plugin was already adding these UUIDs to its own
alerts-as-data indices, and that code has now been changed to make use
of the new UUID the alerting framework generates.
- adds a property in the rule task state,
`alertInstances[alertInstanceId].meta.uuid`; this is where the alert
UUID is persisted across runs (see the sketch below)
- adds a new `Alert` method, `getUuid(): string`, that can be used by
rule executors to obtain the UUID of the alert they just retrieved from
the factory; the rule registry uses this to get the UUID generated by
the alerting framework
- for the event log, adds the property `kibana.alert.uuid` to
`*-instance` event log events; this is the same field the rule registry
writes into the alerts-as-data indices
- various changes to tests to accommodate new UUID data / methods
- migrates the UUID previously stored with lifecycle alerts in the alert
state, via the rule registry, *into* the new `meta.uuid` field in the
existing alert state.
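A small model of the UUID lifecycle described above (illustrative only, not the actual framework code):
```ts
import { randomUUID } from 'crypto';

interface AlertMeta {
  uuid?: string;
}

// Reuse the UUID persisted in `meta.uuid` while the alert stays active
// across runs; an alert that recovered and later becomes active again has
// no stored meta, so it gets a fresh UUID.
function resolveAlertUuid(previousMeta?: AlertMeta): string {
  return previousMeta?.uuid ?? randomUUID();
}
```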
## 📓 Summary
Closes #153741
This PR fixes the time range filter by using the
`kibana.alert.time_range` field instead of `@timestamp`.
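For reference, the shape of the corrected filter (a sketch; the `gte`/`lte` values are placeholders):
```ts
// Filter on the alert's active time range instead of its creation timestamp
const alertTimeRangeFilter = {
  range: {
    'kibana.alert.time_range': { gte: 'now-15m', lte: 'now' },
  },
};
```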
## 🧪 Testing
- Navigate to Hosts View
- Create an Inventory Alert that will trigger immediately
- Refresh the search until some alerts are triggered
- Play with the relative time range (e.g. 15 min ago -> 2 min ago) to
verify alerts appear correctly
---------
Co-authored-by: Marco Antonio Ghiani <marcoantonio.ghiani@elastic.co>
## Summary
Users can remove alerts from a case by deleting the whole alert
attachment. This PR removes the case id from the alerts when deleting an
attachment of type alert. It does not remove the case info from all
alerts attached to a case when deleting a case. It also fixes a bug
where the success toaster would not show when deleting an attachment.
Related: https://github.com/elastic/kibana/issues/146864,
https://github.com/elastic/kibana/issues/140800
## Testing
1. Create a case and attach some alerts to the case.
2. Verify that the alerts table (in security or in o11y) shows the case
the alert is attached to. You can enable the cases column by pressing
"Fields", searching for "Cases", and then selecting the field.
3. Go to the case and find the alerts' user activity.
4. Press the `...` and press "Remove alert(s)"
5. Go back to the alert table and verify that the case is not shown in
the Cases column for each alert.
Please check that when you remove alert(s), attachments (ml, etc), and
comments you get a success toaster with the correct text.
### Checklist
Delete any items that are not applicable to this PR.
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
### For maintainers
- [x] This was checked for breaking API changes and was [labeled
appropriately](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)
Reverts elastic/kibana#149127
from pmuellr:
After looking into https://github.com/elastic/kibana/issues/153939 -
Preconfigured slack Web API connector does not work - we realized we
really should have added the support for the Slack Web API in a
different connector.
Lesson learned: don't design connectors where the set of parameters
differs depending on a config (or secret) value. We have code - at
least with the "test" functionality - that assumes you can create a form
of parameters based solely on the connector type, without having access
to the connector data (we could/should probably add an enhancement to do
that). So it would never be able to render the appropriate parameters.
There is also the semantic issue that if you changed the Slack type
(from webhook to web-api), via the HTTP API (prevented in the UX), you
would break all actions using that connector, since they wouldn't have
the right parameters set.
Complete guess, but there may be other lurking bits like this in the
codebase that we just haven't hit yet.
Given all that, we reluctantly decided to split the connector into two,
and revert the PR that added the new code.
RIP Slack connector improvements, hope to see your new version in a few
days! :-)
**>> Reopened to avoid unnecessary notifications for unrelated teams
<<**
Original PR with original comments:
https://github.com/elastic/kibana/pull/152097
## Summary
Added test cases:
- `endpoints.cy.ts`:
- Edit a Policy assigned to a real Endpoint and confirm that the
Endpoint returns a successful Policy response
- `artifacts.cy.ts`:
- Add a trusted application and confirm that the Endpoint returns a
successful Policy response
- Add an Event filter and confirm that the Endpoint returns a successful
Policy response
- Add a Blocklist entry and confirm that the Endpoint returns a
successful Policy response
- Add a Host Isolation exception and confirm that the Endpoint returns a
successful Policy response
To open Cypress for the new e2e test suite, first run this command:
`node scripts/build_kibana_platform_plugins`
Then use this command:
`yarn --cwd x-pack/plugins/security_solution
cypress:dw:endpoint:open-as-ci`
> **Warning**
> The `Endpoint reassignment` test group in `endpoints.cy.ts` will most
probably fail, due to this bug:
https://github.com/elastic/endpoint-dev/issues/12499 (as mentioned in
the PR for that test: https://github.com/elastic/kibana/pull/151887)
>
> So it's best to skip that one; otherwise the endpoint will freeze in
the *Out-of-date* state and you'll need to spin up the test suite again.
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
## Summary
- Addresses https://github.com/elastic/kibana/issues/153619
**Reason:**
- As predefined connectors are not saved objects, the export method was
failing to get their exported objects.
**Solution:**
- Filter out `Predefined Action` ids from the user's action ids; we
don't need to export them because they already exist in the user's
environment and won't be removed or changed.
**References**
https://www.elastic.co/guide/en/kibana/8.7/pre-configured-connectors.html
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
## Summary
This changes the compare resolution for the Metricbeat dashboard visual
compare tests in order to stop the vertical scroll bar from influencing
the results. Various OSs used in testing have different scroll bar
behaviors and widths.