## Summary
This PR follows #166460 by adding Category panels to the Form.
<img width="1807" alt="Screenshot 2023-09-27 at 3 36 16 PM"
src="2abe8cf5-5822-473f-affd-148fb7949316">
## Notes
This PR is divided into several commits, the first few being
prerequisite codemods. I recommend reviewing each commit separately, as
the codemods might obscure the actual component work.
- [e78586f - Make SettingType pre-defined to clean up
references](e78586fe44)
- This makes the `SettingType` generic optional, to clean up areas where it need not be specific (see the sketch after this list).
- [80a9988 - [codemod] Make onFieldChange and onInputChange more
distinct](80a9988516)
- The `onChange` handlers weren't very clear as you work your way up the component tree. This makes the implementation and usage easier to understand (and easier to [replace with state management](https://github.com/elastic/kibana/issues/166579)).
- [5d0beff - [fix] Fix logged errors in form
tests](5d0beff00c)
- This fixes some logged errors in the Form from `Monaco` and from some
missing `act` and `waitFor` calls.
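As a quick illustration of the first codemod, giving the generic a default lets most call sites drop the explicit type argument. A minimal sketch with illustrative types (not the actual form code):

```typescript
// Hypothetical illustration: a default type parameter for the setting type.
type SettingType = 'string' | 'number' | 'boolean' | 'json';

interface FieldDefinition<T extends SettingType = SettingType> {
  type: T;
  id: string;
  name: string;
}

// Call sites that don't care about the concrete setting type no longer need
// to spell out the generic argument.
const generic: FieldDefinition = { type: 'boolean', id: 'theme:darkMode', name: 'Dark mode' };

// Call sites that do care can still pin it down.
const specific: FieldDefinition<'number'> = { type: 'number', id: 'histogram:barTarget', name: 'Bar target' };
```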
Closes #161754, closes #166807
To make the testing and review easier, I merged the old components [cleanup PR](https://github.com/jennypavlova/kibana/pull/5) into this one.
## Summary
This PR replaces the old node details view with the asset details flyout
### Old

### New

### Testing
1. Go to inventory
2. Click on a host in the waffle map
3. Click on any **host**
- These changes apply only if a `Host` is selected; in the case of a pod the view shouldn't change:

4. Check the new flyout functionality
3557821c-7964-466e-8514-84c2f81bc2fd
Note: the selected host should have a border like in the previous version (fixed in the [last commit](ff4753aa06)), so it is added when there is a selected node:
<img width="1193" alt="image"
src="6646fe47-6333-435a-a5ec-248339402224">
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Correct the `license_uuid` field name in the Endpoint Policy. Before, it
was named `license_uid`, but the Endpoint expects `license_uuid`.
This PR is intended to be backported to `8.10.3`, which brings up an interesting problem since we already have a migration added to `main` for the `8.11` release.
After talking with the kibana-core team, my approach is to add the migration for this bug fix to this PR. Then I will backport all `modelVersions` to `8.10.3` to keep the migrations consistent. Keeping these consistent is important so that both users upgrading from `8.10.x` in ESS and the Serverless line all remain in sync. The end result is that the policies in `8.10.3` will have an extra field that is unused until `8.11.0`.
The following `8.10.3` backport for this will include the extra
migration and I will request reviews for it since it will be more than a
normal backport.
Policy:

### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
* Adds API versioning to all routes involved in new Risk Engine, public
and private
* Adds missing PLI auth headers for some routes
* Updates API invocations to specify an appropriate version header
* Does NOT add header to legacy transform-based EA routes
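For reference, here is a rough sketch (from memory, with an illustrative path, version, and schema rather than the actual Risk Engine routes) of what registering a versioned route looks like with Kibana's versioned router; callers then send the matching `elastic-api-version` header:

```typescript
import { schema } from '@kbn/config-schema';
import type { IRouter } from '@kbn/core/server';

// Illustrative only: a public route with a single explicit API version.
export const registerExampleRoute = (router: IRouter) => {
  router.versioned
    .get({ path: '/api/risk_engine/example', access: 'public' })
    .addVersion(
      {
        version: '2023-10-31',
        validate: {
          request: {
            query: schema.object({ size: schema.number({ defaultValue: 10 }) }),
          },
        },
      },
      async (context, request, response) => {
        // The handler only runs for requests carrying a matching version header.
        const { size } = request.query as { size: number };
        return response.ok({ body: { size } });
      }
    );
};
```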
### Checklist
- [x] Verify no API calls from the UI were missed
- [ ]
[Documentation](https://www.elastic.co/guide/en/kibana/master/development-documentation.html)
was added for features that require explanation or tutorials
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
## Summary
- Adds the missing `system_indices_superuser` role to the `role.yml`
file used when starting ES for serverless and using the security
solution override files for resources
## Summary
This PR adds a gated form when a user visits Workplace Search and `kibana_uis_enabled == false`. The user will not be able to access any Workplace Search routes other than the Overview page.
**Note**: Form submission and the API call will be included in the next PR.
Screen Recordings:
672a7b2e-3e5f-4fa1-8535-b5080b3a2dfc
7c8129cf-6f50-4039-9b50-b9a655361bd1
### Checklist
- [ ] Any text added follows [EUI's writing
guidelines](https://elastic.github.io/eui/#/guidelines/writing), uses
sentence case text and includes [i18n
support](https://github.com/elastic/kibana/blob/main/packages/kbn-i18n/README.md)
- [ ] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Updated EMS Styles are waiting to be put into production. They are already available in the Elastic staging environment ([preview](maps-staging.elastic.co/?manifest=testing)). This PR is a safe measure to ensure that this change does not break our CI tests.
The process has been as follows:
1. Temporarily replace the EMS Tile Service `tileApiUrl` with our staging server to force the use of the new styles and check which tests break with the slightly different basemaps, at [12481c6](12481c6ada)
2. Look for related [broken
tests](https://buildkite.com/elastic/kibana-pull-request/builds/161870)
```
Error: expected 0.030813687704837327 to be below 0.03
```
3. Adjust the threshold for the dashboard report, since the new value was slightly over the limit: [e655b84](e655b84569)
4. Wait for a green CI (this took a few days because of unrelated issues with Kibana CI)
5. Revert the `tileApiUrl` change to its original value: [c0030bc](c0030bcff1)
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
## Summary
This PR adds the serverless FTR tests that we already have in the [QA
quality
gate](https://github.com/elastic/kibana/blob/main/.buildkite/pipelines/quality-gates/pipeline.tests-qa.yaml#L18-L24)
to the staging quality gate.
### Details
We intentionally decided to run the same set of FTR tests again in staging
for starters. We're accepting the over-testing here until we have enough
confidence and experience with our serverless product stability to
decide which set of tests to run in which environment.
This PR also explicitly sets the `EC_ENV` and `EC_REGION` environment
variables for QA and Staging. It worked fine for the QA env so far without setting the environment variables because it fell back on the QAF default values. Setting these values explicitly makes it more robust.
## Summary
Adds an OpenAPI definition for `GET /api/fleet/uninstall_tokens`, which is hidden behind a feature flag for now, but is **planned to be enabled for v8.11.0**.
This should be merged with:
- https://github.com/elastic/kibana/pull/166794
## Summary
Close https://github.com/elastic/kibana/issues/167152
Log a warning instead of throwing an error in `saved_object_content_storage` when response validation fails.
We decided to do this as a precaution and as a follow up to an issue
found in saved search https://github.com/elastic/kibana/pull/166886
where storage started failing because of too strict validation.
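A minimal sketch of the idea, with hypothetical names rather than the actual storage code:

```typescript
// Demote response-validation failures to a logged warning instead of a thrown
// error, so an overly strict schema no longer breaks reads of stored content.
interface Logger {
  warn(message: string): void;
}

function validateResponseLeniently<T>(
  item: T,
  validate: (item: T) => void, // throws when the item doesn't match the schema
  logger: Logger
): T {
  try {
    validate(item);
  } catch (e) {
    // Previously this error was re-thrown and the whole storage call failed;
    // now it is only surfaced as a warning and the item is still returned.
    logger.warn(`Response validation failed: ${e instanceof Error ? e.message : String(e)}`);
  }
  return item;
}
```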
As of this PR, this change covers the following `saved_object_content_storage` types:
- `search`
- `index_pattern`
- `dashboard`
- `lens`
- `maps`
For the other types (visualization, graph, annotation), we agreed with @dej611 that instead of applying the same change, the team would look into migrating those types to also use `saved_object_content_storage`: https://github.com/elastic/kibana/issues/167421
resolves: #158403
When conflicts are detected while updating alert docs after a rule runs,
we'll try to resolve the conflict by `mget()`'ing the alert documents
again, to get the updated OCC info `_seq_no` and `_primary_term`. We'll
also get the current versions of "ad-hoc" updated fields (which caused
the conflict), like workflow status, case assignments, etc. And then
attempt to update the alert doc again, with that info, which should get
it back up-to-date.
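A simplified sketch of that retry path, with hypothetical helpers rather than the actual alerting framework code:

```typescript
// On bulk-update version conflicts: re-fetch the conflicted alert docs to pick up
// fresh OCC info (_seq_no/_primary_term) and any ad-hoc changed fields (workflow
// status, case assignments, ...), then retry the update once with those values.
interface ConflictedAlert {
  id: string;
  index: string;
}

interface RefreshedAlert extends ConflictedAlert {
  seqNo: number;
  primaryTerm: number;
  adHocFields: Record<string, unknown>; // e.g. kibana.alert.workflow_status
}

async function retryConflictedAlerts(
  conflicts: ConflictedAlert[],
  mgetAlerts: (alerts: ConflictedAlert[]) => Promise<RefreshedAlert[]>,
  bulkUpdate: (alerts: RefreshedAlert[]) => Promise<{ conflicts: number }>,
  log: (msg: string) => void
): Promise<void> {
  if (conflicts.length === 0) return;

  log(`Retrying bulk update of ${conflicts.length} conflicted alerts`);
  const refreshed = await mgetAlerts(conflicts);
  const result = await bulkUpdate(refreshed);

  if (result.conflicts > 0) {
    // Still conflicting after one retry; report it and let the next run catch up.
    log(`${result.conflicts} alerts still conflicted after retry`);
  }
}
```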
Note that the rule registry was not touched here. During this PR's development, I added the retry support to it, but then my functional tests were failing because there were never any conflicts happening. It turns out the rule registry mget's the alerts before it updates them, to get the latest values, so it won't need this fix.
It's also not clear to me if this can be exercised in serverless, since it requires the use of an alerting-framework-based AAD implementation AND the ability to ad-hoc update alerts. I think this can only be done
with Elasticsearch Query and Index Threshold, and only when used in
metrics scope, so it will show up in the metrics UX, which is where you
can add the alerts to the case.
## Manual testing
It's hard! I've seen the conflict messages before, but it's quite
difficult to get them to go off whenever you want. The basic idea is to
get a rule that uses alerting framework AAD (not rule registry, which is not affected the same way by conflicts, since it mget's alerts right before updating them), set it to run on a `1s` interval, and probably also configure TM to poll on a `1s` interval, via the following configs:
```
xpack.alerting.rules.minimumScheduleInterval.value: "1s"
xpack.task_manager.poll_interval: 1000
```
You want to get the rule to execute often and generate a lot of alerts,
and run for as long as possible. Then while it's running, add the
generated alerts to cases. Here's the EQ rule definition I used:

I selected the alerts from the o11y alerts page, since you can't add
alerts to cases from the stack page. Hmm. :-). Sort the alert list by
low-high duration, so the newest alerts will be at the top. Refresh,
select all the alerts (set the page to show 100), then add them to a case from the `...` menu. If you force a conflict, you should see something like this
in the Kibana logs:
```
[ERROR] [plugins.alerting] Error writing alerts: 168 successful, 100 conflicts, 0 errors:
[INFO ] [plugins.alerting] Retrying bulk update of 100 conflicted alerts
[INFO ] [plugins.alerting] Retried bulk update of 100 conflicted alerts succeeded
```
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Fixes #167408
This PR reduces the number of "visualization modifiers" shown in the specific case where ALL annotations in a layer are manual.
The `Ignore global filters` modifier will still be shown if at least one query-based annotation is defined in the layer.
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
---------
Co-authored-by: Stratoula Kalafateli <efstratia.kalafateli@elastic.co>
## 📓 Summary
When retrieving the CPU stats for containerized (or non-containerized) clusters, we were not considering the scenario where the user could run in a cgroup but without limits set.
These changes rewrite the conditions that determine whether we allow treating limitless containers as non-containerized, covering the case where a user runs in a cgroup and for some reason hasn't set the limit.
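A minimal sketch of the refined condition, assuming the usual cgroup CPU quota field where a missing or `-1` value means "no limit" (field and function names are illustrative):

```typescript
// Only treat a node as "containerized" for the CPU rule when a cgroup is
// reported AND a real CPU quota is set; a limitless cgroup falls back to the
// regular (non-container) CPU calculation instead of producing misleading values.
interface NodeCpuStats {
  cgroup?: {
    cpu?: {
      cfs_quota_micros?: number; // -1 or absent means no limit was configured
    };
  };
}

function hasCpuLimit(stats: NodeCpuStats): boolean {
  const quota = stats.cgroup?.cpu?.cfs_quota_micros;
  return quota !== undefined && quota > 0;
}

function shouldUseContainerCpu(stats: NodeCpuStats, containerConfigured: boolean): boolean {
  return containerConfigured && hasCpuLimit(stats);
}
```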
## Testing
> Taken from https://github.com/elastic/kibana/pull/159351 since it
reproduced the same behaviours
There are 3 main states to test:
1. No limit set but Kibana configured to use container stats.
2. Limit changed during the lookback period (to/from a real value, to/from no limit).
3. Limit set and CPU usage crossing the threshold and then falling back down to recovery.
**Note: Please also test the non-container use case for this rule to ensure it didn't get broken during this refactor.**
**1. Start Elasticsearch in a container without setting the CPU
limits:**
```
docker network create elastic
docker run --name es01 --net elastic -p 9201:9200 -e xpack.license.self_generated.type=trial -it docker.elastic.co/elasticsearch/elasticsearch:master-SNAPSHOT
```
(We're using `master-SNAPSHOT` to include a recent fix to reporting for
cgroup v2)
Make note of the generated password for the `elastic` user.
**2. Start another Elasticsearch instance to act as the monitoring
cluster**
**3. Configure Kibana to connect to the monitoring cluster and start
it**
**4. Configure Metricbeat to collect metrics from the Docker cluster and
ship them to the monitoring cluster, then start it**
Execute the below command next to the Metricbeat binary to grab the CA
certificate from the Elasticsearch cluster.
```
docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .
```
Use the `elastic` password and the CA certificate to configure the
`elasticsearch` module:
```
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts:
    - "https://localhost:9201"
  username: "elastic"
  password: "PASSWORD"
  ssl.certificate_authorities: "PATH_TO_CERT/http_ca.crt"
```
**5. Configure an alert in Kibana with a chosen threshold**
OBSERVE: The alert fires to inform you that there appears to be a misconfiguration, together with reporting the current value of the fallback metric (warning if the fallback metric is below the threshold, danger if it is above).
**6. Set limit**
First stop ES using `docker stop es01`, then set the limit using `docker
update --cpus=1 es01` and start it again using `docker start es01`.
After a brief delay you should now see the alert change to a warning
about the limits having changed during the alert lookback period and
stating that the CPU usage could not be confidently calculated.
Wait for the change event to pass out of the lookback window.
**7. Generate load on the monitored cluster**
[Slingshot](https://github.com/elastic/slingshot) is an option. After
you clone it, you need to update the `package.json` to match [this
change](8bfa8351de/package.json (L45-L46))
before running `npm install`.
Then you can modify the value for `elasticsearch` in the
`configs/hosts.json` file like this:
```
"elasticsearch": {
"node": "https://localhost:9201",
"auth": {
"username": "elastic",
"password": "PASSWORD"
},
"ssl": {
"ca": "PATH_TO_CERT/http_ca.crt",
"rejectUnauthorized": false
}
}
```
Then you can start one or more instances of Slingshot like this:
`npx ts-node bin/slingshot load --config configs/hosts.json`
**8. Observe the alert firing in the logs**
Assuming you're using a connector for server log output, you should see
a message like below once the threshold is breached:
```
`[2023-06-13T13:05:50.036+02:00][INFO ][plugins.actions.server-log] Server log: CPU usage alert is firing for node e76ce10526e2 in cluster: docker-cluster. [View node](/app/monitoring#/elasticsearch/nodes/OyDWTz1PS-aEwjqcPN2vNQ?_g=(cluster_uuid:kasJK8VyTG6xNZ2PFPAtYg))`
```
The alert should also be visible in the Stack Monitoring UI overview
page.
At this point you can stop Slingshot and confirm that the alert recovers
once CPU usage goes back down below the threshold.
**9. Stop the load and confirm that the rule recovers.**
---------
Co-authored-by: Marco Antonio Ghiani <marcoantonio.ghiani@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
This PR introduces the new color mapping feature into Lens.
The color mapping feature is introduced as a standalone sharable
component available from `@kbn/coloring`. The
[README.md](ddd216457d/packages/kbn-coloring/src/shared_components/color_mapping/README.md)
file describes the components and the logic behind it.
The Color Mapping component is also connected to Lens and is available
in the following charts:
- XY (you can specify the mappings from a breakdown dimension)
- Partition (you can specify the mappings from the main slice/group by
dimension)
- Tag cloud (you can specify the mappings from the tags dimension)
This MVP feature will be released under the Tech Preview flag.
This PR aims to prove the user experience and ease of use. UI styles, design improvements, and embellishments will be released in subsequent PRs.
The current MVP-provided palettes are just a placeholder. I'm
coordinating with @gvnmagni for a final set of palettes.
close https://github.com/elastic/kibana/issues/155037
close https://github.com/elastic/kibana/issues/6480
fix https://github.com/elastic/kibana/issues/28618
fix https://github.com/elastic/kibana/issues/96044
fix https://github.com/elastic/kibana/issues/101942
fix https://github.com/elastic/kibana/issues/112839
fix https://github.com/elastic/kibana/issues/116634
## Release note
This feature introduces the ability to change and map colors to breakdown dimensions in Lens. The feature provides an improved way to specify colors and their association with categories, by giving the user either a predefined set of color choices or a customized one that drives the user toward a correct color selection. It also provides ways to pick new colors and generate gradients.
This feature is in Tech Preview and is enabled by default on every new visualization, but can be turned off at will.

Fixes #162299, fixes #162540
## Summary
This PR unskips the custom threshold rule executor unit test and removes
the warning implementation from the BE.
## 🧪 How to test
- Create a custom threshold rule; it should work as before. (The warning implementation logic was already removed from the FE; this PR only removes the BE implementation.)
closes: https://github.com/elastic/kibana/issues/166428
## Summary
This PR removes code that is no longer needed after replacing the Node
Details View for Host with the Asset Details.
### TSVB
The TSVB files were apparently only used to display charts in the Node Details view. Since the Asset Details view uses Lens to power the charts, the corresponding `host` TSVB formulas and configs are no longer needed. Therefore, `host*`, `hostK8s*`, and `hostDocker*` (the latter appears to have never been used) have been removed. Additionally, `aws*` from `required_metrics` was also removed because it was host-specific.
### FE Components
The main change is in the `useMetadata` hook. I have changed the hook signature, making `requiredMetrics` optional. This parameter is used to apply additional filtering and is only needed for asset types that the old Node Details page supports. Not passing it is not expected to break other places that depend on this hook.
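Roughly, the change to the signature looks like this (simplified, with illustrative types):

```typescript
type InventoryMetric = string; // stand-in for the real metric union type

interface UseMetadataOptions {
  assetId: string;
  assetType: string;
  timeRange: { from: number; to: number };
  requiredMetrics?: InventoryMetric[]; // now optional
}

// The additional filtering only kicks in when the caller actually passes
// requiredMetrics, so existing consumers of the hook keep their behaviour.
function resolveMetricFilter(options: UseMetadataOptions): InventoryMetric[] | undefined {
  return options.requiredMetrics?.length ? options.requiredMetrics : undefined;
}
```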
### Server
Removing TSVB files has a direct impact on the
`api/metrics/node_details` endpoint. This endpoint is only used to
provide data to the Node Details page. It returns a 400 error if an invalid metric is passed, which will now be the case for hosts.
**Example Request:**
```json
POST kbn:api/metrics/node_details
{
  "metrics": [
    "hostK8sCpuCap",
    "hostSystemOverview"
  ],
  "nodeId": "gke-release-oblt-release-oblt-pool-c4163099-bcaj",
  "nodeType": "host",
  "timerange": {
    "from": 1695210522045,
    "to": 1695214122045,
    "interval": ">=1m"
  },
  "cloudId": "6106013995483209805",
  "sourceId": "default"
}
```
**Response:**
```json
{
"statusCode": 400,
"error": "Bad Request",
"message": "Failed to validate: \n in metrics/0: \"hostK8sCpuCap\" does not match expected type \"podOverview\" | \"podCpuUsage\" | \"podMemoryUsage\" | \"podLogUsage\" | \"podNetworkTraffic\" | \"containerOverview\" | \"containerCpuKernel\" | \"containerCpuUsage\" | \"containerDiskIOOps\" | \"containerDiskIOBytes\" | \"containerMemory\" | \"containerNetworkTraffic\" | \"containerK8sOverview\" | \"containerK8sCpuUsage\" | \"containerK8sMemoryUsage\" | \"nginxHits\" | \"nginxRequestRate\" | \"nginxActiveConnections\" | \"nginxRequestsPerConnection\" | \"awsEC2CpuUtilization\" | \"awsEC2NetworkTraffic\" | \"awsEC2DiskIOBytes\" | \"awsS3TotalRequests\" | \"awsS3NumberOfObjects\" | \"awsS3BucketSize\" | \"awsS3DownloadBytes\" | \"awsS3UploadBytes\" | \"awsRDSCpuTotal\" | \"awsRDSConnections\" | \"awsRDSQueriesExecuted\" | \"awsRDSActiveTransactions\" | \"awsRDSLatency\" | \"awsSQSMessagesVisible\" | \"awsSQSMessagesDelayed\" | \"awsSQSMessagesSent\" | \"awsSQSMessagesEmpty\" | \"awsSQSOldestMessage\" | \"custom\"\n in metrics/1: \"hostSystemOverview\" does not match expected type \"podOverview\" | \"podCpuUsage\" | \"podMemoryUsage\" | \"podLogUsage\" | \"podNetworkTraffic\" | \"containerOverview\" | \"containerCpuKernel\" | \"containerCpuUsage\" | \"containerDiskIOOps\" | \"containerDiskIOBytes\" | \"containerMemory\" | \"containerNetworkTraffic\" | \"containerK8sOverview\" | \"containerK8sCpuUsage\" | \"containerK8sMemoryUsage\" | \"nginxHits\" | \"nginxRequestRate\" | \"nginxActiveConnections\" | \"nginxRequestsPerConnection\" | \"awsEC2CpuUtilization\" | \"awsEC2NetworkTraffic\" | \"awsEC2DiskIOBytes\" | \"awsS3TotalRequests\" | \"awsS3NumberOfObjects\" | \"awsS3BucketSize\" | \"awsS3DownloadBytes\" | \"awsS3UploadBytes\" | \"awsRDSCpuTotal\" | \"awsRDSConnections\" | \"awsRDSQueriesExecuted\" | \"awsRDSActiveTransactions\" | \"awsRDSLatency\" | \"awsSQSMessagesVisible\" | \"awsSQSMessagesDelayed\" | \"awsSQSMessagesSent\" | \"awsSQSMessagesEmpty\" | \"awsSQSOldestMessage\" | \"custom\""
}
```
### How to Test
- Start a local Kibana instance pointing to an oblt cluster.
- Navigate to `Infrastructure`.
- Try different asset types and navigate to the Node Details view.
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
## 📓 Summary
Closes #166848
This work adds a new tab to navigate Data Views from the Log Explorer selector.
In this first iteration, when the user selects a data view, we move into Discover, preselecting and loading the data view of choice.
**N.B.**: this recording was made on a setup where I have no installed integrations, so showing the "no integrations" panel is the expected behaviour.
e8d1f622-86fb-4841-b4cc-4a913067d2cc
## Updated selector state machine
<img width="1492" alt="Screenshot 2023-09-22 at 12 15 44"
src="c563b765-6c6c-41e8-b8cd-769c518932c3">
## New DataViews state machine
<img width="995" alt="Screenshot 2023-09-22 at 12 39 09"
src="e4e43343-c915-42d8-8660-a2ee89f8d595">
---------
Co-authored-by: Marco Antonio Ghiani <marcoantonio.ghiani@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Closes https://github.com/elastic/kibana/issues/156245
One test agent went from offline to inactive due to the inactivity timeout and broke the tests; instead, the `last_checkin` is now set to now-6m so the agent stays in the offline state.
## Summary
I had to change `waitForRender` since `page.waitForFunction` tries to run a script on the page, which does not work due to CSP settings on Cloud. Instead of injecting a script, we use the classical API to find elements/attributes in the DOM.
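A rough sketch of the approach, assuming a Playwright-style page object (the selector and helper names are illustrative, not the exact FTR code):

```typescript
import type { Page } from 'playwright';

// Instead of page.waitForFunction (which injects a script and is blocked by the
// Cloud CSP), poll the DOM through the regular element-query API until enough
// elements report as rendered.
async function waitForRender(page: Page, expectedCount: number, timeoutMs = 30_000): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const rendered = await page.locator('[data-render-complete="true"]').count();
    if (rendered >= expectedCount) {
      return;
    }
    await page.waitForTimeout(500);
  }
  throw new Error(`Timed out waiting for ${expectedCount} rendered elements`);
}
```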
Since `PUT /internal/core/_settings` was merged in 8.11.0, running journeys on Cloud with on-the-fly label updates is supported for 8.11.0+ deployments. I added an error message for the 404 code in case someone runs it on an earlier version.
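For illustration, the label update is roughly this shape (the payload and error handling below are a sketch, not the exact FTR code):

```typescript
// Push journey labels to the running Kibana via the internal settings endpoint;
// a 404 means the deployment is older than 8.11.0 and doesn't support it.
async function updateTelemetryLabels(kibanaUrl: string, labels: Record<string, string>): Promise<void> {
  const response = await fetch(`${kibanaUrl}/internal/core/_settings`, {
    method: 'PUT',
    headers: {
      'content-type': 'application/json',
      'kbn-xsrf': 'true',
    },
    // Assumed payload shape for the telemetry/APM labels.
    body: JSON.stringify({ telemetry: { labels } }),
  });

  if (response.status === 404) {
    throw new Error('PUT /internal/core/_settings requires 8.11.0+; labels were not updated');
  }
  if (!response.ok) {
    throw new Error(`Failed to update labels: ${response.status}`);
  }
}
```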
The `many_fields_discover` journey was updated since, on Cloud, the data view used by the scenario is not selected by default.
How it works:
Create a deployment with QAF and re-configure it for a journey run:
```
export EC_DEPLOYMENT_NAME=my-run-8.11
qaf elastic-cloud deployments create --stack-version 8.11.0-SNAPSHOT --environment staging --region gcp-us-central1
qaf elastic-cloud deployments configure-for-performance-journeys
```
Run any journey, e.g. many_fields_discover
```
TEST_CLOUD=1 TEST_ES_URL=https://username:pswd@es_url:443 TEST_KIBANA_URL=https://username:pswd@kibana-ur_url node scripts/functional_test_runner --config x-pack/performance/journeys/many_fields_discover.ts
```
You should see a log about labels being updated:
```
Updating telemetry & APM labels: {"testJobId":"local-a3272047-6724-44d1-9a61-5c79781b06a1","testBuildId":"local-d8edbace-f441-4ba9-ac83-5909be3acf2a","journeyName":"many_fields_discover","ftrConfig":"x-pack/performance/journeys/many_fields_discover.ts"}
```
And then you should be able to find APM logs for the journey in the
[Ops](https://kibana-ops-e2e-perf.kb.us-central1.gcp.cloud.es.io:9243/app/apm/services?comparisonEnabled=true&environment=ENVIRONMENT_ALL&kuery=labels.testJobId%20%3A%20%22local-d79a878c-cc7a-423b-b884-c9b6b1a8d781%22&latencyAggregationType=avg&offset=1d&rangeFrom=now-24h%2Fh&rangeTo=now&serviceGroup=&transactionType=request)
cluster.
Implements #153108.
This enables the
`@kbn/telemetry/event_generating_elements_should_be_instrumented` eslint
rule for the `aiops` plugin to enforce `data-test-subj` attributes on
actionable EUI components so they are auto-instrumented by telemetry.
The ids were first auto-created using `node scripts/eslint --fix
x-pack/plugins/aiops` and then adapted.
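For illustration, the rule nudges components toward something like this (the component and id below are made up):

```tsx
import React from 'react';
import { EuiButton } from '@elastic/eui';

// Actionable EUI elements need a data-test-subj so telemetry can auto-instrument
// the click; the ids were generated by the eslint --fix run and then adjusted.
export const RunAnalysisButton = ({ onRun }: { onRun: () => void }) => (
  <EuiButton data-test-subj="aiopsRunAnalysisButton" onClick={onRun}>
    Run analysis
  </EuiButton>
);
```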
## Summary
Hopefully
closes #167104, closes #167130, closes #167100, closes #167013, closes #166964
Fixing a few issues with login/logout:
1. Failed to login in "before" hook
<img width="1336" alt="Screenshot 2023-09-25 at 12 37 45"
src="e3b2830e-7b0d-4467-9b90-261b385bf71e">
My theory is that we are loading the `/login` route too soon, while logout has not completed yet.
When we navigate to `https://localhost:5620/logout` there are multiple URL redirections, with the final page being the Cloud login form. This PR makes sure we wait for this form to be displayed, plus 2500 ms extra, to avoid "immediate" /login navigation.
2. Failed login on MKI:
Updating the UI login for serverless to pass a password valid for the deployment: currently FTR uses `changeme` for both Kibana CI & MKI.
3. ES activate user profile call returning 500
We saw some login failures that are preceded with the following logs:
```
[00:03:27] │ debg Find.clickByCssSelector('[data-test-subj="loginSubmit"]') with timeout=10000
[00:03:27] │ debg Find.findByCssSelector('[data-test-subj="loginSubmit"]') with timeout=10000
[00:03:27] │ debg Find.waitForDeletedByCssSelector('.kibanaWelcomeLogo') with timeout=10000
[00:03:27] │ proc [kibana] [2023-09-19T07:08:26.126+00:00][INFO ][plugins.security.routes] Logging in with provider "basic" (basic)
[00:03:27] │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-profile-8] with alias [.security-profile]
[00:03:27] │ proc [kibana] [2023-09-19T07:08:26.140+00:00][ERROR][plugins.security.user-profile] Failed to activate user profile: {"error":{"root_cause":[{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [1] shards, but this cluster currently has [27]/[27] maximum normal shards open;"}],"type":"validation_exception","reason":"Validation Failed: 1: this action would add [1] shards, but this cluster currently has [27]/[27] maximum normal shards open;"},"status":400}.
[00:03:27] │ proc [kibana] [2023-09-19T07:08:26.140+00:00][ERROR][http] 500 Server Error
[00:03:27] │ warn browser[SEVERE] http://localhost:5620/internal/security/login - Failed to load resource: the server responded with a status of 500 (Internal Server Error)
```
User activation happens during `POST internal/security/login` call to
Kibana server. ~~The only improvement that we can do from FTR
perspective is to call this end-point via API to makes sure user is
activated and only after proceed with UI login.~~
While working on issue #4 and talking to @jeramysoucy, I believe retrying login via the UI will work here as well. We check if we are still on the login page (similar to an incorrect-password login), wait 2500 ms, and press the login button again (see the sketch after this list).
4. Failed to log in, with Kibana reporting UNEXPECTED_SESSION_ERROR and being redirected to the Cloud login page
```
proc [kibana] [2023-09-25T11:35:12.794+00:00][INFO ][plugins.security.authentication] Authentication attempt failed: UNEXPECTED_SESSION_ERROR
```
The temporary solution is to retry login from scratch (navigating to the Kibana login page and logging in again).
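A rough sketch of the retry behaviour described in items 1, 3, and 4 (helper names are hypothetical):

```typescript
// Submit the login form, wait for redirects / user-profile activation to settle,
// and retry from the top if we are still sitting on the login page.
async function loginWithRetry(
  submitLogin: () => Promise<void>,
  isStillOnLoginPage: () => Promise<boolean>,
  delayMs = 2500,
  attempts = 3
): Promise<void> {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    await submitLogin();
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    if (!(await isStillOnLoginPage())) {
      return; // login succeeded
    }
  }
  throw new Error('Login still failing after retries');
}
```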
Flaky-test-runner for the functional oblt tests, 50x:
https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/3215
This PR does not fix the random 401 response when a user navigates to some apps via a direct URL.
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Follow up to #167237.
Part of #167467.
We plan to reuse some of the queries that log pattern analysis does in log rate analysis too. Log pattern analysis mostly runs queries from the client side, whereas log rate analysis has its own API endpoint and runs ES queries via the Kibana server. In preparation for use by log rate analysis, this moves the code that needs to be available server side to the `common` area of the plugin so it can be used on both server and client.
In https://github.com/elastic/kibana/pull/167148 the filename was
updated. This updates the file read path in the fleet plugin.
Fixes `proc [kibana] [2023-09-27T18:12:07.561+00:00][WARN
][plugins.fleet] Unable to retrieve GPG key from
'/var/lib/buildkite-agent/builds/kb-n2-4-spot-e622c074d3147d71/elastic/kibana-on-merge/kibana-build-xpack/node_modules/@kbn/fleet-plugin/target/keys/GPG-KEY-elasticsearch':
ENOENT`