## Summary
This PR tries to address the slow config issue:
```
The following "Functional Tests" configs have durations that exceed the maximum amount of time desired for a single CI job. This is not an error, and if you don't own any of these configs then you can ignore this warning.If you own any of these configs please split them up ASAP and ask Operations if you have questions about how to do that.
x-pack/test/alerting_api_integration/spaces_only/config.ts: 41.4 minutes
```
by splitting it into multiple groups.
_Round 1 (split the main index file, which has 3 index suites (each with its own setup/tearDown) plus the alerting suite, into 4 groups)_
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group1/config.ts: 7m 1s
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group2/config.ts: **15m 10s**
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group3/config.ts: **21m 40s**
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group4/config.ts: 5m 30s
- x-pack/test/alerting_api_integration/spaces_only/tests/action_task_params/config.ts: 2m 31s
- x-pack/test/alerting_api_integration/spaces_only/tests/actions/config.ts: 4m 22s
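Each group config is essentially a thin wrapper that reuses the shared `spaces_only` setup and only points `testFiles` at its own directory. A minimal sketch (the relative base-config path and report name here are illustrative, not copied from the PR):
```ts
// e.g. x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group1/config.ts
import { FtrConfigProviderContext } from '@kbn/test';

export default async function ({ readConfigFile }: FtrConfigProviderContext) {
  // Reuse the shared spaces_only setup (Kibana/ES config, services, etc.)
  const baseConfig = await readConfigFile(require.resolve('../../../config.ts'));

  return {
    ...baseConfig.getAll(),
    // Only run the suites that belong to this group; the group's index file
    // keeps its own setup/tearDown.
    testFiles: [require.resolve('.')],
    junit: {
      reportName: 'X-Pack Alerting API Integration Tests (spaces_only) - alerting group1',
    },
  };
}
```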
_Round 2 (rebalance groups 1-4 so their runtimes are more even)_
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group1/config.ts: 12m 46s
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group2/config.ts: 8m 46s
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group3/config.ts: 17m 30s
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group4/config.ts: 9m 5s
Here the `Alerting eventLog alerts should generate expected alert events for normal operation` test started to fail; there is probably a dependency on the preceding tests.
_Round 3 (rebalance groups 1-4, keeping the test order in group 1 intact up to the `event_log.ts` suite)_
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group1/config.ts: 17m 12s
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group2/config.ts: 8m 28s
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group3/config.ts: 16m 15s
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group4/config.ts: 6m 21s
_Round 4 (rebalance groups 3-4 so their runtimes are more even)_
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group1/config.ts: **17m 14s**
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group2/config.ts: **8m 37s**
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group3/config.ts: **12m 40s**
- x-pack/test/alerting_api_integration/spaces_only/tests/alerting/group4/config.ts: **9m 49s**
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
This PR attempts to fix the config duration warning:
```
The following "Functional Tests" configs have durations that exceed the maximum amount of time desired for a single CI job. This is not an error, and if you don't own any of these configs then you can ignore this warning.If you own any of these configs please split them up ASAP and ask Operations if you have questions about how to do that.
x-pack/test/functional_basic/config.ts: 38.8 minutes
```
<img width="1188" alt="image"
src="https://user-images.githubusercontent.com/10977896/214912243-800a1c80-13fa-406b-93dd-0f5ab208cda9.png">
The PR initially splits the original test suite into 3 config files based on area: permissions, data visualizer, and transform.
- x-pack/test/functional_basic/apps/ml/data_visualizer/config.ts duration: **19m 24s** (left for later)
- x-pack/test/functional_basic/apps/transform/config.ts duration: **18m 14s** -> let's split into 5 configs
- x-pack/test/functional_basic/apps/ml/permissions/config.ts duration: 5m 10s
2nd split round:
- x-pack/test/functional_basic/apps/transform/feature_controls/config.ts duration: 2m 4s
- x-pack/test/functional_basic/apps/transform/group1/config.ts duration: **8m 16s** -> let's split into 2 configs
- x-pack/test/functional_basic/apps/transform/group2/config.ts duration: 5m 20s
- x-pack/test/functional_basic/apps/transform/group3/config.ts duration: 5m 12s
- x-pack/test/functional_basic/apps/ml/permissions/config.ts duration: 5m 10s -> let's split into 3 configs (1 test file each)
3rd split round:
- x-pack/test/functional_basic/apps/ml/permissions/group1/config.ts duration: 3m 11s
- x-pack/test/functional_basic/apps/ml/permissions/group2/config.ts duration: 3m 42s
- x-pack/test/functional_basic/apps/ml/permissions/group3/config.ts duration: 2m 14s
- x-pack/test/functional_basic/apps/transform/group4/config.ts duration: 4m 43s
- x-pack/test/functional_basic/apps/ml/data_visualizer/config.ts duration: **19m 24s** -> let's split into 3 configs
4th split round:
- x-pack/test/functional_basic/apps/ml/data_visualizer/group1/config.ts
duration: 4m 42s
- x-pack/test/functional_basic/apps/ml/data_visualizer/group2/config.ts
duration: 9m 27s
- x-pack/test/functional_basic/apps/ml/data_visualizer/group3/config.ts
duration: 7m 39s
[Build time
](https://buildkite.com/elastic/kibana-pull-request/builds/103355) is
49m 26sec (55 FTR groups)
Currently the on-merge pipeline for
[main](https://buildkite.com/elastic/kibana-on-merge/builds?branch=main)
takes around 1h.
## Summary
This PR only deletes the component from the UI actions plugin.
@semd has already added the component to a new package in
https://github.com/elastic/kibana/pull/149057
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Fix https://github.com/elastic/kibana/issues/148412
More and more SO types will not be accessible from the HTTP APIs (either
`hidden:true` or `hiddenFromHTTPApis: true`).
However, the FTR SO client (`KbnClientSavedObjects`) still needs to be
able to access and manipulate all SO types.
This PR introduces a `ftrSoApis` plugin that is loaded for all FTR
suites. This plugin exposes SO APIs that are used by the FTR client
instead of the public SO HTTP APIs. These APIs are configured to know
about all types, even hidden ones.
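For illustration, the core of such a plugin is a set of internal routes backed by a scoped SO client that opts in to every registered type. The route path, validation, and wiring below are a rough sketch rather than the actual plugin code:
```ts
import { schema } from '@kbn/config-schema';
import type { CoreSetup, Plugin } from '@kbn/core/server';

export class FtrSoApisPlugin implements Plugin {
  public setup(core: CoreSetup) {
    const router = core.http.createRouter();

    // Hypothetical route used by the FTR SO client instead of /api/saved_objects/_find
    router.post(
      {
        path: '/internal/ftr_so_apis/_find',
        validate: { body: schema.object({}, { unknowns: 'allow' }) },
      },
      async (context, request, response) => {
        const [coreStart] = await core.getStartServices();
        // Opting in to every registered type makes hidden (and HTTP-hidden)
        // types visible to this client, unlike the public SO HTTP APIs.
        const allTypes = coreStart.savedObjects
          .getTypeRegistry()
          .getAllTypes()
          .map((type) => type.name);
        const soClient = coreStart.savedObjects.getScopedClient(request, {
          includedHiddenTypes: allTypes,
        });
        return response.ok({ body: await soClient.find(request.body as any) });
      }
    );
  }

  public start() {}
}
```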
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
We just had an issue where two PRs were merged and it caused the limit
of the `triggerActionsUi` bundle to be exceeded, breaking PR builds. The
issue is that we didn't see any indication of this in the on-merge jobs
because we don't produce the PR report for on-merge jobs or ask ci-stats
if we should fail the job. Instead, we just ship the metrics for
baseline purposes. This fixes that problem by adding a `--validate` flag
to `node scripts/ship_ci_stats`, which takes care of sending at least
some ci-stats and will verify that the bundle limits are not exceeded.
Since we didn't catch this issue in the on-merge job, the limits were incorrect for over an hour and were merged into many PRs, wasting engineering and CI time.
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Reopens #149143 with updates to the target file and service
After a commit is merged, tested, and images are built and pushed to the
container registry we need to send a notification that a new tag is
available.
This triggers a promotion pipeline with the latest container tag when:
1) the branch is tracked (i.e. main, and not a personal branch)
2) ~triggered from our on-merge test pipeline~
https://github.com/elastic/kibana/pull/149350 had to remove support for the second condition - we're triggering via REST now, which removes the environment variable indicating which pipeline triggered the build.
After a commit is merged, tested, and images are built and pushed to the
container registry we need to send a notification that a new tag is
available.
This triggers a promotion pipeline with the latest container tag when:
1) the branch is tracked (i.e. main, and not a personal branch)
2) ~triggered from our on-merge test pipeline~
https://github.com/elastic/kibana/pull/149350 had to remove support for the second condition - we're triggering via REST now, which removes the environment variable indicating which pipeline triggered the build.
Co-authored-by: Tiago Costa <tiago.costa@elastic.co>
Reopens #148864 to trigger via REST instead of YAML. The previous implementation did not support commit-triggered builds.
This conditionally adds a pipeline trigger to
`kibana-artifacts-container-image` at the end of the on-merge pipeline
when tests are passing. The triggered pipeline will build (and
eventually push) our default docker images.
## Summary
I noticed some noise in the [Performance
dashboard](dd0473ac-826f-5621-9a10-25319700326e?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-24h%2Fh,to:now)))
and think it is better to disable telemetry for journeys by default.
We use it to report performance events and this PR enables it in the
performance pipeline via env variable `PERFORMANCE_ENABLE_TELEMETRY`.
For other pipelines (PRs,
[performance-data-set-extraction](https://buildkite.com/elastic/kibana-performance-data-set-extraction))
running on regular workers, or for local troubleshooting, there is not much value in collecting inconsistent values.
This adds a new pipeline to build our default container image, using the `kibana-ci` docker namespace and an image tag based on the first 7 characters of the commit hash.
https://buildkite.com/elastic/kibana-artifacts-container-image/builds/3
Will have followups for:
1) on-merge trigger
2) docker push / controller pipeline trigger
We need to make sure that branches other than main, as well as manual triggers (untested), skip publishing.
FTR groups on CI target a 40 minute runtime. In situations where tests are updated or moved and there's no prior data, we're occasionally hitting the current 60 minute timeout. This increases the timeout to 90 minutes.
Updates mocha to 10.2.0 and types/mocha to 10.0.1 to address an `npm
audit` warning. Verified tests still run and pass.
Re-opens https://github.com/elastic/kibana/pull/146951 with a fix added
to `package.json`. Credits to the original author, thanks for the
contribution.
Co-authored-by: Sergev ₱ <118327710+iot-defcon@users.noreply.github.com>
This PR implements a linter like the TS Project linter, except for packages in the repo. It does this by extracting the reusable bits from the TS Project linter and reusing them for the new package linter. The only rule that exists for packages right now is that the "name" in the package.json file matches the "id" in kibana.jsonc. The goal is to use a rule to migrate kibana.json files in the future.
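For illustration only, the name/id rule boils down to a check along these lines (a hypothetical standalone sketch, not the linter's actual implementation, which shares infrastructure with the TS Project linter):
```ts
import { readFileSync } from 'fs';
import { join } from 'path';

// Hypothetical helper: report a violation if package.json "name" and
// kibana.jsonc "id" disagree for a given package directory.
function checkNameMatchesId(pkgDir: string): string | undefined {
  const pkgJson = JSON.parse(readFileSync(join(pkgDir, 'package.json'), 'utf8'));

  // kibana.jsonc allows comments; a real implementation would use a proper
  // JSONC parser instead of this naive comment stripping.
  const raw = readFileSync(join(pkgDir, 'kibana.jsonc'), 'utf8');
  const manifest = JSON.parse(
    raw.replace(/\/\/[^\n]*/g, '').replace(/\/\*[\s\S]*?\*\//g, '')
  );

  if (pkgJson.name !== manifest.id) {
    return `package.json "name" (${pkgJson.name}) must match kibana.jsonc "id" (${manifest.id})`;
  }
}
```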
Additionally, a new rule for validating the indentation of tsconfig.json
files was added.
Validating and fixing violations is what has triggered review by so many
teams, but we plan to treat those review requests as notifications of
the changes and not as blockers for merging.
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
This PR adds the capability to run capacity testing for individual APIs (#143066).
Currently in main we have 2 types of performance tests:
- single user performance journeys that simulate a single end-user experience in the browser
- scalability journeys that use APM traces from the single user performance journeys to simulate the experience of multiple end-users
This new type of performance test allows us to better understand how each individual server API scales under similar load.
How to run locally:
Make sure to clone the latest main branch of
[elastic/kibana-load-testing](https://github.com/elastic/kibana-load-testing),
then in the Kibana repo run:
`node scripts/run_scalability.js --journey-path
x-pack/test/scalability/apis/api.core.capabilities.json`
How it works:
FTR is used to start Kibana/ES and run a Gatling simulation with the JSON file as input. After the run, the latest report matching the journey name is parsed to extract performance metrics, which are reported via EBT to the Telemetry cluster.
How it will run after merge:
I plan to run the pipeline every 3 hours on a bare metal machine and report metrics to the Telemetry staging cluster.
<img width="2023" alt="image"
src="https://user-images.githubusercontent.com/10977896/208771628-f4f5dbcb-cb73-40c6-9aa1-4ec3fbf5285b.png">
APM traces are collected and reported to Kibana stats cluster:
<img width="1520" alt="image"
src="https://user-images.githubusercontent.com/10977896/208771323-4cca531a-eeea-4941-8b01-50b890f932b1.png">
What metrics are collected:
1. warmupAvgResponseTime - average response time during the warmup phase
2. rpsAtWarmup - average requests per second during the warmup phase
3. warmupDuration
4. responseTimeMetric (default: 85%) - the response time percentile to track; Gatling reports the 25/50/75/80/85/90/95/99 percentiles, as well as min/max values
5. threshold1ResponseTime (default: 3000 ms)
6. rpsAtThreshold1 - requests per second when `responseTimeMetric` first reaches threshold1ResponseTime
7. threshold2ResponseTime (default: 9000 ms)
8. rpsAtThreshold2 - requests per second when `responseTimeMetric` first reaches threshold2ResponseTime
9. threshold3ResponseTime (default: 15000 ms)
10. rpsAtThreshold3 - requests per second when `responseTimeMetric` first reaches threshold3ResponseTime
Once we agree on the metrics, I will update the indexer for telemetry.
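To make the threshold metrics concrete, here is a rough sketch of how an `rpsAtThresholdN` value could be derived from per-interval stats parsed out of the Gatling report; this is purely illustrative and not the actual parser code:
```ts
interface IntervalStats {
  rps: number; // requests per second during this interval
  responseTime: number; // the tracked responseTimeMetric percentile (e.g. 85%), in ms
}

// Returns the requests-per-second at the point where the tracked response-time
// percentile first crosses the given threshold (e.g. 3000 / 9000 / 15000 ms).
function rpsAtThreshold(intervals: IntervalStats[], thresholdMs: number): number | undefined {
  return intervals.find((interval) => interval.responseTime >= thresholdMs)?.rps;
}

// e.g. rpsAtThreshold1 = rpsAtThreshold(parsedIntervals, threshold1ResponseTime)
```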
Co-authored-by: Alejandro Fernández Haro <alejandro.haro@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
### Summary
**Sets up the foundations for
https://github.com/elastic/kibana/issues/146000**
- created a new test server under `x-pack/test/monitoring_api_integration/` that allows loading of packages at Kibana startup
- a test runner utility which is a simple for loop executing the supplied tests twice, one time with `metricbeat` data and a second time with `package` data (see the sketch after this list)
- a utility that allows transformation of package data into metricbeat data
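A minimal sketch of that runner (names are illustrative; it assumes the mocha-style `describe` global that FTR provides):
```ts
// Illustrative only: register the supplied tests twice, once per data source.
type DataSource = 'metricbeat' | 'package';

export function runTestsForBothSources(registerTests: (source: DataSource) => void) {
  for (const source of ['metricbeat', 'package'] as const) {
    describe(`with ${source} data`, () => {
      registerTests(source);
    });
  }
}
```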
**Adds API tests for the beats package**
- created a test case for each API exposed
- removed the duplicates from
`x-pack/test/api_integration/apis/monitoring`
-----
_See the included
[README](b55de5c1cc/x-pack/test/monitoring_api_integration/README.md)
for additional details_
This directory defines a custom test server that provides bundled
integrations
packages to the spawned test Kibana. This allows us to install those
packages at
startup, with all their assets (index templates, ingest pipelines..),
without
having to reach a remote package registry.
With the packages and their templates already installed we don't have to provide the static mappings in the test archives. This has the benefit of reducing our disk footprint and setup time, but more importantly it enables an easy upgrade path for the mappings, so we can verify that no breaking changes were introduced by bundling new versions of the packages.
_Note that while Stack Monitoring currently supports 3 collection modes,
the tests
in this directory only focus on metricbeat and elastic-agent data. Tests
for legacy
data are defined under `x-pack/test/api_integration/apis/monitoring`._
Since an elastic-agent integration spawns the corresponding metricbeat module under the hood (i.e. when an agent policy defines elasticsearch metrics data streams, a metricbeat process with the elasticsearch module will be spawned), the output documents are _almost_ identical. This means that we can easily transform documents from one source (elastic-agent) to the other (metricbeat), and have the same tests run against both datasets.
Note that we don't have to install anything for the metricbeat data since the mappings are already installed by Elasticsearch at startup and available under the `.monitoring-<component>-8-mb` patterns. So we are always running the metricbeat tests against the latest version of the mappings.
We could have a similar approach for packages, for example by installing the latest package versions from the public EPR before the test suites run, instead of using pinned versions. Besides the questionable reliance on remote services for running tests, this is also dangerous given that packages are released in a continuous model. This means that by the time the test suite executed against the latest version of the packages it would be too late, as that version would already be available to users.
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
The new images have an updated gh binary which now requires setting the
`GITHUB_REPO` env var, or calling `gh repo set-default`. I opted for the
env var so that we didn't need to find a good time to execute the CLI
(after the keys are in the env, but before all other user code) or worry
about the logging. This also allows other users of our scripts to
customize as makes sense without having to dive into a bunch of
imperative shell code.
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Dearest Reviewers 👋
I've been working on this branch with @mistic and @tylersmalley and
we're really confident in these changes. Additionally, this changes code
in nearly every package in the repo so we don't plan to wait for reviews
to get in before merging this. If you'd like to have a concern
addressed, please feel free to leave a review, but assuming that nobody
raises a blocker in the next 24 hours we plan to merge this EOD pacific
tomorrow, 12/22.
We'll be paying close attention to any issues this causes after merging
and work on getting those fixed ASAP. 🚀
---
The operations team is not confident that we'll be able to achieve what we originally set out to accomplish by moving to Bazel with the time and resources we have available. We have also bought ourselves some headroom with improvements to babel-register, optimizer caching, and typescript project structure.
In order to make sure we deliver packages as quickly as possible (many
teams really want them), with a usable and familiar developer
experience, this PR removes Bazel for building packages in favor of
using the same JIT transpilation we use for plugins.
Additionally, packages now use `kbn_references` (again, just copying the
dx from plugins to packages).
Because of the complex relationships between packages/plugins and in
order to prepare ourselves for automatic dependency detection tools we
plan to use in the future, this PR also introduces a "TS Project Linter"
which will validate that every tsconfig.json file meets a few
requirements:
1. the chain of base config files extended by each config includes `tsconfig.base.json` and not `tsconfig.json`
2. the `include` config is used, and not `files`
3. the `exclude` config includes `target/**/*`
4. the `outDir` compiler option is specified as `target/types`
5. none of these compiler options are specified: `declaration`, `declarationMap`, `emitDeclarationOnly`, `skipLibCheck`, `target`, `paths`
6. all references to other packages/plugins use their pkg id, i.e.:
```js
// valid
{
"kbn_references": ["@kbn/core"]
}
// not valid
{
"kbn_references": [{ "path": "../../../src/core/tsconfig.json" }]
}
```
7. only packages/plugins which are imported somewhere in the ts code are listed in `kbn_references`
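Taken together, a tsconfig.json that satisfies these rules ends up looking roughly like this (the extends depth and the references shown are just an example):
```js
{
  "extends": "../../../tsconfig.base.json",
  "compilerOptions": {
    "outDir": "target/types"
  },
  "include": ["**/*.ts", "**/*.tsx"],
  "exclude": ["target/**/*"],
  "kbn_references": ["@kbn/core"]
}
```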
This linter is not only validating all of the tsconfig.json files, but
it also will fix these config files to deal with just about any
violation that can be produced. Just run `node scripts/ts_project_linter
--fix` locally to apply these fixes, or let CI take care of
automatically fixing things and pushing the changes to your PR.
> **Example:** [`64e93e5`
(#146212)](64e93e5806)
When I merged main into my PR it included a change which removed the
`@kbn/core-injected-metadata-browser` package. After resolving the
conflicts I missed a few tsconfig files which included references to the
now removed package. The TS Project Linter identified that these
references were removed from the code and pushed a change to the PR to
remove them from the tsconfig.json files.
## No bazel? Does that mean no packages??
Nope! We're still doing packages but we're pretty sure now that we won't
be using Bazel to accomplish the 'distributed caching' and 'change-based
tasks' portions of the packages project.
This PR actually makes packages much easier to work with and will be
followed up with the bundling benefits described by the original
packages RFC. Then we'll work on documentation and advocacy for using
packages for any and all new code.
We're pretty confident that implementing distributed caching and
change-based tasks will be necessary in the future, but because of
recent improvements in the repo we think we can live without them for
**at least** a year.
## Wait, there are still BUILD.bazel files in the repo
Yes, there are still three webpack bundles which are built by Bazel: the
`@kbn/ui-shared-deps-npm` DLL, `@kbn/ui-shared-deps-src` externals, and
the `@kbn/monaco` workers. These three webpack bundles are still created
during bootstrap and remotely cached using bazel. The next phase of this
project is to figure out how to get the package bundling features
described in the RFC with the current optimizer, and we expect these
bundles to go away then. Until then any package that is used in those
three bundles still needs to have a BUILD.bazel file so that they can be
referenced by the remaining webpack builds.
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
After moving away from composite projects in the IDE we now have an issue where projects like security solutions are getting `@types/jest` and `@types/mocha` loaded up, even though the "types" compiler option in security solutions focuses on jest. To fix this I've removed the `@types/mocha` package and implemented/copied a portion of the mocha types into a new `@kbn/ambient-ftr-types` package, which can be used in ftr packages to define the describe/it/etc. globals.
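Roughly speaking, the package provides ambient declarations along these lines so FTR-only code can type-check without `@types/mocha` (simplified; the real declarations cover more of the mocha surface):
```ts
// Simplified sketch of ambient test globals for FTR packages.
declare function describe(title: string, fn: () => void): void;
declare function it(title: string, fn: () => void | Promise<void>): void;
declare function before(fn: () => void | Promise<void>): void;
declare function after(fn: () => void | Promise<void>): void;
```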
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Reactivating triggers_actions_ui tests.
The rules list tests will remain skipped until we finish their refactor in
https://github.com/elastic/kibana/pull/147014
@cnasikas will take care of this one:
`x-pack/plugins/triggers_actions_ui/public/application/sections/action_connector_form/connector_form.test.tsx`
Co-authored-by: Xavier Mouligneau <xavier.mouligneau@elastic.co>
## Summary
This PR is the follow-up to #147002 and #146129 and makes a few changes so that both performance scripts expose a very similar CLI and so that `run_performance` can be run locally for debugging purposes.
- to run a single test locally against source:
- single user journey:
`node scripts/run_performance.js --journey-path
x-pack/performance/journeys/login.ts`
- scalability journey (auto-generated):
`node scripts/run_scalability.js --journey-path
target/scalability_traces/kibana/login-0184f19e-0903-450d-884d-436d737a3abe.json`
A `skip-warmup` flag avoids journey warmup runs for performance data set extraction (we don't need to run the journey twice when we are only interested in the APM traces).
The PR also updates the pipeline scripts accordingly.
## Summary
Part of #140828
PR for the pipeline run yml file:
[elastic/kibana-buildkite/pull/67](https://github.com/elastic/kibana-buildkite/pull/67)
This PR moves the data set extraction step into a separate pipeline, still reporting Kibana scalability and ES Rally output to the Kibana-related bucket.
Reporting ES Rally data to the required bucket will be added in a follow-up PR.
## Summary
Closes https://github.com/elastic/kibana/issues/111246
Removes the implementation of the vislib pie. Specifically:
- Removes the `visualization:visualize:legacyPieChartsLibrary` advanced setting, which was used as a fallback to the vislib pie
- Cleans up the vislib pie code
## Summary
Closes #146546
This PR replaces the bash script with a node-based runner script.
The script can take a relative path to a directory with scalability journey files, or a relative path to an individual journey JSON file.
`node scripts/run_scalability.js --journey-config-path
scalability_traces/server`
`node scripts/run_scalability.js --journey-config-path
scalability_traces/server/api.core.capabilities.json`