## Summary
This PR is part of the Kibana Sustainable Architecture effort.
The goal is to start categorising Kibana packages into _generic
platform_ (`group: "platform"`) vs _solution-specific_.
```
group?: 'search' | 'security' | 'observability' | 'platform'
visibility?: 'private' | 'shared'
```
Uncategorised modules are considered to be `group: 'common', visibility:
'shared'` by default.
We want to prevent code from solution A from depending on code from
solution B.
Thus, the rules are pretty simple (sketched in code right after this
list):
* Modules can only depend on:
  * modules in the same group
  * OR modules with `'shared'` visibility
* Modules in the `'observability'`, `'security'`, and `'search'` groups
must have `visibility: "private"`.
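Expressed as code, the rule boils down to something like the following
(a minimal TypeScript sketch with a hypothetical `Module` shape, not the
actual implementation):
```ts
interface Module {
  group: 'search' | 'security' | 'observability' | 'platform' | 'common';
  visibility: 'private' | 'shared';
}

// A module may only depend on modules in its own group, or on shared ones
function canDependOn(from: Module, to: Module): boolean {
  return from.group === to.group || to.visibility === 'shared';
}
```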
Long term, the goal is to re-organise packages into dedicated folders,
e.g.:
```
x-pack/platform/plugins/private
x-pack/observability/packages
```
For this first wave, we have categorised packages that seem
"straightforward":
* Any packages that have:
  * at least one dependant module
  * all dependants belonging to the same group
* Categorise all Core packages:
  * `@kbn/core-...-internal` => _platform/private_
  * everything else => _platform/shared_
* Categorise as _platform/shared_ those packages that:
  * have at least one dependant in the _platform_ group
  * don't have any `devOnly: true` dependants
### What we ask from you, as CODEOWNERS of the _package manifests_, is
that you confirm that the categorisation is correct:
* `group: "platform", visibility: "private"` if it's a package that
should only be used from platform code, not from any solution code. It
will be loaded systematically in all serverless flavors, but solution
plugins and packages won't be able to `import` from it.
* `group: "platform", visibility: "shared"` if it's a package that can
be consumed by both platform and solutions code. It will be loaded
systematically in all serverless flavors, and anybody can import / use
code from it.
* `group: "observability" | "security" | "search", visibility:
"private"` if it's a package that is intented to be used exclusively
from a given solution. It won't be accessible nor loaded from other
solutions nor platform code.
Please refer to
[#kibana-sustainable-architecture](https://elastic.slack.com/archives/C07TCKTA22E)
for any related questions.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Fix https://github.com/elastic/kibana/issues/142915
### Risk Matrix
| Risk | Probability | Severity | Mitigation/Notes |
|------|-------------|----------|------------------|
| Third party plugin types throw type errors | Low | Low | Type checks will error when using a deprecated type. Plugin authors should extend the supported types or define new ones inline. |
### For maintainers
- [X] This was checked for breaking API changes and was [labeled
appropriately](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)
(no breaking changes)
## Summary
This actually consumes the public base URL in the cloud plugin and in
the places depending on the `elasticsearchUrl` value populated there.
---------
Co-authored-by: Rodney Norris <rodney@tattdcodemonkey.com>
## Summary
Closes https://github.com/elastic/kibana/issues/192004
Calling `client.asSecondaryAuthUser` from a client scoped to a fake
request instantiated with `getFakeKibanaRequest` returns the following
error:
`Error: asSecondaryAuthUser called from a client scoped to a request
without 'authorization' header.`
This is because we use the same branch when dealing with a real or fake
request and expect the headers to be cached. There are existing tests
verifying that a fake request works, but those requests are raw objects
not created through `getFakeKibanaRequest`.
### Testing
This snippet does not throw
```ts
const fakeRequest = getFakeKibanaRequest({ id: apiKey.id, api_key: apiKey.apiKey });
const esClient = server.core.elasticsearch.client.asScoped(fakeRequest).asSecondaryAuthUser;
```
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
This adds a publicBaseUrl to the Elasticsearch plugin config so users
can set a publicly accessible URL for Elasticsearch.
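For illustration, a hedged sketch of how server-side code might consume
the new value (this assumes the setting is surfaced on the
elasticsearch service's start contract, which is how the related cloud
changes consume it; the helper name is hypothetical):
```ts
import type { CoreStart } from '@kbn/core/server';

// Sketch only: prefer the operator-configured public URL when present
export function getElasticsearchUrl(core: CoreStart): string {
  // Falls back to a local default when elasticsearch.publicBaseUrl is not set
  return core.elasticsearch.publicBaseUrl ?? 'http://localhost:9200';
}
```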
---------
Co-authored-by: Rudolf Meijering <skaapgif@gmail.com>
## Summary
This PR has breadth, but not depth. This adds 3 new `eslint` rules. The
first two protect against the use of code generated from strings (`eval`
and friends), which will not work client-side due to our CSP, and is not
something we wish to support server-side. The last rule aims to prevent
a subtle class of bugs, and to defend against a subset of prototype
pollution exploits:
- `no-new-func` to be compliant with our CSP, and to prevent code
execution from strings server-side:
https://eslint.org/docs/latest/rules/no-new-func
- `no-implied-eval` to be compliant with our CSP, and to prevent code
execution from strings server-side:
https://eslint.org/docs/latest/rules/no-implied-eval. Note that this
rule's documentation implies that it also covers `no-new-func`, but I
don't see [test
cases](https://github.com/eslint/eslint/blob/main/tests/lib/rules/no-implied-eval.js)
covering this behavior, so I think we should play it safe and enable
both rules.
- `no-prototype-builtins` to prevent accessing shadowed properties:
https://eslint.org/docs/latest/rules/no-prototype-builtins
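For context, these are the kinds of patterns the first two rules reject
(illustrative snippets, not code from this PR):
```ts
// Rejected by no-new-func: code generated from a string
const add = new Function('a', 'b', 'return a + b');

// Rejected by no-implied-eval: a string passed where a callback is expected
setTimeout("console.log('hi')", 100);

// Fine with both rules: pass a real function instead
setTimeout(() => console.log('hi'), 100);
```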
In order to be compliant with `no-prototype-builtins`, I've migrated all
usages and variants of `Object.hasOwnProperty` to use the newer
[`Object.hasOwn`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/hasOwn).
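As an example of that migration (an illustrative snippet, not a specific
diff from this PR):
```ts
declare const obj: Record<string, unknown>;

// Before: rejected by no-prototype-builtins; breaks for objects created
// with Object.create(null) or that shadow hasOwnProperty
if (obj.hasOwnProperty('foo')) {
  // ...
}

// After: works regardless of the object's prototype chain
if (Object.hasOwn(obj, 'foo')) {
  // ...
}
```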
## Summary

At the moment, our package generator creates all packages with the type
`shared-common`. This means that we cannot enforce boundaries between
server-side-only code and the browser, and vice-versa.
- [x] I started fixing `packages/core/*`
- [x] This led me to fix the `src/core/` type detection so it is
identified by the `plugin` pattern (`public` and `server` directories)
rather than as a package (either common, or single-scoped)
- [x] Unsurprisingly, this extended to packages importing core packages
hitting the boundaries eslint rules, and then to other packages
importing the latter.
- [x] Also a bunch of `common` logic that shouldn't be so _common_ 🙃
### For maintainers
- [x] This was checked for breaking API changes and was [labeled
appropriately](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Fix https://github.com/elastic/kibana/issues/185042
- Add a new `elasticsearch.maxResponseSize` config option
- Set this value to `100mb` on our serverless configuration file
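Under the hood this presumably maps onto the ES client's response-size
cap; a hedged sketch of the equivalent raw client setting (the
assumption being that Kibana forwards the config to
`@elastic/elasticsearch`'s `maxResponseSize` option):
```ts
import { Client } from '@elastic/elasticsearch';

// Sketch: cap uncompressed response bodies at ~100mb, mirroring
// the elasticsearch.maxResponseSize Kibana setting
const client = new Client({
  node: 'http://localhost:9200',
  maxResponseSize: 100 * 1024 * 1024,
});
```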
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Fix https://github.com/elastic/kibana/issues/179458
Add a third method to `IScopedClusterClient`, `asSecondaryAuth`, which
allows performing requests on behalf of the Kibana system user with the
current user as secondary authentication (via the
`es-secondary-authorization` header).
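A hedged usage sketch from a route handler (the accessor is referred to
as `asSecondaryAuthUser` in the follow-up fix above; the ES API called
here is purely illustrative):
```ts
const scopedClient = coreStart.elasticsearch.client.asScoped(request);
const esClient = scopedClient.asSecondaryAuthUser;
// Any call now authenticates as the Kibana system user while forwarding
// the end user's credentials via the es-secondary-authorization header
await esClient.info();
```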
## Summary
This PR attempts to make it easier to quantify the time we're spending
waiting on ES during Kibana startup.
- Add a log entry once successfully connected to ES, surfacing the info
of how much time we waited.
- Add two new metrics to our `kibana_started` event:
  - the time we spent waiting for ES
  - the time it took to perform the SO migration
Note that for "BWC" reasons (primarily - and simplicity's sake too)
we're not subtracting the time we spent from the `start` lifecycle
timing we already had.
## Summary
Use a shorter interval for Elasticsearch healthchecks before the first
green status, to overall reduce the time spent waiting for ES when both
Kibana and ES are starting at the same time.
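Conceptually, something like this (the values and helper name here are
hypothetical, not the actual ones):
```ts
// Poll ES status aggressively at startup, then back off once it has
// reported green at least once
const getHealthcheckInterval = (hasReportedGreen: boolean): number =>
  hasReportedGreen ? 2_500 : 500;
```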
## Summary
When starting with a clean ES (with no SO indices), Kibana fails to find
the `.kibana` index, and logs a _warning_ message.
This PR aims at removing that undesired line from the logs.
## Summary
In this PR we:
* Allow using JWT credentials to grant API keys
* Extend default value of `elasticsearch.requestHeadersWhitelist` to
include both `authorization` and `es-client-authentication` to support
JWT with required client authentication _by default_. See
https://www.elastic.co/guide/en/elasticsearch/reference/8.11/jwt-auth-realm.html#jwt-realm-configuration
* Add API integration tests for both JWTs with client authentication and
without it
__NOTE:__ We're not gating this functionality with the config flag
(`xpack.security.authc.http.jwt.taggedRoutesOnly`) as we did for the
Serverless offering. It'd be a breaking change as we already implicitly
support JWT authentication without client authentication, and to be
honest, it's not really necessary anyway.
## Testing
Refer to the `Testing` section in this PR description:
https://github.com/elastic/kibana/pull/159117.
Or run the pre-configured Kibana functional test server:
1. `node scripts/functional_tests_server.js --config
x-pack/test/security_api_integration/api_keys.config.ts`
2. Create a role mapping for the JWT user:
```bash
curl -X POST --location "http://localhost:9220/_security/role_mapping/jwt" \
-H "Authorization: Basic ZWxhc3RpYzpjaGFuZ2VtZQ==" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-d "{
\"roles\": [ \"superuser\" ],
\"enabled\": true,
\"rules\": { \"all\": [{\"field\" : { \"realm.name\" : \"jwt_with_secret\" }}] }
}"
```
3. Send any Kibana API request with the following credentials:
```bash
curl -X POST --location "xxxx"
-H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJodHRwczovL2tpYmFuYS5lbGFzdGljLmNvL2p3dC8iLCJzdWIiOiJlbGFzdGljLWFnZW50IiwiYXVkIjoiZWxhc3RpY3NlYXJjaCIsIm5hbWUiOiJFbGFzdGljIEFnZW50IiwiaWF0Ijo5NDY2ODQ4MDAsImV4cCI6NDA3MDkwODgwMH0.P7RHKZlLskS5DfVRqoVO4ivoIq9rXl2-GW6hhC9NvTSkwphYivcjpTVcyENZvxTTvJJNqcyx6rF3T-7otTTIHBOZIMhZauc5dob-sqcN_mT2htqm3BpSdlJlz60TBq6diOtlNhV212gQCEJMPZj0MNj7kZRj_GsECrTaU7FU0A3HAzkbdx15vQJMKZiFbbQCVI7-X2J0bZzQKIWfMHD-VgHFwOe6nomT-jbYIXtCBDd6fNj1zTKRl-_uzjVqNK-h8YW1h6tE4xvZmXyHQ1-9yNKZIWC7iEaPkBLaBKQulLU5MvW3AtVDUhzm6--5H1J85JH5QhRrnKYRon7ZW5q1AQ'
-H 'ES-Client-Authentication: SharedSecret my_super_secret'
....for example....
curl -X GET --location "http://localhost:5620/internal/security/me" \
-H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJodHRwczovL2tpYmFuYS5lbGFzdGljLmNvL2p3dC8iLCJzdWIiOiJlbGFzdGljLWFnZW50IiwiYXVkIjoiZWxhc3RpY3NlYXJjaCIsIm5hbWUiOiJFbGFzdGljIEFnZW50IiwiaWF0Ijo5NDY2ODQ4MDAsImV4cCI6NDA3MDkwODgwMH0.P7RHKZlLskS5DfVRqoVO4ivoIq9rXl2-GW6hhC9NvTSkwphYivcjpTVcyENZvxTTvJJNqcyx6rF3T-7otTTIHBOZIMhZauc5dob-sqcN_mT2htqm3BpSdlJlz60TBq6diOtlNhV212gQCEJMPZj0MNj7kZRj_GsECrTaU7FU0A3HAzkbdx15vQJMKZiFbbQCVI7-X2J0bZzQKIWfMHD-VgHFwOe6nomT-jbYIXtCBDd6fNj1zTKRl-_uzjVqNK-h8YW1h6tE4xvZmXyHQ1-9yNKZIWC7iEaPkBLaBKQulLU5MvW3AtVDUhzm6--5H1J85JH5QhRrnKYRon7ZW5q1AQ' \
-H 'ES-Client-Authentication: SharedSecret my_super_secret' \
-H "Accept: application/json"
----
{
"username": "elastic-agent",
"roles": [
"superuser"
],
"full_name": null,
"email": null,
"metadata": {
"jwt_claim_sub": "elastic-agent",
"jwt_token_type": "access_token",
"jwt_claim_iss": "https://kibana.elastic.co/jwt/",
"jwt_claim_name": "Elastic Agent",
"jwt_claim_aud": [
"elasticsearch"
]
},
"enabled": true,
"authentication_realm": {
"name": "jwt_with_secret",
"type": "jwt"
},
"lookup_realm": {
"name": "jwt_with_secret",
"type": "jwt"
},
"authentication_type": "realm",
"authentication_provider": {
"type": "http",
"name": "__http__"
},
"elastic_cloud_user": false
}
```
__Fixes:__ https://github.com/elastic/kibana/issues/171522
----
Release note: The default value of the
`elasticsearch.requestHeadersWhitelist` configuration option has been
expanded to include the `es-client-authentication` HTTP header, in
addition to `authorization`.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Fix https://github.com/elastic/kibana/issues/163787
Change the `isInlineScriptingEnabled` function to retry retryable
errors from ES (similar to what the connection-validation and migration
ES calls do).
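A minimal sketch of that retry pattern (a hypothetical helper, not the
actual implementation):
```ts
import { errors } from '@elastic/elasticsearch';

// Retries an ES call as long as the failure looks transient
async function withRetries<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (e) {
      const retryable =
        e instanceof errors.ConnectionError ||
        e instanceof errors.TimeoutError ||
        (e instanceof errors.ResponseError && e.statusCode === 503);
      if (!retryable || attempt >= maxAttempts) throw e;
      // Simple linear backoff between attempts
      await new Promise((resolve) => setTimeout(resolve, attempt * 1000));
    }
  }
}
```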
## Summary
We recently ran into problems because some index creation settings are
rejected by stateless ES, causing the whole system to fail and Kibana to
terminate.
We can't really use feature flags for this, given:
1. it doesn't really make sense to use manual flags for something that
strictly depends on one of our dependencies' capabilities
2. we're mixing the concepts of the "serverless" offering and the
"serverless" build. At the moment we sometimes run "serverless" Kibana
against traditional ES, meaning that the "serverless" info **cannot** be
used to determine whether we're connected to a default or serverless
version of ES.
This was something that was agreed a few weeks back, but never acted
upon.
## Introducing ES capabilities
This PR introduces the concept of elasticsearch "capabilities".
Those capabilities are built exclusively from info coming from the ES
cluster (and not by some config flag).
This first implementation simply exposes a `serverless` flag that is
populated depending on the `build_flavor` field of the `info` API (`/`
endpoint).
The end goal would be to expose a real list of capabilities (e.g. "what
is supported") instead. But ideally this would be provided by some ES
API and not by us guessing what is supported depending on the build
flavor, so for now, just exposing whether we're connected to a default
or serverless ES will suffice.
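A sketch of how the flag can be derived from the `info` response
(illustrative; the exact shape of the capabilities object may differ):
```ts
import type { Client } from '@elastic/elasticsearch';

interface ElasticsearchCapabilities {
  serverless: boolean;
}

async function getCapabilities(client: Client): Promise<ElasticsearchCapabilities> {
  const info = await client.info();
  // 'serverless' vs 'default' is reported by the root endpoint's build_flavor
  return { serverless: info.version.build_flavor === 'serverless' };
}
```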
### Using it to adapt some API calls during SO migration
This PR also adapts the `createIndex` and `cloneIndex` migration actions
to use this information and change their requests against ES accordingly
(removing some index creation parameters that are not supported).
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Bumps node.js to 18.17.0 (replacement for PR #144012 which was later
reverted)
As a result, these categories of changes were needed:
- `node` invocations will need the `--openssl-legacy-provider` flag
wherever they use certain crypto functionalities
- tests required updating of the expected HTTPS Agent call arguments;
`noDelay` seems to be a default
- `window.[NAME]` fields cannot be written directly
- some stricter typechecks
This is using our in-house built node.js 18 versions through the
proxy-cache URLs (built with
https://github.com/elastic/kibana-custom-nodejs-builds/pull/4).
These URLs are served from a bucket hosting the RHEL7/CentOS7-compatible
node distributables (see:
https://github.com/elastic/kibana-ci-proxy-cache/pull/7).
Further todos:
- [x] check docs wording and consistency
- [ ] update the dependency report
- [x] explain custom builds in documentation
- [x] node_sass prebuilts
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Tiago Costa <tiago.costa@elastic.co>
Co-authored-by: Thomas Watson <w@tson.dk>
## Summary
Analyzing the MKI QA logs, I discovered that errors encountered during
shutdown were effectively triggering a second shutdown process, making
the logs unclear.
This had the side effect of making "normal" shutdowns (e.g. via SIGINT)
appear as error shutdowns, because of the error thrown during the
shutdown.
This PR addresses it by making sure that `Root` only shuts down once.
Errors occurring during the shutdown will still appear in the logs, but
they will not surface as the cause of the shutdown (no `FATAL` log
entry).
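Conceptually, the guard looks something like this (a hypothetical
sketch, not the actual `Root` code):
```ts
class Root {
  private isShuttingDown = false;

  public async shutdown(reason?: Error) {
    // Ignore re-entrant calls triggered by errors thrown while shutting down
    if (this.isShuttingDown) return;
    this.isShuttingDown = true;
    // Only this first shutdown may surface `reason` as a FATAL log entry;
    // later errors are logged without being reported as the cause.
    // ...stop services, flush loggers, etc.
  }
}
```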
Addresses the following feedback:
https://github.com/elastic/kibana/pull/154151#discussion_r1158470566
## Summary
Similar to what has been done for ZDT, the goal of this PR is to extract
the logic of the `runV2Migration()` from the `KibanaMigrator` into a
separate file.
The PR also fixes some incomplete / incorrect UTs and adds a few missing
ones.
## Dearest Reviewers 👋
I've been working on this branch with @mistic and @tylersmalley and
we're really confident in these changes. Additionally, this changes code
in nearly every package in the repo so we don't plan to wait for reviews
to get in before merging this. If you'd like to have a concern
addressed, please feel free to leave a review, but assuming that nobody
raises a blocker in the next 24 hours we plan to merge this EOD pacific
tomorrow, 12/22.
We'll be paying close attention to any issues this causes after merging
and work on getting those fixed ASAP. 🚀
---
The operations team is not confident that we'll have the time to achieve
what we originally set out to accomplish by moving to Bazel with the
time and resources we have available. We have also bought ourselves some
headroom with improvements to babel-register, optimizer caching, and
typescript project structure.
In order to make sure we deliver packages as quickly as possible (many
teams really want them), with a usable and familiar developer
experience, this PR removes Bazel for building packages in favor of
using the same JIT transpilation we use for plugins.
Additionally, packages now use `kbn_references` (again, just copying the
dx from plugins to packages).
Because of the complex relationships between packages/plugins and in
order to prepare ourselves for automatic dependency detection tools we
plan to use in the future, this PR also introduces a "TS Project Linter"
which will validate that every tsconfig.json file meets a few
requirements:
1. the chain of base config files extended by each config includes
`tsconfig.base.json` and not `tsconfig.json`
2. the `include` config is used, and not `files`
3. the `exclude` config includes `target/**/*`
4. the `outDir` compiler option is specified as `target/types`
5. none of these compiler options are specified: `declaration`,
`declarationMap`, `emitDeclarationOnly`, `skipLibCheck`, `target`,
`paths`
6. all references to other packages/plugins use their pkg id, ie:
```js
// valid
{
"kbn_references": ["@kbn/core"]
}
// not valid
{
"kbn_references": [{ "path": "../../../src/core/tsconfig.json" }]
}
```
7. only packages/plugins which are imported somewhere in the ts code are
listed in `kbn_references`
This linter is not only validating all of the tsconfig.json files, but
it also will fix these config files to deal with just about any
violation that can be produced. Just run `node scripts/ts_project_linter
--fix` locally to apply these fixes, or let CI take care of
automatically fixing things and pushing the changes to your PR.
> **Example:** [`64e93e5`
(#146212)](64e93e5806)
When I merged main into my PR it included a change which removed the
`@kbn/core-injected-metadata-browser` package. After resolving the
conflicts I missed a few tsconfig files which included references to the
now removed package. The TS Project Linter identified that these
references were removed from the code and pushed a change to the PR to
remove them from the tsconfig.json files.
## No bazel? Does that mean no packages??
Nope! We're still doing packages but we're pretty sure now that we won't
be using Bazel to accomplish the 'distributed caching' and 'change-based
tasks' portions of the packages project.
This PR actually makes packages much easier to work with and will be
followed up with the bundling benefits described by the original
packages RFC. Then we'll work on documentation and advocacy for using
packages for any and all new code.
We're pretty confident that implementing distributed caching and
change-based tasks will be necessary in the future, but because of
recent improvements in the repo we think we can live without them for
**at least** a year.
## Wait, there are still BUILD.bazel files in the repo
Yes, there are still three webpack bundles which are built by Bazel: the
`@kbn/ui-shared-deps-npm` DLL, `@kbn/ui-shared-deps-src` externals, and
the `@kbn/monaco` workers. These three webpack bundles are still created
during bootstrap and remotely cached using bazel. The next phase of this
project is to figure out how to get the package bundling features
described in the RFC with the current optimizer, and we expect these
bundles to go away then. Until then any package that is used in those
three bundles still needs to have a BUILD.bazel file so that they can be
referenced by the remaining webpack builds.
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>