Creates the Streams app plugin, which renders UI for managing streams
(see https://github.com/elastic/kibana/pull/198713).
Additional changes in this PR:
- The menus were updated to conditionally add a link to the Streams app.
The Streams plugin itself returns a `status$` observable which signals
whether Streams have been enabled. This value is used to conditionally
render the link in the various flavors of menus (a hedged sketch follows
this list).
- There's a small change in the ES types to allow for ordered params in
ES|QL (vs named params).
- `@kbn/server-route-repository` was updated to be able to override
`access` (instead of only inferring it from the endpoint name).
Additionally, we now allow all route options by default.
- `@kbn/typed-react-router-config` now also exports a `useBreadcrumbs`.
This was copied over from the APM implementation.
- The signature of the `esql` method in
`ObservabilityElasticsearchClient` was updated to separate processing
options from options that are sent over to the `_query` endpoint.
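For illustration, wiring a menu link to that observable could look roughly like the sketch below; the `status$` shape, app path, and menu-side helper are assumptions rather than the actual contracts:
```ts
import { BehaviorSubject, map, type Observable } from 'rxjs';

// Illustrative shape of what the Streams plugin could expose.
interface StreamsStatus {
  status: 'enabled' | 'disabled';
}

// Hypothetical menu-side helper: derive the link (or its absence) from status$.
function getStreamsLink$(
  status$: Observable<StreamsStatus>
): Observable<{ label: string; path: string } | undefined> {
  return status$.pipe(
    map(({ status }) =>
      status === 'enabled' ? { label: 'Streams', path: '/app/streams' } : undefined
    )
  );
}

// Stand-in usage: menus subscribe and only render the link when it is defined.
const status$ = new BehaviorSubject<StreamsStatus>({ status: 'enabled' });
getStreamsLink$(status$).subscribe((link) => {
  console.log(link ? `show "${link.label}" link` : 'hide Streams link');
});
```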
---------
Co-authored-by: Chris Cowan <chris@elastic.co>
Co-authored-by: Joe Reuter <johannes.reuter@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
- create a `_search` endpoint to discover entities with ES|QL queries. It
currently reads sources of the provided `type` from the
`kibana_entity_definitions` index. Run this query to insert a
definition:
```
POST kibana_entity_definitions/_doc
{
  "entity_type": "service",
  "index_patterns": ["remote_cluster:logs-*"],
  "metadata_fields": [],
  "identity_fields": ["service.name"],
  "filters": [],
  "timestamp_field": "@timestamp"
}
```
By default `_search` will look at data in the last 5m. The lookup period
can be overridden by providing `start`/`end` parameters in ISO format. It
also accepts a `limit` to specify the number of entities returned, which
defaults to 10.
```
POST kbn:/internal/entities/v2/_search
{
  "type": "service",
  "start": "2024-11-19T20:40:00.000Z",
  "end": "2024-11-19T20:50:00.000Z",
  "limit": 20
}
```
- create a `_search/preview` endpoint to preview the output of entity sources
without persisting them
- create a UI to preview results of an entity definition at
`/app/entity_manager`. The application lives in its own plugin at
`observability_solution/entity_manager_app`

---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Milton Hultgren <miltonhultgren@gmail.com>
This PR resets the release notes, upgrade notes, and what's new for 9.0.
It also cleans up a few references/files that were focusing on the migration
to 8.0.
Some more PRs will follow to prepare the rest of the docs for v9.
Closes: https://github.com/elastic/platform-docs-team/issues/564
## Summary
Close https://github.com/elastic/kibana/issues/193473
Close https://github.com/elastic/kibana/issues/193474
This PR utilizes the documentation packages that are built via the tool
introduced by https://github.com/elastic/kibana/pull/193847, allowing
them to be installed in Kibana and exposing documentation retrieval as an
LLM task that AI assistants (or other consumers) can call.
Users can now decide to install the Elastic documentation from the
assistant's config screen, which will expose a new tool for the
assistant, `retrieve_documentation` (only implemented for the o11y
assistant in the current PR; it shall be done for security as a follow-up).
For more information, please refer to the self-review.
## General architecture
<img width="1118" alt="Screenshot 2024-10-17 at 09 22 32"
src="https://github.com/user-attachments/assets/3df8c30a-9ccc-49ab-92ce-c204b96d6fc4">
## What this PR does
Adds two plugins:
- `productDocBase`: contains all the logic related to product
documentation installation, status, and search. This is meant to be a
"low level" component only responsible for this specific part.
- `llmTasks`: a higher-level plugin that will contain various LLM tasks
to be used by assistants and genAI consumers. The intent is not to have
a single place to put all LLM tasks, but rather a default place
where we can introduce new tasks. (FWIW, the `nlToEsql` task will
probably be moved to that plugin.)
- Add a `retrieve_documentation` tool registration for the o11y
assistant
- Add a component on the o11y assistant configuration page to install
the product doc
(wiring the feature to the o11y assistant was done mostly for testing
purposes; any additions / changes / enhancements should be done by the
owning team - either in this PR or as a follow-up)
## What is NOT included in this PR:
- Wire product base feature to the security assistant (should be done by
the owning team as a follow-up)
- installation
- utilization as tool
- FTR tests: this is somewhat blocked by the same things we need to
figure out for https://github.com/elastic/kibana-team/issues/1271
## Screenshots
### Installation from o11y assistant configuration page
<img width="1476" alt="Screenshot 2024-10-17 at 09 41 24"
src="https://github.com/user-attachments/assets/31daa585-9fb2-400a-a2d1-5917a262367a">
### Example of output
#### Without product documentation installed
<img width="739" alt="Screenshot 2024-10-10 at 09 59 41"
src="https://github.com/user-attachments/assets/993fb216-6c9a-433f-bf44-f6e383d20d9d">
#### With product documentation installed
<img width="718" alt="Screenshot 2024-10-10 at 09 55 38"
src="https://github.com/user-attachments/assets/805ea4ca-8bc9-4355-a434-0ba81f8228a9">
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Alex Szabo <alex.szabo@elastic.co>
Co-authored-by: Matthias Wilhelm <matthias.wilhelm@elastic.co>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
Closes #176164
## Summary
Updates the legacy URL aliases developer documentation to cover the two
applicable scenarios where legacy URL aliases would be generated.
Formerly, the document was only concerned with saved object namespace
type migrations, but aliases can now also be generated when copying or
importing saved objects.
## Summary
This PR introduces the new experimental "Streams" plugin into the Kibana
project. The Streams project aims to simplify workflows around dealing
with messy logs in Elasticsearch. Our current offering is either
extremely opinionated with integrations or leaves the user alone with
the high flexibility of Elasticsearch concepts like index templates,
component templates and so on, which makes it challenging to configure
everything correctly for good performance and control over search speed
and cost.
### Scope of PR
- Provides an API for the user to "enable" the streams framework which
creates the "root" entity `logs` with all the backing Elasticsearch
assets
- Provides an API for the user to "fork" a stream
- Provides an API for the user to "read" a stream and all of it's
Elasticsearch assets.
- Provides an API for the user to upsert a stream (and implicitly child
streams that are mentioned)
- Part of this API is placing grok and disscect processing steps as well
as fields to the mapping
- Implements the Stream Naming Schema (SNS) which uses dots to express
the index patterns and stream IDs. Example: `logs.nginx.errors`
- The APIs will fully manage the `index_template`, `component_template`,
and `ingest_pipelines`.
### Out of scope
- Integration tests (coming in a follow-up)
### Reviewer Notes
- I haven't implemented tests beyond a unit test for converting the
filter conditions to Painless. I wanted to get a PR up so we can start
iterating on the interface and functionality before we invest in
testing.
- You might need to add `server.versioned.versionResolution: oldest` to
your `config/kibana.dev.yaml` to play with the requests below in the
Kibana "Dev console".
### Example API Calls
Enable the root stream (and set the mapping for the internal `.streams`
index)
```
POST kbn:/api/streams/_enable
```
Read the root entity "logs"
```
GET kbn:/api/streams/logs
```
Fork the "root" entity "logs" and create "logs.nginx" based on a
condition
```
POST kbn:/api/streams/logs/_fork
{
  "stream": {
    "id": "logs.nginx",
    "children": [],
    "processing": [],
    "fields": []
  },
  "condition": {
    "field": "log.logger",
    "operator": "eq",
    "value": "nginx_proxy"
  }
}
```
Fork the entity "logs.nginx" and create "logs.nginx.errors" based on a
condition
```
POST kbn:/api/streams/logs.nginx/_fork
{
  "stream": {
    "id": "logs.nginx.errors",
    "children": [],
    "processing": [],
    "fields": []
  },
  "condition": {
    "or": [
      { "field": "log.level", "operator": "eq", "value": "error" },
      { "field": "log.level", "operator": "eq", "value": "ERROR" }
    ]
  }
}
```
Set some processing on a stream and map the generated field
```
PUT kbn:/api/streams/logs.nginx
{
  "children": [],
  "processing": [
    { "config": { "type": "grok", "patterns": ["^%{IP:ip} - -"], "field": "message" } }
  ],
  "fields": [
    { "name": "ip", "type": "ip" }
  ]
}
```
Field definitions are checked for both descendants and ancestors for
incompatibilities to ensure they stay additive.
If children are defined in the `PUT /api/streams/<name>` API,
sub-streams are created implicitly. If a stream is `PUT`, it's added to
the parent as well with a condition that is never true (can be edited
subsequently).
`POST /api/streams/_resync` can be used to re-sync all streams from
their metadata in case the Elasticsearch objects got messed up by some
external change - not sure whether we want to keep that.
### Follow-ups
* API integration tests
* Check read permissions on data streams to determine whether a user is
allowed to read certain streams
---------
Co-authored-by: Joe Reuter <johannes.reuter@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
This PR:
- updates navigation instructions to accommodate the navigation
changes related to solution views.
- updates instructions for adding sample data to rely on the
integrations page instead of the home page, which only exists with the
classic solution view.
- updates references to the home page to avoid confusing users using one
of the new solution views.
Closes: https://github.com/elastic/platform-docs-team/issues/529
Closes: https://github.com/elastic/platform-docs-team/issues/540
## Summary
This renames the Observability AI Assistant in some places to AI
Assistant for Observability and Search. It also makes the scope
multi-valued on both sides.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Close https://github.com/elastic/kibana-team/issues/1017
This PR removes the unused Cloud Chat functionality from Kibana. The
chat was not used for some time. Moreover, we've seen some issues with
it where users saw it when it wasn't expected. Given the absence of
automated tests and the fact that the feature is no longer needed, we
are removing it to improve the overall maintainability and reliability
of the codebase. This will also decrease the amount of code loaded for
trial users of Kibana in Cloud, making the app slightly faster.
The initial serverless-only plugin for viewing data usage and retention
in Management. The purpose of this PR is to provide a place for other
engineers to work on it, hidden from public use.
- The plugin is hidden by default and can be enabled through kibana.yml:
`xpack.dataUsage.enabled: true`
- Currently it will show up in both stateful and serverless (if enabled
using the config above). When we are ready to make the plugin available we
will enable it in config/serverless.yml
- Renders a card in Management (serverless) when enabled:
<img width="1269" alt="Screenshot 2024-09-19 at 4 14 15 PM"
src="https://github.com/user-attachments/assets/705e3866-bc88-436a-8532-2af53167f7b1">
https://github.com/elastic/kibana/issues/192965
https://github.com/elastic/kibana/issues/192966
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Moving the Entity Manager out of Observability and into a Kibana plugin
as per [this
ticket](https://github.com/elastic/security-team/issues/10156)
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Kibana needs to more tightly control the set of visible features within
a space in order to support the new solution-based navigation.
Added a `scope` field to the features configuration. This enhancement is
intended to prevent new features from appearing in the Space Visibility
Toggles.
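As a rough sketch, a feature registration could use the new field like this; the registration shape and scope values here are illustrative, not the exact features API:
```ts
// A hedged sketch: the registration shape and scope values are illustrative,
// not the exact Kibana features API.
interface FeatureDefinitionLike {
  id: string;
  name: string;
  // New field from this PR: controls where the feature is surfaced.
  scope?: Array<'spaces' | 'security'>;
}

interface FeaturesSetupLike {
  registerKibanaFeature(feature: FeatureDefinitionLike): void;
}

export function registerMyFeature(features: FeaturesSetupLike) {
  features.registerKibanaFeature({
    id: 'myNewFeature',
    name: 'My new feature',
    // Leaving out the space scope (values here are illustrative) keeps the
    // feature from showing up in the Space Visibility Toggles.
    scope: ['security'],
  });
}
```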
### Checklist
- [x]
[Documentation](https://www.elastic.co/guide/en/kibana/master/development-documentation.html)
was added for features that require explanation or tutorials
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
__Fixes: https://github.com/elastic/kibana/issues/191299__
## Release Note
Added `scope` field to the features configuration. This enhancement is
intended to prevent new features from appearing in Space Visibility
Toggles.
---------
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Description
This PR adds an inventory plugin, which renders an inventory UI.
Currently only data streams are rendered. This is part of the LogsAI
initiative - basically we need a UI for tasks like structuring data,
extracting entities, listing the results etc. This is mostly POC-level
stuff. Eventually some of this code might be handed over to ECO but
let's cross that bridge when we get to it.
## Notes for reviewers:
@elastic/appex-ai-infra @elastic/security-generative-ai: added a
`truncateList` utility function that takes the first n elements of an
array and appends a `{l-n} more` string value if there are more values
than n. Really simple, but I expect it will be used very often because
we cannot send a huge number of items to the LLM.
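A hedged sketch of what such a helper could look like (the real signature may differ):
```ts
// Illustrative implementation: keep the first `limit` items and summarize the rest.
function truncateList<T>(values: T[], limit: number): Array<T | string> {
  if (values.length <= limit) {
    return values;
  }
  return [...values.slice(0, limit), `${values.length - limit} more`];
}

// truncateList(['a', 'b', 'c', 'd'], 2) => ['a', 'b', '2 more']
```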
@elastic/kibana-core @elastic/kibana-operations: just boilerplate stuff
for adding a new plugin (and thank you for enabling us to run
`quick_checks` locally!)
@elastic/obs-knowledge-team: added support for streaming using an
Observable.
@elastic/obs-ux-management-team: added links to the Inventory UI in the
Observability plugin
@elastic/obs-entities: I've added an entity manager client to be able to
fetch entity definitions on the server. Maybe there's a better way? LMK.
@elastic/obs-ux-logs-team: added a deeplink to the Inventory UI. I've
also moved CODEOWNERS for this package to
@elastic/obs-ux-management-team as they own the Observability plugin
where this is mostly used.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Moving common services to respective new homes.
Resolves: https://github.com/elastic/kibana/issues/188541
---------
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
Scaffolding an empty plugin for development of the day 0 onboarding work. This
plugin is currently disabled by default and will be enabled for ES3 when
it's ready for users.
This PR introduces an Inference plugin.
## Goals
- Provide a single place for all interactions with large language models
and other generative AI adjacent tasks.
- Abstract away differences between different LLM providers like OpenAI,
Bedrock and Gemini
- Host commonly used LLM-based tasks like generating ES|QL from natural
language and knowledge base recall.
- Allow us to move gradually to the _inference endpoint without
disrupting engineers.
## Architecture and examples

## Terminology
The following concepts are referenced throughout this POC:
- **chat completion**: the process in which the LLM generates the next
message in the conversation. This is sometimes referred to as inference,
text completion, text generation or content generation.
- **tasks**: higher-level tasks that, based on their input, use the LLM in
conjunction with other services like Elasticsearch to achieve a result.
The example in this POC is natural language to ES|QL.
- **tools**: a set of tools that the LLM can choose to use when
generating the next message. In essence, it allows the consumer of the
API to define a schema for structured output instead of plain text, and
have the LLM select the most appropriate one.
- **tool call**: when the LLM has chosen a tool (schema) to use for its
output, and returns a document that matches the schema, this is referred
to as a tool call.
## Usage examples
```ts
class MyPlugin {
  setup(coreSetup, pluginsSetup) {
    const router = coreSetup.http.createRouter();

    router.post(
      {
        path: '/internal/my_plugin/do_something',
        validate: {
          body: schema.object({
            connectorId: schema.string(),
          }),
        },
      },
      async (context, request, response) => {
        const [coreStart, pluginsStart] = await coreSetup.getStartServices();

        const inferenceClient = pluginsStart.inference.getClient({ request });

        const chatComplete$ = inferenceClient.chatComplete({
          connectorId: request.body.connectorId,
          system: `Here is my system message`,
          messages: [
            {
              role: MessageRole.User,
              content: 'Do something',
            },
          ],
        });

        const message = await lastValueFrom(
          chatComplete$.pipe(withoutTokenCountEvents(), withoutChunkEvents())
        );

        return response.ok({
          body: {
            message,
          },
        });
      }
    );
  }
}
```
## Implementation
The bulk of the work here is implementing a `chatComplete` API. Here's
what it does:
- Formats the request for the specific LLM that is being called (all
have different API specifications).
- Executes the specified connector with the formatted request.
- Creates and returns an Observable, and starts reading from the stream.
- Every event in the stream is normalized to a format that is close to
(but not exactly the same as) OpenAI's format, and emitted as a value
from the Observable.
- When the stream ends, the individual events (chunks) are concatenated
into a single message.
- If the LLM has called any tools, the tool call is validated according
to its schema.
- After emitting the message, the Observable completes.
There's also a thin wrapper around this API, which is called the
`output` API. It simplifies a few things:
- It doesn't require a conversation (list of messages); a simple `input`
string suffices.
- You can define a schema for the output of the LLM.
- It drops the token count events that are emitted.
- It simplifies the event format (update & complete).
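For illustration only, a call to this wrapper might look roughly like the following; the event and option shapes below are assumptions based on the description above, not the actual API:
```ts
// Hypothetical shapes, for illustration only.
interface OutputUpdateEvent {
  type: 'update';
  content: string;
}
interface OutputCompleteEvent {
  type: 'complete';
  output: Record<string, unknown>;
}
type OutputEvent = OutputUpdateEvent | OutputCompleteEvent;

interface InferenceClientLike {
  output(options: {
    connectorId: string;
    input: string;
    schema?: Record<string, unknown>;
  }): { subscribe(next: (event: OutputEvent) => void): void };
}

function extractServiceName(inferenceClient: InferenceClientLike, connectorId: string) {
  inferenceClient
    .output({
      connectorId,
      // A plain input string instead of a full conversation.
      input: 'Extract the service name from this log line: ...',
      // Schema describing the structured output we want back.
      schema: {
        type: 'object',
        properties: { serviceName: { type: 'string' } },
        required: ['serviceName'],
      },
    })
    .subscribe((event) => {
      // Only simplified `update` and `complete` events arrive here; the token
      // count events from chatComplete are dropped by the wrapper.
      if (event.type === 'complete') {
        console.log(event.output.serviceName);
      }
    });
}
```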
### Observable event streams
These APIs, both on the client and the server, return Observables that
emit events. When converting the Observable into a stream, the following
things happen:
- Errors are caught and serialized as events sent over the stream (after
an error, the stream ends).
- The response stream outputs data as [server-sent
events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events)
- The client that reads the stream parses the event source as an
Observable, and if it encounters a serialized error, it deserializes it
and throws an error in the Observable.
### Errors
All known errors are instances of, rather than extensions of, the
`InferenceTaskError` base class, which has a `code`, a `message`, and
`meta` information about the error. This allows us to serialize and
deserialize errors over the wire without a complicated factory pattern.
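Loosely, the described error shape could be sketched like this (anything beyond `code`, `message`, and `meta` is an assumption):
```ts
// A sketch of the described shape: every known error is an instance of a
// single base class, carrying `code`, `message`, and `meta`, so it can be
// serialized and rebuilt on the other side without a factory per error type.
class InferenceTaskError<TCode extends string, TMeta> extends Error {
  constructor(
    public readonly code: TCode,
    message: string,
    public readonly meta: TMeta
  ) {
    super(message);
  }

  toJSON() {
    return { code: this.code, message: this.message, meta: this.meta };
  }
}

// Serialize over the wire, then rehydrate on the client (illustrative values).
const wireError = JSON.stringify(
  new InferenceTaskError('requestError', 'Connector not found', { status: 404 }).toJSON()
);
const parsed = JSON.parse(wireError);
const rehydrated = new InferenceTaskError(parsed.code, parsed.message, parsed.meta);
console.log(rehydrated.code, rehydrated.meta);
```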
### Tools
Tools are defined as a record, with a `description` and optionally a
`schema`. The reason it's a record is type safety: this
allows us to have fully typed tool calls (e.g. when the name of the tool
being called is `x`, its arguments are typed as the schema of `x`).
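A rough sketch of why the record shape helps with typing; the names and option shapes below are illustrative, and the typing is intentionally simplified:
```ts
// Illustrative tool definitions keyed by name; the option names are assumptions.
const tools = {
  get_service_stats: {
    description: 'Fetch stats for a service',
    schema: {
      type: 'object',
      properties: { serviceName: { type: 'string' } },
      required: ['serviceName'],
    },
  },
  list_alerts: {
    description: 'List currently active alerts',
  },
} as const;

// Because tools are a record, the tool name becomes a union type...
type ToolName = keyof typeof tools; // 'get_service_stats' | 'list_alerts'

// ...and a tool call can be discriminated on that name. In the real API the
// arguments would be derived from the schema of the named tool; this sketch
// keeps them loose on purpose.
interface ToolCallSketch {
  name: ToolName;
  arguments: Record<string, unknown>;
}

function handleToolCall(call: ToolCallSketch) {
  if (call.name === 'get_service_stats') {
    console.log('requested service:', call.arguments.serviceName);
  }
}
```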
## Notes for reviewers
- I've only added one reference implementation for a connector adapter,
which is OpenAI. Adding more would create noise in the PR, but I can add
them as well. Bedrock would need simulated function calling, which I
would also expect to be handled by this plugin.
- Similarly, the natural language to ES|QL task just creates dummy
steps, as moving the entire implementation would mean 1000s of
additional LOC due to it needing the documentation, for instance.
- Observables over promises/iterators: Observables are a well-defined
and widely-adopted solution for async programming. Promises are not
suitable for streamed/chunked responses because there are no
intermediate values. Async iterators are not widely adopted by Kibana
engineers.
- JSON Schema over Zod: I've tried using Zod, because I like its
ergonomics over plain JSON Schema, but we need to convert it to JSON
Schema at some point, which is a lossy conversion, creating a risk of
using features that we cannot convert to JSON Schema. Additionally,
tools for converting Zod to and [from JSON Schema are not always
suitable
](https://github.com/StefanTerdell/json-schema-to-zod#use-at-runtime).
I've implemented my own JSON Schema to type conversion, as
[json-schema-to-ts](https://github.com/ThomasAribart/json-schema-to-ts)
is very slow.
- There's no option for raw input or output. There could be, but it
would defeat the purpose of the normalization that the `chatComplete`
API handles. At that point it might be better to use the connector
directly.
- That also means that for LangChain, something would be needed to
convert the Observable into an async iterator that returns
OpenAI-compatible output. This is doable, although it would be nice if
we could just use the output from the OpenAI API in that case.
- I have not made room for any vendor-specific parameters in the
`chatComplete` API. We might need it, but hopefully not.
- I think type safety is critical here, so there is some TypeScript
voodoo in some places to make that happen.
- `system` is not a message in the conversation, but a separate
property. Given the semantics of a system message (there can only be
one, and only at the beginning of the conversation), I think it's easier
to make it a top-level property than a message type.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
Part of #186515
Split the FTR configs manifest into multiple files based on distro
(serverless/stateful) and area of testing (platform/solutions).
Update the CI scripts to support the change, but without logic
modifications.
More context:
With this change we will have a clear split of FTR test configs owned by
Platform and Solutions. It is a starting point to make configs
discoverable, to make our test pipelines flexible, and to run tests based
on distro/solution.
## 📓 Summary
Closes #188443
Adding a static source repository for [metadata
fields](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-fields.html#_indexing_metadata_fields)
in the resolution chain, so that it's now possible to retrieve metadata
info for them too.
**GET /internal/fields_metadata?fieldNames=_index,_source**
```json
{
  "fields": {
    "_index": {
      "dashed_name": "index",
      "description": "The index to which the document belongs. This metadata field specifies the exact index name in which the document is stored.",
      "example": "index_1",
      "flat_name": "_index",
      "name": "_index",
      "short": "The index to which the document belongs.",
      "type": "keyword",
      "documentation_url": "https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-index-field.html",
      "source": "metadata",
      "normalize": []
    },
    "_source": {
      "dashed_name": "source",
      "description": "The original JSON representing the body of the document. This field contains all the source data that was provided at the time of indexing.",
      "example": "{\"user\": \"John Doe\", \"message\": \"Hello\"}",
      "flat_name": "_source",
      "name": "_source",
      "short": "The original JSON representing the body of the document.",
      "documentation_url": "https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-source-field.html",
      "source": "metadata",
      "normalize": [],
      "type": "unknown"
    }
  }
}
```
---------
Co-authored-by: Marco Antonio Ghiani <marcoantonio.ghiani@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Renames the `@kbn/text-based-languages` plugin to `@kbn/esql`. This
has been discussed internally; the rationale is that there will now be
only one language, ES|QL, and we may use this plugin for ES|QL-related
HTTP routes.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
## Summary
Renames the experimental asset_manager plugin (never
documented/officially released) to entity_manager. I've used `node
scripts/lint_ts_projects --fix` and `node scripts/lint_packages.js
--fix` to help with the procedure and also manually renamed the
remaining asset_manager references.
The change also removes the deprecated asset_manager code, including the
`assetManager.alphaEnabled` plugin configuration. This means the
entityManager plugin will be enabled by default.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
Introducing the `search_homepage` plugin along with integration into
`enterprise_search` and `serverless_search` behind a feature flag. This
will allow implementing the feature gated behind the feature flag.
To test these changes you can enable the feature flag with the Kibana
Dev Console using the following command:
```
POST kbn:/internal/kibana/settings/searchHomepage:homepageEnabled
{"value": true}
```
You can then disable the feature flag with the following command:
```
DELETE kbn:/internal/kibana/settings/searchHomepage:homepageEnabled
```
### Checklist
- [x] Any text added follows [EUI's writing
guidelines](https://elastic.github.io/eui/#/guidelines/writing), uses
sentence case text and includes [i18n
support](https://github.com/elastic/kibana/blob/main/packages/kbn-i18n/README.md)
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## Summary
This is a PR to add a new backend plugin (the frontend will be done in a
separate [PR](https://github.com/elastic/kibana/pull/184546)).
The purpose of the plugin is to provide a set of API routes that are used
to perform a variety of GenAI workflows to generate new integrations
based on provided inputs.
It reuses the existing GenAI connectors for its LLM communication, and
provides a set of APIs for ECS mapping, Categorization, and Related
Fields, plus an API to generate the actual integration package zip, which
is forwarded to the UI component.
### Planned follow-up changes:
As the PR is getting way too large, some planned changes will be added
in much smaller follow-ups. This mostly includes improved try/catch
handling for certain routes, adding debug/error log entries where relevant
(especially for the API endpoints themselves), and some more unit and
end-to-end tests.
- OpenAPI spec for the API will be handled in a separate PR
- All the missing unit tests will be added as a followup PR
### Testing
The `integration_assistant` plugin will be disabled by default while
it's being implemented so we can iterate and merge partial PRs without
interfering with the releases. This config will work as our feature
flag:
6aefd4ff7b/x-pack/plugins/integration_assistant/server/config.ts (L11-L13)
To test it add this to your _kibana.dev.yml_:
```
xpack.integration_assistant.enabled: true
```
### Checklist
- [x]
[Documentation](https://www.elastic.co/guide/en/kibana/master/development-documentation.html)
was added for features that require explanation or tutorials
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
### For maintainers
- [ ] This was checked for breaking API changes and was [labeled
appropriately](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)
---------
Co-authored-by: Patryk Kopycinski <contact@patrykkopycinski.com>
Co-authored-by: Sergi Massaneda <sergi.massaneda@elastic.co>
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Bharat Pasupula <saibharatchandra.pasupula@elastic.co>
Co-authored-by: Bharat Pasupula <123897612+bhapas@users.noreply.github.com>
## Description
In this PR, we implemented a view for managing inference endpoints. The
changes include the following items for both **Serverless** and
**Stack**.
- A blank page will be displayed if no inference endpoints are
available.
- A page displaying a list of inference endpoints. The user can view
various details about each endpoint, such as the endpoint itself, the
provider, and the type. The table supports pagination and sorting.
- Users can add a new inference endpoint using Elasticsearch models and
third-party APIs, including Hugging Face, Cohere, and OpenAI.
To keep the changes in this PR manageable, the following items are **out
of scope** but will be added in subsequent PRs
- Option to delete an inference endpoint
- Filtering and Search bar
- Information about allocations and threads.
- Icons for **Provider**
- Deployment status of underlying trained models
## Empty page in Stack Management
## Page with all inference endpoints in Stack Management
## Inference Endpoints Management in Serverless
---------
Co-authored-by: Liam Thompson <32779855+leemthompo@users.noreply.github.com>
Co-authored-by: István Zoltán Szabó <istvan.szabo@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
The PR changes the app's title from "Datasets" to "Data Set Quality".
Note that the changed title "Data Set Quality" will also be used as a
side nav menu item under Stack Management (in follow-ups). The PR also
adds an app description as suggested in the parent issue.

The link leads to the ECS docs page: [Data Stream
Fields](https://www.elastic.co/guide/en/ecs/current/ecs-data_stream.html).
The PR also changes the breadcrumb and quick search entry to use "Data
Set Quality" instead of "Data quality" or "Logs data quality".

As per the [writing style
guidelines](https://brand.elastic.co/302f66895/p/194a3b-writing-style-guide/t/889c93),
any instance of "Dataest" is changed to either "Data Set" or "Data set".
Create the Investigate plugin (naming TBD). Part of
https://github.com/elastic/kibana/pull/183293, splitting up the work into
several PRs.
The investigate plugin is mostly a registry to allow plugins to register
their widgets without creating dependency issues.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## 📓 Summary
Closes https://github.com/elastic/observability-dev/issues/3331
Given the needs described in the linked issue about having a centralized
and async way to consume field metadata across Kibana, this work focuses
on providing server/client services to consume field metadata on demand
from the static ECS definition and integration manifests, with the chance
to further extend the possible resolution sources.
## 💡 Reviewers hints
This PR got quite long as it involves and touches different parts of the
codebase, so I'll break down the interesting parts for an easier review.
More details, code examples and mechanics description can be found in
the README file for the plugin.
### `@kbn/fields-metadata-plugin`
To avoid bundling and consuming the whole ECS static definition
client-side, a new plugin `@kbn/fields-metadata-plugin` is created to
expose the server/client services which enable retrieving only the
fields needed on a use-case basis.
### FieldsMetadataService server side
A `FieldsMetadataService` is instantiated on the plugin setup/start
server lifecycle, exposing a client to consume the fields and setup
tools for registering external dependencies.
The start contract exposes a `FieldsMetadataClient` instance. With this,
any application in Kibana can query for some fields using the available
methods, currently:
- `FieldsMetadataClient.prototype.getByName()`: retrieves a single
`FieldMetadata` instance.
- `FieldsMetadataClient.prototype.find()`: retrieves a record of
matching `FieldMetadata` instances.
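A hedged usage sketch of those two methods; the option names and return shapes below are assumptions for illustration:
```ts
// Hypothetical server-side usage of the client exposed by the start contract.
interface FieldMetadataLike {
  name: string;
  description?: string;
}

interface FieldsMetadataClientLike {
  getByName(fieldName: string): Promise<FieldMetadataLike | undefined>;
  find(options: { fieldNames?: string[] }): Promise<Record<string, FieldMetadataLike>>;
}

async function logFieldDescriptions(client: FieldsMetadataClientLike) {
  // Retrieve a single FieldMetadata instance.
  const timestampField = await client.getByName('@timestamp');
  console.log(timestampField?.description);

  // Retrieve a record of matching FieldMetadata instances.
  const fields = await client.find({ fieldNames: ['@timestamp', 'message'] });
  console.log(Object.keys(fields));
}
```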
`FieldsMetadataClient` is instantiated with the source repository
dependencies. They act as encapsulated sources which are responsible for
fetching fields from their related source. Currently, there are 2 field
repository sources used in the resolution step, but we can use this
concept to extend the resolution step in the future with more sources (LLM,
OTel, ...).
The currently used sources are:
- `EcsFieldsRepository`: allows fetching static ECS field metadata.
- `IntegrationFieldsRepository`: allows fetching fields from an
integration package from EPR, where the fields metadata are stored. To
correctly consume these fields, the `fleet` plugin must be enabled,
otherwise, the service won't be able to access the registered fields
extractor implemented with the fleet services.
As this service performs a more expensive retrieval process than the
`EcsFieldsRepository` constant complexity access, a caching layer is
applied to the retrieved results from the external source to minimize
latency.
### Fields metadata API
To expose this service to the client, a first API endpoint is created to
find field metadata and filter the results to minimize the served
payload.
- `GET /internal/fields_metadata/find` supports some initial query
parameters to narrow the fields' search.
### FieldsMetadataService client side
As we have a server-side `FieldsMetadataService`, we need a client
counterpart to consume the exposed API safely and go through the
validation steps.
The client `FieldsMetadataService` works similarly to the server-side
one, exposing a client which is returned by the public start contract of
the plugin, allowing any other plugin to directly use fields metadata
client-side.
This client would work well with existing state management solutions, as
it's not coupled to any library.
### useFieldsMetadata
For simpler use cases where we need a quick and easy way to consume
fields metadata client-side, the plugin start contract also exposes a
`useFieldsMetadata` React custom hook, which is pre-created with access to
the `FieldsMetadataService` client described above. It is important to
retrieve the hook from the start contract of this plugin, as it already
gets all the required dependencies injected, minimizing the effort on the
consumer side.
The `UnifiedDocViewer` plugin changes exemplify how we can use this hook
to access and filter fields' metadata quickly.
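For illustration, consuming the hook could look roughly like this; the params and return shape are assumptions based on the description above:
```ts
// Hypothetical consumer of the `useFieldsMetadata` hook returned by the
// plugin's start contract; the params and return shape are assumptions.
import React from 'react';

interface UseFieldsMetadataParams {
  fieldNames: string[];
}

interface UseFieldsMetadataResult {
  fieldsMetadata?: Record<string, { description?: string }>;
  loading: boolean;
}

type UseFieldsMetadataHook = (params: UseFieldsMetadataParams) => UseFieldsMetadataResult;

// The hook is injected from the plugin start contract rather than imported,
// so this sketch stays decoupled from the real module path.
export const createFieldDescription = (useFieldsMetadata: UseFieldsMetadataHook) =>
  function FieldDescription({ fieldName }: { fieldName: string }) {
    const { fieldsMetadata, loading } = useFieldsMetadata({ fieldNames: [fieldName] });

    if (loading) {
      return React.createElement('span', null, 'Loading…');
    }

    return React.createElement(
      'span',
      null,
      fieldsMetadata?.[fieldName]?.description ?? 'No description available'
    );
  };
```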
### `registerIntegrationFieldsExtractor` (@elastic/fleet)
Getting fields from an integration dataset is more complex than
accessing a static dictionary of ECS fields, and to achieve that we need
access to the PackageService implemented by the fleet team.
To get access to the package, maintain a proper separation of concerns
and avoid a direct dependency on the fleet plugin, some actions were
taken:
- the `PackageService.prototype.getPackageFieldsMetadata()` method is
implemented to keep the knowledge about retrieving package details on
this service instead of mixing it on parallel services.
- a fleet `registerIntegrationFieldsExtractor` service is created and
used during the fleet plugin setup to register a callback that accesses
the service as an internal user and retrieves the fields by the given
parameters.
- the fields metadata plugin returns a
`registerIntegrationFieldsExtractor` function from its server setup so
that we can use it to register the above-mentioned callback that
retrieves fields from an integration.
This inverts the dependency between `fields_metadata` and `fleet`
plugins so that the `fields_metadata` plugin keeps zero dependencies on
external apps.
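Roughly, the inversion could be sketched like this; apart from `getPackageFieldsMetadata` and `registerIntegrationFieldsExtractor`, which are named above, the parameter shapes are simplified assumptions:
```ts
// A sketch of the dependency inversion: fields_metadata exposes a registration
// function from its server setup, and fleet registers a callback backed by its
// PackageService. Parameter shapes here are simplified assumptions.
type IntegrationFieldsExtractor = (params: {
  integration: string;
  dataset?: string;
}) => Promise<Record<string, { type?: string; description?: string }>>;

interface FieldsMetadataServerSetupLike {
  registerIntegrationFieldsExtractor(extractor: IntegrationFieldsExtractor): void;
}

interface PackageServiceLike {
  getPackageFieldsMetadata(params: {
    packageName: string;
    datasetName?: string;
  }): Promise<Record<string, { type?: string; description?: string }>>;
}

// Called from the fleet plugin setup (illustrative wiring).
export function registerFleetFieldsExtractor(
  fieldsMetadata: FieldsMetadataServerSetupLike,
  packageService: PackageServiceLike
) {
  fieldsMetadata.registerIntegrationFieldsExtractor(async ({ integration, dataset }) => {
    // Fleet keeps the knowledge of how to read package field definitions;
    // fields_metadata only stores and invokes the callback.
    return packageService.getPackageFieldsMetadata({
      packageName: integration,
      datasetName: dataset,
    });
  });
}
```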
## Adoption
We currently have places where the `@elastic/ecs` package is directly
accessed and where we might be able to refactor the codebase to consume
this service.
**[EcsFlat usages in
Kibana](https://github.com/search?q=repo%3Aelastic%2Fkibana%20EcsFlat&type=code)**
---------
Co-authored-by: Marco Antonio Ghiani <marcoantonio.ghiani@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Relates to https://github.com/elastic/kibana/issues/183406.
## 📝 Summary
This PR creates a new plugin `data_quality` in order to register dataset
quality as a Stack Management page under the data section. For now there is
no reference to this new page in the side nav in stateful or serverless.
In order to navigate to this new page you can use the URL
`/app/management/data/data_quality`
Changes included in this PR:
- New plugin created
- Plugin registered in stack management, data section
- Dataset quality plugin is instantiated and the state is in sync with
URL
- Removed references to dataset quality in Logs explorer
## 🎥 Demo
## 🙅🏼 Missing
- Dataset quality locator
- There are still references to logs explorer (table and flyout) that
will be handled in a follow up PR.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Paired with @ThomThomson to expand Embeddable documentation with
"Guiding principles" and "Best practices".
The PR also moves the overview to src/plugins/embeddables/README.md. This
markdown is then displayed in the embeddable example application as well.
---------
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Devon Thomson <devon.thomson@elastic.co>
Having the same thing in multiple places is confusing and hard to
maintain. Embeddable documentation is exposed via developer examples.
This PR removes embeddable documentation from the README and dev-docs and
points those locations to the single location for embeddable
documentation.
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
## 📓 Summary
Closes #181528, closes #181976
**N.B.** This work was initially reviewed on a separate PR at
https://github.com/tonyghiani/kibana/pull/1.
This implementation aims to have a stateful layer that allows the
management of dependencies between Discover and other plugins, reducing
the need for a direct dependency.
Although the initial thought was to have a plugin to register features
for Discover, there might be other cases in future where we need to
prevent cyclic dependencies.
With this in mind, I created the plugin as a more generic solution to
hold stateful logic as a communication layer for Discover <-> Plugins.
## Discover Shared
Based on some recurring naming in the Kibana codebase, `discover_shared`
felt like the right place for owning these dependencies and exposing
Discover functionalities to the external world.
It is initially pretty simple and only exposes a registry for the
Discover features, but might be a good place to implement other upcoming
concepts related to Discover.
Also, this plugin should ideally never depend on other solution plugins
and keep its dependencies to a bare minimum of packages and core/data
services.
```mermaid
flowchart TD
A(Discover) -- Get --> E[DiscoverShared]
B(Logs Explorer) -- Set --> E[DiscoverShared]
C(Security) -- Set --> E[DiscoverShared]
D(Any app) -- Set --> E[DiscoverShared]
```
## DiscoverFeaturesService
This service initializes and exposes a strictly typed registry that allows
consumer apps to register additional features for Discover, and allows
Discover to retrieve them.
The **README** file explains a real use case of when we'd need to use it
and how to do that step-by-step.
Although it introduces a more nested folder structure, I decided to
implement the service as a client-service and expose it through the
plugin lifecycle methods to provide the necessary flexibility we might
need:
- We don't know yet if any of the features we register will be done on
the setup/start steps, so having the registry available in both places
opens the door to any case we face.
- The service is client-only on purpose. My opinion is that if we ever
need to register features such as server services or anything else, it
should be scoped to a similar service dedicated for the server lifecycle
and its environment.
It should never be possible to register the ObsAIAssistant
presentational component from the server, as it should not be permitted
to register a server service in the client registry.
A server DiscoverFeaturesService is not required yet for any feature, so
I left it out to avoid overcomplicating the implementation.
## FeaturesRegistry
To have a strictly typed utility that suggests the available features on
a registry and adheres to a base contract, the registry exposed on the
DiscoverFeaturesService is an instance of the `FeaturesRegistry` class,
which implements the registration/retrieval logic such that:
- If a feature is already registered, it is not possible to override it,
and an error is thrown to notify the user of the collision.
- In case we need to react to registry changes, it is possible to subscribe
to the registry or obtain it as an observable for more complex
scenarios.
The FeaturesRegistry already takes care of the required logic for the
registry, so that `DiscoverFeaturesService` is left with the
responsibility of instantiating/exposing an instance and providing the set
of allowed features.
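A hedged sketch of the registry behaviour described above (simplified; the real class is strictly typed against the allowed feature set):
```ts
import { BehaviorSubject, type Observable } from 'rxjs';

// Simplified FeaturesRegistry-like behaviour: no overrides, observable changes.
interface FeatureLike {
  id: string;
}

class FeaturesRegistrySketch<TFeature extends FeatureLike> {
  private readonly features = new Map<string, TFeature>();
  private readonly features$ = new BehaviorSubject<TFeature[]>([]);

  register(feature: TFeature): void {
    if (this.features.has(feature.id)) {
      // Collisions are surfaced loudly instead of silently overriding.
      throw new Error(`Feature with id "${feature.id}" is already registered.`);
    }
    this.features.set(feature.id, feature);
    this.features$.next([...this.features.values()]);
  }

  getById(id: string): TFeature | undefined {
    return this.features.get(id);
  }

  // For more complex scenarios, consumers can react to registry changes.
  asObservable(): Observable<TFeature[]> {
    return this.features$.asObservable();
  }
}

// Usage: an app registers a feature, Discover retrieves it later.
const registry = new FeaturesRegistrySketch<{ id: string }>();
registry.register({ id: 'observability-logs-ai-assistant' });
registry.getById('observability-logs-ai-assistant');
```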
---------
Co-authored-by: Marco Antonio Ghiani <marcoantonio.ghiani@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Davis McPhee <davismcphee@hotmail.com>