[DOCS] Adds user-facing docs for the new KP logging configuration (#94993)

Co-authored-by: Lisa Cawley <lcawley@elastic.co>
Christiane (Tina) Heiligers 2021-03-30 07:54:15 -07:00 committed by GitHub
parent 47761eed5d
commit d29abdfa15
15 changed files with 1358 additions and 588 deletions


@@ -0,0 +1,40 @@
[[application-service]]
== Application service
Kibana has migrated to be a Single Page Application. Plugins should use the `Application service` API to instruct Kibana that an application should be loaded and rendered in the UI in response to user interactions. The service also provides utilities for controlling the navigation link state, seamlessly integrating routing between applications, and loading async chunks on demand.
NOTE: The Application service is only available client side.
[source,typescript]
----
import { AppMountParameters, CoreSetup, Plugin, DEFAULT_APP_CATEGORIES } from 'kibana/public';
export class MyPlugin implements Plugin {
  public setup(core: CoreSetup) {
    core.application.register({ // <1>
      category: DEFAULT_APP_CATEGORIES.kibana,
      id: 'my-plugin',
      title: 'my plugin title',
      euiIconType: '/path/to/some.svg',
      order: 100,
      appRoute: '/app/my_plugin', // <2>
      async mount(params: AppMountParameters) { // <3>
        // Load application bundle
        const { renderApp } = await import('./application');
        // Get start services
        const [coreStart, depsStart] = await core.getStartServices(); // <4>
        // Render the application
        return renderApp(coreStart, depsStart, params); // <5>
      },
    });
  }
}
----
<1> See {kib-repo}blob/{branch}/docs/development/core/public/kibana-plugin-core-public.applicationsetup.register.md[application.register interface]
<2> Application specific URL.
<3> `mount` callback is invoked when a user navigates to the application-specific URL.
<4> `core.getStartServices` provides the APIs available during the `start` lifecycle.
<5> `mount` must return a function that Kibana calls to unmount the application when the user navigates away. Put your clean-up logic there.
NOTE: You are free to use any UI library to render a plugin application in the DOM.
However, we recommend using React and https://elastic.github.io/eui[EUI] for all your basic UI
components to create a consistent UI experience.
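The `renderApp` implementation is up to the plugin. A minimal sketch of the `./application` module imported by `mount` above, assuming React (the module layout and component are illustrative):
[source,typescript]
----
// my_plugin/public/application.tsx — hypothetical `renderApp` helper.
import React from 'react';
import ReactDOM from 'react-dom';
import { AppMountParameters, CoreStart } from 'kibana/public';

export function renderApp(
  coreStart: CoreStart,
  depsStart: unknown,
  { element }: AppMountParameters
) {
  ReactDOM.render(<h1>Hello from my plugin</h1>, element);
  // Kibana calls the returned function when the user navigates away.
  return () => ReactDOM.unmountComponentAtNode(element);
}
----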


@@ -0,0 +1,149 @@
[[configuration-service]]
== Configuration service
{kib} provides `ConfigService` for plugin developers that want to support
adjustable runtime behavior for their plugins.
Plugins can only read their own configuration values; it is not possible to access the configuration values of {kib} Core or other plugins directly.
NOTE: The Configuration service is only available server side.
[source,js]
----
// in Legacy platform
const basePath = config.get('server.basePath');
// in Kibana Platform 'basePath' belongs to the http service
const basePath = core.http.basePath.get(request);
----
To have access to your plugin config, you _should_:
* Declare a plugin-specific `configPath` (falls back to the plugin `id`
if not specified) in the {kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.pluginmanifest.md[`kibana.json`] manifest file.
* Export schema validation for the config from the plugin's main file. Schema is
mandatory. If a plugin reads from the config without a schema declaration,
`ConfigService` will throw an error.
*my_plugin/server/index.ts*
[source,typescript]
----
import { schema, TypeOf } from '@kbn/config-schema';
export const plugin = …
export const config = {
schema: schema.object(…),
};
export type MyPluginConfigType = TypeOf<typeof config.schema>;
----
* Read config value exposed via `PluginInitializerContext`:
*my_plugin/server/index.ts*
[source,typescript]
----
import type { PluginInitializerContext } from 'kibana/server';
export class MyPlugin {
  constructor(initializerContext: PluginInitializerContext) {
    this.config$ = initializerContext.config.create<MyPluginConfigType>();
    // or if config is optional:
    this.config$ = initializerContext.config.createIfExists<MyPluginConfigType>();
  }
  ...
}
----
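`config.create()` returns an Observable, so the plugin subscribes to it to read the current values. A minimal sketch of consuming it during `setup`, assuming the `MyPluginConfigType` exported above and the rxjs idiom of the `v7` era:
[source,typescript]
----
import { first } from 'rxjs/operators';
import type { Observable } from 'rxjs';
import type { PluginInitializerContext, CoreSetup } from 'kibana/server';

export class MyPlugin {
  private readonly config$: Observable<MyPluginConfigType>;
  constructor(initializerContext: PluginInitializerContext) {
    this.config$ = initializerContext.config.create<MyPluginConfigType>();
  }
  public async setup(core: CoreSetup) {
    // Resolve the current, schema-validated configuration values.
    const config = await this.config$.pipe(first()).toPromise();
  }
}
----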
If your plugin also has a client-side part, you can expose
configuration properties to it using the `exposeToBrowser`
allow-list property.
*my_plugin/server/index.ts*
[source,typescript]
----
import { schema, TypeOf } from '@kbn/config-schema';
import type { PluginConfigDescriptor } from 'kibana/server';
const configSchema = schema.object({
  secret: schema.string({ defaultValue: 'Only on server' }),
  uiProp: schema.string({ defaultValue: 'Accessible from client' }),
});
type ConfigType = TypeOf<typeof configSchema>;
export const config: PluginConfigDescriptor<ConfigType> = {
  exposeToBrowser: {
    uiProp: true,
  },
  schema: configSchema,
};
----
Configuration containing only the exposed properties will then be
available on the client side using the plugin's `initializerContext`:
*my_plugin/public/index.ts*
[source,typescript]
----
import type { CoreSetup, Plugin, PluginInitializerContext } from 'kibana/public';
interface ClientConfigType {
  uiProp: string;
}
export class MyPlugin implements Plugin<PluginSetup, PluginStart> {
  constructor(private readonly initializerContext: PluginInitializerContext) {}
  public async setup(core: CoreSetup, deps: {}) {
    const config = this.initializerContext.config.get<ClientConfigType>();
  }
}
----
All plugins are considered enabled by default. If you want to disable
your plugin, you can declare the `enabled` flag in the plugin
config. This is a special {kib} Platform key: {kib} reads its
value and won't create a plugin instance if `enabled: false`.
[source,js]
----
export const config = {
schema: schema.object({ enabled: schema.boolean({ defaultValue: false }) }),
};
----
[[handle-plugin-configuration-deprecations]]
=== Handle plugin configuration deprecations
If your plugin has deprecated configuration keys, you can describe them using
the `deprecations` config descriptor field.
Deprecations are managed on a per-plugin basis, meaning you don't need to specify
the whole property path, but use the relative path from your plugin's
configuration root.
*my_plugin/server/index.ts*
[source,typescript]
----
import { schema, TypeOf } from '@kbn/config-schema';
import type { PluginConfigDescriptor } from 'kibana/server';
const configSchema = schema.object({
  newProperty: schema.string({ defaultValue: 'Some string' }),
});
type ConfigType = TypeOf<typeof configSchema>;
export const config: PluginConfigDescriptor<ConfigType> = {
  schema: configSchema,
  deprecations: ({ rename, unused }) => [
    rename('oldProperty', 'newProperty'),
    unused('someUnusedProperty'),
  ],
};
----
In some cases, accessing the whole configuration for deprecations is
necessary. For these edge cases, `renameFromRoot` and `unusedFromRoot`
are also accessible when declaring deprecations.
*my_plugin/server/index.ts*
[source,typescript]
----
export const config: PluginConfigDescriptor<ConfigType> = {
  schema: configSchema,
  deprecations: ({ renameFromRoot, unusedFromRoot }) => [
    renameFromRoot('oldplugin.property', 'myplugin.property'),
    unusedFromRoot('oldplugin.deprecated'),
  ],
};
----


@@ -0,0 +1,30 @@
[[elasticsearch-service]]
== Elasticsearch service
The `Elasticsearch service` provides an `elasticsearch.client` programmatic API to communicate with the Elasticsearch server's HTTP API.
NOTE: The Elasticsearch service is only available server side. You can use the {kib-repo}blob/{branch}/docs/development/plugins/data/public/kibana-plugin-plugins-data-public.md[Data plugin] APIs on the client side.
`elasticsearch.client` interacts with the Elasticsearch service on behalf of:
- the `kibana_system` user via `elasticsearch.client.asInternalUser.*` methods.
- the current end-user via `elasticsearch.client.asCurrentUser.*` methods. In this case, the Elasticsearch client must be given the current user's credentials.
See <<scoped-services>> and <<development-security>>.
{kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.elasticsearchservicestart.md[Elasticsearch service API docs]
[source,typescript]
----
import { CoreStart, Plugin } from 'kibana/server';
export class MyPlugin implements Plugin {
  public start(core: CoreStart) {
    async function asyncTask() {
      const result = await core.elasticsearch.client.asInternalUser.ping(…);
    }
    asyncTask();
  }
}
----
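To act on behalf of the end-user instead, use the scoped client exposed on the request handler context. A minimal sketch (the route path is illustrative; see <<scoped-services>>):
[source,typescript]
----
import type { CoreSetup, Plugin } from 'kibana/server';

export class MyPlugin implements Plugin {
  public setup(core: CoreSetup) {
    const router = core.http.createRouter();
    router.get(
      { path: '/api/my_plugin/ping', validate: false },
      async (context, request, response) => {
        // This client is scoped to the request and carries the
        // current end-user's credentials.
        await context.core.elasticsearch.client.asCurrentUser.ping();
        return response.ok({ body: { ok: true } });
      }
    );
  }
}
----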
For advanced use cases, such as search, use the {kib-repo}blob/{branch}/docs/development/plugins/data/server/kibana-plugin-plugins-data-server.md[Data plugin].


@@ -0,0 +1,67 @@
[[http-service]]
== HTTP service
NOTE: The HTTP service is available both server and client side.
=== Server side usage
The server-side HttpService allows server-side plugins to register endpoints with built-in support for request validation. These endpoints may be used by client-side code or be exposed as a public API for users. Most plugins integrate directly with this service.
The service allows plugins to:
* extend the {kib} server with custom HTTP APIs.
* execute custom logic on an incoming request or server response.
* implement custom authentication and authorization strategies.
See {kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.httpservicesetup.md[HTTP service API docs]
[source,typescript]
----
import { schema } from '@kbn/config-schema';
import type { CoreSetup, Plugin } from 'kibana/server';
export class MyPlugin implements Plugin {
  public setup(core: CoreSetup) {
    const router = core.http.createRouter();
    const validate = {
      params: schema.object({
        id: schema.string(),
      }),
    };
    router.get({
      path: 'my_plugin/{id}',
      validate
    },
    async (context, request, response) => {
      // `findObject` stands in for your plugin's own data-access logic.
      const data = await findObject(request.params.id);
      if (!data) return response.notFound();
      return response.ok({
        body: data,
        headers: {
          'content-type': 'application/json'
        }
      });
    });
  }
}
----
=== Client side usage
The HTTP service is also offered on the client side and provides an API to communicate with the {kib} server via HTTP interface.
The client-side HttpService is a preconfigured wrapper around `window.fetch` that includes some default behavior and automatically handles common errors (such as session expiration). The service should only be used for access to backend endpoints registered by the same plugin. Feel free to use another HTTP client library to request 3rd party services.
[source,typescript]
----
import { CoreStart } from 'kibana/public';
interface MyPluginData {…}
async function fetchData(core: CoreStart) {
  return await core.http.get<MyPluginData>(
    '/api/my_plugin/',
    { query: … },
  );
}
----
See the {kib-repo}blob/{branch}/docs/development/core/public/kibana-plugin-core-public.httpsetup.md[HTTP service API docs] for all available methods.


@@ -27,421 +27,18 @@ export class MyPlugin {
}
----
=== Server-side
[[configuration-service]]
==== Configuration service
{kib} provides `ConfigService` for plugin developers that want to support
adjustable runtime behavior for their plugins.
Plugins can only read their own configuration values; it is not possible to access the configuration values of {kib} Core or other plugins directly.
[source,js]
----
// in Legacy platform
const basePath = config.get('server.basePath');
// in Kibana Platform 'basePath' belongs to the http service
const basePath = core.http.basePath.get(request);
----
The services that core provides are:
* <<application-service, Application service>>
* <<configuration-service, Configuration service>>
* <<elasticsearch-service, Elasticsearch service>>
* <<http-service, HTTP service>>
* <<logging-service, Logging service>>
* <<saved-objects-service, Saved Objects service>>
* <<ui-settings-service, UI settings service>>
To have access to your plugin config, you _should_:
* Declare a plugin-specific `configPath` (falls back to the plugin `id`
if not specified) in the {kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.pluginmanifest.md[`kibana.json`] manifest file.
* Export schema validation for the config from the plugin's main file. Schema is
mandatory. If a plugin reads from the config without a schema declaration,
`ConfigService` will throw an error.
*my_plugin/server/index.ts*
[source,typescript]
----
import { schema, TypeOf } from '@kbn/config-schema';
export const plugin = …
export const config = {
schema: schema.object(…),
};
export type MyPluginConfigType = TypeOf<typeof config.schema>;
----
* Read config value exposed via `PluginInitializerContext`.
*my_plugin/server/index.ts*
[source,typescript]
----
import type { PluginInitializerContext } from 'kibana/server';
export class MyPlugin {
constructor(initializerContext: PluginInitializerContext) {
this.config$ = initializerContext.config.create<MyPluginConfigType>();
// or if config is optional:
this.config$ = initializerContext.config.createIfExists<MyPluginConfigType>();
}
...
}
----
If your plugin also has a client-side part, you can expose
configuration properties to it using the `exposeToBrowser`
allow-list property.
*my_plugin/server/index.ts*
[source,typescript]
----
import { schema, TypeOf } from '@kbn/config-schema';
import type { PluginConfigDescriptor } from 'kibana/server';
const configSchema = schema.object({
secret: schema.string({ defaultValue: 'Only on server' }),
uiProp: schema.string({ defaultValue: 'Accessible from client' }),
});
type ConfigType = TypeOf<typeof configSchema>;
export const config: PluginConfigDescriptor<ConfigType> = {
exposeToBrowser: {
uiProp: true,
},
schema: configSchema,
};
----
Configuration containing only the exposed properties will then be
available on the client side using the plugin's `initializerContext`:
*my_plugin/public/index.ts*
[source,typescript]
----
import type { CoreSetup, Plugin, PluginInitializerContext } from 'kibana/public';
interface ClientConfigType {
  uiProp: string;
}
export class MyPlugin implements Plugin<PluginSetup, PluginStart> {
  constructor(private readonly initializerContext: PluginInitializerContext) {}
  public async setup(core: CoreSetup, deps: {}) {
    const config = this.initializerContext.config.get<ClientConfigType>();
  }
}
----
All plugins are considered enabled by default. If you want to disable
your plugin, you can declare the `enabled` flag in the plugin
config. This is a special {kib} Platform key: {kib} reads its
value and won't create a plugin instance if `enabled: false`.
[source,js]
----
export const config = {
schema: schema.object({ enabled: schema.boolean({ defaultValue: false }) }),
};
----
[[handle-plugin-configuration-deprecations]]
===== Handle plugin configuration deprecations
If your plugin has deprecated configuration keys, you can describe them using
the `deprecations` config descriptor field.
Deprecations are managed on a per-plugin basis, meaning you don't need to specify
the whole property path, but use the relative path from your plugin's
configuration root.
*my_plugin/server/index.ts*
[source,typescript]
----
import { schema, TypeOf } from '@kbn/config-schema';
import type { PluginConfigDescriptor } from 'kibana/server';
const configSchema = schema.object({
newProperty: schema.string({ defaultValue: 'Some string' }),
});
type ConfigType = TypeOf<typeof configSchema>;
export const config: PluginConfigDescriptor<ConfigType> = {
schema: configSchema,
deprecations: ({ rename, unused }) => [
rename('oldProperty', 'newProperty'),
unused('someUnusedProperty'),
],
};
----
In some cases, accessing the whole configuration for deprecations is
necessary. For these edge cases, `renameFromRoot` and `unusedFromRoot`
are also accessible when declaring deprecations.
*my_plugin/server/index.ts*
[source,typescript]
----
export const config: PluginConfigDescriptor<ConfigType> = {
schema: configSchema,
deprecations: ({ renameFromRoot, unusedFromRoot }) => [
renameFromRoot('oldplugin.property', 'myplugin.property'),
unusedFromRoot('oldplugin.deprecated'),
],
};
----
==== Logging service
Allows a plugin to provide status and diagnostic information.
For detailed instructions see the {kib-repo}blob/{branch}/src/core/server/logging/README.md[logging service documentation].
[source,typescript]
----
import type { PluginInitializerContext, CoreSetup, Plugin, Logger } from 'kibana/server';
export class MyPlugin implements Plugin {
private readonly logger: Logger;
constructor(initializerContext: PluginInitializerContext) {
this.logger = initializerContext.logger.get();
}
public setup(core: CoreSetup) {
try {
this.logger.debug('doing something...');
// …
} catch (e) {
this.logger.error('failed doing something...');
}
}
}
----
==== Elasticsearch service
The `Elasticsearch service` provides an `elasticsearch.client` programmatic API to communicate with the Elasticsearch server's REST API.
`elasticsearch.client` interacts with the Elasticsearch service on behalf of:
- the `kibana_system` user via `elasticsearch.client.asInternalUser.*` methods.
- the current end-user via `elasticsearch.client.asCurrentUser.*` methods. In this case, the Elasticsearch client must be given the current user's credentials.
See <<scoped-services>> and <<development-security>>.
{kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.elasticsearchservicestart.md[Elasticsearch service API docs]
[source,typescript]
----
import { CoreStart, Plugin } from 'kibana/server';
export class MyPlugin implements Plugin {
public start(core: CoreStart) {
async function asyncTask() {
const result = await core.elasticsearch.client.asInternalUser.ping(…);
}
asyncTask();
}
}
----
For advanced use cases, such as search, use the {kib-repo}blob/{branch}/docs/development/plugins/data/server/kibana-plugin-plugins-data-server.md[Data plugin].
include::saved-objects-service.asciidoc[leveloffset=+1]
==== HTTP service
Allows plugins:
* to extend the {kib} server with custom REST APIs.
* to execute custom logic on an incoming request or server response.
* to implement custom authentication and authorization strategies.
See {kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.httpservicesetup.md[HTTP service API docs]
[source,typescript]
----
import { schema } from '@kbn/config-schema';
import type { CoreSetup, Plugin } from 'kibana/server';
export class MyPlugin implements Plugin {
public setup(core: CoreSetup) {
const router = core.http.createRouter();
const validate = {
params: schema.object({
id: schema.string(),
}),
};
router.get({
path: 'my_plugin/{id}',
validate
},
async (context, request, response) => {
const data = await findObject(request.params.id);
if (!data) return response.notFound();
return response.ok({
body: data,
headers: {
'content-type': 'application/json'
}
});
});
}
}
----
==== UI settings service
The programmatic interface to <<advanced-options, UI settings>>.
It makes it possible for Kibana plugins to extend the Kibana UI Settings Management with custom settings.
See:
- {kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.uisettingsservicesetup.register.md[UI settings service Setup API docs]
- {kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.uisettingsservicestart.register.md[UI settings service Start API docs]
[source,typescript]
----
import { schema } from '@kbn/config-schema';
import type { CoreSetup, Plugin } from 'kibana/server';
export class MyPlugin implements Plugin {
public setup(core: CoreSetup) {
core.uiSettings.register({
custom: {
value: '42',
schema: schema.string(),
},
});
const router = core.http.createRouter();
router.get({
path: 'my_plugin/{id}',
validate: …,
},
async (context, request, response) => {
const customSetting = await context.uiSettings.client.get('custom');
});
}
}
----
=== Client-side
==== Application service
Kibana has migrated to be a Single Page Application. Plugins should use the `Application service` API to instruct Kibana that an application should be loaded and rendered in the UI in response to user interactions.
[source,typescript]
----
import { AppMountParameters, CoreSetup, Plugin, DEFAULT_APP_CATEGORIES } from 'kibana/public';
export class MyPlugin implements Plugin {
public setup(core: CoreSetup) {
core.application.register({ // <1>
category: DEFAULT_APP_CATEGORIES.kibana,
id: 'my-plugin',
title: 'my plugin title',
euiIconType: '/path/to/some.svg',
order: 100,
appRoute: '/app/my_plugin', // <2>
async mount(params: AppMountParameters) { // <3>
// Load application bundle
const { renderApp } = await import('./application');
// Get start services
const [coreStart, depsStart] = await core.getStartServices(); // <4>
// Render the application
return renderApp(coreStart, depsStart, params); // <5>
},
});
}
}
----
<1> See {kib-repo}blob/{branch}/docs/development/core/public/kibana-plugin-core-public.applicationsetup.register.md[application.register interface]
<2> Application specific URL.
<3> `mount` callback is invoked when a user navigates to the application-specific URL.
<4> `core.getStartServices` provides the APIs available during the `start` lifecycle.
<5> `mount` must return a function that will be called to unmount the application.
NOTE: You are free to use any UI library to render a plugin application in the DOM.
However, we recommend using React and https://elastic.github.io/eui[EUI] for all your basic UI
components to create a consistent UI experience.
==== HTTP service
Provides an API to communicate with the {kib} server. Feel free to use another HTTP client library to request third-party services.
[source,typescript]
----
import { CoreStart } from 'kibana/public';
interface ResponseType {…}
async function fetchData(core: CoreStart) {
  return await core.http.get<ResponseType>(
    '/api/my_plugin/',
    { query: … },
  );
}
----
See the {kib-repo}blob/{branch}/docs/development/core/public/kibana-plugin-core-public.httpsetup.md[HTTP service API docs] for all available methods.
==== Elasticsearch service
Not available in the browser. Use {kib-repo}blob/{branch}/docs/development/plugins/data/public/kibana-plugin-plugins-data-public.md[Data plugin] instead.
== Patterns
[[scoped-services]]
=== Scoped services
Whenever Kibana needs to access data saved in Elasticsearch, it
should check whether the end-user has access to the data. In
the legacy platform, Kibana required binding the Elasticsearch-related API
to an incoming request in order to access the Elasticsearch service on behalf of a
user.
[source,js]
----
async function handler(req, res) {
const dataCluster = server.plugins.elasticsearch.getCluster('data');
const data = await dataCluster.callWithRequest(req, 'ping');
}
----
The Kibana Platform introduced a handler interface on the server side to perform that association
internally. Core services that require impersonation of an incoming
request are exposed via the `context` argument of
{kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.requesthandler.md[the
request handler interface]. In the Kibana Platform, the above example looks like
this:
[source,js]
----
async function handler(context, req, res) {
const data = await context.core.elasticsearch.client.asCurrentUser.ping();
}
----
The
{kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.requesthandlercontext.md[request
handler context] exposes the following scoped *core* services:
[width="100%",cols="30%,70%",options="header",]
|===
|Legacy Platform |Kibana Platform
|`request.getSavedObjectsClient`
|{kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.savedobjectsclient.md[`context.savedObjects.client`]
|`server.plugins.elasticsearch.getCluster('admin')`
|{kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.iscopedclusterclient.md[`context.elasticsearch.client`]
|`server.plugins.elasticsearch.getCluster('data')`
|{kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.iscopedclusterclient.md[`context.elasticsearch.client`]
|`request.getUiSettingsService`
|{kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.iuisettingsclient.md[`context.uiSettings.client`]
|===
==== Declare a custom scoped service
Plugins can extend the handler context with a custom API that will be
available to the plugin itself and all dependent plugins. For example,
the plugin creates a custom Elasticsearch client and wants to use it via
the request handler context:
[source,typescript]
----
import type { CoreSetup, RequestHandlerContext, IScopedClusterClient } from 'kibana/server';
interface MyRequestHandlerContext extends RequestHandlerContext {
  myPlugin: {
    client: IScopedClusterClient;
  };
}
class MyPlugin {
  setup(core: CoreSetup) {
    const client = core.elasticsearch.createClient('myClient');
    core.http.registerRouteHandlerContext<MyRequestHandlerContext, 'myPlugin'>('myPlugin', (context, req, res) => {
      return { client: client.asScoped(req) };
    });
    const router = core.http.createRouter<MyRequestHandlerContext>();
    router.get(
      { path: '/api/my-plugin/', validate: … },
      async (context, req, res) => {
        // context type is inferred as MyRequestHandlerContext
        const data = await context.myPlugin.client.asCurrentUser.ping();
      }
    );
  }
}
----


@@ -0,0 +1,84 @@
[[logging-configuration-migration]]
== Logging configuration migration
Compatibility with the legacy logging system is assured until the end of the `v7` version.
All log messages handled by the `root` context are forwarded to the legacy logging service. If you rewrite the
root appenders, make sure they contain the `default` appender to provide backward compatibility.
NOTE: When you switch to the new logging configuration, you will start seeing duplicate log entries in both formats.
These will be removed when the `default` appender is no longer required. If you define an appender for a logger,
the log messages aren't handled by the `root` logger anymore and are not forwarded to the legacy logging service.
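For example, a root configuration that keeps the `default` appender alongside a custom one looks like this (a sketch using the same appender shape as the examples below):
[source,yaml]
----
logging:
  appenders:
    custom:
      type: console
      layout:
        type: pattern
  root:
    appenders: [default, custom]
----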
[[logging-pattern-format-old-and-new-example]]
[options="header"]
|===
| Parameter | Platform log record in **pattern** format | Legacy Platform log record in **text** format
| @timestamp | ISO8601_TZ `2012-01-31T23:33:22.011-05:00` | Absolute `23:33:22.011`
| logger | `parent.child` | `['parent', 'child']`
| level | `DEBUG` | `['debug']`
| meta | stringified JSON object `{"to": "v8"}`| N/A
| pid | can be configured as `%pid` | N/A
|===
[[logging-json-format-old-and-new-example]]
[options="header"]
|===
| Parameter | Platform log record in **json** format | Legacy Platform log record in **json** format
| @timestamp | ISO8601_TZ `2012-01-31T23:33:22.011-05:00` | ISO8601 `2012-01-31T23:33:22.011Z`
| logger | `log.logger: parent.child` | `tags: ['parent', 'child']`
| level | `log.level: DEBUG` | `tags: ['debug']`
| meta | merged in log record `{... "to": "v8"}` | merged in log record `{... "to": "v8"}`
| pid | `process.pid: 12345` | `pid: 12345`
| type | N/A | `type: log`
| error | `{ message, name, stack }` | `{ message, name, stack, code, signal }`
|===
[[logging-cli-migration]]
=== Logging configuration via CLI
As is the case for any of Kibana's config settings, you can specify your logging configuration via the CLI. For convenience, the `--verbose` and `--silent` flags exist as shortcuts and will continue to be supported beyond v7.
If you wish to override these flags, you can always do so by passing your preferred logging configuration directly to the CLI. For example, with the following configuration:
[source,yaml]
----
logging:
  appenders:
    custom:
      type: console
      layout:
        type: pattern
        pattern: "[%date][%level] %message"
----
you can override the flags with:
[options="header"]
|===
| Legacy logging | {kib} Platform logging | CLI shortcuts
|--verbose| --logging.root.level=debug --logging.root.appenders[0]=default --logging.root.appenders[1]=custom | --verbose
|--quiet| --logging.root.level=error --logging.root.appenders[0]=default --logging.root.appenders[1]=custom | not supported
|--silent| --logging.root.level=off | --silent
|===
NOTE: To preserve backwards compatibility, you are required to pass the root `default` appender until the legacy logging system is removed in `v8.0`.
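For example, the `--verbose` equivalent from the table can be passed to the CLI directly (a sketch assuming the `custom` appender defined above):
[source,bash]
----
bin/kibana --logging.root.level=debug \
  --logging.root.appenders[0]=default \
  --logging.root.appenders[1]=custom
----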


@@ -0,0 +1,545 @@
[[logging-service]]
== Logging service
Allows a plugin to provide status and diagnostic information.
NOTE: The Logging service is only available server side.
[source,typescript]
----
import type { PluginInitializerContext, CoreSetup, Plugin, Logger } from 'kibana/server';
export class MyPlugin implements Plugin {
  private readonly logger: Logger;
  constructor(initializerContext: PluginInitializerContext) {
    this.logger = initializerContext.logger.get();
  }
  public setup(core: CoreSetup) {
    try {
      this.logger.debug('doing something...');
      // …
    } catch (e) {
      this.logger.error('failed doing something...');
    }
  }
}
----
The way logging works in {kib} is inspired by the `log4j 2` logging framework used by {ref-bare}/current/logging.html[Elasticsearch].
The main idea is to have consistent logging behavior (configuration, log format etc.) across the entire Elastic Stack where possible.
=== Loggers, Appenders and Layouts
The {kib} logging system has three main components: _loggers_, _appenders_ and _layouts_. These components allow us to log
messages according to message type and level, control how these messages are formatted, and choose where the final logs
will be displayed or stored.
__Loggers__ define what logging settings should be applied to a particular context.
__<<logging-appenders,Appenders>>__ define where log messages are displayed (e.g., stdout or console) and stored (e.g., a file on disk).
__<<logging-layouts,Layouts>>__ define how log messages are formatted and what type of information they include.
[[log-level]]
=== Log level
Currently we support the following log levels: _all_, _fatal_, _error_, _warn_, _info_, _debug_, _trace_, _off_.
Levels are ordered, so _all_ > _fatal_ > _error_ > _warn_ > _info_ > _debug_ > _trace_ > _off_.
A log record is logged if its level is equal to or higher than the level of its logger. Otherwise,
the log record is ignored.
The _all_ and _off_ levels can be used only in configuration and are just handy shortcuts that allow you to log every
log record or to disable logging, either entirely or for a specific logger. These levels are also configurable as <<logging-cli-migration,cli arguments>>.
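For example, raising the verbosity of a single logger is just a matter of configuration (a sketch; the logger name is illustrative):
[source,yaml]
----
logging:
  loggers:
    - name: plugins.myPlugin
      level: debug
----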
[[logging-layouts]]
=== Layouts
Every appender should know exactly how to format log messages before they are written to the console or file on the disk.
This behavior is controlled by layouts and configured through the `appender.layout` configuration property for every
custom appender. Currently there is no default layout for
custom appenders, so one should always be specified explicitly.
There are two types of layout supported at the moment: <<pattern-layout,`pattern`>> and <<json-layout,`json`>>.
[[pattern-layout]]
==== Pattern layout
With `pattern` layout it's possible to define a string pattern with special placeholders `%conversion_pattern` that will be replaced with data from the actual log message. By default the following pattern is used: `[%date][%level][%logger] %message`.
NOTE: The `pattern` layout uses a sub-set of https://logging.apache.org/log4j/2.x/manual/layouts.html#PatternLayout[log4j 2 pattern syntax] and **doesn't implement** all `log4j 2` capabilities.
The conversions that are provided out of the box are:
**level**
Outputs the <<log-level,level>> of the logging event.
Example of `%level` output: `TRACE`, `DEBUG`, `INFO`.
**logger**
Outputs the name of the logger that published the logging event.
Example of `%logger` output: `server`, `server.http`, `server.http.kibana`.
**message**
Outputs the application supplied message associated with the logging event.
**meta**
Outputs the entries of `meta` object data in **json** format, if one is present in the event.
Example of `%meta` output:
[source,bash]
----
// Meta{from: 'v7', to: 'v8'}
'{"from":"v7","to":"v8"}'
// Meta empty object
'{}'
// no Meta provided
''
----
[[date-format]]
**date**
Outputs the date of the logging event. The date conversion specifier may be followed by a set of braces containing the name of a predefined date format and a canonical timezone name.
The timezone name is expected to be one from the https://en.wikipedia.org/wiki/List_of_tz_database_time_zones[TZ database].
The timezone defaults to the host timezone when not explicitly specified.
Example of `%date` output:
[[date-conversion-pattern-examples]]
[options="header"]
|===
| Conversion pattern | Example
| `%date`
| `2012-02-01T14:30:22.011Z` uses `ISO8601` format by default
| `%date{ISO8601}`
| `2012-02-01T14:30:22.011Z`
| `%date{ISO8601_TZ}`
| `2012-02-01T09:30:22.011-05:00` `ISO8601` with timezone
| `%date{ISO8601_TZ}{America/Los_Angeles}`
| `2012-02-01T06:30:22.011-08:00`
| `%date{ABSOLUTE}`
| `09:30:22.011`
| `%date{ABSOLUTE}{America/Los_Angeles}`
| `06:30:22.011`
| `%date{UNIX}`
| `1328106622`
| `%date{UNIX_MILLIS}`
| `1328106622011`
|===
**pid**
Outputs the process ID.
The pattern layout also offers a `highlight` option that allows you to highlight
some parts of the log message with different colors. Highlighting is quite handy if log messages are forwarded
to a terminal with color support.
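Putting it together, a console appender using a custom pattern with highlighting could be configured like this (a sketch reusing the conversions above):
[source,yaml]
----
logging:
  appenders:
    console:
      type: console
      layout:
        type: pattern
        highlight: true
        pattern: "[%date{ISO8601_TZ}{UTC}][%level][%logger] %message"
----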
[[json-layout]]
==== JSON layout
With `json` layout log messages will be formatted as JSON strings in https://www.elastic.co/guide/en/ecs/current/ecs-reference.html[ECS format] that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself.
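For instance, a record written by the `json` layout might look like this (abridged; the field names follow the ECS mapping shown in <<logging-json-format-old-and-new-example>>):
[source,json]
----
{"@timestamp":"2021-03-27T21:34:13.492-05:00","log":{"level":"INFO","logger":"server"},"message":"some message","process":{"pid":12345}}
----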
[[logging-appenders]]
=== Appenders
[[rolling-file-appender]]
==== Rolling File Appender
Similar to Log4j's `RollingFileAppender`, this appender will log into a file, and rotate it following a rolling
strategy when the configured policy triggers.
===== Triggering Policies
The triggering policy determines when a rollover should occur.
There are currently two policies supported: `size-limit` and `time-interval`.
[[size-limit-triggering-policy]]
**SizeLimitTriggeringPolicy**
This policy will rotate the file when it reaches a predetermined size.
[source,yaml]
----
logging:
  appenders:
    rolling-file:
      type: rolling-file
      fileName: /var/logs/kibana.log
      policy:
        type: size-limit
        size: 50mb
      strategy:
        # ...
      layout:
        type: pattern
----
The options are:
- `size`
The maximum size the log file should reach before a rollover is performed. The default value is `100mb`.
[[time-interval-triggering-policy]]
**TimeIntervalTriggeringPolicy**
This policy will rotate the file every given interval of time.
[source,yaml]
----
logging:
  appenders:
    rolling-file:
      type: rolling-file
      fileName: /var/logs/kibana.log
      policy:
        type: time-interval
        interval: 10s
        modulate: true
      strategy:
        # ...
      layout:
        type: pattern
----
The options are:
- `interval`
How often a rollover should occur. The default value is `24h`.
- `modulate`
Whether the interval should be adjusted to cause the next rollover to occur on the interval boundary.
For example, if modulate is true and the interval is `4h`, and the current hour is 3 am, then the first rollover will occur at 4 am,
the next ones at 8 am, noon, 4 pm, and so on. The default value is `true`.
===== Rolling strategies
The rolling strategy determines how the rollover should occur: both the naming of the rolled files,
and their retention policy.
There is currently one strategy supported: `numeric`.
**NumericRollingStrategy**
This strategy will suffix the file with a given pattern when rolling,
and will retain a fixed number of rolled files.
[source,yaml]
----
logging:
  appenders:
    rolling-file:
      type: rolling-file
      fileName: /var/logs/kibana.log
      policy:
        # ...
      strategy:
        type: numeric
        pattern: '-%i'
        max: 2
      layout:
        type: pattern
----
For example, with this configuration:
- During the first rollover, `kibana.log` is renamed to `kibana-1.log`. A new `kibana.log` file is created and starts
being written to.
- During the second rollover, `kibana-1.log` is renamed to `kibana-2.log` and `kibana.log` is renamed to `kibana-1.log`.
A new `kibana.log` file is created and starts being written to.
- During the third and subsequent rollovers, `kibana-2.log` is deleted, `kibana-1.log` is renamed to `kibana-2.log`, and
`kibana.log` is renamed to `kibana-1.log`. A new `kibana.log` file is created and starts being written to.
The options are:
- `pattern`
The suffix to append to the file path when rolling. Must include `%i`, as this is the value
that will be converted to the file index.
For example, with `fileName: /var/logs/kibana.log` and `pattern: '-%i'`, the rolling files created
will be `/var/logs/kibana-1.log`, `/var/logs/kibana-2.log`, and so on. The default value is `-%i`.
- `max`
The maximum number of files to keep. Once this number is reached, the oldest files will be deleted. The default value is `7`.
==== Rewrite Appender
WARNING: This appender is currently considered experimental and is not intended
for public consumption. The API is subject to change at any time.
Similar to log4j's `RewriteAppender`, this appender serves as a sort of middleware,
modifying the provided log events before passing them along to another
appender.
[source,yaml]
----
logging:
  appenders:
    my-rewrite-appender:
      type: rewrite
      appenders: [console, file] # name of "destination" appender(s)
      policy:
        # ...
----
The most common use case for the `RewriteAppender` is when you want to
filter or censor sensitive data that may be contained in a log entry.
In fact, with a default configuration, {kib} will automatically redact
any `authorization`, `cookie`, or `set-cookie` headers when logging http
requests and responses.
To configure additional rewrite rules, you'll need to specify a <<rewrite-policies,`RewritePolicy`>>.
[[rewrite-policies]]
===== Rewrite Policies
Rewrite policies exist to indicate which parts of a log record can be
modified within the rewrite appender.
**Meta**
The `meta` rewrite policy can read and modify any data contained in the
`LogMeta` before passing it along to a destination appender.
Meta policies must specify one of two modes, which indicates which action
to perform on the configured properties:
- `update` updates an existing property at the provided `path`.
- `remove` removes an existing property at the provided `path`.
The `properties` are listed as a `path` and `value` pair, where `path` is
the dot-delimited path to the target property in the `LogMeta` object, and
`value` is the value to add or update in that target property. When using
the `remove` mode, a `value` is not necessary.
Here's an example of how you would replace any `cookie` header values with `[REDACTED]`:
[source,yaml]
----
logging:
  appenders:
    my-rewrite-appender:
      type: rewrite
      appenders: [console]
      policy:
        type: meta # indicates that we want to rewrite the LogMeta
        mode: update # will update an existing property only
        properties:
          - path: "http.request.headers.cookie" # path to property
            value: "[REDACTED]" # value to replace at path
----
Rewrite appenders can even be passed to other rewrite appenders to apply
multiple filter policies/modes, as long as it doesn't create a circular
reference. Each rewrite appender is applied sequentially (one after the other).
[source,yaml]
----
logging:
  appenders:
    remove-request-headers:
      type: rewrite
      appenders: [censor-response-headers] # redirect to the next rewrite appender
      policy:
        type: meta
        mode: remove
        properties:
          - path: "http.request.headers" # remove all request headers
    censor-response-headers:
      type: rewrite
      appenders: [console] # output to console
      policy:
        type: meta
        mode: update
        properties:
          - path: "http.response.headers.set-cookie"
            value: "[REDACTED]"
----
===== Complete Example For Rewrite Appender
[source,yaml]
----
logging:
  appenders:
    custom_console:
      type: console
      layout:
        type: pattern
        highlight: true
        pattern: "[%date][%level][%logger] %message %meta"
    file:
      type: file
      fileName: ./kibana.log
      layout:
        type: json
    censor:
      type: rewrite
      appenders: [custom_console, file]
      policy:
        type: meta
        mode: update
        properties:
          - path: "http.request.headers.cookie"
            value: "[REDACTED]"
  loggers:
    - name: http.server.response
      appenders: [censor] # pass these logs to our rewrite appender
      level: debug
----
[[logger-hierarchy]]
=== Logger hierarchy
Every logger has a unique name that follows a hierarchical naming rule. A logger is considered to be an
ancestor of another logger if its name followed by a `.` is a prefix of the descendant logger's name. For example, a logger
named `a.b` is an ancestor of logger `a.b.c`. All top-level loggers are descendants of a special `root` logger at the top of the logger hierarchy. The `root` logger always exists and is
fully configured.
You can configure _<<log-level, log level>>_ and _appenders_ for a specific logger. If a logger only has a _log level_ configured, then the _appenders_ configuration applied to the logger is inherited from the ancestor logger.
NOTE: In the current implementation we __don't support__ so-called _appender additivity_, when log messages are forwarded to _every_ distinct appender within the
ancestor chain including `root`. Log messages are only forwarded to appenders that are configured for a particular logger. If a logger doesn't have any appenders configured, the configuration of that particular logger is inherited from its closest ancestor.
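A short sketch of that inheritance: `plugins.myPlugin` below configures only a _log level_, so it inherits the `console` appender from its closest configured ancestor, `plugins` (the names are illustrative):
[source,yaml]
----
logging:
  appenders:
    console:
      type: console
      layout:
        type: pattern
  loggers:
    - name: plugins
      appenders: [console]
      level: warn
    - name: plugins.myPlugin
      # no appenders here: the `console` appender is inherited from `plugins`
      level: debug
----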
[[dedicated-loggers]]
==== Dedicated loggers
**Root**
The `root` logger has a dedicated configuration node since this logger is special and should always exist. By default, `root` is configured with the `info` level and the `default` appender that is also always available. This is the configuration that all custom loggers will use unless they're re-configured explicitly.
For example, to see _all_ log messages that fall back on the `root` logger configuration, just add one line to the configuration:
[source,yaml]
----
logging.root.level: all
----
Or disable logging entirely with `off`:
[source,yaml]
----
logging.root.level: off
----
**Metrics Logs**
The `metrics.ops` logger is configured with `debug` level and will automatically output sample system and process information at a regular interval.
The metrics that are logged are a subset of the data collected and are formatted in the log message as follows:
[options="header"]
|===
| Ops formatted log property | Location in metrics service | Log units
| memory | process.memory.heap.used_in_bytes | http://numeraljs.com/#format[depends on the value], typically MB or GB
| uptime | process.uptime_in_millis | HH:mm:ss
| load | os.load | [ "load for the last 1 min" "load for the last 5 min" "load for the last 15 min"]
| delay | process.event_loop_delay | ms
|===
The log interval is the same as the interval at which system and process information is refreshed and is configurable under `ops.interval`:
[source,yaml]
----
ops.interval: 5000
----
The minimum interval is 100ms; the default is 5000ms.
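If these records are too noisy, the `metrics.ops` logger can be silenced like any other logger (a sketch; see <<log-level>>):
[source,yaml]
----
logging:
  loggers:
    - name: metrics.ops
      level: off
----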
[[request-response-logger]]
**Request and Response Logs**
The `http.server.response` logger is configured with `debug` level and will automatically output
data about http requests and responses occurring on the {kib} server.
The message contains some high-level information, and the corresponding log meta contains the following:
[options="header"]
|===
| Meta property | Description | Format
| client.ip | IP address of the requesting client | ip
| http.request.method | http verb for the request (uppercase) | string
| http.request.mime_type | (optional) mime as specified in the headers | string
| http.request.referrer | (optional) referrer | string
| http.request.headers | request headers | object
| http.response.body.bytes | (optional) Calculated response payload size in bytes | number
| http.response.status_code | status code returned | number
| http.response.headers | response headers | object
| http.response.responseTime | (optional) Calculated response time in ms | number
| url.path | request path | string
| url.query | (optional) request query string | string
| user_agent.original | raw user-agent string provided in request headers | string
|===
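To surface these records, raise the logger's level to `debug` (a sketch using the always-available `default` appender):
[source,yaml]
----
logging:
  loggers:
    - name: http.server.response
      appenders: [default]
      level: debug
----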
=== Usage
Usage is very straightforward: get a logger for a specific context and use it to log messages at
different log levels.
[source,typescript]
----
const logger = kibana.logger.get('server');
logger.trace('Message with `trace` log level.');
logger.debug('Message with `debug` log level.');
logger.info('Message with `info` log level.');
logger.warn('Message with `warn` log level.');
logger.error('Message with `error` log level.');
logger.fatal('Message with `fatal` log level.');
const loggerWithNestedContext = kibana.logger.get('server', 'http');
loggerWithNestedContext.trace('Message with `trace` log level.');
loggerWithNestedContext.debug('Message with `debug` log level.');
----
Assuming a logger named `server` with a `console` appender and `trace` level was used, the console output will look like this:
[source,bash]
----
[2017-07-25T11:54:41.639-07:00][TRACE][server] Message with `trace` log level.
[2017-07-25T11:54:41.639-07:00][DEBUG][server] Message with `debug` log level.
[2017-07-25T11:54:41.639-07:00][INFO ][server] Message with `info` log level.
[2017-07-25T11:54:41.639-07:00][WARN ][server] Message with `warn` log level.
[2017-07-25T11:54:41.639-07:00][ERROR][server] Message with `error` log level.
[2017-07-25T11:54:41.639-07:00][FATAL][server] Message with `fatal` log level.
[2017-07-25T11:54:41.639-07:00][TRACE][server.http] Message with `trace` log level.
[2017-07-25T11:54:41.639-07:00][DEBUG][server.http] Message with `debug` log level.
----
The log will be less verbose with `warn` level for the `server` logger:
[source,bash]
----
[2017-07-25T11:54:41.639-07:00][WARN ][server] Message with `warn` log level.
[2017-07-25T11:54:41.639-07:00][ERROR][server] Message with `error` log level.
[2017-07-25T11:54:41.639-07:00][FATAL][server] Message with `fatal` log level.
----


@@ -0,0 +1,61 @@
[[patterns]]
== Patterns
[[scoped-services]]
=== Scoped services
Whenever Kibana needs to access data saved in Elasticsearch, it
should check whether the end-user has access to the data.
The Kibana Platform introduced a handler interface on the server side to perform that association
internally. Core services that require impersonation of an incoming
request are exposed via the `context` argument of
{kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.requesthandler.md[the
request handler interface], as shown below:
[source,js]
----
async function handler(context, req, res) {
const data = await context.core.elasticsearch.client.asCurrentUser.ping();
}
----
The
{kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.requesthandlercontext.md[request
handler context] exposes the following scoped *core* services:
* {kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.savedobjectsclient.md[`context.savedObjects.client`]
* {kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.iscopedclusterclient.md[`context.elasticsearch.client`]
* {kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.iuisettingsclient.md[`context.uiSettings.client`]
==== Declare a custom scoped service
Plugins can extend the handler context with a custom API that will be
available to the plugin itself and all dependent plugins. For example,
the plugin creates a custom Elasticsearch client and wants to use it via
the request handler context:
[source,typescript]
----
import type { CoreSetup, RequestHandlerContext, IScopedClusterClient } from 'kibana/server';
interface MyRequestHandlerContext extends RequestHandlerContext {
  myPlugin: {
    client: IScopedClusterClient;
  };
}
class MyPlugin {
  setup(core: CoreSetup) {
    const client = core.elasticsearch.createClient('myClient');
    core.http.registerRouteHandlerContext<MyRequestHandlerContext, 'myPlugin'>('myPlugin', (context, req, res) => {
      return { client: client.asScoped(req) };
    });
    const router = core.http.createRouter<MyRequestHandlerContext>();
    router.get(
      { path: '/api/my-plugin/', validate: … },
      async (context, req, res) => {
        // context type is inferred as MyRequestHandlerContext
        const data = await context.myPlugin.client.asCurrentUser.ping();
      }
    );
  }
}
----


@@ -1,6 +1,8 @@
[[saved-objects-service]]
== Saved Objects service
NOTE: The Saved Objects service is available both server and client side.
`Saved Objects service` allows {kib} plugins to use {es} like a primary
database. Think of it as an Object Document Mapper for {es}. Once a
plugin has registered one or more Saved Object types, the Saved Objects client
@@ -28,7 +30,9 @@ spaces.
This document contains developer guidelines and best-practices for plugins
wanting to use Saved Objects.
=== Server side usage
==== Registering a Saved Object type
Saved object type definitions should be defined in their own `my_plugin/server/saved_objects` directory.
The folder should contain a file per type, named after the snake_case name of the type, and an `index.ts` file exporting all the types.
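A minimal sketch of such a file (the type name and mappings are illustrative):
[source,typescript]
----
// my_plugin/server/saved_objects/my_type.ts
import type { SavedObjectsType } from 'kibana/server';

export const myType: SavedObjectsType = {
  name: 'my_type',
  hidden: false,
  namespaceType: 'single',
  mappings: {
    properties: {
      title: { type: 'text' },
    },
  },
};
----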
@@ -83,7 +87,7 @@ export class MyPlugin implements Plugin {
}
----
==== Mappings
Each Saved Object type can define its own {es} field mappings.
Because multiple Saved Object types can share the same index, mappings defined
by a type will be nested under a top-level field that matches the type name.
@@ -149,59 +153,6 @@ should carefully consider the fields they add to the mappings. Similarly,
Saved Object types should never use `dynamic: true` as this can cause an
arbitrary amount of fields to be added to the `.kibana` index.
=== References
When a Saved Object declares `references` to other Saved Objects, the
Saved Objects Export API will automatically export the target object with all
of its references. This makes it easy for users to export the entire
reference graph of an object.
If a Saved Object can't be used on its own, that is, it needs other objects
to exist for a feature to function correctly, that Saved Object should declare
references to all the objects it requires. For example, a `dashboard`
object might have panels for several `visualization` objects. When these
`visualization` objects don't exist, the dashboard cannot be rendered
correctly. The `dashboard` object should declare references to all its
visualizations.
However, `visualization` objects can continue to be rendered or embedded into
other dashboards even if the `dashboard` it was originally embedded into
doesn't exist. As a result, `visualization` objects should not declare
references to `dashboard` objects.
For each referenced object, an `id`, `type` and `name` are added to the
`references` array:
[source, typescript]
----
router.get(
{ path: '/some-path', validate: false },
async (context, req, res) => {
const object = await context.core.savedObjects.client.create(
'dashboard',
{
title: 'my dashboard',
panels: [
{ visualization: 'vis1' }, // <1>
],
indexPattern: 'indexPattern1'
},
{ references: [
{ id: '...', type: 'visualization', name: 'vis1' },
{ id: '...', type: 'index_pattern', name: 'indexPattern1' },
]
}
)
...
}
);
----
<1> Note how `dashboard.panels[0].visualization` stores the `name` property of
the reference (not the `id` directly) to be able to uniquely identify this
reference. This guarantees that the id the reference points to always remains
up to date. If a visualization `id` was directly stored in
`dashboard.panels[0].visualization` there is a risk that this `id` gets
updated without updating the reference in the references array.
==== Writing Migrations
Saved Objects support schema changes between Kibana versions, which we call
@@ -308,4 +259,60 @@ point in time.
It is critical that you have extensive tests to ensure that migrations behave
as expected with all possible input documents. Given how simple it is to test
all the branch conditions in a migration function and the high impact of a bug
in this code, there's really no reason not to aim for 100% test code coverage.
=== Client side usage
==== References
When a Saved Object declares `references` to other Saved Objects, the
Saved Objects Export API will automatically export the target object with all
of its references. This makes it easy for users to export the entire
reference graph of an object.
If a Saved Object can't be used on its own, that is, it needs other objects
to exist for a feature to function correctly, that Saved Object should declare
references to all the objects it requires. For example, a `dashboard`
object might have panels for several `visualization` objects. When these
`visualization` objects don't exist, the dashboard cannot be rendered
correctly. The `dashboard` object should declare references to all its
visualizations.
However, `visualization` objects can continue to be rendered or embedded into
other dashboards even if the `dashboard` it was originally embedded into
doesn't exist. As a result, `visualization` objects should not declare
references to `dashboard` objects.
For each referenced object, an `id`, `type` and `name` are added to the
`references` array:
[source, typescript]
----
router.get(
  { path: '/some-path', validate: false },
  async (context, req, res) => {
    const object = await context.core.savedObjects.client.create(
      'dashboard',
      {
        title: 'my dashboard',
        panels: [
          { visualization: 'vis1' }, // <1>
        ],
        indexPattern: 'indexPattern1'
      },
      {
        references: [
          { id: '...', type: 'visualization', name: 'vis1' },
          { id: '...', type: 'index_pattern', name: 'indexPattern1' },
        ]
      }
    )
    ...
  }
);
----
<1> Note how `dashboard.panels[0].visualization` stores the `name` property of
the reference (not the `id` directly) to be able to uniquely identify this
reference. This guarantees that the id the reference points to always remains
up to date. If a visualization `id` was directly stored in
`dashboard.panels[0].visualization` there is a risk that this `id` gets
updated without updating the reference in the references array.


@@ -0,0 +1,40 @@
[[ui-settings-service]]
== UI settings service
NOTE: The UI settings service is available both server and client side.
=== Server side usage
The programmatic interface to <<advanced-options, UI settings>>.
It makes it possible for Kibana plugins to extend the Kibana UI Settings Management with custom settings.
See:
- {kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.uisettingsservicesetup.register.md[UI settings service Setup API docs]
[source,typescript]
----
import { schema } from '@kbn/config-schema';
import type { CoreSetup, Plugin } from 'kibana/server';
export class MyPlugin implements Plugin {
  public setup(core: CoreSetup) {
    core.uiSettings.register({
      custom: {
        value: '42',
        schema: schema.string(),
      },
    });
    const router = core.http.createRouter();
    router.get({
      path: 'my_plugin/{id}',
      validate: …,
    },
    async (context, request, response) => {
      const customSetting = await context.uiSettings.client.get('custom');
    });
  }
}
----
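=== Client side usage
On the client side, the registered settings can be read through `core.uiSettings`. A minimal sketch:
[source,typescript]
----
import { CoreStart, Plugin } from 'kibana/public';

export class MyPlugin implements Plugin {
  public start(core: CoreStart) {
    // Reads the current value of the `custom` setting registered above.
    const customSetting = core.uiSettings.get('custom');
  }
}
----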


@@ -29,6 +29,24 @@ include::kibana-platform-plugin-api.asciidoc[leveloffset=+1]
include::core/index.asciidoc[leveloffset=+1]
include::core/application_service.asciidoc[leveloffset=+1]
include::core/configuration-service.asciidoc[leveloffset=+1]
include::core/elasticsearch-service.asciidoc[leveloffset=+1]
include::core/http-service.asciidoc[leveloffset=+1]
include::core/logging-service.asciidoc[leveloffset=+1]
include::core/logging-configuration-migration.asciidoc[leveloffset=+1]
include::core/saved-objects-service.asciidoc[leveloffset=+1]
include::core/uisettings-service.asciidoc[leveloffset=+1]
include::core/patterns-scoped-services.asciidoc[leveloffset=+1]
include::security/index.asciidoc[leveloffset=+1]
include::add-data-tutorials.asciidoc[leveloffset=+1]


@@ -50,46 +50,56 @@ for example, `logstash-*`.
[float]
==== Default logging timezone is now the system's timezone
*Details:* In prior releases the timezone used in logs defaulted to UTC. We now use the host machine's timezone by default.
*Impact:* To restore the previous behavior, in `kibana.yml` use the pattern layout with a date modifier:
[source,yaml]
-------------------
logging:
  appenders:
    custom:
      type: console
      layout:
        type: pattern
        pattern: "%date{ISO8601_TZ}{UTC}"
-------------------
See https://github.com/elastic/kibana/pull/90368 for more details.
[float]
==== Responses are never logged by default
*Details:* Previously responses would be logged if either `logging.json` was true, `logging.dest` was specified, or a `TTY` was detected. With the new logging configuration, these are provided by a dedicated logger.
*Impact:* To restore the previous behavior, in `kibana.yml` enable `debug` for the `http.server.response` logger:
[source,yaml]
-------------------
logging:
  appenders:
    custom:
      type: console
      layout:
        type: pattern
  loggers:
    - name: http.server.response
      appenders: [custom]
      level: debug
-------------------
See https://github.com/elastic/kibana/pull/87939 for more details.
[float]
==== Logging destination is specified by the appender
*Details:* Previously the log destination would be `stdout` and could be changed to `file` using `logging.dest`. With the new logging configuration, you specify the destination using appenders.
*Impact:* To restore the previous behavior and log records to *stdout*, in `kibana.yml` use an appender with `type: console`.
[source,yaml]
-------------------
logging:
  appenders:
    custom:
      type: console
      layout:
        type: pattern
  root:
    appenders: [default, custom]
-------------------
To send logs to a file with a given file path, define a custom appender with `type: file`, as in the sketch below:
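A minimal sketch, assuming `/var/log/kibana.log` is a writable path (the appender name and path are illustrative):

[source,yaml]
-------------------
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana.log
      layout:
        type: pattern
  root:
    appenders: [default, file]
-------------------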
@ -107,16 +117,15 @@ logging:
-------------------
[float]
==== Set log verbosity with root
*Details:* Previously logging output would be specified by `logging.silent` (none), `logging.quiet` (error messages only), and `logging.verbose` (all). With the new logging configuration, you set the minimum required log level instead.
*Impact:* To restore the previous behavior, in `kibana.yml` specify `logging.root.level` as one of `off`, `error`, or `all`:
[source,yaml]
-------------------
# suppress all logs
logging:
root:
level: off
-------------------
@ -125,7 +134,6 @@ logging:
# only log error messages
logging:
root:
level: error
-------------------
@ -134,54 +142,14 @@ logging:
# log all events
logging:
root:
level: all
-------------------
[float]
==== Declare log message format
*Details:* Previously all events would be logged in `json` format when `logging.json` was true. With the new logging configuration, you specify the output format with layouts. You can choose between `json` and `pattern` layouts, depending on your needs.
*Impact:* To restore the previous behavior, in `kibana.yml` configure the logging format for each custom appender with the `appender.layout` property. There is no default for custom appenders and each one must be configured explicitly.
[source,yaml]
-------------------
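# A sketch of per-appender layouts (appender names are illustrative);
# each custom appender declares its own layout explicitly.
logging:
  appenders:
    custom_console:
      type: console
      layout:
        type: pattern
    custom_json:
      type: console
      layout:
        type: json
  root:
    appenders: [default, custom_console, custom_json]
-------------------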

View file

@ -0,0 +1,173 @@
[[logging-settings]]
=== Logging settings in {kib}
++++
<titleabbrev>Logging settings</titleabbrev>
++++
Compatibility with the legacy logging system is assured until the end of the `v7` version.
All log messages handled by the `root` context (default) are forwarded to the legacy logging service.
The logging configuration is validated against the predefined schema and, if there are any issues with it, {kib} fails to start with a detailed error message.
NOTE: When you switch to the new logging configuration, you will start seeing duplicate log entries in both formats.
These will be removed when the `default` appender is no longer required.
Here are some configuration examples for the most common logging use cases:
[[log-to-file-example]]
==== Log to a file
Log to a file instead of *stdout* (the default), keeping the default log format.
[source,yaml]
----
logging:
appenders:
file:
type: file
fileName: /var/log/kibana.log
layout:
type: pattern
root:
appenders: [default, file]
----
[[log-in-json-ECS-example]]
==== Log in json format
Use the `json` layout instead of the `pattern` layout (the default).
With the `json` layout, log messages are formatted as JSON strings in https://www.elastic.co/guide/en/ecs/current/ecs-reference.html[ECS format], which includes a timestamp, log level, logger, message text, and any other metadata associated with the log message.
[source,yaml]
----
logging:
appenders:
json-layout:
type: console
layout:
type: json
root:
appenders: [default, json-layout]
----
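For illustration, a record emitted with the `json` layout might resemble the following single-line document (all field values here are hypothetical):

[source,json]
----
{"@timestamp":"2021-03-30T14:55:00.000-07:00","log":{"level":"INFO","logger":"plugins.myPlugin"},"message":"my message","ecs":{"version":"1.7.0"}}
----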
[[log-with-meta-to-stdout]]
==== Log with meta to stdout
Include `%meta` in your pattern layout:
[source,yaml]
----
logging:
appenders:
console-meta:
type: console
layout:
type: pattern
pattern: "[%date] [%level] [%logger] [%meta] %message"
root:
appenders: [default, console-meta]
----
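For illustration, a record produced by this pattern might look like the following (all values are hypothetical):

[source,text]
----
[2021-03-30T14:55:00.000-07:00] [INFO] [plugins.myPlugin] [{"path":"/app/home"}] handled request
----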
[[log-elasticsearch-queries]]
==== Log {es} queries
[source,yaml]
--
logging:
appenders:
console_appender:
type: console
layout:
type: pattern
highlight: true
root:
appenders: [default, console_appender]
level: warn
loggers:
- name: elasticsearch.query
level: debug
--
[[change-overall-log-level]]
==== Change overall log level
[source,yaml]
----
logging:
root:
level: debug
----
[[customize-specific-log-records]]
==== Customize specific log records
Here is a detailed configuration example that can be used to configure _loggers_, _appenders_ and _layouts_:
[source,yaml]
----
logging:
appenders:
console:
type: console
layout:
type: pattern
highlight: true
file:
type: file
fileName: /var/log/kibana.log
custom:
type: console
layout:
type: pattern
pattern: "[%date][%level] %message"
json-file-appender:
type: file
fileName: /var/log/kibana-json.log
layout:
type: json
root:
appenders: [default, console, file]
level: error
loggers:
- name: plugins
appenders: [custom]
level: warn
- name: plugins.myPlugin
level: info
- name: server
level: fatal
- name: optimize
appenders: [console]
- name: telemetry
appenders: [json-file-appender]
level: all
- name: metrics.ops
appenders: [console]
level: debug
----
Here is what we get with the config above:
[options="header"]
|===
| Context name | Appenders | Level
| root | console, file | error
| plugins | custom | warn
| plugins.myPlugin | custom | info
| server | console, file | fatal
| optimize | console | error
| telemetry | json-file-appender | all
| metrics.ops | console | debug
|===
NOTE: If you modify `root.appenders`, make sure to include `default`.
// For more details about logging configuration, refer to the logging system documentation (update to include a link).

View file

@ -64,11 +64,34 @@ To enable SSL/TLS for outbound connections to {es}, use the `https` protocol
in this setting.
| `elasticsearch.logQueries:`
| *deprecated* This setting is no longer used and will get removed in Kibana 8.0. Instead, configure the `elasticsearch.query` logger.
This is useful for seeing the query DSL generated by applications that
currently do not have an inspector, for example Timelion and Monitoring.
*Default: `false`*
The following example shows a valid `elasticsearch.query` logger configuration:
|===
[source,text]
--
logging:
appenders:
console_appender:
type: console
layout:
type: pattern
highlight: true
root:
appenders: [default, console_appender]
level: warn
loggers:
- name: elasticsearch.query
level: debug
--
[cols="2*<"]
|===
|[[elasticsearch-pingTimeout]] `elasticsearch.pingTimeout:`
| Time in milliseconds to wait for {es} to respond to pings.
*Default: the value of the <<elasticsearch-requestTimeout, `elasticsearch.requestTimeout`>> setting*
@ -249,77 +272,44 @@ To reload the logging settings, send a SIGHUP signal to {kib}.
[cols="2*<"]
|===
|[[logging-root]] `logging.root:`
| The `root` logger has a dedicated configuration node since this context name is special and is pre-configured for logging by default.
// TODO: add link to the advanced logging documentation.
|[[logging-root-appenders]] `logging.root.appenders:`
| A list of logging appenders to forward the root-level logger instance to. By default, `root` is configured with the `default` appender, which must be included in the list. This is the configuration that all custom loggers use unless they're re-configured explicitly. Additional appenders, if configured, can be included in the list.
|[[logging-root-level]] `logging.root.level:` {ess-icon}
| Level at which a log record should be logged. Supported levels are: _all_, _fatal_, _error_, _warn_, _info_, _debug_, _trace_, _off_. Levels are ordered from _all_ (highest) to _off_ (lowest), and a log record is logged if its level is higher than or equal to the level of its logger; otherwise, the log record is ignored. Use this value to <<change-overall-log-level,change the overall log level>>. Set to `all` to log all events, including system usage information and all requests. Set to `off` to silence all logs. *Default: `info`*.
|[[logging-loggers]] `logging.loggers:`
| Allows you to <<customize-specific-log-records,customize a specific logger instance>>.
| `logging.loggers.name:`
| Specific logger instance.
| `logging.loggers.level:`
| Level at which a log record should be shown. Supported levels are: _all_, _fatal_, _error_, _warn_, _info_, _debug_, _trace_, _off_.
| `logging.loggers.appenders:`
| Specific appender format to apply for a particular logger context.
| `logging.appenders:`
| Define how and where log messages are displayed (for example, *stdout* or console) and stored (for example, a file on disk).
// TODO: add link to the advanced logging documentation.
| `logging.appenders.console:`
| Appender to use for logging records to *stdout*. By default, uses the `[%date][%level][%logger] %message` **pattern** layout. To use a **json** layout, set the <<log-in-json-ECS-example,layout type to `json`>>.
| `logging.appenders.file:`
| Allows you to specify a fileName where log records are written on disk. To write <<log-to-file-example,all log records to file>>, add the file appender to `root.appenders`.
| `logging.appenders.rolling-file:`
| Similar to Log4j's `RollingFileAppender`, this appender will log into a file and rotate it following a rolling strategy when the configured policy triggers. There are currently two supported policies: `size-limit` and `time-interval`.
The size limit policy will perform a rollover when the log file reaches a maximum `size`. *Default 100mb*
The time interval policy will rotate the log file every given interval of time. *Default 24h*
| [[regionmap-ES-map]] `map.includeElasticMapsService:` {ess-icon}
| Set to `false` to disable connections to Elastic Maps Service.
@ -690,6 +680,7 @@ include::{kib-repo-dir}/settings/dev-settings.asciidoc[]
include::{kib-repo-dir}/settings/graph-settings.asciidoc[]
include::{kib-repo-dir}/settings/fleet-settings.asciidoc[]
include::{kib-repo-dir}/settings/i18n-settings.asciidoc[]
include::{kib-repo-dir}/settings/logging-settings.asciidoc[]
include::{kib-repo-dir}/settings/logs-ui-settings.asciidoc[]
include::{kib-repo-dir}/settings/infrastructure-ui-settings.asciidoc[]
include::{kib-repo-dir}/settings/ml-settings.asciidoc[]

View file

@ -126,10 +126,10 @@ all, the full logs from Reporting will be the first place to look. In `kibana.ym
[source,yaml]
--------------------------------------------------------------------------------
logging.root.level: all
--------------------------------------------------------------------------------
For more information about logging, see <<logging-root-level,Kibana configuration settings>>.
[float]
[[reporting-troubleshooting-puppeteer-debug-logs]]