mirror of
https://github.com/elastic/kibana.git
synced 2025-04-23 17:28:26 -04:00
* You need to start somewhere
* revert comment
* rename default strategy to numeric
* add some tests
* fix some tests
* update documentation
* update generated doc
* change applyBaseConfig to be async
* fix integ tests
* add integration tests
* some renames
* more tests
* more tests
* nits on README
* some self review
* doc nits
* self review
* use `escapeRegExp` from lodash
* address some review comments
* a few more nits
* extract `isDevCliParent` check outside of LoggingSystem.upgrade
* log errors from context
* add defaults for policy/strategy
This commit is contained in:
parent
207fa22d25
commit
48a206334b
43 changed files with 2889 additions and 57 deletions
@ -8,5 +8,5 @@
<b>Signature:</b>

```typescript
-export declare type AppenderConfigType = ConsoleAppenderConfig | FileAppenderConfig | LegacyAppenderConfig;
+export declare type AppenderConfigType = ConsoleAppenderConfig | FileAppenderConfig | LegacyAppenderConfig | RollingFileAppenderConfig;
```
@ -35,5 +35,5 @@ export interface Appender {
 * @internal
 */
export interface DisposableAppender extends Appender {
-  dispose: () => void;
+  dispose: () => void | Promise<void>;
}
@ -5,6 +5,10 @@
- [Layouts](#layouts)
  - [Pattern layout](#pattern-layout)
  - [JSON layout](#json-layout)
- [Appenders](#appenders)
  - [Rolling File Appender](#rolling-file-appender)
    - [Triggering Policies](#triggering-policies)
    - [Rolling strategies](#rolling-strategies)
- [Configuration](#configuration)
- [Usage](#usage)
- [Logging config migration](#logging-config-migration)
@ -127,6 +131,138 @@ Outputs the process ID.
With the `json` layout, log messages will be formatted as JSON strings that include the timestamp, log level, context, message
text and any other metadata that may be associated with the log message itself.

## Appenders

### Rolling File Appender

Similar to Log4j's `RollingFileAppender`, this appender logs into a file and rotates it following a rolling
strategy when the configured policy triggers.

#### Triggering Policies

The triggering policy determines when a rollover should occur.

There are currently two supported policies: `size-limit` and `time-interval`.

##### SizeLimitTriggeringPolicy

This policy rotates the file when it reaches a predetermined size.

```yaml
logging:
  appenders:
    rolling-file:
      kind: rolling-file
      path: /var/logs/kibana.log
      policy:
        kind: size-limit
        size: 50mb
      strategy:
        # ...
      layout:
        kind: pattern
```

The options are:

- `size`

  The maximum size the log file should reach before a rollover is performed.

  The default value is `100mb`.
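The size comparison the policy performs can be sketched in a few lines. `toBytes` and `shouldRollover` below are hypothetical helpers for illustration only; Kibana itself parses size strings with `ByteSizeValue` from `@kbn/config-schema`:

```typescript
// Hypothetical helpers illustrating the size-limit check; Kibana uses
// ByteSizeValue from @kbn/config-schema rather than this parser.
function toBytes(size: string): number {
  const units: Record<string, number> = { b: 1, kb: 1024, mb: 1024 ** 2, gb: 1024 ** 3 };
  const match = /^(\d+)(b|kb|mb|gb)$/.exec(size.toLowerCase());
  if (!match) throw new Error(`invalid size: ${size}`);
  return parseInt(match[1], 10) * units[match[2]];
}

// The policy triggers a rollover once the current file size reaches the limit.
function shouldRollover(currentFileSize: number, limit: string): boolean {
  return currentFileSize >= toBytes(limit);
}
```

Note that the comparison is inclusive: a file exactly at the limit triggers a rollover.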
##### TimeIntervalTriggeringPolicy

This policy rotates the file at a fixed interval of time.

```yaml
logging:
  appenders:
    rolling-file:
      kind: rolling-file
      path: /var/logs/kibana.log
      policy:
        kind: time-interval
        interval: 10s
        modulate: true
      strategy:
        # ...
      layout:
        kind: pattern
```

The options are:

- `interval`

  How often a rollover should occur.

  The default value is `24h`.

- `modulate`

  Whether the interval should be adjusted so that the next rollover occurs on an interval boundary.

  For example, when true, if the interval is `4h` and the current hour is 3 am, then the first rollover will occur at 4 am,
  and the next ones at 8 am, noon, 4 pm, and so on.

  The default value is `true`.
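The boundary snapping that `modulate` performs can be sketched with plain epoch-millisecond arithmetic. `nextRolloverTime` below is an illustrative helper, not Kibana's implementation (which works with calendar units via moment); it assumes UTC and intervals that divide a day evenly:

```typescript
// Illustrative sketch of the `modulate` option, using epoch-millisecond
// arithmetic (assumes UTC and intervals that divide a day evenly).
function nextRolloverTime(currentTime: number, intervalMs: number, modulate: boolean): number {
  if (!modulate) {
    // Without modulation, the next rollover is simply one interval away.
    return currentTime + intervalMs;
  }
  // With modulation, snap forward to the next multiple of the interval.
  return (Math.floor(currentTime / intervalMs) + 1) * intervalMs;
}
```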
#### Rolling strategies

The rolling strategy determines how the rollover should occur: both the naming of the rolled files
and their retention policy.

There is currently one supported strategy: `numeric`.

##### NumericRollingStrategy

This strategy suffixes the file with a given pattern when rolling,
and retains a fixed number of rolled files.

```yaml
logging:
  appenders:
    rolling-file:
      kind: rolling-file
      path: /var/logs/kibana.log
      policy:
        # ...
      strategy:
        kind: numeric
        pattern: '-%i'
        max: 2
      layout:
        kind: pattern
```

For example, with this configuration:

- During the first rollover, kibana.log is renamed to kibana-1.log. A new kibana.log file is created and starts
  being written to.
- During the second rollover, kibana-1.log is renamed to kibana-2.log and kibana.log is renamed to kibana-1.log.
  A new kibana.log file is created and starts being written to.
- During the third and subsequent rollovers, kibana-2.log is deleted, kibana-1.log is renamed to kibana-2.log and
  kibana.log is renamed to kibana-1.log. A new kibana.log file is created and starts being written to.
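The rename cascade above can be simulated on plain sets of file names. `rollover` below is a toy helper for illustration only, hard-coded to the `kibana.log` / `-%i` naming of this example; it is not the strategy's actual file-system logic:

```typescript
// Toy simulation of the numeric strategy's rename cascade with the
// configuration above (path: kibana.log, pattern: '-%i').
function rollover(files: Set<string>, max: number): Set<string> {
  const next = new Set<string>();
  // Shift every rolled file up by one index; files past `max` are dropped (deleted).
  for (let i = max - 1; i >= 1; i--) {
    if (files.has(`kibana-${i}.log`)) next.add(`kibana-${i + 1}.log`);
  }
  // The active file becomes index 1, and a fresh active file is created.
  if (files.has('kibana.log')) next.add('kibana-1.log');
  next.add('kibana.log');
  return next;
}
```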
The options are:

- `pattern`

  The suffix to append to the file path when rolling. Must include `%i`, as this is the value
  that will be converted to the file index.

  For example, with `path: /var/logs/kibana.log` and `pattern: '-%i'`, the created rolling files
  will be `/var/logs/kibana-1.log`, `/var/logs/kibana-2.log`, and so on.

  The default value is `-%i`.
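How the suffix combines with `path` can be sketched as follows. `rolledFilePath` is a hypothetical helper, and the insertion of the suffix before the file extension is inferred from the example above rather than taken from the actual implementation:

```typescript
// Hypothetical helper: substitute %i in the pattern and insert the result
// before the file extension (inferred behavior, for illustration only).
function rolledFilePath(path: string, pattern: string, index: number): string {
  const suffix = pattern.replace('%i', String(index));
  const dot = path.lastIndexOf('.');
  return dot === -1 ? path + suffix : path.slice(0, dot) + suffix + path.slice(dot);
}
```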
- `max`

  The maximum number of files to keep. Once this number is reached, the oldest files will be deleted.

  The default value is `7`.

## Configuration

As any configuration in the platform, the logging configuration is validated against the predefined schema and if there are
@ -19,10 +19,12 @@
import { mockCreateLayout } from './appenders.test.mocks';

+import { ByteSizeValue } from '@kbn/config-schema';
import { LegacyAppender } from '../../legacy/logging/appenders/legacy_appender';
import { Appenders } from './appenders';
import { ConsoleAppender } from './console/console_appender';
import { FileAppender } from './file/file_appender';
+import { RollingFileAppender } from './rolling_file/rolling_file_appender';

beforeEach(() => {
  mockCreateLayout.mockReset();
@ -83,4 +85,13 @@ test('`create()` creates correct appender.', () => {
  });

  expect(legacyAppender).toBeInstanceOf(LegacyAppender);
+
+  const rollingFileAppender = Appenders.create({
+    kind: 'rolling-file',
+    path: 'path',
+    layout: { highlight: true, kind: 'pattern', pattern: '' },
+    strategy: { kind: 'numeric', max: 5, pattern: '%i' },
+    policy: { kind: 'size-limit', size: ByteSizeValue.parse('15b') },
+  });
+  expect(rollingFileAppender).toBeInstanceOf(RollingFileAppender);
});
@ -28,6 +28,10 @@ import {
import { Layouts } from '../layouts/layouts';
import { ConsoleAppender, ConsoleAppenderConfig } from './console/console_appender';
import { FileAppender, FileAppenderConfig } from './file/file_appender';
+import {
+  RollingFileAppender,
+  RollingFileAppenderConfig,
+} from './rolling_file/rolling_file_appender';

/**
 * Config schema for validating the shape of the `appenders` key in {@link LoggerContextConfigType} or
@ -39,10 +43,15 @@ export const appendersSchema = schema.oneOf([
  ConsoleAppender.configSchema,
  FileAppender.configSchema,
  LegacyAppender.configSchema,
+  RollingFileAppender.configSchema,
]);

/** @public */
-export type AppenderConfigType = ConsoleAppenderConfig | FileAppenderConfig | LegacyAppenderConfig;
+export type AppenderConfigType =
+  | ConsoleAppenderConfig
+  | FileAppenderConfig
+  | LegacyAppenderConfig
+  | RollingFileAppenderConfig;

/** @internal */
export class Appenders {
@ -57,10 +66,10 @@ export class Appenders {
    switch (config.kind) {
      case 'console':
        return new ConsoleAppender(Layouts.create(config.layout));

      case 'file':
        return new FileAppender(Layouts.create(config.layout), config.path);

+      case 'rolling-file':
+        return new RollingFileAppender(config);
      case 'legacy-appender':
        return new LegacyAppender(config.legacyLoggingConfig);
72
src/core/server/logging/appenders/rolling_file/mocks.ts
Normal file
@ -0,0 +1,72 @@
/*
 * Licensed to Elasticsearch B.V. under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

import { PublicMethodsOf } from '@kbn/utility-types';
import type { Layout } from '@kbn/logging';
import type { RollingFileContext } from './rolling_file_context';
import type { RollingFileManager } from './rolling_file_manager';
import type { TriggeringPolicy } from './policies/policy';
import type { RollingStrategy } from './strategies/strategy';

const createContextMock = (filePath: string) => {
  const mock: jest.Mocked<RollingFileContext> = {
    currentFileSize: 0,
    currentFileTime: 0,
    filePath,
    refreshFileInfo: jest.fn(),
  };
  return mock;
};

const createStrategyMock = () => {
  const mock: jest.Mocked<RollingStrategy> = {
    rollout: jest.fn(),
  };
  return mock;
};

const createPolicyMock = () => {
  const mock: jest.Mocked<TriggeringPolicy> = {
    isTriggeringEvent: jest.fn(),
  };
  return mock;
};

const createLayoutMock = () => {
  const mock: jest.Mocked<Layout> = {
    format: jest.fn(),
  };
  return mock;
};

const createFileManagerMock = () => {
  const mock: jest.Mocked<PublicMethodsOf<RollingFileManager>> = {
    write: jest.fn(),
    closeStream: jest.fn(),
  };
  return mock;
};

export const rollingFileAppenderMocks = {
  createContext: createContextMock,
  createStrategy: createStrategyMock,
  createPolicy: createPolicyMock,
  createLayout: createLayoutMock,
  createFileManager: createFileManagerMock,
};
@ -0,0 +1,70 @@
import { schema } from '@kbn/config-schema';
import moment from 'moment-timezone';
import { assertNever } from '@kbn/std';
import { TriggeringPolicy } from './policy';
import { RollingFileContext } from '../rolling_file_context';
import {
  sizeLimitTriggeringPolicyConfigSchema,
  SizeLimitTriggeringPolicyConfig,
  SizeLimitTriggeringPolicy,
} from './size_limit';
import {
  TimeIntervalTriggeringPolicyConfig,
  TimeIntervalTriggeringPolicy,
  timeIntervalTriggeringPolicyConfigSchema,
} from './time_interval';

export { TriggeringPolicy } from './policy';

/**
 * Any of the existing policies' configurations.
 *
 * See {@link SizeLimitTriggeringPolicyConfig} and {@link TimeIntervalTriggeringPolicyConfig}
 */
export type TriggeringPolicyConfig =
  | SizeLimitTriggeringPolicyConfig
  | TimeIntervalTriggeringPolicyConfig;

const defaultPolicy: TimeIntervalTriggeringPolicyConfig = {
  kind: 'time-interval',
  interval: moment.duration(24, 'hour'),
  modulate: true,
};

export const triggeringPolicyConfigSchema = schema.oneOf(
  [sizeLimitTriggeringPolicyConfigSchema, timeIntervalTriggeringPolicyConfigSchema],
  { defaultValue: defaultPolicy }
);

export const createTriggeringPolicy = (
  config: TriggeringPolicyConfig,
  context: RollingFileContext
): TriggeringPolicy => {
  switch (config.kind) {
    case 'size-limit':
      return new SizeLimitTriggeringPolicy(config, context);
    case 'time-interval':
      return new TimeIntervalTriggeringPolicy(config, context);
    default:
      return assertNever(config);
  }
};
@ -0,0 +1,30 @@
import { LogRecord } from '@kbn/logging';

/**
 * A policy used to determine when a rollover should be performed.
 */
export interface TriggeringPolicy {
  /**
   * Determines whether a rollover should occur before logging the given record.
   */
  isTriggeringEvent(record: LogRecord): boolean;
}
@ -0,0 +1,24 @@
export {
  SizeLimitTriggeringPolicy,
  SizeLimitTriggeringPolicyConfig,
  sizeLimitTriggeringPolicyConfigSchema,
} from './size_limit_policy';
@ -0,0 +1,76 @@
import { ByteSizeValue } from '@kbn/config-schema';
import { LogRecord, LogLevel } from '@kbn/logging';
import { SizeLimitTriggeringPolicy } from './size_limit_policy';
import { RollingFileContext } from '../../rolling_file_context';

describe('SizeLimitTriggeringPolicy', () => {
  let context: RollingFileContext;

  const createPolicy = (size: ByteSizeValue) =>
    new SizeLimitTriggeringPolicy({ kind: 'size-limit', size }, context);

  const createLogRecord = (parts: Partial<LogRecord> = {}): LogRecord => ({
    timestamp: new Date(),
    level: LogLevel.Info,
    context: 'context',
    message: 'just a log',
    pid: 42,
    ...parts,
  });

  const isTriggering = ({ fileSize, maxSize }: { maxSize: string; fileSize: string }) => {
    const policy = createPolicy(ByteSizeValue.parse(maxSize));
    context.currentFileSize = ByteSizeValue.parse(fileSize).getValueInBytes();
    return policy.isTriggeringEvent(createLogRecord());
  };

  beforeEach(() => {
    context = new RollingFileContext('foo.log');
  });

  it('triggers a rollover when the file size exceeds the max size', () => {
    expect(
      isTriggering({
        fileSize: '70b',
        maxSize: '50b',
      })
    ).toBeTruthy();
  });

  it('triggers a rollover when the file size equals the max size', () => {
    expect(
      isTriggering({
        fileSize: '20b',
        maxSize: '20b',
      })
    ).toBeTruthy();
  });

  it('does not trigger a rollover when the file size has not reached the max size', () => {
    expect(
      isTriggering({
        fileSize: '20b',
        maxSize: '50b',
      })
    ).toBeFalsy();
  });
});
@ -0,0 +1,58 @@
import { schema, ByteSizeValue } from '@kbn/config-schema';
import { LogRecord } from '@kbn/logging';
import { RollingFileContext } from '../../rolling_file_context';
import { TriggeringPolicy } from '../policy';

export interface SizeLimitTriggeringPolicyConfig {
  kind: 'size-limit';

  /**
   * The minimum size the file must have to roll over.
   */
  size: ByteSizeValue;
}

export const sizeLimitTriggeringPolicyConfigSchema = schema.object({
  kind: schema.literal('size-limit'),
  size: schema.byteSize({ min: '1b', defaultValue: '100mb' }),
});

/**
 * A triggering policy based on a fixed size limit.
 *
 * Will trigger a rollover when the current log size exceeds the
 * given {@link SizeLimitTriggeringPolicyConfig.size | size}.
 */
export class SizeLimitTriggeringPolicy implements TriggeringPolicy {
  private readonly maxFileSize: number;

  constructor(
    config: SizeLimitTriggeringPolicyConfig,
    private readonly context: RollingFileContext
  ) {
    this.maxFileSize = config.size.getValueInBytes();
  }

  isTriggeringEvent(record: LogRecord): boolean {
    return this.context.currentFileSize >= this.maxFileSize;
  }
}
@ -0,0 +1,86 @@
import moment from 'moment-timezone';
import { schema } from '@kbn/config-schema';
import { getNextRollingTime } from './get_next_rolling_time';

const format = 'YYYY-MM-DD HH:mm:ss:SSS';

const formattedRollingTime = (date: string, duration: string, modulate: boolean) =>
  moment(
    getNextRollingTime(
      moment(date, format).toDate().getTime(),
      schema.duration().validate(duration),
      modulate
    )
  ).format(format);

describe('getNextRollingTime', () => {
  describe('when `modulate` is false', () => {
    it('increments the current time by the interval', () => {
      expect(formattedRollingTime('2010-10-20 04:27:12:000', '15m', false)).toEqual(
        '2010-10-20 04:42:12:000'
      );

      expect(formattedRollingTime('2010-02-12 04:27:12:000', '24h', false)).toEqual(
        '2010-02-13 04:27:12:000'
      );

      expect(formattedRollingTime('2010-02-17 06:34:55', '2d', false)).toEqual(
        '2010-02-19 06:34:55:000'
      );
    });
  });

  describe('when `modulate` is true', () => {
    it('increments the current time to reach the next boundary', () => {
      expect(formattedRollingTime('2010-10-20 04:27:12:512', '30m', true)).toEqual(
        '2010-10-20 04:30:00:000'
      );
      expect(formattedRollingTime('2010-10-20 04:27:12:512', '6h', true)).toEqual(
        '2010-10-20 06:00:00:000'
      );
      expect(formattedRollingTime('2010-10-20 04:27:12:512', '1w', true)).toEqual(
        '2010-10-24 00:00:00:000'
      );
    });

    it('works when on the edge of a boundary', () => {
      expect(formattedRollingTime('2010-10-20 06:00:00:000', '6h', true)).toEqual(
        '2010-10-20 12:00:00:000'
      );
      expect(formattedRollingTime('2010-10-14 00:00:00:000', '1d', true)).toEqual(
        '2010-10-15 00:00:00:000'
      );
      expect(formattedRollingTime('2010-01-03 00:00:00:000', '2w', true)).toEqual(
        '2010-01-17 00:00:00:000'
      );
    });

    it('increments a higher unit when necessary', () => {
      expect(formattedRollingTime('2010-10-20 21:00:00:000', '9h', true)).toEqual(
        '2010-10-21 03:00:00:000'
      );
      expect(formattedRollingTime('2010-12-31 21:00:00:000', '4d', true)).toEqual(
        '2011-01-03 00:00:00:000'
      );
    });
  });
});
@ -0,0 +1,42 @@
import moment, { Duration } from 'moment-timezone';
import { getHighestTimeUnit } from './utils';

/**
 * Returns the next rolling time, given the current time and the rolling interval.
 */
export const getNextRollingTime = (
  currentTime: number,
  interval: Duration,
  modulate: boolean
): number => {
  if (modulate) {
    const incrementedUnit = getHighestTimeUnit(interval);
    const currentMoment = moment(currentTime);
    const increment =
      interval.get(incrementedUnit) -
      (currentMoment.get(incrementedUnit) % interval.get(incrementedUnit));
    const incrementInMs = moment.duration(increment, incrementedUnit).asMilliseconds();
    return currentMoment.startOf(incrementedUnit).toDate().getTime() + incrementInMs;
  } else {
    return currentTime + interval.asMilliseconds();
  }
};
@ -0,0 +1,24 @@
export {
  TimeIntervalTriggeringPolicy,
  TimeIntervalTriggeringPolicyConfig,
  timeIntervalTriggeringPolicyConfigSchema,
} from './time_interval_policy';
@ -0,0 +1,21 @@
export const getNextRollingTimeMock = jest.fn();
jest.doMock('./get_next_rolling_time', () => ({ getNextRollingTime: getNextRollingTimeMock }));
@ -0,0 +1,147 @@
import { getNextRollingTimeMock } from './time_interval_policy.test.mocks';
import moment from 'moment-timezone';
import { LogLevel, LogRecord } from '@kbn/logging';
import { schema } from '@kbn/config-schema';
import {
  TimeIntervalTriggeringPolicy,
  TimeIntervalTriggeringPolicyConfig,
} from './time_interval_policy';
import { RollingFileContext } from '../../rolling_file_context';

const format = 'YYYY-MM-DD HH:mm:ss';

describe('TimeIntervalTriggeringPolicy', () => {
  afterEach(() => {
    getNextRollingTimeMock.mockReset();
    jest.restoreAllMocks();
  });

  const createLogRecord = (timestamp: Date): LogRecord => ({
    timestamp,
    level: LogLevel.Info,
    context: 'context',
    message: 'just a log',
    pid: 42,
  });

  const createContext = (currentFileTime: number = Date.now()): RollingFileContext => {
    const context = new RollingFileContext('foo.log');
    context.currentFileTime = currentFileTime;
    return context;
  };

  const createConfig = (
    interval: string = '15m',
    modulate: boolean = false
  ): TimeIntervalTriggeringPolicyConfig => ({
    kind: 'time-interval',
    interval: schema.duration().validate(interval),
    modulate,
  });

  it('calls `getNextRollingTime` during construction with the correct parameters', () => {
    const date = moment('2010-10-20 04:27:12', format).toDate();
    const context = createContext(date.getTime());
    const config = createConfig('15m', true);

    new TimeIntervalTriggeringPolicy(config, context);

    expect(getNextRollingTimeMock).toHaveBeenCalledTimes(1);
    expect(getNextRollingTimeMock).toHaveBeenCalledWith(
      context.currentFileTime,
      config.interval,
      config.modulate
    );
  });

  it('calls `getNextRollingTime` with the current time if `context.currentFileTime` is not set', () => {
    const currentTime = moment('2018-06-15 04:27:12', format).toDate().getTime();
    jest.spyOn(Date, 'now').mockReturnValue(currentTime);
    const context = createContext(0);
    const config = createConfig('15m', true);

    new TimeIntervalTriggeringPolicy(config, context);

    expect(getNextRollingTimeMock).toHaveBeenCalledTimes(1);
    expect(getNextRollingTimeMock).toHaveBeenCalledWith(
      currentTime,
      config.interval,
      config.modulate
    );
  });

  describe('#isTriggeringEvent', () => {
    it('returns true if the event time is after the nextRolloverTime', () => {
      const eventDate = moment('2010-10-20 04:43:12', format).toDate();
      const nextRolloverDate = moment('2010-10-20 04:00:00', format).toDate();

      getNextRollingTimeMock.mockReturnValue(nextRolloverDate.getTime());

      const policy = new TimeIntervalTriggeringPolicy(createConfig(), createContext());

      expect(policy.isTriggeringEvent(createLogRecord(eventDate))).toBeTruthy();
    });

    it('returns true if the event time is exactly the nextRolloverTime', () => {
      const eventDate = moment('2010-10-20 04:00:00', format).toDate();
      const nextRolloverDate = moment('2010-10-20 04:00:00', format).toDate();

      getNextRollingTimeMock.mockReturnValue(nextRolloverDate.getTime());

      const policy = new TimeIntervalTriggeringPolicy(createConfig(), createContext());

      expect(policy.isTriggeringEvent(createLogRecord(eventDate))).toBeTruthy();
    });

    it('returns false if the event time is before the nextRolloverTime', () => {
      const eventDate = moment('2010-10-20 03:47:12', format).toDate();
      const nextRolloverDate = moment('2010-10-20 04:00:00', format).toDate();
|
||||
|
||||
getNextRollingTimeMock.mockReturnValue(nextRolloverDate.getTime());
|
||||
|
||||
const policy = new TimeIntervalTriggeringPolicy(createConfig(), createContext());
|
||||
|
||||
expect(policy.isTriggeringEvent(createLogRecord(eventDate))).toBeFalsy();
|
||||
});
|
||||
|
||||
it('refreshes its `nextRolloverTime` when returning true', () => {
|
||||
const eventDate = moment('2010-10-20 04:43:12', format).toDate();
|
||||
const firstRollOverDate = moment('2010-10-20 04:00:00', format).toDate();
|
||||
const nextRollOverDate = moment('2010-10-20 08:00:00', format).toDate();
|
||||
|
||||
getNextRollingTimeMock
|
||||
// constructor call
|
||||
.mockReturnValueOnce(firstRollOverDate.getTime())
|
||||
// call performed during `isTriggeringEvent` to refresh the rolling time
|
||||
.mockReturnValueOnce(nextRollOverDate.getTime());
|
||||
|
||||
const policy = new TimeIntervalTriggeringPolicy(createConfig(), createContext());
|
||||
|
||||
const logRecord = createLogRecord(eventDate);
|
||||
|
||||
// rollingDate is firstRollOverDate
|
||||
expect(policy.isTriggeringEvent(logRecord)).toBeTruthy();
|
||||
// rollingDate should be nextRollOverDate
|
||||
expect(policy.isTriggeringEvent(logRecord)).toBeFalsy();
|
||||
});
|
||||
});
|
||||
});
@@ -0,0 +1,96 @@
/*
 * Licensed to Elasticsearch B.V. under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

import { Duration } from 'moment-timezone';
import { schema } from '@kbn/config-schema';
import { LogRecord } from '@kbn/logging';
import { RollingFileContext } from '../../rolling_file_context';
import { TriggeringPolicy } from '../policy';
import { getNextRollingTime } from './get_next_rolling_time';
import { isValidRolloverInterval } from './utils';

export interface TimeIntervalTriggeringPolicyConfig {
  kind: 'time-interval';

  /**
   * How often a rollover should occur.
   *
   * @remarks
   * Due to how modulate rolling works, the duration's highest time unit must have
   * an integer value (it cannot overflow to a higher unit).
   * For example, `15m` and `4h` are valid values, but `90m` is not (as it is `1.5h`).
   */
  interval: Duration;

  /**
   * Indicates whether the interval should be adjusted to cause the next rollover to occur on the interval boundary.
   *
   * For example, if the interval is `4h` and the current hour is 3 am, then
   * the first rollover will occur at 4 am, and the next ones at 8 am, noon, 4 pm, etc.
   * The default value is `true`.
   */
  modulate: boolean;
}

export const timeIntervalTriggeringPolicyConfigSchema = schema.object({
  kind: schema.literal('time-interval'),
  interval: schema.duration({
    defaultValue: '24h',
    validate: (interval) => {
      if (!isValidRolloverInterval(interval)) {
        return 'Interval value cannot overflow to a higher time unit.';
      }
    },
  }),
  modulate: schema.boolean({ defaultValue: true }),
});

/**
 * A triggering policy based on a fixed time interval.
 */
export class TimeIntervalTriggeringPolicy implements TriggeringPolicy {
  /**
   * Millisecond timestamp of when the next rollover should occur.
   */
  private nextRolloverTime: number;

  constructor(
    private readonly config: TimeIntervalTriggeringPolicyConfig,
    context: RollingFileContext
  ) {
    this.nextRolloverTime = getNextRollingTime(
      context.currentFileTime || Date.now(),
      config.interval,
      config.modulate
    );
  }

  isTriggeringEvent(record: LogRecord): boolean {
    const eventTime = record.timestamp.getTime();
    if (eventTime >= this.nextRolloverTime) {
      this.nextRolloverTime = getNextRollingTime(
        eventTime,
        this.config.interval,
        this.config.modulate
      );
      return true;
    }
    return false;
  }
}
@@ -0,0 +1,78 @@
/*
 * Licensed to Elasticsearch B.V. under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

import { schema } from '@kbn/config-schema';
import { getHighestTimeUnit, isValidRolloverInterval } from './utils';

const duration = (raw: string) => schema.duration().validate(raw);

describe('getHighestTimeUnit', () => {
  it('returns the highest time unit of the duration', () => {
    expect(getHighestTimeUnit(duration('500ms'))).toEqual('millisecond');
    expect(getHighestTimeUnit(duration('30s'))).toEqual('second');
    expect(getHighestTimeUnit(duration('15m'))).toEqual('minute');
    expect(getHighestTimeUnit(duration('12h'))).toEqual('hour');
    expect(getHighestTimeUnit(duration('4d'))).toEqual('day');
    expect(getHighestTimeUnit(duration('3w'))).toEqual('week');
    expect(getHighestTimeUnit(duration('7M'))).toEqual('month');
    expect(getHighestTimeUnit(duration('7Y'))).toEqual('year');
  });

  it('handles overflows', () => {
    expect(getHighestTimeUnit(duration('2000ms'))).toEqual('second');
    expect(getHighestTimeUnit(duration('90s'))).toEqual('minute');
    expect(getHighestTimeUnit(duration('75m'))).toEqual('hour');
    expect(getHighestTimeUnit(duration('36h'))).toEqual('day');
    expect(getHighestTimeUnit(duration('9d'))).toEqual('week');
    expect(getHighestTimeUnit(duration('15w'))).toEqual('month');
    expect(getHighestTimeUnit(duration('23M'))).toEqual('year');
  });
});

describe('isValidRolloverInterval', () => {
  it('returns true if the interval does not overflow', () => {
    expect(isValidRolloverInterval(duration('500ms'))).toEqual(true);
    expect(isValidRolloverInterval(duration('30s'))).toEqual(true);
    expect(isValidRolloverInterval(duration('15m'))).toEqual(true);
    expect(isValidRolloverInterval(duration('12h'))).toEqual(true);
    expect(isValidRolloverInterval(duration('4d'))).toEqual(true);
    expect(isValidRolloverInterval(duration('3w'))).toEqual(true);
    expect(isValidRolloverInterval(duration('7M'))).toEqual(true);
    expect(isValidRolloverInterval(duration('7Y'))).toEqual(true);
  });

  it('returns false if the interval overflows to a non integer value', () => {
    expect(isValidRolloverInterval(duration('2500ms'))).toEqual(false);
    expect(isValidRolloverInterval(duration('90s'))).toEqual(false);
    expect(isValidRolloverInterval(duration('75m'))).toEqual(false);
    expect(isValidRolloverInterval(duration('36h'))).toEqual(false);
    expect(isValidRolloverInterval(duration('9d'))).toEqual(false);
    expect(isValidRolloverInterval(duration('15w'))).toEqual(false);
    expect(isValidRolloverInterval(duration('23M'))).toEqual(false);
  });

  it('returns true if the interval overflows to an integer value', () => {
    expect(isValidRolloverInterval(duration('2000ms'))).toEqual(true);
    expect(isValidRolloverInterval(duration('120s'))).toEqual(true);
    expect(isValidRolloverInterval(duration('240m'))).toEqual(true);
    expect(isValidRolloverInterval(duration('48h'))).toEqual(true);
    expect(isValidRolloverInterval(duration('14d'))).toEqual(true);
    expect(isValidRolloverInterval(duration('24M'))).toEqual(true);
  });
});
@@ -0,0 +1,70 @@
/*
 * Licensed to Elasticsearch B.V. under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

import { Duration, unitOfTime } from 'moment-timezone';

/**
 * Returns the highest time unit of the given duration
 * (the highest unit with a value higher than or equal to 1).
 *
 * @example
 * ```
 * getHighestTimeUnit(moment.duration(4, 'day'))
 * // 'day'
 * getHighestTimeUnit(moment.duration(90, 'minute'))
 * // 'hour' - 90min = 1.5h
 * getHighestTimeUnit(moment.duration(30, 'minute'))
 * // 'minute' - 30min = 0.5h
 * ```
 */
export const getHighestTimeUnit = (duration: Duration): unitOfTime.Base => {
  if (duration.asYears() >= 1) {
    return 'year';
  }
  if (duration.asMonths() >= 1) {
    return 'month';
  }
  if (duration.asWeeks() >= 1) {
    return 'week';
  }
  if (duration.asDays() >= 1) {
    return 'day';
  }
  if (duration.asHours() >= 1) {
    return 'hour';
  }
  if (duration.asMinutes() >= 1) {
    return 'minute';
  }
  if (duration.asSeconds() >= 1) {
    return 'second';
  }
  return 'millisecond';
};

/**
 * Returns true if the given duration is valid to be used by the {@link TimeIntervalTriggeringPolicy | policy}.
 *
 * See {@link TimeIntervalTriggeringPolicyConfig.interval} for the rules and reasons around this validation.
 */
export const isValidRolloverInterval = (duration: Duration): boolean => {
  const highestUnit = getHighestTimeUnit(duration);
  const asHighestUnit = duration.as(highestUnit);
  return Number.isInteger(asHighestUnit);
};
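The `getNextRollingTime` helper imported by the policy is not included in this diff. As an illustration only — this is a hypothetical sketch over plain millisecond arithmetic, not the actual Kibana implementation, which works on moment `Duration`s and the duration's highest time unit — the `modulate` behavior could be approximated like this:

```typescript
// Hypothetical sketch of `getNextRollingTime` (NOT the actual implementation):
// - modulate: false -> the next rollover is simply `currentTime + interval`
// - modulate: true  -> the next rollover is snapped to the next interval boundary
const getNextRollingTimeSketch = (
  currentTime: number,
  intervalMs: number,
  modulate: boolean
): number => {
  if (!modulate) {
    return currentTime + intervalMs;
  }
  // snap to the next multiple of the interval (epoch-aligned boundaries)
  return (Math.floor(currentTime / intervalMs) + 1) * intervalMs;
};

const hour = 60 * 60 * 1000;
// with a 4h interval at 03:00 UTC, the modulated rollover lands on 04:00 UTC
getNextRollingTimeSketch(3 * hour, 4 * hour, true); // 4 * hour
```

This also hints at why `isValidRolloverInterval` rejects values like `90m`: boundary snapping only produces stable, human-predictable rollover times when the highest time unit of the interval is an integer.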
@@ -0,0 +1,58 @@
/*
 * Licensed to Elasticsearch B.V. under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

import { schema } from '@kbn/config-schema';

export const LayoutsMock = {
  create: jest.fn(),
  configSchema: schema.any(),
};
jest.doMock('../../layouts/layouts', () => ({
  Layouts: LayoutsMock,
}));

export const createTriggeringPolicyMock = jest.fn();
jest.doMock('./policies', () => ({
  triggeringPolicyConfigSchema: schema.any(),
  createTriggeringPolicy: createTriggeringPolicyMock,
}));

export const createRollingStrategyMock = jest.fn();
jest.doMock('./strategies', () => ({
  rollingStrategyConfigSchema: schema.any(),
  createRollingStrategy: createRollingStrategyMock,
}));

export const RollingFileManagerMock = jest.fn();
jest.doMock('./rolling_file_manager', () => ({
  RollingFileManager: RollingFileManagerMock,
}));

export const RollingFileContextMock = jest.fn();
jest.doMock('./rolling_file_context', () => ({
  RollingFileContext: RollingFileContextMock,
}));

export const resetAllMocks = () => {
  LayoutsMock.create.mockReset();
  createTriggeringPolicyMock.mockReset();
  createRollingStrategyMock.mockReset();
  RollingFileManagerMock.mockReset();
  RollingFileContextMock.mockReset();
};
@@ -0,0 +1,275 @@
/*
 * Licensed to Elasticsearch B.V. under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

import {
  createRollingStrategyMock,
  createTriggeringPolicyMock,
  LayoutsMock,
  resetAllMocks,
  RollingFileContextMock,
  RollingFileManagerMock,
} from './rolling_file_appender.test.mocks';
import { rollingFileAppenderMocks } from './mocks';
import moment from 'moment-timezone';
import { LogLevel, LogRecord } from '@kbn/logging';
import { RollingFileAppender, RollingFileAppenderConfig } from './rolling_file_appender';

const config: RollingFileAppenderConfig = {
  kind: 'rolling-file',
  path: '/var/log/kibana.log',
  layout: {
    kind: 'pattern',
    pattern: '%message',
    highlight: false,
  },
  policy: {
    kind: 'time-interval',
    interval: moment.duration(4, 'hour'),
    modulate: true,
  },
  strategy: {
    kind: 'numeric',
    max: 5,
    pattern: '-%i',
  },
};

const createLogRecord = (parts: Partial<LogRecord> = {}): LogRecord => ({
  timestamp: new Date(),
  level: LogLevel.Info,
  context: 'context',
  message: 'just a log',
  pid: 42,
  ...parts,
});

const nextTick = () => new Promise((resolve) => setTimeout(resolve, 10));

const createPromiseResolver = () => {
  let resolve: () => void;
  let reject: () => void;
  const promise = new Promise<void>((_resolve, _reject) => {
    resolve = _resolve;
    reject = _reject;
  });

  return {
    promise,
    resolve: resolve!,
    reject: reject!,
  };
};

describe('RollingFileAppender', () => {
  let appender: RollingFileAppender;

  let layout: ReturnType<typeof rollingFileAppenderMocks.createLayout>;
  let strategy: ReturnType<typeof rollingFileAppenderMocks.createStrategy>;
  let policy: ReturnType<typeof rollingFileAppenderMocks.createPolicy>;
  let context: ReturnType<typeof rollingFileAppenderMocks.createContext>;
  let fileManager: ReturnType<typeof rollingFileAppenderMocks.createFileManager>;

  beforeEach(() => {
    layout = rollingFileAppenderMocks.createLayout();
    LayoutsMock.create.mockReturnValue(layout);

    policy = rollingFileAppenderMocks.createPolicy();
    createTriggeringPolicyMock.mockReturnValue(policy);

    strategy = rollingFileAppenderMocks.createStrategy();
    createRollingStrategyMock.mockReturnValue(strategy);

    context = rollingFileAppenderMocks.createContext('file-path');
    RollingFileContextMock.mockImplementation(() => context);

    fileManager = rollingFileAppenderMocks.createFileManager();
    RollingFileManagerMock.mockImplementation(() => fileManager);

    appender = new RollingFileAppender(config);
  });

  afterAll(() => {
    resetAllMocks();
  });

  it('constructs its delegates with the correct parameters', () => {
    expect(RollingFileContextMock).toHaveBeenCalledTimes(1);
    expect(RollingFileContextMock).toHaveBeenCalledWith(config.path);

    expect(RollingFileManagerMock).toHaveBeenCalledTimes(1);
    expect(RollingFileManagerMock).toHaveBeenCalledWith(context);

    expect(LayoutsMock.create).toHaveBeenCalledTimes(1);
    expect(LayoutsMock.create).toHaveBeenCalledWith(config.layout);

    expect(createTriggeringPolicyMock).toHaveBeenCalledTimes(1);
    expect(createTriggeringPolicyMock).toHaveBeenCalledWith(config.policy, context);

    expect(createRollingStrategyMock).toHaveBeenCalledTimes(1);
    expect(createRollingStrategyMock).toHaveBeenCalledWith(config.strategy, context);
  });

  describe('#append', () => {
    describe('when rollout is not needed', () => {
      beforeEach(() => {
        policy.isTriggeringEvent.mockReturnValue(false);
      });

      it('calls `layout.format` with the message', () => {
        const log1 = createLogRecord({ message: '1' });
        const log2 = createLogRecord({ message: '2' });

        appender.append(log1);

        expect(layout.format).toHaveBeenCalledTimes(1);
        expect(layout.format).toHaveBeenCalledWith(log1);

        appender.append(log2);

        expect(layout.format).toHaveBeenCalledTimes(2);
        expect(layout.format).toHaveBeenCalledWith(log2);
      });

      it('calls `fileManager.write` with the formatted message', () => {
        layout.format.mockImplementation(({ message }) => message);

        const log1 = createLogRecord({ message: '1' });
        const log2 = createLogRecord({ message: '2' });

        appender.append(log1);

        expect(fileManager.write).toHaveBeenCalledTimes(1);
        expect(fileManager.write).toHaveBeenCalledWith('1\n');

        appender.append(log2);

        expect(fileManager.write).toHaveBeenCalledTimes(2);
        expect(fileManager.write).toHaveBeenCalledWith('2\n');
      });
    });

    describe('when rollout is needed', () => {
      beforeEach(() => {
        policy.isTriggeringEvent.mockReturnValueOnce(true).mockReturnValue(false);
      });

      it('does not log the event triggering the rollout', () => {
        const log = createLogRecord({ message: '1' });
        appender.append(log);

        expect(layout.format).not.toHaveBeenCalled();
        expect(fileManager.write).not.toHaveBeenCalled();
      });

      it('triggers the rollout', () => {
        const log = createLogRecord({ message: '1' });
        appender.append(log);

        expect(strategy.rollout).toHaveBeenCalledTimes(1);
      });

      it('closes the manager stream once the rollout is complete', async () => {
        const { promise, resolve } = createPromiseResolver();
        strategy.rollout.mockReturnValue(promise);

        const log = createLogRecord({ message: '1' });
        appender.append(log);

        expect(fileManager.closeStream).not.toHaveBeenCalled();

        resolve();
        await nextTick();

        expect(fileManager.closeStream).toHaveBeenCalledTimes(1);
      });

      it('logs the event once the rollout is complete', async () => {
        const { promise, resolve } = createPromiseResolver();
        strategy.rollout.mockReturnValue(promise);

        const log = createLogRecord({ message: '1' });
        appender.append(log);

        expect(fileManager.write).not.toHaveBeenCalled();

        resolve();
        await nextTick();

        expect(fileManager.write).toHaveBeenCalledTimes(1);
      });

      it('logs any pending events once the rollout is complete', async () => {
        const { promise, resolve } = createPromiseResolver();
        strategy.rollout.mockReturnValue(promise);

        appender.append(createLogRecord({ message: '1' }));
        appender.append(createLogRecord({ message: '2' }));
        appender.append(createLogRecord({ message: '3' }));

        expect(fileManager.write).not.toHaveBeenCalled();

        resolve();
        await nextTick();

        expect(fileManager.write).toHaveBeenCalledTimes(3);
      });
    });
  });

  describe('#dispose', () => {
    it('closes the file manager', async () => {
      await appender.dispose();

      expect(fileManager.closeStream).toHaveBeenCalledTimes(1);
    });

    it('noops if called multiple times', async () => {
      await appender.dispose();

      expect(fileManager.closeStream).toHaveBeenCalledTimes(1);

      await appender.dispose();

      expect(fileManager.closeStream).toHaveBeenCalledTimes(1);
    });

    it('waits until the rollout completes if a rollout was in progress', async () => {
      expect.assertions(1);

      const { promise, resolve } = createPromiseResolver();
      let rolloutComplete = false;

      strategy.rollout.mockReturnValue(
        promise.then(() => {
          rolloutComplete = true;
        })
      );

      appender.append(createLogRecord({ message: '1' }));

      const dispose = appender.dispose().then(() => {
        expect(rolloutComplete).toEqual(true);
      });

      resolve();

      await Promise.all([dispose, promise]);
    });
  });
});
@@ -0,0 +1,174 @@
/*
 * Licensed to Elasticsearch B.V. under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

import { schema } from '@kbn/config-schema';
import { LogRecord, Layout, DisposableAppender } from '@kbn/logging';
import { Layouts, LayoutConfigType } from '../../layouts/layouts';
import { BufferAppender } from '../buffer/buffer_appender';
import {
  TriggeringPolicyConfig,
  createTriggeringPolicy,
  triggeringPolicyConfigSchema,
  TriggeringPolicy,
} from './policies';
import {
  RollingStrategy,
  createRollingStrategy,
  RollingStrategyConfig,
  rollingStrategyConfigSchema,
} from './strategies';
import { RollingFileManager } from './rolling_file_manager';
import { RollingFileContext } from './rolling_file_context';

export interface RollingFileAppenderConfig {
  kind: 'rolling-file';
  /**
   * The layout to use when writing log entries.
   */
  layout: LayoutConfigType;
  /**
   * The absolute path of the file to write to.
   */
  path: string;
  /**
   * The {@link TriggeringPolicy | policy} to use to determine if a rollover should occur.
   */
  policy: TriggeringPolicyConfig;
  /**
   * The {@link RollingStrategy | rollout strategy} to use for rolling.
   */
  strategy: RollingStrategyConfig;
}

/**
 * Appender that formats all the `LogRecord` instances it receives and writes them to the specified file.
 * @internal
 */
export class RollingFileAppender implements DisposableAppender {
  public static configSchema = schema.object({
    kind: schema.literal('rolling-file'),
    layout: Layouts.configSchema,
    path: schema.string(),
    policy: triggeringPolicyConfigSchema,
    strategy: rollingStrategyConfigSchema,
  });

  private isRolling = false;
  private disposed = false;
  private rollingPromise?: Promise<void>;

  private readonly layout: Layout;
  private readonly context: RollingFileContext;
  private readonly fileManager: RollingFileManager;
  private readonly policy: TriggeringPolicy;
  private readonly strategy: RollingStrategy;
  private readonly buffer: BufferAppender;

  constructor(config: RollingFileAppenderConfig) {
    this.context = new RollingFileContext(config.path);
    this.context.refreshFileInfo();
    this.fileManager = new RollingFileManager(this.context);
    this.layout = Layouts.create(config.layout);
    this.policy = createTriggeringPolicy(config.policy, this.context);
    this.strategy = createRollingStrategy(config.strategy, this.context);
    this.buffer = new BufferAppender();
  }

  /**
   * Formats the specified `record` and writes it to the specified file. If the record
   * would trigger a rollover, the rollover is performed before the effective write operation.
   */
  public append(record: LogRecord) {
    // if we are currently rolling the files, push the log record
    // into the buffer, which will be flushed once rolling is complete
    if (this.isRolling) {
      this.buffer.append(record);
      return;
    }
    if (this.needRollout(record)) {
      this.buffer.append(record);
      this.rollingPromise = this.performRollout();
      return;
    }

    this._writeToFile(record);
  }

  private _writeToFile(record: LogRecord) {
    this.fileManager.write(`${this.layout.format(record)}\n`);
  }

  /**
   * Disposes the appender.
   * If a rollout is currently in progress, it will be awaited.
   */
  public async dispose() {
    if (this.disposed) {
      return;
    }
    this.disposed = true;
    if (this.rollingPromise) {
      await this.rollingPromise;
    }
    await this.buffer.dispose();
    await this.fileManager.closeStream();
  }

  private async performRollout() {
    if (this.isRolling) {
      return;
    }
    this.isRolling = true;
    try {
      await this.strategy.rollout();
      await this.fileManager.closeStream();
    } catch (e) {
      // eslint-disable-next-line no-console
      console.error('[RollingFileAppender]: error while rolling file: ', e);
    }
    this.rollingPromise = undefined;
    this.isRolling = false;
    this.flushBuffer();
  }

  private flushBuffer() {
    const pendingLogs = this.buffer.flush();
    // In some extreme scenarios, `dispose` can be called during a rollover:
    // the rollover starts, logs keep coming and get buffered, `dispose` is called,
    // then the rollover ends and we flush. Re-appending the buffered logs could
    // trigger a second rollover that would not be awaited, racing with the newly
    // created appender that may also be performing a rollover.
    // So when disposed, we flush the buffer directly to the file instead, to avoid losing the entries.
    for (const log of pendingLogs) {
      if (this.disposed) {
        this._writeToFile(log);
      } else {
        this.append(log);
      }
    }
  }

  /**
   * Checks if the current event should trigger a rollout.
   */
  private needRollout(record: LogRecord) {
    return this.policy.isTriggeringEvent(record);
  }
}
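Putting the pieces together, a rolling-file appender matching the `configSchema` above could be declared in the Kibana config roughly as follows. This is an illustrative sketch only: the policy and strategy values are assumptions (the schema defaults are `24h`/`modulate: true` and the numeric strategy), and the exact `logging` nesting is governed by the logging documentation, not this file.

```yaml
logging:
  appenders:
    rolling-file:
      kind: rolling-file
      path: /var/log/kibana.log
      layout:
        kind: pattern
      policy:
        kind: time-interval
        interval: 24h
        modulate: true
      strategy:
        kind: numeric
        max: 5
        pattern: '-%i'
  root:
    appenders: [rolling-file]
```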
@@ -0,0 +1,50 @@
/*
 * Licensed to Elasticsearch B.V. under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

import { statSync } from 'fs';

/**
 * Context shared between the rolling file manager, policy and strategy.
 */
export class RollingFileContext {
  constructor(public readonly filePath: string) {}
  /**
   * The size of the currently opened file.
   */
  public currentFileSize: number = 0;
  /**
   * The time the currently opened file was created.
   */
  public currentFileTime: number = 0;

  public refreshFileInfo() {
    try {
      const { birthtime, size } = statSync(this.filePath);
      this.currentFileTime = birthtime.getTime();
      this.currentFileSize = size;
    } catch (e) {
      if (e.code !== 'ENOENT') {
        // eslint-disable-next-line no-console
        console.error('[RollingFileAppender] error accessing the log file', e);
      }
      this.currentFileTime = Date.now();
      this.currentFileSize = 0;
    }
  }
}
|
|
@@ -0,0 +1,63 @@
import { createWriteStream, WriteStream } from 'fs';
import { RollingFileContext } from './rolling_file_context';

/**
 * Delegate of the {@link RollingFileAppender} used to manage the log file access.
 */
export class RollingFileManager {
  private readonly filePath;
  private outputStream?: WriteStream;

  constructor(private readonly context: RollingFileContext) {
    this.filePath = context.filePath;
  }

  write(chunk: string) {
    const stream = this.ensureStreamOpen();
    this.context.currentFileSize += Buffer.byteLength(chunk, 'utf8');
    stream.write(chunk);
  }

  async closeStream() {
    return new Promise<void>((resolve) => {
      if (this.outputStream === undefined) {
        return resolve();
      }
      this.outputStream.end(() => {
        this.outputStream = undefined;
        resolve();
      });
    });
  }

  private ensureStreamOpen() {
    if (this.outputStream === undefined) {
      this.outputStream = createWriteStream(this.filePath, {
        encoding: 'utf8',
        flags: 'a',
      });
      // refresh the file meta in case it was not initialized yet.
      this.context.refreshFileInfo();
    }
    return this.outputStream!;
  }
}
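Note the size accounting in `write` above: the manager tracks the on-disk file size with `Buffer.byteLength(chunk, 'utf8')` rather than `chunk.length`, because a string's `length` counts UTF-16 code units, not the bytes actually written. A small sketch of the difference (the `utf8Size` helper is illustrative, not part of the PR):

```typescript
// Byte size of a string as it will be written to a UTF-8 encoded stream.
// For ASCII these agree with `.length`; for multi-byte characters they differ.
const utf8Size = (chunk: string): number => Buffer.byteLength(chunk, 'utf8');

// 'abc'  -> 3 bytes (same as .length)
// 'é'    -> 2 bytes on disk, but .length === 1
```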
@@ -0,0 +1,47 @@
import { schema } from '@kbn/config-schema';
import { RollingStrategy } from './strategy';
import {
  NumericRollingStrategy,
  NumericRollingStrategyConfig,
  numericRollingStrategyConfigSchema,
} from './numeric';
import { RollingFileContext } from '../rolling_file_context';

export { RollingStrategy } from './strategy';
export type RollingStrategyConfig = NumericRollingStrategyConfig;

const defaultStrategy: NumericRollingStrategyConfig = {
  kind: 'numeric',
  pattern: '-%i',
  max: 7,
};

export const rollingStrategyConfigSchema = schema.oneOf([numericRollingStrategyConfigSchema], {
  defaultValue: defaultStrategy,
});

export const createRollingStrategy = (
  config: RollingStrategyConfig,
  context: RollingFileContext
): RollingStrategy => {
  return new NumericRollingStrategy(config, context);
};
@@ -0,0 +1,24 @@
export {
  NumericRollingStrategy,
  NumericRollingStrategyConfig,
  numericRollingStrategyConfigSchema,
} from './numeric_strategy';
@@ -0,0 +1,40 @@
export const getOrderedRolledFilesMock = jest.fn();
export const deleteFilesMock = jest.fn();
export const rollPreviousFilesInOrderMock = jest.fn();
export const rollCurrentFileMock = jest.fn();
export const shouldSkipRolloutMock = jest.fn();

jest.doMock('./rolling_tasks', () => ({
  getOrderedRolledFiles: getOrderedRolledFilesMock,
  deleteFiles: deleteFilesMock,
  rollPreviousFilesInOrder: rollPreviousFilesInOrderMock,
  rollCurrentFile: rollCurrentFileMock,
  shouldSkipRollout: shouldSkipRolloutMock,
}));

export const resetAllMock = () => {
  shouldSkipRolloutMock.mockReset();
  getOrderedRolledFilesMock.mockReset();
  deleteFilesMock.mockReset();
  rollPreviousFilesInOrderMock.mockReset();
  rollCurrentFileMock.mockReset();
};
@@ -0,0 +1,172 @@
import { join } from 'path';
import {
  resetAllMock,
  shouldSkipRolloutMock,
  deleteFilesMock,
  getOrderedRolledFilesMock,
  rollCurrentFileMock,
  rollPreviousFilesInOrderMock,
} from './numeric_strategy.test.mocks';
import { rollingFileAppenderMocks } from '../../mocks';
import { NumericRollingStrategy, NumericRollingStrategyConfig } from './numeric_strategy';

const logFileFolder = 'log-file-folder';
const logFileBaseName = 'kibana.log';
const pattern = '.%i';
const logFilePath = join(logFileFolder, logFileBaseName);

describe('NumericRollingStrategy', () => {
  let context: ReturnType<typeof rollingFileAppenderMocks.createContext>;
  let strategy: NumericRollingStrategy;

  const createStrategy = (config: Omit<NumericRollingStrategyConfig, 'kind'>) =>
    new NumericRollingStrategy({ ...config, kind: 'numeric' }, context);

  beforeEach(() => {
    context = rollingFileAppenderMocks.createContext(logFilePath);
    strategy = createStrategy({ pattern, max: 3 });
    shouldSkipRolloutMock.mockResolvedValue(false);
    getOrderedRolledFilesMock.mockResolvedValue([]);
  });

  afterEach(() => {
    resetAllMock();
  });

  it('calls `getOrderedRolledFiles` with the correct parameters', async () => {
    await strategy.rollout();

    expect(getOrderedRolledFilesMock).toHaveBeenCalledTimes(1);
    expect(getOrderedRolledFilesMock).toHaveBeenCalledWith({
      logFileFolder,
      logFileBaseName,
      pattern,
    });
  });

  it('calls `deleteFiles` with the correct files', async () => {
    getOrderedRolledFilesMock.mockResolvedValue([
      'kibana.1.log',
      'kibana.2.log',
      'kibana.3.log',
      'kibana.4.log',
    ]);

    await strategy.rollout();

    expect(deleteFilesMock).toHaveBeenCalledTimes(1);
    expect(deleteFilesMock).toHaveBeenCalledWith({
      filesToDelete: ['kibana.3.log', 'kibana.4.log'],
      logFileFolder,
    });
  });

  it('calls `rollPreviousFilesInOrder` with the correct files', async () => {
    getOrderedRolledFilesMock.mockResolvedValue([
      'kibana.1.log',
      'kibana.2.log',
      'kibana.3.log',
      'kibana.4.log',
    ]);

    await strategy.rollout();

    expect(rollPreviousFilesInOrderMock).toHaveBeenCalledTimes(1);
    expect(rollPreviousFilesInOrderMock).toHaveBeenCalledWith({
      filesToRoll: ['kibana.1.log', 'kibana.2.log'],
      logFileFolder,
      logFileBaseName,
      pattern,
    });
  });

  it('calls `rollCurrentFile` with the correct parameters', async () => {
    await strategy.rollout();

    expect(rollCurrentFileMock).toHaveBeenCalledTimes(1);
    expect(rollCurrentFileMock).toHaveBeenCalledWith({
      pattern,
      logFileBaseName,
      logFileFolder,
    });
  });

  it('calls `context.refreshFileInfo` with the correct parameters', async () => {
    await strategy.rollout();

    expect(context.refreshFileInfo).toHaveBeenCalledTimes(1);
  });

  it('calls the tasks in the correct order', async () => {
    getOrderedRolledFilesMock.mockResolvedValue([
      'kibana.1.log',
      'kibana.2.log',
      'kibana.3.log',
      'kibana.4.log',
    ]);

    await strategy.rollout();

    const deleteFilesCall = deleteFilesMock.mock.invocationCallOrder[0];
    const rollPreviousFilesInOrderCall = rollPreviousFilesInOrderMock.mock.invocationCallOrder[0];
    const rollCurrentFileCall = rollCurrentFileMock.mock.invocationCallOrder[0];
    const refreshFileInfoCall = context.refreshFileInfo.mock.invocationCallOrder[0];

    expect(deleteFilesCall).toBeLessThan(rollPreviousFilesInOrderCall);
    expect(rollPreviousFilesInOrderCall).toBeLessThan(rollCurrentFileCall);
    expect(rollCurrentFileCall).toBeLessThan(refreshFileInfoCall);
  });

  it('does not call `deleteFiles` if no file should be deleted', async () => {
    getOrderedRolledFilesMock.mockResolvedValue(['kibana.1.log', 'kibana.2.log']);

    await strategy.rollout();

    expect(deleteFilesMock).not.toHaveBeenCalled();
  });

  it('does not call `rollPreviousFilesInOrder` if no file should be rolled', async () => {
    getOrderedRolledFilesMock.mockResolvedValue([]);

    await strategy.rollout();

    expect(rollPreviousFilesInOrderMock).not.toHaveBeenCalled();
  });

  it('skips the rollout if `shouldSkipRollout` returns true', async () => {
    shouldSkipRolloutMock.mockResolvedValue(true);
    getOrderedRolledFilesMock.mockResolvedValue([
      'kibana.1.log',
      'kibana.2.log',
      'kibana.3.log',
      'kibana.4.log',
    ]);

    await strategy.rollout();

    expect(getOrderedRolledFilesMock).not.toHaveBeenCalled();
    expect(deleteFilesMock).not.toHaveBeenCalled();
    expect(rollPreviousFilesInOrderMock).not.toHaveBeenCalled();
    expect(rollCurrentFileMock).not.toHaveBeenCalled();
    expect(context.refreshFileInfo).not.toHaveBeenCalled();
  });
});
@@ -0,0 +1,152 @@
import { basename, dirname } from 'path';
import { schema } from '@kbn/config-schema';
import { RollingStrategy } from '../strategy';
import { RollingFileContext } from '../../rolling_file_context';
import {
  shouldSkipRollout,
  getOrderedRolledFiles,
  deleteFiles,
  rollCurrentFile,
  rollPreviousFilesInOrder,
} from './rolling_tasks';

export interface NumericRollingStrategyConfig {
  kind: 'numeric';
  /**
   * The suffix pattern to apply when renaming a file. The suffix will be applied
   * after the `appender.path` file name, but before the file extension.
   *
   * Must include `%i`, as this is the token that will be replaced with the file index.
   *
   * @example
   * ```yaml
   * logging:
   *   appenders:
   *     rolling-file:
   *       kind: rolling-file
   *       path: /var/logs/kibana.log
   *       strategy:
   *         kind: numeric
   *         pattern: "-%i"
   *         max: 5
   * ```
   *
   * will create `/var/logs/kibana-1.log`, `/var/logs/kibana-2.log`, and so on.
   *
   * Defaults to `-%i`.
   */
  pattern: string;
  /**
   * The maximum number of files to keep. Once this number is reached, the oldest
   * files will be deleted. Defaults to `7`.
   */
  max: number;
}

export const numericRollingStrategyConfigSchema = schema.object({
  kind: schema.literal('numeric'),
  pattern: schema.string({
    defaultValue: '-%i',
    validate: (pattern) => {
      if (!pattern.includes('%i')) {
        return `pattern must include '%i'`;
      }
    },
  }),
  max: schema.number({ min: 1, max: 100, defaultValue: 7 }),
});

/**
 * A rolling strategy that will suffix the file with a given pattern when rolling,
 * and will only retain a fixed number of rolled files.
 *
 * @example
 * ```yaml
 * logging:
 *   appenders:
 *     rolling-file:
 *       kind: rolling-file
 *       path: /kibana.log
 *       strategy:
 *         kind: numeric
 *         pattern: "-%i"
 *         max: 2
 * ```
 * - During the first rollover, kibana.log is renamed to kibana-1.log. A new kibana.log file is created and starts
 *   being written to.
 * - During the second rollover, kibana-1.log is renamed to kibana-2.log and kibana.log is renamed to kibana-1.log.
 *   A new kibana.log file is created and starts being written to.
 * - During the third and subsequent rollovers, kibana-2.log is deleted, kibana-1.log is renamed to kibana-2.log and
 *   kibana.log is renamed to kibana-1.log. A new kibana.log file is created and starts being written to.
 *
 * See {@link NumericRollingStrategyConfig} for more details.
 */
export class NumericRollingStrategy implements RollingStrategy {
  private readonly logFilePath;
  private readonly logFileBaseName;
  private readonly logFileFolder;

  constructor(
    private readonly config: NumericRollingStrategyConfig,
    private readonly context: RollingFileContext
  ) {
    this.logFilePath = this.context.filePath;
    this.logFileBaseName = basename(this.context.filePath);
    this.logFileFolder = dirname(this.context.filePath);
  }

  async rollout() {
    const logFilePath = this.logFilePath;
    const logFileBaseName = this.logFileBaseName;
    const logFileFolder = this.logFileFolder;
    const pattern = this.config.pattern;

    if (await shouldSkipRollout({ logFilePath })) {
      return;
    }

    // get the files matching the pattern in the folder, and sort them by `%i` value
    const orderedFiles = await getOrderedRolledFiles({
      logFileFolder,
      logFileBaseName,
      pattern,
    });
    const filesToRoll = orderedFiles.slice(0, this.config.max - 1);
    const filesToDelete = orderedFiles.slice(filesToRoll.length, orderedFiles.length);

    if (filesToDelete.length > 0) {
      await deleteFiles({ logFileFolder, filesToDelete });
    }

    if (filesToRoll.length > 0) {
      await rollPreviousFilesInOrder({ filesToRoll, logFileFolder, logFileBaseName, pattern });
    }

    await rollCurrentFile({ pattern, logFileBaseName, logFileFolder });

    // update the context file info to mirror the new size and date.
    // this is required for the time-based policy, as the next time check
    // will be performed before the file manager updates the context itself
    // by reopening a writer to the new file.
    this.context.refreshFileInfo();
  }
}
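The retention arithmetic in `rollout` above is compact: the current file will become index 1, so only `max - 1` previously rolled files can survive; everything past that boundary is deleted. A self-contained sketch of just that slicing logic (the `planRollover` helper is illustrative, not part of the PR):

```typescript
// Given the rolled files already on disk, ordered by `%i` value, and the
// configured `max`, compute which files get shifted up one index and which
// get deleted — mirroring the two `slice` calls in NumericRollingStrategy.
interface RolloverPlan {
  filesToRoll: string[];
  filesToDelete: string[];
}

const planRollover = (orderedFiles: string[], max: number): RolloverPlan => {
  // the current log file will take index 1, so only `max - 1` rolled files remain
  const filesToRoll = orderedFiles.slice(0, max - 1);
  const filesToDelete = orderedFiles.slice(filesToRoll.length);
  return { filesToRoll, filesToDelete };
};
```

With `max: 3` and four existing rolled files, this keeps the first two and deletes the rest, matching the `deleteFiles` expectations in the strategy's unit tests.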
@@ -0,0 +1,65 @@
import { getFileNameMatcher, getRollingFileName } from './pattern_matcher';

describe('getFileNameMatcher', () => {
  it('returns the file index when the file matches the pattern', () => {
    const matcher = getFileNameMatcher('log.json', '.%i');
    expect(matcher('log.1.json')).toEqual(1);
    expect(matcher('log.12.json')).toEqual(12);
  });
  it('handles special characters in the pattern', () => {
    const matcher = getFileNameMatcher('kibana.log', '-{%i}');
    expect(matcher('kibana-{1}.log')).toEqual(1);
  });
  it('returns undefined when the file does not match the pattern', () => {
    const matcher = getFileNameMatcher('log.json', '.%i');
    expect(matcher('log.1.text')).toBeUndefined();
    expect(matcher('log*1.json')).toBeUndefined();
    expect(matcher('log.2foo.json')).toBeUndefined();
  });
  it('handles multiple extensions', () => {
    const matcher = getFileNameMatcher('log.foo.bar', '.%i');
    expect(matcher('log.1.foo.bar')).toEqual(1);
    expect(matcher('log.12.foo.bar')).toEqual(12);
  });
  it('handles files without extension', () => {
    const matcher = getFileNameMatcher('log', '.%i');
    expect(matcher('log.1')).toEqual(1);
    expect(matcher('log.42')).toEqual(42);
  });
});

describe('getRollingFileName', () => {
  it('returns the correct file name', () => {
    expect(getRollingFileName('kibana.json', '.%i', 5)).toEqual('kibana.5.json');
    expect(getRollingFileName('log.txt', '-%i', 3)).toEqual('log-3.txt');
  });

  it('handles multiple extensions', () => {
    expect(getRollingFileName('kibana.foo.bar', '.%i', 5)).toEqual('kibana.5.foo.bar');
    expect(getRollingFileName('log.foo.bar', '-%i', 3)).toEqual('log-3.foo.bar');
  });

  it('handles files without extension', () => {
    expect(getRollingFileName('kibana', '.%i', 12)).toEqual('kibana.12');
    expect(getRollingFileName('log', '-%i', 7)).toEqual('log-7');
  });
});
@@ -0,0 +1,81 @@
import { escapeRegExp } from 'lodash';

const createNumericMatcher = (fileBaseName: string, pattern: string): RegExp => {
  let extStart = fileBaseName.indexOf('.');
  if (extStart === -1) {
    extStart = fileBaseName.length;
  }
  const baseNameWithoutExt = escapeRegExp(fileBaseName.substr(0, extStart));
  const extension = escapeRegExp(fileBaseName.substr(extStart, fileBaseName.length));
  const processedPattern = escapeRegExp(pattern)
    // create a matching group for `%i`
    .replace(/%i/g, '(?<counter>\\d+)');
  return new RegExp(`^${baseNameWithoutExt}${processedPattern}${extension}$`);
};

/**
 * Builds a matcher that can be used to match a filename against the rolling
 * file name pattern associated with the given `logFileName` and `pattern`.
 *
 * @example
 * ```ts
 * const matcher = getFileNameMatcher('kibana.log', '-%i');
 * matcher('kibana-1.log') // `1`
 * matcher('kibana-5.log') // `5`
 * matcher('kibana-A.log') // undefined
 * matcher('kibana.log') // undefined
 * ```
 */
export const getFileNameMatcher = (logFileName: string, pattern: string) => {
  const matcher = createNumericMatcher(logFileName, pattern);
  return (fileName: string): number | undefined => {
    const match = matcher.exec(fileName);
    if (!match) {
      return undefined;
    }
    return parseInt(match.groups!.counter, 10);
  };
};

/**
 * Returns the rolling file name associated with the given basename and pattern for the given index.
 *
 * @example
 * ```ts
 * getRollingFileName('foo.log', '.%i', 4) // -> `foo.4.log`
 * getRollingFileName('kibana.log', '-{%i}', 12) // -> `kibana-{12}.log`
 * ```
 */
export const getRollingFileName = (
  fileBaseName: string,
  pattern: string,
  index: number
): string => {
  let suffixStart = fileBaseName.indexOf('.');
  if (suffixStart === -1) {
    suffixStart = fileBaseName.length;
  }
  const baseNameWithoutSuffix = fileBaseName.substr(0, suffixStart);
  const suffix = fileBaseName.substr(suffixStart, fileBaseName.length);
  const interpolatedPattern = pattern.replace('%i', String(index));
  return `${baseNameWithoutSuffix}${interpolatedPattern}${suffix}`;
};
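The `escapeRegExp` call in `createNumericMatcher` above is what lets patterns like `-{%i}` (exercised in the tests) work: without it, `{` and `}` would be parsed as regex quantifier syntax. A self-contained sketch of the idea, using a hypothetical inline `escapeSpecials` helper that mimics lodash's escaping behavior rather than importing it:

```typescript
// Escape regex metacharacters so a user-supplied rolling pattern is matched
// literally (analogous to what lodash's escapeRegExp provides).
const escapeSpecials = (s: string): string => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');

// Build a matcher for a fixed base name 'kibana' / extension '.log',
// turning the `%i` token into a capturing group for the file index.
const buildMatcher = (pattern: string): RegExp =>
  new RegExp(`^kibana${escapeSpecials(pattern).replace(/%i/g, '(\\d+)')}\\.log$`);
```

For the pattern `-{%i}` this yields `^kibana-\{(\d+)\}\.log$`, which matches `kibana-{7}.log` and captures `7`; the unescaped version would fail to compile as intended.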
@@ -0,0 +1,37 @@
export const readdirMock = jest.fn();
export const unlinkMock = jest.fn();
export const renameMock = jest.fn();
export const accessMock = jest.fn();

jest.doMock('fs/promises', () => ({
  readdir: readdirMock,
  unlink: unlinkMock,
  rename: renameMock,
  access: accessMock,
}));

export const clearAllMocks = () => {
  readdirMock.mockClear();
  unlinkMock.mockClear();
  renameMock.mockClear();
  accessMock.mockClear();
};
@@ -0,0 +1,173 @@
import { join } from 'path';
import {
  accessMock,
  readdirMock,
  renameMock,
  unlinkMock,
  clearAllMocks,
} from './rolling_tasks.test.mocks';
import {
  shouldSkipRollout,
  rollCurrentFile,
  rollPreviousFilesInOrder,
  deleteFiles,
  getOrderedRolledFiles,
} from './rolling_tasks';

describe('NumericRollingStrategy tasks', () => {
  afterEach(() => {
    clearAllMocks();
  });

  describe('shouldSkipRollout', () => {
    it('calls `access` with the correct parameters', async () => {
      await shouldSkipRollout({ logFilePath: 'some-file' });

      expect(accessMock).toHaveBeenCalledTimes(1);
      expect(accessMock).toHaveBeenCalledWith('some-file');
    });
    it('returns `true` if the current log file does not exist', async () => {
      accessMock.mockImplementation(() => {
        throw new Error('ENOENT');
      });

      expect(await shouldSkipRollout({ logFilePath: 'some-file' })).toEqual(true);
    });
    it('returns `false` if the current log file exists', async () => {
      accessMock.mockResolvedValue(undefined);

      expect(await shouldSkipRollout({ logFilePath: 'some-file' })).toEqual(false);
    });
  });

  describe('rollCurrentFile', () => {
    it('calls `rename` with the correct parameters', async () => {
      await rollCurrentFile({
        logFileFolder: 'log-folder',
        logFileBaseName: 'kibana.log',
        pattern: '.%i',
      });

      expect(renameMock).toHaveBeenCalledTimes(1);
      expect(renameMock).toHaveBeenCalledWith(
        join('log-folder', 'kibana.log'),
        join('log-folder', 'kibana.1.log')
      );
    });
  });

  describe('rollPreviousFilesInOrder', () => {
    it('calls `rename` once for each file', async () => {
      await rollPreviousFilesInOrder({
        filesToRoll: ['file-1', 'file-2', 'file-3'],
        logFileFolder: 'log-folder',
        logFileBaseName: 'file',
        pattern: '-%i',
      });

      expect(renameMock).toHaveBeenCalledTimes(3);
    });

    it('calls `rename` with the correct parameters', async () => {
      await rollPreviousFilesInOrder({
        filesToRoll: ['file-1', 'file-2'],
        logFileFolder: 'log-folder',
        logFileBaseName: 'file',
        pattern: '-%i',
      });

      expect(renameMock).toHaveBeenNthCalledWith(
        1,
        join('log-folder', 'file-2'),
        join('log-folder', 'file-3')
      );
      expect(renameMock).toHaveBeenNthCalledWith(
        2,
        join('log-folder', 'file-1'),
        join('log-folder', 'file-2')
      );
    });
  });

  describe('deleteFiles', () => {
    it('calls `unlink` once for each file', async () => {
      await deleteFiles({
        logFileFolder: 'log-folder',
        filesToDelete: ['file-a', 'file-b', 'file-c'],
      });

      expect(unlinkMock).toHaveBeenCalledTimes(3);
    });
    it('calls `unlink` with the correct parameters', async () => {
      await deleteFiles({
        logFileFolder: 'log-folder',
        filesToDelete: ['file-a', 'file-b'],
      });

      expect(unlinkMock).toHaveBeenNthCalledWith(1, join('log-folder', 'file-a'));
      expect(unlinkMock).toHaveBeenNthCalledWith(2, join('log-folder', 'file-b'));
    });
  });

  describe('getOrderedRolledFiles', () => {
    it('returns the rolled files matching the pattern in order', async () => {
      readdirMock.mockResolvedValue([
        'kibana-10.log',
        'kibana-1.log',
        'kibana-12.log',
        'kibana-2.log',
      ]);

      const files = await getOrderedRolledFiles({
        logFileFolder: 'log-folder',
        logFileBaseName: 'kibana.log',
        pattern: '-%i',
      });

      expect(files).toEqual(['kibana-1.log', 'kibana-2.log', 'kibana-10.log', 'kibana-12.log']);
    });

    it('ignores files that do not match the pattern', async () => {
      readdirMock.mockResolvedValue(['kibana.2.log', 'kibana.1.log', 'kibana-3.log', 'foo.log']);

      const files = await getOrderedRolledFiles({
        logFileFolder: 'log-folder',
        logFileBaseName: 'kibana.log',
|
||||
pattern: '.%i',
|
||||
});
|
||||
|
||||
expect(files).toEqual(['kibana.1.log', 'kibana.2.log']);
|
||||
});
|
||||
|
||||
it('does not return the base log file', async () => {
|
||||
readdirMock.mockResolvedValue(['kibana.log', 'kibana-1.log', 'kibana-2.log']);
|
||||
|
||||
const files = await getOrderedRolledFiles({
|
||||
logFileFolder: 'log-folder',
|
||||
logFileBaseName: 'kibana.log',
|
||||
pattern: '-%i',
|
||||
});
|
||||
|
||||
expect(files).toEqual(['kibana-1.log', 'kibana-2.log']);
|
||||
});
|
||||
});
|
||||
});
|
|
@@ -0,0 +1,99 @@
/*
 * Licensed to Elasticsearch B.V. under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

import { join } from 'path';
import { readdir, rename, unlink, access } from 'fs/promises';
import { getFileNameMatcher, getRollingFileName } from './pattern_matcher';

export const shouldSkipRollout = async ({ logFilePath }: { logFilePath: string }) => {
  // in case of a time-interval triggering policy, we can have an entire
  // interval without any log event. In that case, the log file is not even
  // present, and we should not perform the rollout
  try {
    await access(logFilePath);
    return false;
  } catch (e) {
    return true;
  }
};

/**
 * Returns the rolled file basenames, from the most recent to the oldest.
 */
export const getOrderedRolledFiles = async ({
  logFileBaseName,
  logFileFolder,
  pattern,
}: {
  logFileFolder: string;
  logFileBaseName: string;
  pattern: string;
}): Promise<string[]> => {
  const matcher = getFileNameMatcher(logFileBaseName, pattern);
  const dirContent = await readdir(logFileFolder);
  return dirContent
    .map((fileName) => ({
      fileName,
      index: matcher(fileName),
    }))
    .filter(({ index }) => index !== undefined)
    .sort((a, b) => a.index! - b.index!)
    .map(({ fileName }) => fileName);
};

export const deleteFiles = async ({
  logFileFolder,
  filesToDelete,
}: {
  logFileFolder: string;
  filesToDelete: string[];
}) => {
  await Promise.all(filesToDelete.map((fileToDelete) => unlink(join(logFileFolder, fileToDelete))));
};

export const rollPreviousFilesInOrder = async ({
  filesToRoll,
  logFileFolder,
  logFileBaseName,
  pattern,
}: {
  logFileFolder: string;
  logFileBaseName: string;
  pattern: string;
  filesToRoll: string[];
}) => {
  for (let i = filesToRoll.length - 1; i >= 0; i--) {
    const oldFileName = filesToRoll[i];
    const newFileName = getRollingFileName(logFileBaseName, pattern, i + 2);
    await rename(join(logFileFolder, oldFileName), join(logFileFolder, newFileName));
  }
};

export const rollCurrentFile = async ({
  logFileFolder,
  logFileBaseName,
  pattern,
}: {
  logFileFolder: string;
  logFileBaseName: string;
  pattern: string;
}) => {
  const rolledBaseName = getRollingFileName(logFileBaseName, pattern, 1);
  await rename(join(logFileFolder, logFileBaseName), join(logFileFolder, rolledBaseName));
};
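Read together, these tasks compose into a single rollout sequence: skip when there is no current file, delete the files that would overflow the retention limit, shift the remaining rolled files up by one index (oldest first, so no rename overwrites a file that still has to move), then roll the current file into slot 1. A minimal sketch of that orchestration, with the file-system operations injected so the ordering is easy to see — `performRollout`, `RollOps`, and the fixed `.%i`-style naming in `rolledName` are illustrative assumptions, not the actual Kibana API:

```typescript
interface RollOps {
  rename: (from: string, to: string) => void;
  unlink: (file: string) => void;
}

// 'kibana.log' + index 2 -> 'kibana.2.log' (mirrors the '.%i' pattern)
function rolledName(baseName: string, index: number): string {
  const dot = baseName.lastIndexOf('.');
  return `${baseName.slice(0, dot)}.${index}${baseName.slice(dot)}`;
}

// Hypothetical orchestration of the tasks above.
// rolledFiles is ordered from the most recent to the oldest,
// matching what getOrderedRolledFiles returns.
function performRollout(
  currentFile: string,
  rolledFiles: string[],
  maxFiles: number,
  ops: RollOps
): void {
  // delete the files that would exceed the retention limit after the roll
  const filesToDelete = rolledFiles.slice(maxFiles - 1);
  filesToDelete.forEach((file) => ops.unlink(file));

  // shift the surviving rolled files by one index, oldest first
  const filesToRoll = rolledFiles.slice(0, maxFiles - 1);
  for (let i = filesToRoll.length - 1; i >= 0; i--) {
    ops.rename(filesToRoll[i], rolledName(currentFile, i + 2));
  }

  // finally, the current log file becomes the most recent rolled file
  ops.rename(currentFile, rolledName(currentFile, 1));
}
```

With `max: 3` and existing files `kibana.1.log`..`kibana.3.log`, this deletes `kibana.3.log`, renames `.2` to `.3` and `.1` to `.2`, then renames `kibana.log` to `kibana.1.log` — the same order the tests above assert for `rollPreviousFilesInOrder`.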
@@ -0,0 +1,28 @@
/*
 * Licensed to Elasticsearch B.V. under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

/**
 * A strategy to perform the log file rollover.
 */
export interface RollingStrategy {
  /**
   * Performs the rollout
   */
  rollout(): Promise<void>;
}
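A conforming strategy only has to expose an async `rollout()`, which lets the appender await the file shuffling before writing the next record. A minimal sketch of an implementation — the `CallbackRollingStrategy` name and the callback shape are illustrative, not part of the actual interface's implementations:

```typescript
interface RollingStrategy {
  rollout(): Promise<void>;
}

// Illustrative adapter: wraps any async task into a RollingStrategy,
// e.g. the numeric rollout sequence from rolling_tasks.ts.
class CallbackRollingStrategy implements RollingStrategy {
  constructor(private readonly task: () => Promise<void>) {}

  rollout(): Promise<void> {
    return this.task();
  }
}
```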
@@ -146,12 +146,18 @@ describe('logging service', () => {
     ],
   };
 
+  const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));
+
   let root: ReturnType<typeof createRoot>;
   let setup: InternalCoreSetup;
   let mockConsoleLog: jest.SpyInstance;
   const loggingConfig$ = new Subject<LoggerContextConfigInput>();
-  const setContextConfig = (enable: boolean) =>
-    enable ? loggingConfig$.next(CUSTOM_LOGGING_CONFIG) : loggingConfig$.next({});
+  const setContextConfig = async (enable: boolean) => {
+    loggingConfig$.next(enable ? CUSTOM_LOGGING_CONFIG : {});
+    // need to wait for the config to reload. nextTick is enough; using delay just to be sure
+    await delay(10);
+  };
 
   beforeAll(async () => {
     mockConsoleLog = jest.spyOn(global.console, 'log');
     root = kbnTestServer.createRoot();
 
@@ -171,12 +177,12 @@ describe('logging service', () => {
 
   it('does not write to custom appenders when not configured', async () => {
     const logger = root.logger.get('plugins.myplugin.debug_pattern');
-    setContextConfig(false);
+    await setContextConfig(false);
     logger.info('log1');
-    setContextConfig(true);
+    await setContextConfig(true);
     logger.debug('log2');
     logger.info('log3');
-    setContextConfig(false);
+    await setContextConfig(false);
     logger.info('log4');
     expect(mockConsoleLog).toHaveBeenCalledTimes(2);
     expect(mockConsoleLog).toHaveBeenCalledWith(
 
@@ -188,7 +194,7 @@ describe('logging service', () => {
   });
 
   it('writes debug_json context to custom JSON appender', async () => {
-    setContextConfig(true);
+    await setContextConfig(true);
     const logger = root.logger.get('plugins.myplugin.debug_json');
     logger.debug('log1');
     logger.info('log2');
 
@@ -214,7 +220,7 @@ describe('logging service', () => {
   });
 
   it('writes info_json context to custom JSON appender', async () => {
-    setContextConfig(true);
+    await setContextConfig(true);
     const logger = root.logger.get('plugins.myplugin.info_json');
     logger.debug('i should not be logged!');
     logger.info('log2');
 
@@ -230,7 +236,7 @@ describe('logging service', () => {
   });
 
   it('writes debug_pattern context to custom pattern appender', async () => {
-    setContextConfig(true);
+    await setContextConfig(true);
     const logger = root.logger.get('plugins.myplugin.debug_pattern');
     logger.debug('log1');
     logger.info('log2');
 
@@ -245,7 +251,7 @@ describe('logging service', () => {
   });
 
   it('writes info_pattern context to custom pattern appender', async () => {
-    setContextConfig(true);
+    await setContextConfig(true);
     const logger = root.logger.get('plugins.myplugin.info_pattern');
     logger.debug('i should not be logged!');
     logger.info('log2');
 
@@ -256,7 +262,7 @@ describe('logging service', () => {
   });
 
   it('writes all context to both appenders', async () => {
-    setContextConfig(true);
+    await setContextConfig(true);
     const logger = root.logger.get('plugins.myplugin.all');
     logger.debug('log1');
     logger.info('log2');
@@ -0,0 +1,220 @@
/*
 * Licensed to Elasticsearch B.V. under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

import { join } from 'path';
import { rmdir, mkdtemp, readFile, readdir } from 'fs/promises';
import moment from 'moment-timezone';
import * as kbnTestServer from '../../../test_helpers/kbn_server';
import { getNextRollingTime } from '../appenders/rolling_file/policies/time_interval/get_next_rolling_time';

const flushDelay = 250;
const delay = (waitInMs: number) => new Promise((resolve) => setTimeout(resolve, waitInMs));
const flush = async () => delay(flushDelay);

function createRoot(appenderConfig: any) {
  return kbnTestServer.createRoot({
    logging: {
      silent: true, // set "true" in kbnTestServer
      appenders: {
        'rolling-file': appenderConfig,
      },
      loggers: [
        {
          context: 'test.rolling.file',
          appenders: ['rolling-file'],
          level: 'debug',
        },
      ],
    },
  });
}

describe('RollingFileAppender', () => {
  let root: ReturnType<typeof createRoot>;
  let testDir: string;
  let logFile: string;

  const getFileContent = async (basename: string) =>
    (await readFile(join(testDir, basename))).toString('utf-8');

  beforeEach(async () => {
    testDir = await mkdtemp('rolling-test');
    logFile = join(testDir, 'kibana.log');
  });

  afterEach(async () => {
    try {
      await rmdir(testDir);
    } catch (e) {
      /* trap */
    }
    if (root) {
      await root.shutdown();
    }
  });

  const message = (index: number) => `some message of around 40 bytes number ${index}`;
  const expectedFileContent = (indices: number[]) => indices.map(message).join('\n') + '\n';

  describe('`size-limit` policy with `numeric` strategy', () => {
    it('rolls the log file in the correct order', async () => {
      root = createRoot({
        kind: 'rolling-file',
        path: logFile,
        layout: {
          kind: 'pattern',
          pattern: '%message',
        },
        policy: {
          kind: 'size-limit',
          size: '100b',
        },
        strategy: {
          kind: 'numeric',
          max: 5,
          pattern: '.%i',
        },
      });
      await root.setup();

      const logger = root.logger.get('test.rolling.file');

      // size = 100b, message.length ~= 40b, should roll every 3 messages

      // last file - 'kibana.2.log'
      logger.info(message(1));
      logger.info(message(2));
      logger.info(message(3));
      // roll - 'kibana.1.log'
      logger.info(message(4));
      logger.info(message(5));
      logger.info(message(6));
      // roll - 'kibana.log'
      logger.info(message(7));

      await flush();

      const files = await readdir(testDir);

      expect(files.sort()).toEqual(['kibana.1.log', 'kibana.2.log', 'kibana.log']);
      expect(await getFileContent('kibana.log')).toEqual(expectedFileContent([7]));
      expect(await getFileContent('kibana.1.log')).toEqual(expectedFileContent([4, 5, 6]));
      expect(await getFileContent('kibana.2.log')).toEqual(expectedFileContent([1, 2, 3]));
    });

    it('only keeps the correct number of files', async () => {
      root = createRoot({
        kind: 'rolling-file',
        path: logFile,
        layout: {
          kind: 'pattern',
          pattern: '%message',
        },
        policy: {
          kind: 'size-limit',
          size: '60b',
        },
        strategy: {
          kind: 'numeric',
          max: 2,
          pattern: '-%i',
        },
      });
      await root.setup();

      const logger = root.logger.get('test.rolling.file');

      // size = 60b, message.length ~= 40b, should roll every 2 messages

      // last file - 'kibana-3.log' (which will be removed during the last roll)
      logger.info(message(1));
      logger.info(message(2));
      // roll - 'kibana-2.log'
      logger.info(message(3));
      logger.info(message(4));
      // roll - 'kibana-1.log'
      logger.info(message(5));
      logger.info(message(6));
      // roll - 'kibana.log'
      logger.info(message(7));
      logger.info(message(8));

      await flush();

      const files = await readdir(testDir);

      expect(files.sort()).toEqual(['kibana-1.log', 'kibana-2.log', 'kibana.log']);
      expect(await getFileContent('kibana.log')).toEqual(expectedFileContent([7, 8]));
      expect(await getFileContent('kibana-1.log')).toEqual(expectedFileContent([5, 6]));
      expect(await getFileContent('kibana-2.log')).toEqual(expectedFileContent([3, 4]));
    });
  });

  describe('`time-interval` policy with `numeric` strategy', () => {
    it('rolls the log file at the given interval', async () => {
      root = createRoot({
        kind: 'rolling-file',
        path: logFile,
        layout: {
          kind: 'pattern',
          pattern: '%message',
        },
        policy: {
          kind: 'time-interval',
          interval: '1s',
          modulate: true,
        },
        strategy: {
          kind: 'numeric',
          max: 2,
          pattern: '-%i',
        },
      });
      await root.setup();

      const logger = root.logger.get('test.rolling.file');

      const waitForNextRollingTime = () => {
        const now = Date.now();
        const nextRolling = getNextRollingTime(now, moment.duration(1, 'second'), true);
        return delay(nextRolling - now + 1);
      };

      // wait for a rolling time boundary to minimize the risk of logs being emitted in different intervals;
      // the `1s` interval should be way more than enough to log 2 messages
      await waitForNextRollingTime();

      logger.info(message(1));
      logger.info(message(2));

      await waitForNextRollingTime();

      logger.info(message(3));
      logger.info(message(4));

      await flush();

      const files = await readdir(testDir);

      expect(files.sort()).toEqual(['kibana-1.log', 'kibana.log']);
      expect(await getFileContent('kibana.log')).toEqual(expectedFileContent([3, 4]));
      expect(await getFileContent('kibana-1.log')).toEqual(expectedFileContent([1, 2]));
    });
  });
});
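The time-interval test leans on `getNextRollingTime` with `modulate: true`, i.e. rolling times aligned to interval boundaries (every whole second here) rather than offset from the moment of the last roll. A sketch of that alignment — this is an assumption about the behavior for illustration, not the actual implementation, and the `nextRollingTime` name is hypothetical:

```typescript
// Hypothetical sketch: with modulation, the next rolling time snaps to the
// next multiple of the interval; without it, it is simply now + interval.
function nextRollingTime(nowMs: number, intervalMs: number, modulate: boolean): number {
  if (!modulate) {
    return nowMs + intervalMs;
  }
  return (Math.floor(nowMs / intervalMs) + 1) * intervalMs;
}
```

This is why the test can wait for a boundary and be confident both messages land in the same interval: the boundary positions are fixed in wall-clock time, independent of when logging started.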
@@ -42,6 +42,7 @@ const createLoggingSystemMock = () => {
     context,
   }));
   mocked.asLoggerFactory.mockImplementation(() => mocked);
+  mocked.upgrade.mockResolvedValue(undefined);
   mocked.stop.mockResolvedValue();
   return mocked;
 };
@@ -19,6 +19,7 @@
 
 const mockStreamWrite = jest.fn();
 jest.mock('fs', () => ({
   ...(jest.requireActual('fs') as any),
+  constants: {},
   createWriteStream: jest.fn(() => ({ write: mockStreamWrite })),
 }));
 
@@ -67,7 +68,7 @@ test('uses default memory buffer logger until config is provided', () => {
   expect(bufferAppendSpy.mock.calls[1][0]).toMatchSnapshot({ pid: expect.any(Number) });
 });
 
-test('flushes memory buffer logger and switches to real logger once config is provided', () => {
+test('flushes memory buffer logger and switches to real logger once config is provided', async () => {
   const logger = system.get('test', 'context');
 
   logger.trace('buffered trace message');
 
@@ -77,7 +78,7 @@ test('flushes memory buffer logger and switches to real logger once config is pr
   const bufferAppendSpy = jest.spyOn((system as any).bufferAppender, 'append');
 
   // Switch to console appender with `info` level, so that `trace` message won't go through.
-  system.upgrade(
+  await system.upgrade(
     config.schema.validate({
       appenders: { default: { kind: 'console', layout: { kind: 'json' } } },
       root: { level: 'info' },
 
@@ -96,7 +97,7 @@ test('flushes memory buffer logger and switches to real logger once config is pr
   expect(bufferAppendSpy).not.toHaveBeenCalled();
 });
 
-test('appends records via multiple appenders.', () => {
+test('appends records via multiple appenders.', async () => {
   const loggerWithoutConfig = system.get('some-context');
   const testsLogger = system.get('tests');
   const testsChildLogger = system.get('tests', 'child');
 
@@ -109,7 +110,7 @@ test('appends records via multiple appenders.', () => {
   expect(mockConsoleLog).not.toHaveBeenCalled();
   expect(mockCreateWriteStream).not.toHaveBeenCalled();
 
-  system.upgrade(
+  await system.upgrade(
     config.schema.validate({
       appenders: {
         default: { kind: 'console', layout: { kind: 'pattern' } },
 
@@ -131,8 +132,8 @@ test('appends records via multiple appenders.', () => {
   expect(mockStreamWrite.mock.calls[1][0]).toMatchSnapshot('file logs');
 });
 
-test('uses `root` logger if context is not specified.', () => {
-  system.upgrade(
+test('uses `root` logger if context is not specified.', async () => {
+  await system.upgrade(
     config.schema.validate({
       appenders: { default: { kind: 'console', layout: { kind: 'pattern' } } },
     })
 
@@ -145,7 +146,7 @@ test('uses `root` logger if context is not specified.', () => {
 });
 
 test('`stop()` disposes all appenders.', async () => {
-  system.upgrade(
+  await system.upgrade(
     config.schema.validate({
       appenders: { default: { kind: 'console', layout: { kind: 'json' } } },
       root: { level: 'info' },
 
@@ -161,10 +162,10 @@ test('`stop()` disposes all appenders.', async () => {
   expect(consoleDisposeSpy).toHaveBeenCalledTimes(1);
 });
 
-test('asLoggerFactory() only allows to create new loggers.', () => {
+test('asLoggerFactory() only allows to create new loggers.', async () => {
   const logger = system.asLoggerFactory().get('test', 'context');
 
-  system.upgrade(
+  await system.upgrade(
     config.schema.validate({
       appenders: { default: { kind: 'console', layout: { kind: 'json' } } },
       root: { level: 'all' },
 
@@ -183,19 +184,19 @@ test('asLoggerFactory() only allows to create new loggers.', () => {
   expect(JSON.parse(mockConsoleLog.mock.calls[2][0])).toMatchSnapshot(dynamicProps);
 });
 
-test('setContextConfig() updates config with relative contexts', () => {
+test('setContextConfig() updates config with relative contexts', async () => {
   const testsLogger = system.get('tests');
   const testsChildLogger = system.get('tests', 'child');
   const testsGrandchildLogger = system.get('tests', 'child', 'grandchild');
 
-  system.upgrade(
+  await system.upgrade(
     config.schema.validate({
       appenders: { default: { kind: 'console', layout: { kind: 'json' } } },
       root: { level: 'info' },
     })
   );
 
-  system.setContextConfig(['tests', 'child'], {
+  await system.setContextConfig(['tests', 'child'], {
     appenders: new Map([
       [
         'custom',
 
@@ -238,19 +239,19 @@ test('setContextConfig() updates config with relative contexts', () => {
   );
 });
 
-test('setContextConfig() updates config for a root context', () => {
+test('setContextConfig() updates config for a root context', async () => {
   const testsLogger = system.get('tests');
   const testsChildLogger = system.get('tests', 'child');
   const testsGrandchildLogger = system.get('tests', 'child', 'grandchild');
 
-  system.upgrade(
+  await system.upgrade(
     config.schema.validate({
       appenders: { default: { kind: 'console', layout: { kind: 'json' } } },
       root: { level: 'info' },
     })
   );
 
-  system.setContextConfig(['tests', 'child'], {
+  await system.setContextConfig(['tests', 'child'], {
     appenders: new Map([
       [
         'custom',
 
@@ -283,8 +284,8 @@ test('setContextConfig() updates config for a root context', () => {
   );
 });
 
-test('custom context configs are applied on subsequent calls to update()', () => {
-  system.setContextConfig(['tests', 'child'], {
+test('custom context configs are applied on subsequent calls to update()', async () => {
+  await system.setContextConfig(['tests', 'child'], {
     appenders: new Map([
       [
         'custom',
 
@@ -295,7 +296,7 @@ test('custom context configs are applied on subsequent calls to update()', () =>
   });
 
   // Calling upgrade after setContextConfig should not throw away the context-specific config
-  system.upgrade(
+  await system.upgrade(
     config.schema.validate({
       appenders: { default: { kind: 'console', layout: { kind: 'json' } } },
       root: { level: 'info' },
 
@@ -320,15 +321,15 @@ test('custom context configs are applied on subsequent calls to update()', () =>
   );
 });
 
-test('subsequent calls to setContextConfig() for the same context override the previous config', () => {
-  system.upgrade(
+test('subsequent calls to setContextConfig() for the same context override the previous config', async () => {
+  await system.upgrade(
     config.schema.validate({
       appenders: { default: { kind: 'console', layout: { kind: 'json' } } },
       root: { level: 'info' },
     })
   );
 
-  system.setContextConfig(['tests', 'child'], {
+  await system.setContextConfig(['tests', 'child'], {
     appenders: new Map([
       [
         'custom',
 
@@ -339,7 +340,7 @@ test('subsequent calls to setContextConfig() for the same context override the p
   });
 
   // Call again, this time with level: 'warn' and a different pattern
-  system.setContextConfig(['tests', 'child'], {
+  await system.setContextConfig(['tests', 'child'], {
     appenders: new Map([
       [
         'custom',
 
@@ -370,15 +371,15 @@ test('subsequent calls to setContextConfig() for the same context override the p
   );
 });
 
-test('subsequent calls to setContextConfig() for the same context can disable the previous config', () => {
-  system.upgrade(
+test('subsequent calls to setContextConfig() for the same context can disable the previous config', async () => {
+  await system.upgrade(
     config.schema.validate({
       appenders: { default: { kind: 'console', layout: { kind: 'json' } } },
       root: { level: 'info' },
     })
   );
 
-  system.setContextConfig(['tests', 'child'], {
+  await system.setContextConfig(['tests', 'child'], {
     appenders: new Map([
       [
         'custom',
 
@@ -389,7 +390,7 @@ test('subsequent calls to setContextConfig() for the same context can disable th
   });
 
   // Call again, this time no customizations (effectively disabling)
-  system.setContextConfig(['tests', 'child'], {});
+  await system.setContextConfig(['tests', 'child'], {});
 
   const logger = system.get('tests', 'child', 'grandchild');
   logger.debug('this should not show anywhere!');
@@ -30,6 +30,7 @@ import {
   LoggerContextConfigType,
   LoggerContextConfigInput,
   loggerContextConfigSchema,
+  config as loggingConfig,
 } from './logging_config';
 
 export type ILoggingSystem = PublicMethodsOf<LoggingSystem>;
 
@@ -48,6 +49,8 @@ export class LoggingSystem implements LoggerFactory {
   private readonly loggers: Map<string, LoggerAdapter> = new Map();
   private readonly contextConfigs = new Map<string, LoggerContextConfigType>();
 
   constructor() {}
 
   public get(...contextParts: string[]): Logger {
     const context = LoggingConfig.getLoggerContext(contextParts);
     if (!this.loggers.has(context)) {
 
@@ -65,11 +68,13 @@ export class LoggingSystem implements LoggerFactory {
 
   /**
    * Updates all current active loggers with the new config values.
-   * @param rawConfig New config instance.
+   * @param rawConfig New config instance. If unspecified, the default logging configuration
+   * will be used.
    */
-  public upgrade(rawConfig: LoggingConfigType) {
-    const config = new LoggingConfig(rawConfig)!;
-    this.applyBaseConfig(config);
+  public async upgrade(rawConfig?: LoggingConfigType) {
+    const usedConfig = rawConfig ?? loggingConfig.schema.validate({});
+    const config = new LoggingConfig(usedConfig);
+    await this.applyBaseConfig(config);
   }
 
   /**
 
@@ -93,7 +98,7 @@ export class LoggingSystem implements LoggerFactory {
    * @param baseContextParts
    * @param rawConfig
    */
-  public setContextConfig(baseContextParts: string[], rawConfig: LoggerContextConfigInput) {
+  public async setContextConfig(baseContextParts: string[], rawConfig: LoggerContextConfigInput) {
     const context = LoggingConfig.getLoggerContext(baseContextParts);
     const contextConfig = loggerContextConfigSchema.validate(rawConfig);
     this.contextConfigs.set(context, {
 
@@ -110,7 +115,7 @@ export class LoggingSystem implements LoggerFactory {
     // If we already have a base config, apply the config. If not, custom context configs
     // will be picked up on next call to `upgrade`.
     if (this.baseConfig) {
-      this.applyBaseConfig(this.baseConfig);
+      await this.applyBaseConfig(this.baseConfig);
     }
   }
 
@@ -154,17 +159,21 @@ export class LoggingSystem implements LoggerFactory {
     return this.getLoggerConfigByContext(config, LoggingConfig.getParentLoggerContext(context));
   }
 
-  private applyBaseConfig(newBaseConfig: LoggingConfig) {
+  private async applyBaseConfig(newBaseConfig: LoggingConfig) {
     const computedConfig = [...this.contextConfigs.values()].reduce(
       (baseConfig, contextConfig) => baseConfig.extend(contextConfig),
       newBaseConfig
     );
 
+    // reconfigure all the loggers without configuration to have them use the buffer
+    // appender while we are awaiting the appenders' disposal.
+    for (const [loggerKey, loggerAdapter] of this.loggers) {
+      loggerAdapter.updateLogger(this.createLogger(loggerKey, undefined));
+    }
+
     // Appenders must be reset, so we first dispose of the current ones, then
     // build up a new set of appenders.
-    for (const appender of this.appenders.values()) {
-      appender.dispose();
-    }
+    await Promise.all([...this.appenders.values()].map((a) => a.dispose()));
     this.appenders.clear();
 
     for (const [appenderKey, appenderConfig] of computedConfig.appenders) {
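The reason loggers are parked on the buffer appender first is that log records can still arrive while the old appenders are being disposed asynchronously; the buffered records are then replayed once the new appenders exist. A reduced sketch of that buffer-and-flush pattern — the `LogRecord`, `Appender`, and `BufferAppender` shapes here are simplified stand-ins for the real core types:

```typescript
type LogRecord = string;

interface Appender {
  append(record: LogRecord): void;
  dispose(): Promise<void> | void;
}

// Illustrative: collects records while no real appender is available,
// then hands them over once the new configuration has been applied.
class BufferAppender implements Appender {
  private readonly buffer: LogRecord[] = [];

  append(record: LogRecord): void {
    this.buffer.push(record);
  }

  // Drains the buffer into the freshly created appender, preserving order.
  flush(target: Appender): void {
    for (const record of this.buffer.splice(0)) {
      target.append(record);
    }
  }

  dispose(): void {}
}
```

This also explains why `DisposableAppender.dispose` now returns `void | Promise<void>` (see the interface change above): the rolling-file appender has pending file operations to await, so reconfiguration must be able to wait for disposal to finish.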
@@ -33,6 +33,7 @@ let mockConsoleError: jest.SpyInstance;
 beforeEach(() => {
   jest.spyOn(global.process, 'exit').mockReturnValue(undefined as never);
   mockConsoleError = jest.spyOn(console, 'error').mockReturnValue(undefined);
+  logger.upgrade.mockResolvedValue(undefined);
   rawConfigService.getConfig$.mockReturnValue(new BehaviorSubject({ someValue: 'foo' }));
   configService.atPath.mockReturnValue(new BehaviorSubject({ someValue: 'foo' }));
 });
@ -17,8 +17,8 @@
|
|||
* under the License.
|
||||
*/
|
||||
|
||||
import { ConnectableObservable, Subscription } from 'rxjs';
|
||||
import { first, map, publishReplay, switchMap, tap } from 'rxjs/operators';
|
||||
import { ConnectableObservable, Subscription, of } from 'rxjs';
|
||||
import { first, publishReplay, switchMap, concatMap, tap } from 'rxjs/operators';
|
||||
|
||||
import { Env, RawConfigurationProvider } from '../config';
|
||||
import { Logger, LoggerFactory, LoggingConfigType, LoggingSystem } from '../logging';
|
||||
|
@ -36,7 +36,7 @@ export class Root {
|
|||
|
||||
constructor(
|
||||
rawConfigProvider: RawConfigurationProvider,
|
||||
env: Env,
|
||||
private readonly env: Env,
|
||||
private readonly onShutdown?: (reason?: Error | string) => void
|
||||
) {
|
||||
this.loggingSystem = new LoggingSystem();
|
||||
|
@@ -98,8 +98,11 @@ export class Root {
     // Stream that maps config updates to logger updates, including update failures.
     const update$ = configService.getConfig$().pipe(
       // always read the logging config when the underlying config object is re-read
-      switchMap(() => configService.atPath<LoggingConfigType>('logging')),
-      map((config) => this.loggingSystem.upgrade(config)),
+      // except for the CLI process where we only apply the default logging config once
+      switchMap(() =>
+        this.env.isDevCliParent ? of(undefined) : configService.atPath<LoggingConfigType>('logging')
+      ),
+      concatMap((config) => this.loggingSystem.upgrade(config)),
       // This specifically console.logs because we were not able to configure the logger.
       // eslint-disable-next-line no-console
       tap({ error: (err) => console.error('Configuring logger failed:', err) }),
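The switch from `map` to `concatMap` matters because `upgrade` now returns a promise: `concatMap` waits for each inner result to settle before starting the next, so logging configs are applied strictly in arrival order. A dependency-free sketch of that sequencing guarantee (the numeric `upgrade` below is a made-up stand-in for `loggingSystem.upgrade`, and the loop emulates `concatMap` semantics without rxjs):

```typescript
// Later "configs" complete faster here, so unordered (fire-and-forget)
// application would finish as 3, 2, 1; sequential awaiting preserves order.
const upgrade = (config: number, applied: number[]) =>
  new Promise<void>((resolve) =>
    setTimeout(() => {
      applied.push(config);
      resolve();
    }, 30 - config * 10)
  );

// concatMap-like semantics: start the next upgrade only once the previous settles.
async function applyInOrder(configs: number[]): Promise<number[]> {
  const applied: number[] = [];
  for (const config of configs) {
    await upgrade(config, applied);
  }
  return applied;
}

applyInOrder([1, 2, 3]).then((applied) => console.log(applied.join(','))); // → 1,2,3
```

With `map` the stream would merely emit pending promises; with `switchMap` a fast-arriving new config could cancel observation of a still-running upgrade. `concatMap` is the operator that both awaits and preserves ordering.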
@@ -7,6 +7,7 @@
 import { ApiResponse } from '@elastic/elasticsearch/lib/Transport';
 import Boom from '@hapi/boom';
 import { BulkIndexDocumentsParams } from 'elasticsearch';
+import { ByteSizeValue } from '@kbn/config-schema';
 import { CatAliasesParams } from 'elasticsearch';
 import { CatAllocationParams } from 'elasticsearch';
 import { CatCommonParams } from 'elasticsearch';
@@ -47,6 +48,7 @@ import { DeleteScriptParams } from 'elasticsearch';
 import { DeleteTemplateParams } from 'elasticsearch';
 import { DetailedPeerCertificate } from 'tls';
+import { Duration } from 'moment';
 import { Duration as Duration_2 } from 'moment-timezone';
 import { EnvironmentMode } from '@kbn/config';
 import { ExistsParams } from 'elasticsearch';
 import { ExplainParams } from 'elasticsearch';
@@ -177,9 +179,10 @@ export interface AppCategory {
 // Warning: (ae-forgotten-export) The symbol "ConsoleAppenderConfig" needs to be exported by the entry point index.d.ts
 // Warning: (ae-forgotten-export) The symbol "FileAppenderConfig" needs to be exported by the entry point index.d.ts
 // Warning: (ae-forgotten-export) The symbol "LegacyAppenderConfig" needs to be exported by the entry point index.d.ts
+// Warning: (ae-forgotten-export) The symbol "RollingFileAppenderConfig" needs to be exported by the entry point index.d.ts
 //
 // @public (undocumented)
-export type AppenderConfigType = ConsoleAppenderConfig | FileAppenderConfig | LegacyAppenderConfig;
+export type AppenderConfigType = ConsoleAppenderConfig | FileAppenderConfig | LegacyAppenderConfig | RollingFileAppenderConfig;
 
 // @public @deprecated (undocumented)
 export interface AssistanceAPIResponse {
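`AppenderConfigType` grows a fourth union member here. Callers typically narrow such a union per variant via a discriminating field; the sketch below is purely illustrative (the `kind` discriminant, the config shapes, and `describeAppender` are assumptions, not the real Kibana definitions):

```typescript
// Hypothetical simplified config shapes; the real ones live in Kibana's logging code.
interface ConsoleAppenderConfig { kind: 'console'; }
interface FileAppenderConfig { kind: 'file'; path: string; }
interface RollingFileAppenderConfig { kind: 'rolling-file'; path: string; }

type AppenderConfigType =
  | ConsoleAppenderConfig
  | FileAppenderConfig
  | RollingFileAppenderConfig;

// The compiler narrows the union inside each case, so `path` is only
// accessible for the variants that declare it.
function describeAppender(config: AppenderConfigType): string {
  switch (config.kind) {
    case 'console':
      return 'console';
    case 'file':
    case 'rolling-file':
      return `${config.kind} -> ${config.path}`;
  }
}

console.log(describeAppender({ kind: 'rolling-file', path: '/var/log/kibana.log' }));
// → rolling-file -> /var/log/kibana.log
```

Adding a new variant to such a union makes exhaustive `switch` statements fail to compile until the new case is handled, which is what surfaces every call site that needs updating.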