[SecuritySolution][EntityAnalytics] Account for Asset Criticality in Risk Scoring (#172417)

## Summary

This PR adds asset criticality to the risk scoring calculation. Previous
work added the resources and client needed to create/read asset criticality
documents; this PR builds on that to retrieve the relevant criticality
records and apply them to the intermediate scores calculated in
Elasticsearch.

__Note that the code in this PR that performs the actual
calculation/writing of new risk fields is behind a feature flag.__

### Performance
Since this PR adds new logic to an already tight/important code path, we
need to ensure that we haven't changed behavior or degraded performance. I've
captured that as a separate task to be done next:
https://github.com/elastic/security-team/issues/8223.

### Compatibility
Behaviorally, with the feature flag disabled, scoring skips the
criticality workflow and does not write the new fields (`criticality_level`,
`criticality_modifier`). ~~I still have an outstanding task to validate
whether this will be an issue until criticality is enabled, and to do some
smarter short-circuiting.~~ This task uncovered our need for the above
feature flag.

The one behavioral change introduced in this PR _not_ behind a feature
flag is the normalization of our risk category scores.
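
For context, the normalization maps a raw category sum onto [0, 100] against the maximum possible sum. A minimal sketch, using the `normalize` helper and `RISK_SCORING_SUM_MAX` constant introduced in this PR (the input value is illustrative):

```ts
// Category scores are now reported on a 0-100 scale rather than as raw sums.
const RISK_SCORING_SUM_MAX = 261.2; // maximum possible sum of risk inputs

const normalize = ({ number, min = 0, max }: { number: number; min?: number; max: number }) =>
  ((number - min) / (max - min)) * 100;

// e.g. a raw category_1_score of 30 is reported as ~11.5 after normalization
const normalizedCategory1Score = normalize({ number: 30, max: RISK_SCORING_SUM_MAX });
```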

### Summary of Changes
- Adds an `AssetCriticalityService` which provides the API used by risk
scoring to retrieve criticality records
- Adds a new `search` method to `AssetCriticalityDataClient`, used to
retrieve multiple criticality records at once
- Adds functions to calculate a (currently hard-coded) modifier from a
criticality level and apply it to the risk score via a Bayesian update (see
the sketch after this list)
- Moves risk score level calculation into JavaScript (necessary because the
level now needs to account for criticality)
- Moves some risk level code out of threat hunting code and into the
`common/entity_analytics` folder
- Tests and comments throughout
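
As a reference, here is a minimal sketch of the modifier lookup and Bayesian update described above; the values mirror the `CriticalityModifiers` and `bayesianUpdate` added in this PR, and the example score is illustrative:

```ts
// Sketch: how a criticality level maps to a modifier and adjusts a normalized (0-100) score.
const CriticalityModifiers = {
  very_important: 2,
  important: 1.5,
  normal: 1,
  not_important: 0.5,
} as const;

const bayesianUpdate = ({ max, modifier, score }: { max: number; modifier: number; score: number }) => {
  const priorProbability = score / (max - score);
  const newProbability = priorProbability * modifier;
  return (max * newProbability) / (1 + newProbability);
};

// An "important" (1.5x) entity with a normalized score of 90 ends up at ~93.1.
const adjusted = bayesianUpdate({ max: 100, modifier: CriticalityModifiers.important, score: 90 });
```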


### TODO
- [x] Add new criticality fields to risk score, address upgrade workflow
- [x] Validate that the code works without criticality being enabled.
- [x] Bump the task version so that old tasks are not picked up in customer
environments

Outcome: All three of the above are addressed with:

1. Moving the code responsible for adding new fields behind a feature
flag
([71f1158](71f115800b))
2. Addressing the upgrade path in a subsequent issue
(https://github.com/elastic/security-team/issues/8012)

## How to Review
1. Enable our asset criticality feature flag:
> ```
> xpack.securitySolution.enableExperimental: 
>  - entityAnalyticsAssetCriticalityEnabled
> ```
2. Create asset criticality records (see the API documentation, or follow
[this
test](https://github.com/elastic/kibana/pull/172417/files#diff-43f9f394fb7c8eb0f0ace3f5e75482c56a7233ae7d11d5fdb98a89e6404412c3R276)
as a setup guide)
3. Enable risk engine
4. Observe that new fields are written to risk scores' `_source` (see the
illustrative example after this list), but not mapped/searchable
5. (optional) Observe that the transform subsequently fails 😢 
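
For step 4, a hypothetical sketch (values are illustrative, field names match the schema added in this PR) of what the new criticality fields look like alongside the existing risk score fields in `_source`:

```ts
// Illustrative example only: the new fields are criticality_level, criticality_modifier,
// category_2_score, and category_2_count; the rest already existed.
const exampleRiskScoreSource = {
  '@timestamp': '2024-01-04T00:00:00.000Z',
  id_field: 'host.name',
  id_value: 'host-1',
  criticality_level: 'important',
  criticality_modifier: 1.5,
  calculated_level: 'High',
  calculated_score: 182.84, // raw (un-normalized) sum of risk inputs
  calculated_score_norm: 77.78, // normalized score after the criticality modifier
  category_1_score: 70, // normalized contribution of detection engine alerts
  category_1_count: 12,
  category_2_score: 7.78, // contribution of asset criticality
  category_2_count: 1,
  notes: [],
  inputs: [],
};
```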


### Checklist
- [ ] Any text added follows [EUI's writing
guidelines](https://elastic.github.io/eui/#/guidelines/writing), uses
sentence case text and includes [i18n
support](https://github.com/elastic/kibana/blob/main/packages/kbn-i18n/README.md)
- [ ]
[Documentation](https://www.elastic.co/guide/en/kibana/master/development-documentation.html)
was added for features that require explanation or tutorials
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
- [ ] [Flaky Test
Runner](https://ci-stats.kibana.dev/trigger_flaky_test_runner/1) was
used on any tests changed

---------

Co-authored-by: Jared Burgett and Ryland Herrick <ryalnd+jaredburgettelastic+rylnd@gmail.com>
Ryland Herrick 2024-01-04 15:42:24 -06:00 committed by GitHub
parent 4af36fece2
commit 1021f65f1c
43 changed files with 1303 additions and 145 deletions

View file

@ -0,0 +1,9 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
export * from './asset_criticality';
export * from './risk_score';

View file

@ -1,4 +1,10 @@
openapi: 3.0.0
info:
title: Risk Engine Common Schema
description: Common schema for Risk Engine APIs
version: 1.0.0
paths: { }
components:
schemas:
@ -103,11 +109,27 @@ components:
category_1_score:
type: number
format: double
description: The contribution of Category 1 to the overall risk score (`calculated_score`). Category 1 contains Detection Engine Alerts.
description: The contribution of Category 1 to the overall risk score (`calculated_score_norm`). Category 1 contains Detection Engine Alerts.
category_1_count:
type: number
format: integer
description: The number of risk input documents that contributed to the Category 1 score (`category_1_score`).
category_2_score:
type: number
format: double
description: The contribution of Category 2 to the overall risk score (`calculated_score_norm`). Category 2 contains context from external sources.
category_2_count:
type: number
format: integer
description: The number of risk input documents that contributed to the Category 2 score (`category_2_score`).
criticality_level:
type: string
example: very_important
description: The designated criticality level of the entity. Possible values are `not_important`, `normal`, `important`, and `very_important`.
criticality_modifier:
type: number
format: double
description: The numeric modifier corresponding to the criticality level of the entity, which is used as an input to the risk score calculation.
inputs:
type: array
description: A list of the highest-risk documents contributing to this risk score. Useful for investigative purposes.

View file

@ -9,6 +9,7 @@ export * from './after_keys';
export * from './risk_weights';
export * from './identifier_types';
export * from './range';
export * from './risk_levels';
export * from './types';
export * from './indices';
export * from './constants';

View file

@ -0,0 +1,30 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
import { RiskLevels } from './types';
export const RISK_LEVEL_RANGES = {
[RiskLevels.unknown]: { start: 0, stop: 20 },
[RiskLevels.low]: { start: 20, stop: 40 },
[RiskLevels.moderate]: { start: 40, stop: 70 },
[RiskLevels.high]: { start: 70, stop: 90 },
[RiskLevels.critical]: { start: 90, stop: 100 },
};
export const getRiskLevel = (riskScore: number): RiskLevels => {
if (riskScore >= RISK_LEVEL_RANGES[RiskLevels.critical].start) {
return RiskLevels.critical;
} else if (riskScore >= RISK_LEVEL_RANGES[RiskLevels.high].start) {
return RiskLevels.high;
} else if (riskScore >= RISK_LEVEL_RANGES[RiskLevels.moderate].start) {
return RiskLevels.moderate;
} else if (riskScore >= RISK_LEVEL_RANGES[RiskLevels.low].start) {
return RiskLevels.low;
} else {
return RiskLevels.unknown;
}
};

View file

@ -38,24 +38,41 @@ export interface SimpleRiskInput {
export interface EcsRiskScore {
'@timestamp': string;
host?: {
name: string;
risk: Omit<RiskScore, '@timestamp'>;
};
user?: {
name: string;
risk: Omit<RiskScore, '@timestamp'>;
};
}
export type RiskInputs = SimpleRiskInput[];
/**
* The API response object representing a risk score
*/
export interface RiskScore {
'@timestamp': string;
id_field: string;
id_value: string;
criticality_level?: string | undefined;
criticality_modifier?: number | undefined;
calculated_level: string;
calculated_score: number;
calculated_score_norm: number;
category_1_score: number;
category_1_count: number;
category_2_score?: number;
category_2_count?: number;
notes: string[];
inputs: RiskInputs;
}
export enum RiskLevels {
unknown = 'Unknown',
low = 'Low',
moderate = 'Moderate',
high = 'High',
critical = 'Critical',
}

View file

@ -8,7 +8,10 @@
import type { IEsSearchResponse } from '@kbn/data-plugin/common';
import type { Inspect, Maybe, SortField } from '../../../common';
import type { RiskInputs } from '../../../../entity_analytics/risk_engine';
import {
type RiskInputs,
RiskLevels as RiskSeverity,
} from '../../../../entity_analytics/risk_engine';
export interface HostsRiskScoreStrategyResponse extends IEsSearchResponse {
inspect?: Maybe<Inspect>;
@ -30,6 +33,8 @@ export interface RiskStats {
inputs?: RiskInputs;
}
export { RiskSeverity };
export interface HostRiskScore {
'@timestamp': string;
host: {
@ -85,14 +90,6 @@ export interface RiskScoreItem {
[RiskScoreFields.alertsCount]: Maybe<number>;
}
export enum RiskSeverity {
unknown = 'Unknown',
low = 'Low',
moderate = 'Moderate',
high = 'High',
critical = 'Critical',
}
export const isUserRiskScore = (risk: HostRiskScore | UserRiskScore): risk is UserRiskScore =>
'user' in risk;

View file

@ -8,6 +8,7 @@
import { euiLightVars } from '@kbn/ui-theme';
import { RiskSeverity } from '../../../common/search_strategy';
import { SEVERITY_COLOR } from '../../overview/components/detection_response/utils';
export { RISK_LEVEL_RANGES as RISK_SCORE_RANGES } from '../../../common/entity_analytics/risk_engine';
export const SEVERITY_UI_SORT_ORDER = [
RiskSeverity.unknown,
@ -25,14 +26,6 @@ export const RISK_SEVERITY_COLOUR: { [k in RiskSeverity]: string } = {
[RiskSeverity.critical]: SEVERITY_COLOR.critical,
};
export const RISK_SCORE_RANGES = {
[RiskSeverity.unknown]: { start: 0, stop: 20 },
[RiskSeverity.low]: { start: 20, stop: 40 },
[RiskSeverity.moderate]: { start: 40, stop: 70 },
[RiskSeverity.high]: { start: 70, stop: 90 },
[RiskSeverity.critical]: { start: 90, stop: 100 },
};
type SnakeToCamelCaseString<S extends string> = S extends `${infer T}_${infer U}`
? `${T}${Capitalize<SnakeToCamelCaseString<U>>}`
: S;

View file

@ -12,6 +12,7 @@ const createAssetCriticalityDataClientMock = () =>
doesIndexExist: jest.fn(),
getStatus: jest.fn(),
init: jest.fn(),
search: jest.fn(),
} as unknown as jest.Mocked<AssetCriticalityDataClient>);
export const assetCriticalityDataClientMock = { create: createAssetCriticalityDataClientMock };

View file

@ -57,4 +57,68 @@ describe('AssetCriticalityDataClient', () => {
});
});
});
describe('#search()', () => {
let esClientMock: ReturnType<
typeof elasticsearchServiceMock.createScopedClusterClient
>['asInternalUser'];
let loggerMock: ReturnType<typeof loggingSystemMock.createLogger>;
let subject: AssetCriticalityDataClient;
beforeEach(() => {
esClientMock = elasticsearchServiceMock.createScopedClusterClient().asInternalUser;
loggerMock = loggingSystemMock.createLogger();
subject = new AssetCriticalityDataClient({
esClient: esClientMock,
logger: loggerMock,
namespace: 'default',
});
});
it('searches in the asset criticality index', async () => {
subject.search({ query: { match_all: {} } });
expect(esClientMock.search).toHaveBeenCalledWith(
expect.objectContaining({ index: '.asset-criticality.asset-criticality-default' })
);
});
it('requires a query parameter', async () => {
subject.search({ query: { match_all: {} } });
expect(esClientMock.search).toHaveBeenCalledWith(
expect.objectContaining({ body: { query: { match_all: {} } } })
);
});
it('accepts a size parameter', async () => {
subject.search({ query: { match_all: {} }, size: 100 });
expect(esClientMock.search).toHaveBeenCalledWith(expect.objectContaining({ size: 100 }));
});
it('defaults to the default query size', async () => {
subject.search({ query: { match_all: {} } });
const defaultSize = 1_000;
expect(esClientMock.search).toHaveBeenCalledWith(
expect.objectContaining({ size: defaultSize })
);
});
it('caps the size to the maximum query size', async () => {
subject.search({ query: { match_all: {} }, size: 999999 });
const maxSize = 100_000;
expect(esClientMock.search).toHaveBeenCalledWith(expect.objectContaining({ size: maxSize }));
});
it('ignores an index_not_found_exception if the criticality index does not exist', async () => {
subject.search({ query: { match_all: {} } });
expect(esClientMock.search).toHaveBeenCalledWith(
expect.objectContaining({ ignore_unavailable: true })
);
});
});
});

View file

@ -4,12 +4,14 @@
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
import type { ESFilter } from '@kbn/es-types';
import type { SearchResponse } from '@elastic/elasticsearch/lib/api/types';
import type { Logger, ElasticsearchClient } from '@kbn/core/server';
import { mappingFromFieldMap } from '@kbn/alerting-plugin/common';
import type { AssetCriticalityRecord } from '../../../../common/api/entity_analytics/asset_criticality';
import type { AssetCriticalityRecord } from '../../../../common/api/entity_analytics';
import { createOrUpdateIndex } from '../utils/create_or_update_index';
import { getAssetCriticalityIndex } from '../../../../common/entity_analytics/asset_criticality';
import { assetCriticalityFieldMap } from './configurations';
import { assetCriticalityFieldMap } from './constants';
interface AssetCriticalityClientOpts {
logger: Logger;
@ -25,7 +27,11 @@ interface AssetCriticalityUpsert {
type AssetCriticalityIdParts = Pick<AssetCriticalityUpsert, 'idField' | 'idValue'>;
const MAX_CRITICALITY_RESPONSE_SIZE = 100_000;
const DEFAULT_CRITICALITY_RESPONSE_SIZE = 1_000;
const createId = ({ idField, idValue }: AssetCriticalityIdParts) => `${idField}:${idValue}`;
export class AssetCriticalityDataClient {
constructor(private readonly options: AssetCriticalityClientOpts) {}
/**
@ -43,6 +49,29 @@ export class AssetCriticalityDataClient {
});
}
/**
*
* A general method for searching asset criticality records.
* @param query an ES query used to filter criticality results
* @param size the maximum number of records to return. Cannot exceed {@link MAX_CRITICALITY_RESPONSE_SIZE}. If unspecified, will default to {@link DEFAULT_CRITICALITY_RESPONSE_SIZE}.
* @returns criticality records matching the query
*/
public async search({
query,
size,
}: {
query: ESFilter;
size?: number;
}): Promise<SearchResponse<AssetCriticalityRecord>> {
const response = await this.options.esClient.search<AssetCriticalityRecord>({
index: this.getIndex(),
ignore_unavailable: true,
body: { query },
size: Math.min(size ?? DEFAULT_CRITICALITY_RESPONSE_SIZE, MAX_CRITICALITY_RESPONSE_SIZE),
});
return response;
}
private getIndex() {
return getAssetCriticalityIndex(this.options.namespace);
}

View file

@ -0,0 +1,17 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
import type { AssetCriticalityService } from './asset_criticality_service';
const buildMockAssetCriticalityService = (): jest.Mocked<AssetCriticalityService> => ({
getCriticalitiesByIdentifiers: jest.fn().mockResolvedValue([]),
isEnabled: jest.fn().mockReturnValue(true),
});
export const assetCriticalityServiceMock = {
create: buildMockAssetCriticalityService,
};

View file

@ -0,0 +1,206 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
import type { SearchHit } from '@elastic/elasticsearch/lib/api/types';
import type { ExperimentalFeatures } from '../../../../common';
import type { AssetCriticalityRecord } from '../../../../common/api/entity_analytics';
import type { AssetCriticalityDataClient } from './asset_criticality_data_client';
import { assetCriticalityDataClientMock } from './asset_criticality_data_client.mock';
import {
type AssetCriticalityService,
assetCriticalityServiceFactory,
} from './asset_criticality_service';
const buildMockCriticalityHit = (
overrides: Partial<AssetCriticalityRecord> = {}
): SearchHit<AssetCriticalityRecord> => ({
_id: 'host.name:not-found',
_index: '.asset-criticality-default',
_source: {
'@timestamp': '2021-09-16T15:00:00.000Z',
id_field: 'host.name',
id_value: 'hostname',
criticality_level: 'normal',
...overrides,
},
});
describe('AssetCriticalityService', () => {
describe('#getCriticalitiesByIdentifiers()', () => {
let baseIdentifier: { id_field: string; id_value: string };
let mockAssetCriticalityDataClient: AssetCriticalityDataClient;
let service: AssetCriticalityService;
beforeEach(() => {
mockAssetCriticalityDataClient = assetCriticalityDataClientMock.create();
baseIdentifier = { id_field: 'host.name', id_value: 'not-found' };
(mockAssetCriticalityDataClient.search as jest.Mock).mockResolvedValueOnce({
hits: { hits: [] },
});
service = assetCriticalityServiceFactory({
assetCriticalityDataClient: mockAssetCriticalityDataClient,
experimentalFeatures: {} as ExperimentalFeatures,
});
});
describe('specifying a single identifier', () => {
it('returns an empty response if identifier is not found', async () => {
const result = await service.getCriticalitiesByIdentifiers([baseIdentifier]);
expect(result).toEqual([]);
});
it('returns a single criticality if identifier is found', async () => {
const hits = [buildMockCriticalityHit()];
(mockAssetCriticalityDataClient.search as jest.Mock).mockReset().mockResolvedValueOnce({
hits: { hits },
});
const result = await service.getCriticalitiesByIdentifiers([baseIdentifier]);
expect(result).toEqual(hits.map((hit) => hit._source));
});
});
describe('specifying multiple identifiers', () => {
it('returns an empty response if identifier is not found', async () => {
const result = await service.getCriticalitiesByIdentifiers([baseIdentifier]);
expect(result).toEqual([]);
});
it('generates a single terms clause for multiple identifier values on the same field', async () => {
const multipleIdentifiers = [
{ id_field: 'user.name', id_value: 'one' },
{ id_field: 'user.name', id_value: 'other' },
];
await service.getCriticalitiesByIdentifiers(multipleIdentifiers);
expect(mockAssetCriticalityDataClient.search).toHaveBeenCalledTimes(1);
const query = (mockAssetCriticalityDataClient.search as jest.Mock).mock.calls[0][0].query;
expect(query).toMatchObject({
bool: {
filter: {
bool: {
should: [
{
bool: {
must: [
{ term: { id_field: 'user.name' } },
{ terms: { id_value: ['one', 'other'] } },
],
},
},
],
},
},
},
});
});
it('deduplicates identifiers', async () => {
const duplicateIdentifiers = [
{ id_field: 'user.name', id_value: 'same' },
{ id_field: 'user.name', id_value: 'same' },
];
await service.getCriticalitiesByIdentifiers(duplicateIdentifiers);
expect(mockAssetCriticalityDataClient.search).toHaveBeenCalledTimes(1);
const query = (mockAssetCriticalityDataClient.search as jest.Mock).mock.calls[0][0].query;
expect(query).toMatchObject({
bool: {
filter: {
bool: {
should: [
{
bool: {
must: [
{ term: { id_field: 'user.name' } },
{ terms: { id_value: ['same'] } },
],
},
},
],
},
},
},
});
});
it('returns multiple criticalities if identifiers are found', async () => {
const hits = [
buildMockCriticalityHit(),
buildMockCriticalityHit({
id_field: 'user.name',
id_value: 'username',
criticality_level: 'very_important',
}),
];
(mockAssetCriticalityDataClient.search as jest.Mock).mockReset().mockResolvedValueOnce({
hits: {
hits,
},
});
const result = await service.getCriticalitiesByIdentifiers([baseIdentifier]);
expect(result).toEqual(hits.map((hit) => hit._source));
});
});
describe('arguments', () => {
it('accepts a single identifier as an array', async () => {
const identifier = { id_field: 'host.name', id_value: 'foo' };
expect(() => service.getCriticalitiesByIdentifiers([identifier])).not.toThrow();
});
it('accepts multiple identifiers', async () => {
const identifiers = [
{ id_field: 'host.name', id_value: 'foo' },
{ id_field: 'user.name', id_value: 'bar' },
];
expect(() => service.getCriticalitiesByIdentifiers(identifiers)).not.toThrow();
});
it('throws an error if an empty array is provided', async () => {
await expect(() => service.getCriticalitiesByIdentifiers([])).rejects.toThrowError(
'At least one identifier must be provided'
);
});
it('throws an error if no identifier values are provided', async () => {
await expect(() =>
service.getCriticalitiesByIdentifiers([{ id_field: 'host.name', id_value: '' }])
).rejects.toThrowError('At least one identifier must contain a valid field and value');
});
it('throws an error if no valid identifier field/value pair is provided', async () => {
const identifiers = [
{ id_field: '', id_value: 'foo' },
{ id_field: 'user.name', id_value: '' },
];
await expect(() => service.getCriticalitiesByIdentifiers(identifiers)).rejects.toThrowError(
'At least one identifier must contain a valid field and value'
);
});
});
describe('error conditions', () => {
it('throws an error if the client does', async () => {
(mockAssetCriticalityDataClient.search as jest.Mock)
.mockReset()
.mockRejectedValueOnce(new Error('foo'));
await expect(() =>
service.getCriticalitiesByIdentifiers([baseIdentifier])
).rejects.toThrowError('foo');
});
});
});
});

View file

@ -0,0 +1,101 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
import { isEmpty } from 'lodash/fp';
import type { ExperimentalFeatures } from '../../../../common';
import type { AssetCriticalityRecord } from '../../../../common/api/entity_analytics';
import type { AssetCriticalityDataClient } from './asset_criticality_data_client';
interface CriticalityIdentifier {
id_field: string;
id_value: string;
}
interface IdentifierValuesByField {
[idField: string]: string[];
}
export interface AssetCriticalityService {
getCriticalitiesByIdentifiers: (
identifiers: CriticalityIdentifier[]
) => Promise<AssetCriticalityRecord[]>;
isEnabled: () => boolean;
}
const isCriticalityIdentifierValid = (identifier: CriticalityIdentifier): boolean =>
!isEmpty(identifier.id_field) && !isEmpty(identifier.id_value);
const groupIdentifierValuesByField = (
identifiers: CriticalityIdentifier[]
): IdentifierValuesByField =>
identifiers.reduce((acc, id) => {
acc[id.id_field] ??= [];
if (!acc[id.id_field].includes(id.id_value)) {
acc[id.id_field].push(id.id_value);
}
return acc;
}, {} as IdentifierValuesByField);
const buildCriticalitiesQuery = (identifierValuesByField: IdentifierValuesByField) => ({
bool: {
filter: {
bool: {
should: Object.keys(identifierValuesByField).map((idField) => ({
bool: {
must: [
{ term: { id_field: idField } },
{ terms: { id_value: identifierValuesByField[idField] } },
],
},
})),
},
},
},
});
const getCriticalitiesByIdentifiers = async ({
assetCriticalityDataClient,
identifiers,
}: {
assetCriticalityDataClient: AssetCriticalityDataClient;
identifiers: CriticalityIdentifier[];
}): Promise<AssetCriticalityRecord[]> => {
if (identifiers.length === 0) {
throw new Error('At least one identifier must be provided');
}
const validIdentifiers = identifiers.filter((id) => isCriticalityIdentifierValid(id));
if (validIdentifiers.length === 0) {
throw new Error('At least one identifier must contain a valid field and value');
}
const identifierCount = validIdentifiers.length;
const identifierValuesByField = groupIdentifierValuesByField(validIdentifiers);
const criticalitiesQuery = buildCriticalitiesQuery(identifierValuesByField);
const criticalitySearchResponse = await assetCriticalityDataClient.search({
query: criticalitiesQuery,
size: identifierCount,
});
// @ts-expect-error @elastic/elasticsearch _source is optional
return criticalitySearchResponse.hits.hits.map((hit) => hit._source);
};
interface AssetCriticalityServiceFactoryOptions {
assetCriticalityDataClient: AssetCriticalityDataClient;
experimentalFeatures: ExperimentalFeatures;
}
export const assetCriticalityServiceFactory = ({
assetCriticalityDataClient,
experimentalFeatures,
}: AssetCriticalityServiceFactoryOptions): AssetCriticalityService => ({
getCriticalitiesByIdentifiers: (identifiers: CriticalityIdentifier[]) =>
getCriticalitiesByIdentifiers({ assetCriticalityDataClient, identifiers }),
isEnabled: () => experimentalFeatures.entityAnalyticsAssetCriticalityEnabled,
});

View file

@ -5,6 +5,7 @@
* 2.0.
*/
import type { FieldMap } from '@kbn/alerts-as-data-utils';
import type { AssetCriticalityRecord } from '../../../../common/api/entity_analytics';
export const assetCriticalityFieldMap: FieldMap = {
'@timestamp': {
@ -33,3 +34,13 @@ export const assetCriticalityFieldMap: FieldMap = {
required: false,
},
} as const;
/**
* CriticalityModifiers are used to adjust the risk score based on the criticality of the asset.
*/
export const CriticalityModifiers: Record<AssetCriticalityRecord['criticality_level'], number> = {
very_important: 2,
important: 1.5,
normal: 1,
not_important: 0.5,
};

View file

@ -0,0 +1,102 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
import { applyCriticalityToScore, normalize } from './helpers';
describe('applyCriticalityToScore', () => {
describe('integer scores', () => {
it('returns the original score if the modifier is undefined', () => {
const result = applyCriticalityToScore({ modifier: undefined, score: 90 });
expect(result).toEqual(90);
});
it('returns the original score if the modifier is 1', () => {
const result = applyCriticalityToScore({ modifier: 1, score: 90 });
expect(result).toEqual(90);
});
it('returns an increased score if the modifier is greater than 1', () => {
const result = applyCriticalityToScore({ modifier: 1.5, score: 90 });
expect(result).toEqual(93.10344827586206);
});
it('returns a decreased score if the modifier is less than 1', () => {
const result = applyCriticalityToScore({ modifier: 0.5, score: 90 });
expect(result).toEqual(81.81818181818181);
});
it('does not exceed a score of 100 with a previous score of 99 and a large modifier', () => {
const result = applyCriticalityToScore({ modifier: 200, score: 99 });
expect(result).toEqual(99.99494975001262);
});
});
describe('non-integer scores', () => {
it('returns the original score if the modifier is undefined', () => {
const result = applyCriticalityToScore({ modifier: undefined, score: 90.5 });
expect(result).toEqual(90.5);
});
it('returns the original score if the modifier is 1', () => {
const result = applyCriticalityToScore({ modifier: 1, score: 91.84 });
expect(result).toEqual(91.84);
});
it('returns an increased score if the modifier is greater than 1', () => {
const result = applyCriticalityToScore({ modifier: 1.5, score: 75.98 });
expect(result).toEqual(82.59294151750127);
});
it('returns a decreased score if the modifier is less than 1', () => {
const result = applyCriticalityToScore({ modifier: 0.5, score: 44.12 });
expect(result).toEqual(28.303823453938925);
});
it('does not exceed a score of 100 with a high previous score and a large modifier', () => {
const result = applyCriticalityToScore({ modifier: 200, score: 99.88 });
expect(result).toEqual(99.9993992827436);
});
});
});
describe('normalize', () => {
it('returns 0 if the number is equal to the min', () => {
const result = normalize({ number: 0, min: 0, max: 100 });
expect(result).toEqual(0);
});
it('returns 100 if the number is equal to the max', () => {
const result = normalize({ number: 100, min: 0, max: 100 });
expect(result).toEqual(100);
});
it('returns 50 if the number is halfway between the min and max', () => {
const result = normalize({ number: 50, min: 0, max: 100 });
expect(result).toEqual(50);
});
it('defaults to a min of 0', () => {
const result = normalize({ number: 50, max: 100 });
expect(result).toEqual(50);
});
describe('when the domain is different from the range', () => {
it('returns 0 if the number is equal to the min', () => {
const result = normalize({ number: 20, min: 20, max: 200 });
expect(result).toEqual(0);
});
it('returns 100 if the number is equal to the max', () => {
const result = normalize({ number: 40, min: 30, max: 40 });
expect(result).toEqual(100);
});
it('returns 50 if the number is halfway between the min and max', () => {
const result = normalize({ number: 20, min: 0, max: 40 });
expect(result).toEqual(50);
});
});
});

View file

@ -0,0 +1,88 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
import type { AssetCriticalityRecord } from '../../../../common/api/entity_analytics';
import { RISK_SCORING_NORMALIZATION_MAX } from '../risk_score/constants';
import { CriticalityModifiers } from './constants';
/**
* Retrieves the criticality modifier for a given criticality level.
*
* @param criticalityLevel The criticality level for which to get the modifier.
* @returns The associated criticality modifier for the given criticality level.
*/
export const getCriticalityModifier = (
criticalityLevel?: AssetCriticalityRecord['criticality_level']
): number | undefined => {
if (criticalityLevel == null) {
return;
}
return CriticalityModifiers[criticalityLevel];
};
/**
* Applies asset criticality to a normalized risk score using bayesian inference.
* @param modifier - The criticality modifier to apply to the score.
* @param score - The normalized risk score to which the criticality modifier is applied
*
* @returns The risk score with the criticality modifier applied.
*/
export const applyCriticalityToScore = ({
modifier,
score,
}: {
modifier: number | undefined;
score: number;
}): number => {
if (modifier == null) {
return score;
}
return bayesianUpdate({ max: RISK_SCORING_NORMALIZATION_MAX, modifier, score });
};
/**
* Updates a score with the given modifier using bayesian inference.
* @param modifier - The modifier to be applied to the score.
* @param score - The score to which the modifier is applied
* @param max - The maximum value of the score.
*
* @returns The updated score with modifiers applied
*/
export const bayesianUpdate = ({
max,
modifier,
score,
}: {
max: number;
modifier: number;
score: number;
}) => {
const priorProbability = score / (max - score);
const newProbability = priorProbability * modifier;
return (max * newProbability) / (1 + newProbability);
};
/**
* Normalizes a number to the range [0, 100]
*
* @param number - The number to be normalized
* @param min - The minimum possible value of the number. Defaults to 0.
* @param max - The maximum possible value of the number
*
* @returns The number normalized to the range [0, 100]
*/
export const normalize = ({
number,
min = 0,
max,
}: {
number: number;
min?: number;
max: number;
}) => ((number - min) / (max - min)) * 100;

View file

@ -0,0 +1,9 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
export * from './asset_criticality_service';
export * from './asset_criticality_data_client';

View file

@ -9,10 +9,12 @@ import type { ElasticsearchClient, Logger } from '@kbn/core/server';
import type { RiskScoreDataClient } from './risk_score_data_client';
import type { CalculateAndPersistScoresParams, CalculateAndPersistScoresResponse } from '../types';
import type { AssetCriticalityService } from '../asset_criticality/asset_criticality_service';
import { calculateRiskScores } from './calculate_risk_scores';
export const calculateAndPersistRiskScores = async (
params: CalculateAndPersistScoresParams & {
assetCriticalityService: AssetCriticalityService;
esClient: ElasticsearchClient;
logger: Logger;
spaceId: string;

View file

@ -23,7 +23,6 @@ const buildRiskScoreBucketMock = (overrides: Partial<RiskScoreBucket> = {}): Ris
value: {
score: 20,
normalized_score: 30.0,
level: 'Unknown',
notes: [],
category_1_score: 30,
category_1_count: 1,
@ -88,11 +87,15 @@ const buildResponseMock = (
'@timestamp': '2021-08-19T20:55:59.000Z',
id_field: 'host.name',
id_value: 'hostname',
criticality_level: 'important',
criticality_modifier: 1.5,
calculated_level: 'Unknown',
calculated_score: 20,
calculated_score_norm: 30,
category_1_score: 30,
category_1_count: 12,
category_2_score: 0,
category_2_count: 0,
notes: [],
inputs: [
{

View file

@ -7,6 +7,7 @@
import type { ElasticsearchClient, Logger } from '@kbn/core/server';
import { elasticsearchServiceMock, loggingSystemMock } from '@kbn/core/server/mocks';
import { assetCriticalityServiceMock } from '../asset_criticality/asset_criticality_service.mock';
import { calculateRiskScores } from './calculate_risk_scores';
import { calculateRiskScoresMock } from './calculate_risk_scores.mock';
@ -21,6 +22,7 @@ describe('calculateRiskScores()', () => {
logger = loggingSystemMock.createLogger();
params = {
afterKeys: {},
assetCriticalityService: assetCriticalityServiceMock.create(),
esClient,
logger,
index: 'index',
@ -184,7 +186,7 @@ describe('calculateRiskScores()', () => {
'@timestamp': expect.any(String),
id_field: expect.any(String),
id_value: expect.any(String),
calculated_level: 'Unknown',
calculated_level: 'Low',
calculated_score: expect.any(Number),
calculated_score_norm: expect.any(Number),
category_1_score: expect.any(Number),
@ -217,17 +219,43 @@ describe('calculateRiskScores()', () => {
});
describe('error conditions', () => {
beforeEach(() => {
// stub out a rejected response
it('raises an error if elasticsearch client rejects', async () => {
(esClient.search as jest.Mock).mockRejectedValueOnce({
aggregations: calculateRiskScoresMock.buildAggregationResponse(),
});
await expect(() => calculateRiskScores(params)).rejects.toEqual({
aggregations: calculateRiskScoresMock.buildAggregationResponse(),
});
});
it('raises an error if elasticsearch client rejects', () => {
expect.assertions(1);
expect(() => calculateRiskScores(params)).rejects.toEqual({
aggregations: calculateRiskScoresMock.buildAggregationResponse(),
describe('when the asset criticality service throws an error', () => {
beforeEach(() => {
(esClient.search as jest.Mock).mockResolvedValueOnce({
aggregations: calculateRiskScoresMock.buildAggregationResponse(),
});
(
params.assetCriticalityService.getCriticalitiesByIdentifiers as jest.Mock
).mockRejectedValueOnce(new Error('foo'));
});
it('logs the error but proceeds if asset criticality service throws', async () => {
await expect(calculateRiskScores(params)).resolves.toEqual(
expect.objectContaining({
scores: expect.objectContaining({
host: expect.arrayContaining([
expect.objectContaining({
calculated_level: expect.any(String),
id_field: expect.any(String),
id_value: expect.any(String),
}),
]),
}),
})
);
expect(logger.warn).toHaveBeenCalledWith(
'Error retrieving criticality: Error: foo. Scoring will proceed without criticality information.'
);
});
});
});

View file

@ -17,14 +17,22 @@ import {
ALERT_WORKFLOW_STATUS,
EVENT_KIND,
} from '@kbn/rule-registry-plugin/common/technical_rule_data_field_names';
import type {
AfterKeys,
IdentifierType,
RiskWeights,
RiskScore,
import {
type AfterKeys,
type IdentifierType,
type RiskWeights,
type RiskScore,
getRiskLevel,
RiskCategories,
} from '../../../../common/entity_analytics/risk_engine';
import { RiskCategories } from '../../../../common/entity_analytics/risk_engine';
import { withSecuritySpan } from '../../../utils/with_security_span';
import type { AssetCriticalityRecord } from '../../../../common/api/entity_analytics';
import type { AssetCriticalityService } from '../asset_criticality/asset_criticality_service';
import {
applyCriticalityToScore,
getCriticalityModifier,
normalize,
} from '../asset_criticality/helpers';
import { getAfterKeyForIdentifierType, getFieldForIdentifierAgg } from './helpers';
import {
buildCategoryCountDeclarations,
@ -39,34 +47,68 @@ import type {
CalculateScoresResponse,
RiskScoreBucket,
} from '../types';
import {
RISK_SCORING_INPUTS_COUNT_MAX,
RISK_SCORING_SUM_MAX,
RISK_SCORING_SUM_VALUE,
} from './constants';
const bucketToResponse = ({
const formatForResponse = ({
bucket,
criticality,
now,
identifierField,
includeNewFields,
}: {
bucket: RiskScoreBucket;
criticality?: AssetCriticalityRecord;
now: string;
identifierField: string;
}): RiskScore => ({
'@timestamp': now,
id_field: identifierField,
id_value: bucket.key[identifierField],
calculated_level: bucket.risk_details.value.level,
calculated_score: bucket.risk_details.value.score,
calculated_score_norm: bucket.risk_details.value.normalized_score,
category_1_score: bucket.risk_details.value.category_1_score,
category_1_count: bucket.risk_details.value.category_1_count,
notes: bucket.risk_details.value.notes,
inputs: bucket.inputs.hits.hits.map((riskInput) => ({
id: riskInput._id,
index: riskInput._index,
description: `Alert from Rule: ${riskInput.fields?.[ALERT_RULE_NAME]?.[0] ?? 'RULE_NOT_FOUND'}`,
category: RiskCategories.category_1,
risk_score: riskInput.fields?.[ALERT_RISK_SCORE]?.[0] ?? undefined,
timestamp: riskInput.fields?.['@timestamp']?.[0] ?? undefined,
})),
});
includeNewFields: boolean;
}): RiskScore => {
const criticalityModifier = getCriticalityModifier(criticality?.criticality_level);
const normalizedScoreWithCriticality = applyCriticalityToScore({
score: bucket.risk_details.value.normalized_score,
modifier: criticalityModifier,
});
const calculatedLevel = getRiskLevel(normalizedScoreWithCriticality);
const categoryTwoScore =
normalizedScoreWithCriticality - bucket.risk_details.value.normalized_score;
const categoryTwoCount = criticalityModifier ? 1 : 0;
const newFields = {
category_2_score: categoryTwoScore,
category_2_count: categoryTwoCount,
criticality_level: criticality?.criticality_level,
criticality_modifier: criticalityModifier,
};
return {
'@timestamp': now,
id_field: identifierField,
id_value: bucket.key[identifierField],
calculated_level: calculatedLevel,
calculated_score: bucket.risk_details.value.score,
calculated_score_norm: normalizedScoreWithCriticality,
category_1_score: normalize({
number: bucket.risk_details.value.category_1_score,
max: RISK_SCORING_SUM_MAX,
}),
category_1_count: bucket.risk_details.value.category_1_count,
notes: bucket.risk_details.value.notes,
inputs: bucket.inputs.hits.hits.map((riskInput) => ({
id: riskInput._id,
index: riskInput._index,
description: `Alert from Rule: ${
riskInput.fields?.[ALERT_RULE_NAME]?.[0] ?? 'RULE_NOT_FOUND'
}`,
category: RiskCategories.category_1,
risk_score: riskInput.fields?.[ALERT_RISK_SCORE]?.[0] ?? undefined,
timestamp: riskInput.fields?.['@timestamp']?.[0] ?? undefined,
})),
...(includeNewFields ? newFields : {}),
};
};
const filterFromRange = (range: CalculateScoresParams['range']): QueryDslQueryContainer => ({
range: { '@timestamp': { lt: range.end, gte: range.start } },
@ -108,22 +150,6 @@ const buildReduceScript = ({
results['score'] = total_score;
results['normalized_score'] = score_norm;
if (score_norm < 20) {
results['level'] = 'Unknown'
}
else if (score_norm >= 20 && score_norm < 40) {
results['level'] = 'Low'
}
else if (score_norm >= 40 && score_norm < 70) {
results['level'] = 'Moderate'
}
else if (score_norm >= 70 && score_norm < 90) {
results['level'] = 'High'
}
else if (score_norm >= 90) {
results['level'] = 'Critical'
}
return results;
`;
};
@ -184,9 +210,9 @@ const buildIdentifierTypeAggregation = ({
`,
combine_script: 'return state;',
params: {
max_risk_inputs_per_identity: 999999,
p: 1.5,
risk_cap: 261.2,
max_risk_inputs_per_identity: RISK_SCORING_INPUTS_COUNT_MAX,
p: RISK_SCORING_SUM_VALUE,
risk_cap: RISK_SCORING_SUM_MAX,
},
reduce_script: buildReduceScript({ globalIdentifierTypeWeight }),
},
@ -195,8 +221,55 @@ const buildIdentifierTypeAggregation = ({
};
};
const processScores = async ({
assetCriticalityService,
buckets,
identifierField,
logger,
now,
}: {
assetCriticalityService: AssetCriticalityService;
buckets: RiskScoreBucket[];
identifierField: string;
logger: Logger;
now: string;
}): Promise<RiskScore[]> => {
if (buckets.length === 0) {
return [];
}
if (!assetCriticalityService.isEnabled()) {
return buckets.map((bucket) =>
formatForResponse({ bucket, now, identifierField, includeNewFields: false })
);
}
const identifiers = buckets.map((bucket) => ({
id_field: identifierField,
id_value: bucket.key[identifierField],
}));
let criticalities: AssetCriticalityRecord[] = [];
try {
criticalities = await assetCriticalityService.getCriticalitiesByIdentifiers(identifiers);
} catch (e) {
logger.warn(
`Error retrieving criticality: ${e}. Scoring will proceed without criticality information.`
);
}
return buckets.map((bucket) => {
const criticality = criticalities.find(
(c) => c.id_field === identifierField && c.id_value === bucket.key[identifierField]
);
return formatForResponse({ bucket, criticality, identifierField, now, includeNewFields: true });
});
};
export const calculateRiskScores = async ({
afterKeys: userAfterKeys,
assetCriticalityService,
debug,
esClient,
filter: userFilter,
@ -208,6 +281,7 @@ export const calculateRiskScores = async ({
runtimeMappings,
weights,
}: {
assetCriticalityService: AssetCriticalityService;
esClient: ElasticsearchClient;
logger: Logger;
} & CalculateScoresParams): Promise<CalculateScoresResponse> =>
@ -274,16 +348,27 @@ export const calculateRiskScores = async ({
user: response.aggregations.user?.after_key,
};
const hostScores = await processScores({
assetCriticalityService,
buckets: hostBuckets,
identifierField: 'host.name',
logger,
now,
});
const userScores = await processScores({
assetCriticalityService,
buckets: userBuckets,
identifierField: 'user.name',
logger,
now,
});
return {
...(debug ? { request, response } : {}),
after_keys: afterKeys,
scores: {
host: hostBuckets.map((bucket) =>
bucketToResponse({ bucket, identifierField: 'host.name', now })
),
user: userBuckets.map((bucket) =>
bucketToResponse({ bucket, identifierField: 'user.name', now })
),
host: hostScores,
user: userScores,
},
};
});

View file

@ -0,0 +1,27 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
/**
* The risk scoring algorithm uses a Riemann zeta function to sum an entity's risk inputs to a known, finite value (@see RISK_SCORING_SUM_MAX). It does so by assigning each input a weight based on its position in the list (ordered by score) of inputs. This value represents the complex variable s of Re(s) in traditional Riemann zeta function notation.
*/
export const RISK_SCORING_SUM_VALUE = 1.5;
/**
* Represents the maximum possible risk score sum. This value is derived from RISK_SCORING_SUM_VALUE, but we store the precomputed value here to be used more conveniently in normalization.
* @see RISK_SCORING_SUM_VALUE
*/
export const RISK_SCORING_SUM_MAX = 261.2;
/**
* The risk scoring algorithm can only process a finite number of risk inputs per identity; this value represents the maximum number of inputs that will be processed.
*/
export const RISK_SCORING_INPUTS_COUNT_MAX = 999999;
/**
* This value represents the maximum possible risk score after normalization.
*/
export const RISK_SCORING_NORMALIZATION_MAX = 100;
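
As a side note, a rough sketch (my own illustration, not part of this change) of how the 261.2 cap relates to `RISK_SCORING_SUM_VALUE`: with each input capped at 100 and weighted by `1 / i^1.5`, the series converges to `100 * zeta(1.5) ≈ 261.2`:

```ts
// Approximate 100 * zeta(1.5) with a partial sum plus an integral estimate of the tail.
const RISK_SCORING_SUM_VALUE = 1.5;
const terms = 1_000_000;
let partialSum = 0;
for (let i = 1; i <= terms; i++) {
  partialSum += 100 / Math.pow(i, RISK_SCORING_SUM_VALUE);
}
const tailEstimate = 200 / Math.sqrt(terms); // integral of 100 * x^-1.5 from `terms` to infinity
console.log((partialSum + tailEstimate).toFixed(1)); // ~261.2, i.e. RISK_SCORING_SUM_MAX
```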

View file

@ -51,6 +51,10 @@ describe('RiskEngineDataWriter', () => {
"calculated_score_norm": 85.332,
"category_1_count": 12,
"category_1_score": 85,
"category_2_count": 0,
"category_2_score": 0,
"criticality_level": "very_important",
"criticality_modifier": 2,
"id_field": "host.name",
"id_value": "hostname",
"inputs": Array [],
@ -73,6 +77,10 @@ describe('RiskEngineDataWriter', () => {
"calculated_score_norm": 85.332,
"category_1_count": 12,
"category_1_score": 85,
"category_2_count": 0,
"category_2_score": 0,
"criticality_level": "very_important",
"criticality_modifier": 2,
"id_field": "host.name",
"id_value": "hostname",
"inputs": Array [],
@ -117,6 +125,10 @@ describe('RiskEngineDataWriter', () => {
"calculated_score_norm": 85.332,
"category_1_count": 12,
"category_1_score": 85,
"category_2_count": 0,
"category_2_score": 0,
"criticality_level": "very_important",
"criticality_modifier": 2,
"id_field": "user.name",
"id_value": "username_1",
"inputs": Array [],
@ -139,6 +151,10 @@ describe('RiskEngineDataWriter', () => {
"calculated_score_norm": 85.332,
"category_1_count": 12,
"category_1_score": 85,
"category_2_count": 0,
"category_2_score": 0,
"criticality_level": "very_important",
"criticality_modifier": 2,
"id_field": "user.name",
"id_value": "username_2",
"inputs": Array [],
@ -189,6 +205,10 @@ describe('RiskEngineDataWriter', () => {
"calculated_score_norm": 85.332,
"category_1_count": 12,
"category_1_score": 85,
"category_2_count": 0,
"category_2_score": 0,
"criticality_level": "very_important",
"criticality_modifier": 2,
"id_field": "host.name",
"id_value": "hostname_1",
"inputs": Array [],
@ -211,6 +231,10 @@ describe('RiskEngineDataWriter', () => {
"calculated_score_norm": 85.332,
"category_1_count": 12,
"category_1_score": 85,
"category_2_count": 0,
"category_2_score": 0,
"criticality_level": "very_important",
"criticality_modifier": 2,
"id_field": "user.name",
"id_value": "username_1",
"inputs": Array [],
@ -233,6 +257,10 @@ describe('RiskEngineDataWriter', () => {
"calculated_score_norm": 85.332,
"category_1_count": 12,
"category_1_score": 85,
"category_2_count": 0,
"category_2_score": 0,
"criticality_level": "very_important",
"criticality_modifier": 2,
"id_field": "user.name",
"id_value": "username_2",
"inputs": Array [],

View file

@ -17,6 +17,10 @@ const createRiskScoreMock = (overrides: Partial<RiskScore> = {}): RiskScore => (
calculated_score_norm: 85.332,
category_1_score: 85,
category_1_count: 12,
category_2_count: 0,
category_2_score: 0,
criticality_level: 'very_important',
criticality_modifier: 2,
notes: [],
inputs: [],
...overrides,

View file

@ -16,6 +16,7 @@ import type {
import { calculateRiskScores } from './calculate_risk_scores';
import { calculateAndPersistRiskScores } from './calculate_and_persist_risk_scores';
import type { RiskEngineDataClient } from '../risk_engine/risk_engine_data_client';
import type { AssetCriticalityService } from '../asset_criticality/asset_criticality_service';
import type { RiskScoreDataClient } from './risk_score_data_client';
import type { RiskInputsIndexResponse } from './get_risk_inputs_index';
import { scheduleLatestTransformNow } from '../utils/transforms';
@ -31,6 +32,7 @@ export interface RiskScoreService {
}
export interface RiskScoreServiceFactoryParams {
assetCriticalityService: AssetCriticalityService;
esClient: ElasticsearchClient;
logger: Logger;
riskEngineDataClient: RiskEngineDataClient;
@ -39,15 +41,24 @@ export interface RiskScoreServiceFactoryParams {
}
export const riskScoreServiceFactory = ({
assetCriticalityService,
esClient,
logger,
riskEngineDataClient,
riskScoreDataClient,
spaceId,
}: RiskScoreServiceFactoryParams): RiskScoreService => ({
calculateScores: (params) => calculateRiskScores({ ...params, esClient, logger }),
calculateScores: (params) =>
calculateRiskScores({ ...params, assetCriticalityService, esClient, logger }),
calculateAndPersistScores: (params) =>
calculateAndPersistRiskScores({ ...params, esClient, logger, riskScoreDataClient, spaceId }),
calculateAndPersistRiskScores({
...params,
assetCriticalityService,
esClient,
logger,
riskScoreDataClient,
spaceId,
}),
getConfiguration: async () => riskEngineDataClient.getConfiguration(),
getRiskInputsIndex: async (params) => riskScoreDataClient.getRiskInputsIndex(params),
scheduleLatestTransformNow: () => scheduleLatestTransformNow({ namespace: spaceId, esClient }),

View file

@ -54,7 +54,7 @@ const getWeightForIdentifierType = (weight: RiskWeight, identifierType: Identifi
};
export const buildCategoryScoreDeclarations = (): string => {
return RISK_CATEGORIES.map((riskCategory) => `results['${riskCategory}_score'] = 0;`).join('');
return RISK_CATEGORIES.map((riskCategory) => `results['${riskCategory}_score'] = 0.0;`).join('');
};
export const buildCategoryCountDeclarations = (): string => {

View file

@ -9,6 +9,7 @@ import { riskScoreCalculationRoute } from './calculation';
import { loggerMock } from '@kbn/logging-mocks';
import type { ExperimentalFeatures } from '../../../../../common';
import { RISK_SCORE_CALCULATION_URL } from '../../../../../common/constants';
import {
serverMock,
@ -44,7 +45,7 @@ describe('risk score calculation route', () => {
clients.appClient.getAlertsIndex.mockReturnValue('default-alerts-index');
(riskScoreServiceFactory as jest.Mock).mockReturnValue(mockRiskScoreService);
riskScoreCalculationRoute(server.router, logger);
riskScoreCalculationRoute(server.router, logger, {} as ExperimentalFeatures);
});
const buildRequest = (overrides: object = {}) => {

View file

@ -14,12 +14,18 @@ import {
RISK_SCORE_CALCULATION_URL,
} from '../../../../../common/constants';
import { riskScoreCalculationRequestSchema } from '../../../../../common/entity_analytics/risk_engine/risk_score_calculation/request_schema';
import type { ExperimentalFeatures } from '../../../../../common';
import type { SecuritySolutionPluginRouter } from '../../../../types';
import { buildRouteValidation } from '../../../../utils/build_validation/route_validation';
import { assetCriticalityServiceFactory } from '../../asset_criticality';
import { riskScoreServiceFactory } from '../risk_score_service';
import { getRiskInputsIndex } from '../get_risk_inputs_index';
export const riskScoreCalculationRoute = (router: SecuritySolutionPluginRouter, logger: Logger) => {
export const riskScoreCalculationRoute = (
router: SecuritySolutionPluginRouter,
logger: Logger,
experimentalFeatures: ExperimentalFeatures
) => {
router.versioned
.post({
path: RISK_SCORE_CALCULATION_URL,
@ -42,8 +48,14 @@ export const riskScoreCalculationRoute = (router: SecuritySolutionPluginRouter,
const spaceId = securityContext.getSpaceId();
const riskEngineDataClient = securityContext.getRiskEngineDataClient();
const riskScoreDataClient = securityContext.getRiskScoreDataClient();
const assetCriticalityDataClient = securityContext.getAssetCriticalityDataClient();
const assetCriticalityService = assetCriticalityServiceFactory({
assetCriticalityDataClient,
experimentalFeatures,
});
const riskScoreService = riskScoreServiceFactory({
assetCriticalityService,
esClient,
logger,
riskEngineDataClient,

View file

@ -7,6 +7,7 @@
import { loggerMock } from '@kbn/logging-mocks';
import type { ExperimentalFeatures } from '../../../../../common';
import { RISK_SCORE_PREVIEW_URL } from '../../../../../common/constants';
import {
RiskCategories,
@ -48,7 +49,7 @@ describe('POST risk_engine/preview route', () => {
clients.appClient.getAlertsIndex.mockReturnValue('default-alerts-index');
(riskScoreServiceFactory as jest.Mock).mockReturnValue(mockRiskScoreService);
riskScorePreviewRoute(server.router, logger);
riskScorePreviewRoute(server.router, logger, {} as ExperimentalFeatures);
});
const buildRequest = (body: object = {}) =>

View file

@ -15,12 +15,18 @@ import {
RISK_SCORE_PREVIEW_URL,
} from '../../../../../common/constants';
import { riskScorePreviewRequestSchema } from '../../../../../common/entity_analytics/risk_engine/risk_score_preview/request_schema';
import type { ExperimentalFeatures } from '../../../../../common';
import type { SecuritySolutionPluginRouter } from '../../../../types';
import { buildRouteValidation } from '../../../../utils/build_validation/route_validation';
import { assetCriticalityServiceFactory } from '../../asset_criticality';
import { riskScoreServiceFactory } from '../risk_score_service';
import { getRiskInputsIndex } from '../get_risk_inputs_index';
export const riskScorePreviewRoute = (router: SecuritySolutionPluginRouter, logger: Logger) => {
export const riskScorePreviewRoute = (
router: SecuritySolutionPluginRouter,
logger: Logger,
experimentalFeatures: ExperimentalFeatures
) => {
router.versioned
.post({
access: 'internal',
@ -43,8 +49,14 @@ export const riskScorePreviewRoute = (router: SecuritySolutionPluginRouter, logg
const spaceId = securityContext.getSpaceId();
const riskEngineDataClient = securityContext.getRiskEngineDataClient();
const riskScoreDataClient = securityContext.getRiskScoreDataClient();
const assetCriticalityDataClient = securityContext.getAssetCriticalityDataClient();
const assetCriticalityService = assetCriticalityServiceFactory({
assetCriticalityDataClient,
experimentalFeatures,
});
const riskScoreService = riskScoreServiceFactory({
assetCriticalityService,
esClient,
logger,
riskEngineDataClient,

View file

@ -11,6 +11,7 @@ import { taskManagerMock } from '@kbn/task-manager-plugin/server/mocks';
import { loggerMock } from '@kbn/logging-mocks';
import type { AnalyticsServiceSetup } from '@kbn/core/public';
import type { ExperimentalFeatures } from '../../../../../common';
import type { RiskScoreService } from '../risk_score_service';
import { riskScoreServiceMock } from '../risk_score_service.mock';
import { riskScoringTaskMock } from './risk_scoring_task.mock';
@ -47,6 +48,7 @@ describe('Risk Scoring Task', () => {
it('registers the task with TaskManager', () => {
expect(mockTaskManagerSetup.registerTaskDefinitions).not.toHaveBeenCalled();
registerRiskScoringTask({
experimentalFeatures: {} as ExperimentalFeatures,
getStartServices: mockCore.getStartServices,
kibanaVersion: '8.10.0',
taskManager: mockTaskManagerSetup,
@ -59,6 +61,7 @@ describe('Risk Scoring Task', () => {
it('does nothing if TaskManager is not available', () => {
expect(mockTaskManagerSetup.registerTaskDefinitions).not.toHaveBeenCalled();
registerRiskScoringTask({
experimentalFeatures: {} as ExperimentalFeatures,
getStartServices: mockCore.getStartServices,
kibanaVersion: '8.10.0',
taskManager: undefined,

View file

@ -18,7 +18,11 @@ import type {
TaskManagerStartContract,
} from '@kbn/task-manager-plugin/server';
import type { AnalyticsServiceSetup } from '@kbn/core-analytics-server';
import type { AfterKeys, IdentifierType } from '../../../../../common/entity_analytics/risk_engine';
import {
type AfterKeys,
type IdentifierType,
RiskScoreEntity,
} from '../../../../../common/entity_analytics/risk_engine';
import type { StartPlugins } from '../../../../plugin';
import { type RiskScoreService, riskScoreServiceFactory } from '../risk_score_service';
import { RiskEngineDataClient } from '../../risk_engine/risk_engine_data_client';
@ -31,12 +35,16 @@ import {
} from './state';
import { INTERVAL, SCOPE, TIMEOUT, TYPE, VERSION } from './constants';
import { buildScopedInternalSavedObjectsClientUnsafe, convertRangeToISO } from './helpers';
import { RiskScoreEntity } from '../../../../../common/entity_analytics/risk_engine/types';
import type { ExperimentalFeatures } from '../../../../../common';
import {
RISK_SCORE_EXECUTION_SUCCESS_EVENT,
RISK_SCORE_EXECUTION_ERROR_EVENT,
RISK_SCORE_EXECUTION_CANCELLATION_EVENT,
} from '../../../telemetry/event_based/events';
import {
AssetCriticalityDataClient,
assetCriticalityServiceFactory,
} from '../../asset_criticality';
const logFactory =
(logger: Logger, taskId: string) =>
@ -50,12 +58,14 @@ const getTaskId = (namespace: string): string => `${TYPE}:${namespace}:${VERSION
type GetRiskScoreService = (namespace: string) => Promise<RiskScoreService>;
export const registerRiskScoringTask = ({
experimentalFeatures,
getStartServices,
kibanaVersion,
logger,
taskManager,
telemetry,
}: {
experimentalFeatures: ExperimentalFeatures;
getStartServices: StartServicesAccessor<StartPlugins>;
kibanaVersion: string;
logger: Logger;
@ -71,6 +81,17 @@ export const registerRiskScoringTask = ({
getStartServices().then(([coreStart, _]) => {
const esClient = coreStart.elasticsearch.client.asInternalUser;
const soClient = buildScopedInternalSavedObjectsClientUnsafe({ coreStart, namespace });
const assetCriticalityDataClient = new AssetCriticalityDataClient({
esClient,
logger,
namespace,
});
const assetCriticalityService = assetCriticalityServiceFactory({
assetCriticalityDataClient,
experimentalFeatures,
});
const riskEngineDataClient = new RiskEngineDataClient({
logger,
kibanaVersion,
@ -87,6 +108,7 @@ export const registerRiskScoringTask = ({
});
return riskScoreServiceFactory({
assetCriticalityService,
esClient,
logger,
riskEngineDataClient,

View file

@ -117,7 +117,6 @@ export interface RiskScoreBucket {
score: number;
normalized_score: number;
notes: string[];
level: string;
category_1_score: number;
category_1_count: number;
};

View file

@ -183,6 +183,7 @@ export class Plugin implements ISecuritySolutionPlugin {
if (experimentalFeatures.riskScoringPersistence) {
registerRiskScoringTask({
experimentalFeatures,
getStartServices: core.getStartServices,
kibanaVersion: pluginContext.env.packageInfo.version,
logger: this.logger,


@@ -27,7 +27,7 @@ import type { EndpointAuthz } from '../common/endpoint/types/authz';
import type { EndpointAppContextService } from './endpoint/endpoint_app_context_services';
import { RiskEngineDataClient } from './lib/entity_analytics/risk_engine/risk_engine_data_client';
import { RiskScoreDataClient } from './lib/entity_analytics/risk_score/risk_score_data_client';
import { AssetCriticalityDataClient } from './lib/entity_analytics/asset_criticality/asset_criticality_data_client';
import { AssetCriticalityDataClient } from './lib/entity_analytics/asset_criticality';
export interface IRequestContextFactory {
create(


@@ -159,8 +159,8 @@ export const initRoutes = (
}
if (config.experimentalFeatures.riskScoringRoutesEnabled) {
riskScorePreviewRoute(router, logger);
riskScoreCalculationRoute(router, logger);
riskScorePreviewRoute(router, logger, config.experimentalFeatures);
riskScoreCalculationRoute(router, logger, config.experimentalFeatures);
riskEngineStatusRoute(router);
riskEngineInitRoute(router, getStartServices);
riskEngineEnableRoute(router, getStartServices);


@@ -31,7 +31,7 @@ import type { EndpointAuthz } from '../common/endpoint/types/authz';
import type { EndpointInternalFleetServicesInterface } from './endpoint/services/fleet';
import type { RiskEngineDataClient } from './lib/entity_analytics/risk_engine/risk_engine_data_client';
import type { RiskScoreDataClient } from './lib/entity_analytics/risk_score/risk_score_data_client';
import type { AssetCriticalityDataClient } from './lib/entity_analytics/asset_criticality/asset_criticality_data_client';
import type { AssetCriticalityDataClient } from './lib/entity_analytics/asset_criticality';
export { AppClient };
export interface SecuritySolutionApiRequestHandlerContext {


@@ -23,6 +23,9 @@ import {
readRiskScores,
normalizeScores,
waitForRiskScoresToBePresent,
assetCriticalityRouteHelpersFactory,
cleanAssetCriticality,
waitForAssetCriticalityToBePresent,
} from '../../utils';
import { FtrProviderContext } from '../../../../ftr_provider_context';
@@ -116,17 +119,17 @@ export default ({ getService }: FtrProviderContext): void => {
const scores = await readRiskScores(es);
expect(scores.length).to.eql(1);
expect(normalizeScores(scores)).to.eql([
{
calculated_level: 'Unknown',
calculated_score: 21,
calculated_score_norm: 8.039816232771823,
category_1_score: 21,
category_1_count: 1,
id_field: 'host.name',
id_value: 'host-1',
},
]);
const [score] = normalizeScores(scores);
expect(score).to.eql({
calculated_level: 'Unknown',
calculated_score: 21,
calculated_score_norm: 8.039816232771823,
category_1_score: 8.039816232771821,
category_1_count: 1,
id_field: 'host.name',
id_value: 'host-1',
});
});
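Note the expectation change in this test: `category_1_score` is now asserted on the same 0–100 normalized scale as `calculated_score_norm`, instead of the raw sum of the category's inputs (previously 21). A small sketch of the relationship implied by these expected values; the 261.2 divisor is inferred from the numbers (21 → 8.0398…), not read from the implementation:

```ts
// Inferred normalization: the raw category sum is scaled onto a 0-100 range.
// The 261.2 divisor is derived from the expected values in these tests and is presumably
// the maximum possible (weighted, decayed) sum of risk inputs.
const normalize = (rawCategoryScore: number): number => (rawCategoryScore / 261.2) * 100;

normalize(21); // ≈ 8.0398..., the expected category_1_score for a single 21-point alert
```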
describe('paging through calculations', () => {
@@ -269,6 +272,60 @@ export default ({ getService }: FtrProviderContext): void => {
expect(scores.length).to.eql(10);
});
});
describe('@skipInServerless with asset criticality data', () => {
const assetCriticalityRoutes = assetCriticalityRouteHelpersFactory(supertest);
beforeEach(async () => {
await assetCriticalityRoutes.upsert({
id_field: 'host.name',
id_value: 'host-1',
criticality_level: 'important',
});
});
afterEach(async () => {
await cleanAssetCriticality({ log, es });
});
it('calculates and persists risk scores with additional criticality metadata and modifiers', async () => {
const documentId = uuidv4();
await indexListOfDocuments([buildDocument({ host: { name: 'host-1' } }, documentId)]);
await waitForAssetCriticalityToBePresent({ es, log });
const results = await calculateRiskScoreAfterRuleCreationAndExecution(documentId);
expect(results).to.eql({
after_keys: { host: { 'host.name': 'host-1' } },
errors: [],
scores_written: 1,
});
await waitForRiskScoresToBePresent({ es, log });
const scores = await readRiskScores(es);
expect(scores.length).to.eql(1);
const [score] = normalizeScores(scores);
expect(score).to.eql({
criticality_level: 'important',
criticality_modifier: 1.5,
calculated_level: 'Unknown',
calculated_score: 21,
calculated_score_norm: 11.59366948840633,
category_1_score: 8.039816232771821,
category_1_count: 1,
id_field: 'host.name',
id_value: 'host-1',
});
const [rawScore] = scores;
expect(
rawScore.host?.risk.category_1_score! + rawScore.host?.risk.category_2_score!
).to.be.within(
score.calculated_score_norm! - 0.000000000000001,
score.calculated_score_norm! + 0.000000000000001
);
});
});
});
});
};
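The criticality-adjusted norms asserted in the test above (8.0398… becomes 11.5937 with the 1.5 `important` modifier; the preview tests further down expect 14.8831 with the 2.0 `very_important` modifier) are consistent with a bayesian-style update on the normalized score. A sketch that reproduces those values; the helper's name and signature are assumptions:

```ts
// Sketch of a bayesian update of a normalized risk score (0-100) by a criticality modifier;
// the values below match the expectations in this PR's functional tests.
const applyCriticalityModifier = (scoreNorm: number, modifier: number, max = 100): number => {
  const odds = scoreNorm / (max - scoreNorm); // normalized score -> odds
  const updatedOdds = odds * modifier; // scale the odds by the criticality modifier
  return (max * updatedOdds) / (1 + updatedOdds); // odds -> normalized score
};

applyCriticalityModifier(8.039816232771823, 1.5); // ≈ 11.5937 ('important')
applyCriticalityModifier(8.039816232771823, 2.0); // ≈ 14.8831 ('very_important')
```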


@@ -18,10 +18,13 @@ import {
dataGeneratorFactory,
} from '../../../detections_response/utils';
import {
assetCriticalityRouteHelpersFactory,
buildDocument,
cleanAssetCriticality,
createAndSyncRuleAndAlertsFactory,
deleteAllRiskScores,
sanitizeScores,
waitForAssetCriticalityToBePresent,
} from '../../utils';
import { FtrProviderContext } from '../../../../ftr_provider_context';
@@ -99,18 +102,23 @@ export default ({ getService }: FtrProviderContext): void => {
await indexListOfDocuments([buildDocument({ host: { name: 'host-1' } }, documentId)]);
const body = await getRiskScoreAfterRuleCreationAndExecution(documentId);
const [score] = sanitizeScores(body.scores.host!);
const [rawScore] = body.scores.host!;
expect(sanitizeScores(body.scores.host!)).to.eql([
{
calculated_level: 'Unknown',
calculated_score: 21,
calculated_score_norm: 8.039816232771823,
category_1_count: 1,
category_1_score: 21,
id_field: 'host.name',
id_value: 'host-1',
},
]);
expect(score).to.eql({
calculated_level: 'Unknown',
calculated_score: 21,
calculated_score_norm: 8.039816232771823,
category_1_count: 1,
category_1_score: 8.039816232771821,
id_field: 'host.name',
id_value: 'host-1',
});
expect(rawScore.category_1_score! + rawScore.category_2_score!).to.be.within(
score.calculated_score_norm! - 0.000000000000001,
score.calculated_score_norm! + 0.000000000000001
);
});
it('calculates risk from two alerts, each representing a unique host', async () => {
@@ -130,7 +138,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 21,
calculated_score_norm: 8.039816232771823,
category_1_count: 1,
category_1_score: 21,
category_1_score: 8.039816232771821,
id_field: 'host.name',
id_value: 'host-1',
},
@@ -139,7 +147,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 21,
calculated_score_norm: 8.039816232771823,
category_1_count: 1,
category_1_score: 21,
category_1_score: 8.039816232771821,
id_field: 'host.name',
id_value: 'host-2',
},
@@ -163,7 +171,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 28.42462120245875,
calculated_score_norm: 10.88232052161514,
category_1_count: 2,
category_1_score: 28,
category_1_score: 10.882320521615142,
id_field: 'host.name',
id_value: 'host-1',
},
@@ -185,7 +193,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 47.25513506055279,
calculated_score_norm: 18.091552473412246,
category_1_count: 30,
category_1_score: 37,
category_1_score: 18.091552473412246,
id_field: 'host.name',
id_value: 'host-1',
},
@@ -210,7 +218,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 47.25513506055279,
calculated_score_norm: 18.091552473412246,
category_1_count: 30,
category_1_score: 37,
category_1_score: 18.091552473412246,
id_field: 'host.name',
id_value: 'host-1',
},
@@ -219,7 +227,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 21,
calculated_score_norm: 8.039816232771823,
category_1_count: 1,
category_1_score: 21,
category_1_score: 8.039816232771821,
id_field: 'host.name',
id_value: 'host-2',
},
@@ -241,7 +249,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 50.67035607277805,
calculated_score_norm: 19.399064346392823,
category_1_count: 100,
category_1_score: 37,
category_1_score: 19.399064346392823,
id_field: 'host.name',
id_value: 'host-1',
},
@@ -266,7 +274,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 241.2874098703716,
calculated_score_norm: 92.37649688758484,
category_1_count: 100,
category_1_score: 209,
category_1_score: 92.37649688758484,
id_field: 'host.name',
id_value: 'host-1',
},
@@ -297,7 +305,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 254.91456029175757,
calculated_score_norm: 97.59362951445543,
category_1_count: 1000,
category_1_score: 209,
category_1_score: 97.59362951445543,
id_field: 'host.name',
id_value: 'host-1',
},
@@ -393,7 +401,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 225.1106801442913,
calculated_score_norm: 86.18326192354185,
category_1_count: 100,
category_1_score: 203,
category_1_score: 86.18326192354185,
id_field: 'host.name',
id_value: 'host-1',
},
@@ -422,7 +430,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 120.6437049351858,
calculated_score_norm: 46.18824844379242,
category_1_count: 100,
category_1_score: 209,
category_1_score: 92.37649688758484,
id_field: 'host.name',
id_value: 'host-1',
},
@@ -449,7 +457,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 168.9011869092601,
calculated_score_norm: 64.66354782130938,
category_1_count: 100,
category_1_score: 209,
category_1_score: 92.37649688758484,
id_field: 'user.name',
id_value: 'user-1',
},
@@ -478,7 +486,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 93.23759116471251,
calculated_score_norm: 35.695861854790394,
category_1_count: 50,
category_1_score: 209,
category_1_score: 89.23965463697598,
id_field: 'host.name',
id_value: 'host-1',
},
@@ -490,7 +498,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_score: 186.47518232942502,
calculated_score_norm: 71.39172370958079,
category_1_count: 50,
category_1_score: 209,
category_1_score: 89.23965463697598,
id_field: 'user.name',
id_value: 'user-1',
},
@@ -527,7 +535,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_level: 'Low',
calculated_score: 93.2375911647125,
calculated_score_norm: 35.695861854790394,
category_1_score: 77,
category_1_score: 35.69586185479039,
category_1_count: 50,
id_field: 'host.name',
id_value: 'host-1',
@@ -539,7 +547,7 @@ export default ({ getService }: FtrProviderContext): void => {
calculated_level: 'High',
calculated_score: 186.475182329425,
calculated_score_norm: 71.39172370958079,
category_1_score: 165,
category_1_score: 71.39172370958077,
category_1_count: 50,
id_field: 'user.name',
id_value: 'user-1',
@@ -547,6 +555,58 @@ export default ({ getService }: FtrProviderContext): void => {
]);
});
});
describe('@skipInServerless with asset criticality data', () => {
const assetCriticalityRoutes = assetCriticalityRouteHelpersFactory(supertest);
beforeEach(async () => {
await assetCriticalityRoutes.upsert({
id_field: 'host.name',
id_value: 'host-1',
criticality_level: 'very_important',
});
});
afterEach(async () => {
await cleanAssetCriticality({ log, es });
});
it('calculates and persists risk scores with additional criticality metadata and modifiers', async () => {
const documentId = uuidv4();
await indexListOfDocuments([
buildDocument({ host: { name: 'host-1' } }, documentId),
buildDocument({ host: { name: 'host-2' } }, documentId),
]);
await waitForAssetCriticalityToBePresent({ es, log });
const body = await getRiskScoreAfterRuleCreationAndExecution(documentId, {
alerts: 2,
});
expect(sanitizeScores(body.scores.host!)).to.eql([
{
criticality_level: 'very_important',
criticality_modifier: 2.0,
calculated_level: 'Unknown',
calculated_score: 21,
calculated_score_norm: 14.8830616583983,
category_1_count: 1,
category_1_score: 8.039816232771821,
id_field: 'host.name',
id_value: 'host-1',
},
{
calculated_level: 'Unknown',
calculated_score: 21,
calculated_score_norm: 8.039816232771823,
category_1_count: 1,
category_1_score: 8.039816232771821,
id_field: 'host.name',
id_value: 'host-2',
},
]);
});
});
});
});
};


@@ -23,6 +23,9 @@ import {
getRiskEngineTask,
waitForRiskEngineTaskToBeGone,
cleanRiskEngine,
assetCriticalityRouteHelpersFactory,
cleanAssetCriticality,
waitForAssetCriticalityToBePresent,
} from '../../../utils';
import { FtrProviderContext } from '../../../../../ftr_provider_context';
@@ -157,7 +160,8 @@ export default ({ getService }: FtrProviderContext): void => {
await riskEngineRoutes.disable();
});
describe('when task interval is modified', () => {
// Temporary, expected failure: See https://github.com/elastic/security-team/issues/8012
describe.skip('when task interval is modified', () => {
beforeEach(async () => {
await updateRiskEngineConfigSO({
attributes: {
@@ -179,8 +183,7 @@ export default ({ getService }: FtrProviderContext): void => {
});
});
// FLAKY: https://github.com/elastic/kibana/issues/171132
describe.skip('with some alerts containing hosts and others containing users', () => {
describe('with some alerts containing hosts and others containing users', () => {
let hostId: string;
let userId: string;
@@ -212,20 +215,68 @@ export default ({ getService }: FtrProviderContext): void => {
alerts: 20,
riskScore: 40,
});
await riskEngineRoutes.init();
});
it('@skipInQA calculates and persists risk scores for both types of entities', async () => {
await riskEngineRoutes.init();
await waitForRiskScoresToBePresent({ es, log, scoreCount: 20 });
const riskScores = await readRiskScores(es);
expect(riskScores.length).to.eql(20);
expect(riskScores.length).to.be.greaterThan(0);
const scoredIdentifiers = normalizeScores(riskScores).map(
({ id_field: idField }) => idField
);
expect(scoredIdentifiers.includes('host.name')).to.be(true);
expect(scoredIdentifiers.includes('user.name')).to.be(true);
expect(scoredIdentifiers).to.contain('host.name');
expect(scoredIdentifiers).to.contain('user.name');
});
context('@skipInServerless with asset criticality data', () => {
const assetCriticalityRoutes = assetCriticalityRouteHelpersFactory(supertest);
beforeEach(async () => {
await assetCriticalityRoutes.upsert({
id_field: 'host.name',
id_value: 'host-1',
criticality_level: 'very_important',
});
});
afterEach(async () => {
await cleanAssetCriticality({ log, es });
});
it('calculates risk scores with asset criticality data', async () => {
await waitForAssetCriticalityToBePresent({ es, log });
await riskEngineRoutes.init();
await waitForRiskScoresToBePresent({ es, log, scoreCount: 20 });
const riskScores = await readRiskScores(es);
expect(riskScores.length).to.be.greaterThan(0);
const assetCriticalityLevels = riskScores.map(
(riskScore) => riskScore.host?.risk.criticality_level
);
const assetCriticalityModifiers = riskScores.map(
(riskScore) => riskScore.host?.risk.criticality_modifier
);
expect(assetCriticalityLevels).to.contain('very_important');
expect(assetCriticalityModifiers).to.contain(2);
const scoreWithCriticality = riskScores.find((score) => score.host?.name === 'host-1');
expect(normalizeScores([scoreWithCriticality!])).to.eql([
{
id_field: 'host.name',
id_value: 'host-1',
criticality_level: 'very_important',
criticality_modifier: 2,
calculated_level: 'Moderate',
calculated_score: 79.81345973382406,
calculated_score_norm: 46.809565696393314,
category_1_count: 10,
category_1_score: 30.55645472198471,
},
]);
});
});
});
});


@@ -15,10 +15,11 @@ import {
ASSET_CRITICALITY_URL,
ASSET_CRITICALITY_PRIVILEGES_URL,
} from '@kbn/security-solution-plugin/common/constants';
import type { AssetCriticalityRecord } from '@kbn/security-solution-plugin/common/api/entity_analytics';
import type { Client } from '@elastic/elasticsearch';
import type { ToolingLog } from '@kbn/tooling-log';
import querystring from 'querystring';
import { routeWithNamespace } from '../../detections_response/utils';
import { routeWithNamespace, waitFor } from '../../detections_response/utils';
export const getAssetCriticalityIndex = (namespace?: string) =>
`.asset-criticality.asset-criticality-${namespace ?? 'default'}`;
@@ -123,3 +124,51 @@ export const assetCriticalityRouteHelpersFactoryNoAuth = (
.send()
.expect(200),
});
/**
* Function to read asset criticality records from ES. By default, it reads from the asset criticality index in the default space, but this can be overridden with the
* `index` parameter.
*
* @param {string[]} index - the index or indices to read criticality from.
* @param {number} size - the size parameter of the query
*/
export const readAssetCriticality = async (
es: Client,
index: string[] = [getAssetCriticalityIndex()],
size: number = 1000
): Promise<AssetCriticalityRecord[]> => {
const results = await es.search({
index,
size,
});
return results.hits.hits.map((hit) => hit._source as AssetCriticalityRecord);
};
/**
* Function to read asset criticality records from ES and wait for them to be
* present/readable. By default, it reads from the asset criticality index in the
* default space, but this can be overridden with the `index` parameter.
*
* @param {string[]} index - the index or indices to read asset criticality from.
* @param {number} docCount - the number of asset criticality docs to wait for. Defaults to 1.
*/
export const waitForAssetCriticalityToBePresent = async ({
es,
log,
index = [getAssetCriticalityIndex()],
docCount = 1,
}: {
es: Client;
log: ToolingLog;
index?: string[];
docCount?: number;
}): Promise<void> => {
await waitFor(
async () => {
const criticalities = await readAssetCriticality(es, index, docCount + 10);
return criticalities.length >= docCount;
},
'waitForAssetCriticalityToBePresent',
log
);
};
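For reference, a minimal way these helpers can be combined in a test; `supertest`, `es`, and `log` are assumed to be the usual FTR services already in scope:

```ts
// Illustrative only: upsert a criticality record, wait for it to be searchable, read it back.
const assetCriticalityRoutes = assetCriticalityRouteHelpersFactory(supertest);

await assetCriticalityRoutes.upsert({
  id_field: 'host.name',
  id_value: 'host-1',
  criticality_level: 'important',
});

await waitForAssetCriticalityToBePresent({ es, log });

const records = await readAssetCriticality(es);
// expect(records[0].criticality_level).to.eql('important');
```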


@@ -4,6 +4,7 @@
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
export * from './risk_engine';
export * from './get_risk_engine_stats';
export * from './asset_criticality';


@@ -37,11 +37,15 @@ import {
} from '../../detections_response/utils';
const sanitizeScore = (score: Partial<RiskScore>): Partial<RiskScore> => {
delete score['@timestamp'];
delete score.inputs;
delete score.notes;
// delete score.category_1_score;
return score;
const {
'@timestamp': timestamp,
inputs,
notes,
category_2_count: cat2Count,
category_2_score: cat2Score,
...rest
} = score;
return rest;
};
export const sanitizeScores = (scores: Array<Partial<RiskScore>>): Array<Partial<RiskScore>> =>