[Security Solution] [GenAi] Give the security AI assistant access to the current time (#211200)

## Summary

This PR adds a new tool that gives the LLM access to the current time.
The tool returns the time in the timezone configured in Kibana as well
as the UTC time.

Changes:
- Add time tool
- Also increased the speed of the assistant stream, making the assistant
feel snappier
([here](https://github.com/elastic/kibana/pull/211200/files#diff-d4dd2f3b250247285fee3300a6d38cf622f2724daa87947fe58111bae9d3d655R12)).
The reason for keeping a small delay (10 ms) is that it helps smooth
out the stream.
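
As an illustration only (not the PR's code; the names below are made up), the smoothing described above amounts to yielding each chunk and then pausing briefly before the next one:

```typescript
// Hypothetical sketch of the smoothing delay described above. Each parsed
// chunk is flushed and then the stream pauses for ~10 ms so tokens render at
// an even cadence rather than in bursts.
const SMOOTHING_DELAY_MS = 10;

const sleep = (ms: number): Promise<void> => new Promise((resolve) => setTimeout(resolve, ms));

async function* smoothStream(chunks: Iterable<string> | AsyncIterable<string>): AsyncGenerator<string> {
  for await (const chunk of chunks) {
    yield chunk;
    await sleep(SMOOTHING_DELAY_MS); // small pause keeps the stream feeling steady
  }
}

// Helper to drain the generator into a single string.
async function collect(chunks: Iterable<string>): Promise<string> {
  let out = '';
  for await (const piece of smoothStream(chunks)) {
    out += piece;
  }
  return out;
}
```

The trade-off is that a larger delay would smooth further but add visible latency; 10 ms per chunk is small enough to be imperceptible.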

<img width="500" alt="image"
src="https://github.com/user-attachments/assets/e613f9fb-a0f5-4559-88df-6d8ea0e5d042"
/>

## How to test
- Check that Stack Management > Advanced Settings > "Time zone" is set
to "Browser"
- Open the security assistant
- Ask "What is the current time?". You should get back the time in your
local timezone plus the equivalent GMT time (UTC and GMT are
equivalent)
- Go to Stack Management > Advanced Settings and set "Time zone" to a
different timezone (one with a different UTC offset).
- Go to the assistant and ask again, "What is the current time". You
should get back the time in the timezone that you just configured and
the UTC equivalent.
- Other questions to try out:
- "What was the time exactly one week ago? Rounded to the nearest
hour." The result should match the timezone configured in advanced
settings.
- "Write an esql query that gets 100 records from the .logs index from
the last week. Use the absolute time in the query." (may need to prompt
again to have the query include the absolute time)
- "When is my birthday?" The assistant responds with "I don't know, but
you can tell me". You reply with "It was exactly 3 weeks ago". The
assistant should create a KB document with the correct date.
 

## Considerations
- When asked "Which security labs content was published in the last 2
months", gemini-1-5-pro-002 often responds incorrectly
([trace](6bfddf7b-1225-4e97-ac9f-6cdf9158ac35?timeModel=%7B%22duration%22%3A%227d%22%7D&peek=4f5244a3-68fd-45e3-b1df-6c80e739377f)).
GPT4o performs better and does not return an incorrect result when asked
this question
([trace](6bfddf7b-1225-4e97-ac9f-6cdf9158ac35?timeModel=%7B%22duration%22%3A%227d%22%7D&peek=61bc4c12-d5ea-48be-8460-3e891d2e243b)).
- You will notice that the formatted time string contains the time in
the user's timezone and in UTC (e.g. `Current time: 14/02/2025,
00:33:12 UTC-07:00 (14/02/2025, 07:33:12 UTC+00:00)`). The reason for
this is that the weaker LLMs sometimes make mistakes when converting
from one timezone to another, so I have included both in the formatted
message. If the user's timezone is already UTC, the UTC time is not
repeated.
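
To make the dual-format rationale concrete, here is a hypothetical sketch using the built-in `Intl.DateTimeFormat` API (the PR itself uses `moment-timezone`; `formatBoth` is an illustrative name, not part of this change):

```typescript
// Hypothetical sketch (not the PR's implementation): format one instant both
// in the configured timezone and in UTC, and append the UTC copy only when
// the two renderings differ, mirroring the behaviour described above.
function formatBoth(now: Date, timeZone: string): string {
  const fmt = (tz: string): string =>
    new Intl.DateTimeFormat('en-GB', {
      dateStyle: 'short',
      timeStyle: 'medium',
      timeZone: tz,
    }).format(now);

  const local = fmt(timeZone);
  const utc = fmt('UTC');
  // When the configured timezone has no offset from UTC the strings match,
  // so the parenthesised UTC copy would be redundant and is dropped.
  return local === utc ? `Current time: ${local}` : `Current time: ${local} (${utc} UTC)`;
}
```

Handing the model both renderings of the same instant removes the need for it to do offset arithmetic itself, which is where the weaker models tend to slip.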

## How is the current time string formatted?

The formatted time string is added directly to the system prompt.
Below is the logic for how the string is formatted.

- If the user's Kibana timezone setting is "Browser"
1. and their browser timezone is not UTC, then the format is `Current
time: Thu, Feb 13, 2025 11:33 PM UTC-08:00 (7:33 AM UTC)` (where the
first timezone is the client timezone reported by the browser)
2. and their browser timezone is UTC, then the format is `Current time:
Fri, Feb 14, 2025 7:33 AM UTC+00:00`
- If the user's Kibana timezone setting is something other than "Browser"
1. and that timezone is not UTC-equivalent, then the format is `Current
time: Thu, Feb 13, 2025 11:33 PM UTC-08:00 (7:33 AM UTC)` (where the
first timezone is the one from the Kibana setting)
2. and that timezone is UTC-equivalent, then the format is `Current
time: Fri, Feb 14, 2025 7:33 AM UTC+00:00`
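
The timezone-selection rule spelled out in the list above can be sketched as follows (illustrative only; the PR's actual logic lives in `getFormattedTime`):

```typescript
// Illustrative sketch of the selection rule above: the Kibana "dateFormat:tz"
// setting wins unless it is "Browser", in which case the browser-reported
// timezone (screenContext) is used, with UTC as the final fallback.
function resolveTimezone(
  uiSettingsTimezone: string | undefined,
  screenContextTimezone: string | undefined
): string {
  return (uiSettingsTimezone === 'Browser' ? screenContextTimezone : uiSettingsTimezone) ?? 'UTC';
}
```

Note that a fixed Kibana timezone deliberately takes precedence over the browser's timezone, so all users of a space see the same times the space is configured for.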

### Checklist

Check that the PR satisfies the following conditions.

Reviewers should verify this PR satisfies this list as well.

- [x] Any text added follows [EUI's writing
guidelines](https://elastic.github.io/eui/#/guidelines/writing), uses
sentence case text and includes [i18n
support](https://github.com/elastic/kibana/blob/main/src/platform/packages/shared/kbn-i18n/README.md)
- [x]
[Documentation](https://www.elastic.co/guide/en/kibana/master/development-documentation.html)
was added for features that require explanation or tutorials
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
- [x] If a plugin configuration key changed, check if it needs to be
allowlisted in the cloud and added to the [docker
list](https://github.com/elastic/kibana/blob/main/src/dev/build/tasks/os_packages/docker_generator/resources/base/bin/kibana-docker)
- [x] This was checked for breaking HTTP API changes, and any breaking
changes have been approved by the breaking-change committee. The
`release_note:breaking` label should be applied in these situations.
- [x] [Flaky Test
Runner](https://ci-stats.kibana.dev/trigger_flaky_test_runner/1) was
used on any tests changed
- [x] The PR description includes the appropriate Release Notes section,
and the correct `release_note:*` label is applied per the
[guidelines](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)

### Identify risks

Does this PR introduce any risks? For example, consider risks like
hard-to-test bugs, performance regressions, or potential data loss.

Describe the risk, its severity, and mitigation for each identified
risk. Invite stakeholders and evaluate how to proceed before merging.

- [ ] [See some risk
examples](https://github.com/elastic/kibana/blob/main/RISK_MATRIX.mdx)
- [ ] ...

---------

Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
Commit 7dce6e6e01 (parent bbc3b451f1), authored by Kenneth Kreindler on 2025-02-28 13:04:12 +01:00, committed via GitHub.
26 changed files with 339 additions and 20 deletions

```diff
@@ -16,7 +16,7 @@
 import { z } from '@kbn/zod';
-import { NonEmptyString } from '../common_attributes.gen';
+import { NonEmptyString, ScreenContext } from '../common_attributes.gen';
 import { Replacements } from '../conversations/common_attributes.gen';
 export type ExecuteConnectorRequestParams = z.infer<typeof ExecuteConnectorRequestParams>;
@@ -42,6 +42,7 @@ export const ExecuteConnectorRequestBody = z.object({
   size: z.number().optional(),
   langSmithProject: z.string().optional(),
   langSmithApiKey: z.string().optional(),
+  screenContext: ScreenContext.optional(),
 });
 export type ExecuteConnectorRequestBodyInput = z.input<typeof ExecuteConnectorRequestBody>;
```

```diff
@@ -62,6 +62,8 @@ paths:
             type: string
           langSmithApiKey:
             type: string
+          screenContext:
+            $ref: '../common_attributes.schema.yaml#/components/schemas/ScreenContext'
       responses:
         '200':
           description: Successful static response
```

```diff
@@ -48,3 +48,14 @@ export type SortOrder = z.infer<typeof SortOrder>;
 export const SortOrder = z.enum(['asc', 'desc']);
 export type SortOrderEnum = typeof SortOrder.enum;
 export const SortOrderEnum = SortOrder.enum;
+
+/**
+ * User screen context
+ */
+export type ScreenContext = z.infer<typeof ScreenContext>;
+export const ScreenContext = z.object({
+  /**
+   * The local timezone of the user
+   */
+  timeZone: z.string().optional(),
+});
```

```diff
@@ -33,3 +33,11 @@ components:
       enum:
         - 'asc'
         - 'desc'
+    ScreenContext:
+      description: User screen context
+      type: object
+      properties:
+        timeZone:
+          description: The local timezone of the user
+          type: string
```

```diff
@@ -17,6 +17,7 @@
 import { z } from '@kbn/zod';
 import { Replacements } from '../conversations/common_attributes.gen';
+import { ScreenContext } from '../common_attributes.gen';
 export type PostEvaluateBody = z.infer<typeof PostEvaluateBody>;
 export const PostEvaluateBody = z.object({
@@ -29,6 +30,7 @@ export const PostEvaluateBody = z.object({
   langSmithApiKey: z.string().optional(),
   langSmithProject: z.string().optional(),
   replacements: Replacements.optional().default({}),
+  screenContext: ScreenContext.optional(),
   size: z.number().optional().default(20),
 });
```

```diff
@@ -79,6 +79,8 @@ components:
           replacements:
             $ref: "../conversations/common_attributes.schema.yaml#/components/schemas/Replacements"
             default: {}
+          screenContext:
+            $ref: '../common_attributes.schema.yaml#/components/schemas/ScreenContext'
           size:
             type: number
             default: 20
```

```diff
@@ -42,6 +42,9 @@ const fetchConnectorArgs: FetchConnectorExecuteAction = {
   message: 'This is a test',
   conversationId: 'test',
   replacements: {},
+  screenContext: {
+    timeZone: 'America/New_York',
+  },
 };
 const streamingDefaults = {
   method: 'POST',
@@ -73,7 +76,7 @@ describe('API tests', () => {
       '/internal/elastic_assistant/actions/connector/foo/_execute',
       {
         ...staticDefaults,
-        body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeAI","conversationId":"test","actionTypeId":".gen-ai","replacements":{}}',
+        body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeAI","conversationId":"test","actionTypeId":".gen-ai","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
       }
     );
   });
@@ -85,7 +88,7 @@ describe('API tests', () => {
       '/internal/elastic_assistant/actions/connector/foo/_execute',
       {
         ...streamingDefaults,
-        body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".gen-ai","replacements":{}}',
+        body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".gen-ai","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
       }
     );
   });
@@ -102,7 +105,7 @@ describe('API tests', () => {
       '/internal/elastic_assistant/actions/connector/foo/_execute',
       {
         ...streamingDefaults,
-        body: '{"message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".bedrock","replacements":{}}',
+        body: '{"message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".bedrock","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
       }
     );
   });
@@ -119,7 +122,7 @@ describe('API tests', () => {
       '/internal/elastic_assistant/actions/connector/foo/_execute',
       {
         ...streamingDefaults,
-        body: '{"message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".gemini","replacements":{}}',
+        body: '{"message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".gemini","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
       }
     );
   });
@@ -136,7 +139,7 @@ describe('API tests', () => {
       '/internal/elastic_assistant/actions/connector/foo/_execute',
       {
         ...streamingDefaults,
-        body: '{"message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".bedrock","replacements":{}}',
+        body: '{"message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".bedrock","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
       }
     );
   });
@@ -156,7 +159,7 @@ describe('API tests', () => {
       '/internal/elastic_assistant/actions/connector/foo/_execute',
       {
         ...staticDefaults,
-        body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeAI","conversationId":"test","actionTypeId":".gen-ai","replacements":{"auuid":"real.hostname"},"alertsIndexPattern":".alerts-security.alerts-default","size":30}',
+        body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeAI","conversationId":"test","actionTypeId":".gen-ai","replacements":{"auuid":"real.hostname"},"screenContext":{"timeZone":"America/New_York"},"alertsIndexPattern":".alerts-security.alerts-default","size":30}',
       }
     );
   });
```

```diff
@@ -9,8 +9,9 @@ import { HttpSetup } from '@kbn/core/public';
 import {
   API_VERSIONS,
   ApiConfig,
-  MessageMetadata,
   Replacements,
+  ScreenContext,
+  MessageMetadata,
 } from '@kbn/elastic-assistant-common';
 import { API_ERROR } from '../translations';
 import { getOptionalRequestParams } from '../helpers';
@@ -29,6 +30,7 @@ export interface FetchConnectorExecuteAction {
   signal?: AbortSignal | undefined;
   size?: number;
   traceOptions?: TraceOptions;
+  screenContext: ScreenContext;
 }
 export interface FetchConnectorExecuteResponse {
@@ -53,6 +55,7 @@ export const fetchConnectorExecuteAction = async ({
   signal,
   size,
   traceOptions,
+  screenContext,
 }: FetchConnectorExecuteAction): Promise<FetchConnectorExecuteResponse> => {
   // TODO add streaming support for gemini with langchain on
   const isStream = assistantStreamingEnabled;
@@ -73,6 +76,7 @@ export const fetchConnectorExecuteAction = async ({
       traceOptions?.langSmithProject === '' ? undefined : traceOptions?.langSmithProject,
     langSmithApiKey:
       traceOptions?.langSmithApiKey === '' ? undefined : traceOptions?.langSmithApiKey,
+    screenContext,
     ...optionalRequestParams,
   };
```

```diff
@@ -34,6 +34,7 @@ import type {
 } from '@kbn/elastic-assistant-common';
 import { isEmpty } from 'lodash/fp';
+import moment from 'moment';
 import * as i18n from './translations';
 import { useAssistantContext } from '../../../assistant_context';
 import { DEFAULT_ATTACK_DISCOVERY_MAX_ALERTS } from '../../../assistant_context/constants';
@@ -210,6 +211,9 @@ export const EvaluationSettings: React.FC = React.memo(() => {
       langSmithProject,
       runName,
       size: Number(size),
+      screenContext: {
+        timeZone: moment.tz.guess(),
+      },
     };
     performEvaluation(evalParams);
   }, [
```

```diff
@@ -8,6 +8,7 @@
 import { HttpSetup } from '@kbn/core-http-browser';
 import { useCallback, useRef, useState } from 'react';
 import { ApiConfig, Replacements } from '@kbn/elastic-assistant-common';
+import moment from 'moment';
 import { useAssistantContext } from '../../assistant_context';
 import { fetchConnectorExecuteAction, FetchConnectorExecuteResponse } from '../api';
 import * as i18n from './translations';
@@ -65,6 +66,9 @@ export const useSendMessage = (): UseSendMessage => {
         signal: abortController.current.signal,
         size: knowledgeBase.latestAlerts,
         traceOptions,
+        screenContext: {
+          timeZone: moment.tz.guess(),
+        },
       });
     } finally {
       clearTimeout(timeoutId);
```

```diff
@@ -31,3 +31,5 @@ export const CAPABILITIES = `${BASE_PATH}/capabilities`;
 Licensing requirements
 */
 export const MINIMUM_AI_ASSISTANT_LICENSE = 'enterprise' as const;
+
+export const DEFAULT_DATE_FORMAT_TZ = 'dateFormat:tz' as const;
```

```diff
@@ -16,6 +16,7 @@ import {
   ExecuteConnectorRequestBody,
   Message,
   Replacements,
+  ScreenContext,
 } from '@kbn/elastic-assistant-common';
 import { StreamResponseWithHeaders } from '@kbn/ml-response-stream/server';
 import { PublicMethodsOf } from '@kbn/utility-types';
@@ -24,6 +25,7 @@ import { AnalyticsServiceSetup } from '@kbn/core-analytics-server';
 import { TelemetryParams } from '@kbn/langchain/server/tracers/telemetry/telemetry_tracer';
 import type { LlmTasksPluginStart } from '@kbn/llm-tasks-plugin/server';
 import { SavedObjectsClientContract } from '@kbn/core-saved-objects-api-server';
+import { CoreRequestHandlerContext } from '@kbn/core/server';
 import { ResponseBody } from '../types';
 import type { AssistantTool } from '../../../types';
 import { AIAssistantKnowledgeBaseDataClient } from '../../../ai_assistant_data_clients/knowledge_base';
@@ -50,6 +52,7 @@ export interface AgentExecutorParams<T extends boolean> {
   connectorId: string;
   conversationId?: string;
   contentReferencesStore: ContentReferencesStore;
+  core: CoreRequestHandlerContext;
   dataClients?: AssistantDataClients;
   esClient: ElasticsearchClient;
   langChainMessages: BaseMessage[];
@@ -65,6 +68,7 @@ export interface AgentExecutorParams<T extends boolean> {
   request: KibanaRequest<unknown, unknown, ExecuteConnectorRequestBody>;
   response?: KibanaResponseFactory;
   savedObjectsClient: SavedObjectsClientContract;
+  screenContext?: ScreenContext;
   size?: number;
   systemPrompt?: string;
   telemetry: AnalyticsServiceSetup;
```

```diff
@@ -42,6 +42,7 @@ export interface GetDefaultAssistantGraphParams {
   signal?: AbortSignal;
   tools: StructuredTool[];
   replacements: Replacements;
+  getFormattedTime?: () => string;
 }
 export type DefaultAssistantGraph = ReturnType<typeof getDefaultAssistantGraph>;
@@ -57,6 +58,7 @@ export const getDefaultAssistantGraph = ({
   signal,
   tools,
   replacements,
+  getFormattedTime,
 }: GetDefaultAssistantGraphParams) => {
   try {
     // Default graph state
@@ -125,6 +127,10 @@
       reducer: (x: string, y?: string) => y ?? x,
       default: () => '',
     }),
+    formattedTime: Annotation<string>({
+      reducer: (x: string, y?: string) => y ?? x,
+      default: getFormattedTime ?? (() => ''),
+    }),
   });
   // Default node parameters
```

```diff
@@ -25,6 +25,7 @@ import { AssistantTool, AssistantToolParams } from '../../../..';
 import { promptGroupId as toolsGroupId } from '../../../prompt/tool_prompts';
 import { promptDictionary } from '../../../prompt';
 import { promptGroupId } from '../../../prompt/local_prompt_object';
 jest.mock('./graph');
 jest.mock('./helpers');
 jest.mock('langchain/agents');
@@ -85,6 +86,13 @@ describe('callAssistantGraph', () => {
     traceOptions: {},
     responseLanguage: 'English',
     contentReferencesStore: newContentReferencesStoreMock(),
+    core: {
+      uiSettings: {
+        client: {
+          get: jest.fn().mockResolvedValue('Browser'),
+        },
+      },
+    },
   } as unknown as AgentExecutorParams<boolean>;
   beforeEach(() => {
```

```diff
@@ -19,7 +19,7 @@ import { getPrompt, resolveProviderAndModel } from '@kbn/security-ai-prompts';
 import { isEmpty } from 'lodash';
 import { localToolPrompts, promptGroupId as toolsGroupId } from '../../../prompt/tool_prompts';
 import { promptGroupId } from '../../../prompt/local_prompt_object';
-import { getModelOrOss } from '../../../prompt/helpers';
+import { getFormattedTime, getModelOrOss } from '../../../prompt/helpers';
 import { getPrompt as localGetPrompt, promptDictionary } from '../../../prompt';
 import { getLlmClass } from '../../../../routes/utils';
 import { EsAnonymizationFieldsSchema } from '../../../../ai_assistant_data_clients/anonymization_fields/types';
@@ -30,6 +30,7 @@
 import { GraphInputs } from './types';
 import { getDefaultAssistantGraph } from './graph';
 import { invokeGraph, streamGraph } from './helpers';
 import { transformESSearchToAnonymizationFields } from '../../../../ai_assistant_data_clients/anonymization_fields/helpers';
+import { DEFAULT_DATE_FORMAT_TZ } from '../../../../../common/constants';
 export const callAssistantGraph: AgentExecutor<true | false> = async ({
   abortSignal,
@@ -39,6 +40,7 @@ export const callAssistantGraph: AgentExecutor<true | false> = async ({
   connectorId,
   contentReferencesStore,
   conversationId,
+  core,
   dataClients,
   esClient,
   inference,
@@ -53,6 +55,7 @@ export const callAssistantGraph: AgentExecutor<true | false> = async ({
   replacements,
   request,
   savedObjectsClient,
+  screenContext,
   size,
   systemPrompt,
   telemetry,
@@ -218,6 +221,11 @@ export const callAssistantGraph: AgentExecutor<true | false> = async ({
         actionsClient,
       })
     : { provider: llmType };
+  const uiSettingsDateFormatTimezone = await core.uiSettings.client.get<string>(
+    DEFAULT_DATE_FORMAT_TZ
+  );
   const assistantGraph = getDefaultAssistantGraph({
     agentRunnable,
     dataClients,
@@ -230,6 +238,11 @@ export const callAssistantGraph: AgentExecutor<true | false> = async ({
     replacements,
     // some chat models (bedrock) require a signal to be passed on agent invoke rather than the signal passed to the chat model
     ...(llmType === 'bedrock' ? { signal: abortSignal } : {}),
+    getFormattedTime: () =>
+      getFormattedTime({
+        screenContextTimezone: request.body.screenContext?.timeZone,
+        uiSettingsDateFormatTimezone,
+      }),
   });
   const inputs: GraphInputs = {
     responseLanguage,
```

@@ -0,0 +1,83 @@ (new file)

```ts
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License
 * 2.0; you may not use this file except in compliance with the Elastic License
 * 2.0.
 */

import { runAgent, RunAgentParams } from './run_agent';
import { actionsClientMock } from '@kbn/actions-plugin/server/mocks';
import { AgentState } from '../types';
import { loggerMock } from '@kbn/logging-mocks';
import { savedObjectsClientMock } from '@kbn/core/server/mocks';
import { AIMessage } from '@langchain/core/messages';

jest.mock('../../../../prompt', () => ({
  getPrompt: jest.fn(),
  promptDictionary: {},
}));

const agentState = {
  messages: [new AIMessage({ content: 'This message contains a reference {reference(1234)}' })],
  formattedTime: 'mockFormattedTime',
} as unknown as AgentState;

const invokeMock = jest.fn().mockResolvedValue({});

const testParams = {
  actionsClient: actionsClientMock.create(),
  logger: loggerMock.create(),
  savedObjectsClient: savedObjectsClientMock.create(),
  state: agentState,
  agentRunnable: {
    withConfig: jest.fn().mockReturnValue({
      invoke: invokeMock,
    }),
  },
  config: undefined,
  kbDataClient: {
    getRequiredKnowledgeBaseDocumentEntries: jest.fn().mockResolvedValue([{ text: 'foobar' }]),
  },
} as unknown as RunAgentParams;

describe('runAgent', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  it('invoked with formattedTime placeholder', async () => {
    await runAgent(testParams);
    expect(invokeMock).toHaveBeenCalledTimes(1);
    expect(invokeMock).toHaveBeenCalledWith(
      expect.objectContaining({
        formattedTime: 'mockFormattedTime',
      }),
      undefined
    );
  });

  it('invoked with knowledgeHistory placeholder', async () => {
    await runAgent(testParams);
    expect(invokeMock).toHaveBeenCalledTimes(1);
    expect(invokeMock).toHaveBeenCalledWith(
      expect.objectContaining({
        knowledge_history: 'Knowledge History:\n["foobar"]',
      }),
      undefined
    );
  });

  it('invoked with sanitized chat history', async () => {
    await runAgent(testParams);
    expect(invokeMock).toHaveBeenCalledTimes(1);
    expect(invokeMock).toHaveBeenCalledWith(
      expect.objectContaining({
        chat_history: expect.arrayContaining([
          expect.objectContaining({
            content: 'This message contains a reference ',
          }),
        ]),
      }),
      undefined
    );
  });
});
```

```diff
@@ -43,6 +43,7 @@ export interface AgentState extends AgentStateBase {
   connectorId: string;
   conversation: ConversationResponse | undefined;
   conversationId: string;
+  formattedTime: string;
 }
 export interface NodeParamsBase {
```

@@ -0,0 +1,107 @@ (new file)

```ts
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License
 * 2.0; you may not use this file except in compliance with the Elastic License
 * 2.0.
 */

import { getFormattedTime } from './helpers';

describe('helper', () => {
  describe('getCurrentTimeForPrompt', () => {
    beforeEach(() => {
      jest.clearAllMocks();
      jest
        .useFakeTimers()
        .setSystemTime(new Date('Fri Feb 14 2025 07:33:12 UTC+0000 (Greenwich Mean Time)'));
    });

    it.each([
      // kibana settings timezone and no screen context timezone
      ['Browser', undefined, 'Current time: Fri, Feb 14, 2025 7:33 AM UTC+00:00'],
      [undefined, undefined, 'Current time: Fri, Feb 14, 2025 7:33 AM UTC+00:00'],
      ['Europe/Zurich', undefined, 'Current time: Fri, Feb 14, 2025 8:33 AM UTC+01:00 (7:33 AM UTC)'],
      ['Europe/Warsaw', undefined, 'Current time: Fri, Feb 14, 2025 8:33 AM UTC+01:00 (7:33 AM UTC)'],
      ['America/Denver', undefined, 'Current time: Fri, Feb 14, 2025 12:33 AM UTC-07:00 (7:33 AM UTC)'],
      ['MST', undefined, 'Current time: Fri, Feb 14, 2025 12:33 AM UTC-07:00 (7:33 AM UTC)'],
      ['America/Los_Angeles', undefined, 'Current time: Thu, Feb 13, 2025 11:33 PM UTC-08:00 (7:33 AM UTC)'],
      // Custom kibana settings timezone and screen context timezone
      ['Europe/Zurich', 'America/Denver', 'Current time: Fri, Feb 14, 2025 8:33 AM UTC+01:00 (7:33 AM UTC)'],
      ['Europe/Warsaw', 'America/Denver', 'Current time: Fri, Feb 14, 2025 8:33 AM UTC+01:00 (7:33 AM UTC)'],
      ['America/Denver', 'Europe/Warsaw', 'Current time: Fri, Feb 14, 2025 12:33 AM UTC-07:00 (7:33 AM UTC)'],
      ['MST', 'Europe/Warsaw', 'Current time: Fri, Feb 14, 2025 12:33 AM UTC-07:00 (7:33 AM UTC)'],
      ['America/Los_Angeles', 'Europe/Warsaw', 'Current time: Thu, Feb 13, 2025 11:33 PM UTC-08:00 (7:33 AM UTC)'],
      // screen context timezone and Browser kibana setting timezone
      ['Browser', 'Europe/London', 'Current time: Fri, Feb 14, 2025 7:33 AM UTC+00:00'],
      ['Browser', 'Europe/Zurich', 'Current time: Fri, Feb 14, 2025 8:33 AM UTC+01:00 (7:33 AM UTC)'],
      ['Browser', 'Europe/Warsaw', 'Current time: Fri, Feb 14, 2025 8:33 AM UTC+01:00 (7:33 AM UTC)'],
      ['Browser', 'America/Denver', 'Current time: Fri, Feb 14, 2025 12:33 AM UTC-07:00 (7:33 AM UTC)'],
      ['Browser', 'MST', 'Current time: Fri, Feb 14, 2025 12:33 AM UTC-07:00 (7:33 AM UTC)'],
      ['Browser', 'America/Los_Angeles', 'Current time: Thu, Feb 13, 2025 11:33 PM UTC-08:00 (7:33 AM UTC)'],
    ])(
      'when timezone from kibana settings is "%s" and screenContext.timezone is "%s", then result is "%s"',
      async (
        uiSettingsDateFormatTimezone: string | undefined,
        screenContextTimezone: string | undefined,
        expectedResult: string
      ) => {
        const result = getFormattedTime({
          screenContextTimezone,
          uiSettingsDateFormatTimezone,
        });
        expect(result).toEqual(expectedResult);
      }
    );
  });
});
```

```diff
@@ -4,6 +4,8 @@
  * 2.0; you may not use this file except in compliance with the Elastic License
  * 2.0.
  */
+import { ScreenContext } from '@kbn/elastic-assistant-common';
+import moment from 'moment-timezone';
 /**
  * use oss as model when using openai and oss
@@ -18,3 +20,34 @@ export const getModelOrOss = (
   isOssModel?: boolean,
   model?: string
 ): string | undefined => (llmType === 'openai' && isOssModel ? 'oss' : model);
+
+const TIME_FORMAT = 'llll [UTC]Z';
+const UTC_CONVERSION_TIME_FORMAT = 'LT [UTC]';
+
+export const getFormattedTime = ({
+  screenContextTimezone,
+  uiSettingsDateFormatTimezone,
+}: {
+  screenContextTimezone: ScreenContext['timeZone'];
+  uiSettingsDateFormatTimezone: string | undefined; // From core.uiSettings.client.get('dateFormat:tz')
+}) => {
+  const currentTimezone: string =
+    (uiSettingsDateFormatTimezone === 'Browser'
+      ? screenContextTimezone
+      : uiSettingsDateFormatTimezone) ?? 'UTC';
+  const now = new Date();
+  const currentFormatter = moment.tz(now, currentTimezone);
+  const utcFormatter = moment.tz(now, 'UTC');
+  // If the local timezone is different from UTC, we should show the UTC time as well
+  const utcConversionRequired = currentFormatter.format('[UTC]Z') !== utcFormatter.format('[UTC]Z');
+  const currentTime = currentFormatter.format(TIME_FORMAT);
+  const utcConversion = utcConversionRequired
+    ? utcFormatter.format(UTC_CONVERSION_TIME_FORMAT)
+    : undefined;
+  return `Current time: ${currentTime} ${utcConversion ? `(${utcConversion})` : ''}`.trim();
+};
```

```diff
@@ -14,10 +14,14 @@ import {
 describe('prompts', () => {
   it.each([
-    [DEFAULT_SYSTEM_PROMPT, 'Annotate your answer with relevant citations', 1],
-    [GEMINI_SYSTEM_PROMPT, 'Annotate your answer with relevant citations', 1],
-    [BEDROCK_SYSTEM_PROMPT, 'Annotate your answer with relevant citations', 1],
-    [STRUCTURED_SYSTEM_PROMPT, 'Annotate your answer with relevant citations', 1],
+    [DEFAULT_SYSTEM_PROMPT, 'Annotate your answer with the provided citations', 1],
+    [GEMINI_SYSTEM_PROMPT, 'Annotate your answer with the provided citations', 1],
+    [BEDROCK_SYSTEM_PROMPT, 'Annotate your answer with the provided citations', 1],
+    [STRUCTURED_SYSTEM_PROMPT, 'Annotate your answer with the provided citations', 1],
+    [DEFAULT_SYSTEM_PROMPT, '{formattedTime}', 1],
+    [GEMINI_SYSTEM_PROMPT, '{formattedTime}', 1],
+    [BEDROCK_SYSTEM_PROMPT, '{formattedTime}', 1],
+    [STRUCTURED_SYSTEM_PROMPT, '{formattedTime}', 1],
     [DEFAULT_SYSTEM_PROMPT, 'You are a security analyst', 1],
     [GEMINI_SYSTEM_PROMPT, 'You are an assistant', 1],
     [BEDROCK_SYSTEM_PROMPT, 'You are a security analyst', 1],
```

```diff
@@ -7,14 +7,14 @@
 export const KNOWLEDGE_HISTORY =
   'If available, use the Knowledge History provided to try and answer the question. If not provided, you can try and query for additional knowledge via the KnowledgeBaseRetrievalTool.';
-export const INCLUDE_CITATIONS = `\n\nAnnotate your answer with relevant citations. Here are some example responses with citations: \n1. "Machine learning is increasingly used in cyber threat detection. {{reference(prSit)}}" \n2. "The alert has a risk score of 72. {{reference(OdRs2)}}"\n\nOnly use the citations returned by tools\n\n`;
-export const DEFAULT_SYSTEM_PROMPT = `You are a security analyst and expert in resolving security incidents. Your role is to assist by answering questions about Elastic Security. Do not answer questions unrelated to Elastic Security. ${KNOWLEDGE_HISTORY} ${INCLUDE_CITATIONS}`;
+export const INCLUDE_CITATIONS = `\n\nAnnotate your answer with the provided citations. Here are some example responses with citations: \n1. "Machine learning is increasingly used in cyber threat detection. {{reference(prSit)}}" \n2. "The alert has a risk score of 72. {{reference(OdRs2)}}"\n\nOnly use the citations returned by tools\n\n`;
+export const DEFAULT_SYSTEM_PROMPT = `You are a security analyst and expert in resolving security incidents. Your role is to assist by answering questions about Elastic Security. Do not answer questions unrelated to Elastic Security. ${KNOWLEDGE_HISTORY} ${INCLUDE_CITATIONS} \n{formattedTime}`;
 // system prompt from @afirstenberg
 const BASE_GEMINI_PROMPT =
   'You are an assistant that is an expert at using tools and Elastic Security, doing your best to use these tools to answer questions or follow instructions. It is very important to use tools to answer the question or follow the instructions rather than coming up with your own answer. Tool calls are good. Sometimes you may need to make several tool calls to accomplish the task or get an answer to the question that was asked. Use as many tool calls as necessary.';
 const KB_CATCH =
   'If the knowledge base tool gives empty results, do your best to answer the question from the perspective of an expert security analyst.';
-export const GEMINI_SYSTEM_PROMPT = `${BASE_GEMINI_PROMPT} ${INCLUDE_CITATIONS} ${KB_CATCH}`;
+export const GEMINI_SYSTEM_PROMPT = `${BASE_GEMINI_PROMPT} ${INCLUDE_CITATIONS} ${KB_CATCH} \n{formattedTime}`;
 export const BEDROCK_SYSTEM_PROMPT = `${DEFAULT_SYSTEM_PROMPT} Use tools as often as possible, as they have access to the latest data and syntax. Never return <thinking> tags in the response, but make sure to include <result> tags content in the response. Do not reflect on the quality of the returned search results in your response. ALWAYS return the exact response from NaturalLanguageESQLTool verbatim in the final response, without adding further description.`;
 export const GEMINI_USER_PROMPT = `Now, always using the tools at your disposal, step by step, come up with a response to this request:\n\n`;
@@ -72,7 +72,7 @@
 Action:
 "action_input": "Final response to human"}}
-Begin! Reminder to ALWAYS respond with a valid json blob of a single action with no additional output. When using tools, ALWAYS input the expected JSON schema args. Your answer will be parsed as JSON, so never use double quotes within the output and instead use backticks. Single quotes may be used, such as apostrophes. Response format is Action:\`\`\`$JSON_BLOB\`\`\`then Observation`;
+Begin! Reminder to ALWAYS respond with a valid json blob of a single action with no additional output. When using tools, ALWAYS input the expected JSON schema args. Your answer will be parsed as JSON, so never use double quotes within the output and instead use backticks. Single quotes may be used, such as apostrophes. Response format is Action:\`\`\`$JSON_BLOB\`\`\`then Observation. \n{formattedTime}`;
 export const ATTACK_DISCOVERY_DEFAULT =
   "You are a cyber security analyst tasked with analyzing security events from Elastic Security to identify and report on potential cyber attacks or progressions. Your report should focus on high-risk incidents that could severely impact the organization, rather than isolated alerts. Present your findings in a way that can be easily understood by anyone, regardless of their technical expertise, as if you were briefing the CISO. Break down your response into sections based on timing, hosts, and users involved. When correlating alerts, use kibana.alert.original_time when it's available, otherwise use @timestamp. Include appropriate context about the affected hosts and users. Describe how the attack progression might have occurred and, if feasible, attribute it to known threat groups. Prioritize high and critical alerts, but include lower-severity alerts if desired. In the description field, provide as much detail as possible, in a bulleted list explaining any attack progressions. Accuracy is of utmost importance. You MUST escape all JSON special characters (i.e. backslashes, double quotes, newlines, tabs, carriage returns, backspaces, and form feeds).";
```


@@ -35,7 +35,7 @@ import {
 import { omit } from 'lodash/fp';
 import { localToolPrompts, promptGroupId as toolsGroupId } from '../../lib/prompt/tool_prompts';
 import { promptGroupId } from '../../lib/prompt/local_prompt_object';
-import { getModelOrOss } from '../../lib/prompt/helpers';
+import { getFormattedTime, getModelOrOss } from '../../lib/prompt/helpers';
 import { getAttackDiscoveryPrompts } from '../../lib/attack_discovery/graphs/default_attack_discovery_graph/nodes/helpers/prompts';
 import {
   formatPrompt,
@@ -56,6 +56,7 @@ import {
 } from '../../lib/langchain/graphs/default_assistant_graph/graph';
 import { getLlmClass, getLlmType, isOpenSourceModel } from '../utils';
 import { getGraphsFromNames } from './get_graphs_from_names';
+import { DEFAULT_DATE_FORMAT_TZ } from '../../../common/constants';
 const DEFAULT_SIZE = 20;
 const ROUTE_HANDLER_TIMEOUT = 10 * 60 * 1000; // 10 * 60 seconds = 10 minutes
@@ -377,6 +378,10 @@ export const postEvaluateRoute = (
             streamRunnable: false,
           });
+          const uiSettingsDateFormatTimezone = await ctx.core.uiSettings.client.get<string>(
+            DEFAULT_DATE_FORMAT_TZ
+          );
           return {
             connectorId: connector.id,
             name: `${runName} - ${connector.name}`,
@@ -391,6 +396,11 @@
             savedObjectsClient,
             tools,
             replacements: {},
+            getFormattedTime: () =>
+              getFormattedTime({
+                screenContextTimezone: request.body.screenContext?.timeZone,
+                uiSettingsDateFormatTimezone,
+              }),
           }),
         };
       })
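The `getFormattedTime` helper itself is imported from `../../lib/prompt/helpers` and its body is not part of this diff. A minimal sketch of what such a helper might do, assuming it prefers the client-reported timezone when the Kibana setting is "Browser" and always includes the UTC equivalent (names and fallback logic here are assumptions, not the actual implementation):

```typescript
// Hypothetical sketch of a getFormattedTime-style helper. Assumed behavior:
// when the dateFormat:tz advanced setting is "Browser", fall back to the
// timezone the client reported via screen context; always report UTC too.
interface GetFormattedTimeArgs {
  screenContextTimezone?: string; // e.g. 'Europe/Amsterdam', sent by the browser
  uiSettingsDateFormatTimezone?: string; // Kibana's timezone advanced setting
}

export const getFormattedTime = ({
  screenContextTimezone,
  uiSettingsDateFormatTimezone,
}: GetFormattedTimeArgs): string => {
  const timezone =
    !uiSettingsDateFormatTimezone || uiSettingsDateFormatTimezone === 'Browser'
      ? screenContextTimezone ?? 'UTC'
      : uiSettingsDateFormatTimezone;

  const now = new Date();
  // Intl handles IANA timezone names without extra dependencies.
  const local = now.toLocaleString('en-US', { timeZone: timezone, timeZoneName: 'short' });
  return `Current time: ${local} (${timezone}); UTC equivalent: ${now.toISOString()}`;
};
```

This keeps the tool's output self-describing: the LLM sees both the configured zone and UTC, which is what the test plan above exercises.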


@@ -25,6 +25,7 @@ import {
   ContentReferencesStore,
   ContentReferences,
   MessageMetadata,
+  ScreenContext,
 } from '@kbn/elastic-assistant-common';
 import { ILicense } from '@kbn/licensing-plugin/server';
 import { i18n } from '@kbn/i18n';
@@ -252,6 +253,7 @@ export interface LangChainExecuteParams {
   response: KibanaResponseFactory;
   responseLanguage?: string;
   savedObjectsClient: SavedObjectsClientContract;
+  screenContext?: ScreenContext;
   systemPrompt?: string;
 }
 export const langChainExecute = async ({
@@ -277,6 +279,7 @@ export const langChainExecute = async ({
   responseLanguage,
   isStream = true,
   savedObjectsClient,
+  screenContext,
   systemPrompt,
 }: LangChainExecuteParams) => {
   // Fetch any tools registered by the request's originating plugin
@@ -318,6 +321,7 @@
     abortSignal,
     dataClients,
     alertsIndexPattern: request.body.alertsIndexPattern,
+    core: context.core,
     actionsClient,
     assistantTools,
     conversationId,
@@ -337,6 +341,7 @@
     replacements,
     responseLanguage,
     savedObjectsClient,
+    screenContext,
     size: request.body.size,
     systemPrompt,
     telemetry,


@@ -88,6 +88,7 @@ export const postActionsConnectorExecuteRoute = (
     let newMessage: Pick<Message, 'content' | 'role'> | undefined;
     const conversationId = request.body.conversationId;
     const actionTypeId = request.body.actionTypeId;
+    const screenContext = request.body.screenContext;
     const connectorId = decodeURIComponent(request.params.connectorId);
     // if message is undefined, it means the user is regenerating a message from the stored conversation
@@ -163,6 +164,7 @@
       response,
       telemetry,
       savedObjectsClient,
+      screenContext,
       systemPrompt,
       ...(productDocsAvailable ? { llmTasks: ctx.elasticAssistant.llmTasks } : {}),
     });
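The `screenContext` threaded through these routes originates in the browser. The exact `ScreenContext` shape lives in `@kbn/elastic-assistant-common` and is not shown in this diff; a hedged sketch of how a client might populate the one field this PR relies on (the interface and function name here are illustrative):

```typescript
// Illustrative client-side sketch: capture the browser's IANA timezone so
// server-side prompt building can format times for the user's locale.
// ScreenContext is assumed to carry at least a timeZone field.
interface ScreenContext {
  timeZone?: string;
}

export const getBrowserScreenContext = (): ScreenContext => ({
  // Resolves the runtime's IANA timezone, e.g. 'Europe/Berlin'.
  timeZone: Intl.DateTimeFormat().resolvedOptions().timeZone,
});
```

Sending the timezone from the client is what makes the "Browser" advanced setting resolvable on the server, where no browser timezone is otherwise available.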


@@ -55,7 +55,7 @@
     "@kbn/product-doc-base-plugin",
     "@kbn/core-saved-objects-api-server-mocks",
     "@kbn/security-ai-prompts",
-    "@kbn/datemath"
+    "@kbn/datemath",
   ],
   "exclude": [
     "target/**/*",


@@ -9,7 +9,7 @@ import { concatMap, delay, finalize, Observable, of, scan, timestamp } from 'rxjs';
 import type { Dispatch, SetStateAction } from 'react';
 import type { PromptObservableState } from './types';
 import { API_ERROR } from '../translations';
-const MIN_DELAY = 35;
+const MIN_DELAY = 10;
 interface StreamObservable {
   isError: boolean;
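Lowering `MIN_DELAY` from 35 ms to 10 ms makes the rendered stream noticeably snappier while keeping a small constant gap that smooths bursty chunk delivery. The actual code paces chunks through an RxJS pipeline (`concatMap` plus `delay`, per the imports above); the pacing idea can be sketched without RxJS as an async generator (illustrative only, not the Kibana observable):

```typescript
// Illustrative sketch (not the RxJS pipeline used in Kibana): release
// chunks to the consumer no faster than one per MIN_DELAY milliseconds.
const MIN_DELAY = 10;

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

export async function* paceChunks(chunks: Iterable<string>): AsyncGenerator<string> {
  for (const chunk of chunks) {
    yield chunk; // hand the chunk to the UI immediately...
    await sleep(MIN_DELAY); // ...then wait briefly before releasing the next
  }
}
```

The trade-off is the one the PR description calls out: 0 ms would render chunks in the irregular bursts they arrive in, while a small fixed gap evens out the perceived typing rhythm.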