[8.x] [Attack discovery] Fix error handling in LM studio (#213855) (#214042)

# Backport

This will backport the following commits from `main` to `8.x`:
- [[Attack discovery] Fix error handling in LM studio
(#213855)](https://github.com/elastic/kibana/pull/213855)

<!--- Backport version: 9.6.6 -->

### Questions?
Please refer to the [Backport tool documentation](https://github.com/sorenlouv/backport)

<!-- BACKPORT metadata: author Patryk Kopyciński <contact@patrykkopycinski.com>; source commit 0b9cceb57413ee84c2b951a65d1c8b66523fbd87, merged to `main` 2025-03-12T02:06:48Z; source PR #213855 (v9.1.0); labels: bug, release_note:skip, backport:prev-major, backport:current-major, Team:Security Generative AI, Feature:Attack Discovery -->

## Summary

Errors were not properly propagated to the user: instead of a meaningful message, we displayed just `API Error`.

<img width="1813" alt="Screenshot 2025-03-11 at 03 47 59" src="https://github.com/user-attachments/assets/8d059159-f020-4944-a463-b10799e7fa46" />

Steps to reproduce follow; thank you @andrew-goldstein 🙇

**Desk testing**

To reproduce:

1. In LM Studio, download the `MLX` variant (optimized for Mac) of `Llama-3.2-3B-Instruct-4bit`:

   ```
   mlx-community/Llama-3.2-3B-Instruct-4bit
   ```

2. Configure the model to have a context length of `131072` tokens, as illustrated by the screenshot below:

   ![context_length](https://github.com/user-attachments/assets/505f64af-6d03-4f66-a485-7b25ebc4cae2)

3. Serve ONLY the model above in LM Studio (ensure no other models are running in LM Studio), as illustrated by the screenshot below:

   ![one_model_running](https://github.com/user-attachments/assets/af29bea5-4cc3-401c-87d8-4b5778acdfe6)

4. Configure a connector via the details in <https://www.elastic.co/guide/en/security/current/connect-to-byo-llm.html>, but change:

   ```
   local-model
   ```

   to the name of the model when configuring the connector:

   ```
   llama-3.2-3b-instruct
   ```

   as illustrated by the screenshot below:

   ![connector](https://github.com/user-attachments/assets/5c2bcba3-6cc0-4066-833b-fe68d4c64569)

5. Generate Attack discoveries.

**Expected results**

- Generation does NOT fail with the error described in the later steps below.
- Progress on generating discoveries is visible in LangSmith, as illustrated by the screenshot below:

  ![langsmith](https://github.com/user-attachments/assets/ac2f36f4-35de-4cc9-b9aa-8b9e09d32569)

Note: `Llama-3.2-3B-Instruct-4bit` may not reliably generate Attack discoveries, so generation may still fail after `10` generation / refinement steps.

6. In LM Studio, serve a _second_ model, as illustrated by the screenshot below:

   ![llm_studio_2nd_model](https://github.com/user-attachments/assets/93eda24c-c016-4f81-919c-0cbf5ffb63b0)

7. Once again, generate Attack discoveries.

**Expected results**

- Generation does NOT fail with the errors below.
- Progress on generating discoveries is visible in LangSmith, though as noted above, generation may still fail after `10` attempts if the model does not produce output that conforms to the expected schema.

**Actual results**

- Generation fails with an error similar to:

  ```
  generate node is unable to parse (openai) response from attempt 0; (this may be an incomplete response from the model): Status code: 400. Message: API Error: Bad Request: ActionsClientLlm: action result status is error: an error occurred while running the action - Status code: 400. Message: API Error: Bad Request,
  ```

  or

  ```
  generate node is unable to parse (openai) response from attempt 0; (this may be an incomplete response from the model): Status code: 404. Message: API Error: Not Found - Model "llama-3.2-3b-instruct" not found. Please specify a valid model.
  ```

  as illustrated by the following screenshot:

  ![error](https://github.com/user-attachments/assets/8d059159-f020-4944-a463-b10799e7fa46)
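The generation / refinement behavior described in the reproduction steps (up to `10` attempts, each validated against the expected schema) can be sketched as a minimal retry loop. This is a hypothetical illustration, not Kibana's implementation; all names here are made up:

```typescript
// Minimal sketch of a generate/validate/retry loop: attempt a generation,
// validate the raw output against the expected schema, and stop after a
// fixed number of attempts.
function generateWithRetries<T>(
  generate: (attempt: number) => string, // stand-in for the LLM call
  validate: (raw: string) => T | undefined, // undefined when output does not conform
  maxAttempts = 10
): T {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const parsed = validate(generate(attempt));
    if (parsed !== undefined) return parsed; // conforming output: done
  }
  throw new Error(`generation failed after ${maxAttempts} attempts`);
}
```

With a small local model such as `Llama-3.2-3B-Instruct-4bit`, early attempts often fail validation, which is why generation can still legitimately fail after all attempts are exhausted.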

Co-authored-by: Patryk Kopyciński <contact@patrykkopycinski.com>
Committed by GitHub on behalf of Kibana Machine, 2025-03-12 15:04:50 +11:00
parent e51bb2215e, commit d21c5f364c
GPG key ID: B5690EEEBB952194 (no known key found for this signature in database)
3 changed files with 22 additions and 6 deletions


```diff
@@ -643,6 +643,23 @@ describe('OpenAIConnector', () => {
       ).toEqual(`API Error: Resource Not Found - Resource not found`);
     });

+    it('returns the error.response.data.error', () => {
+      const err = {
+        response: {
+          headers: {},
+          status: 404,
+          statusText: 'Resource Not Found',
+          data: {
+            error: 'Resource not found',
+          },
+        },
+      } as AxiosError<{ error?: string }>;
+      expect(
+        // @ts-expect-error expects an axios error as the parameter
+        connector.getResponseErrorMessage(err)
+      ).toEqual(`API Error: Resource Not Found - Resource not found`);
+    });
+
     it('returns authorization error', () => {
       const err = {
         response: {
```

```diff
@@ -152,14 +152,12 @@ export class OpenAIConnector extends SubActionConnector<Config, Secrets> {
     if (!error.response?.status) {
       return `Unexpected API Error: ${error.code ?? ''} - ${error.message ?? 'Unknown error'}`;
     }
+    // LM Studio returns error.response?.data?.error as string
+    const errorMessage = error.response?.data?.error?.message ?? error.response?.data?.error;
     if (error.response.status === 401) {
-      return `Unauthorized API Error${
-        error.response?.data?.error?.message ? ` - ${error.response.data.error?.message}` : ''
-      }`;
+      return `Unauthorized API Error${errorMessage ? ` - ${errorMessage}` : ''}`;
     }
-    return `API Error: ${error.response?.statusText}${
-      error.response?.data?.error?.message ? ` - ${error.response.data.error?.message}` : ''
-    }`;
+    return `API Error: ${error.response?.statusText}${errorMessage ? ` - ${errorMessage}` : ''}`;
   }

   /**
    * responsible for making a POST request to the external API endpoint and returning the response data
```
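The core of the connector change is the fallback `error?.message ?? error`: OpenAI-style bodies nest the message under `error.message`, while LM Studio returns `error` as a plain string. A standalone sketch of that fallback (hypothetical names, not Kibana's actual API):

```typescript
// OpenAI-style error bodies look like { error: { message: '…' } };
// LM Studio returns { error: '…' } with a plain string.
type ApiErrorBody = { error?: string | { message?: string } };

function extractErrorMessage(data?: ApiErrorBody): string | undefined {
  const err = data?.error;
  if (typeof err === 'string') return err; // LM Studio: plain string
  return err?.message; // OpenAI: nested object
}

function formatApiError(statusText: string, data?: ApiErrorBody): string {
  const msg = extractErrorMessage(data);
  return `API Error: ${statusText}${msg ? ` - ${msg}` : ''}`;
}
```

With the fallback in place, both shapes produce the same user-facing message, e.g. `formatApiError('Bad Request', { error: 'boom' })` and `formatApiError('Bad Request', { error: { message: 'boom' } })` both yield `API Error: Bad Request - boom`.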


```diff
@@ -83,6 +83,7 @@ export const invokeAttackDiscoveryGraph = async ({
     connectorId: apiConfig.connectorId,
     llmType,
     logger,
+    model,
     temperature: 0, // zero temperature for attack discovery, because we want structured JSON output
     timeout: connectorTimeout,
     traceOptions,
```
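The one-line addition above forwards the connector's `model` into the graph invocation. This matters because LM Studio's OpenAI-compatible API routes by model name: when more than one model is served, a request that omits or mismatches the name can come back as `404 Not Found - Model … not found`. A hypothetical sketch of the request shape (illustrative names, not Kibana code):

```typescript
// An OpenAI-compatible chat request must name the served model explicitly
// when the server hosts more than one model.
interface ChatRequest {
  model: string;
  messages: Array<{ role: 'system' | 'user'; content: string }>;
  temperature: number;
}

function buildChatRequest(model: string, prompt: string): ChatRequest {
  return {
    model, // e.g. 'llama-3.2-3b-instruct'; a wrong name can yield 404 "Model not found"
    messages: [{ role: 'user', content: prompt }],
    temperature: 0, // deterministic output, matching the diff above
  };
}
```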