Mirror of https://github.com/elastic/kibana.git, synced 2025-04-24 09:48:58 -04:00
# Backport

This will backport the following commits from `main` to `8.16`:

- [[Attack discovery] Fix error handling in LM studio (#213855)](https://github.com/elastic/kibana/pull/213855)

### Questions?

Please refer to the [Backport tool documentation](https://github.com/sorenlouv/backport)

## Summary

Errors were not properly propagated to the user: instead of a meaningful message, we were displaying just `API Error`.

<img width="1813" alt="Screenshot 2025-03-11 at 03 47 59" src="https://github.com/user-attachments/assets/8d059159-f020-4944-a463-b10799e7fa46" />

Steps to reproduce, thank you @andrew-goldstein 🙇

**Desk testing**

To reproduce:

1. In LM Studio, download the `MLX` variant (optimized for Mac) of `Llama-3.2-3B-Instruct-4bit`:

```
mlx-community/Llama-3.2-3B-Instruct-4bit
```

2. Configure the model to have a context length of `131072` tokens.

3. Serve ONLY the model above in LM Studio. (Ensure no other models are running in LM Studio.)

4. Configure a connector via the details in <https://www.elastic.co/guide/en/security/current/connect-to-byo-llm.html>, but change:

```
local-model
```

to the name of the model when configuring the connector:

```
llama-3.2-3b-instruct
```

5. Generate Attack discoveries.

**Expected results**

- Generation does NOT fail with the error described in the later steps below.
- Progress on generating discoveries is visible in LangSmith.

Note: `Llama-3.2-3B-Instruct-4bit` may not reliably generate Attack discoveries, so generation may still fail after `10` generation / refinement steps.

6. In LM Studio, serve a _second_ model.

7. Once again, generate Attack discoveries.

**Expected results**

- Generation does NOT fail with the errors below.
- Progress on generating discoveries is visible in LangSmith, though as noted above, generation may still fail after `10` attempts if the model does not produce output that conforms to the expected schema.

**Actual results**

- Generation fails with an error similar to:

```
generate node is unable to parse (openai) response from attempt 0; (this may be an incomplete response from the model): Status code: 400. Message: API Error:
Bad Request: ActionsClientLlm: action result status is error: an error occurred while running the action - Status code: 400. Message: API Error: Bad Request,
```

or

```
generate node is unable to parse (openai) response from attempt 0; (this may be an incomplete response from the model): Status code: 404. Message: API Error: Not Found - Model "llama-3.2-3b-instruct" not found. Please specify a valid model.
```

Co-authored-by: Patryk Kopyciński <contact@patrykkopycinski.com>
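The gist of the fix is surfacing the service-level detail from the action result instead of the bare `API Error` string. The sketch below illustrates the idea only; the `formatActionError` helper and the exact `ActionResult` shape are assumptions for illustration, not the actual Kibana implementation:

```typescript
// Hypothetical shape of an action execution result, for illustration only.
interface ActionResult {
  status: 'ok' | 'error';
  message?: string;        // generic message, e.g. "Status code: 404. Message: API Error: Not Found"
  serviceMessage?: string; // detail from the LLM service, e.g. 'Model "llama-3.2-3b-instruct" not found.'
}

// Before the fix, only the generic message reached the user ("API Error").
// The fix appends the service-level detail when the connector provides one.
function formatActionError(result: ActionResult): string {
  if (result.status !== 'error') {
    return '';
  }
  const parts = [result.message, result.serviceMessage].filter(Boolean);
  return parts.length > 0 ? parts.join(' - ') : 'API Error';
}

const result: ActionResult = {
  status: 'error',
  message: 'Status code: 404. Message: API Error: Not Found',
  serviceMessage: 'Model "llama-3.2-3b-instruct" not found. Please specify a valid model.',
};

console.log(formatActionError(result));
```

With this shape, the 404 from LM Studio above would render with the model name and remediation hint rather than a generic failure.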
# Elastic AI Assistant

This plugin implements (only) server APIs for the Elastic AI Assistant.

This plugin does NOT contain UI components. See `x-pack/packages/kbn-elastic-assistant` for React components.

## Maintainers

Maintained by the Security Solution team.

## Graph structure

### Default Assistant graph

### Default Attack discovery graph

## Development

### Generate graph structure

To generate the graph structure, run `yarn draw-graph` from the plugin directory. The graphs will be generated in the `docs/img` directory of the plugin.

## Testing

To run the tests for this plugin, run `node scripts/jest --watch x-pack/plugins/elastic_assistant/jest.config.js --coverage` from the Kibana root directory.