# Backport

This will backport the following commits from `main` to `8.x`:

- [[Attack discovery] Fix error handling in LM studio (#213855)](https://github.com/elastic/kibana/pull/213855)

<!--- Backport version: 9.6.6 -->

### Questions ?

Please refer to the [Backport tool documentation](https://github.com/sorenlouv/backport)

## Summary

Errors were not properly propagated to the user: instead of a meaningful message, we displayed just `API Error`.

<img width="1813" alt="Screenshot 2025-03-11 at 03 47 59" src="https://github.com/user-attachments/assets/8d059159-f020-4944-a463-b10799e7fa46" />

**Desk testing**

Steps to reproduce (thank you @andrew-goldstein 🙇):

1. In LM Studio, download the `MLX` variant (optimized for Mac) of `Llama-3.2-3B-Instruct-4bit`:

    ```
    mlx-community/Llama-3.2-3B-Instruct-4bit
    ```

2. Configure the model to have a context length of `131072` tokens.

3. Serve ONLY the model above in LM Studio (ensure no other models are running in LM Studio).

4. Configure a connector via the details in <https://www.elastic.co/guide/en/security/current/connect-to-byo-llm.html>, but when configuring the connector, change `local-model` to the name of the model being served:

    ```
    llama-3.2-3b-instruct
    ```

5. Generate Attack discoveries.

**Expected results**

- Generation does NOT fail with the errors described under "Actual results" below.
- Progress on generating discoveries is visible in LangSmith.

Note: `Llama-3.2-3B-Instruct-4bit` may not reliably generate Attack discoveries, so generation may still fail after `10` generation / refinement steps.

6. In LM Studio, serve a _second_ model.

7. Once again, generate Attack discoveries.

**Expected results**

- Generation does NOT fail with the errors below.
- Progress on generating discoveries is visible in LangSmith, though, as noted above, generation may still fail after `10` attempts if the model does not produce output that conforms to the expected schema.

**Actual results**

- Generation fails with an error similar to:

```
generate node is unable to parse (openai) response from attempt 0; (this may be an incomplete response from the model): Status code: 400. Message: API Error:
Bad Request: ActionsClientLlm: action result status is error: an error occurred while running the action - Status code: 400. Message: API Error: Bad Request,
```

or

```
generate node is unable to parse (openai) response from attempt 0; (this may be an incomplete response from the model): Status code: 404. Message: API Error: Not Found - Model "llama-3.2-3b-instruct" not found. Please specify a valid model.
```
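For context on what "properly propagated" could look like: LM Studio exposes an OpenAI-compatible API, and such servers typically return a JSON body of the form `{"error": {"message": "..."}}` alongside a 4xx status. A minimal sketch of surfacing that server message instead of a bare `API Error` follows; the helper name and response shape are illustrative assumptions, not the actual patch in #213855:

```ts
// Illustrative sketch only -- NOT the actual change in #213855.
// OpenAI-compatible servers such as LM Studio typically return a JSON
// body like { "error": { "message": "..." } } with a 4xx status code.
interface OpenAiErrorBody {
  error?: { message?: string };
}

// Hypothetical helper: prefer the server-provided message over a
// generic "API Error" label when building the user-facing error.
function extractApiErrorMessage(status: number, rawBody: string): string {
  try {
    const body = JSON.parse(rawBody) as OpenAiErrorBody;
    if (body.error?.message) {
      return `Status code: ${status}. Message: ${body.error.message}`;
    }
  } catch {
    // Body was not JSON; fall through to the generic message.
  }
  return `Status code: ${status}. Message: API Error`;
}

// Example: extractApiErrorMessage(404, '{"error":{"message":"Model not found"}}')
// -> 'Status code: 404. Message: Model not found'
```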
Co-authored-by: Patryk Kopyciński <contact@patrykkopycinski.com>
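An aside for anyone reproducing this: the 404 above means the connector's model name does not match any model LM Studio is serving. One quick way to confirm the served model ids is to query LM Studio's OpenAI-compatible `/v1/models` endpoint, as in the sketch below; port `1234` is LM Studio's default and may differ in your setup:

```ts
// List the model ids LM Studio is currently serving (Node 18+ has a
// global fetch). Adjust host/port to match your LM Studio server settings.
async function listLocalModels(): Promise<string[]> {
  const res = await fetch('http://localhost:1234/v1/models');
  if (!res.ok) {
    throw new Error(`LM Studio responded with status ${res.status}`);
  }
  // OpenAI-compatible shape: { "object": "list", "data": [{ "id": "..." }] }
  const body = (await res.json()) as { data: Array<{ id: string }> };
  return body.data.map((m) => m.id); // e.g. ['llama-3.2-3b-instruct']
}

listLocalModels().then((ids) => console.log(ids));
```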
# Kibana

Kibana is your window into the Elastic Stack. Specifically, it's a browser-based analytics and search dashboard for Elasticsearch.

- [Getting Started](#getting-started)
- [Documentation](#documentation)
- [Version Compatibility with Elasticsearch](#version-compatibility-with-elasticsearch)
- [Questions? Problems? Suggestions?](#questions-problems-suggestions)

## Getting Started
If you just want to try Kibana out, check out the Elastic Stack Getting Started Page to give it a whirl.
If you're interested in diving a bit deeper and getting a taste of Kibana's capabilities, head over to the Kibana Getting Started Page.
### Using a Kibana Release
If you want to use a Kibana release in production, give it a test run, or just play around:
- Download the latest version on the Kibana Download Page.
- Learn more about Kibana's features and capabilities on the Kibana Product Page.
- We also offer a hosted version of Kibana on our Cloud Service.
### Building and Running Kibana, and/or Contributing Code
You might want to build Kibana locally to contribute some code, test out the latest features, or try out an open PR:
- CONTRIBUTING.md will help you get Kibana up and running.
- If you would like to contribute code, please follow our STYLEGUIDE.mdx.
- For all other questions, check out the FAQ.md and wiki.
## Documentation
Visit Elastic.co for the full Kibana documentation.
For information about building the documentation, see the README in elastic/docs.
## Version Compatibility with Elasticsearch

Ideally, you should be running Elasticsearch and Kibana with matching version numbers. If Elasticsearch is on an older major or minor version than Kibana, or on a newer major version, Kibana will fail to run. If Elasticsearch is on a newer minor or patch version than Kibana, or an older patch version, the Kibana server will log a warning.
Note: The version numbers below are only examples, meant to illustrate the relationships between different types of version numbers.
| Situation | Example Kibana version | Example ES version | Outcome |
|---|---|---|---|
| Versions are the same. | 7.15.1 | 7.15.1 | 💚 OK |
| ES patch number is newer. | 7.15.0 | 7.15.1 | ⚠️ Logged warning |
| ES minor number is newer. | 7.14.2 | 7.15.0 | ⚠️ Logged warning |
| ES major number is newer. | 7.15.1 | 8.0.0 | 🚫 Fatal error |
| ES patch number is older. | 7.15.1 | 7.15.0 | ⚠️ Logged warning |
| ES minor number is older. | 7.15.1 | 7.14.2 | 🚫 Fatal error |
| ES major number is older. | 8.0.0 | 7.15.1 | 🚫 Fatal error |
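The outcome matrix above reduces to a short rule. The following is a minimal sketch of that rule, assuming plain `major.minor.patch` version strings; it is an illustration of the table, not Kibana's actual startup check:

```ts
// Illustrative sketch of the compatibility table above -- not Kibana's
// actual implementation. Assumes well-formed "major.minor.patch" strings.
type Outcome = 'ok' | 'warning' | 'fatal';

function compatibility(kibanaVersion: string, esVersion: string): Outcome {
  const [kMajor, kMinor, kPatch] = kibanaVersion.split('.').map(Number);
  const [eMajor, eMinor, ePatch] = esVersion.split('.').map(Number);

  if (eMajor !== kMajor) return 'fatal'; // ES major newer or older: fatal
  if (eMinor < kMinor) return 'fatal';   // ES minor older: fatal
  if (eMinor > kMinor) return 'warning'; // ES minor newer: warning
  if (ePatch !== kPatch) return 'warning'; // ES patch differs: warning
  return 'ok';                           // versions match exactly
}

// Examples from the table:
// compatibility('7.15.1', '7.15.1') === 'ok'
// compatibility('7.14.2', '7.15.0') === 'warning'
// compatibility('7.15.1', '7.14.2') === 'fatal'
```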
## Questions? Problems? Suggestions?
- If you've found a bug or want to request a feature, please create a GitHub Issue. Please check to make sure someone else hasn't already created an issue for the same topic.
- Need help using Kibana? Ask away on our Kibana Discuss Forum and a fellow community member or Elastic engineer will be glad to help you out.