Your window into the Elastic Stack
Kibana Machine d21c5f364c
[8.x] [Attack discovery] Fix error handling in LM studio (#213855) (#214042)
# Backport

This will backport the following commits from `main` to `8.x`:
- [[Attack discovery] Fix error handling in LM studio
(#213855)](https://github.com/elastic/kibana/pull/213855)

<!--- Backport version: 9.6.6 -->

### Questions?
Please refer to the [Backport tool documentation](https://github.com/sorenlouv/backport).

<!--BACKPORT [{"author":{"name":"Patryk
Kopyciński","email":"contact@patrykkopycinski.com"},"sourceCommit":{"committedDate":"2025-03-12T02:06:48Z","message":"[Attack
discovery] Fix error handling in LM studio (#213855)\n\n##
Summary\n\nError were not properly propagated to the user and instead of
meaningful\nmessage we were displaying just `API Error`.\n\n<img
width=\"1813\" alt=\"Zrzut ekranu 2025-03-11 o 03 47
59\"\nsrc=\"https://github.com/user-attachments/assets/8d059159-f020-4944-a463-b10799e7fa46\"\n/>\n
\n \n \n\nSteps to reproduce, Thank you @andrew-goldstein 🙇 \n\n**Desk
testing**\n\nTo reproduce:\n\n1. In LM Studio, download the `MLX`
variant (optimized for Mac)
of\n`Llama-3.2-3B-Instruct-4bit`:\n\n```\nmlx-community/Llama-3.2-3B-Instruct-4bit\n```\n\n2.
Configure the model to have a context length of `131072` tokens,
as\nillustrated by the screenshot
below:\n\n\n![context_length](https://github.com/user-attachments/assets/505f64af-6d03-4f66-a485-7b25ebc4cae2)\n\n3.
Serve ONLY the model above in LM Studio. (Ensure no other models
are\nrunning in LM Studio), as illustrated by the screenshot
below:\n\n\n![one_model_running](https://github.com/user-attachments/assets/af29bea5-4cc3-401c-87d8-4b5778acdfe6)\n\n4.
Configure a connector via the details
in\n<https://www.elastic.co/guide/en/security/current/connect-to-byo-llm.html>\n\nbut
change:\n\n```\nlocal-model\n```\n\nto the name of the model when
configuring the connector:\n\n```\nllama-3.2-3b-instruct\n```\n\nas
illustrated by the screenshot
below:\n\n\n![connector](https://github.com/user-attachments/assets/5c2bcba3-6cc0-4066-833b-fe68d4c64569)\n\n5.
Generate Attack discoveries\n\n**Expected results**\n\n- Generation does
NOT fail with the error described in the later steps\nbelow.\n- Progress
on generating discoveries is visible in Langsmith, as\nillustrated by
the screenshot
below:\n\n\n![langsmith](https://github.com/user-attachments/assets/ac2f36f4-35de-4cc9-b9aa-8b9e09d32569)\n\nNote:
`Llama-3.2-3B-Instruct-4bit` may not reliably generate
Attack\ndiscoveries, so generation may still fail after `10` generation
/\nrefinement steps.\n\n6. In LM studio, serve a _second_ model, as
illustrated by the\nscreenshot
below:\n\n\n![llm_studio_2nd_model](https://github.com/user-attachments/assets/93eda24c-c016-4f81-919c-0cbf5ffb63b0)\n\n7.
Once again, generate Attack discoveries\n\n**Expected results**\n\n-
Generation does NOT fail with the errors below\n- Progress on generating
discoveries is visible in Langsmith, though as\nnoted above, generation
may still fail after `10` attempts if the model\ndoes not produce output
that conforms to the expected schema\n\n**Actual results**\n\n-
Generation fails with an error similar to:\n\n```\ngenerate node is
unable to parse (openai) response from attempt 0; (this may be an
incomplete response from the model): Status code: 400. Message: API
Error:\nBad Request: ActionsClientLlm: action result status is error: an
error occurred while running the action - Status code: 400. Message: API
Error: Bad Request,\n```\n\nor\n\n```\ngenerate node is unable to parse
(openai) response from attempt 0; (this may be an incomplete response
from the model): Status code: 404. Message: API Error: Not Found - Model
\"llama-3.2-3b-instruct\" not found. Please specify a valid
model.\n```\n\nas illustrated by the following
screenshot:\n\n\n![error](https://github.com/user-attachments/assets/8d059159-f020-4944-a463-b10799e7fa46)","sha":"0b9cceb57413ee84c2b951a65d1c8b66523fbd87","branchLabelMapping":{"^v9.1.0$":"main","^v8.19.0$":"8.x","^v(\\d+).(\\d+).\\d+$":"$1.$2"}},"sourcePullRequest":{"labels":["bug","release_note:skip","backport:prev-major","Team:Security
Generative AI","Feature:Attack
Discovery","backport:current-major","v9.1.0"],"title":"[Attack
discovery] Fix error handling in LM
studio","number":213855,"url":"https://github.com/elastic/kibana/pull/213855","mergeCommit":{"message":"[Attack
discovery] Fix error handling in LM studio (#213855)\n\n##
Summary\n\nError were not properly propagated to the user and instead of
meaningful\nmessage we were displaying just `API Error`.\n\n<img
width=\"1813\" alt=\"Zrzut ekranu 2025-03-11 o 03 47
59\"\nsrc=\"https://github.com/user-attachments/assets/8d059159-f020-4944-a463-b10799e7fa46\"\n/>\n
\n \n \n\nSteps to reproduce, Thank you @andrew-goldstein 🙇 \n\n**Desk
testing**\n\nTo reproduce:\n\n1. In LM Studio, download the `MLX`
variant (optimized for Mac)
of\n`Llama-3.2-3B-Instruct-4bit`:\n\n```\nmlx-community/Llama-3.2-3B-Instruct-4bit\n```\n\n2.
Configure the model to have a context length of `131072` tokens,
as\nillustrated by the screenshot
below:\n\n\n![context_length](https://github.com/user-attachments/assets/505f64af-6d03-4f66-a485-7b25ebc4cae2)\n\n3.
Serve ONLY the model above in LM Studio. (Ensure no other models
are\nrunning in LM Studio), as illustrated by the screenshot
below:\n\n\n![one_model_running](https://github.com/user-attachments/assets/af29bea5-4cc3-401c-87d8-4b5778acdfe6)\n\n4.
Configure a connector via the details
in\n<https://www.elastic.co/guide/en/security/current/connect-to-byo-llm.html>\n\nbut
change:\n\n```\nlocal-model\n```\n\nto the name of the model when
configuring the connector:\n\n```\nllama-3.2-3b-instruct\n```\n\nas
illustrated by the screenshot
below:\n\n\n![connector](https://github.com/user-attachments/assets/5c2bcba3-6cc0-4066-833b-fe68d4c64569)\n\n5.
Generate Attack discoveries\n\n**Expected results**\n\n- Generation does
NOT fail with the error described in the later steps\nbelow.\n- Progress
on generating discoveries is visible in Langsmith, as\nillustrated by
the screenshot
below:\n\n\n![langsmith](https://github.com/user-attachments/assets/ac2f36f4-35de-4cc9-b9aa-8b9e09d32569)\n\nNote:
`Llama-3.2-3B-Instruct-4bit` may not reliably generate
Attack\ndiscoveries, so generation may still fail after `10` generation
/\nrefinement steps.\n\n6. In LM studio, serve a _second_ model, as
illustrated by the\nscreenshot
below:\n\n\n![llm_studio_2nd_model](https://github.com/user-attachments/assets/93eda24c-c016-4f81-919c-0cbf5ffb63b0)\n\n7.
Once again, generate Attack discoveries\n\n**Expected results**\n\n-
Generation does NOT fail with the errors below\n- Progress on generating
discoveries is visible in Langsmith, though as\nnoted above, generation
may still fail after `10` attempts if the model\ndoes not produce output
that conforms to the expected schema\n\n**Actual results**\n\n-
Generation fails with an error similar to:\n\n```\ngenerate node is
unable to parse (openai) response from attempt 0; (this may be an
incomplete response from the model): Status code: 400. Message: API
Error:\nBad Request: ActionsClientLlm: action result status is error: an
error occurred while running the action - Status code: 400. Message: API
Error: Bad Request,\n```\n\nor\n\n```\ngenerate node is unable to parse
(openai) response from attempt 0; (this may be an incomplete response
from the model): Status code: 404. Message: API Error: Not Found - Model
\"llama-3.2-3b-instruct\" not found. Please specify a valid
model.\n```\n\nas illustrated by the following
screenshot:\n\n\n![error](https://github.com/user-attachments/assets/8d059159-f020-4944-a463-b10799e7fa46)","sha":"0b9cceb57413ee84c2b951a65d1c8b66523fbd87"}},"sourceBranch":"main","suggestedTargetBranches":[],"targetPullRequestStates":[{"branch":"main","label":"v9.1.0","branchLabelMappingKey":"^v9.1.0$","isSourceBranch":true,"state":"MERGED","url":"https://github.com/elastic/kibana/pull/213855","number":213855,"mergeCommit":{"message":"[Attack
discovery] Fix error handling in LM studio (#213855)\n\n##
Summary\n\nError were not properly propagated to the user and instead of
meaningful\nmessage we were displaying just `API Error`.\n\n<img
width=\"1813\" alt=\"Zrzut ekranu 2025-03-11 o 03 47
59\"\nsrc=\"https://github.com/user-attachments/assets/8d059159-f020-4944-a463-b10799e7fa46\"\n/>\n
\n \n \n\nSteps to reproduce, Thank you @andrew-goldstein 🙇 \n\n**Desk
testing**\n\nTo reproduce:\n\n1. In LM Studio, download the `MLX`
variant (optimized for Mac)
of\n`Llama-3.2-3B-Instruct-4bit`:\n\n```\nmlx-community/Llama-3.2-3B-Instruct-4bit\n```\n\n2.
Configure the model to have a context length of `131072` tokens,
as\nillustrated by the screenshot
below:\n\n\n![context_length](https://github.com/user-attachments/assets/505f64af-6d03-4f66-a485-7b25ebc4cae2)\n\n3.
Serve ONLY the model above in LM Studio. (Ensure no other models
are\nrunning in LM Studio), as illustrated by the screenshot
below:\n\n\n![one_model_running](https://github.com/user-attachments/assets/af29bea5-4cc3-401c-87d8-4b5778acdfe6)\n\n4.
Configure a connector via the details
in\n<https://www.elastic.co/guide/en/security/current/connect-to-byo-llm.html>\n\nbut
change:\n\n```\nlocal-model\n```\n\nto the name of the model when
configuring the connector:\n\n```\nllama-3.2-3b-instruct\n```\n\nas
illustrated by the screenshot
below:\n\n\n![connector](https://github.com/user-attachments/assets/5c2bcba3-6cc0-4066-833b-fe68d4c64569)\n\n5.
Generate Attack discoveries\n\n**Expected results**\n\n- Generation does
NOT fail with the error described in the later steps\nbelow.\n- Progress
on generating discoveries is visible in Langsmith, as\nillustrated by
the screenshot
below:\n\n\n![langsmith](https://github.com/user-attachments/assets/ac2f36f4-35de-4cc9-b9aa-8b9e09d32569)\n\nNote:
`Llama-3.2-3B-Instruct-4bit` may not reliably generate
Attack\ndiscoveries, so generation may still fail after `10` generation
/\nrefinement steps.\n\n6. In LM studio, serve a _second_ model, as
illustrated by the\nscreenshot
below:\n\n\n![llm_studio_2nd_model](https://github.com/user-attachments/assets/93eda24c-c016-4f81-919c-0cbf5ffb63b0)\n\n7.
Once again, generate Attack discoveries\n\n**Expected results**\n\n-
Generation does NOT fail with the errors below\n- Progress on generating
discoveries is visible in Langsmith, though as\nnoted above, generation
may still fail after `10` attempts if the model\ndoes not produce output
that conforms to the expected schema\n\n**Actual results**\n\n-
Generation fails with an error similar to:\n\n```\ngenerate node is
unable to parse (openai) response from attempt 0; (this may be an
incomplete response from the model): Status code: 400. Message: API
Error:\nBad Request: ActionsClientLlm: action result status is error: an
error occurred while running the action - Status code: 400. Message: API
Error: Bad Request,\n```\n\nor\n\n```\ngenerate node is unable to parse
(openai) response from attempt 0; (this may be an incomplete response
from the model): Status code: 404. Message: API Error: Not Found - Model
\"llama-3.2-3b-instruct\" not found. Please specify a valid
model.\n```\n\nas illustrated by the following
screenshot:\n\n\n![error](https://github.com/user-attachments/assets/8d059159-f020-4944-a463-b10799e7fa46)","sha":"0b9cceb57413ee84c2b951a65d1c8b66523fbd87"}}]}]
BACKPORT-->
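For context, here is a minimal TypeScript sketch of the kind of error propagation the fix describes: surfacing the provider's status code and message instead of collapsing everything into a bare `API Error`. The function name and error shape below are illustrative assumptions, not the actual Kibana patch.

```ts
// Illustrative sketch only (assumed names): propagate the provider's error details
// instead of collapsing them into a generic "API Error".
interface ProviderErrorBody {
  error?: { message?: string };
}

export async function invokeLocalModel(apiUrl: string, payload: unknown): Promise<unknown> {
  const response = await fetch(apiUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });

  if (!response.ok) {
    // Read the error body so the user sees, for example,
    // `Status code: 404. Message: Model "llama-3.2-3b-instruct" not found. Please specify a valid model.`
    let detail = response.statusText;
    try {
      const parsed = (await response.json()) as ProviderErrorBody;
      detail = parsed.error?.message ?? detail;
    } catch {
      // Non-JSON error body: fall back to the HTTP status text.
    }
    throw new Error(`Status code: ${response.status}. Message: ${detail}`);
  }

  return response.json();
}
```

With this approach, the 404 that LM Studio returns when the configured model is not being served reaches the user verbatim, rather than as an unexplained generic error.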

Co-authored-by: Patryk Kopyciński <contact@patrykkopycinski.com>
2025-03-12 05:04:50 +01:00
.buildkite [8.x] [Security Solution] Enable prebuilt rules customization feature flag (#212761) (#214024) 2025-03-12 05:02:09 +01:00
.devcontainer [8.x] Sync devcontainer with main (#202854) 2024-12-03 17:10:40 -08:00
.github [8.x] [ES|QL] Introduces a new package for esql types (#212754) (#212880) 2025-03-03 14:52:06 +01:00
api_docs [8.x] [Streams 🌊] Introduce GroupStreams (#208126) (#209871) 2025-02-06 14:09:59 +01:00
config [8.x] [Inference Connector] Enable inference connector for ESS by default, disable it for Serverless (#209197) (#209865) 2025-02-06 17:17:01 +01:00
dev_docs [8.x] [Dev Docs] Add VS Code configurations to Dev Docs Debugging Tutorial (#212807) (#213413) 2025-03-06 18:09:18 +01:00
docs [DOCS] Increase maximum Osquery timeout (#213918) 2025-03-11 16:18:02 +00:00
examples [8.x] [embeddable] replace Embeddable ViewMode with presentation-publishing ViewMode (#211960) (#213135) 2025-03-05 02:29:33 +01:00
kbn_pm [8.x] Sustainable Kibana Architecture: Move CodeEditor related packages #205587 (#205738) (#205919) 2025-01-10 11:20:26 +00:00
legacy_rfcs [8.x] SKA: Relocate "platform" packages that remain on /packages (#208704) (#212474) 2025-02-28 10:12:01 +00:00
licenses Adds AGPL 3.0 license (#192025) 2024-09-06 19:02:41 -06:00
oas_docs [8.x] [Security Assistant] Fix use default inference endpoint (#212191) (#213183) 2025-03-05 04:08:04 +01:00
packages [8.x] [Synthetics] Fix overview error popover !! (#211431) (#213328) 2025-03-06 13:01:22 +01:00
plugins
scripts [8.x] SKA: Relocate "platform" packages that remain on /packages (#208704) (#212474) 2025-02-28 10:12:01 +00:00
src [8.x] [ResponseOps][DOCS] Add stack rule parameter descriptions (#213185) (#214019) 2025-03-12 00:21:10 +01:00
test [8.x] migrate discover session multiple data view test to discover (#213991) (#214020) 2025-03-12 00:22:15 +01:00
typings [8.x] make emotion typing global (#200958) (#203162) 2024-12-05 14:10:16 -06:00
x-pack [8.x] [Attack discovery] Fix error handling in LM studio (#213855) (#214042) 2025-03-12 05:04:50 +01:00
.backportrc.json chore(NA): adds 8.16 into backportrc (#187530) 2024-07-04 19:09:25 +01:00
.bazelignore Remove references to deleted .ci folder (#177168) 2024-02-20 19:54:21 +01:00
.bazeliskversion
.bazelrc chore(NA): use new and more performant BuildBuddy servers (#130350) 2022-04-18 02:01:38 +01:00
.bazelrc.common Transpile packages on demand, validate all TS projects (#146212) 2022-12-22 19:00:29 -06:00
.bazelversion chore(NA): revert bazel upgrade for v5.2.0 (#135096) 2022-06-24 03:57:21 +01:00
.browserslistrc Add Firefox ESR to browserlistrc (#184462) 2024-05-29 17:53:18 -05:00
.editorconfig
.eslintignore [8.x] SKA: Relocate "platform" packages that remain on /packages (#208704) (#212474) 2025-02-28 10:12:01 +00:00
.eslintrc.js [8.x] [ResponseOps] consistent-type-imports linting rule for RO packages/plugins - PR1 (#212348) (#213929) 2025-03-11 18:15:23 +01:00
.gitattributes
.gitignore [8.x] SKA: Relocate "platform" packages that remain on /packages (#208704) (#212474) 2025-02-28 10:12:01 +00:00
.i18nrc.json [8.x] SKA: Fix kebab-case issues in security-threat-hunting packages (#211349) (#211732) 2025-02-19 13:45:13 +01:00
.node-version [8.x] Upgrade Node.js to 20.18.2 (#207431) (#207894) 2025-01-22 19:37:48 +00:00
.npmrc [npmrc] Fix puppeteer_skip_download configuration (#177673) 2024-02-22 18:59:01 -07:00
.nvmrc [8.x] Upgrade Node.js to 20.18.2 (#207431) (#207894) 2025-01-22 19:37:48 +00:00
.prettierignore
.prettierrc
.puppeteerrc Add .puppeteerrc (#179847) 2024-04-03 09:14:39 -05:00
.stylelintignore
.stylelintrc Bump stylelint to ^14 (#136693) 2022-07-20 10:11:00 -05:00
.telemetryrc.json [8.x] Sustainable Kibana Architecture: Move modules owned by @elastic/kibana-core (#201653) (#205563) 2025-01-05 16:32:00 +01:00
.yarnrc
BUILD.bazel Transpile packages on demand, validate all TS projects (#146212) 2022-12-22 19:00:29 -06:00
catalog-info.yaml [sonarqube] Disable cron (#190611) 2024-08-15 09:19:09 -05:00
CODE_OF_CONDUCT.md
CONTRIBUTING.md
FAQ.md Fix small typos in the root md files (#134609) 2022-06-23 09:36:11 -05:00
fleet_packages.json [8.x] Sync bundled packages with Package Storage (#212061) 2025-02-21 14:59:31 +00:00
github_checks_reporter.json
kibana.d.ts Adds AGPL 3.0 license (#192025) 2024-09-06 19:02:41 -06:00
LICENSE.txt Adds AGPL 3.0 license (#192025) 2024-09-06 19:02:41 -06:00
NOTICE.txt [8.x] [ES|QL] capitalize `FROM` in recommended queries (#205122) (#205352) 2025-01-02 04:27:03 -06:00
package.json [8.x] Update lru-cache (main) (#206225) (#213934) 2025-03-11 23:42:23 +01:00
preinstall_check.js Adds AGPL 3.0 license (#192025) 2024-09-06 19:02:41 -06:00
README.md
renovate.json [8.x] Update langchain (main) (#205553) (#212567) 2025-02-27 08:35:22 -05:00
RISK_MATRIX.mdx
run_fleet_setup_parallel.sh [8.x] Sustainable Kibana Architecture: Move modules owned by @elastic/fleet (#202422) (#205145) 2024-12-24 15:17:23 -06:00
SECURITY.md
sonar-project.properties [sonarqube] update memory, cpu (#190547) 2024-09-09 16:16:30 -05:00
STYLEGUIDE.mdx [styleguide] update path to scss theme (#140742) 2022-09-15 10:41:14 -04:00
tsconfig.base.json [8.x] [ES|QL] Introduces a new package for esql types (#212754) (#212880) 2025-03-03 14:52:06 +01:00
tsconfig.browser.json
tsconfig.browser_bazel.json
tsconfig.json Transpile packages on demand, validate all TS projects (#146212) 2022-12-22 19:00:29 -06:00
TYPESCRIPT.md Fix small typos in the root md files (#134609) 2022-06-23 09:36:11 -05:00
versions.json [ci] Update version tracking for 7.17.25 (#192477) 2024-09-10 20:54:04 -05:00
WORKSPACE.bazel [8.x] Upgrade Node.js to 20.18.2 (#207431) (#207894) 2025-01-22 19:37:48 +00:00
yarn.lock [8.x] Update lru-cache (main) (#206225) (#213934) 2025-03-11 23:42:23 +01:00

Kibana

Kibana is your window into the Elastic Stack. Specifically, it's a browser-based analytics and search dashboard for Elasticsearch.

Getting Started

If you just want to try Kibana out, check out the Elastic Stack Getting Started Page to give it a whirl.

If you're interested in diving a bit deeper and getting a taste of Kibana's capabilities, head over to the Kibana Getting Started Page.

Using a Kibana Release

If you want to use a Kibana release in production, give it a test run, or just play around, download the latest release from https://www.elastic.co/downloads/kibana.

Building and Running Kibana, and/or Contributing Code

You might want to build Kibana locally to contribute some code, test out the latest features, or try out an open PR; see CONTRIBUTING.md for instructions on setting up a development environment.

Documentation

Visit Elastic.co for the full Kibana documentation.

For information about building the documentation, see the README in elastic/docs.

Version Compatibility with Elasticsearch

Ideally, you should be running Elasticsearch and Kibana with matching version numbers. If your Elasticsearch has an older version number or a newer major number than Kibana, then Kibana will fail to run. If Elasticsearch has a newer minor or patch number than Kibana, then the Kibana Server will log a warning.

Note: The version numbers below are only examples, meant to illustrate the relationships between different types of version numbers.

| Situation | Example Kibana version | Example ES version | Outcome |
| --- | --- | --- | --- |
| Versions are the same. | 7.15.1 | 7.15.1 | 💚 OK |
| ES patch number is newer. | 7.15.0 | 7.15.1 | ⚠️ Logged warning |
| ES minor number is newer. | 7.14.2 | 7.15.0 | ⚠️ Logged warning |
| ES major number is newer. | 7.15.1 | 8.0.0 | 🚫 Fatal error |
| ES patch number is older. | 7.15.1 | 7.15.0 | ⚠️ Logged warning |
| ES minor number is older. | 7.15.1 | 7.14.2 | 🚫 Fatal error |
| ES major number is older. | 8.0.0 | 7.15.1 | 🚫 Fatal error |
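The rules in the table can be summarized in a small sketch. This is illustrative only, assuming semver-style `major.minor.patch` version strings; it is not Kibana's actual startup check.

```ts
// Illustrative sketch of the compatibility rules in the table above
// (assumed helper; not Kibana's actual implementation).
type Outcome = 'ok' | 'warning' | 'fatal';

interface Version {
  major: number;
  minor: number;
  patch: number;
}

function parse(v: string): Version {
  const [major, minor, patch] = v.split('.').map(Number);
  return { major, minor, patch };
}

export function compatibility(kibanaVersion: string, esVersion: string): Outcome {
  const kbn = parse(kibanaVersion);
  const es = parse(esVersion);

  if (es.major !== kbn.major) return 'fatal'; // ES major newer or older: fatal error
  if (es.minor < kbn.minor) return 'fatal'; // ES minor older: fatal error
  if (es.minor > kbn.minor || es.patch !== kbn.patch) return 'warning'; // ES minor/patch newer, or patch older: logged warning
  return 'ok'; // versions match exactly
}

// Examples from the table:
// compatibility('7.15.1', '7.15.1') -> 'ok'
// compatibility('7.14.2', '7.15.0') -> 'warning'
// compatibility('7.15.1', '7.14.2') -> 'fatal'
```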

Questions? Problems? Suggestions?

  • If you've found a bug or want to request a feature, please create a GitHub Issue. Please check to make sure someone else hasn't already created an issue for the same topic.
  • Need help using Kibana? Ask away on our Kibana Discuss Forum and a fellow community member or Elastic engineer will be glad to help you out.