## 📓 Summary

Part of https://github.com/elastic/streams-program/issues/127
Closes https://github.com/elastic/streams-program/issues/114

This update overhauls the internal logic of our processing simulation endpoint. It now runs parallel simulations (pipeline and, conditionally, ingest) to extract detailed document reports and processor metrics, while also handling a host of edge cases.

The key improvements include:

- **Parallel Simulation Execution**
  Executes both pipeline and ingest simulations concurrently. The pipeline simulation always runs to extract per-document reports and metrics. The ingest simulation runs conditionally when detected fields are provided, enabling fast failures on mapping mismatches.
- **Document Reporting & Metrics**
  Extracts granular differences between source and simulated documents. Reports include:
  - Field-level diffs indicating which processor added or updated each field.
  - Detailed error messages (e.g., generic processor failure, generic simulation failure, non-additive processor failure).
  - Calculation of overall success and failure rates, as well as per-processor metrics.
- **Sequential Processors & Field Overriding**
  Supports multiple sequential processors. In cases where later processors override fields produced by earlier ones, the logic bypasses the non-additive check and accepts the new value.
- **Robust Handling of Partial & Failed Simulations**
  Simulations now correctly mark documents as:
  - **Parsed** when all processors succeed.
  - **Partially parsed** when some (but not all) processors fail.
  - **Failed** when none of the processors applied to the document succeed.
- **Mapping Validation & Non-Additive Detection**
  The simulation verifies that the detected field mappings are compatible. If a processor introduces non-additive changes (updating an existing field rather than appending new ones), the simulation flags the error and sets a dedicated `is_non_additive_simulation` flag. Additionally, a failed ingest simulation (e.g., due to incompatible mapping types) results in an immediate failure.

The final API response adheres to the following TypeScript type:

```typescript
interface SimulationResponse {
  detected_fields: DetectedField[];
  documents: SimulationDocReport[];
  processors_metrics: Record<string, ProcessorMetrics>;
  failure_rate: number;
  success_rate: number;
  is_non_additive_simulation: boolean;
}
```

## Updated tests

```
Processing Simulation
├── Successful simulations
│   ├── should simulate additive processing
│   ├── should simulate with detected fields
│   ├── should simulate multiple sequential processors
│   ├── should simulate partially parsed documents
│   ├── should return processor metrics
│   ├── should return accurate success/failure rates
│   ├── should allow overriding fields detected by previous simulation processors (skip non-additive check)
│   ├── should gracefully return the errors for each partially parsed or failed document
│   ├── should gracefully return failed simulation errors
│   ├── should gracefully return non-additive simulation errors
│   └── should return the is_non_additive_simulation simulation flag
└── Failed simulations
    └── should fail with incompatible detected field mappings
```
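Several of the tests above exercise the aggregate fields of `SimulationResponse` (the success/failure rates and the `is_non_additive_simulation` flag). As a rough, self-contained sketch of how those aggregates can be derived from per-document reports: the `SimulationDocReport` shape, status values, and error type names below are assumptions based on this description, not the actual Streams types.

```typescript
// Minimal sketch only: the report shape and error type names are assumed
// from this description; they are not the actual Streams types.
type DocStatus = 'parsed' | 'partially_parsed' | 'failed';

interface SimulationDocError {
  processor_id: string;
  type:
    | 'generic_processor_failure'
    | 'generic_simulation_failure'
    | 'non_additive_processor_failure';
  message: string;
}

interface SimulationDocReport {
  status: DocStatus;
  errors: SimulationDocError[];
}

function aggregateSimulation(docs: SimulationDocReport[]) {
  const total = docs.length;
  const parsed = docs.filter((doc) => doc.status === 'parsed').length;
  const failed = docs.filter((doc) => doc.status === 'failed').length;

  return {
    // How partially parsed documents weigh into the two rates is an
    // assumption here; only fully parsed documents count toward success.
    success_rate: total ? parsed / total : 0,
    failure_rate: total ? failed / total : 0,
    // The global flag is raised when any document reports a non-additive
    // processor failure.
    is_non_additive_simulation: docs.some((doc) =>
      doc.errors.some((error) => error.type === 'non_additive_processor_failure')
    ),
  };
}
```

In practice this aggregation would run over the per-document reports produced by the pipeline simulation.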
## 🚨 API Failure Conditions & Handler Corner Cases

The simulation API handles and reports the following corner cases:

- **Pipeline Simulation Failures** _(Gracefully reported)_
  - Syntax errors in processor configurations (e.g., malformed grok patterns) trigger a pipeline-level failure with detailed error information (processor ID, error type, and message).
- **Non-Additive Processor Behavior** _(Gracefully reported)_
  - If a processor modifies fields already present in the source document rather than strictly appending new fields, the simulation flags this as a non-additive change.
  - The error is recorded both at the document level (resulting in a "partially_parsed" or "failed" status) and within per-processor metrics, with the global flag `is_non_additive_simulation` set to true.
- **Partial Document Processing** _(Gracefully reported)_
  - In scenarios with sequential processors where the first processor succeeds (e.g., a dissect processor) and a subsequent grok processor fails, documents are marked as "partially_parsed".
  - These cases are reflected in the overall success/failure rates and in the detailed per-document error lists.
- **Field Overriding**
  - When a later processor intentionally overrides fields (for instance, reassigning a previously calculated field), the simulation bypasses the non-additive check, and detected fields are aggregated accordingly, noting both the original and overridden values.
- **Mapping Inconsistencies** _(API failure, bad request)_
  - When the ingest simulation detects an incompatibility between the provided detected field mappings (such as defining a field as a boolean when it should be a date) and the source document, it fails immediately.
  - The failure response includes an error message explaining the incompatibility.

## 🔜 Follow-up Work

- **Integrate Schema Editor**
  Given the improved support for detected fields, a follow-up PR will introduce the Schema Editor and allow mapping alongside data enrichment.
- **Granular filtering and reporting**
  With access to more granular details (status, errors, and detected fields for each document), we can enhance the table with additional information and better filters.

cc @LucaWintergerst @patpscal

## 🎥 Demo recordings

https://github.com/user-attachments/assets/29f804eb-6dd4-4452-a798-9d48786cbb7f

---------

Co-authored-by: Jean-Louis Leysens <jloleysens@gmail.com>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
# Kibana
Kibana is your window into the Elastic Stack. Specifically, it's a browser-based analytics and search dashboard for Elasticsearch.
- Getting Started
- Documentation
- Version Compatibility with Elasticsearch
- Questions? Problems? Suggestions?
## Getting Started
If you just want to try Kibana out, check out the Elastic Stack Getting Started Page to give it a whirl.
If you're interested in diving a bit deeper and getting a taste of Kibana's capabilities, head over to the Kibana Getting Started Page.
### Using a Kibana Release
If you want to use a Kibana release in production, give it a test run, or just play around:
- Download the latest version on the Kibana Download Page.
- Learn more about Kibana's features and capabilities on the Kibana Product Page.
- We also offer a hosted version of Kibana on our Cloud Service.
### Building and Running Kibana, and/or Contributing Code
You might want to build Kibana locally to contribute some code, test out the latest features, or try out an open PR:
- CONTRIBUTING.md will help you get Kibana up and running.
- If you would like to contribute code, please follow our STYLEGUIDE.mdx.
- For all other questions, check out the FAQ.md and wiki.
## Documentation
Visit Elastic.co for the full Kibana documentation.
For information about building the documentation, see the README in elastic/docs.
## Version Compatibility with Elasticsearch
Ideally, you should be running Elasticsearch and Kibana with matching version numbers. If your Elasticsearch is on an older minor or major version than Kibana, or on a newer major version, Kibana will fail to run. If Elasticsearch is only a patch version behind, or a minor or patch version ahead of Kibana, the Kibana server will log a warning.
Note: The version numbers below are only examples, meant to illustrate the relationships between different types of version numbers.
Situation | Example Kibana version | Example ES version | Outcome |
---|---|---|---|
Versions are the same. | 7.15.1 | 7.15.1 | 💚 OK |
ES patch number is newer. | 7.15.0 | 7.15.1 | ⚠️ Logged warning |
ES minor number is newer. | 7.14.2 | 7.15.0 | ⚠️ Logged warning |
ES major number is newer. | 7.15.1 | 8.0.0 | 🚫 Fatal error |
ES patch number is older. | 7.15.1 | 7.15.0 | ⚠️ Logged warning |
ES minor number is older. | 7.15.1 | 7.14.2 | 🚫 Fatal error |
ES major number is older. | 8.0.0 | 7.15.1 | 🚫 Fatal error |
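To restate the table compactly, a small illustrative check could look like the sketch below; the function name and naive semver parsing are assumptions made for this example only, and this is not Kibana's actual startup check.

```typescript
// Illustrative only: a compact restatement of the compatibility table above,
// not Kibana's actual startup check. Assumes plain "major.minor.patch" strings.
type Outcome = 'ok' | 'warning' | 'fatal';

function compatibilityOutcome(kibanaVersion: string, esVersion: string): Outcome {
  const [kibanaMajor, kibanaMinor] = kibanaVersion.split('.').map(Number);
  const [esMajor, esMinor] = esVersion.split('.').map(Number);

  if (esMajor !== kibanaMajor) return 'fatal'; // ES major newer or older
  if (esMinor < kibanaMinor) return 'fatal'; // ES minor older
  if (esVersion === kibanaVersion) return 'ok'; // exact match
  return 'warning'; // ES minor/patch newer, or ES patch older
}

// Examples taken from the table:
console.log(compatibilityOutcome('7.15.1', '7.15.1')); // 'ok'
console.log(compatibilityOutcome('7.14.2', '7.15.0')); // 'warning'
console.log(compatibilityOutcome('7.15.1', '7.14.2')); // 'fatal'
```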
## Questions? Problems? Suggestions?
- If you've found a bug or want to request a feature, please create a GitHub Issue. Please check to make sure someone else hasn't already created an issue for the same topic.
- Need help using Kibana? Ask away on our Kibana Discuss Forum and a fellow community member or Elastic engineer will be glad to help you out.