* DLQ-ing events that trigger a conditional evaluation error. (#16423)
When conditional evaluation encounters an error in the expression, the event that triggered the issue is sent to the pipeline's DLQ, if enabled for the executing pipeline.
This PR builds on the work done in #16322: the `ConditionalEvaluationListener` that receives notifications about if-statement evaluation failures is improved to also send the event to the DLQ (if enabled in the pipeline) rather than just logging it.
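A minimal sketch of the improved listener behaviour, assuming a `writeEntry`-style DLQ writer API and a `failedEvent()` accessor on the error (both names are hypothetical here):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Illustrative sketch only: log the failure and, when the pipeline has a DLQ
// enabled, also route the offending event to it.
final class DlqNotifyingListener implements ConditionalEvaluationListener {
    private static final Logger LOGGER = LogManager.getLogger(DlqNotifyingListener.class);
    private final DeadLetterQueueWriter dlqWriter; // null when the DLQ is disabled

    DlqNotifyingListener(final DeadLetterQueueWriter dlqWriter) {
        this.dlqWriter = dlqWriter;
    }

    @Override
    public void notify(final ConditionalEvaluationError err) {
        LOGGER.warn("Event failed if-statement evaluation and will be cancelled", err);
        if (dlqWriter != null) {
            try {
                // writeEntry(event, pluginName, pluginId, reason) is an assumed signature
                dlqWriter.writeEntry(err.failedEvent(), "if-statement", "condition", err.getMessage());
            } catch (final java.io.IOException e) {
                LOGGER.error("Failed to write the event to the dead letter queue", e);
            }
        }
    }
}
```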
(cherry picked from commit b69d993d71)
* Fixed a warning about the non-serializable field `DeadLetterQueueWriter` in the serializable `AbstractPipelineExt`
---------
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
This PR adds the cause error details to the pipeline converge state error logs.
(cherry picked from commit e84fb458ce)
Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
* Fixes the issue where LS wipes out all quotes from docker env variables. This is an issue when running LS on docker with CONFIG_STRING, which needs to keep quotes within the env variable.
* Add a docker acceptance integration test.
(cherry picked from commit 7c64c7394b)
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
Make the inner field of `ConditionalEvaluationError` transient so that it is skipped during serialization.
(cherry picked from commit bb7ecc203f)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
This PR protects if statements against expression evaluation errors, cancelling the event under processing and logging it.
This avoids crashing a pipeline that encounters a runtime error during event condition evaluation, and permits debugging the root cause by reporting the offending event and removing it from the current processing batch.
Translates the `org.jruby.exceptions.TypeError`, `IllegalArgumentException`, and `org.jruby.exceptions.ArgumentError` that could happen during `EventCondition` evaluation into a custom `ConditionalEvaluationError`, which bubbles up through the AST nodes and is caught in the `SplitDataset` node.
Updates the generation of the `SplitDataset` so that the execution of the `filterEvents` method inside the compute body is try-catch guarded, deferring error handling to an instance of `AbstractPipelineExt.ConditionalEvaluationListener`. In this particular case the error management consists of just logging the offending event, as sketched below.
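Schematically, the guarded shape of the generated code looks like this fragment (illustrative names; the real body is emitted by the Dataset compiler):

```java
// Illustrative fragment: condition evaluation is try-catch guarded, and error
// handling is deferred to the pipeline's ConditionalEvaluationListener.
for (final JrubyEventExtLibrary.RubyEvent e : batch) {
    try {
        (eventCondition.fulfilled(e) ? trueBranch : falseBranch).add(e);
    } catch (final ConditionalEvaluationError err) {
        e.getEvent().cancel(); // drop the event from the in-flight batch
        listener.notify(err);  // in this PR: just log the offending event
    }
}
```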
---------
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
This PR is intended to help Logstash developers or users who want to better understand the code that's autogenerated to model a pipeline, by assigning more meaningful names to the `Dataset` subclasses' fields.
Updates `FieldDefinition` to receive the name of the field from construction methods, so that it can be used during the code generation phase, instead of the existing incremental `field%n`.
Updates `ClassFields` to propagate the explicit field name down to the `FieldDefinitions`.
Updates the `DatasetCompiler`, which adds fields to `ClassFields`, to assign proper names to the generated Dataset fields.
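For illustration, the generated fields go from opaque incremental names to self-describing ones, roughly like this (hypothetical example):

```java
// Before: incremental "field%n" names assigned by FieldDefinition.
private final EventCondition field2;
private final Dataset field3;

// After: explicit names propagated from DatasetCompiler through ClassFields.
private final EventCondition filterCondition;
private final Dataset parentDataset;
```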
* Exclude substitution refinement on pipelines.yml (it applies to ENV vars and logstash.yml, where env2yaml saves vars)
* Add a safety integration test for a pipeline config.string that contains ENV vars.
* Properly resolve the values from ENV vars when a literal array string is provided via an ENV var.
* Add a docker acceptance test for persisting keys and using actual values in the docker container.
* Review suggestion.
Simplify the code by stripping whitespace before `gsub`; no need to check for commas and split.
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
---------
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Because logging configuration occurs after loading the `logstash.yml`
settings, deprecation logs from `LogStash::Settings::DeprecatedAlias#set` are
effectively emitted to a null logger and lost.
By re-emitting after the post-process hooks, we can ensure that they make
their way to the deprecation log. This change adds support for any setting
that responds to `Object#observe_post_process` to receive it after all
post-processing hooks have been executed.
Resolves: elastic/logstash#16332
This commit improves error handling when pipelines that are too big hit the Xss limit and throw a StackOverflowError. Currently the exception is printed outside of the logger and doesn't even show up if `log.format` is json, leaving the user to wonder what happened.
A couple of thoughts on the way this is implemented:
* There should be a first barrier to handle pipelines that are too large based on the PipelineIR compilation. The barrier would use the detection of Xss to determine how big a pipeline could be. This however doesn't reduce the need to still handle a StackOverflow if it happens.
* The catching of StackOverflowError could also be done in the WorkerLoop. However, I'd suggest that this is unrelated to the Worker initialization itself; it just so happens that compiledPipeline.buildExecution is computed inside the WorkerLoop class for performance reasons. So I'd prefer the logging to come not from the existing catch, but from a dedicated catch clause (see the sketch below).
Solves #16320
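A hedged sketch of that dedicated catch clause (context, wrapping exception, and message are illustrative):

```java
// Illustrative: catch the StackOverflowError at the buildExecution call site
// and route a readable error through the logger, so it also shows up when
// log.format is json.
try {
    execution = compiledPipeline.buildExecution();
} catch (final StackOverflowError err) {
    LOGGER.error("Pipeline compilation exceeded the JVM thread stack size (-Xss); "
            + "the pipeline may be too big. Consider increasing the stack size or splitting the pipeline.");
    throw new IllegalStateException("Unable to compile the pipeline", err);
}
```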
* licenses: allow elv2, standard abbreviation for Elastic License version 2
* json-dump: reduce unicode normalization cost
Since the underlying JrJackson now properly (and efficiently) encodes the
UTF-8 transcode of whichever strings it is given, we no longer need to
pre-normalize to UTF-8 in ruby _except_ when the string is flagged as BINARY
because we have alternate behaviour to preserve valid UTF-8 sequences.
By emitting a _copy_ of binary-flagged strings that have been re-flagged as
UTF-8, we allow the downstream (efficient) encoding operation in jrjackson
to produce equivalent behaviour at much lower cost.
* cleanup: remove orphan unicode normalizer
This commit fixed the configuration reload process to clean up the pipeline's metric store, so it does not retain references to failed pipeline components.
* Add `RubyEvent#dup` support and a unit test case to keep `Json#dump(Event)` safe.
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
---------
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
* fix: restore support for unicode pipeline- and plugin-id's
JRuby's `Ruby#newSymbol(String)` throws an exception when provided a `String`
that contains characters outside of lower-ASCII because JRuby internals expect
"the incoming String to be one of our mangled ISO-8859-1 strings" as noted in
a comment on jruby/jruby#6217.
Instead, we use `Ruby#newString(String)` to create a new `RubyString` (which
works properly), and then rely on `RubyString#intern` to get our `RubySymbol`.
This fixes a regression introduced in the 8.7 series in which pipeline id's
are consistently represented as ruby symbols in the metrics store, and ensures
that a similar issue does not exist when specifying a plugin id that contains
characters above the lower-ASCII plane.
* fix: use properly-encoded RubySymbol in PipelineConfig
We cannot rely on `RubySymbol#toString` to produce a properly-encoded `String`
when the string contains characters above the lower-ASCII plane, because the
result is effectively a binary ruby-internal marshal of the bytes that only
holds when the symbol contains lower-ASCII.
Instead, we can use the internally-memoizing `RubySymbol#name` to get a
properly-encoded `RubyString`, and `RubyString#asJavaString()` to get a
properly-encoded java-`String`.
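For reference, a hedged sketch of that conversion (the exact JRuby signatures may differ slightly):

```java
import org.jruby.RubyString;
import org.jruby.RubySymbol;
import org.jruby.runtime.ThreadContext;

public final class SymbolNames {
    // Obtain a properly-encoded java.lang.String from a RubySymbol without
    // RubySymbol#toString, which leaks the internal byte mangling for symbols
    // containing characters above the lower-ASCII plane.
    public static String toJavaString(final ThreadContext context, final RubySymbol symbol) {
        final RubyString name = (RubyString) symbol.name(context); // memoized, properly-encoded
        return name.asJavaString();
    }
}
```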
* fix: properly serialize unicode pipeline names in API output
Jackson's JSON serializer leaks the JRuby-internal byte structure of Symbols,
which only aligns with the byte-structure of the symbol's actual string when
that string is wholly-comprised of lower-ASCII characters.
By pre-converting Symbols to Strings, we ensure that the result is readable
and useful.
* spec: bypass monitoring specs for unicode pipeline ids when PQ enabled
* Rework the logic that deletes the eldest DLQ segments to be more resilient to file-not-found errors, and avoid logging warn messages about situations the user can't take any action to solve.
* Fixed test case: when the path points to a file that doesn't exist, always rely on the path name comparator. Reworked the code to simplify it, removing the need for the tri-state variable.
This is a refactoring of a test fixture.
Avoid mocking the value returned by the global SETTINGS constant; use instead the local settings map instance used in subject creation.
* p2p: extract interface from v1 pipeline bus
* p2p: extract pipeline push to abstract
* p2p: add opt-in unblocked "v2" implementation
Adds a v2 implementation that does not synchronize on the sender so that
multiple workers can send events through a common `pipeline` output instance
simultaneously.
In this implementation, an `AddressStateMapping` provides synchronized
mutation and cleanup of the underlying `AddressState`, and allows only
queryable mutable views (`AddressState.ReadOnly`) to escape encapsulation.
The implementation also holds an identity-keyed mapping from each `PipelineOutput`
to the set of `AddressState.ReadOnly`s it is registered as a sender for, so
that they can be quickly resolved at runtime.
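A hedged sketch of that encapsulation (simplified names and signatures):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Illustrative: all AddressState mutation funnels through the mapping, which
// synchronizes per address and only lets read-only views escape.
final class AddressStateMapping {
    private final Map<String, AddressState> mapping = new ConcurrentHashMap<>();

    AddressState.ReadOnly mutate(final String address, final Consumer<AddressState> mutation) {
        final AddressState state = mapping.compute(address, (key, existing) -> {
            final AddressState current = (existing == null) ? new AddressState(key) : existing;
            mutation.accept(current); // runs under the map's per-key lock
            return current;
        });
        return state.getReadOnly(); // only the queryable view is exposed
    }
}
```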
* p2p: more tests for pipeline restart behaviour
* p2p: make v2 pipeline bus the default
Updates the DLQ reader to create a notification file (`.deleted_segment`) that signals when a segment is deleted as a consequence of the `clean_consumed` setting. Updates the DLQ writer to hold a filesystem watch so that it can receive the reader's signal and update the exposed metric, loading the size by listing the segments' occupation on the filesystem.
The PR was created to skip resolving environment variable references in comments present in the `config.string` of pipelines defined in the pipelines.yml file.
However, it introduced a bug where env var references were no longer resolved in values of settings like `pipeline.batch.size` or `queue.max_bytes`.
For now we’ll revert this PR and create a fix that handles both problems.
* pq: avoid blocking writer when queue is precisely full
A PQ is considered full (and therefore needs to block before releasing the
writer) when its persisted size on disk _exceeds_ its `queue.max_bytes`
capacity.
This removes an edge-case preemptive block when the persisted size after
writing an event _meets_ its `queue.max_bytes` precisely AND its current
head page has insufficient room to also accept a hypothetical future event.
Fixes: elastic/logstash#16172
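The boundary condition, in miniature (names are illustrative):

```java
// A queue whose persisted size merely *meets* queue.max_bytes is exactly full,
// not over capacity; only a strict excess should block the writer.
static boolean isFull(final long persistedByteSize, final long maxBytes) {
    return persistedByteSize > maxBytes; // '>' rather than '>='
}
```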
* docs: PQ `queue.max_bytes` cannot be less than `queue.page_capacity`
* Logic to handle non-unicode payloads in Logstash.
* A well-tested and better-organized version of the logic.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Upgrade jrjackson to 0.4.20
* Code review: simplify the logic with a standard String#encode interface with replace option.
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
---------
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
In the grammar definitions for hashes, `whitespace` was replaced with `cs` to allow either whitespace _or_ comments.
Additionally, the grammar definition for comments was previously required to end with a newline; now it can end with a newline _or_ EOF, using the "not anything" treetop rule `!.`.
Co-authored-by: Jonas Lundholm Bertelsen <jonas.lundholm.bertelsen@beumer.com>
Adds the `log.format.json.fix_duplicate_message_fields` feature flag to rename clashing fields when the json logging format (`log.format`) is selected.
In case two message fields clash in a structured log message, the second is renamed by attaching a `_1` suffix to the field name.
By default the feature is disabled and requires the user to explicitly enable the behaviour.
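The renaming rule, in miniature (helper name and shape are hypothetical):

```java
import java.util.Set;

// Returns the field name unchanged the first time it appears in a log entry,
// and a "_1"-suffixed variant when it clashes with an already-emitted field.
static String dedupedFieldName(final String name, final Set<String> alreadyEmitted) {
    return alreadyEmitted.add(name) ? name : name + "_1";
}
```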
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
* Wipe out comment lines if the config contains comments.
* Remove the substitution variable processing when loading the YAML; instead, align on the generic LSCL approach, which happens during pipeline compilation.
* Update logstash-core/src/main/java/org/logstash/config/ir/PipelineConfig.java
Put the logging config back as it is being used with composed configs.
Introduce a new setting named `pipeline.buffer.type`, which can be set to `direct` or `heap` to enable allocation on the Java heap.
The setting is processed in `LogStash::Runner#execute` and sets the Java property considered by Netty to disable direct allocation: `io.netty.noPreferDirect`.
However, if that system property is already configured explicitly by the user (because it is set in `jvm.options` or `LS_JAVA_OPTS`), the setting doesn't take effect and a warning is logged, respecting the user's will.
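Conceptually, the property handling behaves like this sketch (method name and warning text are illustrative):

```java
// Illustrative: honour any explicit user configuration of the Netty flag.
static void applyBufferTypeSetting(final String pipelineBufferType) {
    final String key = "io.netty.noPreferDirect";
    if (System.getProperty(key) != null) {
        System.err.printf("%s is already set; ignoring pipeline.buffer.type%n", key);
        return; // the user's explicit choice wins
    }
    // "heap" disables Netty's preference for direct buffers
    System.setProperty(key, Boolean.toString("heap".equals(pipelineBufferType)));
}
```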
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Given that JRuby comes with ruby-maven-libs 3.3.9, this commit upgrades the gem to 3.8.9 and ensures that files from 3.3.9 are not included in the distribution.
Remove the unused method `createFlushScheduler` and add some debug logs to the `finalizeSegment` method, to make it easier to follow its execution in case of problems.
This PR adds a shutdown method to the `SchedulerService` class used to handle actions that are executed on a certain cadence. In particular, it is used to execute the scheduled finalization of the DLQ head segment.
Updates the close method of the DLQ writer to invoke this additional shutdown on the service instance.
Adds a time-burning condition to avoid a clock collision which, under certain circumstances, would fail the test.
The sealing of a segment happens if the segment is considered as stale, which requires 2 conditions:
- the segment must have received a write.
- the time elapsed since the last write must exceed the flush interval.
In this failing test, the flush interval is set to ZERO because the test is synchronous, to avoid time dependency. However, with coarse-grained timer resolution, the last write can coincide with the time of the stale check and so fail the seal condition.
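The two staleness conditions, in miniature (names are illustrative): with a flush interval of ZERO and a coarse clock, `nowMillis - lastWriteMillis` can be exactly 0, which is not strictly greater than 0, so the seal check fails; hence the time burning.

```java
// A segment seals only if it has received a write AND its last write is older
// than the flush interval.
static boolean isStale(final boolean hasWrites, final long lastWriteMillis,
                       final long flushIntervalMillis, final long nowMillis) {
    return hasWrites && (nowMillis - lastWriteMillis) > flushIntervalMillis;
}
```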
* Ruby code coverage with SimpleCov json formatter
* [CI] Send Java and ruby tests to sonarqube simultaneously
* Enabled COVERAGE for ruby tests
* Added compiled classes to artifacts
* Test change
* Removed test changes
* Returned back ENABLE_SONARQUBE condition
* Removed debug line
* Disable Ruby coverage if ENABLE_SONARQUBE is not true
* Run sonar scan on pull requests and on push to main
* Run sonar scan on release branches
This commit added a few `jvm.options` properties to configure the Jackson read constraints defaults (maximum number value length, maximum string value length, and maximum nesting depth).
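For context, these defaults correspond to Jackson's `StreamReadConstraints` (the values below are illustrative, not the shipped defaults):

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.StreamReadConstraints;

// Jackson 2.15+ read constraints that the new jvm.options properties map onto.
StreamReadConstraints constraints = StreamReadConstraints.builder()
        .maxNumberLength(10_000)      // maximum number value length
        .maxStringLength(200_000_000) // maximum string value length
        .maxNestingDepth(1_000)       // maximum nesting depth
        .build();
JsonFactory factory = JsonFactory.builder()
        .streamReadConstraints(constraints)
        .build();
```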
This commit fixes how the keystore tool handles command options, including validation for unknown options, and adds the `--stdin` flag to the add command.
Introduces a new interface named `SchedulerService` to abstract away the `ScheduledExecutorService` used to execute the DLQ segment flushes. Abstracting from time provides a benefit in testing: the test doesn't have to wait for things to happen; those things can happen synchronously.
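A hedged sketch of the abstraction (method names are illustrative):

```java
// Production code backs this with a ScheduledExecutorService; tests can supply
// an implementation that runs the action synchronously, with no waiting.
interface SchedulerService {
    void repeatedAction(Runnable action); // run on the configured cadence
    void shutdown();
}

final class SynchronousSchedulerService implements SchedulerService {
    @Override public void repeatedAction(final Runnable action) { action.run(); }
    @Override public void shutdown() { /* nothing to stop */ }
}
```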
In DLQ unit testing, the DLQ writer is sometimes started explicitly without starting the segment flushers. In such cases the test's logs contain exceptions, which could lead one to think that the test failed silently.
Avoid invoking the scheduledFlusher's shutdown when it's not started (such behaviour is present only in tests).
This commit added support for adding and removing multiple keystore keys in a single operation. It also fixed the empty-value validation for editing existing key values and added ASCII validation for values.