This commit added a few jvm.options properties to configure the Jackson read constraints defaults (Maximum Number value length, Maximum String value length, and Maximum Nesting depth).
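For illustration, these overrides live in `config/jvm.options` as system properties. A hedged sketch (the exact property names below are an assumption, not taken from this commit):

~~~
# config/jvm.options — assumed property names for the Jackson read constraints
-Dlogstash.jackson.stream-read-constraints.max-string-length=200000000
-Dlogstash.jackson.stream-read-constraints.max-number-length=10000
-Dlogstash.jackson.stream-read-constraints.max-nesting-depth=1000
~~~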
(cherry picked from commit a21ced0946)
Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
This commit updates the puma gem from version 5 to the latest version 6.3.
A few breaking changes were introduced in Puma 6.0.0, which required some refactoring on the Logstash side, especially to adapt it to the "Extracted LogWriter from Events #2798" changes.
Before this PR, all the logs generated by Puma used the debug level, even the ones that were actually errors and needed attention/action from the users. This commit also changes the log levels as follows:
- `error(...)`: changed from debug to error
- `unknown_error(...)`: changed from debug to error
Set of changes to make Logstash compatible with JRuby 9.4.
Bundle JRuby 9.4.3.0
- Redefine the space token in the `LSCL` and `grammar` treetop grammars from `_` to `sc`, since `_` generated methods of the form `def _0` (deprecated since Ruby `2.7`).
- The `I18n.t` method no longer accepts a hash as its second positional argument.
- `URI.encode` has been replaced with the equivalent `URI::Parser.new.escape`.
- `YAML.load` needs an explicit `fallback: false` to return `false` when the YAML string is empty (or contains only comments); see the sketch after this list.
- JRuby's `JavaClass` has been removed; `java.lang.Class` is now used directly.
- Explicitly require the `thwait` gem to satisfy `require "thwait"` (in `Gemfile.template` and `logstash-core/logstash-core.gemspec`).
- Fix no-args `clone` definitions to be `def clone(*args)`.
- Fix `Enumerable#each_slice`, which as of `Ruby 3.1` is [chainable](https://rubyreferences.github.io/rubychanges/3.1.html#enumerableeach_cons-and-each_slice-return-a-receiver) and no longer returns `nil`; fixed in JRuby via https://github.com/jruby/jruby/issues/7015.
- Expanded the `Down.download` arguments map (ca16bbed3c302006967413eb9d3862f2da81f7ae).
- Avoid passing `nil` in the list of pairs used in `Hash[<list of pairs>]`, which as of Ruby `3.0` raises an `ArgumentError`; see the sketch after this list.
- Removed the space between method name and opening parenthesis, since `initialize (` is now forbidden (29b607dcdef98f81a73ad171639fd13aaa65e243).
- Since [Ruby 2.7, `Kernel#open`](https://rubyreferences.github.io/rubychanges/2.7.html#network-and-web) no longer falls back to `URI#open`; fixed test code that relied on that to verify an open port (e5b70de54c5301f51a767da67294092af0cfafdc).
- Keep the `rdoc/` folder in the vendored JRuby, otherwise `bin/logstash -i irb` would crash (commit b71f73e9c6edb81a7b7ae1305047e506f61c6e8c).
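A minimal Ruby sketch of the `YAML.load` and `Hash[]` changes above (illustrative, not the exact Logstash code):

~~~ruby
require "yaml"

# Psych 4 / Ruby 3.x: YAML.load on an empty or comment-only document returns
# the fallback value; pass `fallback: false` to preserve the old falsy result.
YAML.load("# just a comment", fallback: false) # => false

# Ruby 3.0+: Hash[] raises ArgumentError on a nil pair, so nil entries must
# be filtered out before building the hash.
pairs = [["a", 1], nil, ["b", 2]]
Hash[pairs.compact] # => {"a"=>1, "b"=>2}
~~~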
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
This commit changed java_pipeline.rb to include the pipeline/main thread in the stalling-threads info list, so that Logstash can provide users with more helpful information when the stalling thread is the pipeline/main one.
Modify the WorkerLoop to catch the newly introduced exception org.logstash.execution.AbortedBatchException so that an in-flight batch can be negatively ACK-ed. This feature is used in combination with PQs to let plugins exit without completing processing. Any filters and outputs already executed for the batch will be executed again the next time the batch is picked up from the persistent queue.
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Reject illegal values assigned to the `tags` field. Top-level `tags` should only accept a string or an array of strings.
When `tags` receives an illegal value at event creation, LogStash::Event renames the field to `_tags` and adds a `_tagsparsefailure` tag to `tags`.
When `tags` receives an illegal value in a `set` operation, LogStash::Event throws an exception.
Add a flag `--event_api.tags.illegal` to allow falling back to the old logic. There are two options:
`warn` - the old flow that allows illegal value assignment to the `tags` field.
`rename` - the new flow. This is the default value in 8.7.
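An illustrative sketch of the default (`rename`) behaviour (hedged; exact values and error messages may differ):

~~~ruby
# Event creation with an illegal (non string/array-of-strings) `tags` value:
event = LogStash::Event.new("tags" => {"foo" => "bar"})
event.get("_tags") # => {"foo" => "bar"}      (illegal value moved aside)
event.get("tags")  # => ["_tagsparsefailure"] (failure tag added)

# A `set` operation with an illegal value raises instead:
event.set("tags", 42) # => raises an exception
~~~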
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Logstash currently does not support multiple top-level `codec` declarations per plugin. This commit fixes the pipeline compilation, ensuring that erroneously configured plugins fail to compile and result in a configuration error.
* live timers: introduce API boundary
Introduces an API boundary for timers as a first-class metric, as described
in elastic/logstash#14675, and migrates all known internal timers to use the
new API boundary for tracked execution.
Please refer to the specification for details on motivations.
This commit is a net-zero change to behaviour, and introduces a single new
undocumented setting `metric.timers` in `logstash.yml`, which presently only
takes its default value `delayed`, indicating that delayed committing of
execution time is acceptable.
It implements the new `TimerMetric` API in a way that is also net-zero-change.
Tracked executions are still performed by marking a start time, performing
the tracked execution, and incrementing an underlying long-type counter with
the number of elapsed milliseconds _after_ execution has completed. This means
that long-running execution is still missing from the metric until it has
completed.
The new Timer API is available to both the Ruby- and the Java-based plugin APIs.
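From the Ruby plugin API, a tracked execution might look like the following sketch (hedged; assumes the namespaced metric's block-timing method, and `flush`/`batch` are placeholders):

~~~ruby
# Time the block against the `flush_duration` timer; the elapsed milliseconds
# are committed to the underlying counter once the block completes.
metric.time(:flush_duration) do
  flush(batch)
end
~~~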
* timer metrics: sub-package and add baseline tests
* WIP: move execution metric ownership out of queue
* noop: remove useless abstract method
Our `AbstractMetric` implements `Metric` and does not need to declare
an abstract override of `Metric#getType`. Doing so prevents interfaces
from providing a default override for all implementers.
* timer metric tests: extract util, refactor for reuse
* timers: accumulate milli-excess-nanos
* live timers: single-checkpoint implementation
* timer metric: use explicit type parameters to make intent clear
* remove unused imports
* use safe int conversion
* test fixup: use given name for tested metric
* test helper: TimerMetricFactory prefers nanotime supplier
* timers: flesh out test coverage, incl live-timers
* test: move validation of queue-read metrics to ObservedExecution
* flow: support non-moving denominator (±infinity)
* metrics: add metric config pass-through to env2yaml
During stalled shutdowns while waiting for in-flight batches to complete,
our shutdown watcher emits helpful information about what work is in flight,
including the actual threads and plugins that are still executing.
Since ~6.3.0, the `inflight_count` metric in this log message has always
been `0`, in part because of two somewhat-overlapping bugs:
- elastic/logstash#8987 and elastic/logstash#9056 (7.0, 6.3) changed
the `inflight_batches` map provided by the queue read clients to index
batches by native thread id, but the pipeline reporter continued to
attempt to extract by ruby thread object. Because it does not find
the thread in the "batch map", it reports zero.
- elastic/logstash#9111 (7.0, 6.3) changed the _value_ stored in
the `inflight_batches` map provided by a new common queue read client
from an object responding to `#size` to a java `QueueBatch` which
does not respond to `size`. If our pipeline reporter had been able to
look up the queue batch, it would have failed with a `NoMethodError`.
We resolve the issue by (1) extracting the batch from our "batch map" using
the native thread id and (2) safely extracting the value from a `QueueBatch`
before falling through to `Object#size` or 0.
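A hedged Ruby sketch of the combined fix (names are approximate: `batch_map` and `native_thread_id_of` are stand-ins, and `filteredSize` is assumed to be the `QueueBatch` accessor):

~~~ruby
batch = batch_map[native_thread_id_of(thread)] # (1) look up by native thread id
inflight_count =
  if batch.nil?
    0
  elsif batch.respond_to?(:filteredSize) # (2) a java QueueBatch
    batch.filteredSize
  else
    batch.size # fall through to Object#size-style responders
  end
~~~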
* Collect growth events and bytes metrics if PQ is enabled: Java changes.
* Move queue flow under queue namespace.
* Pipeline level PQ flow metrics: add unit & integration tests.
* Include queue info in node stats sample.
* Apply suggestions from code review
Change uptime precision for PQ growth metrics to uptime seconds since PQ events are based on seconds.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Add safeguard when using lazy delegating gauge type.
* flow metrics: simplify generics of lazy implementation
Enables interface `FlowMetrics::create` to take suppliers that _implement_
a `Metric<? extends Number>` instead of requiring them to be pre-cast, and
avoids unnecessary exposure of the metric's value-type into our lazy init.
* flow metrics: use lazy init for PQ gauge-based metrics
* noop: use enum equality
Avoids routing two enum values through `MetricType#toString()`
and `String#equals()` when they can be compared directly.
* Apply suggestions from code review
Optional.ofNullable used for safe return. Doc includes real tested expected metric values.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* flow metrics: make lazy-init wrapper inherit from AbstractMetric
this allows the Jackson serialization annotations to work
* flow metrics: move pipeline queue-based flows into pipeline flow namespace
* Follow up for moving PQ growth metrics under pipeline.*.flow.
- Unit and integration tests are added or fixed.
- Documentation added along with sample response data
* flow: pipeline pq flow rates docs
* Do not expect flow in the queue section of API. Metrics moved to flow section.
Update logstash-core/spec/logstash/api/commands/stats_spec.rb
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Integration test failure fix.
Mistake: `flow_status` should be `pipeline_flow_stats`
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Integration test failures fix.
Number should be Numeric in the ruby specs.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Make CI happy.
* api specs: use PQ only where needed
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
* specs: detangle out-of-band pipeline initialization
Our API tests were initializing their pipelines-to-test in an out-of-band
manner that prevented the agent from having complete knowledge of the
pipelines that were running. By providing a ConfigSource to our Agent's
SourceLoader, we can rely on the normal pipeline reload behaviour to ensure
that the agent fully-manages the pipelines in question.
* api: do not emit pipeline that is not fully-initialized
* Flow metrics: initial implementation (#14509)
* metrics: eliminate race condition when registering metrics
Ensure our fast-lookup and store tables cannot diverge in a race condition
by wrapping mutation of both in a single mutex and appropriately handle
another thread winning the race to the lock by using the value that it
persisted instead of writing our own.
* metrics: guard against intermediate namespace conflicts
- ensures our safeguard that prevents using an existing metric as a namespace
is applied to _intermediate_ nodes, not just the tail-node, eliminating a
potential crash when sending `fetch_or_store` to a metric object that is not
expected to respond to `fetch_or_store`.
- uses the atomic `Concurrent::Map#compute_if_absent` instead of the
non-atomic `Concurrent::Map#fetch_or_store`, which is prone to
last-write-wins during contention (as written, this method is only
executed under lock and not subject to contention); see the sketch after this list
- uses `Enumerable#reduce` to eliminate the need for recursion
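A minimal sketch of that atomic pattern (illustrative; `new_counter` is a hypothetical factory):

~~~ruby
require "concurrent"

metrics = Concurrent::Map.new
# compute_if_absent is atomic: a thread that loses the race adopts the value
# the winner persisted instead of overwriting it (unlike fetch_or_store).
counter = metrics.compute_if_absent(:events_in) { new_counter(:events_in) }
~~~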
* flow: introduce auto-advancing UptimeMetric
* flow: introduce FlowMetric with minimal current/lifetime rates
* flow: initialize pipeline metrics at pipeline start
* Controller and service layer implementation for flow metrics. (#14514)
* Controller and service layer implementation for flow metrics.
* Add flow metrics to unit test and benchmark cli definitions.
* flow: fix tests for metric types to accommodate new one
* Renaming concurrency and backpressure metrics.
Rename `concurrency` to `worker_concurrency` and `backpressure` to `queue_backpressure` to provide proper scope naming.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* metric: register flow metrics only when we have a collector (#14529)
the collector is absent when the pipeline is run in test with a
NullMetricExt, or when the pipeline is explicitly configured to
not collect metrics using `metric.collect: false`.
* Unit tests and integration tests added for flow metrics. (#14527)
* Unit tests and integration tests added for flow metrics.
* Node stat spec and pipeline spec metric updates.
* Metric keys statically imported, implicit error expectation added in metric spec.
* Fix node status API spec after renaming flow metrics.
* Removing flow metric from PipelinesInfo DS (used in periodic metric snapshot), integration QA updates.
* Rebasing with feature branch.
* metric: register flow metrics only when we have a collector
the collector is absent when the pipeline is run in test with a
NullMetricExt, or when the pipeline is explicitly configured to
not collect metrics using `metric.collect: false`.
* Apply suggestions from code review
Integration tests updated to test capturing the flow metrics.
* Flow metrics expectation updated in integration tests.
* flow: refine integration expectations for reloads/monitoring
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
Co-authored-by: Mashhur <mashhur.sattorov@gmail.com>
* metric: add ScaledView with sub-unit precision to UptimeMetric (#14525)
* metric: add ScaledView with sub-unit precision to UptimeMetric
By presenting a _view_ of our metric that maintains sub-unit precision,
we prevent jitter that can be caused by our periodic poller not running at
exactly our configured cadence.
This is especially important as the UptimeMetric is used as the _denominator_ of
several flow metrics, and a capture at 4.999s that truncates to 4s, causes the
rate to be over-reported by ~25%.
The `UptimeMetric.ScaledView` implements `Metric<Number>`, so its full
lossless `BigDecimal` value is accessible to our `FlowMetric` at query time.
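A worked example of that jitter:

~~~ruby
events = 500
events / 4.0   # => 125.0   events/sec with the truncated 4s denominator
events / 4.999 # => ~100.02 events/sec with the sub-unit-precise denominator
~~~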
* metrics: reduce window for too-frequent-captures bug and document it
* fixup: provide mocked clock to flow metric
* Flow metrics cleanup (#14535)
* flow metrics: code-style and readability pass
* remove unused imports
* cleanup: simplify usage of internal helpers
* flow: migrate internals to use OptionalDouble
* Flow metrics global (#14539)
* flow: add global top-level flows
* docs: add flow metrics
* Top level flow metrics unit tests added. (#14540)
* Top level flow metrics unit tests added.
* Add unit tests for config reloads, making sure top-level flow metrics don't get reset.
* Apply suggestions from code review
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Validating against Hash test cases updated.
* For the safety check against exact type in unit tests.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* docs: section links and clarity in node stats API flow metrics
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
Co-authored-by: Mashhur <mashhur.sattorov@gmail.com>
This test has been broken on Windows since the JRuby 9.3 update, and is
similar to an issue we saw when referencing other classes in the
`org.logstash.util` namespace.
The module is broken with the current version. The type needs to be changed from `syslog` to `_doc` to fix the issue.
* remove dangling setting and add arcsight index suffixes
* add tests for new suffix in arcsight module
Co-authored-by: Tobias Schröer <tobias@schroeer.ch>
* Timestamp#toString(): ensure a minimum of 3 decimal places
Logstash 8 introduced internals for nano-precise timestamps, and began relying
on `java.time.format.DateTimeFormatter#ISO_INSTANT` to produce ISO8601-format
strings via `java.time.Instant#toString()`, resulting in a _variable length_
serialization that only includes sub-second digits when the Instant represents
a moment in time with fractional seconds.
By ensuring that a timestamp's serialization has a minimum of 3 decimal places,
we ensure that our output is backward-compatible with the equivalent timestamp
produced by Logstash 7.
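Illustrative (hedged; assuming the `Timestamp.at` entry point), a whole-second timestamp now keeps its millisecond digits:

~~~ruby
LogStash::Timestamp.at(1650000000).to_s
# before: "2022-04-15T05:20:00Z"
# after:  "2022-04-15T05:20:00.000Z"
~~~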
* timestamp serialization-related specs fixup
This commit changes the behavior of PQ size checking.
When it checks size usage, instead of throwing an exception that stops the pipeline,
it logs a warning message in every converge cycle if the check fails.
Fixed: #14257
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Since JRuby 9.3, a useful `inspect` is provided out of the box:
LS' inspect: <Java::JavaUtil::ArrayList:3536147 ["some"]>
JRuby 9.3's: "#<Java::JavaUtil::ArrayList: [\"some\"]>"
* add failing tests for Event.new with field that look like field references
* fix: correctly handle FieldReference-special characters in field names.
Keys passed to most methods of `ConvertedMap`, which is based on `IdentityHashMap`,
depend on identity and not equivalence, and therefore rely on the keys being
_interned_ strings. In order to avoid hitting the JVM's global String intern
pool (which can have performance problems), operations to normalize a string
to its interned counterpart have traditionally relied on the behaviour of
`FieldReference#from` returning a likely-cached `FieldReference`, that had
an interned `key` and an empty `path`.
This is problematic on two points.
First, when `ConvertedMap` was given data with keys that _were_ valid string
field references representing a nested field (such as `[host][geo][location]`),
the implementation of `ConvertedMap#put` effectively silently discarded the
path components because it assumed them to be empty, and only the key was
kept (`location`).
Second, when `ConvertedMap` was given a map whose keys contained what the
field reference parser considered special characters but _were NOT_
valid field references, the resulting `FieldReference.IllegalSyntaxException`
caused the operation to abort.
Instead of using the `FieldReference` cache, which sits on top of objects whose
`key` and `path`-components are known to have been interned, we introduce an
internment helper on our `ConvertedMap` that is also backed by the global string
intern pool, and ensure that our field references are primed through this pool.
In addition to fixing the `ConvertedMap#newFromMap` functionality, this has
three net effects:
- Our ConvertedMap operations still use strings from the global intern pool
- We have a new, smaller cache of individual field names, improving lookup performance
- Our FieldReference cache is no longer flooded with fragments and therefore is more likely to remain performant
NOTE: this does NOT create isolated intern pools, as doing so would require
a careful audit of the possible code-paths to `ConvertedMap#putInterned`.
The new cache is limited to 10k strings, and when more are used only
the FIRST 10k strings will be primed into the cache, leaving the
remainder to always hit the global String intern pool.
NOTE: by fixing this bug, we allow events to be created whose fields _CANNOT_
be referenced with the existing FieldReference implementation.
Resolves: https://github.com/elastic/logstash/issues/13606
Resolves: https://github.com/elastic/logstash/issues/11608
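An illustrative consequence of the fix (hedged):

~~~ruby
# Previously aborted with FieldReference.IllegalSyntaxException; now the
# literal key is interned and kept as-is:
event = LogStash::Event.new("foo[bar]" => "baz")
# ...but no FieldReference syntax can address a field literally named "foo[bar]".
~~~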
* field_reference: support escape sequences
Adds a `config.field_reference.escape_style` option and a companion
command-line flag `--field-reference-escape-style` allowing a user
to opt into one of two proposed escape-sequence implementations for field
reference parsing:
- `PERCENT`: URI-style `%`+`HH` hexadecimal encoding of UTF-8 bytes
- `AMPERSAND`: HTML-style `&#`+`DD`+`;` encoding of decimal Unicode code-points
The default is `NONE`, which does _not_ process escape sequences.
With this setting a user effectively cannot reference a field whose name
contains FieldReference-reserved characters.
| ESCAPE STYLE | `[`     | `]`     |
| ------------ | ------- | ------- |
| `NONE`       | _N/A_   | _N/A_   |
| `PERCENT`    | `%5B`   | `%5D`   |
| `AMPERSAND`  | `&#91;` | `&#93;` |
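For example (illustrative), with `config.field_reference.escape_style: PERCENT` a field literally named `foo[bar]` becomes addressable:

~~~ruby
event.get("[foo%5Bbar%5D]") # addresses the field whose literal name is "foo[bar]"
~~~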
* fixup: no need to double-escape HTML-ish escape sequences in docs
* Apply suggestions from code review
Co-authored-by: Karol Bucek <kares@users.noreply.github.com>
* field-reference: load escape style in runner
* docs: sentences over semicolons
* field-reference: faster shortcut for PERCENT escape mode
* field-reference: escape mode control downcase
* field_reference: more s/experimental/technical preview/
* field_reference: still more s/experimental/technical preview/
Co-authored-by: Karol Bucek <kares@users.noreply.github.com>
* Add support for ca_trusted_fingerprint in Apache HTTP and Manticore
Adds a module `LogStash::Plugins::CATrustedFingerprintSupport`, which can be
included in a plugin class to add a `ca_trusted_fingerprint` option to create
an Apache SSL TrustStrategy that can be used to bypass the TrustManager when
a matching certificate is found on the chain.
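A sketch of the intended usage (hedged; the plugin class is illustrative):

~~~ruby
class LogStash::Outputs::MyHttpOutput < LogStash::Outputs::Base
  include LogStash::Plugins::CATrustedFingerprintSupport
  # The mixin adds a `ca_trusted_fingerprint` option and can build an Apache
  # TrustStrategy that trusts a chain containing a matching certificate.
end
~~~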
This commit updates the version of jruby used in Logstash to `9.3.4.0`.
* Updates the references of `jruby` from `9.2.20.1` to `9.3.4.0`
* Updates references/locations of ruby from `2.5.0` to `2.6.0`
* Updates java imports including `org.logstash.util` to be quoted
* Without quoting the name of the import, the following error is observed in tests:
* `java.lang.NoClassDefFoundError: org/logstash/Util (wrong name: org/logstash/util)`
* Maybe an instance of https://github.com/jruby/jruby/issues/4861
* Adds a monkey patch to `require` to resolve compatibility issue between latest `jruby` and `polyglot` gem
* The addition of https://github.com/jruby/jruby/pull/7145 to disallow circular
causes will throw when `polyglot` is thrown into the mix, and stop logstash from
starting and building - any gems that use an exception to determine whether or not
to load the native gem will trigger the code added in that commit.
* This commit adds a monkey patch of `require` to roll the circular-cause exception
back to the original cause.
* Removes the use of the deprecated `JavaClass`
* Adds additional `require time` in `generate_build_metadata`
* Rewrites a test helper to avoid potentially calling `~>` on `FalseClass`
Co-authored-by: Joao Duarte <jsvduarte@gmail.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Updates the `internalReceive` method implementation in Pipeline Input to catch exceptions and return the position where the stream was interrupted. Modifies the EventBus's batch-forwarding logic to handle errors from Pipeline Input and apply retry logic only from the last error position in the batch of events.
Co-authored-by: Karol Bucek <kares@users.noreply.github.com>
Exposes the dead_letter_queue.storage_policy configuration setting to explicitly enable the drop_older behavior in DLQs.
Moving from a drop_newer to a drop_older behavior has an impact both on the writer side and on the reader side.
The implementation leverages the fact that a complete DLQ segment can be removed to free up space; on the writer side, when the dead_letter_queue.max_bytes limit is reached it has to remove old segments.
On the reader side, consumption has to be adapted to not expect a continuous flow of segments; it could face a hole due to the removal of tail segments.
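For example (illustrative), in `logstash.yml`:

~~~
dead_letter_queue.storage_policy: drop_older  # delete the oldest segments when
                                              # dead_letter_queue.max_bytes is reached
~~~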
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Prior to the change, pipeline `stop` and `delete` happened in two converge cycles, which
left a gap in which the stopped pipeline is compared with the same pipeline definition
in central pipeline management; hence Logstash sees the stopped pipeline as a graceful finish
and does not delete it from the registry.
This commit creates a StopAndDelete action to delete a running pipeline in one converge cycle.
Fixed: #14017
When logstash is run without automatic reloading, it is still possible to reload configurations
by using 'SIGHUP'. This functionality was broken in #12444, which split non-terminated pipelines
into "loading" and "running" states. The call `no_pipelines?` in agent#execute would no longer
find pipelines in a "loading" state, causing the loop to exit, and logstash to shutdown. This
commit tests for pipelines in a "loading" state to restore functionality.
Certain errors during pipeline converge will emit an IllegalStateException - for example attempting
to reference an environment variable that does not exist. Attempting to add a java exception to the
converge result would result in an uncaught exception in a pipeline thread leading to an unclean
shutdown.
Prior to this commit, this had the effect of Logstash behaviour varying depending on the class of
error that caused a pipeline not to start - an invalid pipeline configuration would still enable
Logstash to start other pipelines in a multiple-pipeline configuration, or automatic reloading
would enable changes to the configuration to allow the pipeline to start (in multi- and single-pipeline
configurations).
However, a missing environment variable would cause Logstash to crash hard no matter what.
This was discovered when updating to jruby-9.3.3.0, when new clean up code joining existing
threads to perform a graceful shutdown was stuck indefinitely, due to the webserver shutdown
code not being called, due to the unclean shutdown.
This PR substitutes `${VAR}` in Expression, except RegexValueExpression, with the value from the secret store or environment.
The substitution happens after syntax parsing and before graph execution.
Fixed: #5115
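For example (illustrative pipeline config), a conditional can now reference a secret-store or environment value:

~~~
filter {
  if [user] == "${ADMIN_USER}" {    # "${ADMIN_USER}" is substituted after syntax
    mutate { add_tag => ["admin"] } # parsing and before graph execution
  }
}
~~~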
This commit fixes 2 tests
- Set queue.drain to true in pipeline pq test
- Under certain conditions the pipeline_pq_file_spec test would fail as the pipeline would exit once the generator had generated all of its events, but before the events were processed, leading to the test hanging. This commit adds `queue.drain:true` to the settings to ensure that all of the events are processed before the pipeline is shut down
- Increase the flush delay in the dead letter queue testFlushAfterDelay test
- Under certain conditions, the flush delay of 1 second was insufficient, and invalidated a pre-condition assertion that no events had been flushed before the expiry of that delay.
* ecs: report pipeline's ECS-compatibility with INFO at startup
Because the pipeline-level setting `pipeline.ecs_compatibility` affects the
default behaviour of nearly every plugin in the pipeline, an INFO-level log
message will provide useful hints, especially to our users who upgrade to
Logstash 8 without first reading the breaking changes docs.
For example, when we have two pipelines `old` and `new` whose `pipeline.ecs_compatibility` is `disabled` and `v8` respectively, we would get the following log messages:
> ~~~
> [2021-11-04T18:43:21,810][INFO ][logstash.javapipeline ] Pipeline `old` is configured with `pipeline.ecs_compatibility: disabled` setting. All plugins in this pipeline will default to `ecs_compatibility => disabled` unless explicitly configured otherwise.
> [2021-11-04T18:43:21,817][INFO ][logstash.javapipeline ] Pipeline `new` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
> ~~~
* ecs: make v8 the default for 8.0
* ecs: `pipeline.ecs_compatibility` defaults to `v8`
Related: elastic/logstash#11623
* doc: temporarily remove deep link from breaking changes doc to fix build
* settings: add "deprecated alias" support
A deprecated alias provides a path for renaming a setting.
- When a deprecated alias is set on its own, a deprecation notice is emitted
but fetching the canonical setting value will reflect the value set with the
deprecated alias.
- When both the canonical setting (new name) and the deprecated alias (old
name) are specified, it is an error condition.
- When the value of the deprecated alias is queried, a warning is emitted to
the logger and only the value explicitly set to the deprecated alias is
returned.
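For example (hedged, using the `api.*` grouping introduced later in this changeset, where `http.port` gained the canonical name `api.http.port`):

~~~
# logstash.yml
http.port: 9601   # deprecated alias: still works and emits a deprecation notice;
                  # fetching the canonical `api.http.port` returns 9601
# setting both `http.port` and `api.http.port` is an error condition
~~~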
Additionally, some relevant cleanup is also included:
- Starting Logstash with invalid settings no longer results in the obtuse "An
unexpected error occurred" with backtrace and exception data obscuring the
issue. Instead, a simple message is emitted indicating that the settings are
invalid along with the originating exception's message.
- The various settings implementations share a common logger, instead of each
implementation class providing its own. This is aimed to reduce noise from
the logs and to ensure specs validating logging do not need to tie so
closely to implementation details.
* settings: add password-wrapped setting
* settings: make any setting type capable of being nullable
* settings: add `Settings#names` to power programmatic iteration
* cli: route CLI-flag deprecations in to deprecation logger
* settings: group API-related settings under `api.*`
retains deprecated aliases, and is fully backward-compatible.
* webserver: cleanup orphaned attr accessors for never-set ivars
* api: pull settings extraction down from agent
This net-no-change refactor introduces a new method `WebServer#from_settings`
that bridges the gap between Logstash settings and Puma-related options, so
that future additions to the API settings don't add complexity to the Agent.
It also has the benefit of initializing the API Rack App just ONCE, instead
of once per attempted HTTP port.
* api: add optional TLS/SSL
* docs: reference API security settings
* api: when configured securely, bind to all available interfaces by default
* cleanup: remove unused cert artifacts
* tests: generate fresh webserver certificates
* certs: actually add the binary keystores 🤦
* add nanoseconds support
Migrates internals of `org.logstash.Timestamp` from legacy `org.joda.time.*`
which is limited to millisecond-precision to modern `java.time.Instant`,
allowing us to retain nanosecond granularity of `@timestamp` values.
Timestamps that are generated by Logstash (such as when creating an event that
does _not_ have a `@timestamp` field) will be generated at the highest precision
available to the JVM and/or platform (in many cases, this is microseconds).
Timestamps that are _parsed_ from user input will capture the entire provided
precision, up to and including nanosecond granularity.
Timestamps retain all available precision throughout the flow in the pipeline,
including serialization to the PQ, DLQ, and JSON.
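Illustrative (hedged; assuming `Timestamp` coercion from an ISO8601 string):

~~~ruby
t = LogStash::Timestamp.coerce("2021-01-01T00:00:00.123456789Z")
t.to_s # => "2021-01-01T00:00:00.123456789Z" (nanosecond precision retained)
~~~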
BREAKING: This produces an effectively-breaking change to the serialization
format of both the persistent queue (PQ) and dead-letter queue (DLQ),
as the serialized format produced by this changeset contains a higher granularity
of timestamp than previous releases of Logstash were capable of
parsing without error.
As such, it _MUST NOT_ be back-ported to the 7.x series.