This commit fixes a NoMethodError raised when the node stats API received
an invalid API path, which triggered Puma to print the error to stderr.
Fix: #15639
(cherry picked from commit 05392ad16e)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Updates invocations of the i18n.t method that were missed in the original Ruby 3.1 update PR #14861.
Without this, some error-reporting logs are hidden by an argument-mismatch error raised while translating the error message.
(cherry picked from commit 90964fb559)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
* Pipeline to pipeline communication acked queue improvements.
* Handle InterruptedException in the input side, improve the warning message when in-flight events are partially sent, and make other minor improvements such as more descriptive logs.
* Apply suggestions from code review
Check if queue is open after thread acquires the lock.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Apply suggestions from code review
Unit test case improvement: use `assertThrows` when validating the exception.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Back off from introducing wrap-with operations.
* Introduce a functional interface to use broadly for catching the expected exception types.
* Address review comments: do not retry sending in-flight events in this case. We still throw if we get an error when opening the queue.
* Remove input retry logic that can never be reached.
* Move the queue-closed check to after the thread acquires the lock. Make the read-next-page interface private since it is intended for internal use.
* Apply suggestions from code review
Leave a comment for the write lock and remove unnecessary warning with `ensure_delivery=>false`
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Remove an unused method; check that the current thread has acquired the lock when accessing the next page.
* pq: getting possibly-shared access to next read page is illegal
The private `Queue#nextReadPage()` method requires that the caller has
exclusive ownership of the lock, and failing to have the lock is an
illegal state that cannot be recovered from; it would leak the
effectively-private `Page` to a caller that cannot reliably use it
without corrupting other callers.
Both callers of this private method already call it with exclusive
access, so this safeguard is merely to prevent future development from
breaking the expectation unknowingly.
As such, we throw an `IllegalStateException`.
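Conceptually, the guard resembles this Ruby sketch (hypothetical class and method names; the actual change is in the Java `Queue`):

```ruby
class PagedQueue
  def initialize
    @lock = Mutex.new
  end

  private

  # Callers must already hold the exclusive lock; proceeding without it is an
  # unrecoverable illegal state, so fail fast instead of leaking the page.
  def next_read_page
    raise "PagedQueue#next_read_page called without holding the lock" unless @lock.owned?
    # ... return the effectively-private page ...
  end
end
```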
* pq: use shared queue-closed check for block and non-block reads
By moving the closed-check from the blocking `Queue#readBatch` to the
shared private `Queue#nextReadPage`, we ensure that both blocking reads
by `Queue#readBatch` and non-blocking reads by `Queue#nonBlockReadBatch`
behave the same when the queue has been closed.
* pq: make exception message constants descriptive
* p2p: clarify comment about cumulating retry behaviour
---------
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
This commit updates the puma gem from version 5 to the latest version 6.3.
A few breaking changes were introduced in Puma 6.0.0, which required some refactoring on the Logstash side, especially to adapt it to the "Extracted LogWriter from Events #2798" changes.
Before this PR, all the logs generated by Puma were using the debug level, even the ones that were actually errors and needed attention/action from the users. This commit also changes the log levels as follows:
- `error(...)`: changed from debug to error
- `unknown_error(...)`: changed from debug to error
Set of changes to make Logstash compatible with JRuby 9.4.
Bundle JRuby 9.4.3.0
- Redefine the space token in the `LSCL` and `grammar` treetop grammars from `_`, which generated methods of the form `def _0` (deprecated since Ruby `2.7`), to `sc`.
- The `I18n.t` method no longer accepts a hash as its second positional argument.
- `URI.encode` has been replaced with the equivalent `URI::Parser.new.escape`.
- `YAML.load` needs an explicit `fallback: false` to return `false` when the YAML string is empty (or contains only comments). (A sketch of a few of these migrations follows this list.)
- JRuby's `JavaClass` has been removed; `java.lang.Class` can now be used directly.
- explicitly require gem `thwait` to satisfy `require "thwait"` (In `Gemfile.template` and `logstash-core/logstash-core.gemspec`)
- fix no-args `clone` to be `def clone(*args)`
- fix usages of `Enumerable#each_slice`, which since `Ruby 3.1` is [chainable](https://rubyreferences.github.io/rubychanges/3.1.html#enumerableeach_cons-and-each_slice-return-a-receiver) and no longer returns `nil`. JRuby fixed this in https://github.com/jruby/jruby/issues/7015
- Expanded `Down.download` arguments map ca16bbed3c302006967413eb9d3862f2da81f7ae
- Avoid passing `nil` in the list of pairs used in `Hash[ <list of pairs> ]`, which since Ruby `3.0` raises an `ArgumentError`
- Removed the space between method name and parentheses; `initialize (` is forbidden. 29b607dcdef98f81a73ad171639fd13aaa65e243
- Since [Ruby 2.7, `Kernel#open`](https://rubyreferences.github.io/rubychanges/2.7.html#network-and-web) no longer falls back to `URI#open`; fixed test code that relied on that to verify an open port. e5b70de54c5301f51a767da67294092af0cfafdc
- Avoid dropping the `rdoc/` folder from the vendored JRuby, else `bin/logstash -i irb` would crash, commit b71f73e9c6edb81a7b7ae1305047e506f61c6e8c
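A minimal, illustrative sketch of a few of these migrations (not the actual Logstash diff):

```ruby
require "uri"
require "yaml"

# URI.encode was removed; URI::Parser#escape provides the same functionality.
URI::Parser.new.escape("hello world")           # => "hello%20world"

# YAML.load of an empty (or comments-only) string returns nil by default;
# an explicit fallback restores the previous `false` return value.
YAML.load("# only comments\n", fallback: false) # => false

# Enumerable#each_slice with a block is chainable as of Ruby 3.1 (it returns
# the receiver rather than nil), so code relying on the nil return must change.
[1, 2, 3, 4].each_slice(2) { |pair| pair }      # => [1, 2, 3, 4] on 3.1+
```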
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
This commit changes java_pipeline.rb to include the pipeline/main thread in the stalling-threads info list, so that Logstash can provide users with more helpful information when the stalling thread is the pipeline/main one.
This commit adds missing Elasticsearch SSL settings and replaces deprecated options being used on `xpack.monitoring.*` and `xpack.management.*` settings:
Changes:
- Updated deprecated monitoring and management Elasticsearch's SSL settings so no warnings are logged.
- Added monitoring settings support for file-based certificates and for the cipher suites: `xpack.monitoring.elasticsearch.ssl.certificate`, `xpack.monitoring.elasticsearch.ssl.key`, and `xpack.monitoring.elasticsearch.ssl.cipher_suites`.
- Added management settings support for file-based certificates and for the cipher suites: `xpack.management.elasticsearch.ssl.certificate`, `xpack.management.elasticsearch.ssl.key`, and `xpack.management.elasticsearch.ssl.cipher_suites`.
Reject illegal values assigned to the `tags` field. Top-level `tags` should only accept a string or an array of strings.
When `tags` receives an illegal value at event creation, LogStash::Event renames the field to `_tags` and adds a `_tagsparsefailure` tag to `tags`.
When `tags` receives an illegal value in a `set` operation, LogStash::Event throws an exception.
Add a flag `--event_api.tags.illegal` to allow falling back to the old logic. There are two options (a sketch of the new behavior follows the options):
`warn` - the old flow that allows illegal value assignment to the tags field.
`rename` - the new flow. This is the default value in 8.7.
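A sketch of the default `rename` behavior as described above (values are illustrative):

```ruby
# Illegal value at event creation: moved aside and tagged.
event = LogStash::Event.new("tags" => {"not" => "a string"})
event.get("_tags") # => {"not" => "a string"}
event.get("tags")  # => ["_tagsparsefailure"]

# Illegal value in a `set` operation: raises instead.
event.set("tags", 42) # raises an exception
```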
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
The delayed implementation `AfterCompletionTimerMetric` of the `TimerMetric`
interface, introduced alongside that interface to replicate
the previous (undesired) behaviour, is superseded by an already-merged live-
tracking implementation that is effectively as performant when not under
concurrent contention and still reasonably performant when a single timer is
contended across multiple threads.
The `metric.timers` setting removed here has not been a part of any Logstash
release and can safely be removed without going through the normal deprecation
path; from the user's perspective this removal combined with the previously-
merged work is simply an improvement to the accuracy of the existing timer
metrics exposed via our API.
Logstash currently does not support multiple top-level `codec` declarations per plugin. This commit fixes pipeline compilation, ensuring that erroneously configured plugins fail to compile and result in a configuration error.
* live timers: introduce API boundary
Introduces an API boundary for timers as a first-class metric, as described
in elastic/logstash#14675, and migrates all known internal timers to use the
new API boundary for tracked execution.
Please refer to the specification for details on motivations.
This commit is a net-zero change to behaviour, and introduces a single new
undocumented setting `metric.timers` to `logstash.yml`, which presently only
takes its default value `delayed` to indicate that delayed committing of
execution time is acceptable.
It implements the new `TimerMetric` API in a way that is also net-zero-change.
Tracked executions are still performed by marking a start time, performing
the tracked execution, and incrementing an underlying long-type counter with
the number of elapsed milliseconds _after_ execution has completed. This means
that long-running execution is still missing from the metric until it has
completed.
The new Timer API is available to both the Ruby- and the Java-based plugin APIs.
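A minimal Ruby sketch of the delayed-commit behaviour described above (hypothetical class; the real implementation is the Java `TimerMetric`):

```ruby
require "concurrent"

class DelayedTimer
  def initialize
    @millis = Concurrent::AtomicFixnum.new(0)
  end

  # Commit elapsed time only after the tracked block returns: a long-running
  # execution contributes nothing to the metric until it has completed.
  def time
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
    yield
  ensure
    @millis.increment(Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond) - start)
  end

  def value
    @millis.value
  end
end
```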
* timer metrics: sub-package and add baseline tests
* WIP: move execution metric ownership out of queue
* noop: remove useless abstract method
Our `AbstractMetric` implements `Metric` and does not need to declare
an abstract override of `Metric#getType`. Doing so prevents interfaces
from providing a default override for all implementers.
* timer metric tests: extract util, refactor for reuse
* timers: accumulate milli-excess-nanos
* live timers: single-checkpoint implementation
* timer metric: use explicit type parameters to make intent clear
* remove unused imports
* use safe int conversion
* test fixup: use given name for tested metric
* test helper: TimerMetricFactory prefers nanotime supplier
* timers: flesh out test coverage, incl live-timers
* test: move validation of queue-read metrics to ObservedExecution
* flow: support non-moving denominator (±infinity)
* metrics: add metric config pass-through to env2yaml
* source/multilocal: fix detection of empty pipelines.yml
Fixes a regression introduced in elastic/logstash#13883 in which the presence
of an empty `pipelines.yml` file produces an error message indicating that
the file cannot be read.
When either `YAML::load` or `YAML::safe_load` encounter an effectively-empty
payload (such as one that is entirely comments), they use a `fallback` param
to determine what value to emit, with the former emitting `false` and the
latter emitting `nil`.
This is problematic because a _separate_ blind-`rescue nil` causes `nil` to
be bound to the MultiLocal's `@detected_marker`, and we assume that a `nil`
value in the marker means that there was an exception reading the file (such
as a permissions issue or parse failure).
By providing a `fallback: false` directive when parsing the contents, we
ensure that an empty file is reported as such.
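The fix in miniature: an explicit fallback makes an effectively-empty file report as `false` rather than `nil` (which is reserved for read/parse failures):

```ruby
require "yaml"

YAML.safe_load("# a pipelines.yml containing only comments\n")                  # => nil
YAML.safe_load("# a pipelines.yml containing only comments\n", fallback: false) # => false
```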
* source/multilocal: avoid `rescue nil` that loses helpful context
When the pipelines yaml cannot be read, or can be read but fails to parse,
the MultiLocal#read_pipelines_from_yaml emits a helpful exception including
specifics about why it failed to load or parse, but a blind `rescue nil`
here causes that helpful information to be lost.
When pipeline detection is exceptional, hold onto the helpful exception
so that it can be reported along with the config conflicts.
* source/multilocal: differentiate between reading and parsing failure
* source/multilocal: use translations for conflict messages
* source/multilocal: specs for error conditions
* specs: detangle out-of-band pipeline initialization
Our API tests were initializing their pipelines-to-test in an out-of-band
manner that prevented the agent from having complete knowledge of the
pipelines that were running. By providing a ConfigSource to our Agent's
SourceLoader, we can rely on the normal pipeline reload behaviour to ensure
that the agent fully-manages the pipelines in question.
* api: do not emit pipeline that is not fully-initialized
* flow metrics: extract to interface, sharable-comon base, and implementation
In preparation of landing an additional implementation of FlowMetric, we
shuffle the current parts net-unchanged to provide interfaces for `FlowMetric`
and `FlowCapture`, along with a sharable-common `BaseFlowMetric`, and move
our initial implementation to a new `SimpleFlowMetric`, accessible only
through a static factory method on our new `FlowMetric` interface.
* flow-rates: refactor LIFETIME up to sharable base
* util: add SetOnceReference
* flow metrics: tolerate unavailable captures
While the metrics we capture in the initial release of FlowMetrics
are all backed by `Metric<T extends Number>` whose values are non-null,
we will need to capture from nullable `Gauge<Number>` in order to
support persistent queue size and capacity metrics. This refactor uses
the newly-introduced `SetOnceReference` to defer our baseline lifetime
capture until one is available, and ensures `BaseFlowMetric#doCapture`
creates a capture if-and-only-if non-null values are available from
the provided metrics.
* flow rates: limit precision for readability
* flow metrics: introduce policy-driven extended windows implementation
The new ExtendedFlowMetric is an alternate implementation of the FlowMetric
introduced in Logstash 8.5.0 that is capable of producing windows for a set of
policies, which dictate the desired retention for the rate along with a
desired resolution.
- `current`: 10s retention, 1s resolution [*]
- `last_1_minute`: one minute retention, at 3s resolution [*]
- `last_5_minutes`: five minutes retention, at 15s resolution
- `last_15_minutes`: fifteen minutes retention, at 30s resolution
- `last_1_hour`: one hour retention, at 60s resolution
- `last_24_hours`: one day retention at 15 minute resolution
A given series may report a range for slightly longer than its configured
retention period, up to either the series' configured resolution or
our capture rate (currently ~5s), whichever is greater. This approach
allows us to retain sufficient data-points to present meaningful rolling
averages while ensuring that our memory footprint is bounded.
When recording these captures, we first stage the newest capture, and then
promote the previously-staged capture to the tail of a linked list IFF
the gap between our new capture and the newest promoted capture is larger
than our desired resolution.
When _reading_ these rates, we compact the head of that linked list forward
in time as far as possible without crossing the desired retention barrier,
at which point the head points to the youngest record that is old enough
to satisfy the period for the series.
We also occasionally compact the head during writes, but only if the head
is significantly out-of-date relative to the allowed retention.
As implemented here, these extended flow rates are on by default, but can be
disabled by setting the JVM system property `-Dlogstash.flowMetric=simple`
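A hypothetical Ruby sketch of the staging/promotion and read-time compaction described above (the real implementation is the Java `ExtendedFlowMetric`):

```ruby
Capture = Struct.new(:timestamp, :value)

class Series
  def initialize(resolution, retention)
    @resolution = resolution # minimum seconds between promoted captures
    @retention  = retention  # seconds of history the series must cover
    @promoted   = []         # ordered oldest -> newest (stands in for the linked list)
    @staged     = nil
  end

  # Stage the newest capture; promote the previously-staged one to the tail
  # IFF the gap between the new capture and the newest promoted capture is
  # larger than the desired resolution.
  def append(capture)
    if @staged && (@promoted.empty? || capture.timestamp - @promoted.last.timestamp > @resolution)
      @promoted << @staged
    end
    @staged = capture
  end

  # On read, compact the head forward in time as far as possible without
  # crossing the retention barrier: the head ends up at the youngest capture
  # that is still old enough to satisfy the series' period.
  def baseline(now)
    @promoted.shift while @promoted.size > 1 && now - @promoted[1].timestamp >= @retention
    @promoted.first
  end
end
```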
* flow metrics: provide lazy-initialized implementation
* flow metrics: append lifetime baseline if available during init
* flow metric tests: continuously monitor combined capture count
* collection of unrelated minor code-review fixes
* collection of even more unrelated minor code-review fixes
* refactor: pull members up from JavaBasePipelineExt to AbstractPipelineExt
* refactor: make `LogStash::JavaPipeline` inherit directly from `AbstractPipeline`
* Flow metrics: initial implementation (#14509)
* metrics: eliminate race condition when registering metrics
Ensure our fast-lookup and store tables cannot diverge in a race condition
by wrapping mutation of both in a single mutex and appropriately handle
another thread winning the race to the lock by using the value that it
persisted instead of writing our own.
* metrics: guard against intermediate namespace conflicts
- ensures our safeguard that prevents using an existing metric as a namespace
is applied to _intermediate_ nodes, not just the tail-node, eliminating a
potential crash when sending `fetch_or_store` to a metric object that is not
expected to respond to `fetch_or_store`.
- uses the atomic `Concurrent::Map#compute_if_absent` instead of the
non-atomic `Concurrent::Map#fetch_or_store`, which is prone to
last-write-wins during contention (as-written, this method is only
executed under lock and not subject to contention); see the sketch below
- uses `Enumerable#reduce` to eliminate the need for recursion
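A small illustration of the atomicity difference (assumes the `concurrent-ruby` gem):

```ruby
require "concurrent"

map   = Concurrent::Map.new
calls = Concurrent::AtomicFixnum.new(0)

# Concurrent::Map#fetch_or_store may run its block in several racing threads,
# with the last write winning; #compute_if_absent runs it at most once per key.
threads = 8.times.map do
  Thread.new { map.compute_if_absent(:metric) { calls.increment; Object.new } }
end
values = threads.map(&:value)

calls.value      # => 1 (the block ran exactly once)
values.uniq.size # => 1 (every thread observed the same stored instance)
```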
* flow: introduce auto-advancing UptimeMetric
* flow: introduce FlowMetric with minimal current/lifetime rates
* flow: initialize pipeline metrics at pipeline start
* Controller and service layer implementation for flow metrics. (#14514)
* Controller and service layer implementation for flow metrics.
* Add flow metrics to unit test and benchmark cli definitions.
* flow: fix tests for metric types to accommodate new one
* Renaming concurrency and backpressure metrics.
Rename `concurrency` to `worker_concurrency` and `backpressure` to `queue_backpressure` to provide proper scope naming.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* metric: register flow metrics only when we have a collector (#14529)
the collector is absent when the pipeline is run in test with a
NullMetricExt, or when the pipeline is explicitly configured to
not collect metrics using `metric.collect: false`.
* Unit tests and integration tests added for flow metrics. (#14527)
* Unit tests and integration tests added for flow metrics.
* Node stat spec and pipeline spec metric updates.
* Metric keys statically imported, implicit error expectation added in metric spec.
* Fix node status API spec after renaming flow metrics.
* Removing flow metric from PipelinesInfo DS (used in periodic metric snapshot), integration QA updates.
* Rebasing with feature branch.
* Apply suggestions from code review
Integration tests updated to test capturing the flow metrics.
* Flow metrics expectation updated in integration tests.
* flow: refine integration expectations for reloads/monitoring
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
Co-authored-by: Mashhur <mashhur.sattorov@gmail.com>
* metric: add ScaledView with sub-unit precision to UptimeMetric (#14525)
* metric: add ScaledView with sub-unit precision to UptimeMetric
By presenting a _view_ of our metric that maintains sub-unit precision,
we prevent jitter that can be caused by our periodic poller not running at
exactly our configured cadence.
This is especially important as the UptimeMetric is used as the _denominator_ of
several flow metrics, and a capture at 4.999s that truncates to 4s causes the
rate to be over-reported by ~25%.
The `UptimeMetric.ScaledView` implements `Metric<Number>`, so its full
lossless `BigDecimal` value is accessible to our `FlowMetric` at query time.
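The jitter in numbers (a worked example, not code from the change):

```ruby
events = 5.0
events / 4.999        # => ~1.0002 events/sec (precise denominator)
events / 4.999.floor  # => 1.25 events/sec (truncated denominator)
# 1.25 / 1.0002 is roughly 1.25, i.e. the rate is over-reported by ~25%.
```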
* metrics: reduce window for too-frequent-captures bug and document it
* fixup: provide mocked clock to flow metric
* Flow metrics cleanup (#14535)
* flow metrics: code-style and readability pass
* remove unused imports
* cleanup: simplify usage of internal helpers
* flow: migrate internals to use OptionalDouble
* Flow metrics global (#14539)
* flow: add global top-level flows
* docs: add flow metrics
* Top level flow metrics unit tests added. (#14540)
* Top level flow metrics unit tests added.
* Add unit tests when config reloads, make sure top-level flow metrics didn't get reset.
* Apply suggestions from code review
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Validating against Hash test cases updated.
* For the safety check against exact type in unit tests.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* docs: section links and clarity in node stats API flow metrics
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
Co-authored-by: Mashhur <mashhur.sattorov@gmail.com>
Remove support for DES-CBC3-SHA.
Also removes unnecessary exclusions for EDH-DSS-DES-CBC3-SHA, EDH-RSA-DES-CBC3-SHA and KRB5-DES-CBC3-SHA since there's already a "!DES" rule.
This test has been broken on Windows since the jruby 9.3 update, and is
similar to an issue we saw when referencing other classes in the
org.logstash.util namespace
The module is broken with the current version. The type needs to be changed from `syslog` to `_doc` to fix the issue.
* remove dangling setting and add arcsight index suffixes
* add tests for new suffix in arcsight module
Co-authored-by: Tobias Schröer <tobias@schroeer.ch>
Updates the DLQ writer's writeEvent method to clean tail segments older than the retention period. This happens only if the setting dead_letter_queue.retain.age is configured.
To read the age of a segment, it extracts the timestamp of the last (youngest) message in the segment.
The age is defined as a number followed by one of `d`, `h`, `m`, or `s`, standing for days, hours, minutes, and seconds; with no suffix, seconds are assumed as the default unit. A parsing sketch follows.
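A hypothetical Ruby sketch of that age format (the real implementation lives in the Java DLQ code):

```ruby
UNIT_SECONDS = { "d" => 86_400, "h" => 3_600, "m" => 60, "s" => 1 }

def parse_retain_age(spec)
  match = spec.match(/\A(\d+)([dhms]?)\z/) or raise ArgumentError, "invalid age: #{spec}"
  count, unit = match[1].to_i, match[2]
  count * UNIT_SECONDS.fetch(unit, 1) # bare numbers default to seconds
end

parse_retain_age("2d") # => 172800
parse_retain_age("90") # => 90
```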
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
* Logstash checks the correct filesystem when a pipeline's queue path is a symlink to another filesystem.
* Apply suggestions from code review
* Handle the FileAlreadyExistsException case when the queue path is symlinked.
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
This commit replaces the use of a block with a lambda as an argument for Stream.forEach.
This is to work around the jruby issue identified in https://github.com/jruby/jruby/issues/7246.
This commit also updates the multiple_pipeline_spec to update the test case for pipeline->pipeline
communication to trigger the issue - it only occurs with Streams containing more than one event.
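A JRuby sketch of the workaround (illustrative names):

```ruby
require "java"

stream = java.util.stream.Stream.of("event-1", "event-2")

# Block form - the pattern that tripped jruby/jruby#7246 on multi-event streams:
#   stream.forEach { |event| process(event) }

# Lambda form used as the workaround:
stream.forEach(->(event) { puts event })
```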
This commit changes the behavior of PQ size checking.
When it checks the size usage, instead of throwing an exception that stops the pipeline,
it logs a warning message on every converge state if the check fails.
Fixed: #14257
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
* Refactor: require treetop/runtime - avoids loading polyglot
* Build: instruct Bundler not to auto-load polyglot/treetop
+ Build: these deps are properly required as needed;
all of them are only used in one place (outside of normal bootstrap)
* Fix deprecation logging of password policy.
Give users guidance not only on upgrading, but also on keeping the current behavior if they really want it.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
* add failing tests for Event.new with fields that look like field references
* fix: correctly handle FieldReference-special characters in field names.
Keys passed to most methods of `ConvertedMap`, which is based on `IdentityHashMap`,
depend on identity and not equivalence, and therefore rely on the keys being
_interned_ strings. In order to avoid hitting the JVM's global String intern
pool (which can have performance problems), operations to normalize a string
to its interned counterpart have traditionally relied on the behaviour of
`FieldReference#from` returning a likely-cached `FieldReference`, that had
an interned `key` and an empty `path`.
This is problematic on two points.
First, when `ConvertedMap` was given data with keys that _were_ valid string
field references representing a nested field (such as `[host][geo][location]`),
the implementation of `ConvertedMap#put` effectively silently discarded the
path components because it assumed them to be empty, and only the key was
kept (`location`).
Second, when `ConvertedMap` was given a map whose keys contained what the
field reference parser considered special characters but _were NOT_
valid field references, the resulting `FieldReference.IllegalSyntaxException`
caused the operation to abort.
Instead of using the `FieldReference` cache, which sits on top of objects whose
`key` and `path`-components are known to have been interned, we introduce an
internment helper on our `ConvertedMap` that is also backed by the global string
intern pool, and ensure that our field references are primed through this pool.
In addition to fixing the `ConvertedMap#newFromMap` functionality, this has
three net effects:
- Our ConvertedMap operations still use strings from the global intern pool
- We have a new, smaller cache of individual field names, improving lookup performance
- Our FieldReference cache no longer is flooded with fragments and therefore is more likely to remain performant
NOTE: this does NOT create isolated intern pools, as doing so would require
a careful audit of the possible code-paths to `ConvertedMap#putInterned`.
The new cache is limited to 10k strings, and when more are used only
the FIRST 10k strings will be primed into the cache, leaving the
remainder to always hit the global String intern pool.
NOTE: by fixing this bug, we allow events to be created whose fields _CANNOT_
be referenced with the existing FieldReference implementation.
Resolves: https://github.com/elastic/logstash/issues/13606
Resolves: https://github.com/elastic/logstash/issues/11608
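A sketch of the two failure modes described above (values illustrative):

```ruby
# 1. A key that *is* a valid nested field reference used to have its path
#    components silently discarded (only `location` survived):
LogStash::Event.new("[host][geo][location]" => "0,0")

# 2. A key containing reference-special characters that is *not* a valid
#    field reference used to abort with FieldReference.IllegalSyntaxException:
LogStash::Event.new("foo[bar" => 1)

# After the fix, both events are created with their literal field names,
# though such fields cannot be referenced with the existing FieldReference
# syntax (see the escape-style support below).
```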
* field_reference: support escape sequences
Adds a `config.field_reference.escape_style` option and a companion
command-line flag `--field-reference-escape-style` allowing a user
to opt into one of two proposed escape-sequence implementations for field
reference parsing:
- `PERCENT`: URI-style `%`+`HH` hexadecimal encoding of UTF-8 bytes
- `AMPERSAND`: HTML-style `&#`+`DD`+`;` encoding of decimal Unicode code-points
The default is `NONE`, which does _not_ process escape sequences.
With this setting a user effectively cannot reference a field whose name
contains FieldReference-reserved characters.
| ESCAPE STYLE | `[`     | `]`     |
| ------------ | ------- | ------- |
| `NONE`       | _N/A_   | _N/A_   |
| `PERCENT`    | `%5B`   | `%5D`   |
| `AMPERSAND`  | `&#91;` | `&#93;` |
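For example, with the `PERCENT` style enabled (a sketch of the behavior described above):

```ruby
# A field whose *name* contains reserved characters becomes referenceable
# by escaping those characters in the reference:
event = LogStash::Event.new("foo[bar]" => "baz")
event.get("[foo%5Bbar%5D]") # => "baz" when escape_style is percent
```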
* fixup: no need to double-escape HTML-ish escape sequences in docs
* Apply suggestions from code review
Co-authored-by: Karol Bucek <kares@users.noreply.github.com>
* field-reference: load escape style in runner
* docs: sentences over semicolons
* field-reference: faster shortcut for PERCENT escape mode
* field-reference: escape mode control downcase
* field_reference: more s/experimental/technical preview/
* field_reference: still more s/experimental/technical preview/
Co-authored-by: Karol Bucek <kares@users.noreply.github.com>
* Add support for ca_trusted_fingerprint in Apache HTTP and Manticore
Adds a module `LogStash::Plugins::CATrustedFingerprintSupport`, which can be
included in a plugin class to add a `ca_trusted_fingerprint` option to create
an Apache SSL TrustStrategy that can be used to bypass the TrustManager when
a matching certificate is found on the chain.
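A sketch of how a plugin might opt in (hypothetical plugin class; the module name is from the description above):

```ruby
class LogStash::Inputs::Example < LogStash::Inputs::Base
  include LogStash::Plugins::CATrustedFingerprintSupport
  # the include adds a `ca_trusted_fingerprint` config option; when set, a
  # trust strategy accepts a chain if any certificate on it matches the
  # given fingerprint, bypassing the default TrustManager.
end
```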
This commit updates the version of jruby used in Logstash to `9.3.4.0`.
* Updates the references of `jruby` from `9.2.20.1` to `9.3.4.0`
* Updates references/locations of ruby from `2.5.0` to `2.6.0`
* Updates java imports including `org.logstash.util` to be quoted
* Without quoting the name of the import, the following error is observed in tests:
* `java.lang.NoClassDefFoundError: org/logstash/Util (wrong name: org/logstash/util)`
* Maybe an instance of https://github.com/jruby/jruby/issues/4861
* Adds a monkey patch to `require` to resolve a compatibility issue between the latest `jruby` and the `polyglot` gem
* The addition of https://github.com/jruby/jruby/pull/7145 to disallow circular
causes will throw when `polyglot` is thrown into the mix, stopping logstash from
starting and building - any gem that uses an exception to determine whether or not
to load the native gem will trigger the code added in that commit.
* This commit adds a monkey patch of `require` to roll the circular-cause exception
back to the original cause.
* Removes the use of the deprecated `JavaClass`
* Adds additional `require time` in `generate_build_metadata`
* Rewrites a test helper to avoid potentially calling `~>` on `FalseClass`
Co-authored-by: Joao Duarte <jsvduarte@gmail.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
If #stop is called on Puma before #run has finished - for example, when an
incorrect configuration is specified - Puma can be left with threads hanging
around.
This is due to a check/notify pipe, used to signal state changes, not being created
until halfway through the #run method, leaving a window where #stop can be called
before the pipe has been created and the run thread therefore never exits. Prior to
jruby-9.3.x this tended to be benign and Logstash would exit normally. However,
jruby-9.3.x introduced a tear-down as part of the shutdown mechanism, which joins
active threads on shutdown. This Puma issue would cause an indefinite hang on exit
when this condition is triggered.
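A generic Ruby sketch of the race (not Puma's actual code):

```ruby
class Runner
  def run
    sleep 0.1                  # setup work runs first...
    @check, @notify = IO.pipe  # ...so the pipe only exists "halfway through" #run
    IO.select([@check])        # block until a stop is signalled
  end

  # If #stop fires before #run has created the pipe, the signal is silently
  # lost and the run thread blocks in IO.select forever.
  def stop
    @notify&.write("!")
  end
end

runner = Runner.new
thread = Thread.new { runner.run }
runner.stop  # races #run; may happen before the pipe exists
thread.join  # hangs when the stop signal was lost
```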