The old version (1.0.0) had a broken version of Concurrent::Timer that did not work on all machines, for reasons that are still unclear but may be related to the Java version.
Fixes #7368
Fixes #7373
Instead of depending on the now-deprecated multiline filter, we use a dummy filter that just emits events. This simplifies the test and dramatically reduces timing issues.
I also increased the max-wait for the timer, just in case.
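A minimal sketch of what such a dummy filter can look like, assuming the standard `LogStash::Filters::Base` plugin API (the class name, config name, and flushed message are illustrative):
```ruby
# A dummy filter for the spec: passes events through and emits a
# synthetic event on each periodic flush, with none of the multiline
# filter's timing sensitivity. Names are illustrative.
class LogStash::Filters::DummyFlushingFilter < LogStash::Filters::Base
  config_name "dummy_flushing_filter"

  def register; end

  # Pass events through untouched.
  def filter(event)
    filter_matched(event)
  end

  # Ask the pipeline to call flush on its periodic timer.
  def periodic_flush
    true
  end

  # Emit a synthetic event on every flush.
  def flush(options = {})
    [LogStash::Event.new("message" => "flush")]
  end
end
```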
Fixes #7024
Fixes #7131
The failure:
```
Failures:

  1) LogStash::Pipeline defaulting the pipeline workers based on thread safety when there are threadsafe filters only starts multiple filter threads
     Failure/Error: expect(pipeline.worker_threads.size).to eq(worker_thread_count)

       expected: 5
            got: 8
```
Related issues: #6855, #6245, #6355
Fixes #7071
Fixes issue #6352.
On Windows, Logstash can't find the log4j2.properties file and fails at startup with the message below.
```
Could not find log4j2 configuration at path /LS_HOME/config/log4j2.properties.
```
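One plausible way such a failure arises on Windows, shown here as an illustration and not necessarily the exact code path: if an absolute Windows path is parsed as a URI, the drive letter is taken as the URI scheme and dropped from the path, which matches the drive-less path in the error message.
```ruby
require "uri"

# "C:" parses as a URI scheme, so the drive letter is silently lost.
uri = URI.parse("C:/LS_HOME/config/log4j2.properties")
uri.scheme # => "C"
uri.path   # => "/LS_HOME/config/log4j2.properties" (no drive letter)
```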
Fixes #6903
With the creation of x-pack we added our first internal pipeline, but if you ran the monitoring pipeline alongside a finite pipeline (like `generator { count => X }`), Logstash would refuse to stop once the finite pipeline had finished processing all of its events.
This PR fixes the problem by adding a new pipeline setting called `system`: in the shutdown loop we check whether all the user-defined pipelines have completed, and if that is the case we shut down any internal pipelines so Logstash stops gracefully.
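A minimal sketch of the shutdown check described above; the method names and the exact settings key are assumptions, not the actual agent internals:
```ruby
# Once every user-defined pipeline is done, shut down the internal
# (system) pipelines so Logstash can exit. Names are illustrative.
def shutdown_internal_pipelines_if_done(pipelines)
  user_pipelines = pipelines.reject { |p| p.settings.get("pipeline.system") }
  if user_pipelines.all?(&:finished_execution?)
    (pipelines - user_pipelines).each(&:shutdown)
  end
end
```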
Fixes #6943
In the o.l.ackedqueue.Queue.close() method the lock.unlock() call might not execute, leaving a mutex locked.
This change ensures that lock.unlock() is always executed by moving its call to the sibling try/catch's finally block.
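A Ruby sketch of the pattern (the real Queue is Java; `ensure` plays the role of Java's `finally`):
```ruby
# Release the lock in an ensure block so the mutex is freed even when
# closing raises. The body comment stands in for the real close logic.
def close
  @lock.lock
  begin
    # ... flush pages, write checkpoints, close resources ...
  ensure
    @lock.unlock # always executed, even on exception
  end
end
```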
move nextSeqNum call under the mutex lock to prevent a seqNum race condition
fully acked head page beheading should not create a new tail page
explicit purge required to clean physical files
correctly remove preserved checkpoints
small review changes
Because we sync listeners with emitters when adding or creating hooks, this could lead to duplicate listeners. This PR fixes the problem by using a set instead of a list, ensuring we can only have one instance of a specific listener at any time.
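A minimal sketch of the change; the class and method names are illustrative, not the actual hooks API:
```ruby
require "set"

# A Set makes listener registration idempotent, so re-syncing listeners
# with emitters cannot create duplicates.
class HooksRegistry
  def initialize
    @listeners = Set.new # previously a list, which allowed duplicates
  end

  def add_listener(listener)
    @listeners << listener # no-op if the listener is already present
  end
end
```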
Fixes #6916
In org.logstash.Accessors a reference id is converted from String to integer, but the conversion is not checked for NumberFormatException; as a consequence Logstash throws the raw exception.
This changes Accessors' methods to handle NumberFormatException by returning null or false accordingly.
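A Ruby sketch of the handling added on the Java side; the helper name is illustrative:
```ruby
# A non-numeric reference yields nil instead of an escaping exception,
# so callers can return nil/false accordingly.
def try_parse_index(reference)
  Integer(reference, 10)
rescue ArgumentError # Ruby's analogue of Java's NumberFormatException
  nil
end
```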
Fixes #6522
Fixes #6883
Before, the DeadLetterQueueReadManager would throw an exception when it attempted to choose a segment to read from its segments list and that list was empty. This fixes that.
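An illustrative guard mirroring the fix, sketched in Ruby (the real DeadLetterQueueReadManager is Java):
```ruby
# With no segments available yet, report "nothing to read" instead of
# raising; the caller can wait or poll again.
def next_segment
  return nil if @segments.empty?
  @segments.first
end
```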
Fixes #6880
This PR adds the initial building block to pass some `ExecutionContext`
from the pipeline to the plugin; currently we only pass the `pipeline_id`.
We use the accessor `execution_context=` to set the context; in a future
refactor we will pass the object to the constructor.
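A minimal sketch of the wiring, assuming an `ExecutionContext` value object that only carries the `pipeline_id` for now (the constructor shape is an assumption):
```ruby
# Build the context in the pipeline and hand it to the plugin via the
# accessor; a later refactor moves this into the constructor.
context = LogStash::ExecutionContext.new("main")
plugin.execution_context = context
plugin.execution_context.pipeline_id # => "main"
```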
Fixes #6890
Some codecs are context-specific and not threadsafe. If, for instance,
you want to use `generator { threads => 3 }`, you will run into buggy
behavior with the line and multiline codecs, which are not threadsafe.
This patch is a quick workaround for this behavior. It does not fix the
issue for inputs that do their own multithreading; those inputs should
handle codec cloning / lifecycle internally according to their specific
requirements.
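A minimal sketch of the workaround; `base_codec` and `run_input` are illustrative stand-ins, not the actual pipeline code:
```ruby
# Clone the codec per thread so each one gets its own per-stream state;
# line/multiline codecs keep internal buffers and must not be shared.
threads = Array.new(3) do
  codec = base_codec.clone
  Thread.new { run_input(codec) }
end
threads.each(&:join)
```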
Fixes #6865
This PR fixes an issue where the max heap size was reported as double the actual value, because the values of usage.max and peak.max were merged into a single value.
Fixes #6608
Fixes #6827
Add a new method that uses the `fast_lookup` hash to check whether a specific metric exists, instead of relying on exceptions.
Usage:
```ruby
metric_store.has_metric?(:node, :sashimi, :pipelines, :pipeline01, :plugins, :"logstash-output-elasticsearch", :event_in) # true
metric_store.has_metric?(:node, :sashimi, :pipelines, :pipeline01, :plugins, :"logstash-output-elasticsearch", :do_not_exist) # false
```
Fixes #6533
Fixes #6759
This PR changes where `events.in` is calculated. Previously the value was
calculated in the `ReadClient`, which was fine before the addition of the
PQ, but it made the stats inaccurate when the PQ was enabled and the
producers were much faster than the consumers.
These commits move the collection of the metric inside an instrumented
`WriteClient`, so both queue client implementations use the same code
(sketched after the example below).
This also makes it possible to record `events.out` for every input and
the time spent waiting to push to the queue.
The API now exposes these values for each plugin, at the events level and
at the pipeline level.
Using a pipeline with a sleep filter and the PQ, we will see this kind of
response from the API:
```json
{
  "duration_in_millis": 438624,
  "in": 3011436,
  "filtered": 2189,
  "out": 2189,
  "queue_push_duration_in_millis": 49845
}
```
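A minimal sketch of an instrumented write client as described above; the class, method, and metric names are illustrative, not the actual Logstash internals:
```ruby
# Counting on the write side means the in-memory and persisted queues
# share the same metric code, and the push wait time can be recorded
# per input.
class InstrumentedWriteClient
  def initialize(write_client, events_counter, push_duration)
    @write_client = write_client
    @events_counter = events_counter # backs `in` / input `events.out`
    @push_duration = push_duration   # backs `queue_push_duration_in_millis`
  end

  def push_batch(batch)
    start = Time.now
    @write_client.push_batch(batch)
    @events_counter.increment(batch.size)
    @push_duration.increment(((Time.now - start) * 1000).to_i)
  end
end
```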
Fixes #6512
Fixes #6532