Instead of depending on the now-deprecated multiline filter, we use a
dummy filter that just emits events. This simplifies the test and
dramatically reduces timing issues.
I also increased the max wait for the timer, just in case.
Fixes #7024
Fixes #7131
The failure:
```
Failures:

  1) LogStash::Pipeline defaulting the pipeline workers based on thread safety when there are threadsafe filters only starts multiple filter threads
     Failure/Error: expect(pipeline.worker_threads.size).to eq(worker_thread_count)

       expected: 5
            got: 8
```
Related issues: #6855, #6245, #6355
Fixes #7071
With the creation of the x-pack we have added our first internal
pipeline, but if you were running the monitoring pipeline alongside a
finite pipeline (like `generator { count => X }`), logstash would refuse
to stop once the finite pipeline had finished processing all of its events.
This PR fixes the problem by adding a new pipeline setting called
`system`: in the shutdown loop we check whether all the user-defined
pipelines have completed, and if that is the case we shut down any internal
pipeline so logstash stops gracefully.
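A minimal sketch of that shutdown check, assuming illustrative `system?` and `finished_execution?` predicates rather than the actual agent internals:
```ruby
# Sketch only: once every user-defined pipeline is done, also stop the
# internal (system) pipelines so the process can exit.
def shutdown_when_user_pipelines_done(pipelines)
  user_pipelines, system_pipelines = pipelines.partition { |pipeline| !pipeline.system? }

  if user_pipelines.all?(&:finished_execution?)
    system_pipelines.each(&:shutdown)
  end
end
```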
Fixes#6943
Because we sync listeners with emitters when adding or creating a hook,
this could lead to duplicate listeners. This PR fixes the problem by using a set
instead of a list, making sure we can only have one instance of a specific
listener at any time.
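A minimal sketch of the idea using Ruby's `Set`; the `Emitter` class and its methods below are illustrative, not the actual hooks API:
```ruby
require "set"

# Sketch only: a Set silently ignores duplicates, so registering the same
# listener object twice leaves a single entry.
class Emitter
  def initialize
    @listeners = Set.new
  end

  def add_listener(listener)
    @listeners << listener
  end

  def emit(name, payload)
    @listeners.each do |listener|
      listener.public_send(name, payload) if listener.respond_to?(name)
    end
  end
end
```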
Fixes#6916
This PR adds the initial building block to pass some `ExecutionContext`
from the pipeline to the plugin; currently we only pass the `pipeline_id`.
We use the accessor `execution_context=` to set the context; in a future
refactor we will pass the object to the constructor.
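A minimal sketch of the shape of that context, assuming a `plugin` object exposing the `execution_context=` accessor:
```ruby
# Sketch only: a small value object carrying the pipeline_id, assigned to the
# plugin through an accessor rather than its constructor for now.
class ExecutionContext
  attr_reader :pipeline_id

  def initialize(pipeline_id)
    @pipeline_id = pipeline_id
  end
end

plugin.execution_context = ExecutionContext.new(pipeline_id)
```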
Fixes#6890
Some codecs are context-specific and not threadsafe. If, for instance,
you want to use `generator { threads => 3 }` you will run into buggy
behavior with the line and multiline codecs, which are not threadsafe.
This patch is a quick workaround for this behavior. This does not fix
this issue for inputs that do their own multithreading. Those inputs
should handle codec cloning / lifecycle internally according to their
specific requirements.
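A minimal sketch of the workaround, using a hypothetical stateful codec rather than the real line/multiline implementations:
```ruby
# Sketch only: a stand-in for a non-threadsafe codec that keeps per-instance
# state while decoding.
class StatefulCodec
  def initialize
    @decoded = 0
  end

  def decode(data)
    data.each_line do |line|
      @decoded += 1
      yield line.chomp
    end
  end
end

base_codec = StatefulCodec.new
queue = Queue.new

# Each input thread works with its own clone of the codec instead of sharing
# the single base instance across threads.
workers = 3.times.map do
  Thread.new(base_codec.clone) do |codec|
    codec.decode("hello\nworld\n") { |event| queue << event }
  end
end
workers.each(&:join)
```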
Fixes#6865
Add a new method that uses the `fast_lookup` hash to find whether a specific
metric exists, instead of relying on exceptions.
Usage:
```ruby
metric_store.has_metric?(:node, :sashimi, :pipelines, :pipeline01, :plugins, :"logstash-output-elasticsearch", :event_in) # true
metric_store.has_metric?(:node, :sashimi, :pipelines, :pipeline01, :plugins, :"logstash-output-elasticsearch", :do_not_exist) # false
```
Fixes: #6533
Fixes #6759
This PR changes where the `events.in` are calculated. Previously the
values were calculated in the `ReadClient`, which was fine before the
addition of the PQ, but this made the stats inaccurate when the PQ was
enabled and the producers were a lot faster than the consumers.
These commits move the collection of the metric into an
instrumented `WriteClient`, so both implementations of the client queues use
the same code.
This also makes it possible to record `events.out` for every input and the
time spent waiting to push to the queue.
The API now exposes these values for each plugin, at the events level
and at the pipeline level.
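A minimal sketch of what an instrumented write client could look like; the class and metric-call names are assumptions, not the actual implementation:
```ruby
# Sketch only: wrap the queue's write client so every push is counted and the
# time spent waiting on the queue is recorded, whichever queue is in use.
class InstrumentedWriteClient
  def initialize(write_client, metric)
    @write_client = write_client
    @metric = metric
  end

  def push(event)
    started_at = Time.now
    @write_client.push(event)
    @metric.increment(:in)
    @metric.increment(:queue_push_duration_in_millis, ((Time.now - started_at) * 1000).to_i)
  end
end
```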
Using a pipeline with a sleep filter and the PQ enabled, we will see this
kind of response from the API:
```json
{
"duration_in_millis": 438624,
"in": 3011436,
"filtered": 2189,
"out": 2189,
"queue_push_duration_in_millis": 49845
}
```
Fixes: #6512
Fixes #6532
This change was harder than it first appeared! Due to the complicated
interactions between our Setting class and our monkey-patched Clamp
classes this required adding some new hooks into various places to
properly intercept the settings at the right point and set this
dynamically.
Crucially, this only changes path.queue when the user has *not*
overridden it explicitly in the settings.yml file.
Fixes #6378 and #6387
Fixes #6731
fix agent and pipeline and specs for queue exclusive access
added comments and swapped all sleep 0.01 to 0.1
revert explicit pipeline close in specs using sample helper
fix multiple pipelines specs
use BasePipeline for config validation which does not instantiate a new queue
review modifications
improve queue exception message
the pipeline class has two state predicates: ready? and running?
ready? becomes true after `start_workers` terminates (successfully or not)
running? becomes true before calling `start_flusher`, which means that
`start_workers` is guaranteed to have terminated successfully
Whenever possible, we should use `running?` instead of `ready?` in the
spec setup blocks. The only place where this may be bad is when the
pipeline execution is short lived (e.g. generator w/ a small count) and the
spec may never observe pipeline.running? == true
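A minimal sketch of the preferred spec setup, assuming `config_string` and `pipeline_settings` are defined elsewhere in the spec:
```ruby
# Sketch only: block the spec setup until the pipeline reports running?, so
# start_workers is known to have terminated successfully before assertions run.
pipeline = LogStash::Pipeline.new(config_string, pipeline_settings)
pipeline_thread = Thread.new { pipeline.run }
sleep 0.1 until pipeline.running?
# ... assertions against the running pipeline ...
```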
Fixes#6574
Record the wall clock time for each output: a new `duration_in_millis`
key will now be available for each output in the api located at http://localhost:9600/_node/stats
This commit also changes some expectations in the output_delegator_spec
that were not working as intended with the `have_received` matcher.
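A minimal sketch of the measurement; `output` and `output_metric` here are placeholders, not the actual OutputDelegator code:
```ruby
# Sketch only: time the call into the output and accumulate the elapsed wall
# clock time under duration_in_millis for that output.
started_at = Time.now
output.multi_receive(events)
elapsed_millis = ((Time.now - started_at) * 1000).to_i
output_metric.increment(:duration_in_millis, elapsed_millis)
```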
Fixes#6458
When we were initializing the `duration_in_millis` in the batch we
were using a `Gauge` instead of a `Counter`; since all the objects have the
same signature, when we were actually recording the time the value was
replaced instead of incremented.
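A minimal sketch of the difference, with generic `gauge` and `counter` objects standing in for the real metric classes:
```ruby
# Sketch only: a gauge overwrites its value on every write, while a counter
# accumulates it; a per-batch duration needs the counter.
gauge.set(:duration_in_millis, elapsed_millis)          # replaces the previous value
counter.increment(:duration_in_millis, elapsed_millis)  # adds to the running total
```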
Fixes#6465
When a plugin is loaded using the `plugins.path` option or comes from a
universal plugin, no gemspec can be found for that specific plugin.
We should not print any warning in that case.
Fixes: #6444
Fixes #6448
The metric store has no concept of whether a metric needs to exist, so as a
rule of thumb we need to define metrics with 0 values and send them to the
store when we initialize something.
This PR makes sure the batch object records the right default values.
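A minimal sketch of the idea; `batch_metrics` and the exact keys are illustrative:
```ruby
# Sketch only: push explicit zeros when the batch is initialized so the keys
# exist in the store before any event has been processed.
%i[in filtered out duration_in_millis].each do |key|
  batch_metrics.increment(key, 0)
end
```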
Fixes: #6449
Fixes #6450
When logstash is run inside a linux container we will gather statistics about the cgroup and the
cpu usage. This information will show in the /_node/stats api and the result will look like this:
```
"os" : {
"cgroup" : {
"cpuacct" : {
"usage" : 789470280230,
"control_group" : "/user.slice/user-1000.slice"
},
"cpu" : {
"cfs_quota_micros" : -1,
"control_group" : "/user.slice/user-1000.slice",
"stat" : {
"number_of_times_throttled" : 0,
"time_throttled_nanos" : 0,
"number_of_periods" : 0
},
"cfs_period_micros" : 100000
}
}
}
```
Fixes: #6252
Fixes #6357
The assertions were using dummy outputs and kept received events in an
array in memory, but the test actually only needed to match the number
of events it received, so this PR adds a DroppingDummyOutput that won't
retain the events in memory.
The previous implementation was causing an OOM issue when running the
test on a very fast machine.
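A minimal sketch of what such an output can look like; this is an illustration of the idea, not the actual spec helper:
```ruby
# Sketch only: count events instead of retaining them, so specs can assert on
# the number of events received without memory growing with the event volume.
class DroppingDummyOutput
  attr_reader :events_received

  def initialize
    @mutex = Mutex.new
    @events_received = 0
  end

  def multi_receive(events)
    @mutex.synchronize { @events_received += events.size }
  end
end
```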
Fixes: #6335
Fixes #6346
add queue.max_acked_checkpoint and queue.checkpoint_rate settings
now using checkpoint.max_acks, checkpoint.max_writes and checkpoint.max_interval
rename options
wip rework checkpointing
refactored fully acked pages handling on acking and recovery
correctly close queue
proper queue open/recovery
checkpoint dump utility
checkpoint on writes
removed debug code and added missing newline
added better comment on contiguous checkpoints
fix spec for new pipeline setting