This is a big change. It:
1. Moves API specs out of their special hierarchy
2. Removes the API spec spec_helper
3. Reactivates the stats command spec (which was accidentally not being
run before because it lacked the _spec suffix)
This was required to fix the preceding commit, where we added a
before(:each) hook to the spec_helper that wasn't being picked up in
some cases due to the existence of two spec helpers and a $LOAD_PATH
that could change.
Fixes #7132
Instead of depending on the now-deprecated multiline filter, we use a
dummy filter that just emits events. This simplifies the test and
dramatically reduces timing issues.
I also increased the max-wait for the timer, just in case.
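For illustration, a dummy filter of this kind can be very small; here is a minimal sketch (names and structure are illustrative, not the actual spec code):
```ruby
# Hypothetical no-op filter: passes every event through unchanged,
# with no buffering and therefore no timing sensitivity.
require "logstash/filters/base"

class LogStash::Filters::Dummy < LogStash::Filters::Base
  config_name "dummy"

  def register
    # nothing to set up
  end

  def filter(event)
    # emit the event as-is
    filter_matched(event)
  end
end
```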
Fixes #7024
Fixes #7131
This PR helps enable https://github.com/elastic/logstash/issues/7076.
This also fixes a bug where, when concatenating pipelines for PipelineIR,
the to_s versions of the SourceWithMetadata objects were conjoined
instead of just their `text`.
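In other words, assuming a `sources` array of SourceWithMetadata objects, the fix amounts to something like this sketch:
```ruby
# Buggy: joins the full to_s form of each source, which drags
# metadata into the concatenated configuration string.
config_string = sources.map(&:to_s).join("\n")

# Fixed: joins only the configuration text of each source.
config_string = sources.map(&:text).join("\n")
```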
Fixes #7079
The failure:
```
Failures:
  1) LogStash::Pipeline defaulting the pipeline workers based on thread safety when there are threadsafe filters only starts multiple filter threads
     Failure/Error: expect(pipeline.worker_threads.size).to eq(worker_thread_count)
       expected: 5
            got: 8
```
Related issues: #6855, #6245, #6355
Fixes #7071
* Introduce a DeadLetterQueueFactory
DeadLetterQueueFactory is a static class that keeps
a static collection of DeadLetterQueueWriteManagers, one per
pipeline whose plugins request to use it.
* DeadLetterQueue was added as a first-class field in the execution context that input/filter/output plugins can leverage
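From a plugin's point of view, that could look roughly like the sketch below (the `dlq_writer` accessor and the `write(event, reason)` signature are assumptions based on the description above):
```ruby
# Hypothetical usage inside a plugin: fetch the pipeline's dead letter
# queue writer from the execution context and record a failed event.
def filter(event)
  transform(event) # illustrative processing step
rescue => e
  execution_context.dlq_writer.write(event, "failed to process: #{e.message}")
end
```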
This means the configuration read from path.config (-f) is no longer auto-completed with stdin/stdout when the input/output sections are missing; that behaviour now only occurs with config.string (-e).
This cleans up the code from a design-patterns standpoint and makes testing plugins easier, since you can just create/destroy agents at will.
Without this change the SOURCE_LOADER singleton's state would become dirty as agents are created/destroyed, which is problematic.
Fixes #7048
The current implementation of the test was using a mock and an expect
on the internal classes to determine when to start testing the
metrics. I've rewritten the setup of the test to use the file output
instead of using a mock. I believe the previous code was not completely threadsafe and was causing this error
in the spec. We should really remove any mock of the form `expect_any_instance_of`.
Ref: #6935
Fixes #6956
Instead of using concrete classes, use `let` statements; this makes
sure they are reset between runs and makes the variables available at the
context level.
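For example, in RSpec (class and constructor shapes here are illustrative):
```ruby
describe LogStash::Agent do
  # `let` builds a fresh object for every example, so no state leaks
  # between runs, and the helper is visible anywhere in this context.
  let(:settings) { LogStash::SETTINGS.clone }
  let(:agent)    { described_class.new(settings) }

  it "starts from a clean agent" do
    expect(agent).to be_a(described_class)
  end
end
```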
Fixes #7017
When `fetch` is called we aggregate all the pipeline_configs from
the different sources; if we encounter duplicate ids we return a
failure, making the pipeline skip that fetch.
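A sketch of that duplicate check, with illustrative names for the result helpers:
```ruby
# Hypothetical sketch: aggregate pipeline configs from all sources and
# fail the fetch when two configs share the same pipeline_id.
def fetch
  pipeline_configs = sources.flat_map(&:pipeline_configs)

  duplicate_ids = pipeline_configs
    .group_by(&:pipeline_id)
    .select { |_id, configs| configs.size > 1 }
    .keys

  if duplicate_ids.any?
    failure("duplicate pipeline ids: #{duplicate_ids.join(", ")}")
  else
    success(pipeline_configs)
  end
end
```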
Fixes #6866
This exposes some of the internals of the registry to the outside world to
allow other parts of the system to retrieve plugins.
This change was motivated by #6851, to retrieve the installed list of
modules.
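The shape of the call we are after is roughly the following (treat the method name as an assumption):
```ruby
# Retrieve all installed plugins of a given type from the registry,
# e.g. to list installed modules.
installed_modules = LogStash::PLUGIN_REGISTRY.plugins_with_type(:modules)
```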
Fixes #7011
This unifies two different config classes that represented mostly
the same data. While this does expose a plain Java class to Ruby,
it works fine because Ruby only needs to get and set values, not
work with Ruby return types.
Fixes #7003
Fixes #7004
The test was actually starting an agent with a pipeline, and the runner
currently has no way to stop the agent. This commit makes sure we use an
agent mock instead.
Fix: #6931
Fixes #6957
This test has been a bit flaky since it relies on an external thread to
trigger; this commit adds a bit more time for the trigger to happen and
also adds a retry.
Fixes: #6929
Fixes #6945
PipelineAction now has an `execution_priority`; we use this method to change the priority of a Create action when we are creating a system pipeline.
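A sketch of what this can look like (the base priority and the ordering direction are assumptions):
```ruby
# Hypothetical sketch: every pipeline action reports a priority, and a
# Create action raises it for system pipelines (e.g. monitoring), so
# they are ordered ahead of user-defined pipelines.
module LogStash
  module PipelineAction
    class Base
      def execution_priority
        100 # assumed default; lower value = executed earlier
      end
    end

    class Create < Base
      def initialize(pipeline_config)
        @pipeline_config = pipeline_config
      end

      def execution_priority
        @pipeline_config.system? ? super - 50 : super
      end
    end
  end
end
```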
Fixes #6885
With the creation of X-Pack we added our first internal
pipeline, but if you were running the monitoring pipeline alongside a
*finite* pipeline (like `generator { count => X }`), Logstash would
refuse to stop once the finite pipeline had processed all its events.
This PR fixes the problem by adding a new pipeline setting called
`system`; in the shutdown loop we check whether all the user-defined
pipelines are complete, and if that is the case we shut down any internal
pipeline so Logstash stops gracefully.
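A sketch of that check, assuming a `system?` predicate on pipelines and hypothetical helper names:
```ruby
# Once every user-defined (non-system) pipeline has finished, shut
# down the remaining internal pipelines so Logstash can exit.
def all_user_pipelines_complete?
  pipelines.values.reject(&:system?).none?(&:running?)
end

def shutdown_internal_pipelines_if_done
  return unless all_user_pipelines_complete?
  pipelines.values.select(&:system?).each(&:shutdown)
end
```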
Fixes #6885
I've changed the relative path used in the test; I am using a relative
path that I can more easily control and that doesn't have 10 levels of nested
directories. I think it was confusing when run on our Linux CI.
I also added an assert in the `before` to make sure we generate the
right number of configurations in the example.
Ref: #6935
Fixes #6946
This PR introduces major changes inside Logstash, to help build future
features like supporting multiple pipelines or Java execution.
The previous implementation of the agent made the class hard to refactor or add new features to, because it needed to
know about too many internals of the pipeline and of how the configuration existed.
The changes include:
- Externalize the loading of the configuration file using a `SourceLoader`
- The source loader can support multiple sources and will aggregate them.
- We keep some metadata about the original file so the LIR can give better feedback.
- The Agent now asks the `SourceLoader` to know which configurations need to be run
- The Agent now uses a converge-state strategy to handle start, reload, and stop (see the sketch after this list)
- Each action executed on the pipeline is now extracted into its own class to help with the migration to a new pipeline execution.
- The pipeline now has a start method that handles the thread
- Better out-of-the-box support for multiple pipelines (monitoring)
- Refactor of the agent specs
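A rough sketch of the converge step mentioned above, with hypothetical names for the actions and accessors:
```ruby
# Diff the desired state (pipeline configs from the SourceLoader)
# against the running pipelines and derive the actions to execute.
def resolve_actions(pipeline_configs)
  actions = []

  pipeline_configs.each do |config|
    running = pipelines[config.pipeline_id]
    if running.nil?
      actions << PipelineAction::Create.new(config)
    elsif running.config_hash != config.config_hash
      actions << PipelineAction::Reload.new(config)
    end
  end

  # stop pipelines that no longer appear in any source
  desired_ids = pipeline_configs.map(&:pipeline_id)
  pipelines.each_key do |pipeline_id|
    actions << PipelineAction::Stop.new(pipeline_id) unless desired_ids.include?(pipeline_id)
  end

  actions
end
```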
Fixes #6632
Because we sync listeners with emitters when adding or creating a hook,
this could lead to duplicate listeners. This PR fixes the problem by using a set
instead of a list, making sure we can only have one instance of a specific
listener at any time.
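The gist of the fix, sketched with an illustrative class name:
```ruby
require "set"

class HooksRegistry # illustrative name
  def initialize
    # a Set ignores duplicate insertions, so syncing listeners with
    # emitters more than once can no longer register a listener twice
    @listeners = Set.new
  end

  def add_listener(listener)
    @listeners << listener
  end
end
```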
Fixes #6916
This PR adds the initial building block to pass an `ExecutionContext`
from the pipeline to the plugins; currently we only pass the `pipeline_id`.
We use the accessor `execution_context=` to set the context; in a future
refactor we will pass the object to the constructor.
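A minimal sketch of that wiring (the class shape is an assumption beyond what is described above):
```ruby
# The context object currently only carries the pipeline id.
class ExecutionContext
  attr_reader :pipeline_id

  def initialize(pipeline_id)
    @pipeline_id = pipeline_id
  end
end

# Inside the pipeline, after instantiating a plugin:
plugin.execution_context = ExecutionContext.new(pipeline_id)
```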
Fixes #6890
Some codecs are context-specific and not threadsafe. If, for instance,
you want to use `generator { threads => 3 }`, you will run into buggy
behavior with the line and multiline codecs, which are not threadsafe.
This patch is a quick workaround for this behavior. This does not fix
this issue for inputs that do their own multithreading. Those inputs
should handle codec cloning / lifecycle internally according to their
specific requirements.
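Conceptually, the workaround boils down to cloning the shared codec once per thread, e.g. (a sketch, assuming a Logstash environment where the line codec is loadable):
```ruby
require "logstash/codecs/line"

queue      = Queue.new
base_codec = LogStash::Codecs::Line.new # a non-threadsafe codec
base_codec.register

threads = 3.times.map do
  # each thread receives its own clone; no shared mutable codec state
  Thread.new(base_codec.clone) do |codec|
    codec.decode("hello world\n") { |event| queue << event }
  end
end
threads.each(&:join)
```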
Fixes #6865
Add a new method that uses the `fast_lookup` hash to find out whether a specific
metric exists, instead of relying on exceptions.
Usage:
```ruby
metric_store.has_metric?(:node, :sashimi, :pipelines, :pipeline01, :plugins, :"logstash-output-elasticsearch", :event_in) # true
metric_store.has_metric?(:node, :sashimi, :pipelines, :pipeline01, :plugins, :"logstash-output-elasticsearch", :do_not_exist) # false
```
Fixes: #6533
Fixes #6759
This PR changes where `events.in` is calculated; previously the
values were calculated in the `ReadClient`, which was fine before the
addition of the PQ but made the stats inaccurate when the PQ was
enabled and the producers were a lot faster than the consumers.
These commits move the collection of the metric inside an
instrumented `WriteClient`, so both implementations of the client queues use
the same code.
This also makes it possible to record `events.out` for every input, and the
time spent waiting to push to the queue.
The API now exposes these values for each plugin, at the events level
and at the pipeline level.
Using a pipeline with a sleep filter and the PQ, we will see this kind of
response from the API:
```json
{
  "duration_in_millis": 438624,
  "in": 3011436,
  "filtered": 2189,
  "out": 2189,
  "queue_push_duration_in_millis": 49845
}
```
Fixes: #6512
Fixes #6532