* Add Logstash configs for filebeat modules
Hand-transpiled Logstash configs for Filebeat modules, to be included
in the docs for 5.3.
See the Filebeat modules for more details: https://github.com/elastic/beats/tree/master/filebeat/module
* Fix newlines
* Remove multiline, fix bugs
Previously, the DeadLetterQueueReadManager would throw an exception
when it attempted to choose a segment to read from its segments list
and that list was empty. This change fixes that.
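An illustrative sketch of the guard (the real DeadLetterQueueReadManager
is Java code in Logstash core; the names below are invented):
```ruby
class SegmentListSketch
  def initialize(segments = [])
    @segments = segments
  end

  def next_segment
    # Previously the equivalent code picked a segment unconditionally
    # and raised on an empty list; now an empty list yields nil so the
    # caller can wait for new segments instead of crashing.
    return nil if @segments.empty?
    @segments.min
  end
end
```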
Fixes #6880
This PR adds the initial building block for passing an
`ExecutionContext` from the pipeline to the plugins; currently we
only pass the `pipeline_id`.
We use the accessor `execution_context=` to set the context; in a
future refactor we will pass the object to the constructor.
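A hypothetical sketch of the wiring this enables; `ExecutionContext`,
`execution_context=`, and `pipeline_id` come from the PR, while the
class bodies are invented for illustration:
```ruby
class ExecutionContext
  attr_reader :pipeline_id

  def initialize(pipeline_id)
    @pipeline_id = pipeline_id
  end
end

class Plugin
  # Set by the pipeline after construction; a later refactor is
  # expected to pass the context to the constructor instead.
  attr_accessor :execution_context
end

plugin = Plugin.new
plugin.execution_context = ExecutionContext.new("main")
plugin.execution_context.pipeline_id # => "main"
```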
Fixes #6890
Some variables that hold paths in the Linux shell scripts are not
properly quoted, so undefined behavior can show up (e.g. with a link
path that contains a space).
This change encloses those variables in quotes in the bin/logstash,
bin/logstash.lib.sh, and bin/system-install scripts.
Fixes #6596
Fixes #6877
Some codecs are context-specific and not threadsafe. If, for
instance, you want to use `generator { threads => 3 }`, you will run
into buggy behavior with the line and multiline codecs, which are not
threadsafe.
This patch is a quick workaround for this behavior. It does not fix
the issue for inputs that do their own multithreading; those inputs
should handle codec cloning / lifecycle internally according to their
specific requirements.
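A minimal sketch of the cloning workaround with a made-up codec class
(real codecs live in their own plugins): every thread clones the
configured codec, so per-thread decode state is never shared.
```ruby
class FakeCodec
  def initialize
    @buffer = []
  end

  # Object#clone is shallow, so give each clone its own buffer.
  def initialize_copy(source)
    super
    @buffer = []
  end

  def decode(data)
    @buffer << data
    yield data
  end
end

base_codec = FakeCodec.new

threads = 3.times.map do |i|
  Thread.new do
    codec = base_codec.clone # per-thread copy, not the shared instance
    codec.decode("line #{i}") { |event| puts event }
  end
end
threads.each(&:join)
```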
Fixes #6865
Logstash's plugin manager now follows the proxy configuration from
the environment.
If you configure `http_proxy` and `https_proxy`, the manager will use
these settings for all the Ruby HTTP connections and will also pass
them down to Maven.
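A rough sketch of the environment-driven proxy lookup, using only
standard-library calls; the host is a placeholder and the hand-off to
Maven is not shown, so this is not the plugin manager's actual code:
```ruby
require "uri"
require "net/http"

proxy = ENV["https_proxy"] || ENV["http_proxy"]

http =
  if proxy
    # Route the connection through the configured proxy.
    proxy_uri = URI(proxy)
    Net::HTTP.new("rubygems.org", 443, proxy_uri.host, proxy_uri.port,
                  proxy_uri.user, proxy_uri.password)
  else
    Net::HTTP.new("rubygems.org", 443)
  end
http.use_ssl = true
```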
Fixes: #6619, #6528
Fixes #6825
This PR changes the behavior of the command when you specify a
directory instead of a filename; previously, if the directory
existed, it would have deleted the directory and the files in it.
The new flow and validation are safer: if you specify a directory,
Logstash will tell you to specify a complete filename.
If the ZIP extension is missing, the command line will ask you to
make sure you target the right filename.
If the output file already exists, the command will *not* delete it
but will instead return a warning message; you can force a delete by
using the `--overwrite` option.
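A hypothetical sketch of the new validation flow; the method name and
messages are invented for illustration:
```ruby
def validate_target!(path, overwrite: false)
  # A directory is never a valid target.
  if File.directory?(path)
    raise ArgumentError, "Please specify a complete filename, not a directory"
  end

  # Require an explicit .zip extension.
  unless File.extname(path).casecmp(".zip").zero?
    raise ArgumentError, "The target file must use the .zip extension"
  end

  # Never silently delete an existing file.
  if File.exist?(path) && !overwrite
    raise ArgumentError, "#{path} already exists; pass --overwrite to replace it"
  end
end
```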
Fixes: #6862
Fixes #6861
This PR fixes an annoyance when running `bin/logstash-plugin install
--no-verify` without any plugins: the command was making an
unnecessary call to the artifacts web server.
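The gist of the fix as a sketch; `fetch_artifacts_metadata` is a
made-up stand-in for the call to the artifacts web server:
```ruby
def install(plugin_names)
  # With no plugins given there is nothing to resolve, so skip the
  # remote call entirely.
  return if plugin_names.empty?

  fetch_artifacts_metadata(plugin_names) # hypothetical remote call
end
```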
Fixes #6826
This PR fixes an issue where the max heap size was reported as double
the actual value, because the values of `usage.max` and `peak.max`
were merged into a single value.
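Made-up numbers to show the arithmetic: `usage.max` and `peak.max`
both report the same configured ceiling, so merging them by summing
doubled the reported max heap.
```ruby
usage_max = 1_073_741_824 # usage.max, a 1 GB heap
peak_max  = 1_073_741_824 # peak.max, the same 1 GB ceiling

buggy_reported_max = usage_max + peak_max # => 2147483648 (2 GB, wrong)
fixed_reported_max = usage_max            # => 1073741824 (1 GB, right)
```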
Fixes: #6608
Fixes #6827
This PR changes the file's open mode to make sure we correctly
download the file in binary mode on Windows; without that, the gem
will be corrupt and will throw the following message:
```
ERROR: While executing gem ... (Gem::Package::FormatError)
package is corrupt, exception while verifying: Unexpected end of ZLIB input stream (Zlib::GzipFile::Error) in logstash-filter-dissect-1.0.8.gem
```
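The essence of the change, as a sketch: open the destination with the
"b" (binary) flag so Windows performs no newline translation on the
gem's bytes. The filename comes from the error above; the byte string
is a stand-in for the downloaded content.
```ruby
downloaded_bytes = "\x1F\x8B\x08\x00".b # fake gzip header bytes

# "wb" = write + binary; plain "w" corrupts the archive on Windows.
File.open("logstash-filter-dissect-1.0.8.gem", "wb") do |file|
  file.write(downloaded_bytes)
end
```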
Fixes #6840
Add a new method that uses the `fast_lookup` hash to find out whether
a specific metric exists, instead of relying on exceptions.
Usage:
```ruby
metric_store.has_metric?(:node, :sashimi, :pipelines, :pipeline01, :plugins, :"logstash-output-elasticsearch", :event_in) # true
metric_store.has_metric?(:node, :sashimi, :pipelines, :pipeline01, :plugins, :"logstash-output-elasticsearch", :do_not_exist) # false
```
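A simplified sketch of how such a lookup can avoid exceptions,
assuming `fast_lookup` is a hash keyed by the full key path; this is
an assumption about the store's internals, not the exact
implementation:
```ruby
class MetricStoreSketch
  def initialize(fast_lookup)
    @fast_lookup = fast_lookup
  end

  def has_metric?(*key_path)
    # A plain hash membership test; no exception is raised for keys
    # that do not exist.
    @fast_lookup.key?(key_path)
  end
end

store = MetricStoreSketch.new({ [:node, :sashimi] => 42 })
store.has_metric?(:node, :sashimi) # => true
store.has_metric?(:node, :missing) # => false
```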
Fixes: #6533
Fixes #6759
This PR changes where `events.in` is calculated. Previously the
values were calculated in the `ReadClient`, which was fine before the
addition of the persistent queue (PQ), but made the stats inaccurate
when the PQ was enabled and the producers were much faster than the
consumers.
These commits move the collection of the metric into an instrumented
`WriteClient`, so both implementations of the client queues use the
same code.
This also makes it possible to record `events.out` for every input,
along with the time spent waiting to push to the queue.
The API now exposes these values for each plugin, at the events
level, and for the pipeline.
Using a pipeline with a sleep filter and the PQ, we will see this
kind of response from the API:
```json
{
  "duration_in_millis": 438624,
  "in": 3011436,
  "filtered": 2189,
  "out": 2189,
  "queue_push_duration_in_millis": 49845
}
```
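A rough sketch of the instrumented write path described above; class,
method, and metric names are illustrative, not the actual
implementation:
```ruby
# Illustrative counter; the real metric objects are thread-safe.
class Counter
  attr_reader :value
  def initialize; @value = 0; end
  def increment(by = 1); @value += by; end
end

class InstrumentedWriteClient
  def initialize(queue, events_in, push_duration_millis)
    @queue = queue
    @events_in = events_in                       # feeds events.in
    @push_duration_millis = push_duration_millis # feeds queue_push_duration_in_millis
  end

  def push(event)
    @events_in.increment
    started = Time.now
    @queue.push(event) # blocks when the PQ is saturated
    @push_duration_millis.increment(((Time.now - started) * 1000).round)
  end
end

client = InstrumentedWriteClient.new(Queue.new, Counter.new, Counter.new)
client.push({ "message" => "hello" })
```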
Fixes: #6512
Fixes #6532