This change was harder than it first appeared! Because of the complicated
interactions between our Setting class and our monkey-patched Clamp
classes, this required adding new hooks in various places to
properly intercept the settings at the right point and set this value
dynamically.
Crucially, this only changes path.queue when the user has *not*
overridden it explicitly in the settings.yml file.
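The override rule above can be sketched minimally (the method name and settings hash are illustrative, not the actual Logstash implementation): the dynamically computed default applies only when settings.yml leaves `path.queue` unset.

```ruby
# Hypothetical sketch: use the dynamic default only when the user did not
# set path.queue explicitly in settings.yml.
def effective_path_queue(yaml_settings, computed_default)
  yaml_settings.key?("path.queue") ? yaml_settings["path.queue"] : computed_default
end

effective_path_queue({}, "/var/lib/logstash/queue")
# => "/var/lib/logstash/queue"
effective_path_queue({ "path.queue" => "/data/q" }, "/var/lib/logstash/queue")
# => "/data/q"
```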
Fixes #6378 and #6387
Fixes #6731
fix agent and pipeline and specs for queue exclusive access
added comments and swapped all sleep 0.01 to 0.1
revert explicit pipeline close in specs using sample helper
fix multiple pipelines specs
use BasePipeline for config validation which does not instantiate a new queue
review modifications
improve queue exception message
the pipeline class has two state predicates: ready? and running?
ready? becomes true after `start_workers` terminates (successfully or not)
running? becomes true before calling `start_flusher`, which means that
`start_workers` is guaranteed to have terminated successfully
Whenever possible, we should use `running?` instead of `ready?` in the
spec setup blocks. The only place where this may be a problem is when the
pipeline execution is short-lived (e.g. a generator with a small count) and the
spec may never observe pipeline.running? == true
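A spec helper following the guidance above could look like this minimal sketch (the helper name and timeout are illustrative): block until the pipeline reports `running?`, with a deadline so a short-lived pipeline cannot hang the spec forever.

```ruby
# Hypothetical spec helper: wait until pipeline.running? is true, but give
# up after `timeout` seconds so a generator with a small count (which may
# finish before we observe running?) does not block the spec indefinitely.
def wait_until_running(pipeline, timeout: 5)
  deadline = Time.now + timeout
  sleep 0.01 while !pipeline.running? && Time.now < deadline
  pipeline
end
```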
Fixes #6574
Record the wall clock time for each output: a new `duration_in_millis`
key is now available for each output in the api located at http://localhost:9600/_node/stats
This commit also changes some expectations in the output_delegator_spec
that were not working as intended with the `have_received` matcher.
Fixes #6458
When we were initializing `duration_in_millis` in the batch we
were using a `Gauge` instead of a `Counter`; since both objects have the
same signature, when we actually recorded the time the value was
replaced instead of incremented.
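The mixup above can be illustrated with a minimal sketch (these tiny classes are illustrative, not the real metric classes): both respond to the same call, but a gauge replaces the value while a counter accumulates it.

```ruby
# Same call signature, different semantics: this is why swapping the two
# goes unnoticed until the recorded values are wrong.
class Gauge
  attr_reader :value
  def initialize; @value = 0; end
  def record(v); @value = v; end   # last write wins
end

class Counter
  attr_reader :value
  def initialize; @value = 0; end
  def record(v); @value += v; end  # durations accumulate
end

g = Gauge.new
c = Counter.new
[5, 7].each { |ms| g.record(ms); c.record(ms) }
g.value # => 7  (replaced)
c.value # => 12 (incremented)
```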
Fixes #6465
When a plugin is loaded using the `plugins.path` option or comes from a
universal plugin, no gemspec can be found for the specific plugin.
We should not print any warning in that case.
Fixes #6444
Fixes #6448
The metric store has no concept of whether a metric needs to exist, so as a rule
of thumb we need to define metrics with 0 values and send them to the
store when we initialize something.
This PR makes sure the batch object records the right default values.
Fixes #6449
Fixes #6450
When logstash runs inside a Linux container we gather statistics about the cgroup and the
cpu usage. This information shows up in the /_node/stats api and the result looks like this:
```json
"os" : {
  "cgroup" : {
    "cpuacct" : {
      "usage" : 789470280230,
      "control_group" : "/user.slice/user-1000.slice"
    },
    "cpu" : {
      "cfs_quota_micros" : -1,
      "control_group" : "/user.slice/user-1000.slice",
      "stat" : {
        "number_of_times_throttled" : 0,
        "time_throttled_nanos" : 0,
        "number_of_periods" : 0
      },
      "cfs_period_micros" : 100000
    }
  }
}
```
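As a rough sketch of where a number like `cpuacct.usage` above comes from (paths and availability vary by distribution and cgroup version; this is not the actual Logstash code):

```ruby
# Hypothetical reader: under cgroup v1, cpuacct.usage holds the cumulative
# CPU time of the control group in nanoseconds. Returns nil when the file
# is absent (e.g. cgroup v2 hosts or non-Linux platforms).
def cgroup_cpuacct_usage(control_group)
  path = File.join("/sys/fs/cgroup/cpuacct", control_group, "cpuacct.usage")
  File.readable?(path) ? File.read(path).strip.to_i : nil
end
```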
Fixes #6252
Fixes #6357
The assertions were using dummy outputs that kept received events in an
in-memory array, but the test actually only needed to match the number
of events received. This PR adds a DroppingDummyOutput that won't
retain the events in memory.
The previous implementation was causing an OOM issue when running the
test on a very fast machine.
Fixes #6335
Fixes #6346
add queue.max_acked_checkpoint and queue.checkpoint_rate settings
now using checkpoint.max_acks, checkpoint.max_writes and checkpoint.max_interval
rename options
wip rework checkpointing
refactored full acked pages handling on acking and recovery
correctly close queue
proper queue open/recovery
checkpoint dump utility
checkpoint on writes
removed debug code and added missing newline
added better comment on contiguous checkpoints
fix spec for new pipeline setting
A pack in this context is a *bundle* of plugins that can be distributed outside of rubygems; it is similar to what ES and Kibana are doing, and
the user interface is modeled after them. See https://www.elastic.co/downloads/x-pack
**Do not confuse it with the `bin/logstash-plugin pack/unpack` command.**
- it contains one or more plugins that need to be installed
- it is self-contained, with the gems and the needed jars
- it is distributed as a zip file
- the file structure needs to follow some rules
- As a reserved name on the elastic.co download http server
  - `bin/plugin install logstash-mypack` will check on the download server whether a pack for the current Logstash version exists; if so it is downloaded, and if it doesn't exist we fall back on rubygems.
  - The file on the server follows the convention `logstash-mypack-{LOGSTASH_VERSION}.zip`
- As a fully qualified url
  - `bin/plugin install http://test.abc/logstash-mypack.zip`: if it exists it will be downloaded and installed; if it does not, we raise an error.
- As a local file
  - `bin/plugin install file:///tmp/logstash-mypack.zip`: if it exists it will be installed
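The versioned-artifact naming convention above can be sketched as follows (the host and version here are purely illustrative, not the real artifact server):

```ruby
# Hypothetical helper showing the logstash-mypack-{LOGSTASH_VERSION}.zip
# naming convention used when resolving a pack on the download server.
LOGSTASH_VERSION = "5.2.0" # assumed for illustration

def pack_url(name, host = "https://artifacts.example.org")
  "#{host}/#{name}-#{LOGSTASH_VERSION}.zip"
end

pack_url("logstash-mypack")
# => "https://artifacts.example.org/logstash-mypack-5.2.0.zip"
```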
Fixes #6168
This PR adds new information to the /_node/stats api: it returns the
load average of the machine in the following formats, depending on the
platform that logstash is running on:
**Linux**
```json
{
  "cpu" : {
    "percent" : 26,
    "load_average" : {
      "1m" : 2.826171875,
      "5m" : 1.8261718,
      "15m" : 1.56566
    }
  }
}
```
**macOS and other platforms that the OperatingSystemMXBean understands**
```json
{
  "cpu" : {
    "percent" : 26,
    "load_average" : {
      "1m" : 2.826171875
    }
  }
}
```
Load average is not available on Windows.
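On Linux, values like those above are conventionally available from `/proc/loadavg`, whose first three fields are the 1m, 5m and 15m averages; a minimal sketch (not the actual Logstash code):

```ruby
# Hypothetical reader for the Linux case: returns the three load averages
# as a hash shaped like the API output above, or nil off-Linux.
def read_load_average(path = "/proc/loadavg")
  return nil unless File.readable?(path)
  one, five, fifteen = File.read(path).split.first(3).map(&:to_f)
  { "1m" => one, "5m" => five, "15m" => fifteen }
end
```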
Fixes #6214
Fixes #6240
When we load a yml file in logstash, if a key isn't found in the
settings we move that key and the corresponding value to a
`transient_settings` hash. This gives plugins time to register
new settings. When we call #validate_all we merge the
`transient_settings` hash and do the validation; if a key is still not found
at that stage an exception will be thrown.
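The two-phase lookup described above can be sketched minimally (class and method names are illustrative, not the actual Logstash implementation):

```ruby
# Hypothetical sketch: unknown keys from settings.yml are parked in a
# transient hash until plugins register their settings; only validate_all!
# rejects whatever is still unclaimed.
class Settings
  def initialize(known_keys)
    @known = known_keys
    @values = {}
    @transient = {}
  end

  def load(yaml_hash)
    yaml_hash.each do |key, value|
      (@known.include?(key) ? @values : @transient)[key] = value
    end
  end

  # A plugin registering a setting claims its parked value, if any.
  def register(key)
    @known << key
    @values[key] = @transient.delete(key) if @transient.key?(key)
  end

  def validate_all!
    unless @transient.empty?
      raise ArgumentError, "unknown setting(s): #{@transient.keys.join(', ')}"
    end
    @values
  end
end
```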
Fixes #6109
- Allow plugin authors to decide how their plugins are structured
- Allow a new kind of plugin that lets plugin authors add hooks into logstash core.
Fixes #6109
Currently a reload is only marked as a failure if it fails the classic
config test check, where we check for parameter names, existing plugins, etc.
This change waits for the pipeline to transition to running before
marking the reload as a success; otherwise it is a failure.
Fixes #6195
Fixes #6196
A few days ago I created this PR[1] to enable the writable directory
spec to pass on macOS. When the PQ got merged into the master branch the
test was disabled again[2].
[1] https://github.com/elastic/logstash/pull/6110
[2] 761f9f1bc9
Fixes #6218
This adds two new fields, 'id' and 'name', to the base metadata for API requests.
These fields are now returned at all API endpoints by default.
The `id` field is the persisted UUID; the `name` field is the custom name
the user has passed in (defaulting to the hostname).
I renamed `node_uuid` and `node_name` to just `id` and `name` to be
in line with Elasticsearch.
This also fixes a broken test double in `webserver_spec.rb` that was
using doubles across threads, which created hidden errors.
Fixes #6224
In a busy logstash install, given the way we interact with core stats from
the jvm or the os, the poller can time out. This exception was logged as
an error, but it shouldn't impact normal operation.
This PR changes the following:
- Change the interval to 5s instead of 1s
- Increase the timeout to 160s instead of 60s
- Concurrent::TimeoutError will be logged at debug level instead of error
- Any other exception will use the error level
- add tests
Fixes #6160
Fixes #6158
Fixes #6169
Paths for unix domain sockets have a hard length limit: 104 characters on macOS and 108 on
linux. Using `Stud::Temporary.pathname` to generate the path can exceed
that limit, so we use a custom scheme of tmpdir +
Time.now.to_f, which should be good enough for this test.
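The workaround above could look like this minimal sketch (the helper name and filename pattern are illustrative): build a short socket path from `Dir.tmpdir` plus a float timestamp, keeping well under the 104/108 character limits.

```ruby
require "tmpdir"

# Hypothetical replacement for Stud::Temporary.pathname: tmpdir plus a
# float timestamp yields a short, unique-enough path for a test socket.
def short_socket_path
  File.join(Dir.tmpdir, "ls-#{Time.now.to_f}.sock")
end
```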
Fixes #6108
Fixes #6110