Instead of using a list of non-reloadable plugins, we add a new class
method on the base plugin class that plugins can override.
By default we assume that all plugins are reloadable; only the stdin
input shouldn't be reloadable.
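A minimal sketch of the idea (class and method names here are illustrative, not the exact implementation):

```ruby
# Hypothetical sketch: a class-level predicate on the base plugin class.
# Plugins that cannot survive a reload override it to return false.
class BasePlugin
  def self.reloadable?
    true # assume every plugin is reloadable by default
  end
end

class StdinInput < BasePlugin
  def self.reloadable?
    false # stdin is bound to the process's STDIN and cannot be re-created
  end
end
```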
Fixes#6499
during Agent#start_pipeline a new thread is launched that executes
a pipeline.run and a rescue block which increments the failed reload counter
After launching the thread, the parent thread will wait for the pipeline
to start, or detect that the pipeline aborted, or sleep and check again.
There is a bug where, if the pipeline.run aborts during start_workers,
the pipeline is still marked as `ready`, and the thread will continue
running for a very short period of time, incrementing the failed reload
metric.
During this period of `pipeline.ready? == true` and `thread.alive? == true`,
the parent check code will observe all the necessary conditions to
consider the pipeline.run to be successful and thus increment the success
counter too. A failed reload can therefore result in both the success and
failure reload counts being incremented.
This commit changes the parent thread check to use `pipeline.running?`
instead of `pipeline.ready?`, which is the next logical state transition,
and ensures it is only true if `start_workers` runs successfully.
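A hedged sketch of the corrected wait loop (method names other than `running?` and `alive?` are assumptions, not the actual implementation):

```ruby
# Wait for the pipeline thread to either reach the running state or die.
# `running?` only becomes true after start_workers completes, so an abort
# during start_workers is now correctly reported as a failure.
def wait_for_pipeline(pipeline, thread, interval = 0.01)
  loop do
    return true if pipeline.running?  # reload succeeded
    return false unless thread.alive? # pipeline.run aborted
    sleep(interval)
  end
end
```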
Fixes#6566
re #6508.
- removed `acked_count`, `unacked_count`, and migrated `unread_count` to
top-level `events` field.
- removed `current_size_in_bytes` info from queue node stats
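With these changes, the queue section of the node stats might look roughly like this (the shape shown here is an illustration, not the exact response):

```
"queue" : {
  "events" : 0,
  "type" : "persisted"
}
```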
Fixes#6510
Record the wall clock time for each output. A new `duration_in_millis`
key will now be available for each output in the api located at http://localhost:9600/_node/stats
This commit also changes some expectations in the output_delegator_spec
that were not working as intended with the `have_received` matcher.
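The mechanics can be sketched like this (the wrapper class and the metric object's API are assumptions for illustration):

```ruby
# Wrap each call to the output and add the elapsed wall clock time to a
# counter metric, so durations accumulate across batches.
class TimedOutput
  def initialize(output, duration_counter)
    @output = output
    @duration_in_millis = duration_counter
  end

  def multi_receive(events)
    start = Time.now
    @output.multi_receive(events)
  ensure
    @duration_in_millis.increment(((Time.now - start) * 1000).round)
  end
end
```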
Fixes#6458
When we were initializing the `duration_in_millis` in the batch we
were using a `Gauge` instead of a `Counter`; since both objects have the
same signature, when we were actually recording the time the value was
replaced instead of incremented.
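The difference, with simplified stand-ins for the two metric types:

```ruby
# Simplified stand-ins: a gauge stores the last value, a counter accumulates.
class Gauge
  attr_reader :value
  def initialize; @value = 0; end
  def increment(v); @value = v; end # same method name, but it replaces
end

class Counter
  attr_reader :value
  def initialize; @value = 0; end
  def increment(v); @value += v; end # accumulates across batches
end

gauge   = Gauge.new
counter = Counter.new
[5, 7].each { |ms| gauge.increment(ms); counter.increment(ms) }
# gauge.value   => 7  (only the last batch's duration)
# counter.value => 12 (the total, which is what we want)
```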
Fixes#6465
We have moved the responsibility of watching the collector into the
input itself; this feature might come back when we have a new execution
model that is better suited to watching metrics, but that would require
more granular watchers.
No tests were affected by this change since the code that required that
feature was already removed.
Fixes: #6447
Fixes#6456
When a plugin is loaded using the `plugins.path` option or comes from a
universal plugin, no gemspec can be found for the specific plugin.
We should not print any warning in that case.
Fixes: #6444
Fixes#6448
The metric store has no concept of whether a metric needs to exist, so as
a rule of thumb we need to define metrics with 0 values and send them to
the store when we initialize something.
This PR makes sure the batch object records the right default values.
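As a sketch, pre-registering the defaults could look like this (the metric names and the namespace API are assumptions, not the actual code):

```ruby
# Send explicit zero values at initialization time so the store exposes
# every expected key before the first event flows through the batch.
def register_default_batch_metrics(namespace)
  namespace.increment(:filtered, 0)
  namespace.increment(:out, 0)
  namespace.increment(:duration_in_millis, 0)
end
```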
Fixes: #6449
Fixes#6450
When logstash is run under a linux container we will gather statistics about the cgroup and the
cpu usage. This information will show in the /_node/stats api and the result will look like this:
```
"os" : {
  "cgroup" : {
    "cpuacct" : {
      "usage" : 789470280230,
      "control_group" : "/user.slice/user-1000.slice"
    },
    "cpu" : {
      "cfs_quota_micros" : -1,
      "control_group" : "/user.slice/user-1000.slice",
      "stat" : {
        "number_of_times_throttled" : 0,
        "time_throttled_nanos" : 0,
        "number_of_periods" : 0
      },
      "cfs_period_micros" : 100000
    }
  }
}
```
Fixes: #6252
Fixes#6357
This library provides a "log4j 1.2"-like API from the log4j2 library.
We don't seem to use this, and including it seems to be the cause of the
Logstash log4j input rejecting log4j 1.x's SocketAppender with this
message:
org.apache.log4j.spi.LoggingEvent; class invalid for deserialization
The origin of this error is that log4j2's log4j-1.2-api defines
LoggingEvent without `implements Serializable`.
This commit also includes regenerated gemspec_jars.rb and
logstash-core_jars.rb.
Reference: https://github.com/logstash-plugins/logstash-input-log4j/issues/36
Fixes#6309
add queue.max_acked_checkpoint and queue.checkpoint_rate settings
now using checkpoint.max_acks, checkpoint.max_writes and checkpoint.max_interval
rename options
wip rework checkpointing
refactored full acked pages handling on acking and recovery
correctly close queue
proper queue open/recovery
checkpoint dump utility
checkpoint on writes
removed debug code and added missing newline
added better comment on contiguous checkpoints
fix spec for new pipeline setting
This PR adds new information to the /_node/stats api: it will return the
load average of the machine in the following formats, depending on the
platform that logstash is running on:
**Linux**
```json
{
  "cpu" : {
    "percent" : 26,
    "load_average" : {
      "1m" : 2.826171875,
      "5m" : 1.8261718,
      "15m" : 1.56566
    }
  }
}
```
**macOS and other platforms that the OperatingSystemMXBean understands**
```json
{
  "cpu" : {
    "percent" : 26,
    "load_average" : {
      "1m" : 2.826171875
    }
  }
}
```
Load average is not available on Windows
Fixes: #6214
Fixes#6240
When we load a yml file in logstash, if a key isn't found in the
settings, we move that key and the corresponding value to a
`transient_settings` hash. This gives plugins time to register
new settings. When we call #validate_all we merge the
`transient_settings` hash and do the validation; if a key is not found
at that stage, an exception will be thrown.
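A simplified sketch of the flow (the class shape and setting names are assumptions; only the `transient_settings` idea comes from the change itself):

```ruby
class Settings
  def initialize(known_keys)
    @known = known_keys
    @values = {}
    @transient_settings = {}
  end

  # Unknown keys are parked instead of rejected: a plugin loaded later
  # may still register them as valid settings.
  def from_yaml(hash)
    hash.each do |key, value|
      (@known.include?(key) ? @values : @transient_settings)[key] = value
    end
  end

  def register(key)
    @known << key
  end

  # Merge the parked keys back in; anything still unknown is an error.
  def validate_all
    @transient_settings.each do |key, value|
      raise ArgumentError, "unknown setting: #{key}" unless @known.include?(key)
      @values[key] = value
    end
    @transient_settings.clear
    @values
  end
end
```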
Fixes#6109
- Allow plugin authors to decide how their plugins are structured
- Allow a new kind of plugin that lets plugin authors add hooks into logstash core.
Fixes#6109
currently a reload is only marked as a failure if it fails the classic
config test check, where we check for parameter names, existing plugins, etc.
This change waits for the pipeline to transition to running before
marking the reload as success, otherwise it is a failure.
fixes#6195
Fixes#6196
This adds two new fields 'id', and 'name' to the base metadata for API requests.
These fields are now returned at all API endpoints by default.
The `id` field is the persisted UUID, the name field is the custom name
the user has passed in (defaulted to the hostname).
I renamed `node_uuid` and `node_name` to just `id` and `name` to be
in line with Elasticsearch.
This also fixed a broken test double in `webserver_spec.rb` that was
using doubles across threads which created hidden errors.
Fixes#6224
5.0.0 required Logstash to have a valid logstash.yml before it could start successfully. This
was mostly fine for users who installed Logstash via tar.gz, but many folks who install
it via packages still start Logstash manually. Also, our documentation uses the -e flag for
getting started with Logstash and sending a first event. logstash.yml has only defaults defined,
and there is no required parameter to start Logstash, so we should be able to use the defaults if no
logstash.yml is present. Obviously, this is not ideal from a user point of view, so we should log a
warning but continue to bootstrap.
Fixes#6170
Fixes#6172