add queue.max_acked_checkpoint and queue.checkpoint_rate settings
now using checkpoint.max_acks, checkpoint.max_writes and checkpoint.max_interval
rename options
wip rework checkpointing
refactored fully-acked pages handling on acking and recovery
correctly close queue
proper queue open/recovery
checkpoint dump utility
checkpoint on writes
removed debug code and added missing newline
added better comment on contiguous checkpoints
fix spec for new pipeline setting
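For illustration only, the settings named in these commits might appear in logstash.yml roughly as follows; the values, and whether the keys live under the queue namespace, are assumptions rather than documented settings:
```yaml
# hypothetical logstash.yml snippet; key names come from the commits above, values are illustrative
checkpoint.max_acks: 1024       # checkpoint after this many acked events
checkpoint.max_writes: 1024     # checkpoint after this many written events
checkpoint.max_interval: 1000   # checkpoint at least this often (milliseconds)
```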
This PR adds new information to the /_node/stats API: it returns the
load average of the machine, in the following formats depending on the
platform that Logstash is running on:
**Linux**
```json
{
  "cpu" : {
    "percent" : 26,
    "load_average" : {
      "1m" : 2.826171875,
      "5m" : 1.8261718,
      "15m" : 1.56566
    }
  }
}
```
**MacOS and other platforms that the OperatingSystemMXBean understands**
```json
{
  "cpu" : {
    "percent" : 26,
    "load_average" : {
      "1m" : 2.826171875
    }
  }
}
```
Load average is not available on Windows
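For reference, these stats can be fetched from the node stats API with something like the following (assuming the API is listening on the default port 9600):
```
curl -s http://localhost:9600/_node/stats
```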
Fixes: #6214
Fixes #6240
When we load a yml file in Logstash, if a key isn't found in the
settings, we move that key and the corresponding value to a
`transient_settings` hash. This gives plugins time to register
new settings. When we call #validate_all we merge the
`transient_settings` hash and do the validation; if a key is still not found
at that stage, an exception will be thrown.
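A minimal sketch of the idea, assuming a simplified registry where each setting object responds to #set and #validate (illustrative only, not the actual LogStash::Settings implementation):
```ruby
# Illustrative only; not the real LogStash::Settings code.
class Settings
  def initialize
    @settings = {}            # settings already registered (core defaults)
    @transient_settings = {}  # yml keys with no registered setting yet
  end

  def register(name, setting)
    @settings[name] = setting
  end

  def from_yaml(hash)
    hash.each do |key, value|
      if @settings.key?(key)
        @settings[key].set(value)
      else
        # keep it around until plugins have had a chance to register their settings
        @transient_settings[key] = value
      end
    end
  end

  def validate_all
    @transient_settings.each do |key, value|
      raise ArgumentError, "Unknown setting \"#{key}\"" unless @settings.key?(key)
      @settings[key].set(value)
    end
    @settings.each_value(&:validate)
  end
end
```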
Fixes #6109
- Allow plugin authors to decide how their plugins are structured
- Allow a new kind of plugin that lets plugin authors add hooks into Logstash core.
Fixes #6109
Currently a reload is only marked as a failure if it fails the classic
config test check, where we check for parameter names, existing plugins, etc.
This change waits for the pipeline to transition to running before
marking the reload as a success; otherwise it is a failure.
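In sketch form, the new check amounts to something like the following (method names are assumptions, not the actual agent code):
```ruby
# Illustrative only; method names are assumptions, not the actual agent code.
def reload_succeeded?(new_pipeline, timeout = 60)
  deadline = Time.now + timeout
  until new_pipeline.running?
    # a stopped pipeline or a timeout counts as a failed reload
    return false if new_pipeline.stopped? || Time.now > deadline
    sleep 0.01
  end
  true
end
```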
Fixes #6195
Fixes #6196
This adds two new fields, 'id' and 'name', to the base metadata for API requests.
These fields are now returned by all API endpoints by default.
The `id` field is the persisted UUID; the `name` field is the custom name
the user has passed in (defaulting to the hostname).
I renamed `node_uuid` and `node_name` to just `id` and `name` to be
in line with Elasticsearch.
This also fixed a broken test double in `webserver_spec.rb` that was
using doubles across threads, which created hidden errors.
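For illustration, every endpoint response now carries base metadata along these lines (the values below are made up):
```json
{
  "id" : "df193be5-6a8e-4c4a-9a4e-f4c0ee1a2b3c",
  "name" : "example-hostname"
}
```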
Fixes #6224
5.0.0 required Logstash to have a valid logstash.yml before it could start successfully. This
was mostly fine for users who installed Logstash via tar.gz, but many folks who install
it via packages still start Logstash manually. Also, our documentation uses the -e flag for
getting started with Logstash and sending a first event. logstash.yml has only defaults defined,
and there is no required parameter to start Logstash, so we should be able to use the defaults if there is no
logstash.yml. Obviously, a missing logstash.yml is not ideal from a user's point of view, so we should log a
warning but continue to bootstrap.
Fixes #6170
Fixes #6172
In a busy Logstash install, given the way we interact with core stats from
the JVM or the OS, the poller can time out. This exception is logged as
an error, but it shouldn't impact normal operation.
This PR changes the following (a sketch of the resulting behaviour follows the list):
- Change the interval to 5s instead of 1s
- Make the timeout bigger: 160s instead of 60s
- Concurrent::TimeoutError will be logged at debug instead of error
- Any other exception will use the error level
- Add tests
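A sketch of the resulting behaviour, using concurrent-ruby's TimerTask directly (the real poller classes are structured differently; `collect_stats` is a stand-in):
```ruby
# Illustrative only; the real periodic poller classes are structured differently.
require "concurrent"
require "logger"

logger = Logger.new($stdout)
collect_stats = -> { { :heap_used_in_bytes => rand(1_000_000) } } # stand-in for the real work

poller = Concurrent::TimerTask.new(:execution_interval => 5, :timeout_interval => 160) do
  collect_stats.call
end

poller.add_observer do |_time, _result, exception|
  if exception.nil?
    # success, nothing to log
  elsif exception.is_a?(Concurrent::TimeoutError)
    logger.debug("timeout exception while collecting stats: #{exception}")
  else
    logger.error("exception while collecting stats: #{exception}")
  end
end

poller.execute
```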
Fixes: #6160
Fixes: #6158
Fixes #6169
Previously, if both -e and -f were specified, LS required that
-f still point to valid config file(s) before merging. This fixes it
so that providing either one of -f or -e is sufficient.
Fixes #6164
SafeURI requires forwardable, which is included elsewhere in logstash. Plugin tests using this validation don't load logstash in the right order for this to work, so we need an explicit require here.
Fixes #5978
This also makes the failure reports from WritableDirectory's validation
more specific (parent directory doesn't exist, or exists but isn't
writable, or the path itself isn't writable, etc).
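A sketch of the kind of checks and messages involved (illustrative only, not the actual WritableDirectory implementation):
```ruby
# Illustrative only; not the real LogStash::Setting::WritableDirectory code.
def validate_writable_directory(path)
  if ::File.directory?(path)
    raise ArgumentError, "Path \"#{path}\" is not writable" unless ::File.writable?(path)
  else
    parent = ::File.dirname(path)
    if !::File.directory?(parent)
      raise ArgumentError, "Parent directory \"#{parent}\" does not exist"
    elsif !::File.writable?(parent)
      raise ArgumentError, "Parent directory \"#{parent}\" exists but is not writable"
    end
  end
  true
end
```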
Fixes #6023
This allows us to validate all settings after all the settings sources
have been processed (logstash.yml, flags, environment variables, etc).
NullableString is required for validation to pass on what were
previously String settings with nil defaults.
WritableDirectory's strict now defaults to false to help with a problem
where the default path.data might not be writable *and* the user could
be specifying --path.data on the command line to compensate. Prior to
this, the default value would be validated and cause Logstash to
terminate on startup, because the default data directory was validated
before the flag override was applied.
To make this validate_all feature more useful, Setting#set will only
call validate if `strict` is true.
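In sketch form, the strict behaviour described above looks roughly like this (illustrative, not the exact LogStash::Setting code):
```ruby
# Illustrative only; not the exact LogStash::Setting implementation.
class Setting
  def initialize(name, default, strict = true, &validator)
    @name, @strict, @validator = name, strict, validator
    @value = default
  end

  def strict?
    @strict
  end

  def validate(value)
    if @validator && !@validator.call(value)
      raise ArgumentError, "Failed to validate setting \"#{@name}\" with value: #{value.inspect}"
    end
  end

  # validate eagerly only when strict; otherwise leave it to validate_all
  def set(value)
    validate(value) if strict?
    @value = value
  end

  def validate_value
    validate(@value)
  end
end
```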
Fixes #6004
Fixes #6008
This problem is seen here:
https://github.com/elastic/logstash/issues/6019
This change will log extra information that will be helpful to the
debugging process.
Testing this out with jdbc-input 4.1.1 (cited in #6019 above), we now see
the following output:
```
~/p/l/logstash-alt (better_plugin_error_messages) $ bin/logstash -e "input { jdbc {} } output { stdout {} }"
Sending Logstash logs to /Users/andrewvc/projects/lsp/logstash-alt/logs which is now configured via log4j2.properties.
[2016-10-10T16:38:50,723][WARN ][logstash.registry ] Problems loading a plugin with {:type=>"input", :name=>#<LogStash::Registry::Plugin:0x50ec4e78 @type="input", @name="jdbc">, :path=>"logstash/inputs/jdbc", :error_message=>"uninitialized constant LogStash::PluginMixins::Jdbc::Cabin", :error_class=>NameError, :error_backtrace=>["org/jruby/RubyModule.java:2719:in `const_missing'", "/Users/andrewvc/projects/lsp/logstash-alt/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.1/lib/logstash/plugin_mixins/jdbc.rb:11:in `Jdbc'", "/Users/andrewvc/projects/lsp/logstash-alt/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.1/lib/logstash/plugin_mixins/jdbc.rb:9:in `(root)'", "org/jruby/RubyKernel.java:1040:in `require'", "/Users/andrewvc/projects/lsp/logstash-alt/vendor/bundle/jruby/1.9/gems/polyglot-0.3.5/lib/polyglot.rb:65:in `require'", "/Users/andrewvc/projects/lsp/logstash-alt/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.1/lib/logstash/inputs/jdbc.rb:1:in `(root)'", "org/jruby/RubyKernel.java:1040:in `require'", "/Users/andrewvc/projects/lsp/logstash-alt/vendor/bundle/jruby/1.9/gems/polyglot-0.3.5/lib/polyglot.rb:65:in `require'", "/Users/andrewvc/projects/lsp/logstash-alt/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.1/lib/logstash/inputs/jdbc.rb:4:in `(root)'", "/Users/andrewvc/projects/lsp/logstash-alt/logstash-core/lib/logstash/plugins/registry.rb:1:in `(root)'", "/Users/andrewvc/projects/lsp/logstash-alt/logstash-core/lib/logstash/plugins/registry.rb:59:in `lookup'", "/Users/andrewvc/projects/lsp/logstash-alt/logstash-core/lib/logstash/plugin.rb:121:in `lookup'", "org/jruby/RubyKernel.java:1079:in `eval'", "/Users/andrewvc/projects/lsp/logstash-alt/logstash-core/lib/logstash/pipeline.rb:418:in `plugin'", "(eval):8:in `initialize'", "/Users/andrewvc/projects/lsp/logstash-alt/logstash-core/lib/logstash/pipeline.rb:90:in `initialize'", "/Users/andrewvc/projects/lsp/logstash-alt/logstash-core/lib/logstash/agent.rb:195:in `create_pipeline'", "/Users/andrewvc/projects/lsp/logstash-alt/logstash-core/lib/logstash/agent.rb:87:in `register_pipeline'", "/Users/andrewvc/projects/lsp/logstash-alt/logstash-core/lib/logstash/runner.rb:256:in `execute'", "/Users/andrewvc/projects/lsp/logstash-alt/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/Users/andrewvc/projects/lsp/logstash-alt/lib/bootstrap/environment.rb:68:in `(root)'"]}
[2016-10-10T16:38:50,735][ERROR][logstash.agent ] fetched an invalid config {:config=>"input { jdbc {} } output { stdout {} }", :reason=>"Couldn't find any input plugin named 'jdbc'. Are you sure this is correct? Trying to load the jdbc input plugin resulted in this error: Problems loading the requested plugin named jdbc of type input. Error: NameError uninitialized constant LogStash::PluginMixins::Jdbc::Cabin"}
```
Fixes #6020
This PR fixes an issue where the time was calculated but no work was done
on the events. This code makes sure we have at least one event before we
start recording the time spent.
The bug was causing `events/duration_in_millis` to be out of sync with
the time spent in the filters: since `take_batch` was called in a tight
loop and could return an empty array, the duration was way off.
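In sketch form, the fix amounts to only measuring when the batch is non-empty (names other than `take_batch` and `duration_in_millis` are hypothetical stand-ins):
```ruby
# Illustrative sketch; `take_batch` and `duration_in_millis` come from the text
# above, everything else is a hypothetical stand-in for the real worker loop.
def process_one_batch
  batch = take_batch
  return if batch.empty? # an empty batch must not contribute to the duration

  started_at = Time.now
  filter_batch(batch)
  record_duration_in_millis(((Time.now - started_at) * 1000).to_i)
end
```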
Fixes: #5952
Fixes #5953
This PR changes how we modify STDOUT/STDERR in Puma: instead of
using `const_set`, we override the constants using a module.
This operation should be thread safe and will make sure STDERR output is
correctly visible.
Later, when we receive the logger, we can swap the null logger for the
real thing in a thread-safe way.
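A minimal sketch of the approach: an IO-like wrapper that starts out silent and can later be pointed at the real logger, without another `const_set` (class and method names are illustrative):
```ruby
# Illustrative only; class and method names are assumptions, not the actual
# logstash-core Puma patch.
require "thread"

class IOWrappedLogger
  def initialize(logger = nil)
    @lock = Mutex.new
    @logger = logger # starts out as a null (nil) logger
  end

  # called once the real logger becomes available
  def logger=(new_logger)
    @lock.synchronize { @logger = new_logger }
  end

  def puts(str)
    @lock.synchronize { @logger.debug(str.to_s.chomp) if @logger }
  end
  alias_method :write, :puts
  alias_method :<<, :puts
end
```
Puma then writes to an instance of this wrapper instead of the process STDERR/STDOUT, and swapping in the real logger is just an assignment under the lock.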
Fixes: #5912
Fixes #5918
Sets the default log level to INFO and updates some verbose logging to
more appropriate, less verbose log levels where it makes sense.
Closes #5735.
Closes #5893.
Fixes #5898
This PR introduces changes to make sure the API endpoints correctly
return 404 in every case. To make it work, it uses an exception that gets
picked up in the base class by the `error` handler.
Using the exception template allows the code to stay DRY and makes sure all
other errors can be returned in the same format.
Example of a 404:
```json
{
  "error": { "message": "Not Found" },
  "path": "/this-should-not-exist",
  "status": 404
}
```
The code will also set the content type and the right error code in the
response headers.
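A hedged sketch of the pattern, as a Sinatra-style base class whose `error` handler renders every API error with the same template (class names and helpers here are illustrative, not the actual Logstash API code):
```ruby
# Illustrative only; not the real LogStash API application code.
require "sinatra/base"
require "json"

class ApiError < StandardError
  def status_code; 500; end
end

class NotFoundError < ApiError
  def status_code; 404; end
  def message; "Not Found"; end
end

class BaseApi < Sinatra::Base
  set :show_exceptions, false
  set :raise_errors, false

  # every ApiError (and subclass) is rendered with the same template
  error ApiError do
    err = env["sinatra.error"]
    status err.status_code
    content_type :json
    { "error" => { "message" => err.message }, "path" => request.path, "status" => err.status_code }.to_json
  end

  # anything that doesn't match a real route becomes a 404 with the same shape
  get "/*" do
    raise NotFoundError
  end
end
```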
Fixes: #5874, #5622
Fixes #5897