This commit includes:
* A base Dockerfile and script to push to a Docker repo
* A per-build Dockerfile (derived from the base)
* Updates to the test scripts to allow for more parallel builds
* Docker wrappers for the test scripts
* Updates to the integration test README on how to run the tests manually
* Cleanup of the Java test output
* Removal of the offline tag for tests (no longer needed now that we don't use Docker-dependent services)
This commit does NOT include:
* Changes needed for the CI system to use Docker
Fixes #8223
Attempt to stabilize DLQ Integration tests
- Only tear down the ES instance once all tests are complete
- Delete indices after each test
- Tear down Logstash after each test (hook layout sketched below)
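A minimal RSpec sketch of the resulting hook layout; the Elasticsearch client calls are the real elasticsearch-ruby API, while the Logstash teardown helper is a hypothetical stand-in for the suite's fixture code:

```ruby
require "elasticsearch"

describe "DLQ integration" do
  # ES is started once for the whole suite and torn down only in after(:all).
  before(:all) { @es = Elasticsearch::Client.new(host: "localhost:9200") }

  after(:each) do
    @es.indices.delete(index: "logstash-*")  # delete indices between examples
    # logstash_service.teardown              # hypothetical helper: stop Logstash after each test
  end

  after(:all) do
    # tear the ES instance down here, once all tests are complete
  end

  it "still sees a clean cluster" do
    expect(@es.cluster.health["status"]).not_to eq("red")
  end
end
```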
Fixes #8143
Simple test for the dead letter queue integration tests:
Attempt to write invalid entries to Elasticsearch, fail, and
remove the invalid field. Verify that the mutated entry exists in
Elasticsearch.
Not for committing - has different jvm.options to improve stability
and to ensure that the tests pass in CI.
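A hedged sketch of the kind of pipeline such a test exercises (the path and the offending field name are illustrative): failed events are read back from the DLQ, the invalid field is removed, and the mutated event is re-sent to Elasticsearch:

```
input {
  dead_letter_queue {
    path => "/path/to/data/dead_letter_queue"   # illustrative path
    commit_offsets => true
  }
}
filter {
  mutate {
    remove_field => ["geoip"]   # hypothetical field that caused the mapping failure
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```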
Fixes #7882
Fixes #8026
Change DeadLetterQueueReader so that if a missing segment file is
encountered at startup, the next valid entry is used instead.
Fixes #7433
Fixes #7457
Work done by @guyboertje and @ph
Since JRuby 1.7.25 is now EOL, we are migrating Logstash to use JRuby 9k and JDK8 only.
Not much needed updating to make this work; it was mostly a drop-in replacement for the previous version.
The major point was the change in the implementation of Time in JRuby: JRuby now uses `java.time`
instead of Joda-Time, which allows JRuby to have nanosecond precision on time objects.
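For example, under JRuby 9k:

```ruby
t = Time.now
puts t.nsec   # nanosecond component, e.g. 123456789, courtesy of java.time
puts t.usec   # the microsecond-granularity view of the same instant
```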
Add an initial set of metrics for the dead letter queue.
Metrics are supplied under the pipeline in the following format:
"dead_letter_queue": {
"queue_size_in_bytes": ...,
}
Metrics are populated via a PeriodicPoller
Also fixed up the calculation of currentQueueSize to take account
of version headers, which were previously being skipped.
Additionally, whether the DLQ is enabled and, if so, its path
are supplied under the pipelines API endpoint.
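A hedged sketch of how this surfaces in the HTTP API (the stats field name is from this commit; the info-endpoint field names, endpoint paths, and values are assumptions and may differ by version):

```
GET /_node/stats/pipeline
  "dead_letter_queue": {
    "queue_size_in_bytes": 1337
  }

GET /_node/pipelines
  "dead_letter_queue_enabled": true,
  "dead_letter_queue_path": "/path/to/data/dead_letter_queue"
```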
Resolves #7287
Fixes #7338
The ability to change the log level as described at https://www.elastic.co/guide/en/logstash/current/logging.html#_logging_apis appears to no longer work. The root cause of this issue is that the log4j2 context we initialize is different from the log4j2 context that we are setting via the API. The fix here is to mirror the functionality of the org.apache.logging.log4j.core.config.Configurator.setLevel Java method, but operate on the logging_context initialized in our code via JRuby.
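A minimal JRuby sketch of that mirroring, where `logging_context` is a hypothetical accessor for the log4j2 LoggerContext our code initialized (the log4j2 calls themselves match what Configurator.setLevel does internally):

```ruby
java_import org.apache.logging.log4j.Level
java_import org.apache.logging.log4j.core.config.LoggerConfig

def set_level(level_name, logger_path = "")
  level = Level.value_of(level_name)                  # e.g. "DEBUG" -> Level.DEBUG
  config = logging_context.get_configuration
  logger_config = config.get_logger_config(logger_path)
  if logger_config.get_name == logger_path
    logger_config.set_level(level)                    # the logger has its own config: update it
  else
    # the logger only inherits a config: give it a specific one, as Configurator.setLevel does
    config.add_logger(logger_path, LoggerConfig.new(logger_path, level, true))
  end
  logging_context.update_loggers                      # propagate the change to live loggers
end
```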
Fixes #7277
Fixes #7321
* add multi_local source for multiple pipelines
* introduce pipelines.yml (see the example below)
* introduce PipelineSettings class
* support reloading of pipeline parameters
* fix pipeline API call for `_node/pipelines`
* inform the user that pipelines.yml is ignored if -e or -f is used
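A minimal pipelines.yml example (ids, paths, and worker counts are illustrative):

```yaml
- pipeline.id: apache
  path.config: "/etc/logstash/conf.d/apache.conf"
  pipeline.workers: 2
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/syslog.conf"
```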
In some tests against the metrics API it's possible the values aren't there yet,
making the assertion fail. By default Stud.try only catches StandardError,
but RSpec throws an RSpec::Expectations::ExpectationNotMetError on a failed
assertion, and this exception inherits directly from Exception.
This commit explicitly adds that exception to the list of exceptions Stud.try retries on.
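After this change, the retry loops look roughly like this (`metrics_api` is a hypothetical helper wrapping GET /_node/stats):

```ruby
Stud.try(20.times, [StandardError, RSpec::Expectations::ExpectationNotMetError]) do
  stats = metrics_api.fetch("events")   # hypothetical helper
  expect(stats["out"]).to be > 0        # a failed expectation now triggers a retry, not an abort
end
```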
Fixes #7177
Since the plugin's database can be updated, we cannot do a strict
assertion on the geoip lat/long values; instead we check them against
the range of possible, valid latitudes and longitudes.
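A sketch of the relaxed assertion, using the full valid ranges (`event` stands in for however the test obtains the filtered event):

```ruby
expect(event.get("[geoip][latitude]")).to  be_between(-90.0, 90.0)
expect(event.get("[geoip][longitude]")).to be_between(-180.0, 180.0)
```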
Fixes #7119
Fixes #7122
Because the metrics subsystem may not be ready yet when the tests ask for
the values, the test code may generate a NoMethodError when accessing nested
hashes.
The NoMethodError exception aborts the try mechanism instead of retrying.
This PR rescues potential exceptions and adds more expectations to
protect the try block.
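A sketch of a guarded try block (`metrics_api` is again a hypothetical helper): the intermediate expectations turn a would-be NoMethodError on a missing nested hash into a retriable expectation failure:

```ruby
Stud.try(20.times, [StandardError, RSpec::Expectations::ExpectationNotMetError]) do
  stats = metrics_api.node_stats                 # hypothetical helper
  expect(stats["pipelines"]).not_to be_nil       # guard: retry instead of nil.[] blowing up
  events = stats["pipelines"]["main"]["events"]
  expect(events).not_to be_nil
  expect(events["out"]).to be > 0
end
```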
Fixes #7074
Some integration spec tests fail when run
individually due to missing required modules.
This fixes that by ensuring all specs and helpers
require the needed modules.
Fixes #7037
Fixes #7038
Logstash's plugin manager will now follow the proxy configuration from the environment.
If you configure `http_proxy` and `https_proxy`, the manager will use this information for all Ruby HTTP
connections and will also pass it down to Maven.
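For example (the proxy address is illustrative):
export http_proxy="http://proxy.internal:3128"
export https_proxy="http://proxy.internal:3128"
bin/logstash-plugin install logstash-input-beats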
Fixes #6619, #6528
Fixes #6825
This PR makes the smoke test for multiple instances of Logstash work
and also changes the assertions of a few tests to make them more robust.
Instead of a `try`, I assert the creation of a file generated by a fixed generator.
Fixes #6707
Fixes #6709
re #6508.
- removed `acked_count`, `unacked_count`, and migrated `unread_count` to
top-level `events` field.
- removed `current_size_in_bytes` info from queue node stats
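A hedged sketch of the resulting queue section of the node stats (sibling fields elided; `events` is the migrated `unread_count`):

```
"queue": {
  "type": "persisted",
  "events": 0,
  ...
}
```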
Fixes #6510
This new command replaces the old workflow of `pack`, `unpack`, and `install --local`, and wraps all the logic into one uniform way of installing plugins.
The work is based on the flow developed for installing an x-pack inside Logstash: when you call prepare-offline-pack, the specified plugins and their dependencies are packaged in a zip,
and this zip can be installed with the same flow as the pack.
Definitions:
Source Logstash: where you run the prepare-offline-pack.
Target Logstash: where you install the offline package.
PROS:
- If you install a .gem in the source Logstash, the .gem and its dependencies will be bundled.
- The install flow doesn't need access to the internet.
- Nothing special needs to be set up in the target Logstash environment.
CONS:
- There is one minor drawback: plugins need to have their JARs bundled with them for this flow to work; this is currently the case for all the official plugins.
- The source Logstash needs access to the internet when you install plugins before packaging them.
Usage examples:
bin/logstash-plugin prepare-offline-pack logstash-input-beats
bin/logstash-plugin prepare-offline-pack logstash-filter-jdbc logstash-input-beats
bin/logstash-plugin prepare-offline-pack logstash-filter-*
bin/logstash-plugin prepare-offline-pack logstash-filter-* logstash-input-beats
How to install:
bin/logstash-plugin install file:///tmp/logstash-offline-plugins-XXXX.zip
Fixes #6404
* Add feature flag support for RATS
Adds feature flag support for RATS integration tests. The first feature to use it
is persistent_queues. You can enable a FF in travis by using the travis matrix
and adding an env variable called FEATURE_FLAG=<feature_name>.
To use a feature flag, you need a logstash.yml file which has the feature turned on.
Unfortunately --path.settings needs the entire dir where logstash.yml is located, not
just the logstash.yml file, so we need to check in the fixture which has the feature yml
and the log4j config.
For example, you can now set FEATURE_FLAG=persistent_queues, which means travis will
run the tests twice, once with this feature enabled and once without it. When the feature is enabled,
the logstash.yml located in qa/integration/fixtures/persistent_queues is picked up whenever LS is started in the tests.
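A hedged sketch of the corresponding travis matrix entries (only the FEATURE_FLAG variable comes from this change; the other variable and the layout are illustrative):

```yaml
env:
  - INTEGRATION=true
  - INTEGRATION=true FEATURE_FLAG=persistent_queues   # same suite, PQ-enabled logstash.yml
```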
A pack in this context is a *bundle* of plugins that can be distributed outside of rubygems; it is similar to what ES and Kibana are doing, and
the user interface is modeled after them. See https://www.elastic.co/downloads/x-pack
**Do not confuse it with the `bin/logstash-plugin pack/unpack` commands.**
- it contains one or more plugins that need to be installed
- it is self-contained, with the gems and the needed jars
- it is distributed as a zip file
- the file structure needs to follow some rules
A pack can be installed in three ways:
- As a reserved name on the elastic.co download HTTP server
  - `bin/plugin install logstash-mypack` will check on the download server whether a pack for the current Logstash version exists; if so it will be downloaded, and if it doesn't exist we fall back to rubygems.
  - The file on the server will follow this convention: `logstash-mypack-{LOGSTASH_VERSION}.zip`
- As a fully qualified URL
  - `bin/plugin install http://test.abc/logstash-mypack.zip`; if it exists it will be downloaded and installed, and if it does not we raise an error.
- As a local file
  - `bin/plugin install file:///tmp/logstash-mypack.zip`; if it exists it will be installed.
Fixes #6168
This adds two new fields, `id` and `name`, to the base metadata for API requests.
These fields are now returned at all API endpoints by default.
The `id` field is the persisted UUID; the `name` field is the custom name
the user has passed in (defaulting to the hostname).
I renamed `node_uuid` and `node_name` to just `id` and `name` to be
in line with Elasticsearch.
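A trimmed sketch of the base metadata as it now appears on any API response (the surrounding fields and all values are illustrative):

```
{
  "host": "example.local",
  "version": "5.2.0",
  "id": "0b52e6e4-...",
  "name": "example.local",
  ...
}
```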
This also fixed a broken test double in `webserver_spec.rb` that was
using doubles across threads, which created hidden errors.
Fixes #6224