Change the data and log directories to be in /tmp/ls_integrations
(matching the kafka service folder structure), and delete
them in the teardown script.
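A minimal sketch of the cleanup step, assuming the teardown is driven from Ruby (the actual teardown script may well be shell; the path is the shared root named above):

  require "fileutils"

  # Remove the per-run data and log directories under the shared root.
  FileUtils.rm_rf("/tmp/ls_integrations")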
Fixes #8528
This commit includes:
* A base Dockerfile and script to push to a Docker repo
* A per-build Dockerfile (derived from the base)
* Updates to the test scripts to allow for more parallel builds
* Docker wrappers for the test scripts
* Updates to the integration test README on how to run the tests manually
* Cleanup of the Java test output
* Removal of the offline tag for tests (no longer needed now that we don't use Docker-dependent services)
This commit does NOT include:
* Changes needed for the CI system to use Docker
Fixes #8223
This is in preparation for a Docker-based test run approach; removing the dependent Docker containers avoids Docker-in-Docker requirements.
Fixes #8211
Attempt to stabilize the DLQ integration tests (see the sketch after this list):
- Tear down the ES instance only once all tests are complete
- Delete indices after each test
- Tear down Logstash after each test
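A minimal RSpec sketch of that ordering, with hypothetical es_client, teardown_es_instance, and logstash_service helpers:

  describe "DLQ integration" do
    after(:all) { teardown_es_instance }              # ES comes down only once, at the end
    after(:each) do
      es_client.indices.delete(index: "logstash-*")   # drop test indices between examples
      logstash_service.teardown                       # restart Logstash fresh for the next test
    end
    # ... examples ...
  end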
Fixes #8143
Simple test for the dead letter queue integration tests (sketched below):
attempt to write invalid entries to Elasticsearch, fail, and
remove the invalid field. Verify that the mutated entry exists in Elasticsearch.
Not for committing - uses different jvm.options to improve stability
and ensure that the tests pass in CI.
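A rough sketch of the flow, with hypothetical helpers and field names - index an entry that Elasticsearch rejects, let the DLQ pipeline strip the offending field, then verify the mutated entry landed:

  it "replays a rejected event once the invalid field is removed" do
    # an event carrying "bad_field" is rejected by ES and routed to the DLQ;
    # the DLQ pipeline mutates it (remove_field => "bad_field") and re-indexes it
    Stud.try(10.times, [StandardError, RSpec::Expectations::ExpectationNotMetError]) do
      result = es_client.search(index: "logstash-*", q: "message:sample")
      expect(result["hits"]["total"]).to be > 0
      expect(result["hits"]["hits"].first["_source"]).not_to include("bad_field")
    end
  end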
Fixes #7882
Fixes #8026
Change DeadLetterQueueReader so that if a missing segment file is
encountered at startup, the next valid entry will be used instead (sketched below).
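A Ruby sketch of the intended behavior (the real reader is Java, and these names are hypothetical): when the segment we last read from no longer exists, fall forward to the first later segment that does:

  # Given lexically sorted segment paths, pick the next existing segment
  # rather than raising when the previously-current one was deleted.
  def resolve_start_segment(sorted_segments, last_segment)
    return last_segment if File.exist?(last_segment)
    sorted_segments.find { |path| path > last_segment }
  end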
Fixes #7433
Fixes #7457
Work done by @guyboertje and @ph
Since JRuby 1.7.25 is now EOL, we are migrating Logstash to use JRuby 9k and JDK 8 only.
Not much needed updating to make this work; it was mostly a drop-in replacement for the previous version.
The major point was the change in the implementation of Time: JRuby now uses `java.time`
instead of Joda-Time, which gives JRuby nanosecond precision on time objects.
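For example, under JRuby 9k a Time object now carries a full nanosecond component (under JRuby 1.7 / Joda-Time it was capped at millisecond precision):

  t = Time.now
  puts t.nsec  # e.g. 123456789 nanoseconds, rather than a
               # millisecond-truncated value like 123000000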
Add an initial set of metrics for the dead letter queue.
Metrics are supplied under the pipeline in the following format:
"dead_letter_queue": {
"queue_size_in_bytes": ...,
}
Metrics are populated via a PeriodicPoller.
Also fixed the calculation of currentQueueSize to take account
of version headers, which were previously being skipped.
Additionally, whether the DLQ is enabled and, if so, the path
of the DLQ are supplied under the pipelines API endpoint.
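A hedged example of reading the new metric over the monitoring API, assuming a local instance on the default port, a pipeline named "main", and the pipelines stats endpoint shape of this era:

  require "json"
  require "net/http"

  stats = JSON.parse(Net::HTTP.get(URI("http://localhost:9600/_node/stats/pipelines")))
  puts stats.dig("pipelines", "main", "dead_letter_queue", "queue_size_in_bytes")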
Resolves #7287
Fixes #7338
The ability to change the log level as described at https://www.elastic.co/guide/en/logstash/current/logging.html#_logging_apis appears to no longer work. The root cause is that the log4j2 context we initialize is different from the log4j2 context that we set via the API. The fix is to mirror the functionality of the org.apache.logging.log4j.core.config.Configurator.setLevel Java method, but use the logging_context initialized in our code via JRuby instead.
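A minimal JRuby sketch of that mirroring, assuming logging_context holds the org.apache.logging.log4j.core.LoggerContext created at startup (this simplified version skips the step in Configurator.setLevel that creates a fresh LoggerConfig when only an ancestor's config exists):

  java_import org.apache.logging.log4j.Level

  def set_level(logging_context, logger_name, level_name)
    config = logging_context.get_configuration
    logger_config = config.get_logger_config(logger_name)
    logger_config.set_level(Level.value_of(level_name))
    logging_context.update_loggers  # push the new level out to live loggers
  end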
Fixes #7277
Fixes #7321
* add multi_local source for multi pipelines
* introduce pipelines.yml (example after this list)
* introduce PipelineSettings class
* support reloading of pipeline parameters
* fix pipeline api call for _node/pipelines
* inform the user that pipelines.yml is ignored if -e or -f is specified
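An illustrative pipelines.yml with two pipelines (ids and paths are hypothetical):

  - pipeline.id: apache-logs
    path.config: "/etc/logstash/apache.conf"
    pipeline.workers: 2
  - pipeline.id: syslog
    path.config: "/etc/logstash/syslog.conf"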
In some tests against the metrics API it's possible the values aren't there yet,
making the assertion fail. By default Stud.try only catches StandardError,
but RSpec raises an RSpec::Expectations::ExpectationNotMetError on a failed
assertion, and this exception inherits directly from Exception.
This commit explicitly adds this exception to the list of Stud.try exceptions.
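The resulting retry pattern in the specs looks roughly like this (endpoint and metric path are illustrative):

  Stud.try(10.times, [StandardError, RSpec::Expectations::ExpectationNotMetError]) do
    stats = JSON.parse(Net::HTTP.get(URI("http://localhost:9600/_node/stats")))
    expect(stats["process"]["cpu"]["percent"]).not_to be_nil
  end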
Fixes #7177
Since the plugin's database can be updated, we cannot do a strict
assertion on the geoip lat/long values; instead we assert that they fall
within the range of valid latitudes and longitudes.
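Concretely, the assertion becomes a range check rather than an exact match (field paths as produced by the geoip filter):

  expect(event.get("[geoip][latitude]")).to be_between(-90.0, 90.0).inclusive
  expect(event.get("[geoip][longitude]")).to be_between(-180.0, 180.0).inclusive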
Fixes #7119
Fixes #7122