As a follow-up to #15861, this commit splits the current unit tests step
of the Windows JDK matrix pipeline into two steps that run the
Java and Ruby unit tests separately.
Closes https://github.com/elastic/logstash/issues/15566
(cherry picked from commit c0c213d17e)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
PR #15900 missed a few more places where Logstash is installed without
adding a working minimal pipeline config.
This commit fixes that and stabilizes all acceptance tests, thus
minimizing the need for time-consuming BK retries of the corresponding
steps.
Relates #15900
Relates https://github.com/elastic/logstash/issues/15784
(cherry picked from commit 54f73e5d22)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit tightens the checks on the status
output of the Logstash OS service to specifically
scan for `org.logstash.Logstash` rather than
only the JDK path.
The reason is that the startup script first runs
an options parser and then the Logstash process
itself, both referencing the JDK path.
Closes https://github.com/elastic/ingest-dev/issues/2950
(cherry picked from commit eedccea33f)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
Update env2yaml to use a go.mod file instead of relying on disabling Go modules; otherwise, building with Go 1.22 will fail in the future.
This change also uses the golang image directly to build the binary, removing the need for an intermediate image.
(cherry picked from commit 5c3e64d591)
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
This commit fixes the startup of the Logstash service during packaging
tests by adding a minimal pipeline config. Without it, the service was
flapping between starting and stopping, causing test flakiness.
Relates https://github.com/elastic/logstash/issues/15784
(cherry picked from commit b66dc7f460)
Similarly to #15874, this commit adds retries
to another group, acceptance/docker, to reduce
build noise from transient issues.
(cherry picked from commit 2fc3f4c21f)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit adds Debian 12 (Bookworm) to the
Linux JDK matrix pipeline and to the compatibility phase of the
exhaustive pipeline.
Relates https://github.com/elastic/ingest-dev/issues/2871
(cherry picked from commit fedcf58c48)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit allows running the Java and Ruby tests separately on Windows, i.e. the same way as we currently do on Unix (unit_tests.sh), via a CLI argument.
If no argument is supplied, both test suites are run (as before).
The wrapper script is also rewritten from the old batch-style script to PowerShell.
This work allows us to split the existing Windows CI job into separate steps in a subsequent PR, as we currently do on Linux.
Relates: https://github.com/elastic/logstash/issues/15566
(cherry picked from commit 8ac55184b8)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit adds retries to the steps of the Linux and Windows JDK matrix
pipelines to avoid notification noise due to transient network
errors.
(cherry picked from commit 3b747d86b8)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
As a follow-up to #15787, we also add Buildkite retries for the
exhaustive pipeline / compatibility group steps to prevent
failures due to flakiness.
(cherry picked from commit 88a32cca81)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit fixes IT failures that frequently occur after
version bumps due to missing unified release snapshot builds for
the new version.
This commit uses project specific DRA snapshot URLs for ES and Filebeat
in all cases apart from release builds.
(cherry picked from commit d74fea4b55)
The current mechanism of discovering the latest released version per
branch (via ARTIFACTS_API) isn't foolproof near the time of a new
release, as it may pick a version that hasn't been released
yet. This leads to failures[^1] of the packaging upgrade tests, as we
attempt to download a package file that doesn't exist yet.
This commit switches to an API that is more up to date regarding
the release version truth.
[^1]: https://buildkite.com/elastic/logstash-exhaustive-tests-pipeline/builds/125#018d319b-9a33-4306-b7f2-5b41937a8881/1033-1125
(cherry picked from commit 15e19a96c2)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit makes the generated DRA URL easily accessible via
a Buildkite annotation.
Closes https://github.com/elastic/ingest-dev/issues/2608
(cherry picked from commit c5cb1fe2ed)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit fixes the flaky IT test:
`install non bundle plugin successfully installs the plugin with debug enabled`
by being a bit more lenient with the output which can get garbled by Bundler.
Closes #15801
(cherry picked from commit fc09ad4112)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit adds the Docker acceptance tests in the acceptance phase
of the exhaustive tests pipeline.
- Relates: https://github.com/elastic/ingest-dev/issues/1722
(cherry picked from commit fca1fccb66)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
There is occasional flakiness, mainly with the IT tests, requiring us to
manually retry such failures when we raise PRs (or run the first
group of the exhaustive suite, which runs the same steps).
This commit adds up to 3 retries for all the steps of the PR
pipeline.
(cherry picked from commit 739e8a3ef0)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit enables running the exhaustive tests Buildkite pipeline
(i.e. the equivalent of the `main` Jenkins tests); the trigger is
code events, i.e. direct pushes, merge commits and creation of new branches.
CI is skipped if changes are only related to files under `docs/`.
This commit pins the `childprocess` gem to version `4`, since version `5.0.0`
(https://github.com/enkessler/childprocess/pull/175) seems to have broken JRuby support for spawning.
Closes https://github.com/elastic/logstash/issues/15757
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
(cherry picked from commit 9f1d55c6a2)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
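A minimal sketch of the kind of pin described above; the exact Gemfile location and version constraint used in the repository may differ:
```ruby
# Gemfile — keep childprocess on the 4.x series; 5.0.0 appears to break JRuby spawning
gem "childprocess", "~> 4.0"
```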
This commit adds annotations for Java unit tests (in the pull request pipeline) helping
identify failing unit tests quickly.
(cherry picked from commit 286088915f)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
PR #15729 missed the input step. As a result, when the job is triggered
the steps are executed, but the pause icon still shows in the job,
requiring a manual unblock[^1].
This commit also skips the input step when the job is triggered from
the scheduler pipeline.
[^1]: https://buildkite.com/elastic/logstash-linux-jdk-matrix-pipeline/builds/86
(cherry picked from commit a8b64a32e9)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
The recent PRs #15668 and #15705 refactored jobs with a custom schedule
to leverage a centralized trigger pipeline.
An unexpected side effect of this is that the conditional for the wait
step doesn't work anymore.
This commit skips the wait step when the JDK matrix pipelines get triggered
from another pipeline.
(cherry picked from commit 82ac474b13)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit is a pre-requisite for adding unit + IT tests in a
dedicated phase of the Exhaustive tests pipeline.
It refactors the tests currently used by PR jobs, so that they become
reusable.
(cherry picked from commit 03d7b59f2a)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
In DLQ unit tests the DLQ writer is sometimes started explicitly without starting the segment flushers. In such cases the test logs contain exceptions which could suggest that the test is failing silently.
Avoid invoking the scheduledFlusher's shutdown when it hasn't been started (this situation occurs only in tests).
(cherry picked from commit eddd91454f)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
This commit adds the compatibility tier for the Exhaustive tests suite.
Specifically, we introduce two new groups (running in parallel) for Linux and Windows compat tests.
Linux picks one OS per family from [^1], and Windows likewise picks one of the three available choices from the same file.
We also support a manual override, if the user chooses to, by setting `LINUX_OS` or `WINDOWS_OS` as env vars in the Buildkite build prompt (in this case there is no randomization, and only one OS can be defined for Linux and Windows respectively).
For example:
```
LINUX_OS=rhel-9
WINDOWS_OS=windows-2016
```
Relates:
- https://github.com/elastic/ingest-dev/issues/1722
[^1]: 4d6bd955e6/.buildkite/scripts/common/vm-images.json
(cherry picked from commit d42b938f81)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit is a manual backport of #15628 to the 7.17 branch.
Since 7.17 uses Java 11, we switch to the Adoptium variant, which is
still receiving updates (latest available is `11.0.21+9` as of now).
AdoptOpenJDK is nowadays Adoptium, so we replace it in favor of
the latter, which is actively maintained.
Relates https://github.com/elastic/logstash/pull/15628
(cherry picked from commit 6446bba962)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit adds a skeleton Buildkite pipeline for the Exhaustive tests
suite.
(cherry picked from commit db50983ab5)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
So far we've been using images from the -qa GCP image project throughout
the development of the Logstash Linux JDK matrix pipeline for quicker
iteration.
As we have scheduled weekly builds of those images that promote to
prod[^1] we can now switch to the prod version of the GCP images.
[^1]: https://buildkite.com/elastic/ci-vm-images/builds/2888
Relates https://github.com/elastic/ingest-dev/issues/1725
(cherry picked from commit e259e04e53)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
Add the missing yaml-language-server definition to Buildkite pipeline files
(static and dynamically generated) for consistency and to ease spotting
errors with editors.
The last part of the Logstash JDK matrix CI migration from Jenkins to
Buildkite is AmazonLinux 2023.
While we have a working image[^1], this is the only step that requires
an agent that runs on AWS.
This commit refactors the builder to support GCP or AWS agents depending
on the OS.
[^1]: https://github.com/elastic/ci-agent-images/pull/441
(cherry picked from commit 8fa3bd0d7f)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
Fix typo for image name of Rocky Linux 8 for JDK matrix jobs.
(cherry picked from commit ce63ea4a51)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit adds JDK matrix Buildkite pipelines for
Windows 2022, 2019 and 2016.
It also makes the groups easier to read (on both Linux and Windows
pipelines) by removing the os-jdk prefix from the job labels.
`testDLQWriterFlusherRemovesExpiredSegmentWhenCurrentHeadSegmentIsEmpty`
fails on Windows Buildkite agents; this is a test issue tracked in
https://github.com/elastic/logstash/issues/15562.
Relates:
- https://github.com/elastic/logstash/pull/15539
- https://github.com/elastic/ingest-dev/issues/1725
(cherry picked from commit 0ede19a0e1)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit is the first part of the migration of JDK matrix tests
from Jenkins to Buildkite. There will be two separate pipelines, for
Linux and Windows.
Linux is currently limited to Ubuntu 22.04 and 20.04, but
additional operating systems will be added outside of the Logstash
repository seamlessly through additional VM images.
Steps are created dynamically and the underlying script is meant to be
common for Linux and Windows. Windows is currently a stub and
will be added in a follow up PR.
Relates:
- https://github.com/elastic/ingest-dev/issues/1725
- https://github.com/elastic/ci-agent-images/pull/424
(cherry picked from commit 956bf483f2)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit splits the generic Buildkite pipelines introduced
in #15520 for JDK tests to separate pipelines for Linux and Windows.
(cherry picked from commit 07147b3e40)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit is the follow-up PR to #15466, which migrates the
remaining aarch64 acceptance test Jenkins jobs to Buildkite.
Relates:
- #15466
- https://github.com/elastic/ingest-dev/issues/1724
(cherry picked from commit c384190718)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
PR #15466 skipped the Java unit tests because on the `main` and `8.11`
branches they attempted to run Sonar scans (which are only meant to
run for PRs).
This commit re-enables the Java unit tests, taking advantage of #15486
to disable the Sonar scan part of the test suite.
(cherry picked from commit 16da966290)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This commit is the first part of migrating the aarch64 Jenkins
jobs to Buildkite. It adds a group of exhaustive test steps in the
aarch64 pipeline.
The Java unit tests are temporarily disabled as they run SonarQube
scans which need to be associated with pull requests.
Relates:
https://github.com/elastic/ingest-dev/issues/1724
(cherry picked from commit 36656de4f0)
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
This is a backport of the initial Pull Request pipeline for Buildkite.
While currently we haven't migrated all PR jobs from Jenkins, this is needed so PRs against non `main` branches don't fail this step (also giving us the possibility to test functionality against non `main` branches).
Relates:
- #15402
- #15413
- #15415
- #15421
- https://github.com/elastic/ingest-dev/issues/1721
## Release notes
[rn:skip]
This is a backport of the DRA pipeline and related scripts from:
- #15366
- #15365
- #15356
- #15352
- #15344
- #15343
- #15337
- #15312
Note that it's a manual backport because some PRs (e.g. #15312) contain files (`catalog-info.yaml`) that should only
live on the `main` branch.
This commit adds a null guard when getting the native thread while constructing the pipeline report
Fix: #15300
(cherry picked from commit cd78558121)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
* upgrades logstash-mixin-aws to 5.1.0
* removes unused aws-sdk-v1 dependency.
* upgrades json version to 2.6.3
* upgrades fpm to 1.14.1 (previously on 1.13.x versions).
This commit adds the missing method `worker_threads_draining?` to the Ruby pipeline; it was added to the Java pipeline in #13934 for a log message improvement
Fixed: #15010
On the ARM architecture UBI8 Docker images aren't created, so avoid creating empty tar.gz files.
(cherry picked from commit 7a39d97055)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
* [DRA] Force docker save to write directly to a file instead of piping to another command, which loses the execution error code
(cherry picked from commit 2e5e49d10d)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
During stalled shutdowns while waiting for in-flight batches to complete,
our shutdown watcher emits helpful information about what work is in flight,
including the actual threads and plugins that are still executing.
Since ~6.3.0, the `inflight_count` metric in this log message has always
been `0`, in part because of two somewhat-overlapping bugs:
- elastic/logstash#8987 and elastic/logstash#9056 (7.0, 6.3) changed
the `inflight_batches` map provided by the queue read clients to index
batches by native thread id, but pipeline reporter continued to
attempt to extract by ruby thread object. Because it does not find
the thread in the "batch map", it reports zero.
- elastic/logstash#9111 (7.0, 6.3) changed the _value_ stored in
the `inflight_batches` map provided by a new common queue read client
from an object responding to `#size` to a java `QueueBatch` which
does not respond to `size`. If our pipeline reporter had been able to
look up the queue batch, it would have failed with a `NoMethodError`.
We resolve the issue by (1) extracting the batch from our "batch map" using
the native thread id and (2) safely extracting the value from a `QueueBatch`
before falling through to `Object#size` or 0.
(cherry picked from commit 4941c25f32)
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
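A hedged Ruby sketch of the two-part fix described above. This is not the actual PipelineReporter code: the `native_thread_id` accessor and the batch method names are assumptions used for illustration only.
```ruby
# Look up the in-flight batch by the worker's *native* thread id (the key used
# by the queue read clients since 6.3/7.0), then read its size without assuming
# the stored value responds to #size.
def inflight_count_for(worker_thread, batch_map)
  batch = batch_map[worker_thread.native_thread_id]  # hypothetical accessor
  return 0 if batch.nil?

  if batch.respond_to?(:filtered_size)   # e.g. a Java QueueBatch
    batch.filtered_size
  elsif batch.respond_to?(:size)         # e.g. a plain Ruby collection
    batch.size
  else
    0
  end
end
```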
Note this is a manual cherry-pick backport, as it did not backport cleanly. This backport includes some changed/additional code compared to
the original PR:
* Added an additional null check in seekToNextEvent that was previously not present in 7.17, but is required for this PR
* The filter is added, but the surrounding code is slightly different; the intent is the same.
**Backport PR #14605 to 8.5 branch, original message:**
---
Fix DLQ failing to start due to reading a 1-byte file
This commit ignores DLQ files that contain only the version number. These files have no content and should be skipped.
Mapping 1-byte DLQ files to a buffer causes `java.lang.IllegalArgumentException: newPosition < 0: (-1 < 0)`,
and the user is unable to start the pipeline using the dead_letter_queue input.
- [x] manually test the 1 byte file and can start Logstash
follow the reproducer of #14599
- Fixed: #14599
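A hedged Ruby sketch of the skip condition described above; the real fix lives in the Java DLQ reader, and the helper name below is illustrative:
```ruby
# A DLQ segment file starts with a single version byte, so a 1-byte file holds
# no events and should be skipped instead of being mapped into a buffer.
VERSION_HEADER_SIZE = 1

def segment_has_events?(path)
  File.size(path) > VERSION_HEADER_SIZE
end
```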
* [Doc] Document the usage of LS_JAVA_OPTS environment variable
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
(cherry picked from commit 9242105c3c)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Starting with Log4j2 2.6, if the MessageFactory associated with a Logger instance
is not a subclass of MessageFactory2, it's wrapped with MessageFactory2Adapter.
This triggers a noisy Log4j warning, for example when a class subclasses LogStash::Plugin, reporting that
a Logger is not associated with the default MessageFactory (LogstashMessageFactory) every time a Plugin subclass is instantiated.
This commit adapts LogstashMessageFactory to implement MessageFactory2 instead of the older MessageFactory to avoid the wrapping with the adapter class.
(cherry picked from commit 05bfaff799)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Fix the docker image building and upload process:
* Builds ubi8 on x86_64.
* Uploads ironbank and ubi8 context files from x86_64 only.
(cherry picked from commit 2e8bd20cf5)
Co-authored-by: Andres Rodriguez <andres.lazo@elastic.co>
Updates the dra_docker.sh script to also upload docker-build-context.tar.gz files
(cherry picked from commit 6ad5690a8c)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Ensures the DRA build script surfaces a rake error, instead of allowing the build to continue.
This ensures that the build doesn't continue if any of the steps fails.
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
(cherry picked from commit 17d0bb5ffb)
* Generalize docker image building
* Rename and add ability to pass the architecture as a parameter
* Handle ARCH env variable
(cherry picked from commit 6ba5cc112f)
Co-authored-by: Andres Rodriguez <andres.lazo@elastic.co>
Version 7.17 doesn't generate Darwin aarch64 artifacts. Don't download these artifacts from the GCS bucket, given that we don't build Darwin for that release.
(cherry picked from commit 9c7b7b7454)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
* DRA: Improve shell scripts for debuggability (#14654). The changes remove some code duplication by introducing a common file that can be sourced between all scripts. It also improves debuggability by adding better messages.
* Fix dra_common sourcing (#14657). Fixes the source of dra_common.sh. It will now first check the directory of the file from which this dra_common.sh script is being called. This allows the common script to be sourced regardless of where the sourcing script is being called from.
* Fix sourcing on dra_upload (#14659). Fix sourcing on dra_upload.sh
* DRA: Handle env variables better
* Moved the addition of SNAPSHOT suffix to the version after the VERSION_QUALIFIER
* Fix a badly assigned variable: the version qualifier has to be appended also to PLAIN_STACK_VERSION and not to RELEASE_VER
Co-authored-by: andsel <selva.andre@gmail.com>
(cherry picked from commit db6a7bc619)
The version passed to the release-manager doesn't need the SNAPSHOT particle because it is already handled by --workflow="snapshot"; if included, it makes the release manager search for artifacts named 8.5.0-SNAPSHOT-SNAPSHOT
(cherry picked from commit b8792107ad)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Do not move artifacts out of the build/ folder, to ensure the upload doesn't fail.
(cherry picked from commit 363adad3b6)
Co-authored-by: Andres Rodriguez <andres.lazo@elastic.co>
Avoid leveraging local git commands to guess the local branch; switch to listing the branches and checking against the stack version. If it doesn't exist, it's main.
(cherry picked from commit 11ecaaea5a)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
* specs: detangle out-of-band pipeline initialization
Our API tests were initializing their pipelines-to-test in an out-of-band
manner that prevented the agent from having complete knowledge of the
pipelines that were running. By providing a ConfigSource to our Agent's
SourceLoader, we can rely on the normal pipeline reload behaviour to ensure
that the agent fully-manages the pipelines in question.
* api: do not emit pipeline that is not fully-initialized
(cherry picked from commit de49eba22a)
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Switch branch selector from major.minor to read the current branch name
(cherry picked from commit ff8afb2293)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Handle the WORKFLOW_TYPE environment variable used to select the kind of artifacts to generate and consequently adapt the version name.
If WORKFLOW_TYPE has a value other than the empty string it's assumed to be snapshot, so snapshot artifacts are generated; otherwise the release ones are.
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
(cherry picked from commit d8d690079a)
Update DRA scripts to use the version qualifier in stack_version variable for alpha and beta builds
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
(cherry picked from commit 3075029b27)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
* Extract the branch name passed to release-manager from the version and not from the current git branch
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
(cherry picked from commit d3b92ec20c)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
This is **not a clean backport** of the following PRs to the `8.5` branch:
- Introduces a bash script to build all the artifacts and dependencies report. #14522
- Save docker images as tar.gz files and move the CSV dependency report to the path that's expected by release-manager #14552
- Split ci scripts into ARM and x86 ones #14567
- Uses the gsutil tool to upload all the generated artifacts into an intermediate collector bucket. #14568
- Collect all artifacts created and upload to GCP with release-manager #14584
When run in debug mode, #invoke was returning an instance of UI::Shell rather
than a string, causing the plugin to crash when `<<` was called on it.
This commit ensures that a string is returned regardless of whether debug is set.
Fixes: #14131
(cherry picked from commit 02c2aec710)
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
SettingsImpl.checkpointRetry is hardcoded to false in the builder. Prior to this change, users were unable to set queue.checkpoint.retry to true to enable the Windows retry on PQ AccessDeniedException in checkpoint writes.
Fixed: #14486
(cherry picked from commit 3a78621109)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
When using Bundler 2.3.19, doing a "bin/logstash-plugin uninstall <plugin>"
will crash, failing to find gems in the :build group.
Until we know more about why, pin bundler to 2.3.19
(cherry picked from commit 8aa62dc441)
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
<artifact_path> needs to be hardcoded so it can be replaced properly by
the ubireleaser during the creation of the Ironbank merge request.
Relates to https://github.com/elastic/logstash/pull/14298/
(cherry picked from commit 79c36c5ac2)
Co-authored-by: Julien Mailleret <8582351+jmlrt@users.noreply.github.com>
This commit adds a rake task `rake artifact:dockerfile_ironbank` to generate ironbank docker build context for automatic release.
The output can be found in build/logstash-ironbank-$VERSION-docker-build-context.tar.gz
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
(cherry picked from commit dfb109843d)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
* bump to 7.17.6
* bump version of logstash itself in gemfile.lock.release
* Update docs/static/releasenotes.asciidoc
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
* Update docs/static/releasenotes.asciidoc
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
The use of ranges (e.g. {0..5}) or seq (e.g. $(seq 0 5)) may not work
correctly on some systems, so let's just have a plain list of elements
for the loop to go through.
(cherry picked from commit ce27e08eac)
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
This helps with transient network problems by not failing at the first try.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
(cherry picked from commit ff9f1e5a7f)
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
When an external repository hits an IO error or network timeouts, the build fails to resolve external dependencies (plugins or libraries used by the project).
This commit increases those limits a little bit.
(cherry picked from commit 080c2f6253)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
This commit changes the behavior of PQ size checking.
When it checks the size usage, instead of throwing an exception that stops the pipeline,
it logs a warning message on every converge state if the check fails.
Fixed: #14257
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
(cherry picked from commit c725aabb49)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
This PR adds loading of i18n to LogStash::Settings to fix an uninitialized constant I18n exception when using `logstash-keystore`
(cherry picked from commit d63b6ae564)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
This commit re-adds the update to jdk 11.0.15+10 originally made in #14031
(cherry picked from commit ea1690d5ba)
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
* Switch adoptopenjdk url to adoptium
Newer versions of the JDK are only available from api.adoptium.net, and not dual hosted on api.adoptopenjdk.net
This commit allows the use of adoptium versions of the JDK.
Relates: #14072
(cherry picked from commit 9e2c87a1ab)
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
This commit adds a checkpoint for a fully acked page before purging to keep the checkpoint up-to-date
Fixed: #6592
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
(cherry picked from commit 7f36665c09)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
When the PQ is full, workers wake up the writer thread on every read.
However, without removing a fully acked page, the queue is still full.
This commit changes the condition of the notFull signal.
Fixed: #6801
(cherry picked from commit da68ff3803)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
* bring back the details of PQ size checking
(cherry picked from commit 205cf43213)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
* [DOCS] Added a few missing "(`queue.type: persisted`)" to a few of the logstash.yml.
* [DOCS] Moved a few of the queue_type persisted indicators.
(cherry picked from commit a6e418adf7)
Co-authored-by: Nicole Albee <2642763+a03nikki@users.noreply.github.com>
Adds some tests to prove that the offset position retrieved from the `RecordIOReader` effectively points to the start of the next available event and not to the last channel position retrieved.
Adds fixes for the problem, moving from the concept of `channelPosition` to `streamPosition`, differentiating the position in the stream from the position in the channel. However, the already published interface (method getChannelPosition) is not renamed, to avoid introducing a breaking change.
* Allow metrics update when PQ draining (#13935)
This commit moves the stop of metrics collection after pipelines shutdown to allow metrics update during PQ draining
Fixed: #13832
(cherry picked from commit 0af9fb0d5f)
* fix monitoring api integration test with draining queue (#14106)
This commit ends the integration test with teardown instead of sending a signal to kill
Related: #13935
(cherry picked from commit 7641b076f4)
* [Doc] PQ and DLQ do not support NFS
Fixed: #12097
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 90e7c8864e)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
OpenJDK versions 11.0.15+10, 17.0.3+7 introduced new functionality to allow Java to enable
strict path checking. This included disallowing the use of : in any place other than directly
after the drive letter. Unfortunately, this check had the side effect of breaking compatibility
with special device paths, such as NUL:, which in turn, prevents Logstash from starting.
This feature was gated by the use of the jdk.io.File.enableADS property with a value of true
disabling the check.
This property was introduced with the default value of false, which prevents logstash
from starting in a Windows environment. While the next release is anticipated to set this
value to true, this commit explicitly sets that value to enable Logstash to be able
to start correctly.
Relates: #14066
Hide shutdown stall message when queue is draining
Fixed: #9544
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
(cherry picked from commit 41cb3d3680)
Prior to the change, pipeline `stop` and `delete` happened in two converge cycles, which
left a gap where the stopped pipeline was compared with the same pipeline definition
in central pipeline management; hence Logstash saw the stopped pipeline as a graceful finish
and did not delete it from the registry.
This commit creates a StopAndDelete action to delete a running pipeline in one converge cycle.
Fixed: #14017
(cherry picked from commit e8cd0d3039)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
As pointed out in this post merge review comment, there is a window
where we could miss a pipeline transitioning from 'loading' to 'running'
in the original fix, as separate calls are made to the pipeline registry.
This commit fixes that by making a single call to the pipeline registry which
allows for also returning pipelines in the `loading` state.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
(cherry picked from commit 1291b5edcc)
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
When logstash is run without automatic reloading, it is still possible to reload configurations
by using 'SIGHUP'. This functionality was broken in #12444, which split non-terminated pipelines
into "loading" and "running" states. The call `no_pipelines?` in agent#execute would no longer
find pipelines in a "loading" state, causing the loop to exit, and logstash to shutdown. This
commit tests for pipelines in a "loading" state to restore functionality
(cherry picked from commit 7b2bec2e7a)
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
* PQ size check for multiple pipelines (#13877)
Fixed being unable to start, caused by queue.max_bytes: 0
Added PQ size checking for multiple pipelines in converging
Fixed: #12213
# Conflicts:
# logstash-core/lib/logstash/runner.rb
* Fix Windows CI for PQ size checking (#13981)
The Windows platform fails to generate the desired file size with #truncate(), hence the failing test.
Fixed: #13957
Sets the LS_JAVA_HOME environment variable for the environments used to spawn the Logstash process in integration tests.
The JDK matrix testing is based on selecting the desired JDK to run the tests, through BUILD_JAVA_HOME.
However, when the integration tests spawn a Logstash process, this setting was missed.
(cherry picked from commit d2739a875c)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
* Update failing policy in Central Management fetcher and license checker if hit ES down node (#13689)
Wraps the calls to the central management Elasticsearch cluster with the utility class Stud::Try to handle the remote host error when the client hits an unavailable node.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
(cherry picked from commit c544ecb380)
* Covered all calls to ES with retryable
* Mocked logger interaction in test after wrapping the client calls with retryable
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Backport PR #12198 to 7.17 branch
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
(cherry picked from commit 682f07b703)
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
This commit changes `queue.checkpoint.retry` to `true` by default, allowing retry of checkpoint write failures.
Adds an exponential backoff retry to the checkpoint write to mitigate AccessDeniedException on Windows.
Fixed: #12345
(cherry picked from commit 1a5030bd63)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
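An illustrative exponential-backoff retry in Ruby; the real change is in the Java checkpoint writer, and the attempt limit, delays, and method names here are assumptions:
```ruby
def write_checkpoint_with_retry(max_attempts: 5, base_delay: 0.05)
  attempts = 0
  begin
    write_checkpoint                    # hypothetical write that may raise on Windows
  rescue Errno::EACCES
    attempts += 1
    raise if attempts >= max_attempts   # give up after the final attempt
    sleep(base_delay * (2**attempts))   # back off: 0.1s, 0.2s, 0.4s, ...
    retry
  end
end
```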
Ruby allows methods to have default values in arguments if they're not
passed. However if a nil is passed then the default value isn't used.
The artifact:archives tasks were passing nil values to the exclusion
argument, causing all files to be included in the package.
This commit cleans the naming of the path lists and ensures the default
exclusion list is always used.
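A small self-contained illustration of the Ruby behaviour described above (names are illustrative):
```ruby
def list_files(exclusions = ["*.log"])
  exclusions
end

list_files        # => ["*.log"]  (default applied)
list_files(nil)   # => nil        (default NOT applied; nil is the argument)
```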
Updates all Windows batch scripts used as CLI tools to quote %JAVACMD%, to avoid path problems when the path contains spaces.
(cherry picked from commit a8bd90c22d)
A number of plugins reach into Logstash's i18n translations to "helpfully"
communicate certain configuration errors, but rely on translations that were
moved in 00a99c19e5 from logstash.agent to
logstash.runner. Since then, it is possible to hit obtuse error messages about
failing to load a translation instead of the intended helpful message:
~~~
translation missing: en.logstash.agent.configuration.invalid_plugin_register
~~~
By moving the `logstash.agent` definition to _after_ the `logstash.runner`
definition, we can use YAML tooling to name the `logstash.runner.configuration`
node and then merge its contents into `logstash.agent.configuration`. This
effectively allows us to keep a single definition of those translations while
making them available at both addresses.
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
(cherry picked from commit 8bec0e658a)
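A minimal sketch of the anchor-and-merge approach described above; the keys below are a trimmed stand-in for the real locale file, shown through Ruby's YAML loader:
```ruby
require "yaml"

# Name the runner configuration node with an anchor and merge it into the
# agent configuration, so a single definition answers at both addresses.
doc = YAML.safe_load(<<~YAML, aliases: true)
  en:
    logstash:
      runner:
        configuration: &runner_configuration
          invalid_plugin_register: "Cannot register the specified plugin."
      agent:
        configuration:
          <<: *runner_configuration
YAML

doc.dig("en", "logstash", "agent", "configuration", "invalid_plugin_register")
# => "Cannot register the specified plugin."
```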
* api: avoid 5xx when stats/events not yet populated
* Also catch the metrics exception when retrieving :queue metrics; no pipelines means no queues were created
* Return an empty result when no pipeline is present and the node info API is queried
Co-authored-by: andsel <selva.andre@gmail.com>
(cherry picked from commit a6c0e75b53)
When a pipeline isn't fully initialized, we run the risk of attempting to
format pipeline info that isn't yet fully-shaped. By using safe-fallback
methods like `Hash#dig` and conditional-chaining, we can avoid the spurious
`NoMethodError` caused by sending `[]` to nil.
(cherry picked from commit e9455ca81e)
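A brief sketch of the defensive pattern described above; the hash shape and key names are illustrative, not Logstash's actual pipeline info structure:
```ruby
def pipeline_workers(pipeline_info)
  # dig returns nil instead of raising when intermediate keys are missing,
  # and &. short-circuits the final fetch when that happens.
  pipeline_info.dig(:pipeline, :config)&.fetch(:workers, nil)
end

pipeline_workers({})                                    # => nil (no NoMethodError)
pipeline_workers(pipeline: { config: { workers: 4 } })  # => 4
```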
currently the artifact tasks compute the file listing from a list of
include regexes and exclude regexes. However this is done by hand,
taking each include regex and running it through each exclude regex.
This is quite slow as we add more exclude regexes. This PR changes the tasks to
rely entirely on Rake::FileList, by feeding it the include and exclude
lists. This speeds up file listing from 150 seconds to 1 second.
(cherry picked from commit edfbabf2fc)
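A minimal sketch of relying on Rake::FileList for the listing; the patterns are illustrative, not the ones the artifact tasks actually use:
```ruby
require "rake"

# Feed the include patterns to FileList and register the exclusions on it,
# instead of cross-filtering include and exclude regexes by hand.
artifact_files = Rake::FileList.new("bin/**/*", "logstash-core/**/*") do |fl|
  fl.exclude("**/*.log")   # glob exclusion
  fl.exclude(%r{/spec/})   # regexp exclusion
end

puts artifact_files.size
```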
* Fix Logstash CLI tools to use the selected JDK under Windows (#13839)
Some Logstash tools invoke the JRuby interpreter directly. The interpreter uses the JVM pointed to by two environment variables:
- JAVACMD
- JAVA_HOME\bin\java.exe
The setup.bat script exported the selected JVM under the env var named JAVA, which isn't recognized by the vendored JRuby.
This commit fixes it by renaming the variable to JAVACMD.
(cherry picked from commit 0084492494)
* Fixed the case for JAVA_HOME selection path
* artifacts: omit openssl_pkcs8_pure specs from built artifacts (#13715)
* artifacts: omit openssl_pkcs8_pure specs from built artifacts
* Exclude _all_ top-level spec and test directories from built artifacts
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
(cherry picked from commit 0369ba208d)
* Update releasenotes.asciidoc (#13701)
Fixed OS name. Ubuntu instead of Ununtu.
Co-authored-by: Cris da Rocha <cdarocha.astro@gmail.com>
Fix gem installer tests to enable unpinning the version of bundler.
This commit changes the gem installer tests to use real gems, rather than
using `allow_instance_of` during testing, which appears to be problematic with the
latest version of bundler.
# Conflicts:
# build.gradle
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
This commit deletes the corrupted zero-byte PQ file and recreates the head checkpoint file to get rid of the "page file size is too small" exception.
Fixed: #10855
Clean backport of #13727 to branch `7.17`.
----
Original comment:
Use the same technique for Unix systems to extract the CPU percentage and publish it to Logstash's metrics collector.
It doesn't retrieve the full set of metrics of a Unix system, but the ones that are available from the internal JDK class com.sun.management.OperatingSystemMXBean.
(cherry picked from commit d8f4784d69)
This PR substitutes ${VAR} in Expression, except RegexValueExpression, with the value from the secret store or env.
The substitution happens after syntax parsing and before graph execution.
Fixed: #5115
Clean backport of #13672 to 7.17 branch
----
Fixes an integration test that verifies the capability of the CLI tool to install a non-bundled plugin.
Move away from logstash-input-google_cloud_storage, which depends indirectly on the OS package named shared-mime-info, which is not always available.
(cherry picked from commit 7bb56e46dd)
* logging: move init into environment's settings post-processor
Ensures that the non-runner command line utilities like `bin/logstash-keystore`
correctly initialize the logger as-configured.
* fixup: ensure we get ruby stdlib URI & File
(cherry picked from commit 2a5e54cd21)
Backport #13656 to branch 7.17
----
Use the System.lineSeparator instead of "\n" to make the test portable across platforms
(cherry picked from commit 3fdc4c3aa7)
Co-authored-by: Andrea Selva <andrea.selva@elastic.co>
* [Docs] Add pipeline.ecs_compatibility to the list (#13612)
* Add pipeline.ecs_compatibility to the list
* Update docs/static/running-logstash-command-line.asciidoc
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
(cherry picked from commit e11d0364d4)
* Rephrase docs for --pipeline.ecs_compatibility flag for 7.x perspective
Co-authored-by: Toby Sutor <55087308+toby-sutor@users.noreply.github.com>
* field-reference: cap RUBY_CACHE to 10k entries
Reduces the scope of a memory leak that can be caused by using UUIDs or other
high-cardinality field names by preventing the ruby string _keys_ from being
held by the cache indefinitely.
Note: this may not solve the problem entirely, but certainly limits its impact.
Because ConvertedMap requires individual field names to be interned into
the global String intern pool, their eligibility for GC is JVM-specific
and high-cardinality field names should still be avoided.
* noop: field-reference test refactor to consolidate reflection
(cherry picked from commit ca501acdcf)
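An illustrative sketch of a size-capped cache; the real cap lives in the Java FieldReference parser and handles concurrency, and `parse` below is a hypothetical stand-in for the expensive parsing step:
```ruby
CACHE_LIMIT = 10_000
CACHE = {}

def cached_field_reference(name)
  CACHE.fetch(name) do
    parsed = parse(name)                               # hypothetical parser
    CACHE[name] = parsed if CACHE.size < CACHE_LIMIT   # stop retaining new keys at the cap
    parsed
  end
end
```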
Clean backport of #13631 to 7.17
Original message:
When the Bash script executes the vendored Ruby it has to use proper `GEM_HOME` to avoid the overwrite that happens inside the logstash.lib.sh
3064f7d0c3/bin/logstash.lib.sh (L161-L165)
(cherry picked from commit 93f37b9609)
Clean backport of #13641 to 7.17
Original message:
Cleanly tear down an integration test that made other integration tests fail.
In some cases the CI integration tests fail because the launched Logstash can't find a gem named `mimemagic`. This gem is installed during a CLI plugin test (installing the `logstash-input-google_cloud_storage` plugin pulls in `mimemagic`).
(cherry picked from commit 640ba8489f)
* fix: respect LS_JAVA_OPTS environment even when optionsfile missing (#13525)
* fix: respect LS_JAVA_OPTS environment even when optionsfile missing
* Fixed integration tests
* Added unit test to cover the fix
* Wipe commented code
* Removed redundant log in a path that could never be reached
* Moved jvm.options checks into only one place
* javaopts: provide injection point for environment string
Co-authored-by: andsel <selva.andre@gmail.com>
(cherry picked from commit 2a248b2ea0)
* backport: spec silencing noise
Backport PR#13604 to 7.17 branch. Original message:
This commit fixes 2 tests:
- Set queue.drain to true in the pipeline PQ test: under certain conditions the pipeline_pq_file_spec test would fail as the pipeline would exit once the generator had generated all of its events, but before the events were processed, leading to the test hanging. This commit adds `queue.drain: true` to the settings to ensure that all of the events are processed before the pipeline is shut down.
- Increase the flush delay in the dead letter queue testFlushAfterDelay test: under certain conditions, the flush delay of 1 second was insufficient, and invalidated a pre-condition assertion that no events had been flushed before the expiry of that delay.
Logs the JVM flags and options used to launch Logstash.
(cherry picked from commit d4bdcc936d)
----
Original message:
Add info log of JVM flags used to configure Logstash (#13531)
Logs the JVM flags and options used to launch Logstash.
Clean backport of #13593 to 7.17 branch
----
Original message:
Fixes issue #8752 in the event.out counter. When a pipeline contains a drop filter, the total out events counter should count only the events that reached the out stage.
This PR changes the CompiledExecution.compute() interface to return the number of events that effectively reached the end of the pipeline. This change is used in WorkerLoop to correctly update the event.out metric, instead of relying on the batch's size.
(cherry picked from commit b6da829f4f)
* add product origin header to license checks
* add origin header to Central Management config fetcher
* add origin header to ES output for Monitoring pipeline
(cherry picked from commit 2892964ba1)
Clean backport of #13603 to branch 7.17
(cherry picked from commit e27fdeb252)
----
Original commit message:
Sometimes deep_replace can be invoked by plugins, using LogStash::Config::Mixin#validate.
This method receives a Ruby hash which could contain a Java ArrayList instead of a Ruby Array.
The iteration method `each_index` is not available on ArrayList, so resort to some form of the "plain old way".
The reason why an ArrayList is recognized as a Ruby Array is the override classes, like RubyJavaIntegration.JavaCollectionOverride, that monkey-patch Ruby Array so that a Java Collection can be seen as a RubyArray; but it doesn't implement all the abstractions, like `each_index`.
Co-authored-by: Karol Bucek <kares@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
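A sketch of the "plain old way" iteration mentioned above, with a hypothetical per-element substitution helper: indexing with a counter works for both a Ruby Array and a Java ArrayList that lacks #each_index.
```ruby
def deep_replace_list(list)
  i = 0
  while i < list.size
    list[i] = replace_placeholders(list[i])  # hypothetical substitution helper
    i += 1
  end
  list
end
```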
Add notes to the Event sprintf docs about timestamp formatting to call out the
UTC nature of the Timestamp object.
Resolves: elastic/logstash#13112
Closes: elastic/logstash#13571
(cherry picked from commit ef40bb0643)
After the fix unlocking the ecs_compatibility_support version in plugin update (#13218), `logstash-plugin install` has a problem installing non-default plugins.
This commit removes `Bundler.setup` from the install path to avoid the Gemfile being frozen by Bundler.
Fixed: #13404
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
This is a backport of #12538 to 7.17.
This is not a clean backport because #12538 results in an empty commit due to the changes already being committed to main with PR #13344, which is not backported because it removes the Java 8 testing.
Backports #13537 to 7.17
The Output Isolator Pattern doesn't need a persisted queue on the input
pipeline to work. It just needs one on every output pipeline.
Authored-by: Toby McLaughlin <toby@jarpy.net>
Backport #13442 to 7.17 branch. Original message:
* Update logstash docker to use ubuntu 20.04 base image
* Correctly set locale for ubuntu docker image
* tiny typo fix: ubunto -> ubuntu
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
To err on the side of caution it'd be preferable to use log4j 2.16.0 due to CVE-2021-45046
(cherry picked from commit bf0b122b37)
Co-authored-by: thex12 <thex12@users.noreply.github.com>
* Updated lockfile for 7.16.1
* Keep elasticsearch client as per 7.16.0
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Backport PR#13014 to 7.16 branch. Original message:
* Docker integration tests stability improvements
This commit contains numerous fixes to improve the stability of the docker integration tests
* Patch Excon::UnixSocket
Socket.new running on arm64 on Ubuntu 18.04, causes an immediate SIGSEGV error and crash on
that OS, and, as far as I can tell, only that OS. `TCPSocket.new`,`UDPSocket.new` and
`UNIXSocket.new` do not. This commit patches the UnixSocket of the Excon library to
do the absolute simplest thing possible to avoid this error.
* Ensure that container is deleted even if #kill fails
* Add extra waits to handle the incremental way the payload returned by the monitoring
API increases as logstash starts up and pipelines load.
* Use pyenv to ensure the same version of python is used across different jenkins workers
* Add container logs to help diagnose failed test.
* Update the pipeline definition on multi-pipeline integration test
This was causing a pipeline to halt after startup causing intermittent test failures.
* Remove `;` to ensure failures are propagated appropriately
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
With #13308 the configuration namespace that started with `http.` was renamed to `api.`; this commit fixes a usage left behind.
Use the new `api.enabled` setting in one place instead of the deprecated `http.enable`.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
(cherry picked from commit 88c80ebb19)
Backport PR #13369 to 7.16 branch. Original message:
Add ability to pull the version used to build java from the logstash repo, rather
than rely on system Java. Previously, builds would use JAVA_HOME, now this setting
is ignored in Logstash (and by extension, parts of the Logstash build), which was causing
variations in the version of Java used to build Logstash, including the use of Java 8,
which the Logstash team would like to remove support for.
Relates: https://github.com/elastic/infra/pull/32818
Backport PR #13351 to 7.16 branch. Original message:
* Fix bundler handling of 'without'
Prior to this change, the values set in `set_local` are ignored when invoking
bundler via the command line, as is used with `invoke!`. This commit sets those
values in `ENV` variables instead, fixing the functionality to not install
development gems.
* Update bundler spec to check ENV variable
* Added test to ensure kramdown gem not vendored
* Re-add set_local setting to play nice with `expand_logstash_mixin_dependencies`
* logstash service needs to be installed
* gem_vendored? needs to use full path to vendor files
* use `stdout` from `cat` command to generate spec temporary file
* Removed unnecessary support for supplying a block from #gem_vendored?
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
* Add deprecation warnings for JAVA_HOME/older versions of Java
Logstash 8.0 will remove support for java versions before java 11, this commit
adds entries to the deprecation log warning against this.
Also adds use of `JAVA_HOME` to the deprecation log.
* Soften deprecation language and point module deprecations to agent integrations
* Remove extra `and`
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
* ecs: remove warning when opting-in per-pipeline or globally
* ecs: align with v8 for version after v1
* add deprecation warning for Ruby Execution, removed in #12517
* settings: add "deprecated alias" support
A deprecated alias provides a path for renaming a setting.
- When a deprecated alias is set on its own, a deprecation notice is emitted
but fetching the canonical setting value will reflect the value set with the
deprecated alias.
- When both the canonical setting (new name) and the deprecated alias (old
name) are specified, it is an error condition.
- When the value of the deprecated alias is queried, a warning is emitted to
the logger and only the value explicitly set to the deprecated alias is
returned.
Additionally, some relevant cleanup is also included:
- Starting Logstash with invalid settings no longer results in the obtuse "An
unexpected error occurred" with backtrace and exception data obscuring the
issue. Instead, a simple message is emitted indicating that the settings are
invalid along with the originating exception's message.
- The various settings implementations share a common logger, instead of each
implementation class providing its own. This is aimed to reduce noise from
the logs and to ensure specs validating logging do not need to tie so
closely to implementation details.
* settings: add password-wrapped setting
* settings: make any setting type capable of being nullable
* settings: add `Settings#names` to power programmatic iteration
* cli: route CLI-flag deprecations in to deprecation logger
* settings: group API-related settings under `api.*`
retains deprecated aliases, and is fully backward-compatible.
* webserver: cleanup orphaned attr accessors for never-set ivars
* api: pull settings extraction down from agent
This net-no-change refactor introduces a new method `WebServer#from_settings`
that bridges the gap between Logstash settings and Puma-related options, so
that future additions to the API settings don't add complexity to the Agent.
It also has the benefit of initializing the API Rack App and just ONCE, instead
of once per attempted HTTP port.
* api: add optional TLS/SSL
* docs: reference API security settings
* api: when configured securely, bind to all available interfaces by default
* cleanup: remove unused cert artifacts
* tests: generate fresh webserver certificates
* certs: actually add the binary keystores 🤦
Fixes an integration test that expects some output on stderr.
PR #13207 added a deprecation notice to inform the user about the removal of support for JAVA_HOME. This notice is present only on 7.x, and that console output needs to be removed in a test that verifies the installation of plugins.
Backport #13306 to branch 7.x
(cherry picked from commit 7395641a43)
----
This commit applies all the changes needed to run Logstash on JDK 17:
- opens access to module java.base for packages sun.nio.ch and java.io to run the application and to execute the tests
- removes SecurityManager classes used during Logstash startup
- fixes the exception type caught in the JavaKeyStore tampering test
Related to meta issue #13306
Backport PR #13316 to 7.x branch. Original message:
Sets `LS_JAVA_HOME` of the spawned logstash to use the same `java.home`
that the test is running under, rather than default to the system JDK, which
would result in the spawned logstash running under a different JDK to that
intended in the test
This commit fixes the `logstash-plugin update` command, which fails to update plugins
that depend on a new version of logstash-mixin-ecs_compatibility_support.
It resolves logstash-* dependencies and puts them in the bundler update command.
Fixed: #13181
Update the golang image to 1.17.1 to get rid of the expired DST Root CA X3.
Disable download manager test cases to silence Faraday::SSLError.
Fixed: #13261
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
This PR integrates the Elasticsearch bootstrap script to help users keep the Logstash geoip plugin running without the online update check.
Adds the `xpack.geoip.download.endpoint` option to configure the geoip database service endpoint.
Users can point it to `http://localhost:8080/overview.json` when using the script to bootstrap the nginx docker.
* Backport PR #13015 to 7.x: Bundler: freeze lockfile on run, and "normalize" platform on plugin changes
Backport PR #13015 to 7.x branch. Original Message:
This PR enables the upgrade of bundler to the latest version.
Prior to this PR, the ability to do so was blocked by bundler.setup in versions of bundler > `2.23` making runtime changes to `Gemfile.lock` (unless the lock file was `frozen`) based on the specific platform the application was being run on, overriding any platforms (including generic `java` platform) set during build time. This was in conflict with changes made in #12782, which prevented the logstash user writing to files in `/usr/share/logstash`.
This PR will freeze the lockfile when logstash is run, and unfreeze it when manipulating plugins (install, update, remove, install from offline pack) to allow new plugins to be added. While unfrozen, changes are also made to ensure that the platform list remains as the generic `java` platform, and not changed to the specific platform for the runtime JVM.
This PR also introduces a new runtime flag, `--enable-local-plugin-development`. This flag is intended for use by Logstash developers only, and enables a mode of operation where a Gemfile can be manipulated, eg
```
gem "logstash-integration-kafka", :path => '/users/developer/code/plugins/logstash-integration-kafka'
```
to facilitate quick and simple plugin testing.
This PR also sets the `silence_root_warning` flag to avoid bundler printing out alarming looking warning messages when `sudo` is used. This warning message was concerning for users - it would be printed out during normal operation of `bin/logstash-plugin install/update/remove` when run under `sudo`, which is the expected mode of operation when logstash is installed to run as a service via rpm/deb packages.
This PR also updates the vagrant based integration tests to ensure that Logstash still runs after plugin update/install/remove operations, fixes up some regular expressions that would cause test failures, and removes some dead code from tests.
* Updated Bundler to latest version
* Ensured that `Gemfile.lock` is appropriately frozen
* Added new developer-only flag to facilitate local plugin development to allow unfrozen lockfile in a development environment
(cherry picked from commit 4707cb)
* Remove code pinning bundler to ~> 1.17
Backport PR #13005 to 7.x branch. Original Message:
* fpm to 1.13.0 which allows building packages with java 11 + jruby 9.2
* childprocess to 4.x + remove monkey patches
* clamp to 1.x to unlock fpm 1.13.0
(cherry picked from commit 7390b64)
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Backport PR #13071 to 7.x branch. Original message:
This PR contains commits attempting to fix the broken acceptance tests:
* Fix the set of test platforms used to run unix acceptance tests
Modernizes the list of OSes used in acceptance tests to the most recent OSes available at https://app.vagrantup.com/elastic. This removes the centos-6 platform from the build, which is past end-of-life and fails vagrant bootstrapping, causing the build to fail.
This is more of a band-aid than anything; in the longer term, we should remove these vagrant-based tests completely and rely
on the build infrastructure to perform OS-based acceptance tests.
* Fix regexes for plugin list tests.
Fixes tests to support the plugin alias feature. This introduced a new format for
entries emitted by `bin/logstash-plugin list`:
eg
```
└── logstash-input-elastic_agent (alias)
```
This commit fixes the test to account for this change, and whitespace variances.
Gradle's task configuration should be as fast as possible and should not break the build.
This commit moves retrieval of the Elastic Stack version from the remote registry to the execution phase of the tasks.
The tasks that depend on this (downloadEs and checkEsSHA) received the same change, moving from the configuration to the execution phase.
Closes #13030
(cherry picked from commit cef339ce57)
Update getting-started-with-logstash.asciidoc (#12706)
Single quotation marks cause errors; double quotes should be used on Windows.
Co-authored-by: Megan Humphreys <catonice@gmail.com>
Adds framework for showing both windows and unix examples.
Doc: Add unix command for running basic pipeline (#12714)
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
Starting with version 7.10.0, the name of LS packages changed, adding the OS and CPU architecture to the name. This change broke the downloading of those packages from the benchmarking tool. This commit fixes it by composing the name correctly, based on the version it has to download.
(cherry picked from commit b722360ebd)
Loads the production plugin_aliases.yml definition file and checks that every alias has
a properly published gem on RubyGems.
Adds clean up of plugin_aliases.yml files
Fixed task dependency for copyPluginAlias
(cherry picked from commit a5f3153a8f)
Because the "Fatal Error" specs specifically inject fatal errors during
execution, and do so by reacting to a "poison" event, the fatal error prevents
the poison event from being ACK'd in the underlying queue.
By specifying a one-off temporary data directory in these specs and cleaning up
after ourselves, we ensure that a PQ containing un-ACK'd events isn't leaked to
the next spec to run.
(cherry picked from commit 6032e5ff64)
This PR makes the Windows logstash.bat exit with the last %ERRORLEVEL% at the end, so that any error in running Logstash will get propagated back to the command line.
Before this change, logstash.bat would always exit with code 0 - success (when doing cmd.exe /C logstash.bat), even if the java.exe process exited with a non-zero code (e.g. due to Logstash throwing an error at runtime).
(cherry picked from commit 1f9ef97836)
Co-authored-by: Dion Williams <dionrhys1@gmail.com>
Uses the OS-defined path separator in the Rake script to invoke the gradlew command. Without this, sh('./gradlew assemble') results in an error when running .\gradlew clean installDefaultGems.
(cherry picked from commit d2c68fc0f9)
Adds a filter to the Reflections library initialization so that when it scans "org.logstash.plugins" it includes only .class files, avoiding loading and processing AliasRegistry.yml and plugin_aliases.yml.
Fixes #12992
(cherry picked from commit a6e9a6bcfd)
Avoids the creation of a log4j routing appender for log events without the `pipeline.id` fish tag.
This way no spurious log file named "pipeline_${ctx:pipeline.id}.log" is created, and logs are not duplicated in the main Logstash log file.
(cherry picked from commit 1d6a3e4bb3)
Added a test to cover the installation of aliased plugins when a gem with the same name exists but is not a Logstash plugin.
In this case the alias is resolved to the original plugin, skipping the gem retrieved from RubyGems.
(cherry picked from commit cafbf03158)
Removes the unnecessary dynamic creation of the appender's log file name, which leveraged the `log.format` property
even when the format is made explicit by the appender itself.
The Log4j configuration leverages the placeholder `${sys:ls.log.format}` to compose the name of the log file.
This generated some non-obvious conflicts in Log4j internals; these conflicts became evident when the `pipeline.separate_logs` feature is enabled and the Log4j appender definitions contain both JSON and plain formats.
The problem is that under those circumstances the rollover of the log file doesn't happen.
This commit also adds a test against the production Log4j configuration, to avoid future regressions.
(cherry picked from commit a0774c4e76)
This commit avoids checking for the existence of jar files to decide whether or not to run Gradle assemble,
basically because the outputs of the assemble task are not only jars but also other files, for example plugin-aliases.yml.
This way the decision to execute or not is left to Gradle's logic.
(cherry picked from commit 3eaff3612d)
Removes hard-coded alias definitions in favor of a yaml descriptor file.
Introduces a single point of alias definition (logstash-core/src/main/resources/org/logstash/plugins/AliasRegistry.yml), which is checksummed and copied around to be used by Logstash and by Logstash's plugin management tool.
The descriptor yml file contains a checksum to verify it has not been changed accidentally in a deployment of Logstash; if the verification phase fails, Logstash refuses to start and the plugin management tool refuses to operate.
The signing and copying around is managed by a specific Gradle task invoked during the build.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Fixes #12831
(cherry picked from commit 446dc7d906)
Backport PR #12791 to 7.x branch. Original message:
Version bump of JRuby.
+ Fix: a missing require in bootstrap
(cherry picked from commit ee6038afec)
The module LogStash::PluginManager requires that the file `lib/pluginmanager/plugin_aliases.yml` be created,
which happens during Gradle's 'copyPluginAlias' task, executed as part of Rake's 'bootstrap'.
(cherry picked from commit 8e62e8a01c)
logstash-keystore integration tests spawn a Java process, which by default uses the system JDK, generally exposed via the JAVA_HOME environment variable. It could be that this JDK is not the one selected with the build system variable BUILD_JAVA_HOME.
This commit uses the JDK defined in BUILD_JAVA_HOME if present.
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
(cherry picked from commit fa9adb4b86)
Backport PR #12925 to 7.x. Original Message:
* Add logstash-integration-elastic_enterprise_search to plugins-metadata.json
* Remove old elastic_app_search plugin and set integration as default
* Add license information for workplace search gem
(cherry picked from commit a935261eeb)
ubi8 image uses microdnf as a package manager, and microdnf does
not support the "yum clean metadata" command. This commit adds
the logic to skip this command if the image_flavor is ubi8
(cherry picked from commit d1b12ded1d)
Backport PR #12891 to 7.x branch. Original message:
On aarch64, yum does not pick the correct 'bind-license' package;
this commit installs a specific noarch RPM.
This commit also adds retry to the yum installs and updates.
Update Elasticsearch minimum version requirement to ~>7 also on the 7.x branch.
Relates to #11258
(cherry picked from commit ef9b0d2db5)
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
The integrationTests task starts instances of Elasticsearch, so Elasticsearch needs to be present and unpacked in the build/ folder before the tests start.
(cherry picked from commit 149ee41a8b)
Adapted the install/uninstall/list PluginManager CLI commands to respect aliased plugins
- adapted install to resolve an alias, giving precedence to a real plugin
- changed list to mark aliased plugins
- uninstall avoids removing the alias and asks the user to remove the original plugin
- update updates the original plugin in case of an alias, otherwise falls back to the usual behavior
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
(cherry picked from commit 1e08341e1e)
Introduces the concept of an alias for a plugin (#12796) and removes the static part from PluginRegistry to avoid a static initializer (#12799).
Creates an AliasRegistry to map plugin aliases to original plugins.
If a real plugin with the same name as an alias is present in the system, then the real plugin takes precedence during the
instantiation of the pipeline.
Simplified the error handling in class lookup
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
The conditionals in the distributor pattern section are not correct, as string comparisons require quotes around the value being compared against. Added quotes around them to fix this.
(cherry picked from commit 34429ee0f2)
Co-authored-by: Ahil PonArul <29006086+turnUpTheChill@users.noreply.github.com>
This PR changes the behavior of copying license files from the .tgz.
Originally, only two files, the MaxMind LICENSE.txt and COPYRIGHT.txt, were required.
Now more files, README.txt and the Elastic ToC, are potentially required.
Instead of targeting the files individually, this change copies all content in the .tgz.
This commit contains two fixes
* Fix Date class clash when used in pipelines with Date filter and GeoIP
* Pinned jruby-openssl version 0.10.5 to avoid SSL errors
(cherry picked from commit 6f55066b17)
* Fix: logstash-keystore failing with an error (#12784)
* Fix: missing password dependency require
which causes `bin/logstash-keystore` to fail with an error:
```
ERROR: Failed to load settings file from "path.settings". Aborting...
path.setting=/logstash-7.12.0/config, exception=NameError,
message=>uninitialized constant LogStash::Util::Password
```
* Fix: review all LS parts depending on Password
* Test: bin/logstash-keystore create/list
(cherry picked from commit e8e393bdc7)
* Test: let's do the cleanup for every test
Set the correct ownership of /usr/share/logstash on DEB & RPM installs, following FHS guidelines.
Fixes: #12771
Backports: #12782 (cherry picked from commit c4cb8f4f12)
Add placeholder and coming tag for 7.12.0 release notes
Generate and update release notes for 7.12.0
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Doc: Rework security update in release notes
Add link to CVE
Upstream `ElasticsearchOptions#es_options_from_settings` already uses the
setting `elasticsearch.ssl.verification_mode` to produce an appropriate
boolean-valued `ssl_certificate_verification` in our `es_settings` hash, so
we can rely on it instead of re-checking equality with a string.
(cherry picked from commit d5becc0082)
Backport PR #12736 to 7.x branch. Original message:
Since the introduction of this block:
```
"pipeline" : {
"workers" : 16,
"batch_size" : 125,
"batch_delay" : 50
},
```
to the node stats API, the benchmarking tool has been broken. This commit fixes the
tool, and updates the payload in the tests to reflect the current payload.
Backport PR #12728 to 7.x branch
Prior to this release a VERSION_QUALIFIER env var set to an empty string
would create versions looking like `8.0.0--SNAPSHOT` instead of
`8.0.0-SNAPSHOT`, causing the release manager builds to fail.
JRuby had a few releases where it shipped with bundler,
creating some difficulty in working with newer versions.
This no longer happens so we can remove these exclusions
from the jruby unzipping task.
(cherry picked from commit f0c18e89d0)
Clean backport of #12685
This commit fixes up some IT flakiness which has been presenting mostly
in recent DLQ test failures, it includes the following improvements:
* A recent change to Elasticsearch has required the cluster setting
`action.destructive_requires_name` to be set to `false` to enable the use
of destruction actions with wildcards. This commit sets this before
tests on Elasticsearch and DLQ tests
* Adds some extra safety to the `have_hits` rspec matcher
* remove information about snapshot builds
This information has been outdated for some time but we haven't had any reports about them, which points to them not being useful while requiring maintenance after every release.
This commit avoids an error in gathering monitoring information when the webserver is disabled or not yet started,
which could happen with slow-loading pipelines or no pipelines defined in the central management UI.
(cherry picked from commit 91996cf2a2)
This change fixes the behavior so that pipelines still in the "loading" state are also considered "running"; both "loading" and "running" are treated as "not terminated".
Fixed test flakiness caused by different ways of looking at the same thing: the pipeline status.
The pipeline status is determined both by `pipeline.running?` and by `agent.pipelines_running`.
The first checks an atomic boolean in the pipeline object, the second checks the status in the PipelineRegistry.
Fixes #12190
(cherry picked from commit 79d8f47437)
Clean backport of #12636
This commit updates the dockerfile template to support environment
variables being used to retrieve the architecture appropriate logstash
build, in the same way as is currently done for the Elasticsearch docker build.
This is required to support the official dockerhub builds of Logstash.
Relates #12578
Avoids the deletion of the downloaded ES artifact during the copyEs task, and downloads a new version only if the SHA512 of the local copy differs from what is retrieved from the remote repository.
This avoids needlessly downloading the same ES artifact when running the integrationTests task multiple times.
Clean backport of #12633
Docker container integration tests relating to the java process were
failing due to the introduction of the new JVM option parser. This
commit waits for logstash to start before testing that the logstash
java process is being run as expected
Clean backport of #12589
Due to a change in #11803, using `to_seconds` to normalize values of `config.reload.interval`
would resolve to a value of 0 causing issues in tests where short reload intervals were desired.
This commit uses the `to_nanos` method to preserve the previous functionality.
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
* This PR allows the agent to stop a pipeline by pipeline_id instead of fetching the full set of pipelines from Elasticsearch and computing the pipeline actions internally
Fixed: #12560
- moved parsing of the jvm.options file into Java code
- changed the parsing code to consider conditional notation to bind the applicability of certain JVM flags to specific JVM versions (see the sketch after this list)
- changed the launch scripts (.sh and .bat) to use the options string composition
- bound CMS flags to JVM versions 8-14
- replaces all scripted filters with a custom Java implementation
- implemented the per-pipeline routing appender in Java
- adapted the log4j configuration shipped with Logstash
- prints a warning message if it detects a scripted log4j configuration and continues execution (#12591)
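To make the conditional notation concrete, here is a minimal, illustrative sketch of how a version-conditional jvm.options line such as `8-14:-XX:+UseConcMarkSweepGC` could be parsed; the class, method names, and regex are hypothetical and not the actual Logstash parser.
```
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical parser for version-conditional jvm.options lines, e.g.
//   "8-14:-XX:+UseConcMarkSweepGC"  -> applies to JVM majors 8 through 14
//   "17-:--add-opens=java.base/java.io=ALL-UNNAMED" -> applies to 17 and later
// Lines without a range prefix apply to every JVM version.
public class JvmOptionsLineParser {
    private static final Pattern RANGE_PREFIX =
            Pattern.compile("^(?<min>\\d+)(?<dash>-)?(?<max>\\d+)?:(?<option>.+)$");

    static Optional<String> parse(String line, int jvmMajorVersion) {
        Matcher m = RANGE_PREFIX.matcher(line.trim());
        if (!m.matches()) {
            return Optional.of(line.trim()); // unconditional option
        }
        int min = Integer.parseInt(m.group("min"));
        int max;
        if (m.group("max") != null) {
            max = Integer.parseInt(m.group("max")); // "8-14:" bounded range
        } else if (m.group("dash") != null) {
            max = Integer.MAX_VALUE;                // "17-:" open-ended range
        } else {
            max = min;                              // "8:" single version
        }
        return (jvmMajorVersion >= min && jvmMajorVersion <= max)
                ? Optional.of(m.group("option"))
                : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(parse("8-14:-XX:+UseConcMarkSweepGC", 11)); // Optional[-XX:+UseConcMarkSweepGC]
        System.out.println(parse("8-14:-XX:+UseConcMarkSweepGC", 17)); // Optional.empty
        System.out.println(parse("-Xmx1g", 17));                       // Optional[-Xmx1g]
    }
}
```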
Currently, LS does not respect fatal errors such as java.lang.OutOfMemoryError and continues executing.
This is dangerous since JVM errors are a legitimate reason to halt the process and not continue processing.
Additionally:
- make sure we log the full stack-trace on fatal errors
- halt the JVM without executing finalizers/hooks (similar to how ES handles uncaught exceptions)
- also, we should now be aware of a potentially unexpectedly dying thread
Back-port of #12470
When the PQ creates a new page and allocates a memory-mapped buffer, the
underlying file is zero'd out to full page capacity and the version byte is
written to the buffer.
If Logstash crashes or is shut down before any elements have been pushed into
the queue page, we have no guarantees that the version marker has been
persisted to the storage device. A subsequent attempt to load an all-zeros
queue page will result in an obscure error message and failure to load:
~~~
AbstractPipelineExt - Logstash failed to create queue.
org.logstash.ackedqueue.io.MmapPageIOV2$PageIOInvalidVersionException: Expected page version=2 but found version=0
~~~
By sending `MappedByteBuffer#force()` immediately after the version has been
added to the buffer, we can shrink the window in which a crash can leave the
queue on disk in a corrupt state.
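For readers unfamiliar with the java.nio calls involved, here is a minimal sketch of the map-write-force sequence described above, using an illustrative page layout; it is not the actual MmapPageIO implementation.
```
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PageHeaderWriter {
    static final byte PAGE_VERSION = 2; // illustrative: matches the "version=2" in the error above

    /**
     * Creates a zero-filled, memory-mapped page file, writes the version byte,
     * and forces it to the storage device so a crash cannot leave an all-zeros header.
     */
    static MappedByteBuffer createPage(Path pageFile, int capacity) throws IOException {
        try (FileChannel channel = FileChannel.open(pageFile,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Mapping beyond the current file size extends the file, filled with zeros.
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, capacity);
            buffer.put(0, PAGE_VERSION); // write the version marker at offset 0
            buffer.force();              // flush the dirty mapped region to disk immediately
            // Closing the channel does not invalidate the mapping, so the buffer stays usable.
            return buffer;
        }
    }
}
```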
This commit updates the license information for the license dependency report.
Specifically, this adds a notice for racc, a different version of which is now
pulled in by nokogiri from the version included with jruby.
* plugin: adds `:validate => :field_reference`
Provide plugins a way of validating that an input is a literal field-reference.
This is useful for input plugins that implement a `target` or other
non-interpolated directive, and allows these plugins to reject invalid
configuration before start-up instead of at run-time.
Plugins should not use this named validator directly, as doing so would cause
validation to fail with "Unknown validator" when the plugin is run on older
releases of Logstash. Instead, plugins should use the `validator_support`
adapter mixin that provides back-ports when necessary.
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Clean backport of #12498
These labels are required for Red Hat OpenShift certification.
This commit reintroduces the labels for the ubi8 image only, and adds
acceptance tests to ensure these labels are correct and not inherited
The ModulesSettingArray is responsible for obfuscating passwords in arrays of settings.
The tests are still in Ruby to prove interoperability with the Ruby code that used the previous version.
Added methods to mimic the .first and .last methods of Ruby's Array.
(cherry picked from commit fa3891953d)
There are two Password classes that do almost the same thing: one in Ruby (LogStash::Util::Password) and one in Java (co.elastic.logstash.api.Password).
This commit drops the Ruby implementation and imports the Java version into LogStash::Util, so that existing Ruby code doesn't have to be changed and works as is.
(cherry picked from commit ca81a8f4a3)
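As a rough illustration of what such a value class does (the secret is retrievable on demand but masked when stringified), here is a hypothetical sketch; it is not the source of co.elastic.logstash.api.Password.
```
// Hypothetical sketch of a password-wrapping value class: the secret is retrievable
// on request, but accidental logging or string interpolation only sees a mask.
public final class MaskedPassword {
    private final String value;

    public MaskedPassword(String value) {
        this.value = value;
    }

    public String getValue() {
        return value; // explicit access to the real secret
    }

    @Override
    public String toString() {
        return "<password>"; // anything that stringifies the object gets the mask
    }
}
```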
Clean backport of #12447, #12452
This commit fixes two issues with the docker metadata:
Removes non-OCI compliant freeform metadata labels
Uses a consistent build date for all the docker images and dockerfiles
Additionally, this commit adds a `build_docker_ubi8` rake task to enable
`ci/docker_acceptance_tests.sh` to run with no options to build all
docker images for the architecture.
Removing the freeform description labels left the container metadata
without a description label. This commit adds a description under the
"org.opencontainers.image.description" label
Clean backport of #12426
Where available, this commit adds information from getSourceWithMetadata to the
error message of UnexpectedTypeException, falling back to `toString`
if not, giving more context to find where the issue is caused in the configuration.
Clean backport of #12394
This commit adds context to the pipeline to pipeline input and output
plugins by adding a string containing the `address` field to the input
plugin, and an array containing the `send_to` field to the output plugin.
This helps gain a picture of how pipeline to pipeline enabled configurations
are communicating with each other, without having to refer back to the pipeline
definition
add wildcard support in xpack pipeline id
do the pattern matching with glob
add warning msg to wildcard with legacy api
check invalid pipeline in bootstrap
test cases for invalid checking
Fixed: #10558
Restructures troubleshooting docs in preparation for expanding content
Adds info for plugin tracing to help users track down plugins that might be causing problems
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Backports: #12270, Fixes: #12228
Moves Cloud info to Configuration section to make it more obvious and easier to find
Expands content for using cloud id and cloud auth outside of modules
Moves module-specific info into modules section
Backports #11884 to 7.x
Clean backport of #12346
This commit adds an extra optional column 'sourceURL' to the license report. This
column contains a pointer to the source code, which is optional for most dependencies,
but a requirement for some, such as the Red Hat Universal Base Image.
This commit also populates the 'copyright' field, which previously was an unused
column in the CSV definition.
Relates #12297
to avoid gems being resolved from the usual LS GEM_HOME
this is problematic for gems such as jruby-openssl which are loaded
during boot (by RGs/Bundler) and thus activated in Bundler from a
different GEM_HOME. if such a gem is updated it won't end up being
installed in the --path location as it's found on the GEM_HOME!
+ Fix: gem conflict 1.3.6 required by core
this is due now isolating GEM_HOME on `bundle install --path`
+ Refactor: we do not need LS_GEM_HOME/PATH
+ avoid pinning jruby-openssl to 0.10.4
resolves GH-12299 (reverting GH-12301)
Clean backport of #12335
When deleting temporary files created by the DLQ writer to store data before moving to their
final location, Windows may leave these files in a "delete pending" state, where the files
are somewhat in a state of limbo: the result of `Files.exists(filename)` is `false`,
but the result of `filename.toFile().exists()` is `true`. When files are in this state, a new
file with the same name cannot be created, which causes the DLQ test that ensures the DLQ can
be closed and reopened (in events such as a pipeline restart) to fail.
This commit moves the temporary file to an alternative location before deletion, ensuring that
the "delete pending" status does not interfere with DLQ startup.
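A minimal sketch of the move-before-delete workaround described above, with illustrative names (not the actual DLQ code):
```
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class WindowsSafeDelete {
    /**
     * Deleting a file in place can leave it in a Windows "delete pending" state,
     * blocking re-creation of a file with the same name. Moving it aside first
     * means the pending delete applies to a throwaway name instead.
     */
    static void deleteTemporaryFile(Path tempFile) throws IOException {
        Path deletedTarget = tempFile.resolveSibling(tempFile.getFileName() + ".deleted");
        Files.move(tempFile, deletedTarget, StandardCopyOption.REPLACE_EXISTING);
        Files.delete(deletedTarget);
    }
}
```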
This addresses an incomplete fix in #12019, starting in 7.8.1, where upon catching a worker exception (to avoid crashing the whole Logstash per #12306) the input plugin(s) were not terminated prior to closing the pipeline. This led to the input plugin(s) continuing execution and failing with an IllegalStateException ("Tried to write to a closed queue"), since closing the pipeline also correctly closes the queue.
Clean backport of #12304
This commit changes the DLQ writer to write to a temporary file
which will be renamed on "completion", to avoid the possibility
of the DLQ reader reading an incomplete DLQ segment. The temp file
will be renamed and made available, either when the capacity of this
segment is reached, or if a configurable 'flush interval' has elapsed
since the last event reached the dead letter queue.
This commit fixes #8022, #10275, #10967
This commit replaces #11127
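A minimal sketch of the write-to-temp-then-rename pattern described above; the segment naming and helper names are illustrative, not the actual DLQ writer:
```
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class SegmentFinalizer {
    /**
     * Readers only ever see *.log files; the writer appends to a *.log.tmp file and
     * atomically renames it once the segment is complete (capacity reached or flush
     * interval elapsed), so a half-written segment is never picked up.
     */
    static Path finalizeSegment(Path tempSegment) throws IOException {
        String name = tempSegment.getFileName().toString();          // e.g. "3.log.tmp"
        Path finalSegment = tempSegment.resolveSibling(
                name.substring(0, name.length() - ".tmp".length())); // -> "3.log"
        return Files.move(tempSegment, finalSegment, StandardCopyOption.ATOMIC_MOVE);
    }

    static void appendToSegment(Path tempSegment, byte[] serializedEvent) throws IOException {
        Files.write(tempSegment, serializedEvent,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```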
Clean backport of #12314
The `shouldRunAfter` specified in the main script body was causing the runIntegrationTests
task to be evaluated even when it should not have been, causing unnecessary failures
when artifacts required only for integration tests are unavailable.
This can be removed, because the `shouldRunAfter` relationship for the `runIntegrationTests`
task is already defined in the task body.
* replace direct hidden indices access with system indices api
* fulfill backward compatibility
* fix log msg, rename class, simplify response handling
* modularise fetcher
Implements a plugin `ecs_compatibility` option, whose default value is powered
by the pipeline-level setting `pipeline.ecs_compatibility`, in line with the
proposal in elastic/logstash#11623:
In order to increase the confidence a user has when upgrading Logstash, this
implementation uses the deprecation logger to warn when `ecs_compatibility` is
used without an explicit directive.
For now, as we continue to add ECS Compatibility Modes, opting into a
specific ECS Compatibility mode at a pipeline level is considered a BETA
feature. All plugins using the [ECS Compatibility Support][] adapter will
use the setting correctly, but pipelines configured in this way do not
guarantee consistent behaviour across minor versions of Logstash or the
plugins it bundles (e.g., upgraded plugins that have newly-implemented an ECS
Compatibility mode will use the pipeline-level setting as a default, causing
them to potentially behave differently after the upgrade).
This change-set also includes a significant amount of work within the
`PluginFactory`, which allows us to ensure that pipeline-level settings are
available to a Logstash plugin _before_ its `initialize` is executed,
including the maintaining of context for codecs that are routinely cloned.
* JEE: instantiate codecs only once
* PluginFactory: use passed FilterDelegator class
* PluginFactory: require engine name in init
* NOOP: remove useless secondary plugin factory interface
* PluginFactory: simplify, compute java args only when necessary
* PluginFactory: accept explicit id when vertex unavailable
* PluginFactory: make source optional, args required
* PluginFactory: threadsafe refactor of id duplicate tracking
* PluginFactory: make id extraction/generation more abstract/understandable
* PluginFactory: extract or generate ID when source not available
* PluginFactory: inject ExecutionContext before initializing plugins
* Codec: propagate execution_context and metric to clones
* Plugin: intercept string-specified codecs and propagate execution_context
* Plugin: implement `ecs_compatibility` for all plugins
* Plugin: deprecate use of `Config::Mixin::DSL::validate_value(String, :codec)`
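As a rough, hypothetical illustration of the default-resolution behavior described above (the real implementation spans the Ruby plugin base class and the Java PluginFactory; the names below are invented):
```
import java.util.function.Consumer;

public class EcsCompatibilityResolver {
    /**
     * Hypothetical resolution of a plugin's ecs_compatibility value: an explicit
     * per-plugin directive wins; otherwise the pipeline-level
     * pipeline.ecs_compatibility setting is used and a deprecation warning is
     * emitted so users are not surprised when plugin defaults change.
     */
    static String resolve(String explicitPluginValue,
                          String pipelineLevelDefault,
                          Consumer<String> deprecationLogger) {
        if (explicitPluginValue != null) {
            return explicitPluginValue;
        }
        deprecationLogger.accept(
            "Relying on the default value of `pipeline.ecs_compatibility`, which may change in a "
            + "future release of Logstash. To avoid unexpected changes when upgrading, please "
            + "explicitly declare your desired ECS Compatibility mode.");
        return pipelineLevelDefault;
    }
}
```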
Clean backport of #12302
This commit adds the ability for the docker build to build artifacts for multiple architectures.
By default, the target architecture is inferred from the architecture of the machine the build is being
run from - running the build from an aarch64 machine will build an aarch64 docker image, while building
from an x86_64 machine will build an x86_64 docker image.
This can be overridden by setting the environment variable DOCKER_ARCHITECTURE to either `x86_64` or
`aarch64`.
This commit also updates the integration tests to test against the architecture from the machine the test
is being run on, and includes the target architecture in the test description.
Changed Linux creation artifacts (tar.gz/deb/rpm) to include the ARM JDK.
Extracted common parts of artifact.rake into functions to be shared between ARM and Intel bundling tasks
Creates new artifacts with a bundled JDK for the supported platforms on x86_64. JDK packages are downloaded from the AdoptOpenJDK site; the selected version is loaded from `versions.yml`.
Also changed the launch scripts to give precedence to JAVA_HOME, then fall back to the bundled JDK if present, and as a last resort use the system Java.
New artifacts produced with the bundled JDK are:
- tar.gz with JDK for Linux and Darwin
- zip file for Windows
- deb and rpm
- Docker image
All artifacts without the JDK are now postfixed with '-no-jdk', while the ones with the JDK included have the architecture extension.
Covered the touched parts with tests.
Co-authored-by: Rob Bavey <robbavey@users.noreply.github.com>
Elevates visibility of Offline Plugin Management section so that air gapped users
don't have to struggle through instructions that require an internet connection.
Backports: #12283
Related: #12280
Clean backport of #12242
This commit includes the required changes to pass RedHat docker image certification.
This includes:
Moving license files to /licenses folder
Adding required base labels for name, description, vendor and summary
Relates: https://github.com/elastic/dev/issues/1287
Clean backport of #12233
This commit is intended to fix thread safety issues with the JavaKeystore implementation of the secret store.
From reading the code, it appears that thread safety for the keystore was intended to be provided by
a ReentrantReadWriteLock, a read lock for accessing secrets from the keystore, and a write lock for updating
secrets in the keystore.
In practice, this was insufficient: the act of accessing a secret from the keystore involved the mutation of
a shared keyStore object - the keyStore is `load`ed every time a secret is retrieved from the store.
Previous to https://github.com/elastic/logstash/pull/10794, this did not matter, each pipeline held its
own instance of the secret store, effectively meaning that only a single thread would ever access a key
store at any one time. This PR moved to using a shared keystore instance for substitution variables,
exposing the lack of thread safety in the JavaKeystore class.
This commit is intended to be the simplest change to fix the underlying issue, and does not address whether
we *need* to reload the secrets every time they are read.
Relates #12229
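A minimal sketch of the locking concern described above, assuming that retrieving a secret must reload shared state and therefore cannot safely run under the read lock alone; the names are illustrative, not the actual JavaKeystore code.
```
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReloadingSecretStore {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<String, char[]> keyStore = new HashMap<>(); // stands in for the java.security.KeyStore

    /**
     * Retrieval reloads the shared keyStore state, i.e. it mutates shared data,
     * so a read lock is not enough: concurrent readers would corrupt the reload.
     * Guarding the reload with the write lock is the simplest correct fix.
     */
    public char[] retrieveSecret(String identifier) {
        lock.writeLock().lock();
        try {
            reloadFromDisk(); // the mutation that made the read lock insufficient
            char[] secret = keyStore.get(identifier);
            return secret == null ? null : secret.clone();
        } finally {
            lock.writeLock().unlock();
        }
    }

    public void persistSecret(String identifier, char[] secret) {
        lock.writeLock().lock();
        try {
            keyStore.put(identifier, secret.clone());
            flushToDisk();
        } finally {
            lock.writeLock().unlock();
        }
    }

    private void reloadFromDisk() { /* illustrative placeholder */ }
    private void flushToDisk()    { /* illustrative placeholder */ }
}
```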
when running a pipeline with ordered execution, flushes on the pipeline
were no longer being called when compute is called with an empty batch, causing
issues with the aggregate filter, for example, not being able to push events on
timeout.
Exposes the xpack management proxy setting in docker (xpack.management.elasticsearch.proxy).
Also surfaces the same proxy setting in the sample config.
This commit adds the docker_ubi8 rake task, and associated
changes to the docker template and makefiles.
This commit also refactors the acceptance tests to extract xpack tests
into a helper class to allow the same tests to be used in both 'full'
and 'ubi8' docker image tests
Our internal representation of the composite config file needs only to inject
newline delimiters if they are missing, and to avoid doing so if they are
present. This allows `PipelineConfig#sourceReferences()`, used to map back from
the composite line/column to source file/column, to correctly track an offset
using the source fragments `SourceWithMetadata#getLinesCount()`.
Fixes: #12155
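A small, hypothetical sketch of the conditional newline injection and line-offset bookkeeping described above (not the actual PipelineConfig/SourceWithMetadata code):
```
import java.util.ArrayList;
import java.util.List;

public class CompositeConfig {
    /**
     * Joins config source fragments into one composite string, adding a newline
     * only when a fragment does not already end with one, and records the starting
     * line of each fragment so composite line numbers can be mapped back to sources.
     */
    static String join(List<String> fragments, List<Integer> startingLinesOut) {
        StringBuilder composite = new StringBuilder();
        int currentLine = 1;
        for (String fragment : fragments) {
            startingLinesOut.add(currentLine);
            composite.append(fragment);
            currentLine += countLines(fragment);
            if (!fragment.endsWith("\n")) {
                composite.append('\n'); // inject only when missing, so offsets stay consistent
            }
        }
        return composite.toString();
    }

    private static int countLines(String fragment) {
        // a fragment with no trailing newline still occupies one more line in the composite
        int lines = 0;
        for (int i = 0; i < fragment.length(); i++) {
            if (fragment.charAt(i) == '\n') lines++;
        }
        return fragment.endsWith("\n") ? lines : lines + 1;
    }

    public static void main(String[] args) {
        List<Integer> starts = new ArrayList<>();
        String composite = join(List.of("input { stdin {} }\n", "output { stdout {} }"), starts);
        System.out.println(starts);  // [1, 2]
        System.out.print(composite); // two lines, one per fragment
    }
}
```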
Accidentally succeeding at connecting to an HTTP resource that is not a real,
live Elasticsearch (such as an Elastic Cloud instance that has been shut down
and reaped) can cause client initialization to fail.
Clean backport of #12135
This commit adds integration tests for the Logstash docker images. Previous
integration tests were removed in https://github.com/elastic/logstash/pull/10693,
due to the tests being non functional.
The commit adds image and container tests. The image tests check the contents and the
metadata of the image; the container tests check the logstash process, and includes tests
ensuring that logstash runs, and is configurable.
This test also adds a ci script to allow the tests to be run on jenkins, and to split the
running of these tests up based on the image type and includes updates to the rake tasks to
support this.
During the development of PR #11541, to directly ship monitoring data to a monitoring ES cluster without hopping through a production ES cluster, the settings for the elasticsearch output were cloned into a version without the `xpack` prefix.
Since that feature has been removed, the settings should also be removed from the Docker image.
In PR #11799 we missed exposing the proxy setting as a docker env variable, so that users can connect the dockerized Logstash to a proxied monitoring cluster.
Prior to this commit, the value of `org.label-schema.license` and
the values in `org.opencontainers.image.*` were not set, and therefore
would be inherited from the base OS image.
Although fossa has a fossa init tool to auto discover dependencies,
it doesn't work well for Logstash.
The mix of JRuby and Java allows for correct gradle detection but for
Ruby we tell FOSSA to look at the lockfile, which we generate using
ci/bootstrap_dependencies.sh
This is a work in progress and covers 99% of our dependencies.
As we get comfortable we'll have to uncomment a few ruby subprojects
contained in the logstash source tree.
Integration plugins need a different header. For example, the plugin docs should
point to the integration repo rather than the input, output, filter, or codec
repo. The new header also includes boilerplate text to indicate that the individual
plugin is part of an integration rather than stand-alone. This work implements needed
changes.
This is a temporary fix.
Currently the check task depends on the integration tests task,
which means all dependant tasks will be resolved even if they're just
registered instead of created.
This resolution is a problem because the downloadES task will fail
if, for the version we're building, Elasticsearch doesn't yet have a
build we can download.
So for now we'll remove this to unblock builds, but finding a way
to compartmentalize failures is needed going forward.
Fixes a regression introduced with the api_key support for xpack monitoring and management in #11864, which made it impossible to use no authentication at all (for example, relying on the default options and only enabling monitoring). It now ignores the default username option when no password is explicitly set.
A pipeline in the process of being created was not marked as such in the pipeline registry, resulting in a situation where a slow-to-initialize pipeline could be recreated on state convergence, resulting in a PQ LockException because that pipeline already existed and held the PQ lock. Replaces native Java concurrency with a Ruby Mutex for a simpler and more straightforward implementation.
The worker threads were not correctly monitored for a worker loop exception, resulting in a complete Logstash crash upon any exception, even when multiple pipelines are running. Now only the failed pipeline is terminated. If pipeline reloading is enabled, it is possible to edit the config and have that failed pipeline reloaded.
Release Manager builds were failing as the `downloadEs` task was being
needlessly run during the `rake artifact:all` task. When run with
`RELEASE=1`, this caused build failures due to the non-availability
of Elasticsearch release artifacts. This commit aims to avoid running
the `downloadES` task when it is not needed, continuing the work done
in #11914
This commit also removes code that was repeated in different parts of
the build script.
This commit updates the kafka setup scripts to ensure that the kafka setup is clean between builds, by
setting an explicit zookeeper data directory to be cleaned each time, and correctly overriding `log.dirs`
instead of `log.dir` to ensure that the kafka logs are written and wiped in a consistent place each time,
which helps when using the non-immutable images used in arm64 tests.
This commit clears the `JAVA_HOME` variable when starting Elasticsearch
to force it to use the bundled version of the JDK, rather than the
default `JAVA_HOME` from the machine Logstash integration tests are being
run on, and removes the likelihood of tests failing to run due to `JAVA_HOME`
being set to a non-compliant JDK.
Many DLQ and PQ tests were run with default settings for their size,
or otherwise set to values such as 1GB.
This meant that the container/machine the test ran on needed a lot of disk space
(i.e. 1GB for the test + 1GB for OS and Logstash + some more free space).
This commit drops the disk space requirement overall by a factor of 10 (e.g. 1GB to 100MB)
Integration tests may fail during elasticsearch teardown, as currently
the stop_es function sends a `SIGTERM` to Elasticsearch, but does not
wait for the process to exit. That can lead to issues when deleting
data directories from a still running process. This commit adds
wait functionality to `stop_es` to wait for a short period of time,
sending a `SIGKILL` if Elasticsearch does not terminate in time.
Clean backport of #12061
this test only called "pipeline.start" and expected an output to receive
the "do_close" method call. Therefore it relies on a race condition that
the pipeline shuts down quickly enough, which could fail.
This change ensures the pipeline fully terminated before making the
assertion
This commit also removes an unused let from the time when logstash
output plugins had workers.
This test relies on the generator input sending 10 events and observing
the output. Calling shutdown immediately after start can cause the
plugin to abort early.
This change waits for the execution to finish before performing the
test assertions.
The creation of a Ruby thread from Java seems to be a trigger
for jruby/jruby#6207.
Pipeline#shutdown now blocks on the ShutdownWatcher#start, which will wait for
pipeline.finished_execution? to be true.
This removes the need for the pattern:
`pipeline.shutdown { block } && pipeline.thread.join`
And can be replaced with just `pipeline.shutdown`
To avoid having `shutdown` blocked waiting for ready? when pipeline crashes too quickly,
this method returns immediately if finished_execution? is true.
Most uses of pipeline#run have also been replaced by pipeline#start
since the latter will block until the pipeline is ready, again avoiding
the pattern:
`pipeline.run && sleep 0.1 until pipeline.ready?`
Pipeline tests have been changed according to these two changes.
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
The reduction from 500 to 100 is based on observations where 06d7f01fd
reduced the number of generated classes by about an order of magnitude
especially on very large pipelines (e.g. from ~600 to ~30).
Generated implementations of `Dataset` often have fields referencing
other specifically-generated `Dataset`, despite only using public methods
(`compute` and `clear`) defined on the `Dataset` interface.
By allowing the code generation to reference the interface instead of
the specific implementation, we eliminate the need for the compiler to chain
parent class loaders indefinitely, thereby eliminating the need for a global
mutual exclusion when compiling.
This change moves the locking semantics from the compiler to the non-evicting
cache itself, relying on tried-and-true `ConcurrentHashMap#computeIfAbsent`
to minimize synchronization and cache-priming stampedes.
This also vastly reduces the scope of `Dataset` implementations that we need
to generate, because datasets will no longer need to reference the specific
implementation details of all "downstream" datasets and will therefore be more
likely to match an implementation that has already been compiled and cached.
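A compact sketch of a non-evicting, `ConcurrentHashMap#computeIfAbsent`-based cache of the kind described above; the key and value types are simplified placeholders rather than the actual compiler classes.
```
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class CompiledClassCache<K, V> {
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> compiler;

    public CompiledClassCache(Function<K, V> compiler) {
        this.compiler = compiler;
    }

    /**
     * computeIfAbsent guarantees the expensive compile step runs at most once per key
     * without a global lock around the compiler: only threads racing on the *same*
     * generated source block each other, which avoids cache-priming stampedes.
     */
    public V get(K generatedSource) {
        return cache.computeIfAbsent(generatedSource, compiler);
    }
}
```
Because the map never evicts, the number of distinct keys has to stay bounded, which is exactly what referencing the `Dataset` interface (rather than concrete downstream implementations) helps achieve.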
In time comparisons of LocalDateTime, isBefore is strict, so when two instants have the same millisecond it fails in tests (this happens in Windows tests).
Closes: #11862
Backport of #11862 to 7.x
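The strictness is easy to demonstrate: `isBefore` rejects equal instants, while `!isAfter` accepts them. A generic illustration (not the original spec code):
```
import java.time.LocalDateTime;

public class StrictComparison {
    public static void main(String[] args) {
        LocalDateTime start = LocalDateTime.of(2021, 1, 1, 12, 0, 0, 0);
        LocalDateTime event = start; // same instant, as can happen on fast Windows runs

        System.out.println(start.isBefore(event));  // false: isBefore is strictly "less than"
        System.out.println(!start.isAfter(event));  // true:  "less than or equal", tolerant of ties
    }
}
```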
Refactor: move PipelineConfig from Ruby to Java
Reimplements the Ruby class PipelineConfig in Java, trying to keep the method signatures to limit the changes in client code; this is one step of several that intend to move all the configuration code to Java.
Having all that code in Java unlocks some reasoning about how to better implement it, and probably an improvement in performance during process startup.
Also moved the spec into a JUnit test and fixed the failing tests here and there.
Closes: #11824
Changed ComputeStepSyntaxElement to generate Java code that retrieves the plugin's id via a method instead of hardcoding the value in the generated code.
This permits sharing more compiled classes that differ only by plugin.id, and speeds up pipeline compilation.
The change has been secured against future regression with a unit test that tracks pipeline compilation times.
Co-authored-by: Andrea Selva <andsel@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Fixes: #12031
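Roughly, the change is the difference between baking the id into the generated source and reading it from a per-instance field; a simplified, hypothetical illustration of the idea (not the actual generated Dataset code):
```
// Before: the plugin id is a literal inside the generated source, so two otherwise
// identical steps with different ids compile to two distinct classes, e.g.
//   events.forEach(e -> tagWith("my_filter_0042", e));
//
// After: the generated class reads the id from a constructor-injected field, so the
// same compiled class can be reused by every plugin instance sharing that code shape.
public class GeneratedFilterStep {
    private final String pluginId;

    public GeneratedFilterStep(String pluginId) {
        this.pluginId = pluginId;
    }

    public String getPluginId() {
        return pluginId; // resolved per instance instead of per generated class
    }
}
```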
Having the jar around would allow us to fine tune logging for libraries
such as manticore's http-client (4.5) using LS's `log4j2.properties`
e.g.
```
logger.apache_http_headers.name = org.apache.http.headers
logger.apache_http_headers.level = DEBUG
```
... to log http headers for each request
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Enable Filebeat and Elasticsearch downloads to pull different
architectures. Filebeat and Elasticsearch use different suffixes
to denote their aarch64 architectures, with Beats using arm64 and
Elasticsearch aarch64.
Carrying on from the work done in #11958, update the gradle build to download
the same version of Elasticsearch as is specified in the logstash version.yml file.
This commit updates the standard integration tests to use the same version of
Elasticsearch that is already downloaded for x-pack integration tests, and also
fixes integration tests to allow for the different responses around hits generated
by different versions of Elasticsearch.
JSON.load allows the creation of complex objects, and should not
be given untrusted input. This commit changes the only three uses
of JSON.load in the codebase, which aren't user facing or present
in the bundled product, so not really an attack vector.
Clean backport of #11958
This commit changes the download to pull the version of beats based on the version pulled from the branch rather than from an environment variable, or 6.5.4.
This commit also moves the download logic of Filebeat from filebeat_setup.sh to build.gradle in order to use the artifacts API in the same way as the downloadEs task, and does some refactoring to DRY up the artifact download tasks.
This commit also fixes the beats integration test to replace the use of a removed setting.
This commit also sets retries to 3 for the download tasks, using 'retries' functionality from gradle download task plugin
This work breaks out the JVM setting info into a new section, and
expands and updates the content. It adds new subheadings to make
scanning the content easier.
Ruby produces a `LocalJumpError: unexpected return`
error if there's a return in a block, so this change just uses
the value of the last expression as the value of the block.
The 'prepare_offline_spec.rb' is failing due to a change in the warning message
from JDK11 to JDK14, and JAVA_TOOL_OPTIONS being passed in as an environment
variable by Jenkins, which was not happening before due to the dockerized
environment.
Fixes#11933
Backport of #11931
Escape test fixture service scripts to avoid test failures when run in
Jenkins using multiple yaml configuration files, which causes directories
to be constructed like `centos-7&&immutable` which cause issues with
the service runners cutting off directory locations before '&&'
This commit deviates from the original commit by not setting @setup_script
and @teardown_script variable with the Shellwords escape, as this was removed
in a subsequent commit (#11944)
Try system ruby, then LS_HOME/bin/ruby, then relative path from
script to LS_HOME/bin/ruby. Use LS_RUBY_HOME variable to avoid
testing again on subsequent attempts to wait for port.
This adds the .ci/matrix-runtime-javas.yml file that defines all
the JDKs logstash could be tested against. This is meant to be
used for the Matrix Combinations Jenkins plugin to be able to
select which JDK to test against dynamically.
Avoids reassigning the subdocument for queue metrics, preferring a merge.
With PR #10576, PluginsStats.report(stats) overwrites the subsection related to the queue instead of merging it with the newly created entries.
Fixes#11970
Some QA tests read the FEATURE_FLAG environment variable, for example to test PQ functionality.
This PR passes the environment variable through to the Docker instance.
Fixes#11970
openjdk14 appears to be the only version of java14 installed on jenkins windows
worker nodes, so use this instead of zulu14 and adoptopenjdk14
Backport of #11971
Backport of #11944
A previous commit attempted to fix this issue by adding Shellwords.escape to setup_script and teardown_script locations, but File.exists? returns false when called against a filename escaped by Shellwords.escape. This commit localizes the escaping to where the
file is executed.
This commit also adds Shellwords.escape to teardown script runner and the method used to execute logstash to retrieve version. This is to enable tests to run correctly when Jenkins creates execution environments with folders named with &&, eg centos-7&&immutable
When running the filebeat integration tests on centos-7, the tests
fail due to permissions checks on the temporary configuration file
created for the test. This commit sets strict permissions checks
to false in order for the tests to be able to succeed.
Fixes#11949
Fixture test scripts use `nc` to wait for the port to determine
whether a test fixture is up and running. This commit adds a fall
back option to sleep if `nc` is not available - it is not installed
on Jenkins centos worker nodes.
Fixes#11942
* Use task avoidance API in gradle scripts
This commit uses the task avoidance api (tasks.register vs task.create/
task DSL), as recommended since Gradle 5.1
This should reduce the execution of unnecessary tasks in build jobs, and
hopefully improve build resiliency and execution time.
Kafka teardown script can exit with failure, typically when trying to
stop the broker. This commit logs the error code if the scripts fail
rather than crash out causing build failure.
Fixes#11905
`RubyString#gsub` requires a (Ruby) frame to be present.
The method attempts to set a backref for the current caller's frame.
When the frame stack is empty there isn't really a place to set $~.
This can happen when a LogStash::Util::Loggable#logger is retrieved,
from the input worker thread while not being nested in any block.
Fixes#11874
Uses copy-on-write semantics for Strings when deep cloning,
motivated by "big" events reaching plugins such as split,
which might produce several new events out of a single one.
Fixes#11794
Often when browsing logstash logs for debugging purposes we miss the
information about the Java version and platform being used.
Printing the global RUBY_PLATFORM gives us all of this information
plus the JRuby version as well.
Fixes#11852
logstash_admin role is not enough.
As the ls-security page mentions correctly:
"The user you specify here must have the built-in logstash_admin role as well as the logstash_writer role that you created earlier"
Updates static settings for extra role needed
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Edu González de la Herrán <25320357+eedugon@users.noreply.github.com>
* Defined the versions of JDK to use in test build separated by OS (#11768)
* Added JDK 11 and 14 to Unix testing matrix (#11801)
- OpenJDK14 AdoptOpenJDK11 Zulu11
- OpenJDK14 AdoptOpenJDK14 Zulu14
Closes #11801
Co-authored-by: Andrea Selva <andrea.selva@elastic.co>
JDK14 prefers the use of `getCpuLoad` over `getSystemCpuLoad`. This commit
reworks the call to use reflection to use the appropriate method call
depending on the version of the JDK being used.
Fixes#11786
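A hedged sketch of the reflective fallback described above; `getCpuLoad` and `getSystemCpuLoad` are real methods on com.sun.management.OperatingSystemMXBean, while the wrapper class below is illustrative and not the actual Logstash code.
```
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.reflect.Method;

public class CpuLoadProbe {
    /**
     * JDK 14+ prefers getCpuLoad() and deprecates getSystemCpuLoad(); older JDKs only
     * offer the latter. Looking the method up reflectively on the exported
     * com.sun.management interface lets one binary work on both.
     */
    public static double systemCpuLoad() {
        OperatingSystemMXBean bean = ManagementFactory.getOperatingSystemMXBean();
        try {
            Class<?> sunBean = Class.forName("com.sun.management.OperatingSystemMXBean");
            if (sunBean.isInstance(bean)) {
                for (String name : new String[] {"getCpuLoad", "getSystemCpuLoad"}) {
                    try {
                        Method method = sunBean.getMethod(name);
                        return ((Number) method.invoke(bean)).doubleValue();
                    } catch (NoSuchMethodException e) {
                        // not available on this JDK, fall through to the older accessor
                    }
                }
            }
        } catch (ReflectiveOperationException e) {
            // the com.sun.management extension is unavailable; fall through
        }
        return -1.0; // conventional "not available" value for these beans
    }

    public static void main(String[] args) {
        System.out.println(systemCpuLoad());
    }
}
```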
In some workflows such as simple file manipulation, starting a webserver is
unnecessary overhead, and we should be able to avoid it.
Here we introduce a new parameter `http.enabled`, which defaults to `true` to
maintain the existing functionality.
Resolves: elastic/logstash#9408
Closes: elastic/logstash#11525
Co-authored-by: Benoit Dupont <benoit.dupont@gmail.com>
Fixes#11533
We have "required" units for a variety of `TimeValue` settings when they are
provided as a `String`, but unquoted values in YAML have been passed through as
Integers, where we long assumed nanosecond units. This frequently leads to
surprise (e.g., when `config.reload.interval` is set to `60`, we consume 100%
of CPU in a tight loop trying to reload and re-parse the configs every 60
nanoseconds).
By making the setting retain the TimeValue object for the entirety of its
lifecycle, we can issue a deprecation notice the first time an Integer value is
encountered. As a secondary benefit, our usage of the setting value in code
becomes more clear since we are empowered to ask `TimeValue` for a numeric
value in a specific scale.
Fixes#11803
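A hypothetical sketch of the coercion behavior described above: a String carries explicit units, while a bare Integer is accepted as nanoseconds with a one-time deprecation notice. The class and parsing rules are illustrative, not Logstash's actual setting classes.
```
import java.time.Duration;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical time-value setting: keeps units explicit for Strings, accepts bare
// integers as nanoseconds for backwards compatibility, and warns exactly once.
public class TimeValueSetting {
    private final AtomicBoolean deprecationEmitted = new AtomicBoolean(false);

    public Duration coerce(Object value) {
        if (value instanceof String) {
            return parseWithUnits(((String) value).trim()); // e.g. "60s", "3m"
        }
        if (value instanceof Number) {
            if (deprecationEmitted.compareAndSet(false, true)) {
                System.err.println("DEPRECATED: integer time values are interpreted as nanoseconds; "
                        + "use a string with explicit units instead (e.g. \"60s\").");
            }
            return Duration.ofNanos(((Number) value).longValue()); // legacy behavior
        }
        throw new IllegalArgumentException("Cannot coerce " + value + " to a time value");
    }

    private Duration parseWithUnits(String text) {
        long amount = Long.parseLong(text.replaceAll("[a-z]+$", ""));
        String unit = text.replaceAll("^[0-9]+", "");
        switch (unit) {
            case "ms": return Duration.ofMillis(amount);
            case "s":  return Duration.ofSeconds(amount);
            case "m":  return Duration.ofMinutes(amount);
            case "h":  return Duration.ofHours(amount);
            default:   throw new IllegalArgumentException("Missing or unknown time unit: '" + unit + "'");
        }
    }

    public static void main(String[] args) {
        TimeValueSetting reloadInterval = new TimeValueSetting();
        System.out.println(reloadInterval.coerce("60s")); // PT1M
        System.out.println(reloadInterval.coerce(60));    // PT0.00000006S -- the surprising legacy reading
    }
}
```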
* [Doc] added description of xpack.monitoring.collection.write_direct.enabled setting
* Added a page marking the legacy internal collector as deprecated and fixed all the `xpack.monitoring.*` references
* Included legacy collector file into monitoring overview
* Restructure monitoring docs
* Incorporate review comments
Co-authored-by: andsel <selva.andre@gmail.com>
Fixes#11787
The separator vertices are an implementation detail of the serialized
output of the LIR, and are not meaningful to the pipeline viewer.
This commit removes the separator vertices, and reworks the edges to
account for this.
Fixes#11779
In the docs templating, the plugin name is used to autogenerate a code example of how
to configure a specific plugin. As such, if a plugin name is different from how you
configure it, this results in an example of how to configure the plugin with an
incorrect name.
This changes the java_sink plugin name to sink to correctly autogenerate the example.
Fixes: #11675, Fixes: #11214, Fixes #11782
Loading a Java Keystore can take anywhere from ~0.3s to upwards of 3s, so the
pattern of loading one per variable we need to replace adds a significant
amount of overhead on pipelines that use these variables, whether or not they
are provided by the keystore.
By providing a private, constant, lazy singleton, we ensure that we don't
incur the cost of repeatedly building the keystore.
Fixes#10794
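A minimal sketch of a lazy, constant singleton of the kind referenced above; the loader below is a placeholder for the expensive keystore construction, not the actual implementation.
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SecretStoreHolder {
    // The holder idiom: LazyHolder is not initialized until the first getInstance()
    // call, and the JVM guarantees that initialization happens exactly once, so the
    // 0.3s-3s keystore load is paid a single time instead of once per substitution.
    private static class LazyHolder {
        static final Map<String, String> INSTANCE = loadKeystore();
    }

    public static Map<String, String> getInstance() {
        return LazyHolder.INSTANCE;
    }

    private static Map<String, String> loadKeystore() {
        // placeholder for the expensive load; a real store would decrypt from disk
        return new ConcurrentHashMap<>();
    }
}
```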
* Backport of #11742. Not a clean backport as 7.x had not previously been upgraded to 5.6.4 as master had been.
* Update gradle version to 6.3
Gradle versions prior to 6.3 cannot run under JDK14.
This commit upgrades the version of Gradle to 6.3, and removes all deprecation warnings that can currently be removed.
Changes include:
* Increase gradle memory to 2g
* Increase gradle memory in the license check job to 2g
* Replace use of `testCompile`
* Replace `runtime` with `runtimeOnly`
* Remove `compile` dependencies from gradle files
* Replace deprecated archive methods
* Fix dependencies report build
* Make jruby dependencies 'api', fix archiveVersion
* Set `duplicatesStrategy` for all tasks of type Copy
* Use `configureEach` for global 'withType' calls
** Use the recommended Tasks API calls
(https://blog.gradle.org/preview-avoiding-task-configuration-time)
* Run `./gradlew wrapper` earlier to improve caching
* Use copy with chown for resources that need to be run during `./gradlew wrapper`
7.x clean backport of #11737
cleanup RubyArray "rawtypes"
remove all LinkedHashSet from batch and queue classes
avoid processing empty batches in Java worker loop
cleanup AckedReadBatch and MemoryReadBatch
refactor Ruby worker loop similar to Java Execution to not use batch merge
remove QueueBatch merge and replace LinkedHashSet with ArrayList
while also making the array case cleaner & effective
(JRuby uses specialized array holder for 1 / 2 values)
+ Refactor: minor - use true/false constants directly
+ Refactor: do not allocate empty array
Fixes#11732
- changed `2 seconds` to `2s` for consistency
- exchanged *trace* with *debug* time values and vice versa to be referable to the example above
Fixes#11671
.ruby-version is used to select the external jruby
(for package building + acceptance tests on infra)
reverts the upgraded JRuby version from #11647. Fixes #11663
there's no need for this and it makes the code base inconsistent
... also the original intent seems no longer relevant :
was introduced at 57e7a8a56b
> allows for a massive simplification for the invocation of filters and
outputs from the Java execution
Fixes#11587
reuse rubyArray for single element batches
rename preserveBatchOrder to preserveEventOrder
allow boolean and string values for the pipeline.ordered setting, reorg validation
update docs
yml typo
Update docs/static/running-logstash-command-line.asciidoc
Co-Authored-By: Karen Metts <35154725+karenzone@users.noreply.github.com>
Update docs/static/running-logstash-command-line.asciidoc
Co-Authored-By: Karen Metts <35154725+karenzone@users.noreply.github.com>
java execution specs and spec support
docs corrections per review
typo
close not shutdown
Ruby pipeline spec
* Created release notes for release 7.5.1
* Removed deleteable part
* Updated with formatting changes and minor additions
* Added more info about grok filter update
* Change link format from md to asciidoc
* Update docs/static/releasenotes.asciidoc
Co-Authored-By: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Update docs/static/releasenotes.asciidoc
Co-Authored-By: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Update docs/static/releasenotes.asciidoc
Co-Authored-By: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Update docs/static/releasenotes.asciidoc
Co-Authored-By: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Update docs/static/releasenotes.asciidoc
Co-Authored-By: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Incorporate review comments
Aligns the Ruby/Java returns as they happen in scripted Java;
e.g. since `java.lang.Thread.new` returns a JavaProxy instance,
there's really no reason to use JavaObject, which always needs `to_java`
conversion to be useful (and is considered legacy in JRuby).
This is considered a breaking change: e.g. `LogStash::MemoryReadClient#read_batch`
will now return a proper JavaProxy instead of the JavaObject.
Fixes#11391
this is fairly recent - since 7.4 (added in GH-11075)
there's a risk plugins would assume ThreadContext to
exist or collide with the 'global' constant - usually best
to import where the Java class actually gets used ...
Fixes#11356
use "127.0.0.1" instead of "localhost" to avoid binding to ipv4 and ipv6
don't assume port 10006 will be open in the machine
rely on the ranges and the actual bound port for the assertions
Fixes#11263
Certain malformed field reference literals (e.g., those containing a series of
multiple open-brackets `[[`) were propagated undetected by the parser, only to
create a crashing error when used.
Starting Logstash with the `--config.test_and_exit` flag (or `-t` shorthand)
would validate the config, even though it could not be used in practice.
By updating the grammar(s) to exclude the use of an open square bracket (`[`),
we more closely match the formal grammar and ensure these malformed literals
are rejected closer to the source.
NOTE: this PR only affects field reference _literals_, and does not resolve
a similar issue with field references in quoted format strings.
Resolves: https://github.com/elastic/logstash/issues/11022. Fixes #11195
YAML.parse returns Psych nodes that then need to be converted to plain ruby objects.
Calling YAML.safe_load outputs basic ruby objects already and also increases security as it greatly restricts the classes it deserializes.
Fixes#11208
Quite often we see log entries that are truncated by this limit, since Java stack traces can be very verbose.
This prevents us from seeing the real issue and requires us to ask users to remove the limitation and trigger the issue again so we can see the full problem.
This commit removes this truncation.
Fixes#11206
This commit clarifies that Logstash monitoring metrics should not be
routed through master-only nodes, and should instead prefer coordinating
nodes.
Fixes#11194
Previously we'd only give a pipeline the settings related to pipelines
The PipelineSettings class was used for this.
However a pipeline may need other settings like the keystore location.
For this we instead clone the settings object and merge all the pipeline
specific settings. This is accomplished with a new method that ensures
that only pipeline level settings are overwritten in the clone.
Fixes#11076
* Starting to audit tests
* Additional field checking in stats
* Add ephemeral id
* More tests
* Test new structure of pipeline report
* Add default_metadata testing
* Add node command tests
* add jvm
* test no mutate
* Add check for graph flag
* Break apart test per review suggestion
* Remove test that doesn't test much
Fixes formatting in a table cell in `logstash-monitoring-overview.html`.
A `+` which was required by AsciiDoc was leaking into the output when
the doc is built with Asciidoctor.
In the "methods" sections of the how to develop a plugin docs
Asciidoctor was incorrectly passing backticks into the output when it
should have marked the words surrounded by backticks as code. I'm not
100% sure why it did that. The fix is to force macro evaluation
immediately on attribute assignment.
Users following our documentation are frustrated to discover that they get 403 errors from Logstash, even when following the instructions to the letter. The problem is that the `create` privilege is missing. With this in place, it works as designed.
These changes may need to be back ported to previous branches, too.
Fixes #11013
* Create running-logstash-windows.asciidoc
Initial commit for #4005
* Update running-logstash-windows
1. Added section to validate JVM pre-requisites and shell sections for nssm, task scheduler, and PowerShell
2. Updated options to run Logstash on Windows, update section headers
3. Clarified JVM pre-requisites and included example to add environmental variables using SETX
4. Added example Logstash configuration, added steps for running Logstash manually with PowerShell
5. Removed `WIP` from the PowerShell section; updated the example to include output to Elasticsearch; Added notes for running Logstash as a service with NSSM
6. Removed `WIP` from the NSSM section; Added notes for running Logstash as a Scheduled Task; Added notes to stopping Logstash for each section; Removed `WIP` from the Scheduled Task section; Removed `WIP` from the page header
7. Updated initial section; moved the running manually section as the first configuration; added notes to the NSSM and Schedule Task sections.
8. Push headings down one level
9. Clarify this document contains examples for running Logstash on Windows. Updated which NSSM file should be extracted for use.
10. Updated formatting for the example Logstash configuration
11. Update formatting for the command examples
12. Update the instructions in the Task Scheduler section
13. Update the instructions in the run Logstash manually section, the NSSM section, and update formatting
14. Update formatting
15. Add note regarding support for running multiple pipelines
16. Clarify use of command line options. Re-state what is mentioned in the `Running Logstash from the Command Line` doc that: "Specifying command line options is useful when you are testing Logstash. However, in a production environment, we recommend that you use [logstash-settings-file] to control Logstash execution."
17. Clarify steps to accessing the Windows Environmental Variables window (i.e., link to Microsoft docs).
18. Remove unnecessary plus signs
19. Updated source types for examples, updated documents for specific Logstash versions with `{logstash_version}`
* Update running-logstash-command-line
1. Add note for running Logstash on Windows with `bin\logstash.bat`
2. Update formatting for running Logstash from the Windows command line
Fixes #10946
After the default plugins were removed from the Gemfile.template,
the task that modified the template no longer worked correctly.
This commit either replaces the dependency entry if it exists or
otherwise creates it.
Fixes #10947
- have `bootstrap` task do as little as possible: install gems in Gemfile.template that don't belong to groups
- have test tasks depend on the `installTestGems` task instead of `bootstrap`
- logstash es output is now a dependency because of license checking
- fix out of memory problem in SharedHelpers.trap
Also use release lockfile during installDefaultGems
The release lockfile is only copied to Gemfile.lock if
it doesn't exist. During the `installDefaultGems` task other
plugin installation tasks already occurred, generating a lock file.
This commit removes it before running the plugin installation.
Fixes #10942
Bump JrJackson to 0.4.8
Fixes #10748
LIR serializer refactor
Remove commented code
Remove more commented code
Remove license and add encoding
Style change to make code more vertical.
eid and hash
Use pipelines_info to construct the stats
Add tests for new fields
Add queue stats
* bad merge resolution
* bad merge resolution
* Don't merge if nil
* Better merge strategy
* add vertex gate
* Guard against nil
* Use extended queue stats in pipeline report
* Add cluster uuids to Elasticsearch outputters in pipeline output
* move uuid
* remove old uuid lookup
* Only populate cluster_uuids when present
* remove print
* cluster_uuids -> cluster_uuid
* Update logstash-core/lib/logstash/api/commands/stats.rb
Co-Authored-By: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Update logstash-core/lib/logstash/api/commands/stats.rb
Co-Authored-By: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Update logstash-core/lib/logstash/api/commands/stats.rb
Co-Authored-By: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Make var singular
* Match singular var name
* Remove unnecessary nil check
* Pass in the matching pipeline for the report
* Remove old way of inserting cluster_uuids
* Update logstash-core/lib/logstash/api/commands/stats.rb
I like this much better and in testing it seems to work correctly.
Co-Authored-By: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Remove unreferenced code that was part of debugging
* Remove events var which was unused
* Don't try to remove before insert
* Update logstash-core/lib/logstash/api/commands/stats.rb
Co-Authored-By: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Make pipeline extended stats generation more efficient
* Implement suggestion to improve readability
* Cleaner merging per review recommendation
* Only generate extended_stats once
* remove unneeded comments
* Add cluster_uuid to node vertex
* remove top-level cluster_uuids
* Update logstash-core/lib/logstash/api/commands/stats.rb
Co-Authored-By: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Implement change to make logic more simple suggested in review
* Rely on options gate to insert graph
Resolves concern here:
https://github.com/elastic/logstash/pull/10576#issuecomment-501774635
* Update logstash-core/lib/logstash/api/commands/stats.rb
Co-Authored-By: Ry Biesemeyer <yaauie@users.noreply.github.com>
* Move UUID lookup to API layer
* Move private method to bottom per review recommendation
No Elasticsearch dependency is mentioned in the guide for Logstash up to this point. This would be good for those who are getting started with Logstash without knowing much about Elasticsearch, and who are unaware that it isn't already packaged along with the Logstash install.
Fixes #10852
When using the Jruby event API, re-cast java exceptions produced by illegal
field references to ruby `RuntimeError`s, which can be caught by the ruby-based
plugins.
This is similar to what we already do in the Jruby event API when directly
handling field references, but catches a case where the `Valuifier` encounters
an illegal reference when creating a `ConvertedMap`.
Fixes #10839
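A hedged sketch of what a Ruby plugin can now do; `event` stands for a `LogStash::Event` and the field reference is deliberately malformed.

```ruby
begin
  # "[[" makes this an illegal field reference; with this change the failure
  # surfaces as a Ruby RuntimeError instead of an unhandled Java exception.
  event.set("[[not-a-valid-reference", "value")
rescue RuntimeError => e
  # the plugin can now recover instead of crashing the pipeline
  warn("could not set field: #{e.message}")
end
```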
Now that default plugins are read only from the JSON metadata file and
not from the lock file, the plugin version manifest needs to be adapted
so the status of a "default" plugin is read from the JSON file.
Fixes #10824
Previously the `do_close` method would never be called on a failing
plugin, because the retry call stops the code under `ensure` from
ever being called.
Fixes #10691
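The underlying Ruby behaviour, reduced to a minimal standalone example: `ensure` only runs once control finally leaves the block, so an endlessly retried failure never reaches it.

```ruby
attempts = 0

begin
  attempts += 1
  raise "plugin failed to start" if attempts < 3
  puts "started on attempt #{attempts}"
rescue
  retry           # jumps back to `begin` without running the ensure clause
ensure
  puts "do_close" # runs only after the block finally exits; a plugin that
                  # keeps failing (and retrying) never gets here
end
```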
We need a way for a plugin to register simple metadata about external
resources it connects to in order to implement a Monitoring feature in which
an Elasticsearch Output Plugin can store the connected cluster's uuid (#10602)
Here, we add a generic `LogStash::PluginMetadata` along with a registry, and
expose an accessor on `LogStash::Plugin#plugin_metadata` so that instances
can access their own metadata object.
Fixes #10691
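An illustrative sketch of such a registry; the class and method names are assumptions, not the actual `LogStash::PluginMetadata` implementation.

```ruby
# One mutable metadata object per plugin id, created lazily and guarded by a
# mutex so concurrently starting plugins can register safely.
class PluginMetadataRegistry
  def initialize
    @lock = Mutex.new
    @metadata_by_plugin_id = {}
  end

  def for_plugin(plugin_id)
    @lock.synchronize { @metadata_by_plugin_id[plugin_id] ||= {} }
  end
end

registry = PluginMetadataRegistry.new
registry.for_plugin("es-output-1")[:cluster_uuid] = "hypothetical-uuid"
registry.for_plugin("es-output-1")[:cluster_uuid]  # => "hypothetical-uuid"
```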
In filebeat prospectors settings have been renamed to inputs. When
prospectors is used a deprecation warning is printed. With 7.0
`filebeat.prospectors` will be removed.
This change updates all uses of prospectors with inputs. For now
filebeat events report `prospector.type` and `input.type` for
compatibility reasons.
Fixes #10711
* dont include docker tasks in artifact:all
* don't rebuild tar/zip if source hasn't changed
* allow SKIP_PREPARE to avoid tar creation if no modifications
* don't need a tarball to generate the dockerfile
* remove docker tests as they weren't working anymore
This commit adds a task to produce all necessary files to generate a docker image.
```
% RELEASE=1 rake artifact:dockerfile
....
Dockerfile created in /tmp/elastic/logstash/build/docker
% tree /tmp/elastic/logstash/build/docker
/tmp/elastic/logstash/build/docker
├── Dockerfile
├── bin
│ └── docker-entrypoint
├── config
│ ├── log4j2.properties
│ ├── logstash-full.yml
│ └── pipelines.yml
├── env2yaml
│ └── env2yaml
└── pipeline
└── default.conf
% docker build --rm .
.....
Step 19/20 : LABEL org.label-schema.schema-version="1.0" org.label-schema.vendor="Elastic" org.label-schema.name="logstash" org.label-schema.version="7.0.0" org.label-schema.url="https://www.elastic.co/products/logstash" org.label-schema.vcs-url="https://github.com/elastic/logstash" license="Elastic License"
---> Using cache
---> f622d7555220
Step 20/20 : ENTRYPOINT ["/usr/local/bin/docker-entrypoint"]
---> Using cache
---> b6feba7f4934
Successfully built b6feba7f4934
```
This task works only for releases (not snapshots).
This commit also adds a few tweaks to the artifacts building:
Using `SKIP_PREPARE=1` in `rake artifact:tar` or `rake artifact:tar_oss` will make a check to not rebuild the tarball if there are no code modifications.
These two changes are made since the docker image build is new and we want to keep it out of artifact:all for a while. And if we're running these separately, we want to ensure the tarball already built is used in the docker image (versus building a new one for each `rake artifact:tar`).
This means that, to generate all artifacts including docker images and dockerfile, it's necessary to run:
```
RELEASE=1 rake artifact:all
SKIP_PREPARE=1 RELEASE=1 rake artifact:docker
SKIP_PREPARE=1 RELEASE=1 rake artifact:docker_oss
RELEASE=1 rake artifact:dockerfile
```
Revert "work around jruby-5642 during package installation on jdk11 (#10658)"
This reverts commit 033c896330.
skip the bundler-1.16.6 files when unpacking jruby
Fixes #10674
* Update breaking changes doc for 7.0
Update structure to allow for previous and current changes
* Update docs/static/breaking-changes.asciidoc
Adds info about field reference parser
Co-Authored-By: karenzone <35154725+karenzone@users.noreply.github.com>
* Populate content for plugin changes
Co-Authored-By: karenzone <35154725+karenzone@users.noreply.github.com>
* Fix asciidoc formatting
* Incorporate review comments and new content
* Minor change for clarity
* add anchors for linking
* Add anchor for ecs-beats
* Incorporate review comments
* [DOCS] Adds tagged region for notable breaking changes
* Incorporate review comments
Remove link
Fixes #10666
Introduces two rake tasks, `rake artifact:docker_oss` and `rake artifact:docker`, which will create the docker images of the OSS and non-OSS packages. These tasks depend on the tar artifacts being built.
Also `rake artifact:all` has been modified to also call these two tasks.
most code was moved from https://github.com/elastic/logstash-docker/
A tiny change suggestion due to:
- `-SIGHUP` was mentioned in the previous text
- `-SIGHUP` is easier to read than `-1`
- `-1` can be easily mixed up with `-l`
Fixes #10592
* simplify the plugins-metadata.json file
* sort and update the plugin list in the rakelib/plugins-metadata.json
* remove dependency on twitter input for testing
* sorted Gemfile.template (grouped by group)
* remove default plugins from Gemfile.template
Fixes #10509
This commit fixes a ClassCastException which happens when
a plugin has the `enable_metric` setting set to false - a
NullMetricExt is assumed, but that is only created when
'metric.collect' is set to 'false' in the Logstash configuration,
not when an individual plugin disables its metrics.
Fixes #10538
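For illustration, a tiny null-object sketch of the idea (not the actual `NullMetricExt`): every recording call is accepted and silently ignored, so callers never need to branch on whether metrics are enabled.

```ruby
class NullMetric
  def increment(*_args); end
  def gauge(*_args); end

  def time(*_args)
    yield if block_given?
  end
end

metric = NullMetric.new
metric.increment(:events_in)      # no-op instead of a ClassCastException
metric.time(:filter) { 1 + 1 }    # => 2, timing silently discarded
```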
There are several scenarios in which we can trigger concurrent convergence in
the agent, resulting in two or more threads working to perform interleaved and
potentially conflicting or overlapping pipeline actions. Notably, our trap on
`SIGHUP` will be resolved in its own thread, so if we are sent `SIGHUP` while
in the process of converging, the second in-flight convergence may get its
starting state before, during, or after the effects of the first convergence.
By mutually excluding execution of the convergence cycle, we eliminate the
class of bugs in which one convergence acquires actions that cannot succeed due
to the prior success of actions given to the other convergence.
Fixes #10537
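A minimal sketch of the mutual-exclusion idea; the method and variable names are illustrative, not the agent's actual internals.

```ruby
CONVERGENCE_LOCK = Mutex.new

# Called both by the periodic converger and by the SIGHUP trap handler.
# Serializing the whole cycle guarantees that one convergence never computes
# actions from a state that another in-flight convergence is still changing.
def converge_state_and_update(compute_actions)
  CONVERGENCE_LOCK.synchronize do
    compute_actions.call.each(&:call)
  end
end

# Example: each caller's actions run one full cycle at a time.
converge_state_and_update(-> { [-> { puts "start pipeline main" }] })
```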
Fixes a crash that occurs on pipeline load and/or reload when using both the
java keystore and the multi-pipeline feature, when more than one pipeline
references `${}`-style variables.
Fixes #10408
All contributions are welcome: ideas, patches, documentation, bug reports,
complaints, etc!
If you want to be rewarded for your contributions, sign up for the [Elastic Contributor Program](https://www.elastic.co/community/contributor). Each time you make a valid contribution, you’ll earn points that increase your chances of winning prizes and being recognized as a top contributor.
Programming is not a required skill, and there are many ways to help out!
It is more important to us that you are able to contribute.
@@ -12,7 +14,7 @@ That said, some basic guidelines, which you are free to ignore :)
Want to lurk about and see what others are doing with Logstash?
* The #logstash channel on Elastic Stack Community slack (https://elasticstack.slack.com/channels/logstash) is a good place to start.
* The [forum](https://discuss.elastic.co/c/logstash) is also
great for learning from others.
@@ -21,12 +23,11 @@ Want to lurk about and see what others are doing with Logstash?
Have a problem you want Logstash to solve for you?
* You can ask a question in the [forum](https://discuss.elastic.co/c/logstash)
* You are welcome to join Elastic Stack Community slack (https://elasticstack.slack.com) and ask for help on the #logstash channel.
## Have an Idea or Feature Request?
* File a ticket on [GitHub](https://github.com/elastic/logstash/issues). Please remember that GitHub is used only for issues and feature requests. If you have a general question, the [forum](https://discuss.elastic.co/c/logstash) or Elastic Stack Community slack (https://elasticstack.slack.com) is the best place to ask.
## Something Not Working? Found a Bug?
@@ -49,17 +50,22 @@ get in touch with our security team [here](https://www.elastic.co/community/secu
If you have a bugfix or new feature that you would like to contribute to Logstash, and you think it will take
more than a few minutes to produce the fix (i.e., write code), it is worth discussing the change with the Logstash
users and developers first. You can reach us via [GitHub](https://github.com/elastic/logstash/issues), the [forum](https://discuss.elastic.co/c/logstash), or Elastic Stack Community slack (https://elasticstack.slack.com).
Please note that Pull Requests without tests and documentation may not be merged. If you would like to contribute but do not have
experience with writing tests, please ping us on the forum or create a PR and ask for our help.
If you would like to contribute to Logstash, but don't know where to start, you
can use the GitHub labels "adoptme", "low hanging fruit" and "good first issue".
Issues marked with these labels are relatively easy, and provide a good starting
point to contribute to Logstash.
- [DOC] Fixed incorrect formatting of code sample [#85](http://example.org)
## 3.3.2
- Fixed incorrect serialization of input data when encoding was `Emacs-Mule` [#84](http://example.org)
@@ -196,4 +215,3 @@ Keep these in mind as both authors and reviewers of PRs:
* If no, ask for clarifications on the PR. This will usually lead to changes in the code such as renaming of variables/functions or extracting of functions or simply adding "why" inline comments. But first ask the author for clarifications before assuming any intent on their part.
* I must not focus on personal preferences or nitpicks. If I understand the code in the PR but simply would've implemented the same solution a different way, that's great, but it's not feedback that belongs in the PR. Such feedback only serves to slow down progress for little to no gain.
RUN for iter in `seq 1 10`; do ./gradlew wrapper --warning-mode all && exit_code=0 && break || exit_code=$? && echo "gradlew error: retry $iter in 10s" && sleep 10; done; exit $exit_code
@@ -65,8 +44,8 @@ Logstash core will continue to exist under this repository and all related issue
### Prerequisites
* Install JDK version 8 or 11. Make sure to set the `JAVA_HOME` environment variable to the path to your JDK installation directory. For example `set JAVA_HOME=<JDK_PATH>`
* Install JRuby 9.2.x. It is recommended to use a Ruby version manager such as [RVM](https://rvm.io/) or [rbenv](https://github.com/sstephenson/rbenv).
* Install the `rake` and `bundler` tools using `gem install rake` and `gem install bundler` respectively.
### RVM install (optional)
@@ -159,7 +138,7 @@ Run the doc build script from within the `docs` repo. For example:
## Testing
Most of the unit tests in Logstash are written using [rspec](http://rspec.info/) for the Ruby parts. For the Java parts, we use [junit](https://junit.org). For testing you can use the *test* `rake` tasks and the `bin/rspec` command, see instructions below:
### Core tests
@@ -183,6 +162,14 @@ Most of the unit tests in Logstash are written using [rspec](http://rspec.info/)
3- To execute the complete test-suite including the integration tests run:
# The acceptance tests in our CI infrastructure don't clear the workspace between runs.
# This means the Gemfile lock can be sticky from a previous run, so before generating any package
# we clear these files out to make sure we use the latest versions.
# If we don't do this we will run into gem conflict errors.
[ -f Gemfile ] && rm Gemfile
[ -f Gemfile.lock ] && rm Gemfile.lock

# When running these tests in a Jenkins matrix, in parallel, once one Vagrant job is done, the Jenkins ProcessTreeKiller will kill any other Vagrant processes with the same
# BUILD_ID unless you set this magic flag: https://wiki.jenkins.io/display/JENKINS/ProcessTreeKiller
export BUILD_ID=dontKillMe

LS_HOME="$PWD"
QA_DIR="$PWD/qa"

# Always run the halt, even if the test times out or an exit is sent
cleanup() {
  cd $QA_DIR
  bundle check || bundle install
  bundle exec rake qa:vm:halt
}
trap cleanup EXIT

cd $LS_HOME

# Cleanup any stale VMs from old jobs first
cleanup

get_package_type

if [[ $SELECTED_TEST_SUITE == $"redhat" ]]; then
  echo "Generating the RPM, make sure you start with a clean environment before generating other packages."
  cd $LS_HOME
  rake artifact:rpm
  echo "Acceptance: Installing dependencies"
  cd $QA_DIR
  bundle install
  echo "Acceptance: Running the tests"
  bundle exec rake qa:vm:setup["redhat"]
  bundle exec rake qa:vm:ssh_config
  bundle exec rake qa:acceptance:redhat
  bundle exec rake qa:vm:halt["redhat"]
elif [[ $SELECTED_TEST_SUITE == $"debian" ]]; then
  echo "Generating the DEB, make sure you start with a clean environment before generating other packages."
  cd $LS_HOME
  rake artifact:deb
  echo "Acceptance: Installing dependencies"
  cd $QA_DIR
  bundle install
  echo "Acceptance: Running the tests"
  bundle exec rake qa:vm:setup["debian"]
  bundle exec rake qa:vm:ssh_config
  bundle exec rake qa:acceptance:debian
  bundle exec rake qa:vm:halt["debian"]
elif [[ $SELECTED_TEST_SUITE == $"all" ]]; then
  echo "Building Logstash artifacts"
  cd $LS_HOME
  rake artifact:all
  echo "Acceptance: Installing dependencies"
  cd $QA_DIR
  bundle install
  echo "Acceptance: Running the tests"
  bundle exec rake qa:vm:setup
  bundle exec rake qa:vm:ssh_config
  bundle exec rake qa:acceptance:all
  bundle exec rake qa:vm:halt
fi
# in CI (Buildkite), packaging artifacts are pre-built from a previous step
if [[ $BUILDKITE == true ]]; then
  export LS_ARTIFACTS_PATH="$HOME/build"
  echo "--- Downloading artifacts from \"build/*${PACKAGE_TYPE}\" to $LS_ARTIFACTS_PATH"
Logstash is part of the [Elastic Stack](https://www.elastic.co/products) along with Elasticsearch, Kibana, and Beats. Logstash is a server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite "stash." (Ours is Elasticsearch, naturally.) Logstash has over 200 plugins, and you can write your own very easily as well.
For more info, see <https://www.elastic.co/products/logstash>
### Installation instructions
Please follow the documentation on [how to install Logstash with Docker](https://www.elastic.co/guide/en/logstash/current/docker.html).
## Documentation and Getting Started
You can find the documentation and getting started guides for Logstash
on the [elastic.co site](https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html).
**Please open new issues and pull requests for plugins under their own repositories**
For example, if you have to report an issue/enhancement for the Elasticsearch output, please do so [here](https://github.com/logstash-plugins/logstash-output-elasticsearch/issues).
org.opencontainers.image.description="Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite 'stash.'" \
org.label-schema.build-date={{ created_date }} \
{% if image_flavor == 'ubi8' -%}
license="{{ license }}" \
description="Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite 'stash.'" \
name="logstash" \
maintainer="info@elastic.co" \
summary="Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite 'stash.'" \
# The repository name in registry1, excluding /ironbank/
name: "elastic/logstash/logstash"

# List of tags to push for the repository in registry1
# The most specific version should be the first tag and will be shown
# on ironbank.dsop.io
tags:
  - "{{ elastic_version }}"
  - "latest"

# Build args passed to Dockerfile ARGs
args:
  BASE_IMAGE: "redhat/ubi/ubi8"
  BASE_TAG: "8.6"
  LOGSTASH_VERSION: "{{ elastic_version }}"
  GOLANG_VERSION: "1.17.8"

# Docker image labels
labels:
  org.opencontainers.image.title: "logstash"
  ## Human-readable description of the software packaged in the image
  org.opencontainers.image.description: "Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite 'stash.'"
  ## License(s) under which contained software is distributed