Compare commits

145 commits

Author SHA1 Message Date
mergify[bot]
cbb2a2e557
setting: enforce non-nullable (restore 8.15.x behavior) (#17522) (#17528)
(cherry picked from commit 712b37e1df)

Co-authored-by: Rye Biesemeyer <yaauie@users.noreply.github.com>
2025-04-09 10:00:23 -07:00
github-actions[bot]
11172f733d
Update patch plugin versions in gemfile lock (#17518)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
2025-04-08 15:27:17 -07:00
mergify[bot]
cde578644c
Fix JDK matrix pipeline after configurable IT split (#17461) (#17512)
PR #17219 introduced configurable split quantities for IT tests, which
resulted in broken JDK matrix pipelines (e.g. as seen via the elastic
internal link:
https://buildkite.com/elastic/logstash-linux-jdk-matrix-pipeline/builds/444

reporting the following error

```
  File "/buildkite/builds/bk-agent-prod-k8s-1743469287077752648/elastic/logstash-linux-jdk-matrix-pipeline/.buildkite/scripts/jdk-matrix-tests/generate-steps.py", line 263
    def integration_tests(self, part: int, parts: int) -> JobRetValues:
    ^^^
SyntaxError: invalid syntax
There was a problem rendering the pipeline steps.
Exiting now.
```
)

This commit fixes the above problem, which was already addressed in #17642, in a more
idiomatic way.

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
(cherry picked from commit b9469e0726)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
2025-04-08 17:03:24 +03:00
mergify[bot]
00cd8bc96f
[Backport 8.17] Update uri gem required by Logstash (#17495) (#17501)
(cherry picked from commit cb4c234aee)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-04-07 11:40:07 +02:00
mergify[bot]
fb83a12ced
Remove technical preview from agent driven monitoring pages. (#17485) (#17497)
(cherry picked from commit 38e0ca171a)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2025-04-05 13:44:36 -07:00
mergify[bot]
035733831b
pin cgi to 0.3.7 (#17487) (#17490)
(cherry picked from commit eeb2162ae4)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-04-03 19:07:57 +01:00
mergify[bot]
83c0c6e555
Fix persistent-queues.asciidoc PQ sizing multiplication factors (#17451) (#17470)
(cherry picked from commit 81fcfdbf5c)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2025-04-01 14:07:39 +01:00
mergify[bot]
855df9cea2
Update releasenotes.asciidoc to warn of ES input and filter issues in 8.17.4 (#17453) (#17473)
(cherry picked from commit d59a09c3f7)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2025-04-01 14:06:28 +01:00
mergify[bot]
5708f749f6
[Backport 8.17] Fix syntax in BK CI script (#17462) (#17465)
(cherry picked from commit 422cd4e06b)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-04-01 13:21:57 +02:00
github-actions[bot]
63ea72db94
Update patch plugin versions in gemfile lock (#17449)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
2025-03-31 12:13:35 +01:00
mergify[bot]
b8d95864bc
[Backport 8.17] Removed unused configHash computation that can be replaced by PipelineConfig.configHash() (#17336) (#17417)
Removed the unused configHash computation in AbstractPipeline, which was used only in tests, replacing it with a PipelineConfig.configHash() invocation

(cherry picked from commit 787fd2c62f)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-03-31 08:49:35 +02:00
Matt Johnson
3402310b40
Update tls-encryption.asciidoc (#17387)
The referenced elasticsearch output plugin has deprecated the options that were specified (`ssl`, `cacert`) https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-deprecated-options

When I tried using the `ssl` option I noticed a warning in the logstash logs:
```
[WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "ssl" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Set 'ssl_enabled' instead. If you have any questions about this, please visit the #logstash channel on freenode irc.
```

I have updated this document with the suggested new options to use instead.
2025-03-28 15:59:06 -04:00
mergify[bot]
d70feb2c87
[8.17] Pin rubocop-ast development gem due to new dep on prism (backport #17407) (#17426)
* Pin rubocop-ast development gem due to new dep on prism (#17407)

The rubocop-ast gem just introduced a new dependency on prism.
 - https://rubygems.org/gems/rubocop-ast/versions/1.43.0

In our install default gems rake task we are seeing issues trying to build native
extensions. Upstream JRuby is seeing a similar problem (at least it is the same
failure mode: https://github.com/jruby/jruby/pull/8415)

This commit pins rubocop-ast to 1.42.0 which is the last version that did not
have an explicit prism dependency.

(cherry picked from commit 6de59f2c02)

* Bump rubocop-ast

We are pinning rubocop-ast to the last version that did not require prism.
It is a development gem and should not change any behavior of LS.

---------

Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>
2025-03-27 14:46:50 -07:00
github-actions[bot]
7286fe78db
Update 8.17.4 release notes to warn of http_poller issue (#17436) (#17438)
(cherry picked from commit 3cde776be2)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2025-03-27 18:31:15 +00:00
mergify[bot]
580c8152df
[Backport 8.17] Limit memory consumption in test on overflow (#17373) (#17413)
Updates only test code to enable running a test that consumes a lot of memory if:
- the physical memory is bigger than the requested Java heap
- the JDK version is greater than or equal to 21.

The reason to limit the JDK version is that on a 16GB machine the G1GC in JDK 21 is more efficient than in previous JDKs and lets the test complete with a 10GB heap, while on JDK 17 it consistently fails with an OOM error.

(cherry picked from commit 075fdb4152)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-03-27 15:40:06 +01:00
mergify[bot]
0b24bebda6
Replace Deprecated SSL Settings with their latest values (#17392) (#17399)
(cherry picked from commit 2a03d7e8dd)

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-03-27 10:06:59 -04:00
Andrea Selva
9279a1561f
Bump 8.17.5 (#17395) 2025-03-25 12:59:44 +01:00
github-actions[bot]
26136d6529
Release notes for 8.17.4 (#17379)
* Update release notes for 8.17.4

* Refined Logstash 8.17.4 release notes

* Update docs/static/releasenotes.asciidoc

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-03-24 10:31:21 +01:00
mergify[bot]
f6736c4141
Release notes for 8.16.6 (backport #17378) (#17388)
* Release notes for 8.16.6 (#17378)

* Update release notes for 8.16.6

* Refined Logstash 8.16.6 release notes

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
(cherry picked from commit 7fe453b130)

# Conflicts:
#	docs/static/releasenotes.asciidoc

* Resolve merge conflicts in release notes

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-03-24 09:36:07 +01:00
mergify[bot]
59c51d13c2
[8.17] Surface failures from nested rake/shell tasks (backport #17310) (#17315)
* Surface failures from nested rake/shell tasks (#17310)

Previously, when rake shelled out, the output would be lost. This
made debugging CI logs difficult. This commit updates the stack with
improved message surfacing on error.

(cherry picked from commit 0d931a502a)

# Conflicts:
#	rubyUtils.gradle

* Extend ruby linting tasks to handle file inputs (#16660)

This commit extends the gradle and rake tasks to pass through a list of files
for rubocop to lint. This allows more specificity and fine grained control for
linting when the consumer of the tasks only wishes to lint a select few files.

* Ensure shellwords library is loaded

Without this, depending on task load order, `Shellwords` may not be available.

---------

Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>
2025-03-20 08:34:24 -07:00
mergify[bot]
f679ecb374
tests: make integration split quantity configurable (#17219) (#17369)
* tests: make integration split quantity configurable

Refactors shared splitter bash function to take a list of files on stdin
and split into a configurable number of partitions, emitting only those from
the currently-selected partition to stdout.

Also refactors the only caller in the integration_tests launcher script to
accept an optional partition_count parameter (defaulting to `2` for backward-
compatibility), to provide the list of specs to the function's stdin, and to
output relevant information about the quantity of partition splits and which
was selected.

* ci: run integration tests in 3 parts

(cherry picked from commit 3e0f488df2)

Co-authored-by: Rye Biesemeyer <yaauie@users.noreply.github.com>
2025-03-20 05:34:51 -07:00
mergify[bot]
e095a1805f
Added test to verify the int overflow happen (#17353) (#17356)
Use a long instead of an int to keep the length of the first token.

The size limit validation requires summing two integers: the length of the chars accumulated so far plus the length of the next fragment's head part. If either of the two sizes is close to the max integer value, the sum overflows and can spuriously fail the test 9c0e50faac/logstash-core/src/main/java/org/logstash/common/BufferedTokenizerExt.java (L123).

To fall into this case, sizeLimit must be bigger than 2^31 bytes (2GB) and data fragments without any line delimiter must be pushed to the tokenizer with a total size close to 2^31 bytes.
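The overflow described above can be reproduced in isolation. A minimal sketch (variable names are illustrative, not taken from the actual BufferedTokenizerExt code):

```java
public class OverflowSketch {
    public static void main(String[] args) {
        // Length accumulated so far, close to Integer.MAX_VALUE (2^31 - 1).
        int accumulated = Integer.MAX_VALUE - 10;
        int fragmentHead = 100;

        // Summing as int wraps around to a negative number...
        int intSum = accumulated + fragmentHead;
        // ...while widening to long keeps the true total.
        long longSum = (long) accumulated + fragmentHead;

        System.out.println(intSum < 0);  // true: the int sum overflowed
        System.out.println(longSum);     // 2147483737, the real length
    }
}
```

A negative sum compares as "smaller than sizeLimit", which is why the size-limit check can silently pass; widening to long before the comparison avoids the wrap-around.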

(cherry picked from commit afde43f918)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-03-19 17:26:56 +01:00
mergify[bot]
9228b1eae3
[8.17] Upgrade elasticsearch-ruby client. (backport #17161) (backport #17306) (#17339)
* [8.x] Upgrade elasticsearch-ruby client. (backport #17161) (#17306)

* Upgrade elasticsearch-ruby client. (#17161)

* Fix Faraday removed basic auth option and apply the ES client module name change.

(cherry picked from commit e748488e4a)

* Apply the required changes in elasticsearch_client.rb after upgrading the elasticsearch-ruby client to 8.x

* Swallow the exception and make non-connectable client when ES client raises connection refuses exception.

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
Co-authored-by: Mashhur <mashhur.sattorov@elastic.co>
(cherry picked from commit 7f74ce34a9)

* Update Gemfile lock to reflect elasticsearch-ruby changes.

* Upgrade faraday to v2 in Gemfile lock.

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Mashhur <mashhur.sattorov@elastic.co>
2025-03-17 10:11:25 -07:00
github-actions[bot]
7c46a7ddbf
bump lock file for 8.17 (#17337)
* Update patch plugin versions in gemfile lock

* Update JRuby lockfile and LSCL grammar parser

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-03-17 17:54:41 +01:00
mergify[bot]
e3333b3c8e
Add Deprecation tag to arcsight module (#17331) (#17333)
(cherry picked from commit d45706debe)

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-03-17 11:30:08 -04:00
mergify[bot]
3e7482e047
Shareable function for partitioning integration tests (#17223) (#17301)
For the fedramp high work https://github.com/elastic/logstash/pull/17038/files a
use case for multiple scripts consuming the partitioning functionality emerged.
As we look to more advanced partitioning we want to ensure that the
functionality will be consumable from multiple scripts.

See https://github.com/elastic/logstash/pull/17219#issuecomment-2698650296

(cherry picked from commit d916972877)

Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>
2025-03-12 11:17:14 -07:00
mergify[bot]
9c80c9c7fd
Fix pqcheck and pqrepair on Windows (#17210) (#17261)
A recent change to pqcheck attempted to address an issue where the
pqcheck would not run on Windows machines when located in a folder containing
a space, such as "C:\program files\elastic\logstash". While this fixed the
issue with spaces in folder names, it introduced a new issue related to Java options,
and pqcheck was still unable to run on Windows.

This PR attempts to address the issue by removing the quotes around the Java options,
which caused the option parsing to fail, and instead removing the explicit setting of
the classpath - the use of `set CLASSPATH=` in the `:concat` function is sufficient
to set the classpath, and should also fix the spaces issue

Fixes: #17209
(cherry picked from commit ba5f21576c)

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-03-07 16:13:30 -05:00
mergify[bot]
829858e8f1
[CI] Health report integration tests use the new artifacts-api (#17274) (#17286)
migrate to the new artifacts-api

(cherry picked from commit feb2b92ba2)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-03-06 16:49:01 +00:00
Rob Bavey
b6bfa855e6
Forward Port of Release notes for 8.16.5 (#17188) (#17265)
* Update release notes for 8.16.5

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
2025-03-06 08:57:25 -05:00
mergify[bot]
3be9a29c5c
gradle task migrate to the new artifacts-api (#17232) (#17238)
This commit migrates the gradle task to the new artifacts-api

- remove the dependency on staging artifacts
- all builds use snapshot artifacts
- resolve the version from the current branch, major.x, and the previous minor,
   with priority given in that order.

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
(cherry picked from commit 0a745686f6)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-03-05 18:21:22 +00:00
github-actions[bot]
a3ff8ef769
Update rack to 3.1.11 (#17230) (#17234)
(cherry picked from commit 872ae95588)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-03-05 17:23:20 +00:00
mergify[bot]
c36957c37a
Fix empty node stats pipelines (#17185) (#17198)
Fixed an issue where the `/_node/stats` API displayed empty pipeline metrics
when X-Pack monitoring was enabled

(cherry picked from commit 86785815bd)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-03-05 00:12:09 +00:00
Rob Bavey
62b1f5fa2a
8.17.4 version bump (#17216)
* 8.17.4 version bump
2025-03-04 13:23:33 -05:00
github-actions[bot]
80ac2024db
Release notes for 8.17.3 (#17187)
* Update release notes for 8.17.3


---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-03-04 08:10:11 -05:00
github-actions[bot]
2c4fdf7119
Update Dockerfile.erb to set eux on RUN command with semicolons (#17141) (#17199)
as per guidance https://github.com/elastic/logstash/pull/16063#discussion_r1577000627

(cherry picked from commit 18772dd25a)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2025-02-28 22:36:54 +00:00
github-actions[bot]
08b22ef499
bump lock file for 8.17 (#17152)
* Update patch plugin versions in gemfile lock

* Remove universal-java-17 to remain consistent with previous version

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-02-26 08:34:51 -05:00
github-actions[bot]
7cb800bb05
Add Windows 2025 to CI (#17133) (#17144)
This commit adds Windows 2025 to the Windows JDK matrix and exhaustive tests pipelines.

(cherry picked from commit 4d52b7258d)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
2025-02-24 16:48:35 +02:00
github-actions[bot]
bbacc0bfdc
allow concurrent Batch deserialization (#17050) (#17107)
Currently the deserialization happens behind the readBatch's lock, so any large batch takes time to deserialize, causing any other Queue writer (e.g. netty executor threads) and any other Queue reader (pipeline worker) to block.

This commit moves the deserialization out of the lock, allowing multiple pipeline workers to deserialize batches concurrently.

- add an intermediate batch-holder returned from `Queue` methods
- make the intermediate batch-holder a private inner class of `Queue` with a descriptive name, `SerializedBatchHolder`
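The locking pattern described above can be sketched as follows (hypothetical names, not the actual `Queue` implementation): hold the lock only while detaching the serialized elements, and do the CPU-heavy deserialization outside it.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class QueueSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private final ArrayDeque<byte[]> serializedQueue = new ArrayDeque<>();

    public void write(byte[] element) {
        lock.lock();
        try {
            serializedQueue.add(element);
        } finally {
            lock.unlock();
        }
    }

    public List<String> readBatch(int limit) {
        // Hold the lock only long enough to detach the raw elements.
        List<byte[]> batchHolder = new ArrayList<>();
        lock.lock();
        try {
            while (batchHolder.size() < limit && !serializedQueue.isEmpty()) {
                batchHolder.add(serializedQueue.poll());
            }
        } finally {
            lock.unlock();
        }
        // Deserialize outside the lock, so other readers/writers can proceed.
        List<String> events = new ArrayList<>(batchHolder.size());
        for (byte[] raw : batchHolder) {
            events.add(new String(raw)); // stand-in for real event deserialization
        }
        return events;
    }

    public static void main(String[] args) {
        QueueSketch q = new QueueSketch();
        q.write("a".getBytes());
        q.write("b".getBytes());
        System.out.println(q.readBatch(2)); // [a, b]
    }
}
```

The intermediate `batchHolder` list plays the role the commit describes for `SerializedBatchHolder`: it lets the lock be released before deserialization begins.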

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
(cherry picked from commit 637f447b88)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2025-02-17 22:22:56 +00:00
github-actions[bot]
c1858ac810
CPM handle 404 response gracefully with user-friendly log (#17052) (#17099)
Starting from es-output 12.0.2, a 404 response is treated as an error. Previously, central pipeline management considered a 404 to mean an empty pipeline, not an error.

This commit restores the expected behavior by handling 404 gracefully and logging a user-friendly message.
It also removes the redundant pipeline cache in CPM

Fixes: #17035
(cherry picked from commit e896cd727d)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-02-17 13:51:06 +00:00
github-actions[bot]
88cd16b2c4
Allow capturing heap dumps in DRA BK jobs (#17081) (#17086)
This commit allows Buildkite to capture any heap dumps produced
during DRA builds.

(cherry picked from commit 78c34465dc)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
2025-02-14 09:37:43 +02:00
Dimitrios Liappis
6398d9612e
Fix conflicts (#17075) 2025-02-12 10:46:26 -08:00
github-actions[bot]
3e041cd8eb
Don't honor VERSION_QUALIFIER if set but empty (#17032) (#17070)
PR #17006 revealed that the `VERSION_QUALIFIER` env var gets honored in
various scripts when present but empty.
This shouldn't be the case as the DRA process is designed to gracefully
ignore empty values for this variable.

This commit changes various ruby scripts to not treat "" as truthy.
Bash scripts (used by CI etc.) are already ok with this as part of
refactorings done in #16907.

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
(cherry picked from commit c7204fd7d6)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
2025-02-12 16:12:51 +02:00
github-actions[bot]
82f86922de
inject VERSION_QUALIFIER into artifacts (#16904) (#17049) (#17065)
VERSION_QUALIFIER was already observed in the rake artifacts task, but only to influence the name of the artifact.

This commit ensures that the qualifier is also displayed in the CLI and in the HTTP API.

(cherry picked from commit 00f8b91c35)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2025-02-12 09:10:27 +00:00
Ry Biesemeyer
0e487fd7ea
docs: forward-port 8.16.4 release notes to 8.17 (#17056) 2025-02-11 12:31:57 -08:00
Ry Biesemeyer
03aeacf662
version bump 8.17.3 (#17054) 2025-02-11 08:33:31 -08:00
github-actions[bot]
8491a4ba20
Release notes for 8.17.2 (#17044)
* Update release notes for 8.17.2

* humanize release notes 8.17.2

* Apply suggestions from code review

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-02-10 22:40:10 -08:00
github-actions[bot]
a19a607909
fix logstash-keystore to accept spaces in values when added via stdin (#17039) (#17041)
This commit preserves spaces in values, ensuring that multi-word strings are stored as intended.
Prior to this change, `logstash-keystore` incorrectly handled values containing spaces,
causing only the first word to be stored.

(cherry picked from commit 5573b5ad77)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-02-07 21:35:06 +00:00
github-actions[bot]
40866b9e96
bump lock file for 8.17 (#17024)
* Update patch plugin versions in gemfile lock

* pull in minor from ES filter

* remove java-17 specific (covered by `java`)

* pull in minor from http input to get fixed netty thread names

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
2025-02-05 10:07:15 -08:00
github-actions[bot]
d1f470ae5c
Backport PR #16968 to 8.17: Fix BufferedTokenizer to properly resume after a buffer full condition respecting the encoding of the input string (#16968) (#17022)
Backport PR #16968 to 8.17 branch, original message:

----

Permits effective use of the tokenizer even in contexts where a line is bigger than a limit.
Fixes an issue with the token size limit error: when the offending token was bigger than the input fragment, the tokenizer was unable to recover the token stream from the first delimiter after the offending token, and lost part of the tokens.

## How it solves the problem
This is a second attempt to fix the processing of tokens from the tokenizer after a buffer-full error. The first attempt, #16482, was rolled back due to the encoding error #16694.
The first attempt failed to return the tokens in the same encoding as the input.
This PR does a couple of things:
- accumulates the tokens, so that after a buffer-full condition it can resume with the tokens following the offending one.
- respects the encoding of the input string. Uses the `concat` method instead of `addAll`, which avoids converting RubyString to String and back to RubyString. When returning the head `StringBuilder`, it enforces the encoding of the input charset.

(cherry picked from commit 1c8cf546c2)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-02-05 12:14:55 +01:00
github-actions[bot]
23383e71c3
upgrade jdk to 21.0.6+7 (#16932) (#16987) (#16989)
(cherry picked from commit 51ab5d85d2)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
(cherry picked from commit f561207b4b)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-01-30 11:20:09 +00:00
github-actions[bot]
77e355ed9f
plugin manager: add --level=[major|minor|patch] (default: minor) (#16899) (#16974)
* plugin manager: add `--level=[major|minor|patch]` (default: `minor`)

* docs: plugin manager update `--level` behavior

* Update docs/static/plugin-manager.asciidoc

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* docs: plugin update major as subheading

* docs: intention-first in major plugin updates

* Update docs/static/plugin-manager.asciidoc

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 6943df5570)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2025-01-28 17:18:03 -08:00
kaisecheng
dfdbaf2f66
add openssl command to wolfi image (#16971) 2025-01-28 17:31:48 +00:00
github-actions[bot]
56e5ebcf37
remove irrelevant warning for internal pipeline (#16938) (#16963)
This commit removes an irrelevant warning for internal pipelines, such as the monitoring pipeline.
The monitoring pipeline is expected to have one worker, so the warning is not useful.

Fixes: #13298
(cherry picked from commit 3f41828ebb)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-01-27 17:29:53 +00:00
github-actions[bot]
3897b718ff
[Backport 8.x] Reimplement LogStash::String setting in Java (#16576) (#16959) (#16960)
Clean backport of #16959 from 8.x to 8.17

----

Reimplements the `LogStash::Setting::String` Ruby setting class as `org.logstash.settings.SettingString` and exposes it through `java_import` as `LogStash::Setting::SettingString`.
Updates the rspec tests in two ways:
- the logging mock is now converted to a real Log4j appender that spies on log lines, which are later verified
- verifies that `java.lang.IllegalArgumentException` is thrown instead of `ArgumentError`, because that is the kind of exception thrown by the Java code during verification.

* Fixed the rename of NullableString to SettingNullableString

* Fixed runner test to use real spy logger from Java Settings instead of mock test double

(cherry picked from commit a0378c05cb)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-01-27 17:25:12 +01:00
github-actions[bot]
f3fd1c5841
fix user and password detection from environment's uri (#16955) (#16958)
(cherry picked from commit c8a6566877)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2025-01-27 11:50:57 +00:00
github-actions[bot]
b9b5b9553b
Increase Xmx used by JRuby during Rake execution to 4Gb (#16911) (#16943)
(cherry picked from commit 58e6dac94b)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-01-24 14:54:29 +01:00
Dimitrios Liappis
2002e936fe
Backport 16907 to 8.17: Use --qualifier in release manager (#16907) #16941
Backport of #16907 cherry-picked from 9385cfa

This commit uses the new --qualifier parameter in the release manager
for publishing DRA artifacts. Additionally, it simplifies the expected
variables to rely on a single `VERSION_QUALIFIER`.

Snapshot builds are skipped when VERSION_QUALIFIER is set.
Finally, for helping to test DRA PRs, we also allow passing the `DRA_BRANCH`  option/env var
to override BUILDKITE_BRANCH.

Closes https://github.com/elastic/ingest-dev/issues/4856
2025-01-24 14:52:07 +02:00
github-actions[bot]
fa6894ae93
Doc: Remove extra symbol to fix formatting error (#16926) (#16935)
(cherry picked from commit f66e00ac10)

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-23 17:35:14 -05:00
github-actions[bot]
0966786feb
[doc] fix the necessary privileges of central pipeline management (#16902) (#16930)
CPM requires two roles: logstash_admin and logstash_system

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit dc740b46ca)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-01-23 11:36:13 +00:00
github-actions[bot]
d2aa696142
fix jars installer for new maven and pin psych to 5.2.2 (#16919) (#16924)
Handles Maven output that can carry "garbage" information after the jar's name; this patch deletes that extra information. It also pins psych to 5.2.2 until JRuby ships with snakeyaml-engine 2.9 and jar-dependencies 0.5.2

Related to: https://github.com/jruby/jruby/issues/8579

(cherry picked from commit 52b7fb0ae6)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2025-01-22 17:00:35 +00:00
kaisecheng
648c5abd00
Update Gemfile.jruby-3.1.lock.release (#16922) 2025-01-22 15:13:55 +00:00
kaisecheng
93e792c3d6
Release notes for 8.16.3 (#16879) (#16918)
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
# Conflicts:
#	docs/static/releasenotes.asciidoc

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-01-21 12:00:21 +00:00
kaisecheng
862bbacb6a
bump core 8.17.2 (#16916) 2025-01-21 10:10:52 +00:00
github-actions[bot]
927a043597
Release notes for 8.17.1 (#16880)
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-21 01:44:00 +00:00
github-actions[bot]
5bd02d98b4
Validate the size limit in BufferedTokenizer. (#16882) (#16891)
(cherry picked from commit a215101032)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2025-01-09 16:52:41 -08:00
github-actions[bot]
43f6fc0f4a
Initialize flow metrics if pipeline metric.collect params is enabled. (#16881) (#16888)
(cherry picked from commit 47d04d06b2)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2025-01-09 13:32:00 -08:00
João Duarte
92d58d1928
Forward port 8.15.5 and 8.16.2 release notes to 8.17 (#16810)
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-08 11:44:57 +00:00
Mashhur
268b61da9a
elastic_integration plugin version updated. (#16875) 2025-01-07 18:22:51 -08:00
github-actions[bot]
e9c3abb049
bump lock file for 8.17 (#16870)
* Update patch plugin versions in gemfile lock

* Update Gemfile.jruby-3.1.lock.release

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-01-07 17:26:54 +00:00
github-actions[bot]
af3004c8cd
Respect environment variables in jvm.options (#16834) (#16867)
JvmOptionsParser adds support for the ${VAR:default} syntax when parsing jvm.options
- allows dynamic resolution of environment variables in the jvm.options file
- enables fallback to a default value when the environment variable is not set
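The resolution rule can be sketched in a few lines (illustrative only, not the actual JvmOptionsParser implementation): substitute `${NAME:default}` with the environment value when present, otherwise with the default.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JvmOptionSketch {
    private static final Pattern VAR = Pattern.compile("\\$\\{(\\w+)(?::([^}]*))?\\}");

    // Replace each ${NAME:default} with env.get(NAME) if set, else the default.
    static String resolve(String line, Map<String, String> env) {
        Matcher m = VAR.matcher(line);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String fallback = m.group(2) == null ? "" : m.group(2);
            String value = env.getOrDefault(m.group(1), fallback);
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(resolve("-Xmx${LS_HEAP:1g}", Map.of()));                 // -Xmx1g
        System.out.println(resolve("-Xmx${LS_HEAP:1g}", Map.of("LS_HEAP", "4g")));  // -Xmx4g
    }
}
```

Here `LS_HEAP` is a made-up variable name used only to demonstrate the fallback behavior.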

(cherry picked from commit ef36df6b81)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-01-07 15:04:17 +00:00
github-actions[bot]
1494f184ef
Add pipeline metrics to Node Stats API (#16839) (#16865)
This commit introduces three new metrics per pipeline in the Node Stats API:
- workers
- batch_size
- batch_delay

```
{
  ...
  pipelines: {
    main: {
      events: {...},
      flow: {...},
      plugins: {...},
      reloads: {...},
      queue: {...},
      pipeline: {
        workers: 12,
        batch_size: 125,
        batch_delay: 5,
      },
    }
  }
  ...
}
```

(cherry picked from commit de6a6c5b0f)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-01-07 14:53:08 +00:00
github-actions[bot]
7609102d20
Doc: Add appropriate alternate for deprecated module in 8.x (#16856) (#16860)
(cherry picked from commit 4bf1fb514e)

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-06 16:13:16 -05:00
João Duarte
9e729ccb78
Update logstash-input-azure_event_hubs 1.5.1 (#16848) 2025-01-03 16:11:03 +00:00
github-actions[bot]
515175f9f3
Apply Jackson stream read constraints defaults at runtime (#16832) (#16846)
When Logstash 8.12.0 added increased Jackson stream read constraints to
jvm.options, assumptions about the existence of that file's contents
were invalidated. This led to issues like #16683.

This change ensures Logstash applies defaults from config at runtime:
- MAX_STRING_LENGTH: 200_000_000
- MAX_NUMBER_LENGTH: 10_000
- MAX_NESTING_DEPTH: 1_000

These match the jvm.options defaults and are applied even when config
is missing. Config values still override these defaults when present.

(cherry picked from commit cc608eb88b)

Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>
2025-01-02 15:24:14 -08:00
github-actions[bot]
657d95aa06
Update patch plugin versions in gemfile lock (#16842)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
2025-01-02 13:37:46 +00:00
Karen Metts
c77396c2db
Doc: Add json_lines known issue to release notes (#16831) 2024-12-26 14:39:19 -05:00
github-actions[bot]
6115544b75
Avoid lock when ecs_compatibility is explicitly specified (#16786) (#16829)
Because a `break` escapes a `begin`...`end` block, we must not use a `break` in order to ensure that the explicitly set value gets memoized to avoid lock contention.

> ~~~ ruby
> def fake_sync(&block)
>   puts "FAKE_SYNC:enter"
>   val = yield
>   puts "FAKE_SYNC:return(#{val})"
>   return val
> ensure
>   puts "FAKE_SYNC:ensure"
> end
>
> fake_sync do
>   @ivar = begin
>     puts("BE:begin")
>   	break :break
>
>   	val = :ret
>   	puts("BE:return(#{val})")
>   	val
>   ensure
>     puts("BE:ensure")
>   end
> end
> ~~~

Note: no `FAKE_SYNC:return`:

> ~~~
> ╭─{ rye@perhaps:~/src/elastic/logstash (main ✔) }
> ╰─● ruby break-esc.rb
> FAKE_SYNC:enter
> BE:begin
> BE:ensure
> FAKE_SYNC:ensure
> [success]
> ~~~

(cherry picked from commit 01c8e8bb55)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-12-23 10:43:30 -08:00
github-actions[bot]
4f4c21072b
update ironbank image to ubi9/9.5 (#16825) (#16826)
(cherry picked from commit dbb06c20cf)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-12-19 22:57:15 +00:00
github-actions[bot]
d592e3a46f
Doc: Update security docs to replace obsolete cacert setting (#16798) (#16803) 2024-12-19 13:17:01 -05:00
github-actions[bot]
df557cf225
give more memory to tests. 1gb instead of 512mb (#16764) (#16800)
(cherry picked from commit e6e0f9f6eb)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-12-16 11:59:30 +00:00
github-actions[bot]
b361ec35e7
Update minor plugin versions in gemfile lock (#16781)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
2024-12-16 10:30:32 +00:00
João Duarte
b6a74f9418
bump to 8.17.1 (#16784) 2024-12-12 16:02:10 +01:00
github-actions[bot]
f663392a91
Release notes for 8.17.0 (#16768)
* Update release notes for 8.17.0

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2024-12-11 17:38:49 -05:00
github-actions[bot]
2db4edcee4
Pin date dependency to 3.3.3 (#16755) (#16782)
Resolves: #16095, #16754
(cherry picked from commit ab19769521)

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2024-12-11 13:37:05 +00:00
github-actions[bot]
33ac2790b2
ensure inputSize state value is reset during buftok.flush (#16760) (#16770)
(cherry picked from commit e36cacedc8)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-12-09 09:46:49 -08:00
github-actions[bot]
6f8fd5a4eb
ensure jackson overrides are available to static initializers (#16719) (#16757)
Moves the application of jackson defaults overrides into pure java, and
applies them statically _before_ the `org.logstash.ObjectMappers` has a chance
to start initializing object mappers that rely on the defaults.

We replace the runner's invocation (which was too late to be fully applied) with
a _verification_ that the configured defaults have been applied.

(cherry picked from commit 202d07cbbf)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-12-04 16:02:18 -08:00
github-actions[bot]
5ab462e8b3
Pin jar-dependencies to 0.4.1 (#16747) (#16750)
Pin jar-dependencies to `0.4.1`, until https://github.com/jruby/jruby/issues/7262
is resolved.

(cherry picked from commit e3265d93e8)

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2024-12-04 10:09:22 -05:00
github-actions[bot]
c68a631a4c
Docs: Troubleshooting update for JDK bug handling cgroups v1 (#16721) (#16731)
---------
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

(cherry picked from commit d913e2ae3d)

Co-authored-by: mmahacek <mark.mahacek@elastic.co>
2024-11-27 13:49:51 +00:00
github-actions[bot]
ab22999a7a
Release notes for 8.16.1 (#16691) (#16707)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 8b97c052e6)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-11-20 18:23:13 -05:00
Cas Donoghue
accc201d1f
Revert "Backport PR #16482 to 8.x: Bugfix for BufferedTokenizer to completely consume lines in case of lines bigger than sizeLimit (#16569)" (#16705)
This reverts commit 27bd2a039b.
2024-11-20 14:46:00 -08:00
github-actions[bot]
adfa02b536
Update minor plugin versions in gemfile lock for 8.17.0 (#16696)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
2024-11-20 16:36:32 +00:00
github-actions[bot]
10512266af
Update license checker with new logger dependency (#16695) (#16700)
A new transitive dependency on the `logger` gem has been added through sinatra 4.1.0. Update the
license checker to ensure this is accounted for.

(cherry picked from commit e0ed994ab1)

Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>
2024-11-20 15:29:43 +00:00
João Duarte
7914ac04c0 add lockfile from 8.16.1 and bump version to 8.17.0 2024-11-20 11:20:38 +00:00
Ry Biesemeyer
8af6343a26
PipelineBusV2 deadlock proofing (#16671)
* pipeline bus: add deadlock test for unlisten/unregisterSender

* pipeline bus: eliminate deadlock

Moves the sync-to-notify out of the `AddressStateMapping#mutate`'s effective
synchronous block to eliminate a race condition where unlistening to an address
and unregistering a sender could deadlock.

It is safe to notify an AddressState's attached input without exclusive access
to the AddressState, because notifying an input that has since been disconnected
is net zero harm.
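The deadlock-proofing pattern described above can be sketched as: mutate the shared state while holding the lock, but deliver the notification only after the lock is released. Names here are illustrative, not the real `AddressStateMapping` API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of notify-outside-the-lock: the mutation is synchronized, the
// callback runs after the lock is released, so a callback that itself
// takes other locks cannot deadlock against this one.
class AddressStateSketch {
    private final List<String> inputs = new ArrayList<>();

    void listen(String input) {
        synchronized (this) { inputs.add(input); }
    }

    void unlisten(String input, Consumer<String> notify) {
        String removed = null;
        synchronized (this) {            // exclusive access only for the mutation
            if (inputs.remove(input)) {
                removed = input;
            }
        }                                 // lock released here
        if (removed != null) {
            notify.accept(removed);       // safe: a stale notification is harmless
        }
    }
}
```

This relies on the property called out in the commit: notifying an input that has since been disconnected is net zero harm, so the callback needs no exclusive access.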
2024-11-18 08:43:11 -08:00
github-actions[bot]
18e1545e4b
Updates release notes for 8.14.x to call for an update. (#16675) (#16676)
Updates release notes for `8.14.x` to call for an update to a subsequent minor to fix a performance regression in JSON decoding.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
(cherry picked from commit 52836f8caf)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-11-18 08:49:35 +01:00
github-actions[bot]
01dcd621d3
Doc: Realign release notes and add known issue (#16663) (#16668)
(cherry picked from commit 15cdf5c63d)

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-11-12 15:19:30 -05:00
github-actions[bot]
a402be0b8f
Release notes for 8.16.0 (#16605) (#16662)
* Update release notes for 8.16.0

* Refine release notes

* Apply suggestions from code review

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* Refine release notes

* Apply suggestions from code review

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* Apply suggestions from code review

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Added dependency update section for new jruby version

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: edmocosta <11836452+edmocosta@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
(cherry picked from commit 8b897915cd)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-11-11 16:56:46 -05:00
kaisecheng
9ad33e21b9
add deprecation warning for allow_superuser: true (#16555) 2024-11-06 17:47:17 +00:00
github-actions[bot]
54caef7a29
Update .ruby-version to jruby-9.4.9.0 (#16642) (#16645)
(cherry picked from commit efbee31461)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-11-06 12:46:06 +00:00
Andrea Selva
4201628f9e
Update deprecation warning to provide the version in which the ArcSight module is removed. (#16648) 2024-11-06 12:30:45 +01:00
github-actions[bot]
00898bd560
For custom java plugins, set the platform = 'java'. (#16628) (#16649)
(cherry picked from commit 046ea1f5a8)

Co-authored-by: Nicole Albee <2642763+a03nikki@users.noreply.github.com>
2024-11-06 08:54:06 +00:00
github-actions[bot]
0657729cb7
bump jruby to 9.4.9.0 (#16634) (#16639)
(cherry picked from commit 6703aec476)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-11-06 08:23:32 +00:00
github-actions[bot]
aea58a36a9
Anchor the -java match pattern at the end of the string. (#16626) (#16637)
This fixes the off-line install problem of the logstash-input-java_filter_example plugin.

(cherry picked from commit 113585d4a5)

Co-authored-by: Nicole Albee <2642763+a03nikki@users.noreply.github.com>
2024-11-05 14:25:29 +00:00
github-actions[bot]
a7384c0d6c
fix Windows java not found log (#16633) (#16636)
(cherry picked from commit 849f431033)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-11-05 10:45:59 +00:00
github-actions[bot]
37b1e9006e
Update JDK to latest in versions.yml (#16627) (#16631)
Update JDK to version 21.0.5+11

(cherry picked from commit 852149be2e)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-11-04 17:25:31 +01:00
github-actions[bot]
2bcb5adcbb
add bootstrap to docker build to fix missing jars (#16622) (#16623)
The DRA build failed because the required jars were missing, as they had been removed during the Docker build process.

(cherry picked from commit 00da72378b)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-11-01 15:48:36 +00:00
github-actions[bot]
6c8e086d5e
reduce effort during build of docker images (#16619) (#16620)
There's no need to build JDK-less and Windows tarballs for docker images,
so this change simplifies the build process.

It should reduce the time needed to build docker images.

(cherry picked from commit 9eced9a106)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-10-31 17:04:50 +00:00
github-actions[bot]
1cbd092b6f
make docker build and gradle tasks more friendly towards ci output (#16618) (#16621)
(cherry picked from commit 472e27a014)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-10-31 17:04:42 +00:00
github-actions[bot]
b242715f76
[CI] Change agent for JDK availability check and add schedule also for 8.x (#16614) (#16617)
Switch execution agent of JDK availability check pipeline from vm-agent to container-agent.
Moves the schedule definition from the `Logstash Pipeline Scheduler` pipeline into the pipeline definition, adding a schedule also for `8.x` branch.

(cherry picked from commit c602b851bf)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-10-30 12:52:03 +01:00
github-actions[bot]
1335ec80f3
Fix bad reference to a variable (#16615) (#16616)
(cherry picked from commit 5d523aa5c8)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-10-30 12:51:14 +01:00
github-actions[bot]
51851a99d3
Use jvm catalog for reproducible builds and expose new pipeline to check JDK availability (#16602) (#16609)
Updates the existing `createElasticCatalogDownloadUrl` method to use the precise version retrieved from `versions.yml` to download the JDK, instead of the latest of the major version. This makes the build reproducible again.
Defines a new Gradle `checkNewJdkVersion` task to check whether a new JDK version is available in the JVM catalog matching the major version of the current branch.
Creates a new Buildkite pipeline that runs a `bash` script to execute the Gradle task; it also updates `catalog-info.yaml` with the new pipeline and a trigger to execute it every week.

(cherry picked from commit ed5874bc27)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-10-30 12:15:08 +01:00
github-actions[bot]
d6c96b407f
make max inflight warning global to all pipelines (#16597) (#16601)
The current max inflight error message focuses on a single pipeline and on a fixed maximum of 10k events, regardless of the heap size.

The new warning takes into account all loaded pipelines and also considers the heap size, warning if the total number of in-flight events would consume 10% or more of the total heap.

For the purpose of the warning, events are assumed to be 2KB, a typical size for a small log entry.
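The heuristic above reduces to one comparison; a minimal sketch (illustrative names, not the real warning code):

```java
// Warn when the in-flight events of all pipelines together could occupy
// 10% or more of the heap, assuming 2KB per event as in the commit message.
class InflightWarning {
    static final long ASSUMED_EVENT_BYTES = 2 * 1024; // 2KB per event

    static boolean shouldWarn(long totalInflightEvents, long maxHeapBytes) {
        return totalInflightEvents * ASSUMED_EVENT_BYTES >= maxHeapBytes / 10;
    }
}
```

For example, with a 1GB heap the threshold is crossed at roughly 52k in-flight events (1GB / 10 / 2KB), instead of the old flat 10k-per-pipeline figure.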

(cherry picked from commit ca19f0029e)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-10-25 15:16:18 +01:00
Edmo Vamerlatti Costa
79e439e27b
bump version to 8.17.0 (#16592) 2024-10-24 09:45:56 +01:00
github-actions[bot]
8fb1292934
Release notes for 8.15.3 (#16527) (#16571)
* Update release notes for 8.15.3

* Refined release notes

* Apply suggestions from code review

* Update docs/static/releasenotes.asciidoc

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: edmocosta <11836452+edmocosta@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 2788841f5c)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-10-18 11:11:33 +02:00
github-actions[bot]
7e1877ca12
add http.* deprecation log (#16538) (#16582)
- refactor deprecated alias to support obsoleted version
- add deprecation log for http.* config

(cherry picked from commit 3f0ad12d06)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-10-17 16:23:15 +01:00
github-actions[bot]
27bd2a039b
Backport PR #16482 to 8.x: Bugfix for BufferedTokenizer to completely consume lines in case of lines bigger than sizeLimit (#16569)
Fixes the behaviour of the tokenizer so that it works properly when buffer-full conditions are met.

Updates BufferedTokenizerExt so that it can accumulate token fragments coming from different data segments. When a "buffer full" condition is matched, it records this state in a local field so that on the next data segment it can consume all the token fragments up to the next token delimiter.
Updates the accumulation variable from a RubyArray containing strings to a StringBuilder which holds the head token, while the remaining token fragments are stored in the input array.
Furthermore, it translates the `buftok_spec` tests into JUnit tests.
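A simplified sketch of the described behaviour, assuming a newline delimiter (illustrative names, not the real `BufferedTokenizerExt` API): fragments accumulate in a `StringBuilder`, and once the size limit is exceeded the tokenizer records a discard state and drops input until the next delimiter.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: accumulate fragments across extract() calls; when the buffer-full
// condition is hit, drop fragments until the next delimiter is seen.
class TokenizerSketch {
    private final StringBuilder head = new StringBuilder();
    private final int sizeLimit;
    private boolean discarding = false;   // the "buffer full" local state

    TokenizerSketch(int sizeLimit) { this.sizeLimit = sizeLimit; }

    List<String> extract(String data) {
        List<String> tokens = new ArrayList<>();
        int start = 0;
        int nl;
        while ((nl = data.indexOf('\n', start)) >= 0) {
            head.append(data, start, nl);
            if (!discarding && head.length() <= sizeLimit) {
                tokens.add(head.toString());   // complete, in-limit token
            }
            head.setLength(0);                 // delimiter seen: reset
            discarding = false;                // ...and leave discard mode
            start = nl + 1;
        }
        head.append(data, start, data.length());
        if (head.length() > sizeLimit) {
            head.setLength(0);                 // buffer full: drop fragments
            discarding = true;                 // until the next delimiter
        }
        return tokens;
    }
}
```

An oversized line spread across segments is consumed and dropped, and tokenization resumes cleanly at the next delimiter instead of staying stuck.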

(cherry picked from commit 85493ce864)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-10-16 17:32:52 +02:00
github-actions[bot]
216c68f280
Backport PR #16564 to 8.x: Adds a JMH benchmark to test the BufferedTokenizerExt class (#16570)
Adds a JMH benchmark to measure the performance of BufferedTokenizerExt.
Also updates the Gradle build script to remove CMS GC flags and fix deprecations for Gradle 9.0.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
(cherry picked from commit b6f16c8b81)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-10-16 17:31:46 +02:00
github-actions[bot]
6a573f40fa
ensure minitar 1.x is used instead of 0.x (#16565) (#16566)
(cherry picked from commit ab77d36daa)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-10-16 13:35:42 +01:00
Andrea Selva
396b3fef40
Deprecate for removal ArcSight module (#16551)
Logs a deprecation warning when Logstash 8.x is started with the ArcSight module.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-10-15 19:50:19 +02:00
Andrea Selva
c1374a1d81
Log deprecation warn if memory buffer type not defined (#16498)
On the 8.x series, log a deprecation warning if the user didn't explicitly specify a selection for pipeline.buffer.type. Before this change the default was silently set to direct; after this change the default is still direct when not explicitly defined, but a deprecation warning is logged.

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-10-15 16:03:41 +02:00
kaisecheng
4677cb22ed
add modules deprecation log for netflow, fb_apache and azure (#16548)
relates: #16357
2024-10-14 12:40:45 +01:00
github-actions[bot]
dc0739bdaf
refactor log for event_api.tags.illegal (#16545) (#16547)
- add `obsoleted_version` and remove `deprecated_msg` from `deprecated_option` for consistent warning message

(cherry picked from commit 8cd0fa8767)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-10-11 21:51:53 +01:00
github-actions[bot]
a1f4633b55
Backport PR #16525 to 8.x: [test] Fix xpack test to check for http_address stats only if the webserver is enabled (#16531)
* [test] Fix xpack test to check for http_address stats only if the webserver is enabled (#16525)

Set the 'api.enabled' setting to reflect the flag webserver_enabled and consequently test for http_address presence in settings iff the web server is enabled.

(cherry picked from commit 648472106f)

* Also update the global LogStash::SETTINGS 'api.enabled' setting value, because it is used in the constructor of StatsEventFactory and needs to be in sync with the settings provided to the Agent constructor

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-10-11 15:24:11 +02:00
github-actions[bot]
8c6832e3d9
Backport PR #16506 to 8.x: Avoid to access Java DeprecatedAlias value other than Ruby's one
Updates the Settings to_hash method to also skip the Java DeprecatedAlias, not just the Ruby one.
PR #15679 introduced org.logstash.settings.DeprecatedAlias, which mirrors the behaviour of the Ruby class Setting::DeprecatedAlias. The equality check in Logstash::Settings, as described in #16505 (comment), is implemented by comparing the maps.
The conversion of Settings to the corresponding maps filtered out the Ruby implementation of DeprecatedAlias but not the Java one.
This PR adds the Java one to the filter list as well.

(cherry picked from commit 5d4825f000)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-10-11 11:00:48 +02:00
github-actions[bot]
0594c8867f
Backport PR #15679 to 8.x: [Spacetime] Reimplement config Setting classes in java (#16490)
* [Spacetime] Reimplement config Setting classes in java (#15679)

Reimplement the root Ruby Setting class in Java and use it from the Ruby one, turning the original Ruby class into a shell wrapping the Java instance.
In particular, create a new symmetric hierarchy (for now just the `Setting`, `Coercible` and `Boolean` classes) mirroring the Ruby one, also moving over the setting-deprecation feature. In this way the new `org.logstash.settings.Boolean` is syntactically and semantically equivalent to the old Ruby Boolean class, which it replaces.

(cherry picked from commit 61de60fe26)

* Adds suppress-warnings annotations related to this-escape for Java Settings classes

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-10-11 08:54:03 +02:00
github-actions[bot]
26c2f61276
Flow worker utilization probe (#16532) (#16537)
* flow: refactor pipeline refs to keep worker flows separate

* health: add worker_utilization probe

pipeline is:
  - RED "completely blocked" when last_5_minutes >= 99.999
  - YELLOW "nearly blocked" when last_5_minutes > 95
  - and includes "recovering" info when last_1_minute < 80
  - YELLOW "completely blocked" when last_1_minute >= 99.999
  - YELLOW "nearly blocked" when last_1_minute > 95

* tests: improve coverage of PipelineIndicator probes

* Apply suggestions from code review
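The threshold list above maps naturally onto a small status function; a sketch under assumed names (not the real PipelineIndicator probe API):

```java
// Illustrative mapping of worker-utilization windows onto a health status,
// following the thresholds listed in the commit message.
class WorkerUtilizationProbe {
    enum Status { GREEN, YELLOW, RED }

    static Status status(double last1Minute, double last5Minutes) {
        if (last5Minutes >= 99.999) return Status.RED;    // completely blocked
        if (last5Minutes > 95) return Status.YELLOW;      // nearly blocked
        if (last1Minute >= 99.999) return Status.YELLOW;  // completely blocked (recent)
        if (last1Minute > 95) return Status.YELLOW;       // nearly blocked (recent)
        return Status.GREEN;
    }

    // "recovering" detail: the 5-minute window still looks blocked,
    // but the last minute has dropped below 80%.
    static boolean recovering(double last1Minute, double last5Minutes) {
        return last5Minutes > 95 && last1Minute < 80;
    }
}
```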

(cherry picked from commit a931b2cde6)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-10-10 19:59:43 -07:00
github-actions[bot]
986a253db8
health: add logstash.forceApiStatus: green escape hatch (#16535) (#16536)
(cherry picked from commit 065769636b)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-10-10 17:21:35 -07:00
github-actions[bot]
ad7c61448f
Health api minor followups (#16533) (#16534)
* Utilize default agent for Health API CI. Call python scripts directly from the CI step.

* Change BK agent to support both Java and python. Install pip manually and send env vars to subprocess.

(cherry picked from commit 4037adfc4a)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-10-10 15:25:53 -07:00
Mashhur
1e5105fcd8
Fix QA failure introduced by Health API changes and update rspec dependency of the QA package. (#16521)
* Update rspec dependency of the QA package.

* Update qa/Gemfile

Align on rspec 3.13.x

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>

* Fix the QA test failure caused after reflecting Health Report status to the Node stats.

---------

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-10-09 14:47:01 -07:00
Ry Biesemeyer
7eb5185b4e
Feature: health report api (#16520)
* [health] bootstrap HealthObserver from agent to API (#16141)

* [health] bootstrap HealthObserver from agent to API

* specs: mocked agent needs health observer

* add license headers

* Merge `main` into `feature/health-report-api` (#16397)

* Add GH vault plugin bot to allowed list (#16301)

* regenerate webserver test certificates (#16331)

* correctly handle stack overflow errors during pipeline compilation (#16323)

This commit improves error handling when pipelines that are too big hit the Xss limit and throw a StackOverflowError. Currently the exception is printed outside of the logger, and doesn’t even show if log.format is json, leaving the user to wonder what happened.

A couple of thoughts on the way this is implemented:

* There should be a first barrier to handle pipelines that are too large based on the PipelineIR compilation. The barrier would use the detection of Xss to determine how big a pipeline could be. This however doesn't reduce the need to still handle a StackOverflow if it happens.
* The catching of StackOverflowError could also be done on the WorkerLoop. However I'd suggest that this is unrelated to the Worker initialization itself, it just so happens that compiledPipeline.buildExecution is computed inside the WorkerLoop class for performance reasons. So I'd prefer logging to not come from the existing catch, but from a dedicated catch clause.

Solves #16320

* Doc: Reposition worker-utilization in doc (#16335)

* settings: add support for observing settings after post-process hooks (#16339)

Because logging configuration occurs after loading the `logstash.yml`
settings, deprecation logs from `LogStash::Settings::DeprecatedAlias#set` are
effectively emitted to a null logger and lost.

By re-emitting after the post-process hooks, we can ensure that they make
their way to the deprecation log. This change adds support for any setting
that responds to `Object#observe_post_process` to receive it after all
post-processing hooks have been executed.

Resolves: elastic/logstash#16332
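The re-emit-after-hooks idea can be sketched as an observer pass that runs only once every post-process hook has finished, so messages produced before logging was configured are not lost. Names here are illustrative, not the real Settings API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: settings that opt in are observed again after all
// post-process hooks have executed.
class PostProcessSketch {
    interface PostProcessObserver { void observePostProcess(); }

    private final List<Runnable> hooks = new ArrayList<>();
    private final List<PostProcessObserver> observers = new ArrayList<>();

    void addHook(Runnable hook) { hooks.add(hook); }
    void register(PostProcessObserver o) { observers.add(o); }

    void runPostProcess() {
        hooks.forEach(Runnable::run);  // all post-process hooks first
        observers.forEach(PostProcessObserver::observePostProcess); // then re-emit
    }
}
```

The ordering guarantee is the point: an observer that re-emits a deprecation warning is only invoked after the hooks that configure logging have run.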

* fix line used to determine ES is up (#16349)

* add retries to snyk buildkite job (#16343)

* Fix 8.13.1 release notes (#16363)

make a note of the fix that went to 8.13.1: #16026

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Update logstash_releases.json (#16347)

* [Bugfix] Resolve the array and char (single | double quote) escaped values of ${ENV} (#16365)

* Properly resolve the values from ENV vars if a literal array string is provided with the ENV var.

* Docker acceptance test for persisting keys and using actual values in the docker container.

* Review suggestion.

Simplify the code by stripping whitespace before `gsub`, no need to check comma and split.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Doc: Add SNMP integration to breaking changes (#16374)

* deprecate java less-than 17 (#16370)

* Exclude substitution refinement on pipelines.yml (#16375)

* Exclude substitution refinement on pipelines.yml (applies on ENV vars and logstash.yml where env2yaml saves vars)

* Safety integration test for pipeline config.string contains ENV .

* Doc: Forwardport 8.15.0 release notes to main (#16388)

* Removing 8.14 from ci/branches.json as we have 8.15. (#16390)

---------

Co-authored-by: ev1yehor <146825775+ev1yehor@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Squashed merge from 8.x

* Failure injector plugin implementation. (#16466)

* Test-purpose-only failure injector integration (filter and output) plugin implementation. Add unit tests and include license notes.

* Fix the degrate method name typo.

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* Add explanation to the config params and rebuild plugin gem.

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* Health report integration tests bootstrapper and initial tests implementation (#16467)

* Health Report integration tests bootstrapper and initial slow start scenario implementation.

* Apply suggestions from code review

Renaming expectation check method name.

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

* Changed to branch concept, YAML structure simplified as changed to Dict.

* Apply suggestions from code review

Reflect `help_url` to the integration test.

---------

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

* health api: expose `GET /_health_report` with pipelines/*/status probe (#16398)

Adds a `GET /_health_report` endpoint with per-pipeline status probes, and wires the
resulting report status into the other API responses, replacing their hard-coded `green`
with a meaningful status indication.

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* docs: health report API, and diagnosis links (feature-targeted) (#16518)

* docs: health report API, and diagnosis links

* Remove plus-for-passthrough markers

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* merge 8.x into feature branch... (#16519)

* Add GH vault plugin bot to allowed list (#16301)

* regenerate webserver test certificates (#16331)

* correctly handle stack overflow errors during pipeline compilation (#16323)

This commit improves error handling when pipelines that are too big hit the Xss limit and throw a StackOverflowError. Currently the exception is printed outside of the logger, and doesn’t even show if log.format is json, leaving the user to wonder what happened.

A couple of thoughts on the way this is implemented:

* There should be a first barrier to handle pipelines that are too large based on the PipelineIR compilation. The barrier would use the detection of Xss to determine how big a pipeline could be. This however doesn't reduce the need to still handle a StackOverflow if it happens.
* The catching of StackOverflowError could also be done on the WorkerLoop. However I'd suggest that this is unrelated to the Worker initialization itself, it just so happens that compiledPipeline.buildExecution is computed inside the WorkerLoop class for performance reasons. So I'd prefer logging to not come from the existing catch, but from a dedicated catch clause.

Solves #16320

* Doc: Reposition worker-utilization in doc (#16335)

* settings: add support for observing settings after post-process hooks (#16339)

Because logging configuration occurs after loading the `logstash.yml`
settings, deprecation logs from `LogStash::Settings::DeprecatedAlias#set` are
effectively emitted to a null logger and lost.

By re-emitting after the post-process hooks, we can ensure that they make
their way to the deprecation log. This change adds support for any setting
that responds to `Object#observe_post_process` to receive it after all
post-processing hooks have been executed.

Resolves: elastic/logstash#16332

* fix line used to determine ES is up (#16349)

* add retries to snyk buildkite job (#16343)

* Fix 8.13.1 release notes (#16363)

make a note of the fix that went to 8.13.1: #16026

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Update logstash_releases.json (#16347)

* [Bugfix] Resolve the array and char (single | double quote) escaped values of ${ENV} (#16365)

* Properly resolve the values from ENV vars if a literal array string is provided with the ENV var.

* Docker acceptance test for persisting keys and using actual values in the docker container.

* Review suggestion.

Simplify the code by stripping whitespace before `gsub`, no need to check comma and split.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Doc: Add SNMP integration to breaking changes (#16374)

* deprecate java less-than 17 (#16370)

* Exclude substitution refinement on pipelines.yml (#16375)

* Exclude substitution refinement on pipelines.yml (applies on ENV vars and logstash.yml where env2yaml saves vars)

* Safety integration test for pipeline config.string contains ENV .

* Doc: Forwardport 8.15.0 release notes to main (#16388)

* Removing 8.14 from ci/branches.json as we have 8.15. (#16390)

* Increase Jruby -Xmx to avoid OOM during zip task in DRA (#16408)

Fix: #16406

* Generate Dataset code with meaningful fields names (#16386)

This PR is intended to help Logstash developers or users that want to better understand the code that's autogenerated to model a pipeline, assigning more meaningful names to the Datasets subclasses' fields.

Updates `FieldDefinition` to receive the name of the field from construction methods, so that it can be used during the code generation phase, instead of the existing incremental `field%n`.
Updates `ClassFields` to propagate the explicit field name down to the `FieldDefinitions`.
Updates the `DatasetCompiler` code that adds fields to `ClassFields` so that it assigns a proper name to generated Dataset fields.

* Implements safe evaluation of conditional expressions, logging the error without killing the pipeline (#16322)

This PR protects the if statements against expression evaluation errors: it cancels the event under processing and logs the error.
This avoids crashing a pipeline that encounters a runtime error during event condition evaluation, and makes it possible to debug the root cause by reporting the offending event, which is removed from the current processing batch.

Translates the `org.jruby.exceptions.TypeError`, `IllegalArgumentException` and `org.jruby.exceptions.ArgumentError` that could happen during `EventCondition` evaluation into a custom `ConditionalEvaluationError`, which bubbles up through the AST tree nodes. It is caught in the `SplitDataset` node.
Updates the generation of the `SplitDataset` so that the execution of the `filterEvents` method inside the compute body is try-catch guarded and defers error handling to an instance of `AbstractPipelineExt.ConditionalEvaluationListener`. In this particular case the error management consists of just logging the offending event.
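The guarded-evaluation shape can be sketched as a filter loop that catches runtime errors per event and defers them to a listener instead of letting them escape the worker. Names mirror the description above but are illustrative, not the real generated code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch: evaluation errors cancel only the offending event; the rest of
// the batch continues, and the error is reported to a listener (which in
// the real change just logs the event).
class SafeConditional {
    interface ErrorListener { void notify(Object event, RuntimeException e); }

    static <E> List<E> filterEvents(List<E> batch, Predicate<E> condition,
                                    ErrorListener listener) {
        List<E> matched = new ArrayList<>();
        for (E event : batch) {
            try {
                if (condition.test(event)) matched.add(event);
            } catch (RuntimeException e) {   // e.g. a ConditionalEvaluationError
                listener.notify(event, e);   // report and drop the event
            }
        }
        return matched;
    }
}
```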


---------

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Update logstash_releases.json (#16426)

* Release notes for 8.15.1 (#16405) (#16427)

* Update release notes for 8.15.1

* update release note

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Kaise Cheng <kaise.cheng@elastic.co>
(cherry picked from commit 2fca7e39e8)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Fix ConditionalEvaluationError so that it does not include the event that errored in its serialized form, because this class is never expected to be serialized. (#16429) (#16430)

Makes the inner field of ConditionalEvaluationError transient so that it is skipped during serialization.

(cherry picked from commit bb7ecc203f)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* use gnu tar compatible minitar to generate tar artifact (#16432) (#16434)

Using VERSION_QUALIFIER when building the tarball distribution will fail, since Ruby's TarWriter implements the older POSIX.1-1988 version of tar and paths will be longer than 100 characters.

For the long paths used in Logstash's plugins, mainly due to nested folders from jar-dependencies, we need the tarball to follow either the 2001 ustar format or GNU tar, which is implemented by the minitar gem.
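For context, the 100-character limit and the ustar workaround can be sketched as a format check. This is simplified — real ustar rules work on bytes, require the split at a `/`, and have further field constraints — and the format labels are illustrative.

```java
// Rough check of which tar flavor a path needs: the 100-char name field
// of old POSIX tar, the ustar name(100)+prefix(155) split, or a GNU/pax
// long-name extension when no valid split exists.
class TarNameCheck {
    static String requiredFormat(String path) {
        if (path.length() <= 100) return "tar-v7";
        int n = path.length();
        for (int p = 0; p < n; p++) {
            // ustar: prefix = path[0..p) must fit 155, name = path[p+1..) must fit 100
            if (path.charAt(p) == '/' && p <= 155 && (n - p - 1) <= 100) {
                return "ustar";
            }
        }
        return "gnu-or-pax";
    }
}
```

The nested jar-dependencies folders routinely produce paths over 100 characters, which is why a writer limited to the old format breaks while minitar's ustar/GNU support does not.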

(cherry picked from commit 69f0fa54ca)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* account for the 8.x in DRA publishing task (#16436) (#16440)

the current DRA publishing task computes the branch from the version
contained in the version.yml

This is done by taking the major.minor and confirming that a branch
exists with that name.

However this pattern won't be applicable for 8.x, as that branch
currently points to 8.16.0 and there is no 8.16 branch.

This commit falls back to reading the buildkite injected
BUILDKITE_BRANCH variable.
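The fallback logic described can be sketched as a pure function (names and signature are illustrative; the real implementation is a shell script reading version.yml and the CI-injected BUILDKITE_BRANCH):

```python
def resolve_dra_branch(version: str, branches: set, buildkite_branch: str) -> str:
    """Map a version like '8.16.0' to a release branch, falling back to the
    CI-injected branch when no matching 'major.minor' branch exists."""
    major_minor = ".".join(version.split(".")[:2])
    if major_minor in branches:
        return major_minor
    return buildkite_branch


print(resolve_dra_branch("8.15.3", {"8.15", "7.17"}, "8.x"))  # 8.15
print(resolve_dra_branch("8.16.0", {"8.15", "7.17"}, "8.x"))  # 8.x
```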

(cherry picked from commit 17dba9f829)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Fixes the issue where LS wipes out all quotes from docker env variables. (#16456) (#16459)

* Fixes the issue where LS wipes out all quotes from docker env variables. This is a problem when running LS on docker with CONFIG_STRING, which needs to keep quotes around env variable references.

* Add a docker acceptance integration test.
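The expected behavior — expanding env variable references without disturbing the surrounding quotes — can be sketched as follows (a simplified stand-in for Logstash's substitution logic, not the actual implementation):

```python
import re


def substitute_env(config: str, env: dict) -> str:
    """Expand ${VAR} references, leaving any surrounding quotes intact."""
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), config)


config = 'input { heartbeat { message => "${MSG}" } }'
print(substitute_env(config, {"MSG": "hello"}))
# input { heartbeat { message => "hello" } }
```

The bug being fixed produced the unquoted form (`message => hello`), which breaks config strings whose values must remain quoted.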

(cherry picked from commit 7c64c7394b)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Known issue for 8.15.1 related to env vars references (#16455) (#16469)

(cherry picked from commit b54caf3fd8)

Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>

* bump .ruby_version to jruby-9.4.8.0 (#16477) (#16480)

(cherry picked from commit 51cca7320e)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Release notes for 8.15.2 (#16471) (#16478)

Co-authored-by: andsel <selva.andre@gmail.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 01dc76f3b5)

* Change LogStash::Util::SubstitutionVariables#replace_placeholders refine argument to optional (#16485) (#16488)

(cherry picked from commit 8368c00367)

Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>

* Use jruby-9.4.8.0 in exhaustive CIs. (#16489) (#16491)

(cherry picked from commit fd1de39005)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Don't use an older JRuby with oraclelinux-7 (#16499) (#16501)

A recent PR (elastic/ci-agent-images/pull/932) modernized the VM images
and removed JRuby 9.4.5.0 and some older versions.

This ended up breaking exhaustive tests on Oracle Linux 7, which hard-coded
JRuby 9.4.5.0.

PR https://github.com/elastic/logstash/pull/16489 worked around the
problem by pinning to the new JRuby, but actually we don't
need the conditional anymore since the original issue
https://github.com/jruby/jruby/issues/7579#issuecomment-1425885324 has
been resolved and none of our releasable branches (apart from 7.17 which
uses `9.2.20.1`) specify `9.3.x.y` in `/.ruby-version`.

Therefore, this commit removes conditional setting of JRuby for
OracleLinux 7 agents in exhaustive tests (and relies on whatever
`/.ruby-version` defines).

(cherry picked from commit 07c01f8231)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>

* Improve pipeline bootstrap error logs (#16495) (#16504)

This PR adds the details of the causing errors to the pipeline converge state error logs

(cherry picked from commit e84fb458ce)

Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>

* Logstash Health Report Tests Buildkite pipeline setup. (#16416) (#16511)

(cherry picked from commit 5195332bc6)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Make health report test runner script executable. (#16446) (#16512)

(cherry picked from commit 2ebf2658ff)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Backport PR #16423 to 8.x: DLQ-ing events that trigger a conditional evaluation error. (#16493)

* DLQ-ing events that trigger a conditional evaluation error. (#16423)

When a conditional evaluation encounters an error in the expression, the event that triggered the issue is sent to the pipeline's DLQ, if enabled for the executing pipeline.

This PR builds on the work done in #16322: the `ConditionalEvaluationListener`, which receives notifications about if-statement evaluation failures, is improved to also send the event to the DLQ (if enabled in the pipeline) rather than just logging it.
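The improved listener behavior can be sketched as follows (a hypothetical Python rendering; the real `ConditionalEvaluationListener` and DLQ writer are Java):

```python
class DeadLetterQueue:
    """Minimal stand-in for the pipeline's DLQ writer."""

    def __init__(self):
        self.entries = []

    def write(self, event, reason):
        self.entries.append((event, reason))


class ConditionalEvaluationListener:
    """On evaluation failure: log, and also write to the DLQ when one is configured."""

    def __init__(self, dlq=None):
        self.dlq = dlq          # None means DLQ is disabled for this pipeline
        self.logged = []

    def notify(self, error, event):
        self.logged.append(str(error))
        if self.dlq is not None:
            self.dlq.write(event, str(error))


dlq = DeadLetterQueue()
listener = ConditionalEvaluationListener(dlq)
listener.notify(KeyError("code"), {"status": 500})
print(len(dlq.entries))  # 1
```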

(cherry picked from commit b69d993d71)

* Fixed a warning about the non-serializable field DeadLetterQueueWriter in serializable AbstractPipelineExt

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* add deprecation log for `--event_api.tags.illegal` (#16507) (#16515)

- move `--event_api.tags.illegal` from option to deprecated_option
- add deprecation log when the flag is explicitly used
relates: #16356
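The deprecation behavior — warn only when the flag is explicitly passed — can be sketched as follows (flag handling simplified; Logstash's actual option parsing differs):

```python
import warnings

DEPRECATED_OPTIONS = {"--event_api.tags.illegal"}


def parse_flags(argv):
    """Emit a deprecation warning only for deprecated flags the user actually passed."""
    for arg in argv:
        flag = arg.split("=", 1)[0]
        if flag in DEPRECATED_OPTIONS:
            warnings.warn(f"{flag} is deprecated and will be removed",
                          DeprecationWarning)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    parse_flags(["--event_api.tags.illegal=warn"])
print(len(caught))  # 1
```

Defaulted options stay silent; only explicit use of the deprecated flag is reported, matching the description above.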

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
(cherry picked from commit a4eddb8a2a)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

---------

Co-authored-by: ev1yehor <146825775+ev1yehor@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>
Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>

---------

Co-authored-by: ev1yehor <146825775+ev1yehor@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>
Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
2024-10-09 09:48:12 -07:00
github-actions[bot]
c2c62fdce4
add deprecation log for --event_api.tags.illegal (#16507) (#16515)
- move `--event_api.tags.illegal` from option to deprecated_option
- add deprecation log when the flag is explicitly used
relates: #16356

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
(cherry picked from commit a4eddb8a2a)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-10-08 14:40:49 +01:00
github-actions[bot]
e854ac7bf5
Backport PR #16423 to 8.x: DLQ-ing events that trigger a conditional evaluation error. (#16493)
* DLQ-ing events that trigger a conditional evaluation error. (#16423)

When a conditional evaluation encounters an error in the expression, the event that triggered the issue is sent to the pipeline's DLQ, if enabled for the executing pipeline.

This PR builds on the work done in #16322: the `ConditionalEvaluationListener`, which receives notifications about if-statement evaluation failures, is improved to also send the event to the DLQ (if enabled in the pipeline) rather than just logging it.

(cherry picked from commit b69d993d71)

* Fixed a warning about the non-serializable field DeadLetterQueueWriter in serializable AbstractPipelineExt

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-10-08 13:45:15 +01:00
github-actions[bot]
3b751d9794
Make health report test runner script executable. (#16446) (#16512)
(cherry picked from commit 2ebf2658ff)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-10-05 07:32:15 -07:00
github-actions[bot]
2f3f6a9651
Logstash Health Report Tests Buildkite pipeline setup. (#16416) (#16511)
(cherry picked from commit 5195332bc6)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-10-05 07:30:51 -07:00
github-actions[bot]
d1155988c1
Improve pipeline bootstrap error logs (#16495) (#16504)
This PR adds the details of the causing errors to the pipeline converge state error logs

(cherry picked from commit e84fb458ce)

Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
2024-10-03 13:31:22 +02:00
github-actions[bot]
476b9216f2
Don't use an older JRuby with oraclelinux-7 (#16499) (#16501)
A recent PR (elastic/ci-agent-images/pull/932) modernized the VM images
and removed JRuby 9.4.5.0 and some older versions.

This ended up breaking exhaustive tests on Oracle Linux 7, which hard-coded
JRuby 9.4.5.0.

PR https://github.com/elastic/logstash/pull/16489 worked around the
problem by pinning to the new JRuby, but actually we don't
need the conditional anymore since the original issue
https://github.com/jruby/jruby/issues/7579#issuecomment-1425885324 has
been resolved and none of our releasable branches (apart from 7.17 which
uses `9.2.20.1`) specify `9.3.x.y` in `/.ruby-version`.

Therefore, this commit removes conditional setting of JRuby for
OracleLinux 7 agents in exhaustive tests (and relies on whatever
`/.ruby-version` defines).

(cherry picked from commit 07c01f8231)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
2024-10-02 19:57:58 +03:00
github-actions[bot]
2c024daecd
Use jruby-9.4.8.0 in exhaustive CIs. (#16489) (#16491)
(cherry picked from commit fd1de39005)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-10-02 09:33:31 +01:00
github-actions[bot]
eafcf577dd
Change LogStash::Util::SubstitutionVariables#replace_placeholders refine argument to optional (#16485) (#16488)
(cherry picked from commit 8368c00367)

Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
2024-10-01 12:16:03 -07:00
github-actions[bot]
8f20bd90c9
Release notes for 8.15.2 (#16471) (#16478)
Co-authored-by: andsel <selva.andre@gmail.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 01dc76f3b5)
2024-09-26 18:28:55 +02:00
github-actions[bot]
1ccfb161d7
bump .ruby_version to jruby-9.4.8.0 (#16477) (#16480)
(cherry picked from commit 51cca7320e)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-09-25 13:27:54 +01:00
github-actions[bot]
30803789fc
Known issue for 8.15.1 related to env vars references (#16455) (#16469)
(cherry picked from commit b54caf3fd8)

Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>
2024-09-19 14:34:35 -04:00
github-actions[bot]
14f52c0472
Fixes the issue where LS wipes out all quotes from docker env variables. (#16456) (#16459)
* Fixes the issue where LS wipes out all quotes from docker env variables. This is a problem when running LS on docker with CONFIG_STRING, which needs to keep quotes around env variable references.

* Add a docker acceptance integration test.

(cherry picked from commit 7c64c7394b)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-09-17 07:30:45 -07:00
github-actions[bot]
d2b19001de
account for the 8.x in DRA publishing task (#16436) (#16440)
the current DRA publishing task computes the branch from the version
contained in the version.yml

This is done by taking the major.minor and confirming that a branch
exists with that name.

However this pattern won't be applicable for 8.x, as that branch
currently points to 8.16.0 and there is no 8.16 branch.

This commit falls back to reading the buildkite injected
BUILDKITE_BRANCH variable.

(cherry picked from commit 17dba9f829)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-09-10 10:56:27 +01:00
github-actions[bot]
32e7a25a15
use gnu tar compatible minitar to generate tar artifact (#16432) (#16434)
Using VERSION_QUALIFIER when building the tarball distribution will fail since Ruby's TarWriter implements the older POSIX88 version of tar and paths will be longer than 100 characters.

For the long paths being used in Logstash's plugins, mainly due to nested folders from jar-dependencies, we need the tarball to follow either the 2001 ustar format or gnu tar, which is implemented by the minitar gem.

(cherry picked from commit 69f0fa54ca)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-09-09 11:35:21 +01:00
github-actions[bot]
5ef86a8aa1
Fix ConditionalEvaluationError so that it does not include the event that errored in its serialized form, because this class is never expected to be serialized. (#16429) (#16430)
Make the inner field of ConditionalEvaluationError transient so it is skipped during serialization.

(cherry picked from commit bb7ecc203f)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-09-06 11:40:42 +01:00
216 changed files with 9535 additions and 990 deletions


@@ -35,48 +35,71 @@ steps:
automatic:
- limit: 3
- label: ":lab_coat: Integration Tests / part 1"
key: "integration-tests-part-1"
- label: ":lab_coat: Integration Tests / part 1-of-3"
key: "integration-tests-part-1-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 0
ci/integration_tests.sh split 0 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: Integration Tests / part 2"
key: "integration-tests-part-2"
- label: ":lab_coat: Integration Tests / part 2-of-3"
key: "integration-tests-part-2-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 1
ci/integration_tests.sh split 1 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 1"
key: "integration-tests-qa-part-1"
- label: ":lab_coat: Integration Tests / part 3-of-3"
key: "integration-tests-part-3-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 2 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 1-of-3"
key: "integration-tests-qa-part-1-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 0
ci/integration_tests.sh split 0 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 2"
key: "integration-tests-qa-part-2"
- label: ":lab_coat: IT Persistent Queues / part 2-of-3"
key: "integration-tests-qa-part-2-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 1
ci/integration_tests.sh split 1 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 3-of-3"
key: "integration-tests-qa-part-3-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 2 3
retry:
automatic:
- limit: 3


@@ -4,8 +4,12 @@ steps:
- label: ":pipeline: Generate steps"
command: |
set -euo pipefail
echo "--- Building [${WORKFLOW_TYPE}] artifacts"
echo "--- Building [$${WORKFLOW_TYPE}] artifacts"
python3 -m pip install pyyaml
echo "--- Building dynamic pipeline steps"
python3 .buildkite/scripts/dra/generatesteps.py | buildkite-agent pipeline upload
python3 .buildkite/scripts/dra/generatesteps.py > steps.yml
echo "--- Printing dynamic pipeline steps"
cat steps.yml
echo "--- Uploading dynamic pipeline steps"
cat steps.yml | buildkite-agent pipeline upload


@@ -0,0 +1,20 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-4"
diskSizeGb: 64
steps:
- group: ":logstash: Health API integration tests"
key: "testing-phase"
steps:
- label: "main branch"
key: "integ-tests-on-main-branch"
command:
- .buildkite/scripts/health-report-tests/main.sh
retry:
automatic:
- limit: 3


@@ -0,0 +1,14 @@
steps:
- label: "JDK Availability check"
key: "jdk-availability-check"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "4"
memory: "6Gi"
ephemeralStorage: "100Gi"
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
export GRADLE_OPTS="-Xmx2g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info"
ci/check_jdk_version_availability.sh


@@ -79,8 +79,8 @@ steps:
manual:
allowed: true
- label: ":lab_coat: Integration Tests / part 1"
key: "integration-tests-part-1"
- label: ":lab_coat: Integration Tests / part 1-of-3"
key: "integration-tests-part-1-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@@ -95,10 +95,10 @@ steps:
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 0
ci/integration_tests.sh split 0 3
- label: ":lab_coat: Integration Tests / part 2"
key: "integration-tests-part-2"
- label: ":lab_coat: Integration Tests / part 2-of-3"
key: "integration-tests-part-2-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@@ -113,10 +113,28 @@ steps:
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 1
ci/integration_tests.sh split 1 3
- label: ":lab_coat: IT Persistent Queues / part 1"
key: "integration-tests-qa-part-1"
- label: ":lab_coat: Integration Tests / part 3-of-3"
key: "integration-tests-part-3-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as a non-root user
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 2 3
- label: ":lab_coat: IT Persistent Queues / part 1-of-3"
key: "integration-tests-qa-part-1-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@@ -132,10 +150,10 @@ steps:
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 0
ci/integration_tests.sh split 0 3
- label: ":lab_coat: IT Persistent Queues / part 2"
key: "integration-tests-qa-part-2"
- label: ":lab_coat: IT Persistent Queues / part 2-of-3"
key: "integration-tests-qa-part-2-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@@ -151,7 +169,26 @@ steps:
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 1
ci/integration_tests.sh split 1 3
- label: ":lab_coat: IT Persistent Queues / part 3-of-3"
key: "integration-tests-qa-part-3-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as non root (logstash) user. UID is hardcoded in image.
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 2 3
- label: ":lab_coat: x-pack unit tests"
key: "x-pack-unit-tests"


@@ -12,7 +12,7 @@ set -eo pipefail
# https://github.com/elastic/ingest-dev/issues/2664
# *******************************************************
ACTIVE_BRANCHES_URL="https://raw.githubusercontent.com/elastic/logstash/main/ci/branches.json"
ACTIVE_BRANCHES_URL="https://storage.googleapis.com/artifacts-api/snapshots/branches.json"
EXCLUDE_BRANCHES_ARRAY=()
BRANCHES=()
@@ -63,7 +63,7 @@ exclude_branches_to_array
set -u
set +e
# pull releaseable branches from $ACTIVE_BRANCHES_URL
readarray -t ELIGIBLE_BRANCHES < <(curl --retry-all-errors --retry 5 --retry-delay 5 -fsSL $ACTIVE_BRANCHES_URL | jq -r '.branches[].branch')
readarray -t ELIGIBLE_BRANCHES < <(curl --retry-all-errors --retry 5 --retry-delay 5 -fsSL $ACTIVE_BRANCHES_URL | jq -r '.branches[]')
if [[ $? -ne 0 ]]; then
echo "There was an error downloading or parsing the json output from [$ACTIVE_BRANCHES_URL]. Exiting."
exit 1


@@ -9,5 +9,5 @@
"amazonlinux": ["amazonlinux-2023"],
"opensuse": ["opensuse-leap-15"]
},
"windows": ["windows-2022", "windows-2019", "windows-2016"]
"windows": ["windows-2025", "windows-2022", "windows-2019", "windows-2016"]
}


@@ -7,63 +7,42 @@ echo "####################################################################"
source ./$(dirname "$0")/common.sh
# WORKFLOW_TYPE is a CI externally configured environment variable that could assume "snapshot" or "staging" values
info "Building artifacts for the $WORKFLOW_TYPE workflow ..."
case "$WORKFLOW_TYPE" in
snapshot)
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
rake artifact:docker || error "artifact:docker build failed."
rake artifact:docker_oss || error "artifact:docker_oss build failed."
rake artifact:docker_wolfi || error "artifact:docker_wolfi build failed."
rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
else
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker || error "artifact:docker build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker_oss || error "artifact:docker_oss build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker_wolfi || error "artifact:docker_wolfi build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
STACK_VERSION=${STACK_VERSION}-SNAPSHOT
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
: # no-op
;;
staging)
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
RELEASE=1 rake artifact:docker || error "artifact:docker build failed."
RELEASE=1 rake artifact:docker_oss || error "artifact:docker_oss build failed."
RELEASE=1 rake artifact:docker_wolfi || error "artifact:docker_wolfi build failed."
RELEASE=1 rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
RELEASE=1 rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
else
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker || error "artifact:docker build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker_oss || error "artifact:docker_oss build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker_wolfi || error "artifact:docker_wolfi build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
export RELEASE=1
;;
*)
error "Workflow (WORKFLOW_TYPE variable) is not set, exiting..."
;;
esac
rake artifact:docker || error "artifact:docker build failed."
rake artifact:docker_oss || error "artifact:docker_oss build failed."
rake artifact:docker_wolfi || error "artifact:docker_wolfi build failed."
rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [[ "$ARCH" != "aarch64" ]]; then
rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
if [[ "$WORKFLOW_TYPE" == "staging" ]] && [[ -n "$VERSION_QUALIFIER" ]]; then
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases for staging builds only:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER}"
fi
if [[ "$WORKFLOW_TYPE" == "snapshot" ]]; then
STACK_VERSION="${STACK_VERSION}-SNAPSHOT"
fi
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
info "Saving tar.gz for docker images"
save_docker_tarballs "${ARCH}" "${STACK_VERSION}"


@@ -7,39 +7,35 @@ echo "####################################################################"
source ./$(dirname "$0")/common.sh
# WORKFLOW_TYPE is a CI externally configured environment variable that could assume "snapshot" or "staging" values
info "Building artifacts for the $WORKFLOW_TYPE workflow ..."
case "$WORKFLOW_TYPE" in
snapshot)
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
else
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
STACK_VERSION=${STACK_VERSION}-SNAPSHOT
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
: # no-op
;;
staging)
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
RELEASE=1 SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
else
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
export RELEASE=1
;;
*)
error "Workflow (WORKFLOW_TYPE variable) is not set, exiting..."
;;
esac
SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
if [[ "$WORKFLOW_TYPE" == "staging" ]] && [[ -n "$VERSION_QUALIFIER" ]]; then
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases for staging builds only:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER}"
fi
if [[ "$WORKFLOW_TYPE" == "snapshot" ]]; then
STACK_VERSION="${STACK_VERSION}-SNAPSHOT"
fi
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
info "Generated Artifacts"
for file in build/logstash-*; do shasum $file;done


@@ -29,7 +29,7 @@ function save_docker_tarballs {
# Since we are using the system jruby, we need to make sure our jvm process
# uses at least 1g of memory, If we don't do this we can get OOM issues when
# installing gems. See https://github.com/elastic/logstash/issues/5179
export JRUBY_OPTS="-J-Xmx2g"
export JRUBY_OPTS="-J-Xmx4g"
# Extract the version number from the version.yml file
# e.g.: 8.6.0


@@ -3,6 +3,8 @@ import sys
import yaml
YAML_HEADER = '# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json\n'
def to_bk_key_friendly_string(key):
"""
Convert and return key to an acceptable format for Buildkite's key: field
@@ -28,6 +30,8 @@ def package_x86_step(branch, workflow_type):
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$PATH"
eval "$(rbenv init -)"
.buildkite/scripts/dra/build_packages.sh
artifact_paths:
- "**/*.hprof"
'''
return step
@@ -42,6 +46,8 @@ def package_x86_docker_step(branch, workflow_type):
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-16"
diskSizeGb: 200
artifact_paths:
- "**/*.hprof"
command: |
export WORKFLOW_TYPE="{workflow_type}"
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$PATH"
@@ -61,6 +67,8 @@ def package_aarch64_docker_step(branch, workflow_type):
imagePrefix: platform-ingest-logstash-ubuntu-2204-aarch64
instanceType: "m6g.4xlarge"
diskSizeGb: 200
artifact_paths:
- "**/*.hprof"
command: |
export WORKFLOW_TYPE="{workflow_type}"
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$PATH"
@@ -106,6 +114,7 @@ def build_steps_to_yaml(branch, workflow_type):
if __name__ == "__main__":
try:
workflow_type = os.environ["WORKFLOW_TYPE"]
version_qualifier = os.environ.get("VERSION_QUALIFIER", "")
except ImportError:
print(f"Missing env variable WORKFLOW_TYPE. Use export WORKFLOW_TYPE=<staging|snapshot>\n.Exiting.")
exit(1)
@@ -114,18 +123,25 @@ if __name__ == "__main__":
structure = {"steps": []}
# Group defining parallel steps that build and save artifacts
group_key = to_bk_key_friendly_string(f"logstash_dra_{workflow_type}")
if workflow_type.upper() == "SNAPSHOT" and len(version_qualifier)>0:
structure["steps"].append({
"label": f"no-op pipeline because prerelease builds (VERSION_QUALIFIER is set to [{version_qualifier}]) don't support the [{workflow_type}] workflow",
"command": ":",
"skip": "VERSION_QUALIFIER (prerelease builds) not supported with SNAPSHOT DRA",
})
else:
# Group defining parallel steps that build and save artifacts
group_key = to_bk_key_friendly_string(f"logstash_dra_{workflow_type}")
structure["steps"].append({
"group": f":Build Artifacts - {workflow_type.upper()}",
"key": group_key,
"steps": build_steps_to_yaml(branch, workflow_type),
})
structure["steps"].append({
"group": f":Build Artifacts - {workflow_type.upper()}",
"key": group_key,
"steps": build_steps_to_yaml(branch, workflow_type),
})
# Final step: pull artifacts built above and publish them via the release-manager
structure["steps"].extend(
yaml.safe_load(publish_dra_step(branch, workflow_type, depends_on=group_key)),
)
# Final step: pull artifacts built above and publish them via the release-manager
structure["steps"].extend(
yaml.safe_load(publish_dra_step(branch, workflow_type, depends_on=group_key)),
)
print('# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json\n' + yaml.dump(structure, Dumper=yaml.Dumper, sort_keys=False))
print(YAML_HEADER + yaml.dump(structure, Dumper=yaml.Dumper, sort_keys=False))


@@ -7,7 +7,9 @@ echo "####################################################################"
source ./$(dirname "$0")/common.sh
PLAIN_STACK_VERSION=$STACK_VERSION
# DRA_BRANCH can be used for manually testing packaging with PRs
# e.g. define `DRA_BRANCH="main"` and `RUN_SNAPSHOT="true"` under Options/Environment Variables in the Buildkite UI after clicking new Build
BRANCH="${DRA_BRANCH:="${BUILDKITE_BRANCH:=""}"}"
# This is the branch selector that needs to be passed to the release-manager
# It has to be the name of the branch which originates the artifacts.
@@ -15,29 +17,24 @@ RELEASE_VER=`cat versions.yml | sed -n 's/^logstash\:[[:space:]]\([[:digit:]]*\.
if [ -n "$(git ls-remote --heads origin $RELEASE_VER)" ] ; then
RELEASE_BRANCH=$RELEASE_VER
else
RELEASE_BRANCH=main
RELEASE_BRANCH="${BRANCH:="main"}"
fi
echo "RELEASE BRANCH: $RELEASE_BRANCH"
if [ -n "$VERSION_QUALIFIER_OPT" ]; then
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
PLAIN_STACK_VERSION="${PLAIN_STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
VERSION_QUALIFIER="${VERSION_QUALIFIER:=""}"
case "$WORKFLOW_TYPE" in
snapshot)
STACK_VERSION=${STACK_VERSION}-SNAPSHOT
:
;;
staging)
;;
*)
error "Worklflow (WORKFLOW_TYPE variable) is not set, exiting..."
error "Workflow (WORKFLOW_TYPE variable) is not set, exiting..."
;;
esac
info "Uploading artifacts for ${WORKFLOW_TYPE} workflow on branch: ${RELEASE_BRANCH}"
info "Uploading artifacts for ${WORKFLOW_TYPE} workflow on branch: ${RELEASE_BRANCH} for version: ${STACK_VERSION} with version_qualifier: ${VERSION_QUALIFIER}"
if [ "$RELEASE_VER" != "7.17" ]; then
# Version 7.17.x doesn't generates ARM artifacts for Darwin
@@ -55,7 +52,16 @@ rm -f build/logstash-ubi8-${STACK_VERSION}-docker-image-aarch64.tar.gz
info "Downloaded ARTIFACTS sha report"
for file in build/logstash-*; do shasum $file;done
mv build/distributions/dependencies-reports/logstash-${STACK_VERSION}.csv build/distributions/dependencies-${STACK_VERSION}.csv
FINAL_VERSION=$STACK_VERSION
if [[ -n "$VERSION_QUALIFIER" ]]; then
FINAL_VERSION="$FINAL_VERSION-${VERSION_QUALIFIER}"
fi
if [[ "$WORKFLOW_TYPE" == "snapshot" ]]; then
FINAL_VERSION="${STACK_VERSION}-SNAPSHOT"
fi
mv build/distributions/dependencies-reports/logstash-${FINAL_VERSION}.csv build/distributions/dependencies-${FINAL_VERSION}.csv
# set required permissions on artifacts and directory
chmod -R a+r build/*
@ -73,6 +79,22 @@ release_manager_login
# ensure the latest image has been pulled
docker pull docker.elastic.co/infra/release-manager:latest
echo "+++ :clipboard: Listing DRA artifacts for version [$STACK_VERSION], branch [$RELEASE_BRANCH], workflow [$WORKFLOW_TYPE], QUALIFIER [$VERSION_QUALIFIER]"
docker run --rm \
--name release-manager \
-e VAULT_ROLE_ID \
-e VAULT_SECRET_ID \
--mount type=bind,readonly=false,src="$PWD",target=/artifacts \
docker.elastic.co/infra/release-manager:latest \
cli list \
--project logstash \
--branch "${RELEASE_BRANCH}" \
--commit "$(git rev-parse HEAD)" \
--workflow "${WORKFLOW_TYPE}" \
--version "${STACK_VERSION}" \
--artifact-set main \
--qualifier "${VERSION_QUALIFIER}"
info "Running the release manager ..."
# collect the artifacts for use with the unified build
@ -88,8 +110,9 @@ docker run --rm \
--branch ${RELEASE_BRANCH} \
--commit "$(git rev-parse HEAD)" \
--workflow "${WORKFLOW_TYPE}" \
--version "${PLAIN_STACK_VERSION}" \
--version "${STACK_VERSION}" \
--artifact-set main \
--qualifier "${VERSION_QUALIFIER}" \
${DRA_DRY_RUN} | tee rm-output.txt
# extract the summary URL from a release manager output line like:


@ -147,11 +147,6 @@ rake artifact:deb artifact:rpm
set -eo pipefail
source .buildkite/scripts/common/vm-agent-multi-jdk.sh
source /etc/os-release
if [[ "$$(echo $$ID_LIKE | tr '[:upper:]' '[:lower:]')" =~ (rhel|fedora) && "$${VERSION_ID%.*}" -le 7 ]]; then
# jruby-9.3.10.0 unavailable on centos-7 / oel-7, see https://github.com/jruby/jruby/issues/7579#issuecomment-1425885324 / https://github.com/jruby/jruby/issues/7695
# we only need a working jruby to run the acceptance test framework -- the packages have been prebuilt in a previous stage
rbenv local jruby-9.4.5.0
fi
ci/acceptance_tests.sh"""),
}
steps.append(step)


@ -0,0 +1,18 @@
## Description
This package contains integration tests for the Health Report API.
Export `LS_BRANCH` to run on a specific branch. By default, it uses the main branch.
## How to run the Health Report Integration test?
### Prerequisites
Make sure you have Python installed. Install the integration test dependencies with the following command:
```shell
python3 -mpip install -r .buildkite/scripts/health-report-tests/requirements.txt
```
### Run the integration tests
```shell
python3 .buildkite/scripts/health-report-tests/main.py
```
### Troubleshooting
- If you get `WARNING: pip is configured with locations that require TLS/SSL,...` warning message, make sure you have python >=3.12.4 installed.
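For example, to pin the suite to a specific branch instead of the default main (the branch name here is only an illustration):

```shell
# Assumed branch name for illustration; the suite defaults to main when unset
export LS_BRANCH="8.x"
echo "Running health-report tests against branch: ${LS_BRANCH}"
# python3 .buildkite/scripts/health-report-tests/main.py
```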


@ -0,0 +1,94 @@
"""
Health Report Integration test bootstrapper with Python script
- A script to resolve Logstash version if not provided
- Download LS docker image and spin up
- When tests finished, teardown the Logstash
"""
import os
import subprocess
import util
import yaml
class Bootstrap:
ELASTIC_STACK_RELEASED_VERSION_URL = "https://storage.googleapis.com/artifacts-api/releases/current/"
def __init__(self) -> None:
f"""
A constructor of the {Bootstrap}.
Returns:
Resolves Logstash branch considering provided LS_BRANCH
Checks out git branch
"""
logstash_branch = os.environ.get("LS_BRANCH")
if logstash_branch is None:
# version is not specified, use the main branch, no need to git checkout
print(f"LS_BRANCH is not specified, using main branch.")
else:
# LS_BRANCH accepts major latest as a major.x or specific branch as X.Y
if logstash_branch.find(".x") == -1:
print(f"Using specified branch: {logstash_branch}")
util.git_check_out_branch(logstash_branch)
else:
major_version = logstash_branch.split(".")[0]
if major_version and major_version.isnumeric():
resolved_version = self.__resolve_latest_stack_version_for(major_version)
minor_version = resolved_version.split(".")[1]
branch = major_version + "." + minor_version
print(f"Using resolved branch: {branch}")
util.git_check_out_branch(branch)
else:
raise ValueError("Invalid value set to LS_BRANCH. Please set it properly (ex: 8.x or 9.0) "
"and rerun.")
def __resolve_latest_stack_version_for(self, major_version: str) -> str:
resp = util.call_url_with_retry(self.ELASTIC_STACK_RELEASED_VERSION_URL + major_version)
release_version = resp.text.strip()
print(f"Resolved latest version for {major_version} is {release_version}.")
if release_version == "":
raise ValueError(f"Cannot resolve latest version for {major_version} major")
return release_version
def install_plugin(self, plugin_path: str) -> None:
util.run_or_raise_error(
["bin/logstash-plugin", "install", plugin_path],
f"Failed to install {plugin_path}")
def build_logstash(self):
print(f"Building Logstash...")
util.run_or_raise_error(
["./gradlew", "clean", "bootstrap", "assemble", "installDefaultGems"],
"Failed to build Logstash")
print("Logstash built successfully.")
def apply_config(self, config: dict) -> None:
with open(os.getcwd() + "/.buildkite/scripts/health-report-tests/config/pipelines.yml", 'w') as pipelines_file:
yaml.dump(config, pipelines_file)
def run_logstash(self, full_start_required: bool) -> subprocess.Popen:
# --config.reload.automatic keeps the instance active,
# which is helpful when testing pipeline crash cases
config_path = os.getcwd() + "/.buildkite/scripts/health-report-tests/config"
process = subprocess.Popen(["bin/logstash", "--config.reload.automatic", "--path.settings", config_path,
"-w", "1"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, shell=False)
if process.poll() is not None:
print("Logstash failed to run, check the config and logs, then rerun.")
return None
# Read stdout and stderr in real-time
logs = []
for stdout_line in iter(process.stdout.readline, ""):
logs.append(stdout_line.strip())
# we don't wait for Logstash to fully start as we also test slow pipeline start scenarios
if full_start_required is False and "Starting pipeline" in stdout_line:
break
if full_start_required is True and "Pipeline started" in stdout_line:
break
if "Logstash shut down" in stdout_line or "Logstash stopped" in stdout_line:
print(f"Logstash couldn't spin up.")
print(logs)
return None
print(f"Logstash is running with PID: {process.pid}.")
return process
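The readiness scan above keys off specific log lines; a simplified standalone sketch of that logic (the marker strings come from the code above, the function name is mine):

```python
def saw_startup_marker(lines, full_start_required):
    """Scan log lines; True once the expected startup marker appears, False on shutdown."""
    for line in lines:
        if "Logstash shut down" in line or "Logstash stopped" in line:
            return False
        if not full_start_required and "Starting pipeline" in line:
            return True
        if full_start_required and "Pipeline started" in line:
            return True
    return False

print(saw_startup_marker(["[INFO] Starting pipeline {:pipeline_id=>:main}"], False))  # -> True
print(saw_startup_marker(["[INFO] Logstash shut down."], True))  # -> False
```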


@ -0,0 +1 @@
# Intentionally left blank


@ -0,0 +1,69 @@
import yaml
from typing import Any, List, Dict
class ConfigValidator:
REQUIRED_KEYS = {
"root": ["name", "config", "conditions", "expectation"],
"config": ["pipeline.id", "config.string"],
"conditions": ["full_start_required"],
"expectation": ["status", "symptom", "indicators"],
"indicators": ["pipelines"],
"pipelines": ["status", "symptom", "indicators"],
"DYNAMIC": ["status", "symptom", "diagnosis", "impacts", "details"],
"details": ["status"],
"status": ["state"]
}
def __init__(self):
self.yaml_content = None
def __has_valid_keys(self, data: any, key_path: str, repeated: bool) -> bool:
if isinstance(data, str) or isinstance(data, bool): # we reached values
return True
# there are two indicators sections; for subsequent repeated ones, we go deeper
first_key = next(iter(data))
data = data[first_key] if repeated and key_path == "indicators" else data
if isinstance(data, dict):
# pipeline-id is a DYNAMIC key
required = self.REQUIRED_KEYS.get("DYNAMIC" if repeated and key_path == "indicators" else key_path, [])
repeated = not repeated if key_path == "indicators" else repeated
for key in required:
if key not in data:
print(f"Missing key '{key}' in '{key_path}'")
return False
else:
dic_keys_result = self.__has_valid_keys(data[key], key, repeated)
if dic_keys_result is False:
return False
elif isinstance(data, list):
for item in data:
list_keys_result = self.__has_valid_keys(item, key_path, repeated)
if list_keys_result is False:
return False
return True
def load(self, file_path: str) -> None:
"""Load the YAML file content into self.yaml_content."""
self.yaml_content: [Dict[str, Any]] = None
try:
with open(file_path, 'r') as file:
self.yaml_content = yaml.safe_load(file)
except yaml.YAMLError as exc:
print(f"Error in YAML file: {exc}")
self.yaml_content = None
def is_valid(self) -> bool:
"""Validate the entire YAML structure."""
if self.yaml_content is None:
print(f"YAML content is empty.")
return False
if not isinstance(self.yaml_content, dict):
print("YAML structure is not as expected; it should start with a dict.")
return False
result = self.__has_valid_keys(self.yaml_content, "root", False)
return result
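The recursive key check above boils down to verifying required keys at each level; a minimal standalone illustration (the REQUIRED table is trimmed to one level and the names are mine):

```python
REQUIRED = {"root": ["name", "config", "conditions", "expectation"]}

def has_required_keys(data: dict, path: str = "root") -> bool:
    """True when every key required at this path is present in data."""
    return all(key in data for key in REQUIRED.get(path, []))

print(has_required_keys({"name": "s", "config": [], "conditions": [], "expectation": {}}))  # -> True
print(has_required_keys({"name": "s"}))  # -> False
```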


@ -0,0 +1,16 @@
"""
A class to provide information about Logstash node stats.
"""
import util
class LogstashHealthReport:
LOGSTASH_HEALTH_REPORT_URL = "http://localhost:9600/_health_report"
def __init__(self):
pass
def get(self):
response = util.call_url_with_retry(self.LOGSTASH_HEALTH_REPORT_URL)
return response.json()
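A stdlib-only equivalent of the call above, exercised against a stub server standing in for the real endpoint at `http://localhost:9600/_health_report` (the stub class and its payload are assumptions for the demo):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_health_report(url):
    """Fetch and decode a health-report JSON document."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

class StubHealthAPI(BaseHTTPRequestHandler):
    """Stand-in for Logstash's health endpoint; the payload here is made up."""
    def do_GET(self):
        body = json.dumps({"status": "green"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), StubHealthAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
report = get_health_report(f"http://127.0.0.1:{server.server_port}/_health_report")
server.shutdown()
print(report)  # -> {'status': 'green'}
```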


@ -0,0 +1,87 @@
"""
Main entry point of the LS health report API integration test suites
"""
import glob
import os
import time
import traceback
import yaml
from bootstrap import Bootstrap
from scenario_executor import ScenarioExecutor
from config_validator import ConfigValidator
class BootstrapContextManager:
def __init__(self):
pass
def __enter__(self):
print(f"Starting Logstash Health Report Integration test.")
self.bootstrap = Bootstrap()
self.bootstrap.build_logstash()
plugin_path = os.getcwd() + "/qa/support/logstash-integration-failure_injector/logstash-integration" \
"-failure_injector-*.gem"
matching_files = glob.glob(plugin_path)
if len(matching_files) == 0:
raise ValueError(f"Could not find logstash-integration-failure_injector plugin.")
self.bootstrap.install_plugin(matching_files[0])
print(f"logstash-integration-failure_injector successfully installed.")
return self.bootstrap
def __exit__(self, exc_type, exc_value, exc_traceback):
if exc_type is not None:
print(traceback.format_exception(exc_type, exc_value, exc_traceback))
def main():
with BootstrapContextManager() as bootstrap:
scenario_executor = ScenarioExecutor()
config_validator = ConfigValidator()
working_dir = os.getcwd()
scenario_files_path = working_dir + "/.buildkite/scripts/health-report-tests/tests/*.yaml"
scenario_files = glob.glob(scenario_files_path)
for scenario_file in scenario_files:
print(f"Validating {scenario_file} scenario file.")
config_validator.load(scenario_file)
if config_validator.is_valid() is False:
print(f"{scenario_file} scenario file is not valid.")
return
else:
print(f"Validation succeeded.")
has_failed_scenario = False
for scenario_file in scenario_files:
with open(scenario_file, 'r') as file:
# scenario_content: Dict[str, Any] = None
scenario_content = yaml.safe_load(file)
print(f"Testing `{scenario_content.get('name')}` scenario.")
scenario_name = scenario_content['name']
is_full_start_required = next(sub.get('full_start_required') for sub in
scenario_content.get('conditions') if 'full_start_required' in sub)
config = scenario_content['config']
if config is not None:
bootstrap.apply_config(config)
expectations = scenario_content.get("expectation")
process = bootstrap.run_logstash(is_full_start_required)
if process is not None:
try:
scenario_executor.on(scenario_name, expectations)
except Exception as e:
print(e)
has_failed_scenario = True
process.terminate()
time.sleep(5) # leave some window to terminate the process
if has_failed_scenario:
# intentionally fail for visibility
raise Exception("Some scenarios failed, check the log for details.")
if __name__ == "__main__":
main()


@ -0,0 +1,17 @@
#!/bin/bash
set -euo pipefail
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:/opt/buildkite-agent/.java/bin:$PATH"
export JAVA_HOME="/opt/buildkite-agent/.java"
eval "$(rbenv init -)"
eval "$(pyenv init -)"
echo "--- Installing pip"
sudo apt-get install python3-pip -y
echo "--- Installing dependencies"
python3 -mpip install -r .buildkite/scripts/health-report-tests/requirements.txt
echo "--- Running tests"
python3 .buildkite/scripts/health-report-tests/main.py


@ -0,0 +1,2 @@
requests==2.32.3
pyyaml==6.0.2


@ -0,0 +1,65 @@
"""
A class to execute the given scenario for Logstash Health Report integration test
"""
import time
from logstash_health_report import LogstashHealthReport
class ScenarioExecutor:
logstash_health_report_api = LogstashHealthReport()
def __init__(self):
pass
def __has_intersection(self, expects, results):
# we expect expects to be existing in results
for expect in expects:
for result in results:
if result.get('help_url') and "health-report-pipeline-status.html#" not in result.get('help_url'):
return False
if not all(key in result and result[key] == value for key, value in expect.items()):
return False
return True
def __get_difference(self, differences: list, expectations: dict, reports: dict) -> dict:
for key in expectations.keys():
if type(expectations.get(key)) != type(reports.get(key)):
differences.append(f"Scenario expectation and Health API report structure differ for {key}.")
return differences
if isinstance(expectations.get(key), str):
if expectations.get(key) != reports.get(key):
differences.append({key: {"expected": expectations.get(key), "got": reports.get(key)}})
continue
elif isinstance(expectations.get(key), dict):
self.__get_difference(differences, expectations.get(key), reports.get(key))
elif isinstance(expectations.get(key), list):
if not self.__has_intersection(expectations.get(key), reports.get(key)):
differences.append({key: {"expected": expectations.get(key), "got": reports.get(key)}})
return differences
def __is_expected(self, expectations: dict) -> bool:
reports = self.logstash_health_report_api.get()
differences = self.__get_difference([], expectations, reports)
if differences:
print("Differences found in 'expectation' section between YAML content and stats:")
for diff in differences:
print(f"Difference: {diff}")
return False
else:
return True
def on(self, scenario_name: str, expectations: dict) -> None:
# retry the expectation check a few times
attempts = 5
while self.__is_expected(expectations) is False:
attempts = attempts - 1
if attempts == 0:
break
time.sleep(1)
if attempts == 0:
raise Exception(f"{scenario_name} failed.")
else:
print(f"Scenario `{scenario_name}` expectation meets the health report stats.")
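The comparison above walks the nested expectation structure against the live report; a simplified standalone version of that walk (dict-only, and the helper name is mine):

```python
def collect_differences(expected, got, out=None):
    """Collect keys whose expected values are missing or different in got (nested dicts)."""
    out = [] if out is None else out
    for key, exp in expected.items():
        rep = got.get(key) if isinstance(got, dict) else None
        if isinstance(exp, dict) and isinstance(rep, dict):
            collect_differences(exp, rep, out)
        elif exp != rep:
            out.append({key: {"expected": exp, "got": rep}})
    return out

report = {"status": "yellow", "indicators": {"pipelines": {"status": "yellow"}}}
expect = {"status": "red", "indicators": {"pipelines": {"status": "yellow"}}}
print(collect_differences(expect, report))  # -> [{'status': {'expected': 'red', 'got': 'yellow'}}]
```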


@ -0,0 +1,31 @@
name: "Abnormally terminated pipeline"
config:
- pipeline.id: abnormally-terminated-pp
config.string: |
input { heartbeat { interval => 1 } }
filter { failure_injector { crash_at => filter } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
- full_start_required: true
expectation:
status: "red"
symptom: "1 indicator is unhealthy (`pipelines`)"
indicators:
pipelines:
status: "red"
symptom: "1 indicator is unhealthy (`abnormally-terminated-pp`)"
indicators:
abnormally-terminated-pp:
status: "red"
symptom: "The pipeline is unhealthy; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is not running, likely because it has encountered an error"
- action: "view logs to determine the cause of abnormal pipeline shutdown"
impacts:
- description: "the pipeline is not currently processing"
- impact_areas: ["pipeline_execution"]
details:
status:
state: "TERMINATED"


@ -0,0 +1,29 @@
name: "Successfully terminated pipeline"
config:
- pipeline.id: normally-terminated-pp
config.string: |
input { generator { count => 1 } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
- full_start_required: true
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
indicators:
pipelines:
status: "yellow"
symptom: "1 indicator is concerning (`normally-terminated-pp`)"
indicators:
normally-terminated-pp:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline has finished running because its inputs have been closed and events have been processed"
- action: "if you expect this pipeline to run indefinitely, you will need to configure its inputs to continue receiving or fetching events"
impacts:
- impact_areas: ["pipeline_execution"]
details:
status:
state: "FINISHED"


@ -0,0 +1,30 @@
name: "Slow start pipeline"
config:
- pipeline.id: slow-start-pp
config.string: |
input { heartbeat {} }
filter { failure_injector { degrade_at => [register] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
- full_start_required: false
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
indicators:
pipelines:
status: "yellow"
symptom: "1 indicator is concerning (`slow-start-pp`)"
indicators:
slow-start-pp:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is loading"
- action: "if pipeline does not come up quickly, you may need to check the logs to see if it is stalled"
impacts:
- impact_areas: ["pipeline_execution"]
details:
status:
state: "LOADING"


@ -0,0 +1,36 @@
import os
import requests
import subprocess
from requests.adapters import HTTPAdapter, Retry
def call_url_with_retry(url: str, max_retries: int = 5, delay: int = 1) -> requests.Response:
f"""
Calls the given {url} with maximum of {max_retries} retries with {delay} delay.
"""
schema = "https://" if "https://" in url else "http://"
session = requests.Session()
# retry on the most common transient failures such as request timeout (408), etc.
retries = Retry(total=max_retries, backoff_factor=delay, status_forcelist=[408, 502, 503, 504])
session.mount(schema, HTTPAdapter(max_retries=retries))
return session.get(url)
def git_check_out_branch(branch_name: str) -> None:
f"""
Checks out specified branch or fails with error if checkout operation fails.
"""
run_or_raise_error(["git", "checkout", branch_name],
"Error occurred while checking out the " + branch_name + " branch")
def run_or_raise_error(commands: list, error_message):
f"""
Executes the {list} commands and raises an {Exception} if opration fails.
"""
result = subprocess.run(commands, env=os.environ.copy(), universal_newlines=True, stdout=subprocess.PIPE)
if result.returncode != 0:
# universal_newlines=True means stdout is already str; no decode needed
full_error_message = (error_message + ", output: " + result.stdout) \
if result.stdout else error_message
raise Exception(f"{full_error_message}")
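The retry behavior delegated to `Retry`/`HTTPAdapter` above can be sketched in plain Python (a simplified stand-in, not the `requests` API; the backoff math is approximate):

```python
import time

def call_with_retry(fn, max_retries=5, delay=1, retriable=(408, 502, 503, 504)):
    """Call fn() until it returns a non-retriable status or retries are exhausted."""
    status = fn()
    for attempt in range(max_retries):
        if status not in retriable:
            break
        time.sleep(delay * (2 ** attempt))  # roughly how backoff_factor scales sleeps
        status = fn()
    return status

# Simulated endpoint: fails twice with 503, then succeeds
responses = iter([503, 503, 200])
print(call_with_retry(lambda: next(responses), delay=0))  # -> 200
```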


@ -4,6 +4,7 @@ from dataclasses import dataclass, field
import os
import sys
import typing
from functools import partial
from ruamel.yaml import YAML
from ruamel.yaml.scalarstring import LiteralScalarString
@ -177,17 +178,15 @@ class LinuxJobs(Jobs):
super().__init__(os=os, jdk=jdk, group_key=group_key, agent=agent)
def all_jobs(self) -> list[typing.Callable[[], JobRetValues]]:
return [
self.init_annotation,
self.java_unit_test,
self.ruby_unit_test,
self.integration_tests_part_1,
self.integration_tests_part_2,
self.pq_integration_tests_part_1,
self.pq_integration_tests_part_2,
self.x_pack_unit_tests,
self.x_pack_integration,
]
jobs=list()
jobs.append(self.init_annotation)
jobs.append(self.java_unit_test)
jobs.append(self.ruby_unit_test)
jobs.extend(self.integration_test_parts(3))
jobs.extend(self.pq_integration_test_parts(3))
jobs.append(self.x_pack_unit_tests)
jobs.append(self.x_pack_integration)
return jobs
def prepare_shell(self) -> str:
jdk_dir = f"/opt/buildkite-agent/.java/{self.jdk}"
@ -259,17 +258,14 @@ ci/unit_tests.sh ruby
retry=copy.deepcopy(ENABLED_RETRIES),
)
def integration_tests_part_1(self) -> JobRetValues:
return self.integration_tests(part=1)
def integration_test_parts(self, parts) -> list[partial[JobRetValues]]:
return [partial(self.integration_tests, part=idx+1, parts=parts) for idx in range(parts)]
def integration_tests_part_2(self) -> JobRetValues:
return self.integration_tests(part=2)
def integration_tests(self, part: int) -> JobRetValues:
step_name_human = f"Integration Tests - {part}"
step_key = f"{self.group_key}-integration-tests-{part}"
def integration_tests(self, part: int, parts: int) -> JobRetValues:
step_name_human = f"Integration Tests - {part}/{parts}"
step_key = f"{self.group_key}-integration-tests-{part}-of-{parts}"
test_command = f"""
ci/integration_tests.sh split {part-1}
ci/integration_tests.sh split {part-1} {parts}
"""
return JobRetValues(
@ -281,18 +277,15 @@ ci/integration_tests.sh split {part-1}
retry=copy.deepcopy(ENABLED_RETRIES),
)
def pq_integration_tests_part_1(self) -> JobRetValues:
return self.pq_integration_tests(part=1)
def pq_integration_test_parts(self, parts) -> list[partial[JobRetValues]]:
return [partial(self.pq_integration_tests, part=idx+1, parts=parts) for idx in range(parts)]
def pq_integration_tests_part_2(self) -> JobRetValues:
return self.pq_integration_tests(part=2)
def pq_integration_tests(self, part: int) -> JobRetValues:
step_name_human = f"IT Persistent Queues - {part}"
step_key = f"{self.group_key}-it-persistent-queues-{part}"
def pq_integration_tests(self, part: int, parts: int) -> JobRetValues:
step_name_human = f"IT Persistent Queues - {part}/{parts}"
step_key = f"{self.group_key}-it-persistent-queues-{part}-of-{parts}"
test_command = f"""
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split {part-1}
ci/integration_tests.sh split {part-1} {parts}
"""
return JobRetValues(


@ -15,6 +15,8 @@ steps:
multiple: true
default: "${DEFAULT_MATRIX_OS}"
options:
- label: "Windows 2025"
value: "windows-2025"
- label: "Windows 2022"
value: "windows-2022"
- label: "Windows 2019"


@ -1 +1 @@
jruby-9.3.10.0
jruby-9.4.9.0

File diff suppressed because it is too large


@ -13,7 +13,7 @@ gem "ruby-maven-libs", "~> 3", ">= 3.9.6.1"
gem "logstash-output-elasticsearch", ">= 11.14.0"
gem "polyglot", require: false
gem "treetop", require: false
gem "faraday", "~> 1", :require => false # due to elasticsearch-transport (elastic-transport) depending on faraday '~> 1'
gem "minitar", "~> 1", :group => :build
gem "childprocess", "~> 4", :group => :build
gem "fpm", "~> 1", ">= 1.14.1", :group => :build # compound due to bugfix https://github.com/jordansissel/fpm/pull/1856
gem "gems", "~> 1", :group => :build
@ -26,6 +26,8 @@ gem "stud", "~> 0.0.22", :group => :build
gem "fileutils", "~> 1.7"
gem "rubocop", :group => :development
# rubocop-ast 1.43.0 carries a dep on `prism` which requires native c extensions
gem 'rubocop-ast', '= 1.42.0', :group => :development
gem "belzebuth", :group => :development
gem "benchmark-ips", :group => :development
gem "ci_reporter_rspec", "~> 1", :group => :development
@ -39,5 +41,9 @@ gem "simplecov", "~> 0.22.0", :group => :development
gem "simplecov-json", require: false, :group => :development
gem "jar-dependencies", "= 0.4.1" # Gem::LoadError with jar-dependencies 0.4.2
gem "murmurhash3", "= 0.1.6" # Pins until version 0.1.7-java is released
gem "date", "= 3.3.3"
gem "thwait"
gem "bigdecimal", "~> 3.1"
gem "psych", "5.2.2"
gem "cgi", "0.3.7" # Pins until a new jruby version with updated cgi is released
gem "uri", "0.12.3" # Pins until a new jruby version with updated cgi is released


@ -16,7 +16,7 @@ for %%i in ("%LS_HOME%\logstash-core\lib\jars\*.jar") do (
call :concat "%%i"
)
"%JAVACMD%" "%JAVA_OPTS%" -cp "%CLASSPATH%" org.logstash.ackedqueue.PqCheck %*
"%JAVACMD%" %JAVA_OPTS% org.logstash.ackedqueue.PqCheck %*
:concat
IF not defined CLASSPATH (


@ -16,7 +16,7 @@ for %%i in ("%LS_HOME%\logstash-core\lib\jars\*.jar") do (
call :concat "%%i"
)
"%JAVACMD%" %JAVA_OPTS% -cp "%CLASSPATH%" org.logstash.ackedqueue.PqRepair %*
"%JAVACMD%" %JAVA_OPTS% org.logstash.ackedqueue.PqRepair %*
:concat
IF not defined CLASSPATH (


@ -42,7 +42,7 @@ if defined LS_JAVA_HOME (
)
if not exist "%JAVACMD%" (
echo could not find java; set JAVA_HOME or ensure java is in PATH 1>&2
echo could not find java; set LS_JAVA_HOME or ensure java is in PATH 1>&2
exit /b 1
)


@ -101,6 +101,7 @@ allprojects {
"--add-opens=java.base/java.lang=ALL-UNNAMED",
"--add-opens=java.base/java.util=ALL-UNNAMED"
]
maxHeapSize = "2g"
//https://stackoverflow.com/questions/3963708/gradle-how-to-display-test-results-in-the-console-in-real-time
testLogging {
// set options for log level LIFECYCLE
@ -145,7 +146,6 @@ subprojects {
}
version = versionMap['logstash-core']
String artifactVersionsApi = "https://artifacts-api.elastic.co/v1/versions"
tasks.register("configureArchitecture") {
String arch = System.properties['os.arch']
@ -171,33 +171,28 @@ tasks.register("configureArtifactInfo") {
description "Set the url to download stack artifacts for select stack version"
doLast {
def versionQualifier = System.getenv('VERSION_QUALIFIER')
if (versionQualifier) {
version = "$version-$versionQualifier"
}
def splitVersion = version.split('\\.')
int major = splitVersion[0].toInteger()
int minor = splitVersion[1].toInteger()
String branch = "${major}.${minor}"
String fallbackMajorX = "${major}.x"
boolean isFallBackPreviousMajor = minor - 1 < 0
String fallbackBranch = isFallBackPreviousMajor ? "${major-1}.x" : "${major}.${minor-1}"
def qualifiedVersion = ""
boolean isReleaseBuild = System.getenv('RELEASE') == "1" || versionQualifier
String apiResponse = artifactVersionsApi.toURL().text
def dlVersions = new JsonSlurper().parseText(apiResponse)
String qualifiedVersion = dlVersions['versions'].grep(isReleaseBuild ? ~/^${version}$/ : ~/^${version}-SNAPSHOT/)[0]
if (qualifiedVersion == null) {
if (!isReleaseBuild) {
project.ext.set("useProjectSpecificArtifactSnapshotUrl", true)
project.ext.set("stackArtifactSuffix", "${version}-SNAPSHOT")
return
for (b in [branch, fallbackMajorX, fallbackBranch]) {
def url = "https://storage.googleapis.com/artifacts-api/snapshots/${b}.json"
try {
def snapshotInfo = new JsonSlurper().parseText(url.toURL().text)
qualifiedVersion = snapshotInfo.version
println "ArtifactInfo version: ${qualifiedVersion}"
break
} catch (Exception e) {
println "Failed to fetch branch ${branch} from ${url}: ${e.message}"
}
throw new GradleException("could not find the current artifact from the artifact-api ${artifactVersionsApi} for ${version}")
}
// find latest reference to last build
String buildsListApi = "${artifactVersionsApi}/${qualifiedVersion}/builds/"
apiResponse = buildsListApi.toURL().text
def dlBuilds = new JsonSlurper().parseText(apiResponse)
def stackBuildVersion = dlBuilds["builds"][0]
project.ext.set("artifactApiVersionedBuildUrl", "${artifactVersionsApi}/${qualifiedVersion}/builds/${stackBuildVersion}")
project.ext.set("stackArtifactSuffix", qualifiedVersion)
project.ext.set("useProjectSpecificArtifactSnapshotUrl", false)
project.ext.set("artifactApiVersion", qualifiedVersion)
}
}
@ -440,23 +435,13 @@ tasks.register("downloadFilebeat") {
doLast {
download {
String beatVersion = project.ext.get("stackArtifactSuffix")
String downloadedFilebeatName = "filebeat-${beatVersion}-${project.ext.get("beatsArchitecture")}"
String beatsVersion = project.ext.get("artifactApiVersion")
String downloadedFilebeatName = "filebeat-${beatsVersion}-${project.ext.get("beatsArchitecture")}"
project.ext.set("unpackedFilebeatName", downloadedFilebeatName)
if (project.ext.get("useProjectSpecificArtifactSnapshotUrl")) {
def res = SnapshotArtifactURLs.packageUrls("beats", beatVersion, downloadedFilebeatName)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("filebeatDownloadLocation", "${projectDir}/build/${downloadedFilebeatName}.tar.gz")
} else {
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/beats/packages/${downloadedFilebeatName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: buildUrls["package"]["url"])
project.ext.set("filebeatDownloadLocation", "${projectDir}/build/${downloadedFilebeatName}.tar.gz")
}
def res = SnapshotArtifactURLs.packageUrls("beats", beatsVersion, downloadedFilebeatName)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("filebeatDownloadLocation", "${projectDir}/build/${downloadedFilebeatName}.tar.gz")
src project.ext.filebeatSnapshotUrl
onlyIfNewer true
@ -492,20 +477,12 @@ tasks.register("checkEsSHA") {
description "Download ES version remote's fingerprint file"
doLast {
String esVersion = project.ext.get("stackArtifactSuffix")
String esVersion = project.ext.get("artifactApiVersion")
String downloadedElasticsearchName = "elasticsearch-${esVersion}-${project.ext.get("esArchitecture")}"
String remoteSHA
if (project.ext.get("useProjectSpecificArtifactSnapshotUrl")) {
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
remoteSHA = res.packageShaUrl
} else {
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/elasticsearch/packages/${downloadedElasticsearchName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
remoteSHA = buildUrls.package.sha_url.toURL().text
}
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
remoteSHA = res.packageShaUrl
def localESArchive = new File("${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
if (localESArchive.exists()) {
@ -539,25 +516,14 @@ tasks.register("downloadEs") {
doLast {
download {
String esVersion = project.ext.get("stackArtifactSuffix")
String esVersion = project.ext.get("artifactApiVersion")
String downloadedElasticsearchName = "elasticsearch-${esVersion}-${project.ext.get("esArchitecture")}"
project.ext.set("unpackedElasticsearchName", "elasticsearch-${esVersion}")
if (project.ext.get("useProjectSpecificArtifactSnapshotUrl")) {
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("elasticsearchDownloadLocation", "${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
} else {
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/elasticsearch/packages/${downloadedElasticsearchName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: buildUrls["package"]["url"])
project.ext.set("elasticsearchDownloadLocation", "${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
}
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("elasticsearchDownloadLocation", "${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
src project.ext.elasticsearchSnapshotURL
onlyIfNewer true
@ -740,46 +706,66 @@ class JDKDetails {
return createElasticCatalogDownloadUrl()
}
private String createElasticCatalogDownloadUrl() {
// Ask details to catalog https://jvm-catalog.elastic.co/jdk and return the url to download the JDK
// arch x86_64 never used, only aarch64 if macos
// throws an error iff local version in versions.yml doesn't match the latest from JVM catalog.
void checkLocalVersionMatchingLatest() {
// retrieve the metadata from remote
def url = "https://jvm-catalog.elastic.co/jdk/latest_adoptiumjdk_${major}_${osName}"
// Append the cpu's arch only if Mac on aarch64, all the other OSes doesn't have CPU extension
if (arch == "aarch64") {
url += "_${arch}"
}
println "Retrieving JDK from catalog..."
def catalogMetadataUrl = URI.create(url).toURL()
def catalogConnection = catalogMetadataUrl.openConnection()
catalogConnection.requestMethod = 'GET'
assert catalogConnection.responseCode == 200
def metadataRetrieved = catalogConnection.content.text
println "Retrieved!"
def catalogMetadata = new JsonSlurper().parseText(metadataRetrieved)
return catalogMetadata.url
}
private String createAdoptDownloadUrl() {
String releaseName = major > 8 ?
"jdk-${revision}+${build}" :
"jdk${revision}u${build}"
String vendorOsName = vendorOsName(osName)
switch (vendor) {
case "adoptium":
return "https://api.adoptium.net/v3/binary/version/${releaseName}/${vendorOsName}/${arch}/jdk/hotspot/normal/adoptium"
default:
throw RuntimeException("Can't handle vendor: ${vendor}")
if (catalogMetadata.version != revision || catalogMetadata.revision != build) {
throw new GradleException("Found a new JDK version. Please update versions.yml to ${catalogMetadata.version} build ${catalogMetadata.revision}")
}
}
private String vendorOsName(String osName) {
if (osName == "darwin")
return "mac"
return osName
private String createElasticCatalogDownloadUrl() {
// Query the catalog at https://jvm-catalog.elastic.co/jdk and return the URL to download the JDK
// arch x86_64 is the default; aarch64 for macOS or Linux
def url = "https://jvm-catalog.elastic.co/jdk/adoptiumjdk-${revision}+${build}-${osName}"
// Append the cpu's arch only if not x86_64, which is the default
if (arch == "aarch64") {
url += "-${arch}"
}
println "Retrieving JDK from catalog..."
def catalogMetadataUrl = URI.create(url).toURL()
def catalogConnection = catalogMetadataUrl.openConnection()
catalogConnection.requestMethod = 'GET'
if (catalogConnection.responseCode != 200) {
println "Can't find adoptiumjdk ${revision} for ${osName} on Elastic JVM catalog"
throw new GradleException("JVM not present on catalog")
}
def metadataRetrieved = catalogConnection.content.text
println "Retrieved!"
def catalogMetadata = new JsonSlurper().parseText(metadataRetrieved)
validateMetadata(catalogMetadata)
return catalogMetadata.url
}
// Verify that the artifact metadata corresponds to the request; throw an error if not
private void validateMetadata(Map metadata) {
if (metadata.version != revision) {
throw new GradleException("Expected to retrieve a JDK for version ${revision} but received: ${metadata.version}")
}
if (!isSameArchitecture(metadata.architecture)) {
throw new GradleException("Expected to retrieve a JDK for architecture ${arch} but received: ${metadata.architecture}")
}
}
private boolean isSameArchitecture(String metadataArch) {
if (arch == 'x64') {
return metadataArch == 'x86_64'
}
return metadataArch == arch
}
private String parseJdkArchitecture(String jdkArch) {
@@ -791,16 +777,22 @@ class JDKDetails {
return "aarch64"
break
default:
throw RuntimeException("Can't handle CPU architecture: ${jdkArch}")
throw new GradleException("Can't handle CPU architecture: ${jdkArch}")
}
}
}
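The catalog lookup in `createElasticCatalogDownloadUrl` builds its URL from the JDK revision, build, and OS, appending the architecture only when it is not the x86_64 default. A shell sketch of the same URL construction; the revision and build values below are illustrative, not the ones pinned in versions.yml:

```shell
#!/usr/bin/env bash
# Build the Elastic JVM catalog lookup URL the way the Gradle code does:
# adoptiumjdk-<revision>+<build>-<osName>, with "-aarch64" appended only
# when the arch is not the x86_64 default. Values below are illustrative.
revision="21.0.6"; build="7"; os_name="linux"; arch="aarch64"
url="https://jvm-catalog.elastic.co/jdk/adoptiumjdk-${revision}+${build}-${os_name}"
if [[ "$arch" == "aarch64" ]]; then
  url+="-${arch}"
fi
echo "$url"
# → https://jvm-catalog.elastic.co/jdk/adoptiumjdk-21.0.6+7-linux-aarch64
```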
tasks.register("lint") {
// Calls rake's 'lint' task
description = "Lint Ruby source files. Use -PrubySource=file1.rb,file2.rb to specify files"
dependsOn installDevelopmentGems
doLast {
rake(projectDir, buildDir, 'lint:report')
if (project.hasProperty("rubySource")) {
// Split the comma-separated files and pass them as separate arguments
def files = project.property("rubySource").split(",")
rake(projectDir, buildDir, "lint:report", *files)
} else {
rake(projectDir, buildDir, "lint:report")
}
}
}
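The `rubySource` handling above splits a comma-separated project property into separate rake arguments. The equivalent split in shell (the spec file names are placeholders):

```shell
#!/usr/bin/env bash
# Split a comma-separated -PrubySource value into individual arguments,
# mirroring project.property("rubySource").split(",") in the Gradle task.
ruby_source="foo_spec.rb,bar_spec.rb"
IFS=',' read -ra files <<<"$ruby_source"
printf '%s\n' "${files[@]}"
```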
@@ -840,6 +832,15 @@ tasks.register("downloadJdk", Download) {
}
}
tasks.register("checkNewJdkVersion") {
def versionYml = new Yaml().load(new File("$projectDir/versions.yml").text)
// use Linux x86_64 as canary platform
def jdkDetails = new JDKDetails(versionYml, "linux", "x86_64")
// throws a GradleException if the local and remote versions don't match
jdkDetails.checkLocalVersionMatchingLatest()
}
tasks.register("deleteLocalJdk", Delete) {
// CLI project properties: -Pjdk_bundle_os=[windows|linux|darwin]
String osName = selectOsType()

@@ -32,6 +32,8 @@ spec:
- resource:logstash-linux-jdk-matrix-pipeline
- resource:logstash-windows-jdk-matrix-pipeline
- resource:logstash-benchmark-pipeline
- resource:logstash-health-report-tests-pipeline
- resource:logstash-jdk-availability-check-pipeline
# ***********************************
# Declare serverless IT pipeline
@@ -642,4 +644,112 @@ spec:
# *******************************
# SECTION END: Benchmark pipeline
# *******************************
# ***********************************
# Declare Health Report Tests pipeline
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-health-report-tests-pipeline
description: Buildkite pipeline for the Logstash Health Report Tests
links:
- title: ':logstash Logstash Health Report Tests (Daily, Auto) pipeline'
url: https://buildkite.com/elastic/logstash-health-report-tests-pipeline
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: logstash-health-report-tests-pipeline
description: ':logstash: Logstash Health Report tests :pipeline:'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/health_report_tests_pipeline.yml"
maximum_timeout_in_minutes: 60
provider_settings:
trigger_mode: none # don't trigger jobs from github activity
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
schedules:
Daily Health Report tests on main branch:
branch: main
cronline: 30 20 * * *
message: Daily trigger of Health Report Tests Pipeline
# *******************************
# SECTION END: Health Report Tests pipeline
# *******************************
# ***********************************
# Declare JDK check pipeline
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-jdk-availability-check-pipeline
description: ":logstash: check availability of new JDK version"
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: logstash-jdk-availability-check-pipeline
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/jdk_availability_check_pipeline.yml"
maximum_timeout_in_minutes: 10
provider_settings:
trigger_mode: none # don't trigger jobs from github activity
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
schedules:
Weekly JDK availability check (main):
branch: main
cronline: 0 2 * * 1 # every Monday@2AM UTC
message: Weekly trigger of JDK update availability pipeline per branch
env:
PIPELINES_TO_TRIGGER: 'logstash-jdk-availability-check-pipeline'
Weekly JDK availability check (8.x):
branch: 8.x
cronline: 0 2 * * 1 # every Monday@2AM UTC
message: Weekly trigger of JDK update availability pipeline per branch
env:
PIPELINES_TO_TRIGGER: 'logstash-jdk-availability-check-pipeline'
# *******************************
# SECTION END: JDK check pipeline
# *******************************

@@ -19,7 +19,7 @@ function get_package_type {
# uses at least 1g of memory. If we don't do this we can get OOM issues when
# installing gems. See https://github.com/elastic/logstash/issues/5179
export JRUBY_OPTS="-J-Xmx1g"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.console=plain -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export OSS=true
if [ -n "$BUILD_JAVA_HOME" ]; then

@@ -0,0 +1,7 @@
#!/usr/bin/env bash
set -eo pipefail
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
echo "Checking local JDK version against latest remote from JVM catalog"
./gradlew checkNewJdkVersion

@@ -6,7 +6,7 @@ set -x
# uses at least 1g of memory. If we don't do this we can get OOM issues when
# installing gems. See https://github.com/elastic/logstash/issues/5179
export JRUBY_OPTS="-J-Xmx1g"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.console=plain -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
if [ -n "$BUILD_JAVA_HOME" ]; then
GRADLE_OPTS="$GRADLE_OPTS -Dorg.gradle.java.home=$BUILD_JAVA_HOME"

@@ -19,24 +19,15 @@ if [[ $1 = "setup" ]]; then
exit 0
elif [[ $1 == "split" ]]; then
cd qa/integration
glob1=(specs/*spec.rb)
glob2=(specs/**/*spec.rb)
all_specs=("${glob1[@]}" "${glob2[@]}")
# Source shared function for splitting integration tests
source "$(dirname "${BASH_SOURCE[0]}")/partition-files.lib.sh"
specs0=${all_specs[@]::$((${#all_specs[@]} / 2 ))}
specs1=${all_specs[@]:$((${#all_specs[@]} / 2 ))}
cd ../..
if [[ $2 == 0 ]]; then
echo "Running the first half of integration specs: $specs0"
./gradlew runIntegrationTests -PrubyIntegrationSpecs="$specs0" --console=plain
elif [[ $2 == 1 ]]; then
echo "Running the second half of integration specs: $specs1"
./gradlew runIntegrationTests -PrubyIntegrationSpecs="$specs1" --console=plain
else
echo "Error, must specify 0 or 1 after the split. For example ci/integration_tests.sh split 0"
exit 1
fi
index="${2:?index}"
count="${3:-2}"
specs=($(cd qa/integration; partition_files "${index}" "${count}" < <(find specs -name '*_spec.rb') ))
echo "Running integration tests partition[${index}] of ${count}: ${specs[*]}"
./gradlew runIntegrationTests -PrubyIntegrationSpecs="${specs[*]}" --console=plain
elif [[ ! -z $@ ]]; then
echo "Running integration tests 'rspec $@'"

ci/partition-files.lib.sh (new executable file, 78 lines)
@@ -0,0 +1,78 @@
#!/bin/bash
# partition_files returns a consistent partition of the filenames given on stdin
# Usage: partition_files <partition_index> <partition_count=2> < <(ls files)
# partition_index: the zero-based index of the partition to select `[0,partition_count)`
# partition_count: the number of partitions `[2,#files]`
partition_files() (
set -e
local files
# ensure files is consistently sorted and distinct
IFS=$'\n' read -ra files -d '' <<<"$(cat - | sort | uniq)" || true
local partition_index="${1:?}"
local partition_count="${2:?}"
_error () { >&2 echo "ERROR: ${1:-UNSPECIFIED}"; exit 1; }
# safeguard against nonsense invocations
if (( ${#files[@]} < 2 )); then
_error "#files(${#files[@]}) must be at least 2 in order to partition"
elif ( ! [[ "${partition_count}" =~ ^[0-9]+$ ]] ) || (( partition_count < 2 )) || (( partition_count > ${#files[@]})); then
_error "partition_count(${partition_count}) must be a number that is at least 2 and not greater than #files(${#files[@]})"
elif ( ! [[ "${partition_index}" =~ ^[0-9]+$ ]] ) || (( partition_index < 0 )) || (( partition_index >= $partition_count )) ; then
_error "partition_index(${partition_index}) must be a number that is at least 0 and less than partition_count(${partition_count})"
fi
# round-robin: emit the files in our selected partition
for index in "${!files[@]}"; do
partition="$(( index % partition_count ))"
if (( partition == partition_index )); then
echo "${files[$index]}"
fi
done
)
if [[ "$0" == "${BASH_SOURCE[0]}" ]]; then
if [[ "$1" == "test" ]]; then
status=0
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
file_list="$( cd "${SCRIPT_DIR}"; find . -type f )"
# for any legal partitioning into N partitions, we ensure that
# the combined output of `partition_files I N` where `I` is all numbers in
# the range `[0,N)` produces no repeats and no omissions, even if the
# input list is not consistently ordered.
for n in $(seq 2 $(wc -l <<<"${file_list}")); do
result=""
for i in $(seq 0 $(( n - 1 ))); do
for file in $(partition_files $i $n <<<"$( shuf <<<"${file_list}" )"); do
result+="${file}"$'\n'
done
done
repeated="$( uniq --repeated <<<"$( sort <<<"${result}" )" )"
if [[ -n "${repeated}" ]]; then
status=1
echo "[n=${n}]FAIL(repeated):"$'\n'"${repeated}"
fi
missing=$( comm -23 <(sort <<<"${file_list}") <( sort <<<"${result}" ) )
if [[ -n "${missing}" ]]; then
status=1
echo "[n=${n}]FAIL(omitted):"$'\n'"${missing}"
fi
done
if (( status > 0 )); then
echo "There were failures. The input list was:"
echo "${file_list}"
fi
exit "${status}"
else
partition_files "$@"
fi
fi
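The round-robin selection at the heart of `partition_files` can be sketched on its own; `partition_demo` below is a stripped-down illustration (no sorting, deduplication, or argument validation), not the library function itself:

```shell
#!/usr/bin/env bash
# After sorting, the file at index i belongs to partition (i % count);
# emit only the files whose partition matches the requested index.
partition_demo() {
  local partition_index="$1" partition_count="$2"; shift 2
  local -a files=("$@")
  local index
  for index in "${!files[@]}"; do
    if (( index % partition_count == partition_index )); then
      echo "${files[$index]}"
    fi
  done
}

partition_demo 0 2 a_spec.rb b_spec.rb c_spec.rb d_spec.rb
# → a_spec.rb and c_spec.rb (indices 0 and 2)
```

Because every index lands in exactly one partition, running all partitions `0..count-1` reproduces the full list with no repeats and no omissions — the property the library's self-test checks.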

@@ -34,7 +34,7 @@
## basic
# set the I/O temp directory
#-Djava.io.tmpdir=$HOME
#-Djava.io.tmpdir=${HOME}
# set to headless, just in case
-Djava.awt.headless=true

@@ -154,7 +154,7 @@ appender.deprecation_rolling.policies.size.size = 100MB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 30
logger.deprecation.name = org.logstash.deprecation, deprecation
logger.deprecation.name = org.logstash.deprecation
logger.deprecation.level = WARN
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_plain_rolling
logger.deprecation.additivity = false

@@ -332,7 +332,7 @@
#
# Determine where to allocate memory buffers, for plugins that leverage them.
# Default to direct, optionally can be switched to heap to select Java heap space.
# pipeline.buffer.type: direct
# pipeline.buffer.type: heap
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#

@@ -31,7 +31,7 @@ build-from-local-full-artifacts: dockerfile env2yaml
-p 8000:8000 --expose=8000 -v $(ARTIFACTS_DIR):/mnt \
python:3 bash -c 'cd /mnt && python3 -m http.server'
timeout 120 bash -c 'until curl -s localhost:8000 > /dev/null; do sleep 1; done'
docker build --network=host -t $(IMAGE_TAG)-full:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-full data/logstash || \
docker build --progress=plain --network=host -t $(IMAGE_TAG)-full:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-full data/logstash || \
(docker kill $(HTTPD); false); \
docker tag $(IMAGE_TAG)-full:$(VERSION_TAG) $(IMAGE_TAG):$(VERSION_TAG);
docker kill $(HTTPD)
@@ -41,7 +41,7 @@ build-from-local-oss-artifacts: dockerfile env2yaml
-p 8000:8000 --expose=8000 -v $(ARTIFACTS_DIR):/mnt \
python:3 bash -c 'cd /mnt && python3 -m http.server'
timeout 120 bash -c 'until curl -s localhost:8000 > /dev/null; do sleep 1; done'
docker build --network=host -t $(IMAGE_TAG)-oss:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-oss data/logstash || \
docker build --progress=plain --network=host -t $(IMAGE_TAG)-oss:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-oss data/logstash || \
(docker kill $(HTTPD); false);
-docker kill $(HTTPD)
@@ -50,7 +50,7 @@ build-from-local-ubi8-artifacts: dockerfile env2yaml
-p 8000:8000 --expose=8000 -v $(ARTIFACTS_DIR):/mnt \
python:3 bash -c 'cd /mnt && python3 -m http.server'
timeout 120 bash -c 'until curl -s localhost:8000 > /dev/null; do sleep 1; done'
docker build --network=host -t $(IMAGE_TAG)-ubi8:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-ubi8 data/logstash || \
docker build --progress=plain --network=host -t $(IMAGE_TAG)-ubi8:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-ubi8 data/logstash || \
(docker kill $(HTTPD); false);
-docker kill $(HTTPD)
@@ -59,7 +59,7 @@ build-from-local-wolfi-artifacts: dockerfile
-p 8000:8000 --expose=8000 -v $(ARTIFACTS_DIR):/mnt \
python:3 bash -c 'cd /mnt && python3 -m http.server'
timeout 120 bash -c 'until curl -s localhost:8000 > /dev/null; do sleep 1; done'
docker build --network=host -t $(IMAGE_TAG)-wolfi:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-wolfi data/logstash || \
docker build --progress=plain --network=host -t $(IMAGE_TAG)-wolfi:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-wolfi data/logstash || \
(docker kill $(HTTPD); false);
-docker kill $(HTTPD)

@@ -15,7 +15,7 @@ RUN go build
<%# Start image_flavor 'ironbank' %>
ARG BASE_REGISTRY=registry1.dso.mil
ARG BASE_IMAGE=ironbank/redhat/ubi/ubi9
ARG BASE_TAG=9.3
ARG BASE_TAG=9.5
ARG LOGSTASH_VERSION=<%= elastic_version %>
ARG GOLANG_VERSION=1.21.8
@@ -106,7 +106,7 @@ FROM <%= base_image %>
RUN for iter in {1..10}; do \
<% if image_flavor == 'wolfi' %>
<%= package_manager %> add --no-cache curl bash && \
<%= package_manager %> add --no-cache curl bash openssl && \
<% else -%>
<% if image_flavor == 'full' || image_flavor == 'oss' -%>
export DEBIAN_FRONTEND=noninteractive && \
@@ -191,7 +191,7 @@ COPY --from=builder-env2yaml /tmp/go/src/env2yaml/env2yaml /usr/local/bin/env2ya
<% else -%>
COPY env2yaml/env2yaml-amd64 env2yaml/env2yaml-arm64 env2yaml/
# Copy over the appropriate env2yaml artifact
RUN env2yamlarch="$(<%= arch_command %>)"; \
RUN set -eux; env2yamlarch="$(<%= arch_command %>)"; \
case "${env2yamlarch}" in \
'x86_64'|'amd64') \
env2yamlarch=amd64; \
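The `case` statement above (truncated in this hunk) normalizes the machine architecture reported by the container to the suffix of the shipped `env2yaml` binary. A standalone sketch of that normalization; the arm64 branch is an assumption inferred from the two copied artifacts, `env2yaml-amd64` and `env2yaml-arm64`:

```shell
#!/usr/bin/env bash
# Normalize machine-architecture names to env2yaml artifact suffixes,
# in the spirit of the (truncated) Dockerfile case statement above.
# The aarch64/arm64 branch is an assumption, not quoted from the source.
normalize_env2yaml_arch() {
  case "$1" in
    'x86_64'|'amd64') echo amd64 ;;
    'aarch64'|'arm64') echo arm64 ;;
    *) echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

normalize_env2yaml_arch x86_64   # → amd64
```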

@@ -14,7 +14,7 @@ tags:
# Build args passed to Dockerfile ARGs
args:
BASE_IMAGE: "redhat/ubi/ubi9"
BASE_TAG: "9.3"
BASE_TAG: "9.5"
LOGSTASH_VERSION: "<%= elastic_version %>"
GOLANG_VERSION: "1.21.8"

@@ -6,6 +6,8 @@
<titleabbrev>ArcSight Module</titleabbrev>
++++
deprecated[8.16.0]
NOTE: The Logstash ArcSight module is an
https://www.elastic.co/products/x-pack[{xpack}] feature under the Basic License
and is therefore free to use. Please contact

@@ -7,7 +7,7 @@ experimental[]
<titleabbrev>Azure Module (deprecated)</titleabbrev>
++++
deprecated[7.8.0, "We recommend using the Azure modules in {filebeat-ref}/filebeat-module-azure.html[{Filebeat}] and {metricbeat-ref}/metricbeat-module-azure.html[{metricbeat}], which are compliant with the {ecs-ref}/index.html[Elastic Common Schema (ECS)]"]
deprecated[7.8.0, "Replaced by the https://www.elastic.co/guide/en/integrations/current/azure-events.html[Azure Logs integration]."]
The https://azure.microsoft.com/en-us/overview/what-is-azure/[Microsoft Azure]
module in Logstash helps you easily integrate your Azure activity logs and SQL

@@ -21,9 +21,10 @@ loss in this situation, you can <<configuring-dlq,configure Logstash>> to write
unsuccessful events to a dead letter queue instead of dropping them.
NOTE: The dead letter queue is currently supported only for the
<<plugins-outputs-elasticsearch,{es} output>>. The dead letter queue is used for
documents with response codes of 400 or 404, both of which indicate an event
<<plugins-outputs-elasticsearch,{es} output>> and <<conditionals, conditional statement evaluation>>.
The dead letter queue is used for documents with response codes of 400 or 404, both of which indicate an event
that cannot be retried.
It's also used when a conditional evaluation encounters an error.
Each event written to the dead letter queue includes the original event,
metadata that describes the reason the event could not be processed, information
@@ -57,7 +58,12 @@ status code per entry to indicate why the action could not be performed.
If the DLQ is configured, individual indexing failures are routed there.
Even if you regularly process events, events remain in the dead letter queue.
The dead letter queue requires <<dlq-clear,manual intervention>> to clear it.
[[conditionals-dlq]]
==== Conditional statements and the dead letter queue
When a conditional statement encounters an error while processing an event, such as comparing string and integer values,
the event, as it exists at the time of evaluation, is inserted into the dead letter queue.
[[configuring-dlq]]
==== Configuring {ls} to use dead letter queues

@@ -7,7 +7,7 @@ When you configure the Elasticsearch output plugin to use <<plugins-outputs-elas
Examples:
* `output {elasticsearch { cloud_id => "<cloud id>" cloud_auth => "<cloud auth>" } }`
* `output {elasticsearch { cloud_id => "<cloud id>" api_key => "<api key>" } }``
* `output {elasticsearch { cloud_id => "<cloud id>" api_key => "<api key>" } }`
{ess-leadin-short}

@@ -2,13 +2,13 @@
[[monitoring]]
== APIs for monitoring {ls}
{ls} provides monitoring APIs for retrieving runtime metrics
about {ls}:
{ls} provides monitoring APIs for retrieving runtime information about {ls}:
* <<node-info-api>>
* <<plugins-api>>
* <<node-stats-api>>
* <<hot-threads-api>>
* <<logstash-health-report-api>>
You can use the root resource to retrieve general information about the Logstash instance, including
@@ -714,6 +714,11 @@ Example response:
},
"queue" : {
"type" : "memory"
},
"pipeline": {
"workers": 4,
"batch_size": 125,
"batch_delay": 50
}
},
"test2" : {
@@ -795,6 +800,11 @@ Example response:
},
"queue" : {
"type" : "memory"
},
"pipeline": {
"workers": 4,
"batch_size": 125,
"batch_delay": 50
}
}
}
@@ -957,6 +967,11 @@ Example response:
"events_count": 0,
"queue_size_in_bytes": 3885,
"max_queue_size_in_bytes": 1073741824
},
"pipeline": {
"workers": 4,
"batch_size": 125,
"batch_delay": 50
}
}
}
@@ -1184,3 +1199,155 @@ Example of a human-readable response:
org.jruby.internal.runtime.NativeThread.join(NativeThread.java:75)
--------------------------------------------------
[[logstash-health-report-api]]
=== Health report API
An API that reports the health status of Logstash.
[source,js]
--------------------------------------------------
curl -XGET 'localhost:9600/_health_report?pretty'
--------------------------------------------------
==== Description
The health API returns a report with the health status of Logstash and the pipelines that are running inside of it.
The report contains a list of indicators that compose Logstash functionality.
Each indicator has a health status of: `green`, `unknown`, `yellow`, or `red`.
The indicator will provide an explanation and metadata describing the reason for its current health status.
The top-level status is controlled by the worst indicator status.
In the event that an indicator's status is non-green, a list of impacts may be present in the indicator result which detail the functionalities that are negatively affected by the health issue.
Each impact carries with it a severity level, an area of the system that is affected, and a simple description of the impact on the system.
Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system.
The root cause and remediation steps are encapsulated in a `diagnosis`.
A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, and the URL for detailed troubleshooting help.
NOTE: The health indicators perform root cause analysis of non-green health statuses.
This can be computationally expensive when called frequently.
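For scripting against this API, the top-level `status` field is often all you need. A hedged sketch that extracts it with `sed` rather than a real JSON parser — good enough for the pretty-printed response, where the top-level `"status"` is the first one to appear, but not for arbitrary JSON. In practice you would pipe `curl -s 'localhost:9600/_health_report?pretty'` into it:

```shell
#!/usr/bin/env bash
# Pull the first "status" value out of a health-report response body.
# Assumes the pretty-printed shape where the top-level status comes first;
# use a JSON parser (e.g. jq) for anything more robust.
extract_status() {
  sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p' | head -n 1
}

echo '{"status":"yellow","indicators":{}}' | extract_status
# → yellow
```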
==== Response body
`status`::
(Optional, string) Health status of {ls}, based on the aggregated status of all indicators. Statuses are:
`green`:::
{ls} is healthy.
`unknown`:::
The health of {ls} could not be determined.
`yellow`:::
The functionality of {ls} is in a degraded state and may need remediation to avoid the health becoming `red`.
`red`:::
{ls} is experiencing an outage or certain features are unavailable for use.
`indicators`::
(object) Information about the health of the {ls} indicators.
+
.Properties of `indicators`
[%collapsible%open]
====
`<indicator>`::
(object) Contains health results for an indicator.
+
.Properties of `<indicator>`
[%collapsible%open]
=======
`status`::
(string) Health status of the indicator. Statuses are:
`green`:::
The indicator is healthy.
`unknown`:::
The health of the indicator could not be determined.
`yellow`:::
The functionality of an indicator is in a degraded state and may need remediation to avoid the health becoming `red`.
`red`:::
The indicator is experiencing an outage or certain features are unavailable for use.
`symptom`::
(string) A message providing information about the current health status.
`details`::
(Optional, object) An object that contains additional information about the indicator that has led to the current health status result.
Each indicator has <<logstash-health-api-response-details, a unique set of details>>.
`impacts`::
(Optional, array) If a non-healthy status is returned, indicators may include a list of impacts that this health status will have on {ls}.
+
.Properties of `impacts`
[%collapsible%open]
========
`severity`::
(integer) How important this impact is to the functionality of {ls}.
A value of 1 is the highest severity, with larger values indicating lower severity.
`description`::
(string) A description of the impact on {ls}.
`impact_areas`::
(array of strings) The areas of {ls} functionality that this impact affects.
Possible values are:
+
--
* `pipeline_execution`
--
========
`diagnosis`::
(Optional, array) If a non-healthy status is returned, indicators may include a list of diagnoses that encapsulate the cause of the health issue and an action to take in order to remediate the problem.
+
.Properties of `diagnosis`
[%collapsible%open]
========
`cause`::
(string) A description of a root cause of this health problem.
`action`::
(string) A brief description of the steps that should be taken to remediate the problem.
A more detailed step-by-step guide to remediate the problem is provided by the `help_url` field.
`help_url`::
(string) A link to the troubleshooting guide that'll fix the health problem.
========
=======
====
[role="child_attributes"]
[[logstash-health-api-response-details]]
==== Indicator Details
Each health indicator in the health API returns a set of details that further explains the state of the system.
The details have contents and a structure that is unique to each indicator.
[[logstash-health-api-response-details-pipeline]]
===== Pipeline Indicator Details
`pipelines/indicators/<pipeline_id>/details`::
(object) Information about the specified pipeline.
+
.Properties of `pipelines/indicators/<pipeline_id>/details`
[%collapsible%open]
====
`status`::
(object) Details related to the pipeline's current status and run-state.
+
.Properties of `status`
[%collapsible%open]
========
`state`::
(string) The current state of the pipeline, including whether it is `loading`, `running`, `finished`, or `terminated`.
========
====

@@ -71,8 +71,8 @@ details.
* Under **Logs**, modify the log paths to match your {ls} environment.
. Configure the integration to collect metrics.
* Make sure that **Metrics (Technical Preview)** is turned on, and **Metrics (Stack Monitoring)** is turned off.
* Under **Metrics (Technical Preview)**, make sure the {ls} URL setting
* Make sure that **Metrics (Elastic Agent)** is turned on (default), and **Metrics (Stack Monitoring)** is turned off.
* Under **Metrics (Elastic Agent)**, make sure the {ls} URL setting
points to your {ls} instance URLs. +
By default, the integration collects {ls}
monitoring metrics from `https://localhost:9600`. If that host and port number are not

@@ -42,7 +42,7 @@ For more info, check out the {serverless-docs}/observability/what-is-observabili
**Configure the integration to collect metrics**
* Make sure that **Metrics (Stack Monitoring)** is OFF, and that **Metrics (Technical Preview)** is ON.
* Make sure that **Metrics (Stack Monitoring)** is OFF, and that **Metrics (Elastic Agent)** is ON.
* Set the {ls} URL to point to your {ls} instance. +
By default, the integration collects {ls}
monitoring metrics from `https://localhost:9600`. If that host and port number are not

@@ -81,8 +81,8 @@ details about it.
to match your {ls} environment.
. Configure the integration to collect metrics
* Make sure that **Metrics (Stack Monitoring)** is turned on, and **Metrics (Technical Preview)** is turned off, if you
want to collect metrics from your {ls} instance
* Make sure that **Metrics (Stack Monitoring)** is turned on, and **Metrics (Elastic Agent)** is turned off, if you
want to collect metrics from your {ls} instance.
* Under **Metrics (Stack Monitoring)**, make sure the hosts setting
points to your {ls} host URLs. By default, the integration collects {ls}
monitoring metrics from `localhost:9600`. If that host and port number are not

@@ -1,9 +1,9 @@
After you have confirmed enrollment and data is coming in, click **View assets** to access dashboards related to the {ls} integration.
For traditional Stack Monitoring UI, the dashboards marked **[Logs {ls}]** are used to visualize the logs
produced by your {ls} instances, with those marked **[Metrics {ls}]** for the technical preview metrics
produced by your {ls} instances, with those marked **[Metrics {ls}]** for metrics
dashboards.
These are populated with data only if you selected the **Metrics (Technical Preview)** checkbox.
These are populated with data only if you selected the **Metrics (Elastic Agent)** checkbox.
--
[role="screenshot"]

@@ -126,8 +126,8 @@ These tables show examples of overhead by event type and how that affects the mu
| JSON document size (bytes) | Serialized {ls} event size (bytes) | Overhead (bytes) | Overhead (%) | Multiplication Factor
| 947 | 1133 | 186 | 20% | 1.20
| 2707 | 3206 | 499 | 18% | 1.18
| 6751 | 7388 | 637 | 9% | 1.9
| 58901 | 59693 | 792 | 1% | 1.1
| 6751 | 7388 | 637 | 9% | 1.09
| 58901 | 59693 | 792 | 1% | 1.01
|=======================================================================
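The corrected multiplication factors are simply the serialized event size divided by the original JSON size, rounded to two decimals. A quick check of the 6751-byte row:

```shell
#!/usr/bin/env bash
# Multiplication factor = serialized event size / original JSON size.
json_bytes=6751
serialized_bytes=7388
awk -v s="$serialized_bytes" -v j="$json_bytes" 'BEGIN { printf "%.2f\n", s / j }'
# → 1.09
```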
*Example*

@@ -112,6 +112,22 @@ bin/logstash-plugin update logstash-input-github <2>
<1> updates all installed plugins
<2> updates only the plugin you specify
[discrete]
[[updating-major]]
==== Major version plugin updates
By default, to avoid introducing breaking changes, the plugin manager updates only plugins for which newer _minor_ or _patch_ versions exist.
If you wish to also include breaking changes, specify `--level=major`.
[source,shell]
----------------------------------
bin/logstash-plugin update --level=major <1>
bin/logstash-plugin update --level=major logstash-input-github <2>
----------------------------------
<1> updates all installed plugins to latest, including major versions with breaking changes
<2> updates only the plugin you specify to latest, including major versions with breaking changes
[discrete]
[[removing-plugins]]
=== Removing plugins

@@ -3,6 +3,22 @@
This section summarizes the changes in the following releases:
* <<logstash-8-17-4,Logstash 8.17.4>>
* <<logstash-8-17-3,Logstash 8.17.3>>
* <<logstash-8-17-2,Logstash 8.17.2>>
* <<logstash-8-17-1,Logstash 8.17.1>>
* <<logstash-8-17-0,Logstash 8.17.0>>
* <<logstash-8-16-6,Logstash 8.16.6>>
* <<logstash-8-16-5,Logstash 8.16.5>>
* <<logstash-8-16-4,Logstash 8.16.4>>
* <<logstash-8-16-3,Logstash 8.16.3>>
* <<logstash-8-16-2,Logstash 8.16.2>>
* <<logstash-8-16-1,Logstash 8.16.1>>
* <<logstash-8-16-0,Logstash 8.16.0>>
* <<logstash-8-15-5,Logstash 8.15.5>>
* <<logstash-8-15-4,Logstash 8.15.4>>
* <<logstash-8-15-3,Logstash 8.15.3>>
* <<logstash-8-15-2,Logstash 8.15.2>>
* <<logstash-8-15-1,Logstash 8.15.1>>
* <<logstash-8-15-0,Logstash 8.15.0>>
* <<logstash-8-14-3,Logstash 8.14.3>>
@@ -66,9 +82,803 @@ This section summarizes the changes in the following releases:
* <<logstash-8-0-0-alpha1,Logstash 8.0.0-alpha1>>
[[logstash-8-17-4]]
=== Logstash 8.17.4 Release Notes
[[known-issues-8-17-4]]
==== Known issues
** The https://github.com/logstash-plugins/logstash-input-http_poller[http_poller input] plugin will terminate during pipeline startup due to an issue with an underlying library. Please upgrade to logstash-input-http_poller 5.6.1 using {ls}'s plugin manager with `bin/logstash-plugin update logstash-input-http_poller`.
** The https://github.com/logstash-plugins/logstash-input-elasticsearch[elasticsearch input] and https://github.com/logstash-plugins/logstash-filter-elasticsearch[elasticsearch filter] plugins will terminate during pipeline startup due to an upgrade of the underlying elasticsearch ruby client from 7.x to 8.x. Please upgrade to logstash-input-elasticsearch 4.21.2 and logstash-filter-elasticsearch 3.17.1 using {ls}'s plugin manager with `bin/logstash-plugin update logstash-input-elasticsearch logstash-filter-elasticsearch` or downgrade to Logstash 8.17.3.
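For convenience, the remediations for both known issues above can be run as plugin-manager commands (a sketch that assumes a standard {ls} installation, run from the {ls} home directory):

[source,shell]
----
bin/logstash-plugin update logstash-input-http_poller <1>
bin/logstash-plugin update logstash-input-elasticsearch logstash-filter-elasticsearch <2>
----
<1> picks up logstash-input-http_poller 5.6.1 or later
<2> picks up logstash-input-elasticsearch 4.21.2 and logstash-filter-elasticsearch 3.17.1 or later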
[[notable-8-17-4]]
==== Notable issues fixed
* Fix pqcheck and pqrepair on Windows. https://github.com/elastic/logstash/pull/17210[#17210]
* Avoid possible integer overflow in string tokenization. https://github.com/elastic/logstash/pull/17353[#17353]
* Fixed an issue where the /_node/stats API displayed empty pipeline metrics when Logstash monitoring was configured to use legacy collectors. https://github.com/elastic/logstash/pull/17185[#17185]
[[plugins-8-17-4]]
==== Plugins
*Syslog Input - 3.7.1*
* Fix issue where the priority field was not being set correctly when grok failed https://github.com/logstash-plugins/logstash-input-syslog/pull/78[#78]
*Jdbc Integration - 5.5.3*
* [DOC] Rework inline comment to a callout in preparation for upcoming MD conversion https://github.com/logstash-plugins/logstash-integration-jdbc/pull/181[#181]
[[logstash-8-17-3]]
=== Logstash 8.17.3 Release Notes
[[notable-8-17-3]]
==== Notable issues fixed
* Improves performance of the Persistent Queue, especially in the case of large events, by moving deserialization out of the exclusive access lock. https://github.com/elastic/logstash/pull/17050[#17050]
* Improve error logging when Centralized Pipeline Management cannot find a configured pipeline. https://github.com/elastic/logstash/pull/17052[#17052]
* Update logstash-keystore to allow spaces in values when `stdin` is used to set values https://github.com/elastic/logstash/pull/17039[#17039]
[[plugins-8-17-3]]
==== Plugins
*Beats Input - 6.9.3*
* Upgrade netty to 4.1.118 https://github.com/logstash-plugins/logstash-input-beats/pull/514[#514]
*Http Input - 3.10.2*
* Upgrade netty to 4.1.118 https://github.com/logstash-plugins/logstash-input-http/pull/194[#194]
*Tcp Input - 6.4.6*
* Upgrade netty to 4.1.118 https://github.com/logstash-plugins/logstash-input-tcp/pull/233[#233]
[[logstash-8-17-2]]
=== Logstash 8.17.2 Release Notes
[[notable-8-17-2]]
==== Notable issues fixed
* The plugin manager's `update` command now correctly updates only _minor_ versions of plugins by default to avoid breaking changes.
If you wish to also include breaking changes, you must specify `--level=major` https://github.com/elastic/logstash/pull/16974[#16974]
* The plugin manager no longer has issues installing plugins with embedded jars or depending on snakeyaml https://github.com/elastic/logstash/pull/16924[#16924]
* The plugin manager now correctly supports authenticated proxies by transmitting username and password from proxy environment URI https://github.com/elastic/logstash/pull/16958[#16958]
* The logstash-keystore now correctly accepts spaces in values when added via stdin https://github.com/elastic/logstash/pull/17041[#17041]
* The buffered-tokenizer, which is used by many plugins to split streams of bytes by a delimiter, now properly resumes at the next delimiter after encountering a buffer-full condition https://github.com/elastic/logstash/pull/17022[#17022]
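The buffered-tokenizer fix above concerns how tokenizing resumes after an oversized token. A minimal Ruby sketch of that "resume at the next delimiter" idea (an illustration only, not Logstash's actual Java implementation; the class name and size limit are hypothetical):

```ruby
# Splits a byte stream on a delimiter; when a token exceeds the size limit,
# the rest of that token is skipped and tokenizing resumes after the next
# delimiter instead of failing on every subsequent chunk.
class BufferedTokenizer
  def initialize(delimiter = "\n", size_limit = 16)
    @delimiter  = delimiter
    @size_limit = size_limit
    @buffer     = +""
    @dropping   = false # true while skipping the rest of an oversized token
  end

  # Appends data and returns all complete tokens found so far.
  def extract(data)
    @buffer << data
    parts   = @buffer.split(@delimiter, -1)
    @buffer = parts.pop || +"" # incomplete remainder stays buffered
    tokens  = []
    parts.each do |part|
      if @dropping
        @dropping = false # delimiter reached: resume with the next token
        next
      end
      next if part.length > @size_limit # complete but oversized: drop it
      tokens << part
    end
    if @dropping || @buffer.length > @size_limit
      @dropping = true # buffer-full condition: skip until next delimiter
      @buffer   = +""
    end
    tokens
  end
end

tokenizer = BufferedTokenizer.new("\n", 5)
tokenizer.extract("aaaaaaaa") # oversized start, no delimiter yet => []
tokenizer.extract("aa\nok\n") # tail of oversized token dropped  => ["ok"]
```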
[[dependencies-8-17-2]]
==== Updates to dependencies
* Update JDK to 21.0.6+7 https://github.com/elastic/logstash/pull/16989[#16989]
[[plugins-8-17-2]]
==== Plugins
*Elastic_integration Filter - 8.17.1*
* Provides guidance in the logs when the plugin version does not match the connected Elasticsearch `major.minor` version https://github.com/elastic/logstash-filter-elastic_integration/pull/255[#255]
* Embeds Ingest Node components from Elasticsearch 8.17
* Compatible with Logstash 8.15+
*Elasticsearch Filter - 3.17.0*
* Added support for custom headers https://github.com/logstash-plugins/logstash-filter-elasticsearch/pull/190[#190]
*Beats Input - 6.9.2*
* Name netty threads according to their purpose and the plugin id https://github.com/logstash-plugins/logstash-input-beats/pull/511[#511]
*Elasticsearch Input - 4.21.1*
* Fix: prevent plugin crash when hits contain illegal structure https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/183[#183]
* When a hit cannot be converted to an event, the input now emits an event tagged with `_elasticsearch_input_failure` with an `[event][original]` containing a JSON-encoded string representation of the entire hit.
* Add support for custom headers https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/217[#217]
*Http Input - 3.10.1*
* Properly naming netty threads https://github.com/logstash-plugins/logstash-input-http/pull/191[#191]
* Add improved proactive rate-limiting, rejecting new requests when the queue has been actively blocking for more than 10 seconds https://github.com/logstash-plugins/logstash-input-http/pull/179[#179]
*Tcp Input - 6.4.5*
* Name netty threads with plugin id and their purpose https://github.com/logstash-plugins/logstash-input-tcp/pull/229[#229]
*Snmp Integration - 4.0.6*
* [DOC] Fix typo in snmptrap migration section https://github.com/logstash-plugins/logstash-integration-snmp/pull/74[#74]
*Elasticsearch Output - 11.22.12*
* Properly handle http code 413 (Payload Too Large) https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1199[#1199]
* Remove irrelevant log warning about elastic stack version https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1202[#1202]
[[logstash-8-17-1]]
=== Logstash 8.17.1 Release Notes
[[notable-8.17.1]]
==== Notable issues fixed
* Reset internal size counter in BufferedTokenizer during flush https://github.com/elastic/logstash/pull/16760[#16760].
Fixes <<known-issue-8-16-1-json_lines,"input buffer full" error>> that could appear with versions 8.16.0, 8.16.1, and 8.17.0.
* Avoid lock contention when ecs_compatibility is explicitly specified https://github.com/elastic/logstash/pull/16786[#16786]
* Ensure that the Jackson read constraints defaults (Maximum Number value length, Maximum String value length, and Maximum Nesting depth) are applied at runtime if they are absent from jvm.options https://github.com/elastic/logstash/pull/16832[#16832]
* Fixed an issue where environment variables (`${VAR}`) were not interpreted in jvm.options https://github.com/elastic/logstash/pull/16834[#16834]
* Show pipeline metrics (workers, batch_size, batch_delay) in the Node Stats API https://github.com/elastic/logstash/pull/16839[#16839]
[[dependencies-8.17.1]]
==== Updates to dependencies
* Update Iron Bank base image to ubi9/9.5 https://github.com/elastic/logstash/pull/16825[#16825]
[[plugins-8.17.1]]
==== Plugins
*Elastic_integration Filter - 8.17.0*
* Aligns with stack major and minor versions https://github.com/elastic/logstash-filter-elastic_integration/pull/212[#212]
* Embeds Ingest Node components from Elasticsearch 8.17
* Compatible with Logstash 8.15+
*Elasticsearch Filter - 3.16.2*
* Add `x-elastic-product-origin` header to Elasticsearch requests https://github.com/logstash-plugins/logstash-filter-elasticsearch/pull/185[#185]
*Azure_event_hubs Input - 1.5.1*
* Updated multiple Java dependencies https://github.com/logstash-plugins/logstash-input-azure_event_hubs/pull/99[#99]
*Elasticsearch Input - 4.20.5*
* Add `x-elastic-product-origin` header to Elasticsearch requests https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/211[#211]
*Elastic_enterprise_search Integration - 3.0.1*
* Add deprecation log for App Search and Workplace Search. Both products are removed from Elastic Stack in version 9 https://github.com/logstash-plugins/logstash-integration-elastic_enterprise_search/pull/22[#22]
*Jdbc Integration - 5.5.2*
* The input plugin's prior behaviour of opening a new database connection for each scheduled run (removed in `v5.4.1`) is restored, ensuring that infrequently-run schedules do not hold open connections to their databases indefinitely, _without_ reintroducing the leak https://github.com/logstash-plugins/logstash-integration-jdbc/pull/130[#130]
*Kafka Integration - 11.5.4*
* Update kafka client to 3.8.1 and transitive dependencies https://github.com/logstash-plugins/logstash-integration-kafka/pull/188[#188]
* Removed `jar-dependencies` dependency https://github.com/logstash-plugins/logstash-integration-kafka/pull/187[#187]
*Logstash Integration - 1.0.4*
* Fixes a buffer-over-limit exception in the downstream input plugin by emitting event-oriented chunks in the upstream output plugin https://github.com/logstash-plugins/logstash-integration-logstash/pull/25[#25]
*Snmp Integration - 4.0.5*
* Fix typo resulting in "uninitialized constant" exception for invalid column name https://github.com/logstash-plugins/logstash-integration-snmp/pull/73[#73]
*Elasticsearch Output - 11.22.10*
* Add `x-elastic-product-origin` header to Elasticsearch requests https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1194[#1194]
[[logstash-8-17-0]]
=== Logstash 8.17.0 Release Notes
[[known-issues-8-17-0]]
==== Known issues
[[known-issue-8-17-0-jvm]]
===== JVM version changes needed when upgrading {ls} from 8.12.0 (or earlier)
If the `jvm.options` file was modified and not overwritten with the newest version, you may see a "deserialize invocation error" message, causing the pipeline to crash.
Users are affected if the Persistent Queue (PQ) is enabled, and the pipeline is processing messages larger than 20MB.
**Solution:** Apply the default change contained in the newer `jvm.options` file, as seen in this https://github.com/elastic/logstash/blob/v8.17.0/config/jvm.options#L74-L90[example].
[[known-issue-8-17-0-json_lines]]
===== "Input buffer full" error with {ls} 8.16.0, 8.16.1, or 8.17.0
If you are using `json_lines` codec 3.2.0 (or later) with {ls} 8.16.0, 8.16.1, or 8.17.0, you may see an error similar to this one, crashing the pipelines:
```
unable to process event. {:message=>"input buffer full", :class=>"Java::JavaLang::IllegalStateException", :backtrace=>["org.logstash.common.BufferedTokenizerExt.extract(BufferedTokenizerExt.java:83)", "usr.share.logstash.vendor.bundle.jruby.$3_dot_1_dot_0.gems.logstash_minus_codec_minus_json_lines_minus_3_dot_2_dot_2.lib.logstash.codecs.json_lines.RUBY$method$decode$0(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-codec-json_lines-3.2.2/lib/logstash/codecs/json_lines.rb:69)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:165)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:185)",
```
The issue was fixed in https://github.com/elastic/logstash/pull/16760.
This problem is most likely to be seen when you are using the <<plugins-integrations-logstash,{ls} integration>> plugin to ship data between two {ls} instances, but may appear in other situations, too.
**Workaround for {ls}-to-{ls} communication**
The {ls}-to-{ls} issue can be mitigated by:
* Downgrading the _receiving_ {ls} to `8.16.2`, or any {ls} in the `8.15` series, **_AND/OR_**
* Upgrading the <<plugins-integrations-logstash,{ls} integration>> plugin of the _sending_ {ls} to version `1.0.4`.
**Workaround for other `json_lines` codec situations**
Other `json_lines` codec issues can be mitigated by:
* Downgrading {ls} to `8.16.2`, or any {ls} in the `8.15` series.
[[notable-8-17-0]]
==== Notable fixes and improvements
* Add warning that `allow_superuser` will default to `false` in 9.0.0 https://github.com/elastic/logstash/pull/16555[#16555]
* Update deprecation warning to mention ArcSight module will be removed in 9.0.0 https://github.com/elastic/logstash/pull/16648[#16648]
* Update deprecation warning for http.* settings to mention removal in 9.0.0 https://github.com/elastic/logstash/pull/16538[#16538]
[[core-8-17-0]]
==== Changes to Logstash core
* Make max inflight warning global to all pipelines https://github.com/elastic/logstash/pull/16601[#16601]
* Correctly guide user to use LS_JAVA_HOME instead of JAVA_HOME to configure Java on Windows https://github.com/elastic/logstash/pull/16636[#16636]
* Ensure jackson configurations are applied if found in "jvm.options" https://github.com/elastic/logstash/pull/16757[#16757]
* Set `platform = 'java'` in custom java plugins' gemspecs https://github.com/elastic/logstash/pull/16628[#16628]
* Fix offline installation of java plugins containing "-java" in their name https://github.com/elastic/logstash/pull/16637[#16637]
[[dependencies-8.17.0]]
==== Updates to dependencies
* Pin jar-dependencies gem to 0.4.1 to avoid clashing with version bundled with JRuby https://github.com/elastic/logstash/pull/16750[#16750]
* Update JDK to 21.0.5+11 https://github.com/elastic/logstash/pull/16631[#16631]
[[docs-8.17.0]]
==== Documentation enhancements
* Troubleshooting update for JDK bug handling cgroups v1 https://github.com/elastic/logstash/pull/16731[#16731]
==== Plugins
*Http_client Mixin - 7.5.0*
* Adds new mixin configuration option `with_obsolete` to mark `ssl` options as obsolete https://github.com/logstash-plugins/logstash-mixin-http_client/pull/46[#46]
[[logstash-8-16-6]]
=== Logstash 8.16.6 Release Notes
[[notable-8-16-6]]
==== Notable issues fixed
* Fix pqcheck and pqrepair on Windows. https://github.com/elastic/logstash/pull/17210[#17210]
* Avoid possible integer overflow in string tokenization. https://github.com/elastic/logstash/pull/17353[#17353]
[[plugins-8-16-6]]
==== Plugins
*Syslog Input - 3.7.1*
* Fix issue where the priority field was not being set correctly when grok failed https://github.com/logstash-plugins/logstash-input-syslog/pull/78[#78]
*Jdbc Integration - 5.5.3*
* [DOC] Rework inline comment to a callout in preparation for upcoming MD conversion https://github.com/logstash-plugins/logstash-integration-jdbc/pull/181[#181]
[[logstash-8-16-5]]
=== Logstash 8.16.5 Release Notes
[[notable-8-16-5]]
==== Notable issues fixed
* Improves performance of the Persistent Queue, especially in the case of large events, by moving deserialization out of the exclusive access lock. https://github.com/elastic/logstash/pull/17050[#17050]
* Improve error logging when Centralized Pipeline Management cannot find a configured pipeline. https://github.com/elastic/logstash/pull/17052[#17052]
[[plugins-8-16-5]]
==== Plugins
*Beats Input - 6.9.3*
* Upgrade netty to 4.1.118 https://github.com/logstash-plugins/logstash-input-beats/pull/514[#514]
*Http Input - 3.10.2*
* Upgrade netty to 4.1.118 https://github.com/logstash-plugins/logstash-input-http/pull/194[#194]
*Tcp Input - 6.4.6*
* Upgrade netty to 4.1.118 https://github.com/logstash-plugins/logstash-input-tcp/pull/233[#233]
[[logstash-8-16-4]]
=== Logstash 8.16.4 Release Notes
[[notable-8-16-4]]
==== Notable issues fixed
* The plugin manager's `update` command now correctly updates only _minor_ versions of plugins by default to avoid breaking changes.
If you wish to also include breaking changes, you must specify `--level=major` https://github.com/elastic/logstash/pull/16975[#16975]
* The plugin manager no longer has issues installing plugins with embedded jars or depending on snakeyaml https://github.com/elastic/logstash/pull/16925[#16925]
* The plugin manager now correctly supports authenticated proxies by transmitting username and password from proxy environment URI https://github.com/elastic/logstash/pull/16957[#16957]
* The buffered-tokenizer, which is used by many plugins to split streams of bytes by a delimiter, now properly resumes at the next delimiter after encountering a buffer-full condition https://github.com/elastic/logstash/pull/17021[#17021]
[[dependencies-8-16-4]]
==== Updates to dependencies
* Update JDK to 21.0.6+7 https://github.com/elastic/logstash/pull/16990[#16990]
[[plugins-8-16-4]]
==== Plugins
*Elastic_integration Filter - 8.16.1*
* Provides guidance in the logs when the plugin version does not match the connected Elasticsearch `major.minor` version https://github.com/elastic/logstash-filter-elastic_integration/pull/253[#253]
* Embeds Ingest Node components from Elasticsearch 8.16
* Compatible with Logstash 8.15+
*Elasticsearch Filter - 3.17.0*
* Added support for custom headers https://github.com/logstash-plugins/logstash-filter-elasticsearch/pull/190[#190]
*Beats Input - 6.9.2*
* Name netty threads according to their purpose and the plugin id https://github.com/logstash-plugins/logstash-input-beats/pull/511[#511]
*Elasticsearch Input - 4.21.1*
* Fix: prevent plugin crash when hits contain illegal structure https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/183[#183]
* When a hit cannot be converted to an event, the input now emits an event tagged with `_elasticsearch_input_failure` with an `[event][original]` containing a JSON-encoded string representation of the entire hit.
* Add support for custom headers https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/217[#217]
*Http Input - 3.10.1*
* Properly naming netty threads https://github.com/logstash-plugins/logstash-input-http/pull/191[#191]
* Add improved proactive rate-limiting, rejecting new requests when the queue has been actively blocking for more than 10 seconds https://github.com/logstash-plugins/logstash-input-http/pull/179[#179]
*Tcp Input - 6.4.5*
* Name netty threads with plugin id and their purpose https://github.com/logstash-plugins/logstash-input-tcp/pull/229[#229]
*Snmp Integration - 4.0.6*
* [DOC] Fix typo in snmptrap migration section https://github.com/logstash-plugins/logstash-integration-snmp/pull/74[#74]
*Elasticsearch Output - 11.22.12*
* Properly handle http code 413 (Payload Too Large) https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1199[#1199]
* Remove irrelevant log warning about elastic stack version https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1202[#1202]
[[logstash-8-16-3]]
=== Logstash 8.16.3 Release Notes
[[notable-8.16.3]]
==== Notable issues fixed
* Avoid lock contention when ecs_compatibility is explicitly specified https://github.com/elastic/logstash/pull/16786[#16786]
* Ensure that the Jackson read constraints defaults (Maximum Number value length, Maximum String value length, and Maximum Nesting depth) are applied at runtime if they are absent from jvm.options https://github.com/elastic/logstash/pull/16832[#16832]
[[dependencies-8.16.3]]
==== Updates to dependencies
* Update Iron Bank base image to ubi9/9.5 https://github.com/elastic/logstash/pull/16825[#16825]
[[plugins-8.16.3]]
==== Plugins
*Elastic_integration Filter - 8.16.0*
* Aligns with stack major and minor versions https://github.com/elastic/logstash-filter-elastic_integration/pull/210[#210]
* Embeds Ingest Node components from Elasticsearch 8.16
* Compatible with Logstash 8.15+
*Azure_event_hubs Input - 1.5.1*
* Updated multiple Java dependencies https://github.com/logstash-plugins/logstash-input-azure_event_hubs/pull/99[#99]
*Elastic_enterprise_search Integration - 3.0.1*
* Add deprecation log for App Search and Workplace Search. https://github.com/logstash-plugins/logstash-integration-elastic_enterprise_search/pull/22[#22]
*Jdbc Integration - 5.5.2*
* The input plugin's prior behaviour of opening a new database connection for each scheduled run (removed in `v5.4.1`) is restored, ensuring that infrequently-run schedules do not hold open connections to their databases indefinitely, _without_ reintroducing the leak https://github.com/logstash-plugins/logstash-integration-jdbc/pull/130[#130]
*Kafka Integration - 11.5.4*
* Update kafka client to 3.8.1 and transitive dependencies https://github.com/logstash-plugins/logstash-integration-kafka/pull/188[#188]
* Removed `jar-dependencies` dependency https://github.com/logstash-plugins/logstash-integration-kafka/pull/187[#187]
*Snmp Integration - 4.0.5*
* Fix typo resulting in "uninitialized constant" exception for invalid column name https://github.com/logstash-plugins/logstash-integration-snmp/pull/73[#73]
[[logstash-8-16-2]]
=== Logstash 8.16.2 Release Notes
[[notable-8-16-2]]
==== Notable issues fixed
* Reset internal size counter in BufferedTokenizer during flush https://github.com/elastic/logstash/pull/16771[#16771].
Fixes <<known-issue-8-16-1-json_lines,"input buffer full" error>> that could appear with versions 8.16.0 and 8.16.1.
* Ensure overrides to jackson settings are applied during startup https://github.com/elastic/logstash/pull/16758[#16758].
[[dependencies-8-16-2]]
==== Updates to dependencies
* Pin `jar-dependencies` to `0.4.1` and `date` to `3.3.3` to avoid clashes between what's bundled with JRuby and newer versions in Rubygems https://github.com/elastic/logstash/pull/16749[#16749] https://github.com/elastic/logstash/pull/16779[#16779]
==== Plugins
*Elastic_integration Filter - 0.1.17*
* Add `x-elastic-product-origin` header to Elasticsearch requests https://github.com/elastic/logstash-filter-elastic_integration/pull/197[#197]
*Elasticsearch Filter - 3.16.2*
* Add `x-elastic-product-origin` header to Elasticsearch requests https://github.com/logstash-plugins/logstash-filter-elasticsearch/pull/185[#185]
*Elasticsearch Input - 4.20.5*
* Add `x-elastic-product-origin` header to Elasticsearch requests https://github.com/logstash-plugins/logstash-input-elasticsearch/pull/211[#211]
*Jdbc Integration - 5.5.1*
* Document `statement_retry_attempts` and `statement_retry_attempts_wait_time` options https://github.com/logstash-plugins/logstash-integration-jdbc/pull/177[#177]
*Kafka Integration - 11.5.3*
* Update kafka client to 3.7.1 and transitive dependencies https://github.com/logstash-plugins/logstash-integration-kafka/pull/186[#186]
*Logstash Integration - 1.0.4*
* Align output plugin with documentation by producing event-oriented ndjson-compatible payloads instead of JSON array of events https://github.com/logstash-plugins/logstash-integration-logstash/pull/25[#25]
*Elasticsearch Output - 11.22.10*
* Add `x-elastic-product-origin` header to Elasticsearch requests https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1195[#1195]
[[logstash-8-16-1]]
=== Logstash 8.16.1 Release Notes
[[known-issues-8-16-1]]
==== Known issue
[[known-issue-8-16-1-json_lines]]
===== "Input buffer full" error with {ls} 8.16.0, 8.16.1, or 8.17.0
If you are using `json_lines` codec 3.2.0 (or later) with {ls} 8.16.0, 8.16.1, or 8.17.0, you may see an error similar to this one, crashing the pipelines:
```
unable to process event. {:message=>"input buffer full", :class=>"Java::JavaLang::IllegalStateException", :backtrace=>["org.logstash.common.BufferedTokenizerExt.extract(BufferedTokenizerExt.java:83)", "usr.share.logstash.vendor.bundle.jruby.$3_dot_1_dot_0.gems.logstash_minus_codec_minus_json_lines_minus_3_dot_2_dot_2.lib.logstash.codecs.json_lines.RUBY$method$decode$0(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-codec-json_lines-3.2.2/lib/logstash/codecs/json_lines.rb:69)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:165)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:185)",
```
The issue was fixed in https://github.com/elastic/logstash/pull/16760.
This problem is most likely to be seen when you are using the <<plugins-integrations-logstash,{ls} integration>> plugin to ship data between two {ls} instances, but may appear in other situations, too.
**Workaround for {ls}-to-{ls} communication**
The {ls}-to-{ls} issue can be mitigated by:
* Downgrading the _receiving_ {ls} to `8.16.2`, or any {ls} in the `8.15` series, **_AND/OR_**
* Upgrading the {ls} integration plugin of the _sending_ {ls} to version `1.0.4`.
**Workaround for other `json_lines` codec situations**
Other `json_lines` codec issues can be mitigated by:
* Downgrading {ls} to `8.16.2`, or any {ls} in the `8.15` series.
[[notable-8-16-1]]
==== Notable issues fixed
* PipelineBusV2 deadlock proofing: We fixed an issue that could cause a deadlock when the pipeline-to-pipeline feature was in use, causing pipelines (and consequently {ls}) to never terminate https://github.com/elastic/logstash/pull/16680[#16680]
==== Plugins
*Elastic_integration Filter - 0.1.16*
* Reflect the Elasticsearch GeoIP changes into the plugin and sync with Elasticsearch 8.16 branch https://github.com/elastic/logstash-filter-elastic_integration/pull/170[#170]
*Xml Filter - 4.2.1*
* patch rexml to improve performance of multi-threaded xml parsing https://github.com/logstash-plugins/logstash-filter-xml/pull/84[#84]
*Beats Input - 6.9.1*
* Upgrade netty to 4.1.115 https://github.com/logstash-plugins/logstash-input-beats/pull/507[#507]
*Http Input - 3.9.2*
* Upgrade netty to 4.1.115 https://github.com/logstash-plugins/logstash-input-http/pull/183[#183]
*Tcp Input - 6.4.4*
* Upgrade netty to 4.1.115 https://github.com/logstash-plugins/logstash-input-tcp/pull/227[#227]
*Http Output - 5.7.1*
* Added new development `rackup` dependency to fix tests
[[logstash-8-16-0]]
=== Logstash 8.16.0 Release Notes
[[known-issues-8-16-0]]
==== Known issues
[[known-issue-8-16-0-shutdown-failure]]
===== {ls} may fail to shut down under some circumstances
{ls} may fail to shut down when you are using <<pipeline-to-pipeline>>.
Check out issue https://github.com/elastic/logstash/issues/16657[#16657] for details.
Workaround: Add `-Dlogstash.pipelinebus.implementation=v1` to `config/jvm.options`.
This change reverts the `PipelineBus` to `v1`, a version that does not exhibit this issue, but may impact performance in pipeline-to-pipeline scenarios.
[[known-issue-8-16-0-json_lines]]
===== "Input buffer full" error with {ls} 8.16.0, 8.16.1, or 8.17.0
If you are using `json_lines` codec 3.2.0 (or later) with {ls} 8.16.0, 8.16.1, or 8.17.0, you may see an error similar to this one, crashing the pipelines:
```
unable to process event. {:message=>"input buffer full", :class=>"Java::JavaLang::IllegalStateException", :backtrace=>["org.logstash.common.BufferedTokenizerExt.extract(BufferedTokenizerExt.java:83)", "usr.share.logstash.vendor.bundle.jruby.$3_dot_1_dot_0.gems.logstash_minus_codec_minus_json_lines_minus_3_dot_2_dot_2.lib.logstash.codecs.json_lines.RUBY$method$decode$0(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-codec-json_lines-3.2.2/lib/logstash/codecs/json_lines.rb:69)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:165)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:185)",
```
The issue was fixed in https://github.com/elastic/logstash/pull/16760.
This problem is most likely to be seen when you are using the <<plugins-integrations-logstash,{ls} integration>> plugin to ship data between two {ls} instances, but may appear in other situations, too.
**Workaround for {ls}-to-{ls} communication**
The {ls}-to-{ls} issue can be mitigated by:
* Downgrading the _receiving_ {ls} to `8.16.2`, or any {ls} in the `8.15` series, **_AND/OR_**
* Upgrading the {ls} integration plugin of the _sending_ {ls} to version `1.0.4`.
**Workaround for other `json_lines` codec situations**
Other `json_lines` codec issues can be mitigated by:
* Downgrading {ls} to `8.16.2`, or any {ls} in the `8.15` series.
[[health-api-8-16-0]]
==== Announcing the new {ls} Health Report API
The new Health Report API (`GET /_health_report`) is available starting with {ls} `8.16.0`.
This API uses indicators capable of detecting the degraded status of pipelines and
providing actionable insights https://github.com/elastic/logstash/pull/16520[#16520], https://github.com/elastic/logstash/pull/16532[#16532].
**Upgrading from earlier versions.** If your existing automation relies on liveness scripts that expect the {ls} API status to be unavailable or to return a hardcoded `green` status, you can set a property to preserve the pre-8.16.0 behavior.
To maintain existing behavior for API responses, add the `-Dlogstash.forceApiStatus=green` property to your `config/jvm.options` file.
This setting prevents the new Health API status from affecting the top-level `status` field of existing {ls} API responses, forcing other APIs to return the previous hard-coded `green` value. https://github.com/elastic/logstash/pull/16535[#16535]
Check out the <<logstash-health-report-api>> docs for more info.
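As a quick way to try the new API (a sketch; `localhost:9600` assumes the default API host and port of your {ls} instance):

[source,shell]
----
curl -s 'http://localhost:9600/_health_report?pretty'
----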
[[featured-8-16-0]]
==== New features and enhancements
* {ls} now gracefully handles `if` conditionals in pipeline definitions that can't be evaluated (https://github.com/elastic/logstash/pull/16322[#16322]), either by dropping
the event or by sending it to the pipeline's DLQ if enabled. https://github.com/elastic/logstash/pull/16423[#16423]
[[core-8-16-0]]
==== Other changes to Logstash core
* Added deprecation logs for modules `netflow`, `fb_apache` and `azure`. https://github.com/elastic/logstash/pull/16548[#16548]
* Added deprecation logs for users who don't explicitly select a value for `pipeline.buffer.type`, urging them to proactively make a choice before version `9.0`, when this setting will default to heap. https://github.com/elastic/logstash/pull/16498[#16498]
* The flag `--event_api.tags.illegal` was deprecated and will be removed in version 9. This flag remains available throughout all version 8.x releases. Users who rely on this flag to allow non-string assignment to the `tags` field should update their pipelines. https://github.com/elastic/logstash/pull/16507[#16507]
[[dependencies-8.16.0]]
==== Updates to dependencies
* Updated JRuby to 9.4.9.0 https://github.com/elastic/logstash/pull/16638[#16638]
[[plugins-8-16-0]]
==== Plugins
*Cef Codec - 6.2.8*
* [DOC] Added missing documentation of the `raw_data_field` option https://github.com/logstash-plugins/logstash-codec-cef/pull/105[#105]
*Json_lines Codec - 3.2.2*
* Raised the default value of the `decode_size_limit_bytes` option to 512 MB https://github.com/logstash-plugins/logstash-codec-json_lines/pull/46[#46]
* Added the `decode_size_limit_bytes` option to limit the maximum size of JSON lines that can be parsed. https://github.com/logstash-plugins/logstash-codec-json_lines/pull/43[#43]
*Elastic_integration Filter - 0.1.15*
* Use Elasticsearch code from its `8.16` branch and adapt to changes in Elasticsearch GeoIP processor https://github.com/elastic/logstash-filter-elastic_integration/pull/170[#170]
*Geoip Filter - 7.3.1*
* Fixed a pipeline crash when looking up a database with customised fields https://github.com/logstash-plugins/logstash-filter-geoip/pull/225[#225]
*Azure_event_hubs Input - 1.5.0*
* Updated Azure Event Hub client library to version `3.3.0` https://github.com/logstash-plugins/logstash-input-azure_event_hubs/pull/96[#96]
*Beats Input - 6.9.0*
* Improved plugin's shutdown process and fixed a crash when a connection is terminated while processing messages https://github.com/logstash-plugins/logstash-input-beats/pull/500[#500]
*Http Input - 3.9.1*
* Fixed an issue where the value of `ssl_enabled` during `run` wasn't correctly logged https://github.com/logstash-plugins/logstash-input-http/pull/180[#180]
* Separated Netty boss and worker groups to improve the graceful shutdown https://github.com/logstash-plugins/logstash-input-http/pull/178[#178]
*Tcp Input - 6.4.3*
* Updated dependencies for TCP input https://github.com/logstash-plugins/logstash-input-tcp/pull/224[#224]
*Jdbc Integration - 5.5.0*
* Added support for SQL `DATE` columns to jdbc static and streaming filters https://github.com/logstash-plugins/logstash-integration-jdbc/pull/171[#171]
*Rabbitmq Integration - 7.4.0*
* Removed obsolete `verify_ssl` and `debug` options https://github.com/logstash-plugins/logstash-integration-rabbitmq/pull/60[#60]
[[logstash-8-15-5]]
=== Logstash 8.15.5 Release Notes
[[notable-8-15-5]]
==== Notable issues fixed
* PipelineBusV2 deadlock proofing: We fixed an issue that could cause a deadlock when the pipeline-to-pipeline feature was in use, causing pipelines (and consequently {ls}) to never terminate https://github.com/elastic/logstash/pull/16681[#16681]
* We reverted a change in BufferedTokenizer (https://github.com/elastic/logstash/pull/16482[#16482]) that improved handling of large messages but introduced a double encoding bug https://github.com/elastic/logstash/pull/16687[#16687].
==== Plugins
*Elastic_integration Filter - 0.1.16*
* Reflected the Elasticsearch GeoIP changes in the plugin and synced with the Elasticsearch 8.16 branch https://github.com/elastic/logstash-filter-elastic_integration/pull/170[#170]
*Xml Filter - 4.2.1*
* Patched `rexml` to improve performance of multi-threaded XML parsing https://github.com/logstash-plugins/logstash-filter-xml/pull/84[#84]
*Tcp Input - 6.4.4*
* Updated Netty to 4.1.115 https://github.com/logstash-plugins/logstash-input-tcp/pull/227[#227]
*Http Output - 5.7.1*
* Added new development `rackup` dependency to fix tests
[[logstash-8-15-4]]
=== Logstash 8.15.4 Release Notes
[[known-issues-8-15-4]]
==== Known issue
**{ls} may fail to shut down under some circumstances when you are using <<pipeline-to-pipeline>>.**
Check out issue https://github.com/elastic/logstash/issues/16657[#16657] for details.
Workaround: Add `-Dlogstash.pipelinebus.implementation=v1` to `config/jvm.options`.
This change reverts the `PipelineBus` to `v1`, a version that does not exhibit this issue, but may impact performance in pipeline-to-pipeline scenarios.
[[notable-8-15-4]]
==== Notable issues fixed
* Fixed an issue where Logstash could not consume lines correctly when a codec with a delimiter is in use and the input buffer becomes full https://github.com/elastic/logstash/pull/16482[#16482]
[[dependencies-8-15-4]]
==== Updates to dependencies
* Updated JRuby to 9.4.9.0 https://github.com/elastic/logstash/pull/16638[#16638]
[[plugins-8-15-4]]
==== Plugins
*Cef Codec - 6.2.8*
* [DOC] Added `raw_data_field` to docs https://github.com/logstash-plugins/logstash-codec-cef/pull/105[#105]
*Elastic_integration Filter - 0.1.15*
* Fixed the connection failure where SSL verification mode is disabled over SSL connection https://github.com/elastic/logstash-filter-elastic_integration/pull/165[#165]
*Geoip Filter - 7.3.1*
* Fixed issue causing pipelines to crash during lookup when a database has custom fields https://github.com/logstash-plugins/logstash-filter-geoip/pull/225[#225]
*Tcp Input - 6.4.3*
* Updated dependencies https://github.com/logstash-plugins/logstash-input-tcp/pull/224[#224]
[[logstash-8-15-3]]
=== Logstash 8.15.3 Release Notes
[[known-issues-8-15-3]]
==== Known issue
**{ls} may fail to shut down under some circumstances when you are using <<pipeline-to-pipeline>>.**
Check out issue https://github.com/elastic/logstash/issues/16657[#16657] for details.
Workaround: Add `-Dlogstash.pipelinebus.implementation=v1` to `config/jvm.options`.
This change reverts the `PipelineBus` to `v1`, a version that does not exhibit this issue, but may impact performance in pipeline-to-pipeline scenarios.
[[notable-8.15.3]]
==== Notable issues fixed
* Improved the pipeline bootstrap error logs to include the cause's backtrace, giving a hint where the issue occurred https://github.com/elastic/logstash/pull/16495[#16495]
* Fixed Logstash core compatibility issues with `logstash-input-azure_event_hubs` versions `1.4.8` and earlier https://github.com/elastic/logstash/pull/16485[#16485]
==== Plugins
*Elastic_integration Filter - 0.1.14*
* Enabled the use of org.elasticsearch.ingest.common.Processors in Ingest Pipelines, resolving an issue where some integrations would fail to load https://github.com/elastic/logstash-filter-elastic_integration/pull/162[#162]
*Azure_event_hubs Input - 1.4.9*
* Fixed issue with `getHostContext` method accessibility, causing plugin not to be able to run https://github.com/logstash-plugins/logstash-input-azure_event_hubs/pull/93[#93]
* Fixed connection placeholder replacements errors with Logstash `8.15.1` and `8.15.2` https://github.com/logstash-plugins/logstash-input-azure_event_hubs/pull/92[#92]
*Kafka Integration - 11.5.2*
* Updated avro to 1.11.4 and confluent kafka to 7.4.7 https://github.com/logstash-plugins/logstash-integration-kafka/pull/184[#184]
[[logstash-8-15-2]]
=== Logstash 8.15.2 Release Notes
[[known-issues-8-15-2]]
==== Known issue
**{ls} may fail to shut down under some circumstances when you are using <<pipeline-to-pipeline>>.**
Check out issue https://github.com/elastic/logstash/issues/16657[#16657] for details.
Workaround: Add `-Dlogstash.pipelinebus.implementation=v1` to `config/jvm.options`.
This change reverts the `PipelineBus` to `v1`, a version that does not exhibit this issue, but may impact performance in pipeline-to-pipeline scenarios.
[[notable-8.15.2]]
==== Notable issues fixed
* Fixed a https://github.com/elastic/logstash/issues/16437[regression] from {ls} 8.15.1 in which {ls} removes all quotes from docker env variables, possibly causing {ls} not to start https://github.com/elastic/logstash/pull/16456[#16456]
==== Plugins
*Beats Input - 6.8.4*
* Fixed population of the `@metadata` fields even when the source's metadata value is `nil` https://github.com/logstash-plugins/logstash-input-beats/pull/502[#502]
*Dead_letter_queue Input - 2.0.1*
* Fix NullPointerException when the plugin closes https://github.com/logstash-plugins/logstash-input-dead_letter_queue/pull/53[#53]
*Elastic_serverless_forwarder Input - 0.1.5*
* [DOC] Fix attributes to accurately set and clear default codec values https://github.com/logstash-plugins/logstash-input-elastic_serverless_forwarder/pull/8[#8]
*Logstash Integration - 1.0.3*
* [DOC] Fix attributes to accurately set and clear default codec values https://github.com/logstash-plugins/logstash-integration-logstash/pull/23[#23]
*Elasticsearch Output - 11.22.9*
* Vendor ECS template for Elasticsearch 9.x in built gem https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1188[#1188]
* Added ECS template for Elasticsearch 9.x https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/1187[#1187]
[[logstash-8-15-1]]
=== Logstash 8.15.1 Release Notes
[[known-issues-8-15-1]]
==== Known issues
* **{ls} may fail to start under some circumstances.** Single and double quotes are stripped from a pipeline configuration if the configuration includes environment or keystore variable references.
If this situation occurs, {ls} may fail to start or some plugins may use a malformed configuration.
Check out issue https://github.com/elastic/logstash/issues/16437[#16437] for details.
+
Workaround: Downgrade to {ls} 8.15.0, or temporarily avoid using environment and keystore variable references.
* **{ls} may fail to shut down under some circumstances when you are using <<pipeline-to-pipeline>>.**
Check out issue https://github.com/elastic/logstash/issues/16657[#16657] for details.
+
Workaround: Add `-Dlogstash.pipelinebus.implementation=v1` to `config/jvm.options`.
This change reverts the `PipelineBus` to `v1`, a version that does not exhibit this issue, but may impact performance in pipeline-to-pipeline scenarios.
[[notable-8.15.1]]
==== Performance improvements and notable issues fixed
@ -102,6 +912,15 @@ This section summarizes the changes in the following releases:
[[logstash-8-15-0]]
=== Logstash 8.15.0 Release Notes
[[known-issues-8-15-0]]
==== Known issue
**{ls} may fail to shut down under some circumstances when you are using <<pipeline-to-pipeline>>.**
Check out issue https://github.com/elastic/logstash/issues/16657[#16657] for details.
Workaround: Add `-Dlogstash.pipelinebus.implementation=v1` to `config/jvm.options`.
This change reverts the `PipelineBus` to `v1`, a version that does not exhibit this issue, but may impact performance in pipeline-to-pipeline scenarios.
[[snmp-ga-8.15.0]]
==== Announcing the new {ls} SNMP integration plugin
@ -224,6 +1043,14 @@ This new image flavor builds on top of a smaller and more secure base image, and
[[logstash-8-14-3]]
=== Logstash 8.14.3 Release Notes
[[known-issues-8-14-3]]
==== Known issue
**{ls} performance regression in JSON encoding**
{ls} `8.14.1` fixed a bug in the JSON encoding of strings containing non-Unicode data https://github.com/elastic/logstash/issues/15833[#15833].
The fix introduced a performance regression that has since been solved with https://github.com/elastic/logstash/pull/16313[#16313] and included in {ls} `8.15.0`.
There is no workaround for this issue; please upgrade to {ls} 8.15.0 or later.
[[notable-8.14.3]]
==== Enhancements and notable issues fixed
@ -2475,4 +3302,4 @@ We have added another flag to the Benchmark CLI to allow passing a data file wit
This feature allows users to run the Benchmark CLI in a custom test case with a custom config and a custom dataset. https://github.com/elastic/logstash/pull/12437[#12437]
==== Plugin releases
Plugins align with release 7.14.0
Plugins align with release 7.14.0


@ -249,8 +249,7 @@ POST /_security/api_key
"name": "logstash_host001", <1>
"role_descriptors": {
"logstash_monitoring": { <2>
-"cluster": ["monitor"],
-"index": ["read"]
+"cluster": ["monitor", "manage_logstash_pipelines"]
}
}
}


@ -78,7 +78,7 @@ As always, there's a definite argument for consistency across deployments.
[[es-sec-plugin]]
==== Configure the elasticsearch output
-Use the <<plugins-outputs-elasticsearch,`elasticsearch output`'s>> <<plugins-outputs-elasticsearch-cacert,`cacert` option>> to point to the certificate's location.
+Use the <<plugins-outputs-elasticsearch,`elasticsearch output`'s>> <<plugins-outputs-elasticsearch-ssl_certificate_authorities,`ssl_certificate_authorities` option>> to point to the certificate's location.
**Example**
@ -87,7 +87,7 @@ Use the <<plugins-outputs-elasticsearch,`elasticsearch output`'s>> <<plugins-out
output {
elasticsearch {
hosts => ["https://..."] <1>
-cacert => '/etc/logstash/config/certs/ca.crt' <2>
+ssl_certificate_authorities => ['/etc/logstash/config/certs/ca.crt'] <2>
}
}
-------


@ -1,6 +1,6 @@
[discrete]
[[ls-user-access]]
-=== Granting access to the Logstash indices
+=== Granting access to the indices Logstash creates
To access the indices Logstash creates, users need the `read` and
`view_index_metadata` privileges:
@ -13,14 +13,20 @@ privileges for the Logstash indices. You can create roles from the
---------------------------------------------------------------
POST _security/role/logstash_reader
{
-"cluster": ["manage_logstash_pipelines"]
+"cluster": ["manage_logstash_pipelines"],
+"indices": [
+  {
+    "names": [ "logstash-*" ],
+    "privileges": ["read","view_index_metadata"]
+  }
+]
}
---------------------------------------------------------------
. Assign your Logstash users the `logstash_reader` role. If the Logstash user
will be using
{logstash-ref}/logstash-centralized-pipeline-management.html[centralized pipeline management],
-also assign the `logstash_admin` role. You can create and manage users from the
+also assign the `logstash_system` role. You can create and manage users from the
**Management > Users** UI in {kib} or through the `user` API:
+
[source, sh]
@ -28,9 +34,9 @@ also assign the `logstash_admin` role. You can create and manage users from the
POST _security/user/logstash_user
{
"password" : "x-pack-test-password",
-"roles" : [ "logstash_reader", "logstash_admin"], <1>
+"roles" : [ "logstash_reader", "logstash_system"], <1>
"full_name" : "Kibana User for Logstash"
}
---------------------------------------------------------------
-<1> `logstash_admin` is a built-in role that provides access to system
-indices for managing configurations.
+<1> `logstash_system` is a built-in role that provides the necessary permissions to
+check the availability of the supported features of {es} cluster.


@ -11,12 +11,12 @@ client-certificate for authentication, you configure the `keystore` and
output {
elasticsearch {
...
-keystore => /path/to/keystore.jks
-keystore_password => realpassword
-truststore => /path/to/truststore.jks <1>
-truststore_password => realpassword
+ssl_keystore_path => /path/to/keystore.jks
+ssl_keystore_password => realpassword
+ssl_truststore_path => /path/to/truststore.jks <1>
+ssl_truststore_password => realpassword
}
}
--------------------------------------------------
<1> If you use a separate truststore, the truststore path and password are
also required.
also required.


@ -3,21 +3,23 @@
=== Configuring Logstash to use TLS/SSL encryption
If TLS encryption is enabled on an on premise {es} cluster, you need to
-configure the `ssl` and `cacert` options in your Logstash `.conf` file:
+configure the `ssl_enabled` and `ssl_certificate_authorities` options in your Logstash `.conf` file:
NOTE: See https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html[elasticsearch output plugin documentation] for a full list of options
[source,js]
--------------------------------------------------
output {
elasticsearch {
...
-ssl => true
-cacert => '/path/to/cert.pem' <1>
+ssl_enabled => true
+ssl_certificate_authorities => '/path/to/cert.pem' <1>
}
}
--------------------------------------------------
<1> The path to the local `.pem` file that contains the Certificate
Authority's certificate.
NOTE: Hosted {ess} simplifies security. This configuration step is not necessary for hosted Elasticsearch Service on Elastic Cloud.
{ess-leadin-short}


@ -54,9 +54,10 @@ section in your Logstash configuration, or a different one. Defaults to
If your {es} cluster is protected with basic authentication, these settings
provide the username and password that the Logstash instance uses to
authenticate for accessing the configuration data. The username you specify here
-should have the built-in `logstash_admin` role and the customized `logstash_writer` role, which provides access to system
-indices for managing configurations. Starting with Elasticsearch version 7.10.0, the
-`logstash_admin` role inherits the `manage_logstash_pipelines` cluster privilege for centralized pipeline management.
+should have the built-in `logstash_admin` and `logstash_system` roles.
+These roles provide access to system indices for managing configurations.
+NOTE: Starting with Elasticsearch version 7.10.0, the `logstash_admin` role inherits the `manage_logstash_pipelines` cluster privilege for centralized pipeline management.
If a user has created their own roles and granted them access to the .logstash index, those roles will continue to work in 7.x but will need to be updated for 8.0.
`xpack.management.elasticsearch.proxy`::
@ -143,8 +144,8 @@ If you're using {es} in {ecloud}, you can set your auth credentials here.
This setting is an alternative to both `xpack.management.elasticsearch.username`
and `xpack.management.elasticsearch.password`. If `cloud_auth` is configured,
those settings should not be used.
-The credentials you specify here should be for a user with the `logstash_admin` role, which
-provides access to system indices for managing configurations.
+The credentials you specify here should be for a user with the `logstash_admin` and `logstash_system` roles, which
+provide access to system indices for managing configurations.
`xpack.management.elasticsearch.api_key`::


@ -0,0 +1,44 @@
[[health-report-pipeline-flow-worker-utilization]]
=== Health Report Pipeline Flow: Worker Utilization
The Pipeline indicator has a `flow:worker_utilization` probe that is capable of producing one of several diagnoses about blockages in the pipeline.
A pipeline is considered "blocked" when its workers are fully-utilized, because if they are consistently spending 100% of their time processing events, they are unable to pick up new events from the queue.
This can cause back-pressure to cascade to upstream services, which can result in data loss or duplicate processing depending on upstream configuration.
The issue typically stems from one or more causes:
* a downstream resource being blocked,
* a plugin consuming more resources than expected, and/or
* insufficient resources being allocated to the pipeline.
To address the issue, observe the <<plugin-flow-rates>> from the <<node-stats-api>>, and identify which plugins have the highest `worker_utilization`.
This will tell you which plugins are consuming most of the pipeline's worker resources.
* If the offending plugin connects to a downstream service or another pipeline that is exerting back-pressure, the issue needs to be addressed in the downstream service or pipeline.
* If the offending plugin connects to a downstream service with high network latency, throughput for the pipeline may be improved by <<tuning-logstash-settings, allocating more worker resources to the pipeline>>.
* If the offending plugin is a computation-heavy filter such as `grok` or `kv`, its configuration may need to be tuned to eliminate wasted computation.
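The ranking step described above can be sketched in Ruby. The payload below is a hypothetical, hand-built sample: the exact field layout (`pipelines` -> `plugins` -> `filters`/`outputs` -> `flow.worker_utilization.last_1_minute`) and the plugin ids are assumptions for illustration, not output captured from a real node stats response.

```ruby
require 'json'

# Hypothetical node-stats payload; field names are assumed for illustration.
stats = JSON.parse(<<~JSON)
  {
    "pipelines": {
      "main": {
        "plugins": {
          "filters": [
            {"id": "grok-1",   "flow": {"worker_utilization": {"last_1_minute": 87.5}}},
            {"id": "mutate-1", "flow": {"worker_utilization": {"last_1_minute": 3.2}}}
          ],
          "outputs": [
            {"id": "es-1", "flow": {"worker_utilization": {"last_1_minute": 9.1}}}
          ]
        }
      }
    }
  }
JSON

# Collect every plugin with its utilization across all pipelines, busiest first.
plugins = stats["pipelines"].flat_map do |pipeline_id, pipeline|
  pipeline["plugins"].values_at("filters", "outputs").compact.flatten.map do |plugin|
    [pipeline_id, plugin["id"], plugin.dig("flow", "worker_utilization", "last_1_minute")]
  end
end
plugins.sort_by! { |(_, _, util)| -util }
plugins.each { |pipe, id, util| puts format("%-6s %-10s %5.1f%%", pipe, id, util) }
```

In this sample the `grok-1` filter surfaces at the top of the list, which is the plugin you would then investigate first.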
[[health-report-pipeline-flow-worker-utilization-diagnosis-blocked-5m]]
==== [[blocked-5m]]Blocked Pipeline (5 minutes)
A pipeline that has been completely blocked for five minutes or more represents a critical blockage to the flow of events through your pipeline that needs to be addressed immediately to avoid or limit data loss.
See above for troubleshooting steps.
[[health-report-pipeline-flow-worker-utilization-diagnosis-nearly-blocked-5m]]
==== [[nearly-blocked-5m]]Nearly Blocked Pipeline (5 minutes)
A pipeline that has been nearly blocked for five minutes or more may be creating intermittent blockage to the flow of events through your pipeline, which can result in the risk of data loss.
See above for troubleshooting steps.
[[health-report-pipeline-flow-worker-utilization-diagnosis-blocked-1m]]
==== [[blocked-1m]]Blocked Pipeline (1 minute)
A pipeline that has been completely blocked for one minute or more represents a high-risk or upcoming blockage to the flow of events through your pipeline that likely needs to be addressed soon to avoid or limit data loss.
See above for troubleshooting steps.
[[health-report-pipeline-flow-worker-utilization-diagnosis-nearly-blocked-1m]]
==== [[nearly-blocked-1m]]Nearly Blocked Pipeline (1 minute)
A pipeline that has been nearly blocked for one minute or more may be creating intermittent blockage to the flow of events through your pipeline, which can result in the risk of data loss.
See above for troubleshooting steps.


@ -0,0 +1,37 @@
[[health-report-pipeline-status]]
=== Health Report Pipeline Status
The Pipeline indicator has a `status` probe that is capable of producing one of several diagnoses about the pipeline's lifecycle, indicating whether the pipeline is currently running.
[[health-report-pipeline-status-diagnosis-loading]]
==== [[loading]]Loading Pipeline
A pipeline that is loading is not yet processing data, and is considered a temporarily-degraded pipeline state.
Some plugins perform actions or pre-validation that can delay the starting of the pipeline, such as when a plugin pre-establishes a connection to an external service before allowing the pipeline to start.
When these plugins take significant time to start up, the whole pipeline can remain in a loading state for an extended time.
If your pipeline does not come up in a reasonable amount of time, consider checking the Logstash logs to see if the plugin shows evidence of being caught in a retry loop.
[[health-report-pipeline-status-diagnosis-finished]]
==== [[finished]]Finished Pipeline
A Logstash pipeline whose input plugins have all completed will be shut down once events have finished processing.
Many plugins can be configured to run indefinitely, either by listening for new inbound events or by polling for events on a schedule.
A finished pipeline will not produce or process any more events until it is restarted, which will occur if the pipeline's definition is changed and pipeline reloads are enabled.
If you wish to keep your pipeline running, consider configuring its input to run on a schedule or otherwise listen for new events.
[[health-report-pipeline-status-diagnosis-terminated]]
==== [[terminated]]Terminated Pipeline
When a Logstash pipeline's filter or output plugins crash, the entire pipeline is terminated and intervention is required.
A terminated pipeline will not produce or process any more events until it is restarted, which will occur if the pipeline's definition is changed and pipeline reloads are enabled.
Check the logs to determine the cause of the crash, and report the issue to the plugin maintainers.
[[health-report-pipeline-status-diagnosis-unknown]]
==== [[unknown]]Unknown Pipeline
When a Logstash pipeline either cannot be created or has recently been deleted, the health report doesn't know enough to produce a meaningful status.
Check the logs to determine if the pipeline crashed during creation, and report the issue to the plugin maintainers.


@ -28,3 +28,5 @@ include::ts-logstash.asciidoc[]
include::ts-plugins-general.asciidoc[]
include::ts-plugins.asciidoc[]
include::ts-other-issues.asciidoc[]
include::health-pipeline-status.asciidoc[]
include::health-pipeline-flow-worker-utilization.asciidoc[]


@ -106,6 +106,76 @@ This issue affects some OpenJDK-derived JVM versions (Adoptium, OpenJDK, and Azu
-Djdk.io.File.enableADS=true
-----
[[ts-container-cgroup]]
===== Container exits with 'An unexpected error occurred!' message
{ls} running in a container may not start due to a https://bugs.openjdk.org/browse/JDK-8343191[bug in the JDK].
*Sample error*
[source,sh]
-----
[FATAL] 2024-11-11 11:11:11.465 [LogStash::Runner] runner - An unexpected error occurred! {:error=>#<Java::JavaLang::NullPointerException: >, :backtrace=>[
"java.util.Objects.requireNonNull(java/util/Objects.java:233)",
"sun.nio.fs.UnixFileSystem.getPath(sun/nio/fs/UnixFileSystem.java:296)",
"java.nio.file.Path.of(java/nio/file/Path.java:148)",
"java.nio.file.Paths.get(java/nio/file/Paths.java:69)",
"jdk.internal.platform.CgroupUtil.lambda$readStringValue$1(jdk/internal/platform/CgroupUtil.java:67)",
"java.security.AccessController.doPrivileged(java/security/AccessController.java:571)",
"jdk.internal.platform.CgroupUtil.readStringValue(jdk/internal/platform/CgroupUtil.java:69)",
"jdk.internal.platform.CgroupSubsystemController.getStringValue(jdk/internal/platform/CgroupSubsystemController.java:65)",
"jdk.internal.platform.cgroupv1.CgroupV1Subsystem.getCpuSetCpus(jdk/internal/platform/cgroupv1/CgroupV1Subsystem.java:275)",
"jdk.internal.platform.CgroupMetrics.getCpuSetCpus(jdk/internal/platform/CgroupMetrics.java:100)",
"com.sun.management.internal.OperatingSystemImpl.isCpuSetSameAsHostCpuSet(com/sun/management/internal/OperatingSystemImpl.java:277)",
"com.sun.management.internal.OperatingSystemImpl$ContainerCpuTicks.getContainerCpuLoad(com/sun/management/internal/OperatingSystemImpl.java:96)",
"com.sun.management.internal.OperatingSystemImpl.getProcessCpuLoad(com/sun/management/internal/OperatingSystemImpl.java:271)",
"org.logstash.instrument.monitors.ProcessMonitor$Report.<init>(org/logstash/instrument/monitors/ProcessMonitor.java:63)",
"org.logstash.instrument.monitors.ProcessMonitor.detect(org/logstash/instrument/monitors/ProcessMonitor.java:136)",
"org.logstash.instrument.reports.ProcessReport.generate(org/logstash/instrument/reports/ProcessReport.java:35)",
"jdk.internal.reflect.DirectMethodHandleAccessor.invoke(jdk/internal/reflect/DirectMethodHandleAccessor.java:103)",
"java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:580)",
"org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:300)",
"org.jruby.javasupport.JavaMethod.invokeStaticDirect(org/jruby/javasupport/JavaMethod.java:222)",
"RUBY.collect_process_metrics(/usr/share/logstash/logstash-core/lib/logstash/instrument/periodic_poller/jvm.rb:102)",
"RUBY.collect(/usr/share/logstash/logstash-core/lib/logstash/instrument/periodic_poller/jvm.rb:73)",
"RUBY.start(/usr/share/logstash/logstash-core/lib/logstash/instrument/periodic_poller/base.rb:72)",
"org.jruby.RubySymbol$SymbolProcBody.yieldSpecific(org/jruby/RubySymbol.java:1541)",
"org.jruby.RubySymbol$SymbolProcBody.doYield(org/jruby/RubySymbol.java:1534)",
"org.jruby.RubyArray.collectArray(org/jruby/RubyArray.java:2770)",
"org.jruby.RubyArray.map(org/jruby/RubyArray.java:2803)",
"org.jruby.RubyArray$INVOKER$i$0$0$map.call(org/jruby/RubyArray$INVOKER$i$0$0$map.gen)",
"RUBY.start(/usr/share/logstash/logstash-core/lib/logstash/instrument/periodic_pollers.rb:41)",
"RUBY.configure_metrics_collectors(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:477)",
"RUBY.initialize(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:88)",
"org.jruby.RubyClass.new(org/jruby/RubyClass.java:949)",
"org.jruby.RubyClass$INVOKER$i$newInstance.call(org/jruby/RubyClass$INVOKER$i$newInstance.gen)",
"RUBY.create_agent(/usr/share/logstash/logstash-core/lib/logstash/runner.rb:552)",
"RUBY.execute(/usr/share/logstash/logstash-core/lib/logstash/runner.rb:434)",
"RUBY.run(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/clamp-1.0.1/lib/clamp/command.rb:68)",
"RUBY.run(/usr/share/logstash/logstash-core/lib/logstash/runner.rb:293)",
"RUBY.run(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/clamp-1.0.1/lib/clamp/command.rb:133)",
"usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:89)",
"usr.share.logstash.lib.bootstrap.environment.run(usr/share/logstash/lib/bootstrap//usr/share/logstash/lib/bootstrap/environment.rb)",
"java.lang.invoke.MethodHandle.invokeWithArguments(java/lang/invoke/MethodHandle.java:733)",
"org.jruby.Ruby.runScript(org/jruby/Ruby.java:1245)",
"org.jruby.Ruby.runNormally(org/jruby/Ruby.java:1157)",
"org.jruby.Ruby.runFromMain(org/jruby/Ruby.java:983)",
"org.logstash.Logstash.run(org/logstash/Logstash.java:163)",
"org.logstash.Logstash.main(org/logstash/Logstash.java:73)"
]
}
[FATAL] 2024-11-11 11:11:11.516 [LogStash::Runner] Logstash - Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java: 921) ~[jruby.jar:?]
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java: 880) ~[jruby.jar:?]
at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb: 90) ~[?:?]
-----
This error can happen when cgroups v2 is not enabled, such as when running on a Red Hat version 8 operating system.
*Workaround*
Follow your operating system's instructions for enabling cgroups v2.
[[ts-pqs]]
==== Troubleshooting persistent queues
@ -265,4 +335,4 @@ adding to the field name a `_1` suffix:
"data":"{\"name\": [}"
}
}
-----
-----


@ -15,6 +15,9 @@
# specific language governing permissions and limitations
# under the License.
# work around https://github.com/jruby/jruby/issues/8579
require_relative './patches/jar_dependencies'
module LogStash
module Bundler
extend self
@ -264,6 +267,7 @@ module LogStash
elsif options[:update]
arguments << "update"
arguments << expand_logstash_mixin_dependencies(options[:update])
arguments << "--#{options[:level] || 'minor'}"
arguments << "--local" if options[:local]
arguments << "--conservative" if options[:conservative]
elsif options[:clean]


@ -21,7 +21,7 @@ def require_jar(*args)
return nil unless Jars.require?
result = Jars.require_jar(*args)
if result.is_a? String
-# JAR_DEBUG=1 will now show theses
+# JARS_VERBOSE=true will show these
Jars.debug { "--- jar coordinate #{args[0..-2].join(':')} already loaded with version #{result} - omit version #{args[-1]}" }
Jars.debug { " try to load from #{caller.join("\n\t")}" }
return false
@ -29,3 +29,29 @@ def require_jar(*args)
Jars.debug { " register #{args.inspect} - #{result == true}" }
result
end
# work around https://github.com/jruby/jruby/issues/8579
# the ruby-maven 3.9.3 + maven-libs 3.9.9 gems output unnecessary text that we need to trim during `load_from_maven`:
# remove everything from " --" until the end of the line;
# the `[0...-5]` removes the color-changing characters that precede " --" at the end of the string
require 'jars/installer'
class ::Jars::Installer
def self.load_from_maven(file)
Jars.debug { "[load_from_maven] called with arguments: #{file.inspect}" }
result = []
::File.read(file).each_line do |line|
if line.match?(/ --/)
Jars.debug { "[load_from_maven] line: #{line.inspect}" }
fixed_line = line.strip.gsub(/ --.+?$/, "")[0...-5]
Jars.debug { "[load_from_maven] fixed_line: #{fixed_line.inspect}" }
dep = ::Jars::Installer::Dependency.new(fixed_line)
else
dep = ::Jars::Installer::Dependency.new(line)
end
result << dep if dep && dep.scope == :runtime
end
Jars.debug { "[load_from_maven] returned: #{result.inspect}" }
result
end
end
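The core of the cleanup the patch above performs can be seen in isolation. The dependency coordinate below is a made-up sample, and this sketch applies only the `gsub` step; the real patch additionally trims the trailing color-escape characters with `[0...-5]`, which is omitted here because the sample line contains none.

```ruby
# Sample of the polluted output ruby-maven can emit during load_from_maven:
# a valid coordinate followed by an unwanted " -- <module info>" suffix.
line = "org.apache.kafka:kafka-clients:jar:3.4.1:runtime -- module kafka.clients"

# Strip everything from " --" to the end of the line, as the patch does,
# leaving a clean coordinate for Jars::Installer::Dependency to parse.
fixed = line.strip.gsub(/ --.+?$/, "")
puts fixed  # => "org.apache.kafka:kafka-clients:jar:3.4.1:runtime"
```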


@ -39,7 +39,7 @@ module LogStash module PluginManager module PackInstaller
class GemInformation
EXTENSION = ".gem"
SPLIT_CHAR = "-"
-JAVA_PLATFORM_RE = /-java/
+JAVA_PLATFORM_RE = /-java$/
DEPENDENCIES_DIR_RE = /dependencies/
attr_reader :file, :name, :version, :platform


@ -55,8 +55,8 @@ def apply_env_proxy_settings(settings)
scheme = settings[:protocol].downcase
java.lang.System.setProperty("#{scheme}.proxyHost", settings[:host])
java.lang.System.setProperty("#{scheme}.proxyPort", settings[:port].to_s)
-java.lang.System.setProperty("#{scheme}.proxyUsername", settings[:username].to_s)
-java.lang.System.setProperty("#{scheme}.proxyPassword", settings[:password].to_s)
+java.lang.System.setProperty("#{scheme}.proxyUser", settings[:username].to_s)
+java.lang.System.setProperty("#{scheme}.proxyPass", settings[:password].to_s)
end
def extract_proxy_values_from_uri(proxy_uri)


@ -24,7 +24,13 @@ class LogStash::PluginManager::Update < LogStash::PluginManager::Command
# These are local gems used by LS and needs to be filtered out of other plugin gems
NON_PLUGIN_LOCAL_GEMS = ["logstash-core", "logstash-core-plugin-api"]
SUPPORTED_LEVELS = %w(major minor patch)
parameter "[PLUGIN] ...", "Plugin name(s) to upgrade to latest version", :attribute_name => :plugins_arg
option "--level", "LEVEL", "restrict updates to given semantic version level (one of #{SUPPORTED_LEVELS})", :default => "minor" do |given_level|
fail("unsupported level `#{given_level}`; expected one of #{SUPPORTED_LEVELS}") unless SUPPORTED_LEVELS.include?(given_level)
given_level
end
option "--[no-]verify", :flag, "verify plugin validity before installation", :default => true
option "--local", :flag, "force local-only plugin update. see bin/logstash-plugin package|unpack", :default => false
option "--[no-]conservative", :flag, "do a conservative update of plugin's dependencies", :default => true
@ -82,6 +88,7 @@ class LogStash::PluginManager::Update < LogStash::PluginManager::Command
# Bundler cannot update and clean gems in one operation so we have to call the CLI twice.
Bundler.settings.temporary(:frozen => false) do # Unfreeze the bundle when updating gems
output = LogStash::Bundler.invoke! update: plugins,
level: level,
rubygems_source: gemfile.gemset.sources,
local: local?,
conservative: conservative?


@ -12,7 +12,13 @@ if File.exist?(project_versions_yaml_path)
# we ignore the copy in git and we overwrite an existing file
# each time we build the logstash-core gem
original_lines = IO.readlines(project_versions_yaml_path)
-original_lines << ""
+# introduce the version qualifier (e.g. beta1, rc1) into the copied yml so it's displayed by Logstash
+unless ENV['VERSION_QUALIFIER'].to_s.strip.empty?
+  logstash_version_line = original_lines.find {|line| line.match(/^logstash:/) }
+  logstash_version_line.chomp!
+  logstash_version_line << "-#{ENV['VERSION_QUALIFIER']}\n"
+end
+original_lines << "\n"
original_lines << "# This is a copy the project level versions.yml into this gem's root and it is created when the gemspec is evaluated."
gem_versions_yaml_path = File.expand_path("./versions-gem-copy.yml", File.dirname(__FILE__))
File.open(gem_versions_yaml_path, 'w') do |new_file|
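The qualifier-injection logic in the gemspec change above mutates the matched line in place, so the containing array sees the update. A minimal sketch, using made-up version numbers and a hard-coded qualifier in place of `ENV['VERSION_QUALIFIER']`:

```ruby
# Stand-in for the lines read from versions.yml; versions are hypothetical.
original_lines = ["---\n", "logstash: 8.17.0\n", "logstash-core: 8.17.0\n"]
qualifier = "beta1"  # stands in for ENV['VERSION_QUALIFIER']

unless qualifier.to_s.strip.empty?
  # /^logstash:/ matches "logstash:" but not "logstash-core:", so only the
  # top-level version line is rewritten. chomp!/<< mutate the string in place.
  logstash_version_line = original_lines.find { |line| line.match(/^logstash:/) }
  logstash_version_line.chomp!
  logstash_version_line << "-#{qualifier}\n"
end

puts original_lines[1]  # => "logstash: 8.17.0-beta1"
```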


@ -50,7 +50,7 @@ jar {
}
ext {
-jmh = 1.22
+jmh = 1.37
}
dependencies {
@ -79,17 +79,15 @@ tasks.register("jmh", JavaExec) {
dependsOn=[':logstash-core-benchmarks:clean', ':logstash-core-benchmarks:shadowJar']
-main = "-jar"
+mainClass = "-jar"
def include = project.properties.get('include', '')
doFirst {
args = [
"-Djava.io.tmpdir=${buildDir.absolutePath}",
"-XX:+UseConcMarkSweepGC", "-XX:CMSInitiatingOccupancyFraction=75",
"-XX:+UseCMSInitiatingOccupancyOnly", "-XX:+DisableExplicitGC",
"-XX:+HeapDumpOnOutOfMemoryError", "-Xms2g", "-Xmx2g",
shadowJar.archivePath,
shadowJar.archiveFile.get().asFile,
include
]
}


@ -0,0 +1,83 @@
package org.logstash.benchmark;
import org.jruby.RubyArray;
import org.jruby.RubyString;
import org.jruby.runtime.ThreadContext;
import org.jruby.runtime.builtin.IRubyObject;
import org.logstash.RubyUtil;
import org.logstash.common.BufferedTokenizerExt;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.Blackhole;
import java.util.concurrent.TimeUnit;
import static org.logstash.RubyUtil.RUBY;
@Warmup(iterations = 3, time = 100, timeUnit = TimeUnit.MILLISECONDS)
@Measurement(iterations = 10, time = 100, timeUnit = TimeUnit.MILLISECONDS)
@Fork(1)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
public class BufferedTokenizerExtBenchmark {
private BufferedTokenizerExt sut;
private ThreadContext context;
private RubyString singleTokenPerFragment;
private RubyString multipleTokensPerFragment;
private RubyString multipleTokensSpreadMultipleFragments_1;
private RubyString multipleTokensSpreadMultipleFragments_2;
private RubyString multipleTokensSpreadMultipleFragments_3;
@Setup(Level.Invocation)
public void setUp() {
sut = new BufferedTokenizerExt(RubyUtil.RUBY, RubyUtil.BUFFERED_TOKENIZER);
context = RUBY.getCurrentContext();
IRubyObject[] args = {};
sut.init(context, args);
singleTokenPerFragment = RubyUtil.RUBY.newString("a".repeat(512) + "\n");
multipleTokensPerFragment = RubyUtil.RUBY.newString("a".repeat(512) + "\n" + "b".repeat(512) + "\n" + "c".repeat(512) + "\n");
multipleTokensSpreadMultipleFragments_1 = RubyUtil.RUBY.newString("a".repeat(512) + "\n" + "b".repeat(512) + "\n" + "c".repeat(256));
multipleTokensSpreadMultipleFragments_2 = RubyUtil.RUBY.newString("c".repeat(256) + "\n" + "d".repeat(512) + "\n" + "e".repeat(256));
multipleTokensSpreadMultipleFragments_3 = RubyUtil.RUBY.newString("f".repeat(256) + "\n" + "g".repeat(512) + "\n" + "h".repeat(512) + "\n");
}
@SuppressWarnings("unchecked")
@Benchmark
public final void onlyOneTokenPerFragment(Blackhole blackhole) {
RubyArray<RubyString> tokens = (RubyArray<RubyString>) sut.extract(context, singleTokenPerFragment);
blackhole.consume(tokens);
}
@SuppressWarnings("unchecked")
@Benchmark
public final void multipleTokenPerFragment(Blackhole blackhole) {
RubyArray<RubyString> tokens = (RubyArray<RubyString>) sut.extract(context, multipleTokensPerFragment);
blackhole.consume(tokens);
}
@SuppressWarnings("unchecked")
@Benchmark
public final void multipleTokensCrossingMultipleFragments(Blackhole blackhole) {
RubyArray<RubyString> tokens = (RubyArray<RubyString>) sut.extract(context, multipleTokensSpreadMultipleFragments_1);
blackhole.consume(tokens);
tokens = (RubyArray<RubyString>) sut.extract(context, multipleTokensSpreadMultipleFragments_2);
blackhole.consume(tokens);
tokens = (RubyArray<RubyString>) sut.extract(context, multipleTokensSpreadMultipleFragments_3);
blackhole.consume(tokens);
}
}
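The benchmark above exercises three boundary cases: one token per fragment, several tokens per fragment, and tokens that straddle fragment boundaries. As a rough Ruby analogue of the buffering behavior under test (a hypothetical simplification, not the actual Java `BufferedTokenizerExt` implementation):

```ruby
# Minimal sketch: accumulate fragments and emit only complete,
# delimiter-terminated tokens; the unterminated tail stays buffered.
class BufferedTokenizer
  def initialize(delimiter = "\n")
    @delimiter = delimiter
    @buffer = +""
  end

  def extract(fragment)
    @buffer << fragment
    # split with a negative limit keeps a trailing empty field, so the
    # popped tail is "" when the fragment ended exactly on a delimiter
    tokens = @buffer.split(@delimiter, -1)
    @buffer = tokens.pop
    tokens
  end
end

t = BufferedTokenizer.new
t.extract("aaa\nbbb\nccc")  # => ["aaa", "bbb"]  ("ccc" remains buffered)
t.extract("ccc\nddd\n")     # => ["cccccc", "ddd"]
```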


@ -57,6 +57,7 @@ def versionMap = (Map) (new Yaml()).load(new File("$projectDir/../versions.yml")
description = """Logstash Core Java"""
String logstashCoreVersion = versionMap['logstash-core']
String jacksonVersion = versionMap['jackson']
String jacksonDatabindVersion = versionMap['jackson-databind']
String jrubyVersion = versionMap['jruby']['version']
@ -123,6 +124,9 @@ tasks.register("javaTests", Test) {
exclude '/org/logstash/plugins/factory/PluginFactoryExtTest.class'
exclude '/org/logstash/execution/ObservedExecutionTest.class'
// 10GB is needed by the BufferedTokenizerExtWithSizeLimitTest.givenTooLongInputExtractDoesntOverflow test
maxHeapSize = "10g"
jacoco {
enabled = true
destinationFile = layout.buildDirectory.file('jacoco/test.exec').get().asFile
@ -183,6 +187,23 @@ artifacts {
}
}
task generateVersionInfoResources(type: DefaultTask) {
ext.outDir = layout.buildDirectory.dir("generated-resources/version-info").get()
inputs.property("version-info:logstash-core", logstashCoreVersion)
outputs.dir(ext.outDir)
doLast {
mkdir outDir;
def resourceFile = outDir.file('version-info.properties').asFile
resourceFile.text = "logstash-core: ${logstashCoreVersion}"
}
}
sourceSets {
main { output.dir(generateVersionInfoResources.outputs.files) }
}
processResources.dependsOn generateVersionInfoResources
configurations {
provided
}


@ -26,6 +26,7 @@ require "logstash/pipeline_action"
require "logstash/state_resolver"
require "logstash/pipelines_registry"
require "logstash/persisted_queue_config_validator"
require "logstash/pipeline_resource_usage_validator"
require "stud/trap"
require "uri"
require "socket"
@ -40,6 +41,8 @@ class LogStash::Agent
attr_reader :metric, :name, :settings, :dispatcher, :ephemeral_id, :pipeline_bus
attr_accessor :logger
attr_reader :health_observer
# initialize method for LogStash::Agent
# @param params [Hash] potential parameters are:
# :name [String] - identifier for the agent
@ -51,6 +54,9 @@ class LogStash::Agent
@auto_reload = setting("config.reload.automatic")
@ephemeral_id = SecureRandom.uuid
java_import("org.logstash.health.HealthObserver")
@health_observer ||= HealthObserver.new
# Mutex to synchronize in the exclusive method
# Initial usage for the Ruby pipeline initialization which is not thread safe
@webserver_control_lock = Mutex.new
@ -97,6 +103,7 @@ class LogStash::Agent
initialize_geoip_database_metrics(metric)
@pq_config_validator = LogStash::PersistedQueueConfigValidator.new
@pipeline_resource_usage_validator = LogStash::PipelineResourceUsageValidator.new(Java::java.lang.Runtime.getRuntime().maxMemory)
@dispatcher = LogStash::EventDispatcher.new(self)
LogStash::PLUGIN_REGISTRY.hooks.register_emitter(self.class, dispatcher)
@ -151,6 +158,31 @@ class LogStash::Agent
transition_to_stopped
end
include org.logstash.health.PipelineIndicator::PipelineDetailsProvider
def pipeline_details(pipeline_id)
logger.trace("fetching pipeline details for `#{pipeline_id}`")
pipeline_id = pipeline_id.to_sym
java_import org.logstash.health.PipelineIndicator
pipeline_state = @pipelines_registry.states.get(pipeline_id)
if pipeline_state.nil?
return PipelineIndicator::Details.new(PipelineIndicator::Status::UNKNOWN)
end
pipeline_state.synchronize do |sync_state|
status = case
when sync_state.loading? then PipelineIndicator::Status::LOADING
when sync_state.crashed? then PipelineIndicator::Status::TERMINATED
when sync_state.running? then PipelineIndicator::Status::RUNNING
when sync_state.finished? then PipelineIndicator::Status::FINISHED
else PipelineIndicator::Status::UNKNOWN
end
PipelineIndicator::Details.new(status, sync_state.pipeline&.to_java.collectWorkerUtilizationFlowObservation)
end
end
def auto_reload?
@auto_reload
end
@ -191,13 +223,15 @@ class LogStash::Agent
converge_result = resolve_actions_and_converge_state(results.response)
update_metrics(converge_result)
logger.info(
"Pipelines running",
:count => running_pipelines.size,
:running_pipelines => running_pipelines.keys,
:non_running_pipelines => non_running_pipelines.keys
) if converge_result.success? && converge_result.total > 0
if converge_result.success? && converge_result.total > 0
logger.info(
"Pipelines running",
:count => running_pipelines.size,
:running_pipelines => running_pipelines.keys,
:non_running_pipelines => non_running_pipelines.keys
)
@pipeline_resource_usage_validator.check(@pipelines_registry)
end
dispatch_events(converge_result)
@ -395,7 +429,13 @@ class LogStash::Agent
)
end
rescue SystemExit, Exception => e
logger.error("Failed to execute action", :action => action, :exception => e.class.name, :message => e.message, :backtrace => e.backtrace)
error_details = { :action => action, :exception => e.class.name, :message => e.message, :backtrace => e.backtrace }
cause = e.cause
if cause && e != cause
error_details[:cause] = { :exception => cause.class, :message => cause.message }
error_details[:cause][:backtrace] = cause.backtrace if cause.backtrace
end
logger.error('Failed to execute action', error_details)
converge_result.add(action, LogStash::ConvergeResult::FailedAction.from_exception(e))
end
end


@ -18,6 +18,7 @@
require "logstash/api/service"
require "logstash/api/commands/system/basicinfo_command"
require "logstash/api/commands/system/plugins_command"
require "logstash/api/commands/health_report"
require "logstash/api/commands/stats"
require "logstash/api/commands/node"
require "logstash/api/commands/default_metadata"
@ -34,6 +35,7 @@ module LogStash
:plugins_command => ::LogStash::Api::Commands::System::Plugins,
:stats => ::LogStash::Api::Commands::Stats,
:node => ::LogStash::Api::Commands::Node,
:health_report => ::LogStash::Api::Commands::HealthReport,
:default_metadata => ::LogStash::Api::Commands::DefaultMetadata
}
end


@ -22,20 +22,14 @@ module LogStash
module Commands
class DefaultMetadata < Commands::Base
def all
res = {:host => host,
:version => version,
:http_address => http_address,
:id => service.agent.id,
:name => service.agent.name,
:ephemeral_id => service.agent.ephemeral_id,
:status => "green", # This is hard-coded to mirror x-pack behavior
:snapshot => ::BUILD_INFO["build_snapshot"],
res = base_info.merge({
:status => service.agent.health_observer.status,
:pipeline => {
:workers => LogStash::SETTINGS.get("pipeline.workers"),
:batch_size => LogStash::SETTINGS.get("pipeline.batch.size"),
:batch_delay => LogStash::SETTINGS.get("pipeline.batch.delay"),
},
}
})
monitoring = {}
if enabled_xpack_monitoring?
monitoring = monitoring.merge({
@ -49,12 +43,24 @@ module LogStash
res.merge(monitoring.empty? ? {} : {:monitoring => monitoring})
end
def base_info
{
:host => host,
:version => version,
:http_address => http_address,
:id => service.agent.id,
:name => service.agent.name,
:ephemeral_id => service.agent.ephemeral_id,
:snapshot => ::BUILD_INFO["build_snapshot"],
}
end
def host
@@host ||= Socket.gethostname
end
def version
LOGSTASH_CORE_VERSION
LOGSTASH_VERSION
end
def http_address


@ -0,0 +1,31 @@
# Licensed to Elasticsearch B.V. under one or more contributor
# license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Elasticsearch B.V. licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
require "logstash/api/commands/base"
module LogStash
module Api
module Commands
class HealthReport < Commands::Base
def all(selected_fields = [])
service.agent.health_observer.report
end
end
end
end
end


@ -183,7 +183,12 @@ module LogStash
:outputs => plugin_stats(stats, :outputs)
},
:reloads => stats[:reloads],
:queue => stats[:queue]
:queue => stats[:queue],
:pipeline => {
:workers => stats.dig(:config, :workers),
:batch_size => stats.dig(:config, :batch_size),
:batch_delay => stats.dig(:config, :batch_delay),
}
}
ret[:dead_letter_queue] = stats[:dlq] if stats.include?(:dlq)


@ -0,0 +1,49 @@
# Licensed to Elasticsearch B.V. under one or more contributor
# license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Elasticsearch B.V. licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
module LogStash
module Api
module Modules
class HealthReport < ::LogStash::Api::Modules::Base
get "/" do
payload = health_report.all.then do |health_report_pojo|
# The app_helper needs a ruby-hash.
# Manually creating a map of properties works around the issue.
base_metadata.merge({
status: health_report_pojo.status,
symptom: health_report_pojo.symptom,
indicators: health_report_pojo.indicators,
})
end
respond_with(payload, {exclude_default_metadata: true})
end
private
def health_report
@health_report ||= factory.build(:health_report)
end
def base_metadata
@factory.build(:default_metadata).base_info
end
end
end
end
end


@ -18,6 +18,7 @@
require "rack"
require "sinatra/base"
require "logstash/api/modules/base"
require "logstash/api/modules/health_report"
require "logstash/api/modules/node"
require "logstash/api/modules/node_stats"
require "logstash/api/modules/plugins"
@ -123,6 +124,7 @@ module LogStash
def self.rack_namespaces(agent)
{
"/_health_report" => LogStash::Api::Modules::HealthReport,
"/_node" => LogStash::Api::Modules::Node,
"/_stats" => LogStash::Api::Modules::Stats,
"/_node/stats" => LogStash::Api::Modules::NodeStats,


@ -67,6 +67,9 @@ module LogStash module Config
raise LogStash::ConfigLoadingError, I18n.t("logstash.modules.configuration.modules-unavailable", **i18n_opts)
end
specified_and_available_names
.each { |mn| deprecation_logger.deprecated("The #{mn} module has been deprecated and will be removed in version 9.") }
specified_and_available_names.each do |module_name|
connect_fail_args = {}
begin


@ -16,7 +16,7 @@
# under the License.
require "elasticsearch"
require "elasticsearch/transport/transport/http/manticore"
require "elastic/transport/transport/http/manticore"
require 'logstash/util/manticore_ssl_config_helper'
require 'logstash/util/password'
@ -24,7 +24,7 @@ module LogStash class ElasticsearchClient
include LogStash::Util::Loggable
class Response
# duplicated here from Elasticsearch::Transport::Transport::Response
# duplicated here from Elastic::Transport::Transport::Response
# to create a normalised response across different client IMPL
attr_reader :status, :body, :headers
@ -65,8 +65,13 @@ module LogStash class ElasticsearchClient
def can_connect?
begin
head(SecureRandom.hex(32).prepend('_'))
rescue Elasticsearch::Transport::Transport::Errors::BadRequest
rescue Elastic::Transport::Transport::Errors::BadRequest
true
rescue Elastic::Transport::Transport::Errors::Unauthorized
true
rescue Exception => e
return true if e.message.include?('Connection refused')
raise e
rescue Manticore::SocketException
false
end
@ -116,7 +121,7 @@ module LogStash class ElasticsearchClient
def client_args
{
:transport_class => Elasticsearch::Transport::Transport::HTTP::Manticore,
:transport_class => Elastic::Transport::Transport::HTTP::Manticore,
:hosts => [*unpack_hosts],
# :logger => @logger, # silence the client logging
}


@ -35,10 +35,10 @@ module LogStash
[
Setting::Boolean.new("allow_superuser", true),
Setting::String.new("node.name", Socket.gethostname),
Setting::NullableString.new("path.config", nil, false),
Setting::SettingString.new("node.name", Socket.gethostname),
Setting::SettingNullableString.new("path.config", nil, false),
Setting::WritableDirectory.new("path.data", ::File.join(LogStash::Environment::LOGSTASH_HOME, "data")),
Setting::NullableString.new("config.string", nil, false),
Setting::SettingNullableString.new("config.string", nil, false),
Setting::Modules.new("modules.cli", LogStash::Util::ModulesSettingArray, []),
Setting::Modules.new("modules", LogStash::Util::ModulesSettingArray, []),
Setting.new("modules_list", Array, []),
@ -50,10 +50,10 @@ module LogStash
Setting::Boolean.new("config.reload.automatic", false),
Setting::TimeValue.new("config.reload.interval", "3s"), # in seconds
Setting::Boolean.new("config.support_escapes", false),
Setting::String.new("config.field_reference.escape_style", "none", true, %w(none percent ampersand)),
Setting::String.new("event_api.tags.illegal", "rename", true, %w(rename warn)),
Setting::SettingString.new("config.field_reference.escape_style", "none", true, %w(none percent ampersand)),
Setting::SettingString.new("event_api.tags.illegal", "rename", true, %w(rename warn)),
Setting::Boolean.new("metric.collect", true),
Setting::String.new("pipeline.id", "main"),
Setting::SettingString.new("pipeline.id", "main"),
Setting::Boolean.new("pipeline.system", false),
Setting::PositiveInteger.new("pipeline.workers", LogStash::Config::CpuCoreStrategy.maximum),
Setting::PositiveInteger.new("pipeline.batch.size", 125),
@ -65,32 +65,32 @@ module LogStash
Setting::CoercibleString.new("pipeline.ordered", "auto", true, ["auto", "true", "false"]),
Setting::CoercibleString.new("pipeline.ecs_compatibility", "v8", true, %w(disabled v1 v8)),
Setting.new("path.plugins", Array, []),
Setting::NullableString.new("interactive", nil, false),
Setting::SettingNullableString.new("interactive", nil, false),
Setting::Boolean.new("config.debug", false),
Setting::String.new("log.level", "info", true, ["fatal", "error", "warn", "debug", "info", "trace"]),
Setting::SettingString.new("log.level", "info", true, ["fatal", "error", "warn", "debug", "info", "trace"]),
Setting::Boolean.new("version", false),
Setting::Boolean.new("help", false),
Setting::Boolean.new("enable-local-plugin-development", false),
Setting::String.new("log.format", "plain", true, ["json", "plain"]),
Setting::SettingString.new("log.format", "plain", true, ["json", "plain"]),
Setting::Boolean.new("log.format.json.fix_duplicate_message_fields", false),
Setting::Boolean.new("api.enabled", true).with_deprecated_alias("http.enabled"),
Setting::String.new("api.http.host", "127.0.0.1").with_deprecated_alias("http.host"),
Setting::PortRange.new("api.http.port", 9600..9700).with_deprecated_alias("http.port"),
Setting::String.new("api.environment", "production").with_deprecated_alias("http.environment"),
Setting::String.new("api.auth.type", "none", true, %w(none basic)),
Setting::String.new("api.auth.basic.username", nil, false).nullable,
Setting::Boolean.new("api.enabled", true).with_deprecated_alias("http.enabled", "9"),
Setting::SettingString.new("api.http.host", "127.0.0.1").with_deprecated_alias("http.host", "9"),
Setting::PortRange.new("api.http.port", 9600..9700).with_deprecated_alias("http.port", "9"),
Setting::SettingString.new("api.environment", "production").with_deprecated_alias("http.environment", "9"),
Setting::SettingString.new("api.auth.type", "none", true, %w(none basic)),
Setting::SettingString.new("api.auth.basic.username", nil, false).nullable,
Setting::Password.new("api.auth.basic.password", nil, false).nullable,
Setting::String.new("api.auth.basic.password_policy.mode", "WARN", true, %w[WARN ERROR]),
Setting::SettingString.new("api.auth.basic.password_policy.mode", "WARN", true, %w[WARN ERROR]),
Setting::Numeric.new("api.auth.basic.password_policy.length.minimum", 8),
Setting::String.new("api.auth.basic.password_policy.include.upper", "REQUIRED", true, %w[REQUIRED OPTIONAL]),
Setting::String.new("api.auth.basic.password_policy.include.lower", "REQUIRED", true, %w[REQUIRED OPTIONAL]),
Setting::String.new("api.auth.basic.password_policy.include.digit", "REQUIRED", true, %w[REQUIRED OPTIONAL]),
Setting::String.new("api.auth.basic.password_policy.include.symbol", "OPTIONAL", true, %w[REQUIRED OPTIONAL]),
Setting::SettingString.new("api.auth.basic.password_policy.include.upper", "REQUIRED", true, %w[REQUIRED OPTIONAL]),
Setting::SettingString.new("api.auth.basic.password_policy.include.lower", "REQUIRED", true, %w[REQUIRED OPTIONAL]),
Setting::SettingString.new("api.auth.basic.password_policy.include.digit", "REQUIRED", true, %w[REQUIRED OPTIONAL]),
Setting::SettingString.new("api.auth.basic.password_policy.include.symbol", "OPTIONAL", true, %w[REQUIRED OPTIONAL]),
Setting::Boolean.new("api.ssl.enabled", false),
Setting::ExistingFilePath.new("api.ssl.keystore.path", nil, false).nullable,
Setting::Password.new("api.ssl.keystore.password", nil, false).nullable,
Setting::StringArray.new("api.ssl.supported_protocols", nil, true, %w[TLSv1 TLSv1.1 TLSv1.2 TLSv1.3]),
Setting::String.new("queue.type", "memory", true, ["persisted", "memory"]),
Setting::SettingString.new("queue.type", "memory", true, ["persisted", "memory"]),
Setting::Boolean.new("queue.drain", false),
Setting::Bytes.new("queue.page_capacity", "64mb"),
Setting::Bytes.new("queue.max_bytes", "1024mb"),
@ -102,16 +102,16 @@ module LogStash
Setting::Boolean.new("dead_letter_queue.enable", false),
Setting::Bytes.new("dead_letter_queue.max_bytes", "1024mb"),
Setting::Numeric.new("dead_letter_queue.flush_interval", 5000),
Setting::String.new("dead_letter_queue.storage_policy", "drop_newer", true, ["drop_newer", "drop_older"]),
Setting::NullableString.new("dead_letter_queue.retain.age"), # example 5d
Setting::SettingString.new("dead_letter_queue.storage_policy", "drop_newer", true, ["drop_newer", "drop_older"]),
Setting::SettingNullableString.new("dead_letter_queue.retain.age"), # example 5d
Setting::TimeValue.new("slowlog.threshold.warn", "-1"),
Setting::TimeValue.new("slowlog.threshold.info", "-1"),
Setting::TimeValue.new("slowlog.threshold.debug", "-1"),
Setting::TimeValue.new("slowlog.threshold.trace", "-1"),
Setting::String.new("keystore.classname", "org.logstash.secret.store.backend.JavaKeyStore"),
Setting::String.new("keystore.file", ::File.join(::File.join(LogStash::Environment::LOGSTASH_HOME, "config"), "logstash.keystore"), false), # will be populated on
Setting::NullableString.new("monitoring.cluster_uuid"),
Setting::String.new("pipeline.buffer.type", "direct", true, ["direct", "heap"])
Setting::SettingString.new("keystore.classname", "org.logstash.secret.store.backend.JavaKeyStore"),
Setting::SettingString.new("keystore.file", ::File.join(::File.join(LogStash::Environment::LOGSTASH_HOME, "config"), "logstash.keystore"), false), # will be populated on
Setting::SettingNullableString.new("monitoring.cluster_uuid"),
Setting::SettingString.new("pipeline.buffer.type", nil, false, ["direct", "heap"])
# post_process
].each {|setting| SETTINGS.register(setting) }


@ -39,7 +39,6 @@ module LogStash; class JavaPipeline < AbstractPipeline
:started_at,
:thread
MAX_INFLIGHT_WARN_THRESHOLD = 10_000
SECOND = 1
MEMORY = "memory".freeze
@ -65,6 +64,7 @@ module LogStash; class JavaPipeline < AbstractPipeline
@flushing = java.util.concurrent.atomic.AtomicBoolean.new(false)
@flushRequested = java.util.concurrent.atomic.AtomicBoolean.new(false)
@shutdownRequested = java.util.concurrent.atomic.AtomicBoolean.new(false)
@crash_detected = Concurrent::AtomicBoolean.new(false)
@outputs_registered = Concurrent::AtomicBoolean.new(false)
# @finished_execution signals that the pipeline thread has finished its execution
@ -87,6 +87,10 @@ module LogStash; class JavaPipeline < AbstractPipeline
@finished_execution.true?
end
def finished_run?
@finished_run.true?
end
def ready?
@ready.value
end
@ -229,6 +233,10 @@ module LogStash; class JavaPipeline < AbstractPipeline
@running.false?
end
def crashed?
@crash_detected.true?
end
# register_plugins calls #register_plugin on the plugins list and upon exception will call Plugin#do_close on all registered plugins
# @param plugins [Array[Plugin]] the list of plugins to register
def register_plugins(plugins)
@ -275,10 +283,6 @@ module LogStash; class JavaPipeline < AbstractPipeline
"pipeline.sources" => pipeline_source_details)
@logger.info("Starting pipeline", pipeline_log_params)
if max_inflight > MAX_INFLIGHT_WARN_THRESHOLD
@logger.warn("CAUTION: Recommended inflight events max exceeded! Logstash will run with up to #{max_inflight} events in memory in your current configuration. If your message sizes are large this may cause instability with the default heap size. Please consider setting a non-standard heap size, changing the batch size (currently #{batch_size}), or changing the number of pipeline workers (currently #{pipeline_workers})", default_logging_keys)
end
filter_queue_client.set_batch_dimensions(batch_size, batch_delay)
# First launch WorkerLoop initialization in separate threads which concurrently
@ -305,6 +309,7 @@ module LogStash; class JavaPipeline < AbstractPipeline
rescue => e
# WorkerLoop.run() catches all Java Exception class and re-throws as IllegalStateException with the
# original exception as the cause
@crash_detected.make_true
@logger.error(
"Pipeline worker error, the pipeline will be stopped",
default_logging_keys(:error => e.cause.message, :exception => e.cause.class, :backtrace => e.cause.backtrace)
@ -319,6 +324,7 @@ module LogStash; class JavaPipeline < AbstractPipeline
begin
start_inputs
rescue => e
@crash_detected.make_true
# if there is any exception in starting inputs, make sure we shutdown workers.
# exception will already by logged in start_inputs
shutdown_workers
@ -628,7 +634,7 @@ module LogStash; class JavaPipeline < AbstractPipeline
case settings.get("pipeline.ordered")
when "auto"
if settings.set?("pipeline.workers") && settings.get("pipeline.workers") == 1
@logger.warn("'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary")
@logger.warn("'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary") unless settings.get("pipeline.system")
return true
end
when "true"


@ -69,13 +69,13 @@ module LogStash module Modules class LogStashConfig
# validate the values and replace them in the template.
case default
when String
get_setting(LogStash::Setting::NullableString.new(name, default.to_s))
get_setting(LogStash::Setting::SettingNullableString.new(name, default.to_s))
when Numeric
get_setting(LogStash::Setting::Numeric.new(name, default))
when true, false
get_setting(LogStash::Setting::Boolean.new(name, default))
else
get_setting(LogStash::Setting::NullableString.new(name, default.to_s))
get_setting(LogStash::Setting::SettingNullableString.new(name, default.to_s))
end
end


@ -79,8 +79,13 @@ module Clamp
new_flag = opts[:new_flag]
new_value = opts.fetch(:new_value, value)
passthrough = opts.fetch(:passthrough, false)
obsoleted_version = opts[:obsoleted_version]
LogStash::DeprecationMessage.instance << "DEPRECATION WARNING: The flag #{option.switches} has been deprecated, please use \"--#{new_flag}=#{new_value}\" instead."
dmsg = "DEPRECATION WARNING: The flag #{option.switches} has been deprecated"
dmsg += obsoleted_version.nil? ? " and may be removed in a future release" : " and will be removed in version #{obsoleted_version}"
dmsg += new_flag.nil? ? ".": ", please use \"--#{new_flag}=#{new_value}\" instead."
LogStash::DeprecationMessage.instance << dmsg
if passthrough
LogStash::SETTINGS.set(option.attribute_name, value)


@ -46,13 +46,21 @@ module LogStash module PipelineAction
# The execute assume that the thread safety access of the pipeline
# is managed by the caller.
def execute(agent, pipelines_registry)
attach_health_indicator(agent)
new_pipeline = LogStash::JavaPipeline.new(@pipeline_config, @metric, agent)
success = pipelines_registry.create_pipeline(pipeline_id, new_pipeline) do
new_pipeline.start # block until the pipeline is correctly started or crashed
end
LogStash::ConvergeResult::ActionResult.create(self, success)
end
def attach_health_indicator(agent)
health_observer = agent.health_observer
health_observer.detach_pipeline_indicator(pipeline_id) # just in case ...
health_observer.attach_pipeline_indicator(pipeline_id, agent)
end
def to_s
"PipelineAction::Create<#{pipeline_id}>"
end


@ -27,10 +27,15 @@ module LogStash module PipelineAction
def execute(agent, pipelines_registry)
success = pipelines_registry.delete_pipeline(@pipeline_id)
detach_health_indicator(agent) if success
LogStash::ConvergeResult::ActionResult.create(self, success)
end
def detach_health_indicator(agent)
agent.health_observer.detach_pipeline_indicator(pipeline_id)
end
def to_s
"PipelineAction::Delete<#{pipeline_id}>"
end


@ -31,10 +31,15 @@ module LogStash module PipelineAction
end
success = pipelines_registry.delete_pipeline(@pipeline_id)
detach_health_indicator(agent) if success
LogStash::ConvergeResult::ActionResult.create(self, success)
end
def detach_health_indicator(agent)
agent.health_observer.detach_pipeline_indicator(pipeline_id)
end
def to_s
"PipelineAction::StopAndDelete<#{pipeline_id}>"
end


@ -0,0 +1,54 @@
# Licensed to Elasticsearch B.V. under one or more contributor
# license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Elasticsearch B.V. licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
module LogStash
class PipelineResourceUsageValidator
include LogStash::Util::Loggable
WARN_HEAP_THRESHOLD = 10 # 10%
def initialize(max_heap_size)
@max_heap_size = max_heap_size
end
def check(pipelines_registry)
return if pipelines_registry.size == 0
percentage_of_heap = compute_percentage(pipelines_registry)
if percentage_of_heap >= WARN_HEAP_THRESHOLD
logger.warn("For a baseline of 2KB events, the maximum heap memory consumed across #{pipelines_registry.size} pipelines may reach up to #{percentage_of_heap}% of the entire heap (more if the events are bigger). The recommended percentage is less than #{WARN_HEAP_THRESHOLD}%. Consider reducing the number of pipelines, or the batch size and worker count per pipeline.")
else
logger.debug("For a baseline of 2KB events, the maximum heap memory consumed across #{pipelines_registry.size} pipelines may reach up to #{percentage_of_heap}% of the entire heap (more if the events are bigger).")
end
end
def compute_percentage(pipelines_registry)
max_event_count = sum_event_count(pipelines_registry)
estimated_heap_usage = max_event_count * 2.0 * 1024 # assume 2KB per event
percentage_of_heap = ((estimated_heap_usage / @max_heap_size) * 100).round(2)
end
def sum_event_count(pipelines_registry)
pipelines_registry.loaded_pipelines.inject(0) do |sum, (pipeline_id, pipeline)|
batch_size = pipeline.settings.get("pipeline.batch.size")
pipeline_workers = pipeline.settings.get("pipeline.workers")
sum + (batch_size * pipeline_workers)
end
end
end
end
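Under the validator's 2KB-per-event baseline, the worst case assumes every worker of every loaded pipeline holds a full batch in memory at once. A quick worked example of the same arithmetic (pipeline settings and heap size here are illustrative):

```ruby
# Two hypothetical pipelines and a 1 GiB heap, for illustration only.
pipelines = [
  { batch_size: 125, workers: 4 },
  { batch_size: 500, workers: 8 },
]
max_heap_bytes = 1_073_741_824

# worst-case in-flight events: sum of batch_size * workers per pipeline
max_event_count = pipelines.sum { |p| p[:batch_size] * p[:workers] } # 4500

estimated_heap_usage = max_event_count * 2.0 * 1024 # assume 2KB per event
percentage_of_heap   = ((estimated_heap_usage / max_heap_bytes) * 100).round(2)
# => 0.86, comfortably below the 10% WARN_HEAP_THRESHOLD
```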


@ -28,6 +28,7 @@ module LogStash
@lock = Monitor.new
end
# a terminated pipeline has either crashed OR finished normally
def terminated?
@lock.synchronize do
# a loading pipeline is never considered terminated
@ -35,6 +36,20 @@ module LogStash
end
end
# a finished pipeline finished _normally_ without exception
def finished?
@lock.synchronize do
# a loading pipeline is never considered terminated
@loading.false? && @pipeline.finished_run?
end
end
def crashed?
@lock.synchronize do
@pipeline&.crashed?
end
end
def running?
@lock.synchronize do
# not terminated and not loading
@@ -104,6 +119,7 @@ module LogStash
end
end
def empty?
@lock.synchronize do
@states.empty?
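The new `finished?` and `crashed?` predicates can be sketched in isolation. This is a simplification: the real `PipelineState` tracks loading via a concurrent flag (`@loading.false?`), modeled here as a plain boolean, and the pipeline object is assumed to expose `finished_run?` and `crashed?` as in the diff.

```ruby
require 'monitor'

# Simplified model of the pipeline-state predicates added above.
class PipelineStateSketch
  def initialize(pipeline)
    @pipeline = pipeline
    @loading = false # the real class flips a concurrent flag while loading
    @lock = Monitor.new
  end

  # a finished pipeline finished _normally_ without exception
  def finished?
    @lock.synchronize { !@loading && @pipeline.finished_run? }
  end

  def crashed?
    @lock.synchronize { @pipeline&.crashed? }
  end
end
```

The `Monitor` gives the same reentrant locking the diff relies on, so a predicate can safely call another predicate under the same lock.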


@@ -9,10 +9,11 @@ module LogStash
def ecs_compatibility
@_ecs_compatibility || LogStash::Util.synchronize(self) do
@_ecs_compatibility ||= begin
# use config_init-set value if present
break @ecs_compatibility unless @ecs_compatibility.nil?
# use config_init-set value if present
@_ecs_compatibility ||= @ecs_compatibility
# load default from settings
@_ecs_compatibility ||= begin
pipeline = execution_context.pipeline
pipeline_settings = pipeline && pipeline.settings
pipeline_settings ||= LogStash::SETTINGS
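The memoized fallback chain above (explicit `config_init` value first, then the settings default) can be sketched as follows. The class and parameter names are illustrative, and the real code additionally guards the memoization with `LogStash::Util.synchronize`.

```ruby
# Illustrative sketch of the ||= fallback chain used for ecs_compatibility.
class EcsResolverSketch
  def initialize(explicit: nil, default: :v1)
    @explicit = explicit # value set via config_init, if any
    @default = default   # stand-in for the pipeline/global settings default
  end

  def ecs_compatibility
    @memo ||= begin
      value = @explicit  # use config_init-set value if present
      value || @default  # else load the default from settings
    end
  end
end

EcsResolverSketch.new(explicit: :disabled).ecs_compatibility # => :disabled
EcsResolverSketch.new.ecs_compatibility                      # => :v1
```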


@@ -92,10 +92,6 @@ class LogStash::Runner < Clamp::StrictCommand
:default => LogStash::SETTINGS.get_default("config.field_reference.escape_style"),
:attribute_name => "config.field_reference.escape_style"
option ["--event_api.tags.illegal"], "STRING",
I18n.t("logstash.runner.flag.event_api.tags.illegal"),
:default => LogStash::SETTINGS.get_default("event_api.tags.illegal"),
:attribute_name => "event_api.tags.illegal"
# Module settings
option ["--modules"], "MODULES",
@@ -257,15 +253,24 @@ class LogStash::Runner < Clamp::StrictCommand
deprecated_option ["--http.enabled"], :flag,
I18n.t("logstash.runner.flag.http_enabled"),
:new_flag => "api.enabled", :passthrough => true # use settings to disambiguate
:new_flag => "api.enabled", :passthrough => true, # use settings to disambiguate
:obsoleted_version => "9"
deprecated_option ["--http.host"], "HTTP_HOST",
I18n.t("logstash.runner.flag.http_host"),
:new_flag => "api.http.host", :passthrough => true # use settings to disambiguate
:new_flag => "api.http.host", :passthrough => true, # use settings to disambiguate
:obsoleted_version => "9"
deprecated_option ["--http.port"], "HTTP_PORT",
I18n.t("logstash.runner.flag.http_port"),
:new_flag => "api.http.port", :passthrough => true # use settings to disambiguate
:new_flag => "api.http.port", :passthrough => true, # use settings to disambiguate
:obsoleted_version => "9"
deprecated_option ["--event_api.tags.illegal"], "STRING",
I18n.t("logstash.runner.flag.event_api.tags.illegal"),
:default => LogStash::SETTINGS.get_default("event_api.tags.illegal"),
:attribute_name => "event_api.tags.illegal", :passthrough => true,
:obsoleted_version => "9"
# We configure the registry and load any plugin that can register hooks
# with logstash, this needs to be done before any operation.
@@ -310,9 +315,17 @@ class LogStash::Runner < Clamp::StrictCommand
if setting("config.debug") && !logger.debug?
logger.warn("--config.debug was specified, but log.level was not set to \'debug\'! No config info will be logged.")
end
if setting("pipeline.buffer.type") != nil
configure_pipeline_buffer_type
if setting("pipeline.buffer.type") == nil
deprecation_logger.deprecated(
"'pipeline.buffer.type' setting is not explicitly defined. "\
"Before moving to 9.x set it to 'heap' and tune heap size upward, or set it to 'direct' to maintain existing behavior."
)
# set to direct to keep backward compatibility
buffer_type_setting = @settings.get_setting("pipeline.buffer.type")
buffer_type_setting.set("direct")
end
configure_pipeline_buffer_type
while (msg = LogStash::DeprecationMessage.instance.shift)
deprecation_logger.deprecated msg
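The defaulting logic above (warn when `pipeline.buffer.type` is unset, then force `direct` for backward compatibility) can be sketched with a plain hash standing in for the settings store and a lambda for the deprecation logger; neither is the real Logstash API.

```ruby
# Sketch of the buffer-type defaulting: warn when unset, then fall back
# to 'direct'. Settings hash and logger lambda are stand-ins.
def resolve_buffer_type(settings, deprecation_logger)
  if settings["pipeline.buffer.type"].nil?
    deprecation_logger.call(
      "'pipeline.buffer.type' setting is not explicitly defined. " \
      "Before moving to 9.x set it to 'heap' and tune heap size upward, " \
      "or set it to 'direct' to maintain existing behavior."
    )
    settings["pipeline.buffer.type"] = "direct" # keep backward compatibility
  end
  settings["pipeline.buffer.type"]
end

warnings = []
buffer = resolve_buffer_type({}, ->(msg) { warnings << msg })
# buffer is "direct" and one deprecation message was recorded
```

An explicitly configured value passes through untouched and emits no warning, matching the `== nil` guard in the diff.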
@@ -340,8 +353,8 @@ class LogStash::Runner < Clamp::StrictCommand
# Add local modules to the registry before everything else
LogStash::Modules::Util.register_local_modules(LogStash::Environment::LOGSTASH_HOME)
# Set up the Jackson defaults
LogStash::Util::Jackson.set_jackson_defaults(logger)
# Verify the Jackson defaults
LogStash::Util::Jackson.verify_jackson_overrides
@dispatcher = LogStash::EventDispatcher.new(self)
LogStash::PLUGIN_REGISTRY.hooks.register_emitter(self.class, @dispatcher)
@@ -480,8 +493,15 @@ class LogStash::Runner < Clamp::StrictCommand
def running_as_superuser
if Process.euid() == 0
unless @settings.set?("allow_superuser")
deprecation_logger.deprecated("Starting from version 9.0, " +
"running with superuser privileges is not permitted unless you explicitly set 'allow_superuser' to true, " +
"thereby acknowledging the possible security risks")
end
if setting("allow_superuser")
deprecation_logger.deprecated("NOTICE: Running Logstash as superuser is not recommended and won't be allowed in the future. Set 'allow_superuser' to 'false' to avoid startup errors in future releases.")
logger.warn("NOTICE: Running Logstash as a superuser is strongly discouraged as it poses a security risk. " +
"Set 'allow_superuser' to false for better security.")
else
raise(RuntimeError, "Logstash cannot be run as superuser.")
end


@@ -86,7 +86,10 @@ module LogStash
end
def register(setting)
return setting.map { |s| register(s) } if setting.kind_of?(Array)
# Method #with_deprecated_alias returns a collection containing a couple of other settings.
# The Ruby implementation returns an Array while the Java implementation returns a List,
# hence the following type check before recursing one layer deeper.
return setting.map { |s| register(s) } if setting.kind_of?(Array) || setting.kind_of?(java.util.List)
if @settings.key?(setting.name)
raise ArgumentError.new("Setting \"#{setting.name}\" has already been registered as #{setting.inspect}")
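The dispatch above (register a collection element by element, otherwise register the single setting) can be sketched with a simplified registry; the hash-based settings here are illustrative, and the real code also accepts `java.util.List`.

```ruby
# Simplified sketch of Settings#register's collection handling.
class RegistrySketch
  def initialize
    @settings = {}
  end

  # collections (Ruby Array here; java.util.List in the real code) are
  # registered element by element
  def register(setting)
    return setting.map { |s| register(s) } if setting.kind_of?(Array)
    name = setting.fetch(:name)
    if @settings.key?(name)
      raise ArgumentError.new("Setting \"#{name}\" has already been registered")
    end
    @settings[name] = setting
  end

  def key?(name)
    @settings.key?(name)
  end
end

registry = RegistrySketch.new
registry.register([{ name: "api.enabled" }, { name: "api.http.port" }])
```

This is exactly the shape that lets `with_deprecated_alias` hand back a `[proxy, alias]` pair and have both entries registered in one call.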
@@ -151,7 +154,7 @@ module LogStash
def to_hash
hash = {}
@settings.each do |name, setting|
next if setting.kind_of? Setting::DeprecatedAlias
next if (setting.kind_of? Setting::DeprecatedAlias) || (setting.kind_of? Java::org.logstash.settings.DeprecatedAlias)
hash[name] = setting.value
end
hash
@@ -244,54 +247,73 @@ module LogStash
class Setting
include LogStash::Settings::LOGGABLE_PROXY
attr_reader :name, :default
attr_reader :wrapped_setting
def initialize(name, klass, default = nil, strict = true, &validator_proc)
@name = name
unless klass.is_a?(Class)
raise ArgumentError.new("Setting \"#{@name}\" must be initialized with a class (received #{klass})")
raise ArgumentError.new("Setting \"#{name}\" must be initialized with a class (received #{klass})")
end
setting_builder = Java::org.logstash.settings.BaseSetting.create(name)
.defaultValue(default)
.strict(strict)
if validator_proc
setting_builder = setting_builder.validator(validator_proc)
end
@wrapped_setting = setting_builder.build()
@klass = klass
@validator_proc = validator_proc
@value = nil
@value_is_set = false
@strict = strict
validate(default) if @strict
@default = default
validate(default) if strict?
end
def default
@wrapped_setting.default
end
def name
@wrapped_setting.name
end
def initialize_copy(original)
@wrapped_setting = original.wrapped_setting.clone
end
# To be used only internally
def update_wrapper(wrapped_setting)
@wrapped_setting = wrapped_setting
end
def value
@value_is_set ? @value : default
@wrapped_setting.value()
end
def set?
@value_is_set
@wrapped_setting.set?
end
def strict?
@strict
@wrapped_setting.strict?
end
def set(value)
validate(value) if @strict
@value = value
@value_is_set = true
@value
validate(value) if strict?
@wrapped_setting.set(value)
@wrapped_setting.value
end
def reset
@value = nil
@value_is_set = false
@wrapped_setting.reset
end
def to_hash
{
"name" => @name,
"name" => @wrapped_setting.name,
"klass" => @klass,
"value" => @value,
"value_is_set" => @value_is_set,
"default" => @default,
"value" => @wrapped_setting.value,
"value_is_set" => @wrapped_setting.set?,
"default" => @wrapped_setting.default,
# Proc#== will only return true if it's the same obj
# so there's no point in comparing it
# also there's no use case atm to return the proc
@@ -301,7 +323,7 @@ module LogStash
end
def inspect
"<#{self.class.name}(#{name}): #{value.inspect}" + (@value_is_set ? '' : ' (DEFAULT)') + ">"
"<#{self.class.name}(#{name}): #{value.inspect}" + (@wrapped_setting.set? ? '' : ' (DEFAULT)') + ">"
end
def ==(other)
@@ -312,8 +334,8 @@ module LogStash
validate(value)
end
def with_deprecated_alias(deprecated_alias_name)
SettingWithDeprecatedAlias.wrap(self, deprecated_alias_name)
def with_deprecated_alias(deprecated_alias_name, obsoleted_version=nil)
SettingWithDeprecatedAlias.wrap(self, deprecated_alias_name, obsoleted_version)
end
##
@@ -323,58 +345,65 @@ module LogStash
end
def format(output)
effective_value = self.value
default_value = self.default
setting_name = self.name
@wrapped_setting.format(output)
end
if default_value == value # print setting and its default value
output << "#{setting_name}: #{effective_value.inspect}" unless effective_value.nil?
elsif default_value.nil? # print setting and warn it has been set
output << "*#{setting_name}: #{effective_value.inspect}"
elsif effective_value.nil? # default setting not set by user
output << "#{setting_name}: #{default_value.inspect}"
else # print setting, warn it has been set, and show default value
output << "*#{setting_name}: #{effective_value.inspect} (default: #{default_value.inspect})"
end
def clone(*args)
copy = self.dup
copy.update_wrapper(@wrapped_setting.clone())
copy
end
protected
def validate(input)
if !input.is_a?(@klass)
raise ArgumentError.new("Setting \"#{@name}\" must be a #{@klass}. Received: #{input} (#{input.class})")
raise ArgumentError.new("Setting \"#{@wrapped_setting.name}\" must be a #{@klass}. Received: #{input} (#{input.class})")
end
if @validator_proc && !@validator_proc.call(input)
raise ArgumentError.new("Failed to validate setting \"#{@name}\" with value: #{input}")
raise ArgumentError.new("Failed to validate setting \"#{@wrapped_setting.name}\" with value: #{input}")
end
end
class Coercible < Setting
def initialize(name, klass, default = nil, strict = true, &validator_proc)
@name = name
unless klass.is_a?(Class)
raise ArgumentError.new("Setting \"#{@name}\" must be initialized with a class (received #{klass})")
raise ArgumentError.new("Setting \"#{name}\" must be initialized with a class (received #{klass})")
end
@klass = klass
@validator_proc = validator_proc
@value = nil
@value_is_set = false
# needed to have the name method accessible when invoking validate
@wrapped_setting = Java::org.logstash.settings.BaseSetting.create(name)
.defaultValue(default)
.strict(strict)
.build()
if strict
coerced_default = coerce(default)
validate(coerced_default)
@default = coerced_default
updated_default = coerced_default
else
@default = default
updated_default = default
end
# default value must be coerced to the right type before being set
setting_builder = Java::org.logstash.settings.BaseSetting.create(name)
.defaultValue(updated_default)
.strict(strict)
if validator_proc
setting_builder = setting_builder.validator(validator_proc)
end
@wrapped_setting = setting_builder.build()
end
def set(value)
coerced_value = coerce(value)
validate(coerced_value)
@value = coerce(coerced_value)
@value_is_set = true
@value
@wrapped_setting.set(coerced_value)
coerced_value
end
def coerce(value)
@@ -383,22 +412,7 @@ module LogStash
end
### Specific settings #####
class Boolean < Coercible
def initialize(name, default, strict = true, &validator_proc)
super(name, Object, default, strict, &validator_proc)
end
def coerce(value)
case value
when TrueClass, "true"
true
when FalseClass, "false"
false
else
raise ArgumentError.new("could not coerce #{value} into a boolean")
end
end
end
java_import org.logstash.settings.Boolean
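The Ruby coercion removed above now lives in `org.logstash.settings.Boolean`; its semantics, taken directly from the deleted code, can be sketched as a plain method:

```ruby
# Semantics of the removed Boolean#coerce, now implemented in Java:
# accept real booleans and the strings "true"/"false", reject the rest.
def coerce_boolean(value)
  case value
  when TrueClass, "true"
    true
  when FalseClass, "false"
    false
  else
    raise ArgumentError.new("could not coerce #{value} into a boolean")
  end
end

coerce_boolean("true")  # => true
coerce_boolean(false)   # => false
```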
class Numeric < Coercible
def initialize(name, default = nil, strict = true)
@@ -509,27 +523,10 @@ module LogStash
@validator_class.validate(value)
end
end
java_import org.logstash.settings.SettingString
class String < Setting
def initialize(name, default = nil, strict = true, possible_strings = [])
@possible_strings = possible_strings
super(name, ::String, default, strict)
end
def validate(value)
super(value)
unless @possible_strings.empty? || @possible_strings.include?(value)
raise ArgumentError.new("Invalid value \"#{name}: #{value}\". Options are: #{@possible_strings.inspect}")
end
end
end
class NullableString < String
def validate(value)
return if value.nil?
super(value)
end
end
java_import org.logstash.settings.SettingNullableString
class Password < Coercible
def initialize(name, default = nil, strict = true)
@@ -733,15 +730,15 @@ module LogStash
protected
def validate(input)
if !input.is_a?(@klass)
raise ArgumentError.new("Setting \"#{@name}\" must be a #{@klass}. Received: #{input} (#{input.class})")
raise ArgumentError.new("Setting \"#{@wrapped_setting.name}\" must be a #{@klass}. Received: #{input} (#{input.class})")
end
unless input.all? {|el| el.kind_of?(@element_class) }
raise ArgumentError.new("Values of setting \"#{@name}\" must be #{@element_class}. Received: #{input.map(&:class)}")
raise ArgumentError.new("Values of setting \"#{@wrapped_setting.name}\" must be #{@element_class}. Received: #{input.map(&:class)}")
end
if @validator_proc && !@validator_proc.call(input)
raise ArgumentError.new("Failed to validate setting \"#{@name}\" with value: #{input}")
raise ArgumentError.new("Failed to validate setting \"#{@wrapped_setting.name}\" with value: #{input}")
end
end
end
@@ -782,7 +779,7 @@ module LogStash
return unless invalid_value.any?
raise ArgumentError,
"Failed to validate the setting \"#{@name}\" value(s): #{invalid_value.inspect}. Valid options are: #{@possible_strings.inspect}"
"Failed to validate the setting \"#{@wrapped_setting.name}\" value(s): #{invalid_value.inspect}. Valid options are: #{@possible_strings.inspect}"
end
end
@@ -792,9 +789,9 @@ module LogStash
end
def set(value)
@value = coerce(value)
@value_is_set = true
@value
coerced_value = coerce(value)
@wrapped_setting.set(coerced_value)
coerced_value
end
def coerce(value)
@@ -810,6 +807,8 @@ module LogStash
end
end
java_import org.logstash.settings.NullableSetting
# @see Setting#nullable
# @api internal
class Nullable < SimpleDelegator
@@ -833,14 +832,14 @@ module LogStash
class DeprecatedAlias < SimpleDelegator
# include LogStash::Util::Loggable
alias_method :wrapped, :__getobj__
attr_reader :canonical_proxy
attr_reader :canonical_proxy, :obsoleted_version
def initialize(canonical_proxy, alias_name)
def initialize(canonical_proxy, alias_name, obsoleted_version)
@canonical_proxy = canonical_proxy
@obsoleted_version = obsoleted_version
clone = @canonical_proxy.canonical_setting.clone
clone.instance_variable_set(:@name, alias_name)
clone.instance_variable_set(:@default, nil)
clone.update_wrapper(clone.wrapped_setting.deprecate(alias_name))
super(clone)
end
@@ -869,9 +868,15 @@ module LogStash
private
def do_log_setter_deprecation
deprecation_logger.deprecated(I18n.t("logstash.settings.deprecation.set",
:deprecated_alias => name,
:canonical_name => canonical_proxy.name))
deprecation_logger.deprecated(
I18n.t("logstash.settings.deprecation.set",
:deprecated_alias => name,
:canonical_name => canonical_proxy.name,
:obsoleted_sentences =>
@obsoleted_version.nil? ?
I18n.t("logstash.settings.deprecation.obsoleted_future") :
I18n.t("logstash.settings.deprecation.obsoleted_version", :obsoleted_version => @obsoleted_version))
)
end
end
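The message selection above picks one of two I18n sentences depending on whether a concrete obsoletion version is known. A sketch with plain strings standing in for the `logstash.settings.deprecation.obsoleted_future` / `obsoleted_version` lookups:

```ruby
# Stand-in for the I18n branch in do_log_setter_deprecation above;
# the literal sentences are illustrative, not the real translations.
def obsoleted_sentence(obsoleted_version)
  if obsoleted_version.nil?
    "It will be removed in a future release."           # ~ obsoleted_future
  else
    "It will be removed in version #{obsoleted_version}." # ~ obsoleted_version
  end
end

obsoleted_sentence(nil) # generic warning when no version was supplied
obsoleted_sentence("9") # concrete warning once a version is known
```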
@@ -888,10 +893,11 @@ module LogStash
# including the canonical setting and a deprecated alias.
# @param canonical_setting [Setting]: the setting to wrap
# @param deprecated_alias_name [String]: the name for the deprecated alias
# @param obsoleted_version [String]: the version of Logstash in which the deprecated alias will be removed
#
# @return [SettingWithDeprecatedAlias,DeprecatedSetting]
def self.wrap(canonical_setting, deprecated_alias_name)
setting_proxy = new(canonical_setting, deprecated_alias_name)
def self.wrap(canonical_setting, deprecated_alias_name, obsoleted_version=nil)
setting_proxy = new(canonical_setting, deprecated_alias_name, obsoleted_version)
[setting_proxy, setting_proxy.deprecated_alias]
end
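The `wrap` pattern above (a `SimpleDelegator` proxy around the canonical setting plus a deprecated alias delegating back to it) can be sketched in miniature. `CanonicalSetting` and the `*Sketch` classes are stand-ins, heavily simplified from the classes in the diff:

```ruby
require 'delegate'

# Minimal model of a canonical setting; the real one wraps a Java BaseSetting.
CanonicalSetting = Struct.new(:name, :value)

class DeprecatedAliasSketch < SimpleDelegator
  attr_reader :alias_name, :obsoleted_version
  def initialize(canonical, alias_name, obsoleted_version)
    super(canonical) # reads fall through to the canonical setting
    @alias_name = alias_name
    @obsoleted_version = obsoleted_version
  end
end

class SettingWithDeprecatedAliasSketch < SimpleDelegator
  attr_reader :deprecated_alias

  # returns both the proxy and its alias so a registry can register the pair
  def self.wrap(canonical, alias_name, obsoleted_version = nil)
    proxy = new(canonical, alias_name, obsoleted_version)
    [proxy, proxy.deprecated_alias]
  end

  def initialize(canonical, alias_name, obsoleted_version)
    super(canonical)
    @deprecated_alias = DeprecatedAliasSketch.new(self, alias_name, obsoleted_version)
  end
end

proxy, dep = SettingWithDeprecatedAliasSketch.wrap(
  CanonicalSetting.new("api.enabled", true), "http.enabled", "9"
)
# proxy.name == "api.enabled"; dep.alias_name == "http.enabled"
```

Because the alias delegates to the proxy, reading through the deprecated name resolves to the canonical setting's value, which is what lets `--http.enabled` keep working while warning.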
@@ -899,10 +905,10 @@ module LogStash
attr_reader :deprecated_alias
alias_method :canonical_setting, :__getobj__
def initialize(canonical_setting, deprecated_alias_name)
def initialize(canonical_setting, deprecated_alias_name, obsoleted_version)
super(canonical_setting)
@deprecated_alias = DeprecatedAlias.new(self, deprecated_alias_name)
@deprecated_alias = DeprecatedAlias.new(self, deprecated_alias_name, obsoleted_version)
end
def set(value)

Some files were not shown because too many files have changed in this diff.