Compare commits


288 commits

Author SHA1 Message Date
Karen Metts
b519cf4213
Doc: Remove local k8s files (#17547) 2025-04-17 19:20:41 -04:00
Karen Metts
f91f5a692d
Remove ADOC preview link (#17496) 2025-04-17 12:15:45 -04:00
João Duarte
005358ffb4
set major-version to "9.x", used only for the package installation section (#17562)
closes #17561
2025-04-17 17:13:50 +01:00
Victor Martinez
6646400637
mergify: support 8.19 and remove support 8.x (#17567) 2025-04-16 20:07:34 +02:00
Colleen McGinnis
47d430d4fb
Doc: Fix image paths for docs-assembler (#17566) 2025-04-16 09:46:48 -04:00
Cas Donoghue
49cf7acad0
Update logstash_releases.json after 8.17.5 (#17560)
* Update logstash_releases.json after 8.17.5

* Update 9.next post 9.0.0 release

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2025-04-15 11:18:37 -07:00
Andrea Selva
dae2b61ab2
Update logstash_releases.json after 8.18.0 (#17506) 2025-04-15 12:02:11 +02:00
Yehor Shvedov
86042f8c98
New way of backporting to active branches using gh action (#17551) 2025-04-15 00:39:38 +03:00
George Wallace
2c95068e04
Update index.md (#17548) 2025-04-10 16:08:27 -04:00
Karen Metts
187c925cc8
Doc: Update installation info (#17532) 2025-04-09 17:04:17 -04:00
Cas Donoghue
8e6e183adc
Ensure elasticsearch logs and data dirs exist before startup (#17531)
With a recent change in ES (https://github.com/elastic/elasticsearch/pull/125449),
configuring path.data or path.logs to directories that do not exist causes ES to
fail to start up. This commit ensures those directories exist. The
teardown script already ensures they are removed: 712b37e1df/qa/integration/services/elasticsearch_teardown.sh (L26-L27)
2025-04-09 13:32:32 -07:00
Rye Biesemeyer
712b37e1df
setting: enforce non-nullable (restore 8.15.x behavior) (#17522) 2025-04-09 09:28:38 -07:00
João Duarte
815fa8be1c
updates to docker image template based on feedback (#17494)
* change base images to ubi9-minimal
* do all env2yaml related copying in 1 COPY
* use -trimpath in go build
* move other COPY to end of dockerfile
* don't run package manager upgrade
* FROM and AS with same case
* ENV x=y instead of ENV x y
* remove indirect config folder
2025-04-09 16:17:39 +01:00
Mashhur
b9bac5dfc6
[Monitoring LS] Recommends collecting metricsets to fully visualize metrics on dashboards. (#17479)
* Disabling some metricsets may cause dashboards to display metrics only partially. This change recommends collecting all metricsets to fully visualize metrics on dashboards.

* Paraphrasing sentences and grammar correction.

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>

* Update docs/reference/serverless-monitoring-with-elastic-agent.md

* Make recommendation as a tip.

---------

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-04-08 15:03:53 -07:00
Dimitrios Liappis
b9469e0726
Fix JDK matrix pipeline after configurable IT split (#17461)
PR #17219 introduced configurable split quantities for IT tests, which
resulted in broken JDK matrix pipelines (e.g. as seen via the elastic
internal link:
https://buildkite.com/elastic/logstash-linux-jdk-matrix-pipeline/builds/444

reporting the following error

```
  File "/buildkite/builds/bk-agent-prod-k8s-1743469287077752648/elastic/logstash-linux-jdk-matrix-pipeline/.buildkite/scripts/jdk-matrix-tests/generate-steps.py", line 263
    def integration_tests(self, part: int, parts: int) -> JobRetValues:
    ^^^
SyntaxError: invalid syntax
There was a problem rendering the pipeline steps.
Exiting now.
```
)

This commit fixes the above problem, which was already addressed in #17642, in a more
idiomatic way.

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-04-08 15:34:07 +03:00
Karen Metts
d66a2cf758
Release notes, deprecations, breaking for 9.0.0 (#17507) 2025-04-07 19:09:14 -04:00
Karen Metts
e13fcadad8
Doc: Incorporate field ref deep dive content (#17484)
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-04-07 15:04:49 -04:00
Andrea Selva
cb4c234aee
Update uri gem required by Logstash (#17495) 2025-04-07 10:32:12 +02:00
kaisecheng
eeb2162ae4
pin cgi to 0.3.7 (#17487) 2025-04-03 18:34:58 +01:00
Mashhur
ae3b3ed17c
Remove tech preview from agent driven LS monitoring pages. (#17482)
* Remove tech preview from agent driven LS monitoring pages.

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-04-02 10:43:42 -07:00
Rob Bavey
3c6cbbf35b
Breaking changes for 9.0 (#17380) 2025-04-01 23:19:30 -04:00
Rob Bavey
5a052b33f9
Fix standalone agent access for agent-driven monitoring (#17386)
* Fix standalone agent access for agent-driven monitoring

Change incorrect dedicated instructions, and base them on those from running Elastic Agent
in standalone mode.


---------

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-04-01 14:50:14 -04:00
Karen Metts
7913f91340
Doc: Move 9.0 pre-release notes to release notes (#17439) 2025-04-01 10:16:51 -04:00
João Duarte
5f5b4bb3c3
Fix persistent-queues.md PQ sizing multiplication factors #17451 (#17452)
closes #17431 for main/9.0 branches
2025-04-01 13:22:50 +01:00
Andrea Selva
422cd4e06b
Fix syntax in BK CI script (#17462) 2025-04-01 12:26:07 +02:00
Victor Martinez
26af21df85
ci(backport): remove former approach (#17347) 2025-03-31 17:45:31 +02:00
Victor Martinez
a539695830
github-actions: enable dependabot (#17421) 2025-03-30 10:06:18 +02:00
Mashhur
4b88773726
Do not pipe Git output into a pager. (#17442) 2025-03-28 10:56:39 -07:00
Colleen McGinnis
e5bebcea17
remove reliance on redirects (#17440) 2025-03-28 12:45:19 -05:00
Karen Metts
e2c6254c81
Doc: Remove plugin docs from logstash core (#17405)
Co-authored-by: Colleen McGinnis <colleen.mcginnis@elastic.co>
2025-03-27 11:02:45 -04:00
Karen Metts
add7b3f4d3
Doc: Upgrade content improvements (#17403)
* Change upgrade prereq from 8.17 to 8.18
2025-03-26 13:25:31 -04:00
Cas Donoghue
6de59f2c02
Pin rubocop-ast development gem due to new dep on prism (#17407)
The rubocop-ast gem just introduced a new dependency on prism.
 - https://rubygems.org/gems/rubocop-ast/versions/1.43.0

In our install default gems rake task we are seeing issues trying to build native
extensions. Upstream JRuby is seeing a similar problem (at
least it is the same failure mode): https://github.com/jruby/jruby/pull/8415

This commit pins rubocop-ast to 1.42.0 which is the last version that did not
have an explicit prism dependency.
2025-03-26 08:48:28 -07:00
Andrea Selva
075fdb4152
Limit memory consumption in test on overflow (#17373)
Updates only test code to be able to run a test that consumes a lot of memory if:
- the physical memory is bigger than the requested Java heap
- the JDK version is greater than or equal to 21.

The reason to limit the JDK version is that on a 16GB machine the G1GC in JDK 21 is more efficient than the one in previous JDKs and so lets the test complete with a 10GB heap, while on JDK 17 it consistently fails with an OOM error.
2025-03-26 11:29:09 +01:00
Andrea Selva
6b277ccf0d
Update logstash_releases.json after 8.17.4 (#17396) 2025-03-26 09:50:25 +01:00
Andrea Selva
f76edcea5e
Update logstash_releases.json after 8.16.6 (#17394) 2025-03-26 09:01:27 +01:00
Rob Bavey
f705a9de48
Fix Elasticsearch output SSL settings (#17391)
Replace removed Elasticsearch output SSL settings with the latest values
2025-03-24 11:45:14 -04:00
Kaarina Tungseth
7ac0423de0
Updates navigation titles and descriptions for release notes (#17381) 2025-03-21 11:58:31 -05:00
Colleen McGinnis
284272b137
[docs] Miscellaneous docs clean up (#17372)
* remove unused substitutions
* move images
* validate build
2025-03-21 11:11:24 -04:00
Karen Metts
960d997e9f
Doc: Fix upgrade TOC structure (#17361) 2025-03-21 10:28:39 -04:00
Rye Biesemeyer
3e0f488df2
tests: make integration split quantity configurable (#17219)
* tests: make integration split quantity configurable

Refactors shared splitter bash function to take a list of files on stdin
and split into a configurable number of partitions, emitting only those from
the currently-selected partition to stdout.

Also refactors the only caller in the integration_tests launcher script to
accept an optional partition_count parameter (defaulting to `2` for backward-
compatibility), to provide the list of specs to the function's stdin, and to
output relevant information about the quantity of partition splits and which
was selected.

* ci: run integration tests in 3 parts
2025-03-19 16:37:27 -07:00
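The bash function itself isn't shown in the commit above; as a rough illustration of the partitioning idea, here is one plausible round-robin scheme for emitting only the currently-selected partition (a Java sketch with illustrative names; the real implementation is a shared bash function and may split differently):

```
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: item i belongs to partition (i % parts);
// only items in the selected partition are emitted.
public final class SpecPartitioner {
    static List<String> selectPartition(List<String> specs, int part, int parts) {
        List<String> selected = new ArrayList<>();
        for (int i = 0; i < specs.size(); i++) {
            if (i % parts == part - 1) {      // part is 1-based, like "part 2 of 3"
                selected.add(specs.get(i));
            }
        }
        return selected;
    }

    public static void main(String[] args) {
        List<String> specs = List.of("a_spec.rb", "b_spec.rb", "c_spec.rb", "d_spec.rb");
        System.out.println(selectPartition(specs, 2, 3)); // [b_spec.rb]
    }
}
```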
Yehor Shvedov
7683983168
Disable support of OpenJDK 17 (#17338) 2025-03-20 01:34:33 +02:00
Andrea Selva
afde43f918
Added test to verify the int overflow happens (#17353)
Use a long instead of an int to track the length of the first token.

The size limit validation requires summing two integers: the length of the chars accumulated so far plus the length of the next fragment's head part. If either of the two sizes is close to the max integer value, the sum overflows, which breaks the check at 9c0e50faac/logstash-core/src/main/java/org/logstash/common/BufferedTokenizerExt.java (L123).

To hit this case, sizeLimit must be bigger than 2^31 bytes (2GB) and data fragments without any line delimiter must be pushed to the tokenizer with a total size close to 2^31 bytes.
2025-03-19 16:50:04 +01:00
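A minimal sketch of the overflow described above, with illustrative values (this is not the actual BufferedTokenizerExt code):

```
// Demonstrates why the size-limit check needs a long accumulator: adding two
// ints near Integer.MAX_VALUE wraps around to a negative value, so the
// "sum > sizeLimit" validation silently stops tripping.
public final class OverflowDemo {
    public static void main(String[] args) {
        int sizeLimit = 1024;
        int accumulated = Integer.MAX_VALUE - 10; // chars accumulated so far
        int nextFragmentHead = 1024;              // head part of the next fragment

        int intSum = accumulated + nextFragmentHead;          // overflows: negative
        System.out.println(intSum > sizeLimit);               // false - limit missed

        long longSum = (long) accumulated + nextFragmentHead; // correct value
        System.out.println(longSum > sizeLimit);              // true - limit enforced
    }
}
```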
Andrea Selva
9c0e50faac
Use org.logstash.common.Util for hashing, defaulting to SHA256 (#17346)
Removes the usage of Apache Commons Codec MessageDigest in favor of the internal Util class, which embodies hashing methods.
2025-03-19 09:15:25 +01:00
Rye Biesemeyer
10b5a84f84
add ci shared qualified-version script (#17311)
* ci: add shareable script for generating qualified version

* ci: use shared script to generate qualified version
2025-03-18 15:41:33 -07:00
Andrea Selva
787fd2c62f
Removed unused configHash computation that can be replaced by PipelineConfig.configHash() (#17336)
Removed the unused configHash computation in AbstractPipeline, which was used only in tests; replaced it with a PipelineConfig.configHash() invocation.
2025-03-18 08:58:24 +01:00
Karen Metts
193af6a272
Doc: Refine new MD docs (WIP) (#17325) 2025-03-17 17:14:40 -04:00
Mashhur
8d10baa957
Internal collection doc update to reflect enabling legacy collection. (#17326) 2025-03-14 14:50:42 -07:00
Karen Metts
ea99e1db58
Doc: Update TOC order in API docs (#17324) 2025-03-14 15:47:16 -04:00
kaisecheng
4779d5e250
[Doc] OpenAPI spec (#17292)
This commit migrates the API doc to an OpenAPI spec

- expand Node Info `/_node/<types>`
- expand Node Stats 
- add health report 
- break down the share part to components
- add examples
- update authentication

Co-authored-by: Lisa Cawley <lcawley@elastic.co>
2025-03-14 17:31:39 +00:00
Cas Donoghue
964468f922
Require shellwords in artifact rake task (#17319)
The https://github.com/elastic/logstash/pull/17310 PR changed the rake task for
artifact creation to use shellwords from the standard library. The acceptance tests
need to explicitly load that library. This commit updates the rake file to handle
loading the required code.
2025-03-13 18:38:52 -07:00
Cas Donoghue
0d931a502a
Surface failures from nested rake/shell tasks (#17310)
Previously, when rake shelled out, the output would be lost, which
made debugging CI logs difficult. This commit updates the stack with
improved message surfacing on error.
2025-03-13 15:20:38 -07:00
Mashhur
e748488e4a
Upgrade elasticsearch-ruby client. (#17161)
* Upgrade elasticsearch-ruby client.

* Handle Faraday's removed basic auth option and apply the ES client module name change.
2025-03-13 10:03:07 -07:00
Cas Donoghue
d916972877
Shareable function for partitioning integration tests (#17223)
For the fedramp high work https://github.com/elastic/logstash/pull/17038/files a
use case for multiple scripts consuming the partitioning functionality emerged.
As we look to more advanced partitioning we want to ensure that the
functionality will be consumable from multiple scripts.

See https://github.com/elastic/logstash/pull/17219#issuecomment-2698650296
2025-03-12 09:30:45 -07:00
Rob Bavey
bff0d5c40f
Remove automatic backport-8.x label creation (#17290)
This commit removes the automatic creation of the `backport-8.x` label on any commit
2025-03-06 17:55:01 -05:00
Colleen McGinnis
cb6886814c
Doc: Fix external links (#17288) 2025-03-06 13:38:31 -05:00
kaisecheng
feb2b92ba2
[CI] Health report integration tests use the new artifacts-api (#17274)
migrate to the new artifacts-api
2025-03-06 16:17:43 +00:00
kaisecheng
d61a83abbe
[CI] benchmark uses the new artifacts-api (#17224)
migrate benchmark to the new artifacts-api
2025-03-06 16:16:19 +00:00
Andrea Selva
07a3c8e73b
Reimplement LogStash::Numeric setting in Java (#17127)
Reimplements `LogStash::Setting::Numeric` Ruby setting class into the `org.logstash.settings.NumericSetting` and exposes it through `java_import` as `LogStash::Setting::NumericSetting`.
Updates the rspec tests:
- verifies that `java.lang.IllegalArgumentException` is thrown instead of `ArgumentError`, because that is the kind of exception thrown by the Java code during verification.
2025-03-06 10:44:22 +01:00
Ry Biesemeyer
a736178d59
Pluginmanager install preserve (#17267)
* tests: integration tests for pluginmanager install --preserve

* fix regression where pluginmanager's install --preserve flag didn't
2025-03-05 20:38:59 -08:00
Mashhur
b993bec499
Temporarily disable mergify conflict process. (#17258) 2025-03-05 12:52:00 -08:00
Rob Bavey
ba5f21576c
Fix pqcheck and pqrepair on Windows (#17210)
A recent change to pqcheck attempted to address an issue where the
pqcheck would not run on Windows machines when located in a folder containing
a space, such as "C:\program files\elastic\logstash". While this fixed an
issue with spaces in folders, it introduced a new issue related to Java options,
and the pqcheck was still unable to run on Windows.

This PR attempts to address the issue by removing the quotes around the Java options,
which caused the option parsing to fail, and by removing the explicit setting of
the classpath - the use of `set CLASSPATH=` in the `:concat` function is sufficient
to set the classpath, and should also fix the spaces issue

Fixes: #17209
2025-03-05 15:48:20 -05:00
Mashhur
1e06eea86e
Additional cleanup changes to ls2ls integ tests (#17246)
* Additional cleanup changes to ls2ls integ tests: replace heartbeat-input with the reload option, set queue drain to get consistent results.
2025-03-05 12:24:23 -08:00
Rob Bavey
7446e6bf6a
Update logstash_releases.json (#17253)
Update Logstash releases file with new 8.17 and 8.16 releases
2025-03-05 15:11:44 -05:00
Victor Martinez
f4ca06cfed
mergify: support for some backport aliases (#17217) 2025-03-05 20:46:06 +01:00
Ry Biesemeyer
73ffa243bf
tests: ls2ls delay checking until events have been processed (#17167)
* tests: ls2ls delay checking until events have been processed

* Make sure upstream sends the expected number of events before checking the expectation with downstream. Remove unnecessary or duplicated logic from the spec.

* Add exception handling in `wait_for_rest_api` to make waiting for the LS REST API retriable.

---------

Co-authored-by: Mashhur <mashhur.sattorov@elastic.co>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2025-03-05 11:36:52 -08:00
kaisecheng
0a745686f6
gradle task migrate to the new artifacts-api (#17232)
This commit migrates the gradle task to the new artifacts-api

- remove dependency on staging artifacts
- all builds use snapshot artifacts
- resolve version from current branch, major.x, previous minor,
   with priority given in that order.

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-03-05 17:12:52 +00:00
Victor Martinez
7d1458fad3
ci: filter mergify backports for the Exhaustive tests pipeline (#17227) 2025-03-05 17:07:55 +01:00
Karen Metts
0a3a2a302c
Doc: Running LS in ECK (#17225) 2025-03-05 09:09:58 -05:00
kaisecheng
34416fd971
[test] ls2ls spec adds heartbeat to keep alive (#17228)
Adds heartbeat-input to the LS to LS integration test config to flush the last batch from PQ
2025-03-05 12:29:28 +00:00
Victor Martinez
c95430b586
mergify: add backport rules for 8.18 and 9.0 (#17168) 2025-03-05 08:57:03 +01:00
Cas Donoghue
062154494a
Improve warning for insufficient file resources for PQ max_bytes (#16656)
This commit refactors the `PersistedQueueConfigValidator` class to provide a
more detailed, accurate, and actionable warning when a pipeline's PQ configs are at
risk of running out of disk space. See
https://github.com/elastic/logstash/issues/14839 for design considerations. The
highlights of the changes include accurately determining the free resources on a
filesystem disk and then providing a breakdown of the usage for each of the
paths configured for a queue.
2025-03-04 13:27:10 -08:00
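As a rough sketch of the check described above (not the actual PersistedQueueConfigValidator code; names and reporting are illustrative), the idea amounts to grouping queue paths by their filesystem and comparing the summed requirements against the disk's usable space:

```
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public final class PqDiskCheck {
    // Sum each filesystem's configured queue max_bytes and warn when the
    // total exceeds the free space actually available on that filesystem.
    static void warnOnInsufficientSpace(Map<Path, Long> maxBytesByQueuePath) throws IOException {
        Map<FileStore, Long> requiredByDisk = new HashMap<>();
        for (Map.Entry<Path, Long> e : maxBytesByQueuePath.entrySet()) {
            requiredByDisk.merge(Files.getFileStore(e.getKey()), e.getValue(), Long::sum);
        }
        for (Map.Entry<FileStore, Long> e : requiredByDisk.entrySet()) {
            long usable = e.getKey().getUsableSpace();
            if (e.getValue() > usable) {
                System.err.printf("PQ requires %d bytes on %s but only %d are free%n",
                        e.getValue(), e.getKey(), usable);
            }
        }
    }
}
```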
Ry Biesemeyer
8c96913807
Pluginmanager clean after mutate (#17203)
* pluginmanager: always clean after mutate

* pluginmanager: don't skip updating plugins installed with --version

* pr feedback
2025-03-04 10:59:35 -08:00
Liam Thompson
53d39adb21
[DOCS] Fix doc link (#17213) 2025-03-04 11:38:22 -05:00
Colleen McGinnis
50671709e3
add missing mapped_page (#17208) 2025-03-03 12:29:35 -06:00
Colleen McGinnis
24fd2a6c75
clean up cross-repo links (#17190) 2025-03-03 12:37:47 -05:00
kaisecheng
86785815bd
Fix empty node stats pipelines (#17185)
Fixed an issue where the `/_node/stats` API displayed empty pipeline metrics 
when X-Pack monitoring was enabled
2025-02-28 21:38:35 +00:00
João Duarte
a4cf2bcc52
add missing Makefile tasks to build oss and wolfi images from build context tarballs (#17189) 2025-02-28 20:54:46 +00:00
kaisecheng
f562f37df2
Update z_rubycheck.rake to no longer inject Xmx1g
This allows the environment variable JRUBY_OPTS to be used for setting properties like Xmx
original pr: #16420
2025-02-28 15:22:34 +00:00
João Duarte
fecfc7c602
add env2yaml source files to build context tarball (#17151)
* build full docker image from dockerfiles during docker acceptance tests
2025-02-28 11:34:06 +00:00
kaisecheng
2d69d06809
use UBI9 as base image (#17156)
- change the base image from ubi8 to ubi9
- remove installation of curl
2025-02-28 09:29:19 +00:00
Ry Biesemeyer
0f81816311
qa: don't bypass plugin manager tests on linux (#17171)
* qa: don't bypass plugin manager tests on linux

* add gradle task to build gem fixtures for integration tests
2025-02-27 13:24:04 -08:00
Ry Biesemeyer
793e8c0b45
plugin manager: add --no-expand flag for list command (#17124)
* plugin manager: add --no-expand flag for list command

Allows us to avoid expanding aliases and integration plugins

* spec: escape expected output in regexp
2025-02-27 07:24:56 -08:00
Victor Martinez
d40386a335
mergify: support backports automation with labels (#16937) 2025-02-27 15:16:51 +01:00
Colleen McGinnis
3115c78bf8
[docs] Migrate docs from AsciiDoc to Markdown (#17159)
* delete asciidoc files
* add migrated files
2025-02-26 14:19:48 -05:00
Colleen McGinnis
884ae815b5
add the new ci checks (#17158) 2025-02-26 14:19:25 -05:00
João Duarte
823dcd25fa
Update logstash_releases.json to include Logstash 7.17.28 (#17150) 2025-02-25 16:17:37 +00:00
Dimitrios Liappis
4d52b7258d
Add Windows 2025 to CI (#17133)
This commit adds Windows 2025 to the Windows JDK matrix and exhaustive tests pipelines.
2025-02-24 15:29:35 +02:00
Cas Donoghue
227c0d8150
Update container acceptance tests with stdout/stderr changes (#17138)
In https://github.com/elastic/logstash/pull/17125, jvm setup output was redirected to
stderr to avoid polluting stdout. This test previously had to do some
additional processing to parse that information. Now that we have split the
destinations, the tests can be simplified to look for the data they are trying to
validate on the appropriate stream.
2025-02-21 10:40:43 -08:00
Ry Biesemeyer
91258c3f98
entrypoint: avoid polluting stdout (#17125)
routes output from setup-related functions to stderr, so that stdout can
include only the output of the actual program.
2025-02-20 12:55:40 -08:00
Cas Donoghue
e8e24a0397
Fix acceptance test assertions for updated plugin remove (#17126)
This commit updates the acceptance tests to expect messages in the updated
format for removing plugins. See https://github.com/elastic/logstash/pull/17030
for change.
2025-02-20 07:19:36 -08:00
Cas Donoghue
e094054c0e
Fix acceptance test assertions for updated plugin remove (#17122)
This commit updates the acceptance tests to expect messages in the updated
format for removing plugins. See https://github.com/elastic/logstash/pull/17030
for change.
2025-02-19 15:44:29 -08:00
Ry Biesemeyer
089558801e
plugins: improve remove command to support multiple plugins (#17030)
Removal works in a single pass by finding plugins that would have unmet
dependencies if all of the specified plugins were to be removed, and
proceeding with the removal only if no conflicts were created.

> ~~~
> ╭─{ rye@perhaps:~/src/elastic/logstash@main (pluginmanager-remove-multiple ✘) }
> ╰─● bin/logstash-plugin remove logstash-input-syslog logstash-filter-grok
> Using system java: /Users/rye/.jenv/shims/java
> Resolving dependencies......
> Successfully removed logstash-input-syslog
> Successfully removed logstash-filter-grok
> [success (00:00:05)]
~~~
2025-02-19 11:17:20 -08:00
Ry Biesemeyer
9abad6609c
spec: improve ls2ls spec (#17114)
* spec: improve ls2ls spec

 - fixes upstream/downstream convention
   - upstream: the sending logstash (has an LS output)
   - downstream: the receiving logstash (has an LS input)
 - helper `run_logstash_instance` yields the `LogstashService` instance
   and handles the teardown.
 - pass the pipeline id and node name to the LS instances via command line
   flags to make logging easier to differentiate
 - use the generator input's sequence id to ensure that the _actual_ events
   generated are received by the downstream pipeline

* start with port-offset 100

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2025-02-18 21:53:35 -08:00
João Duarte
637f447b88
allow concurrent Batch deserialization (#17050)
Currently the deserialization is behind the readBatch's lock, so any large batch will take time deserializing, causing any other Queue writer (e.g. netty executor threads) and any other Queue reader (pipeline worker) to block.

This commit moves the deserialization out of the lock, allowing multiple pipeline workers to deserialize batches concurrently.

- add intermediate batch-holder from `Queue` methods
- make the intermediate batch-holder a private inner class of `Queue` with a descriptive name `SerializedBatchHolder`

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2025-02-17 19:01:44 +00:00
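The shape of that change, as a minimal sketch (SerializedBatchHolder is named in the commit; the lock and queue details here are simplified placeholders):

```
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Simplified illustration: only the cheap byte handoff happens under the
// queue lock; the expensive deserialization runs outside it, so several
// pipeline workers can deserialize their batches concurrently.
abstract class QueueSketch {
    private final ReentrantLock lock = new ReentrantLock();

    List<Object> readBatch() {
        SerializedBatchHolder holder;
        lock.lock();
        try {
            holder = readSerializedBatch(); // byte shuffling only, under the lock
        } finally {
            lock.unlock();
        }
        return holder.deserialize();        // CPU-heavy work, outside the lock
    }

    abstract SerializedBatchHolder readSerializedBatch();

    interface SerializedBatchHolder {
        List<Object> deserialize();
    }
}
```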
kaisecheng
e896cd727d
CPM handle 404 response gracefully with user-friendly log (#17052)
Starting from es-output 12.0.2, a 404 response is treated as an error. Previously, central pipeline management considered 404 as an empty pipeline, not an error.

This commit restores the expected behavior by handling 404 gracefully and logging a user-friendly message.
It also removes the redundant pipeline cache in CPM.

Fixes: #17035
2025-02-17 11:08:19 +00:00
Ry Biesemeyer
d20eb4dbcb
qa: use clean expansion of LS tarball per fixture instance (#17082)
* qa: use clean expansion of LS tarball per fixture instance

Because QA tests can _modify_ the Logstash installation (e.g. those that
invoke the plugin manager), it is important that the service wrapper
begins with a clean expansion of the logstash tarball.

* qa: enable safe reuse of ls_home in ls_to_ls tests
2025-02-14 07:53:52 -08:00
Dimitrios Liappis
78c34465dc
Allow capturing heap dumps in DRA BK jobs (#17081)
This commit allows Buildkite to capture any heap dumps produced
during DRA builds.
2025-02-13 18:13:17 +02:00
Dimitrios Liappis
8cd38499b5
Use centralized source of truth for active branches (#17063)
This commit simplifies the DRA process in Logstash by removing the need to maintain a separate file for the active branches, instead relying on a centrally maintained file containing the source of truth.
While at it, we refactor/simplify the creation of an array with the versions in `.buildkite/scripts/snyk/resolve_stack_version.sh`.
2025-02-12 16:17:52 +02:00
Ry Biesemeyer
a847ef7764
Update logstash_releases.json (#17055)
- 8.18 branch was cut 2025-01-29; add 8.next and shift 8.future
 - 8.16.4 and 8.17.2 were released 2025-02-11; shift forward
2025-02-11 10:54:21 -08:00
kaisecheng
5573b5ad77
fix logstash-keystore to accept spaces in values when added via stdin (#17039)
This commit preserves spaces in values, ensuring that multi-word strings are stored as intended.
Prior to this change, `logstash-keystore` incorrectly handled values containing spaces, 
causing only the first word to be stored.
2025-02-07 21:30:11 +00:00
Dimitrios Liappis
c7204fd7d6
Don't honor VERSION_QUALIFIER if set but empty (#17032)
PR #17006 revealed that the `VERSION_QUALIFIER` env var gets honored in
various scripts when present but empty.
This shouldn't be the case as the DRA process is designed to gracefully
ignore empty values for this variable.

This commit changes various ruby scripts to not treat "" as truthy.
Bash scripts (used by CI etc.) are already ok with this as part of
refactorings done in #16907.

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-02-07 13:05:23 +02:00
Mashhur
e23da7985c
Release note placeholder might be empty, making parsing lines nil tolerant. (#17026) 2025-02-05 11:07:56 -08:00
Andrea Selva
1c8cf546c2
Fix BufferedTokenizer to properly resume after a buffer full condition respecting the encoding of the input string (#16968)
Permits effective use of the tokenizer also in contexts where a line is bigger than the limit.
Fixes an issue related to the token size limit error: when the offending token was bigger than the input fragment, the tokenizer was unable to recover the token stream from the first delimiter after the offending token and messed things up, losing part of the tokens.

## How it solves the problem
This is a second take at fixing the processing of tokens from the tokenizer after a buffer full error. The first try, #16482, was rolled back due to the encoding error #16694.
The first try failed to return the tokens in the same encoding as the input.
This PR does a couple of things:
- accumulates the tokens, so that after a buffer full condition it can resume with the next tokens after the offending one.
- respects the encoding of the input string. It uses the `concat` method instead of `addAll`, which avoids converting RubyString to String and back to RubyString. When returning the head `StringBuilder` it enforces the encoding of the input charset.
2025-02-05 10:25:08 +01:00
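A simplified sketch of the accumulate-and-resume behaviour described above (plain Java with a newline delimiter; encoding handling omitted, and the real BufferedTokenizerExt works on RubyString objects):

```
import java.util.ArrayList;
import java.util.List;

class TokenizerSketch {
    private final StringBuilder head = new StringBuilder();
    private final int sizeLimit;
    private boolean dropping = false; // true after a buffer-full condition

    TokenizerSketch(int sizeLimit) { this.sizeLimit = sizeLimit; }

    List<String> extract(String data) {
        List<String> tokens = new ArrayList<>();
        String[] parts = data.split("\n", -1); // -1 keeps the trailing fragment
        for (int i = 0; i < parts.length; i++) {
            boolean delimited = i < parts.length - 1; // a "\n" followed this part
            if (dropping) {                  // skip fragments of the oversized token
                if (delimited) dropping = false; // resume at the next delimiter
                continue;
            }
            head.append(parts[i]);
            if (head.length() > sizeLimit) { // buffer full: discard and record state
                head.setLength(0);
                dropping = !delimited;       // keep dropping until a delimiter shows up
                continue;                    // (the real code also raises the limit error)
            }
            if (delimited) {
                tokens.add(head.toString());
                head.setLength(0);
            }
        }
        return tokens;
    }
}
```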
Mashhur
32cc85b9a7
Add short living 9.0 next and update main in CI release version definition. (#17008) 2025-01-31 10:32:16 -08:00
Mashhur
14c16de0c5
Core version bump to 9.1.0 (#16991) 2025-01-30 15:13:58 -08:00
Mashhur
786911fa6d
Add 9.0 branch to the CI branches definition (#16997) 2025-01-30 11:37:30 -08:00
Jan Calanog
2172879989
github-action: Add AsciiDoc freeze warning (#16969) 2025-01-30 10:39:27 -05:00
João Duarte
51ab5d85d2
upgrade jdk to 21.0.6+7 (#16932) 2025-01-30 11:17:04 +00:00
Mashhur
7378b85f41
Adding elastic_integration upgrade guidelines. (#16979)
* Adding elastic_integration upgrade guidelines.
2025-01-29 15:20:47 -08:00
Rob Bavey
70a6c9aea6
[wip] Changing upgrade docs to refer to 9.0 instead of 8.0 (#16977) 2025-01-29 17:33:17 -05:00
Rob Bavey
8a41a4e0e5
Remove sample breaking change from breaking changes doc (#16978) 2025-01-29 17:31:37 -05:00
Andrea Selva
6660395f4d
Update branches.json for 8.18 (#16981)
Add 8.18 to CI inventory branches
2025-01-29 19:04:16 +01:00
João Duarte
d3093e4b44
integration tests: switch log input to filestream in filebeat (#16983)
Log input has been deprecated in filebeat 9.0.0 and throws an error if it's present in the configuration.
This commit switches the configuration to the "filestream" input.
2025-01-29 13:51:09 +00:00
Ry Biesemeyer
6943df5570
plugin manager: add --level=[major|minor|patch] (default: minor) (#16899)
* plugin manager: add `--level=[major|minor|patch]` (default: `minor`)

* docs: plugin manager update `--level` behavior

* Update docs/static/plugin-manager.asciidoc

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* docs: plugin update major as subheading

* docs: intention-first in major plugin updates

* Update docs/static/plugin-manager.asciidoc

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-28 08:33:37 -08:00
Victor Martinez
1fda320ed9
ci(buildkite): exclude files/folders that are not tested in Buildkite (#16929) 2025-01-28 11:54:29 +01:00
kaisecheng
2be4812118
add openssl command to wolfi image (#16966)
This commit adds the openssl command to the logstash-wolfi image
Fixes: #16965
2025-01-27 18:58:39 +00:00
kaisecheng
3f41828ebb
remove irrelevant warning for internal pipeline (#16938)
This commit removes an irrelevant warning for internal pipelines, such as the monitoring pipeline.
The monitoring pipeline is expected to have one worker, so the warning is not useful.

Fixes: #13298
2025-01-27 16:44:12 +00:00
João Duarte
c8a6566877
fix user and password detection from environment's uri (#16955) 2025-01-27 11:38:42 +00:00
Andrea Selva
03b11e9827
Reimplement LogStash::String setting in Java (#16576)
Reimplements `LogStash::Setting::String` Ruby setting class into the `org.logstash.settings.SettingString` and exposes it through `java_import` as `LogStash::Setting::SettingString`.
Updates the rspec tests in two ways:
- the logging mock is now converted to a real Log4J appender that spies on log lines that are later verified
- verifies that `java.lang.IllegalArgumentException` is thrown instead of `ArgumentError`, because that is the kind of exception thrown by the Java code during verification.
2025-01-24 16:56:53 +01:00
kaisecheng
dc740b46ca
[doc] fix the necessary privileges of central pipeline management (#16902)
CPM requires two roles: logstash_admin and logstash_system

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-23 11:04:19 +00:00
Karen Metts
f66e00ac10
Doc: Remove extra symbol to fix formatting error (#16926) 2025-01-22 18:21:25 -05:00
João Duarte
52b7fb0ae6
fix jars installer for new maven and pin psych to 5.2.2 (#16919)
Handle maven output that can carry "garbage" information after the jar's name; this patch deletes that extra information. It also pins psych to 5.2.2 until JRuby ships with snakeyaml-engine 2.9 and jar-dependencies 0.5.2.

Related to: https://github.com/jruby/jruby/issues/8579
2025-01-22 15:35:50 +00:00
kaisecheng
d4ba08c358
Update logstash_releases.json (#16917) 2025-01-21 11:29:26 +00:00
Dimitrios Liappis
9385cfac5a
Use --qualifier in release manager (#16907)
This commit uses the new --qualifier parameter in the release manager
for publishing dra artifacts. Additionally, simplifies the expected
variables to rely on a simple `VERSION_QUALIFIER`.

Snapshot builds are skipped when VERSION_QUALIFIER is set.
Finally, for helping to test DRA PRs, we also allow passing the `DRA_BRANCH`  option/env var
to override BUILDKITE_BRANCH.

Closes https://github.com/elastic/ingest-dev/issues/4856
2025-01-20 13:55:17 +02:00
Andrea Selva
58e6dac94b
Increase Xmx used by JRuby during Rake execution to 4Gb (#16911) 2025-01-19 18:02:23 +01:00
Karen Metts
92d7210146
Doc: WPS integration (#16910) 2025-01-17 13:20:53 -05:00
kaisecheng
cd729b7682
Add %{{TIME_NOW}} pattern for sprintf (#16906)
* Add a new pattern %{{TIME_NOW}} to `event.sprintf` to generate a fresh timestamp.
The timestamp is represented as a string in the default ISO 8601 format

For example,
```
input {
    heartbeat {
    add_field => { "heartbeat_time" => "%{{TIME_NOW}}" }
    }
}
```
2025-01-17 16:51:15 +00:00
João Duarte
9a2cd015d4
inject VERSION_QUALIFIER into artifacts (#16904)
VERSION_QUALIFIER was already observed in the rake artifacts task, but only to influence the name of the artifact.

This commit ensures that the qualifier is also displayed in the CLI and in the HTTP API.
2025-01-17 10:18:35 +00:00
kaisecheng
ff44b7cc20
Update logstash_releases.json (#16901) 2025-01-14 15:46:29 +00:00
Ry Biesemeyer
348f1627a5
remove pipeline bus v1 (#16895) 2025-01-10 15:55:25 -08:00
Cas Donoghue
356ecb3705
Replace/remove references to defunct freenode instance (#16873)
The preferred channel for communication about LS is the elastic discussion
forum; this commit updates the source code and readme files to reflect that.
2025-01-10 14:28:35 -08:00
Mashhur
ae8ad28aaa
Add beats and elastic-agent SSL changes to 9.0 breaking change doc. (#16742) 2025-01-10 09:54:04 -08:00
Mashhur
db34116c46
Doc: Environment variables cannot be in config.string comments. (#16689)
* Doc: Environment variables cannot be in config.string comments due to substitution logic.

* Apply suggestions from code review

Suggestion to revise the statements.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

Make sentences present tense.

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-10 09:53:23 -08:00
Mashhur
a215101032
Validate the size limit in BufferedTokenizer. (#16882) 2025-01-09 15:53:07 -08:00
Karen Metts
d978e07f2c
Logstash API spec - first pass (#16546)
Co-authored-by: lcawl <lcawley@elastic.co>
2025-01-09 16:27:06 -05:00
Mashhur
47d04d06b2
Initialize flow metrics if pipeline metric.collect params is enabled. (#16881) 2025-01-09 12:30:49 -08:00
Ry Biesemeyer
4554749da2
Test touchups (#16884)
* test: improved and clearer stream read constraints validation

* test: remove unused  var
2025-01-09 12:28:45 -08:00
Ry Biesemeyer
16392908e2
jackson stream read constraints: code-based defaults (#16854)
* Revert "Apply Jackson stream read constraints defaults at runtime (#16832)"

This reverts commit cc608eb88b.

* jackson stream read constraints: code-based defaults

refactors stream read constraints to couple default values with their
associated overrides, which allows us to have more descriptive logging
that includes provenance of the value that has been applied.
2025-01-09 07:18:25 -08:00
kaisecheng
dae7fd93db
remove enterprise search from default distribution (#16818)
Elastic App Search and Elastic Workplace Search are deprecated and removed from v9
- remove enterprise_search integration plugin from default plugins
- add breaking change doc

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-09 12:15:41 +00:00
Cas Donoghue
274c212d9d
Ensure plugin config marked :deprecated logs to deprecation logger (#16833)
Previously when the `:deprecated` modifier was used in the plugin config DSL a
log message was sent at `:warn` level to the main logger. This commit updates
that message to be routed *only* to the deprecation logger.
2025-01-06 12:18:25 -08:00
Karen Metts
e2b322e8c1
Doc: Message Logstash module deprecations and removal (#16840) 2025-01-06 11:17:58 -05:00
kaisecheng
ef36df6b81
Respect environment variables in jvm.options (#16834)
JvmOptionsParser adds support for ${VAR:default} syntax when parsing jvm.options
- allows dynamic resolution of environment variables in the jvm.options file
- enables fallback to a default value when the environment variable is not set
2025-01-03 23:04:28 +00:00
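A minimal sketch of how such `${VAR:default}` resolution can work (an illustrative regex-based approach, not the actual JvmOptionsParser code):

```
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class EnvSubstitution {
    private static final Pattern VAR = Pattern.compile("\\$\\{(\\w+)(?::([^}]*))?\\}");

    // Replaces ${VAR} / ${VAR:default} with the environment value, falling
    // back to the default (or empty string) when the variable is unset.
    static String resolve(String line) {
        Matcher m = VAR.matcher(line);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = System.getenv(m.group(1));
            if (value == null) value = (m.group(2) != null) ? m.group(2) : "";
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // prints "-Xmx1g" when LS_HEAP_SIZE is not set in the environment
        System.out.println(resolve("-Xmx${LS_HEAP_SIZE:1g}"));
    }
}
```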
kaisecheng
de6a6c5b0f
Add pipeline metrics to Node Stats API (#16839)
This commit introduces three new metrics per pipeline in the Node Stats API:
- workers
- batch_size
- batch_delay

```
{
  ...
  pipelines: {
    main: {
      events: {...}, 
      flow: {...}, 
      plugins: {...}, 
      reloads: {...}, 
      queue: {...}, 
      pipeline: {
        workers: 12,
        batch_size: 125,
        batch_delay: 5,
      }, 
    }
  }
  ...
}
```
2025-01-03 20:48:14 +00:00
Cas Donoghue
531f795037
Ssl standardization breaking changes docs (#16844)
* Add logstash-filter-elasticsearch obsolete ssl section to breaking changes doc

Adds information about the SSL setting obsolescence for the Elasticsearch filter to the 9.0 breaking changes doc

* Fix headers

Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>

* Add logstash-output-tcp obsolete ssl section to breaking changes doc

* Fix Asciidoctor doc links

* Add logstash-input-tcp obsolete ssl section to breaking changes doc

* Fix asciidoc plugin links

* Add logstash-filter-http obsolete ssl section to breaking changes doc

* Add logstash-input-http obsolete ssl section to breaking changes doc

* Add logstash-input-http_poller obsolete ssl section to breaking changes doc

* Add logstash-input-elastic_serverless_forwarder ssl conf to breaking changes

---------

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-01-03 11:26:46 -08:00
Cas Donoghue
cc608eb88b
Apply Jackson stream read constraints defaults at runtime (#16832)
When Logstash 8.12.0 added increased Jackson stream read constraints to
jvm.options, assumptions about the existence of that file's contents
were invalidated. This led to issues like #16683.

This change ensures Logstash applies defaults from config at runtime:
- MAX_STRING_LENGTH: 200_000_000
- MAX_NUMBER_LENGTH: 10_000
- MAX_NESTING_DEPTH: 1_000

These match the jvm.options defaults and are applied even when config
is missing. Config values still override these defaults when present.
2025-01-02 14:52:39 -08:00
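For reference, applying those defaults through Jackson's own API looks roughly like this (a sketch using Jackson 2.15+'s StreamReadConstraints; not necessarily the exact code path Logstash uses):

```
import com.fasterxml.jackson.core.StreamReadConstraints;

public final class JacksonDefaults {
    // Applies the jvm.options-equivalent defaults globally; explicit config
    // values would then be applied on top of (i.e. instead of) these.
    static void applyDefaults() {
        StreamReadConstraints.overrideDefaultStreamReadConstraints(
            StreamReadConstraints.builder()
                .maxStringLength(200_000_000)
                .maxNumberLength(10_000)
                .maxNestingDepth(1_000)
                .build());
    }
}
```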
Ry Biesemeyer
01c8e8bb55
Avoid lock when ecs_compatibility is explicitly specified (#16786)
Because a `break` escapes a `begin`...`end` block, we must not use a `break` in order to ensure that the explicitly set value gets memoized to avoid lock contention.

> ~~~ ruby
> def fake_sync(&block)
>   puts "FAKE_SYNC:enter"
>   val = yield
>   puts "FAKE_SYNC:return(#{val})"
>   return val
> ensure
>   puts "FAKE_SYNC:ensure"
> end
> 
> fake_sync do
>   @ivar = begin
>     puts("BE:begin")
>   	break :break
>   	
>   	val = :ret
>   	puts("BE:return(#{val})")
>   	val
>   ensure
>     puts("BE:ensure")
>   end
> end
> ~~~

Note: no `FAKE_SYNC:return`:

> ~~~
> ╭─{ rye@perhaps:~/src/elastic/logstash (main ✔) }
> ╰─● ruby break-esc.rb
> FAKE_SYNC:enter
> BE:begin
> BE:ensure
> FAKE_SYNC:ensure
> [success]
> ~~~
2024-12-19 15:48:05 -08:00
kaisecheng
ae75636e17
update ironbank image to ubi9/9.5 (#16819) 2024-12-19 17:24:01 +00:00
kaisecheng
05789744d2
Remove the Arcsight module and the modules framework (#16794)
Remove all module related code
- remove arcsight module
- remove module framework
- remove module tests
- remove module configs
2024-12-19 14:28:54 +00:00
kaisecheng
03ddf12893
[doc] use UBI8 as base image (#16812) 2024-12-17 18:16:48 +00:00
João Duarte
6e0d235c9d
update releases file with 8.16.2 GA (#16807) 2024-12-17 10:22:12 +00:00
Karen Metts
e1f4e772dc
Doc: Update security docs to replace obsolete cacert setting (#16798) 2024-12-16 15:25:43 -05:00
João Duarte
e6e0f9f6eb
give more memory to tests. 1gb instead of 512mb (#16764) 2024-12-16 10:41:05 +00:00
Cas Donoghue
2d51cc0ba9
Document obsolete settings for elasticsearch output plugin (#16787)
Update breaking changes doc with the standardized ssl settings for the
logstash-output-elasticsearch plugin.
2024-12-13 11:21:07 -08:00
Cas Donoghue
188d9e7ed8
Update serverless tests to include product origin header (#16766)
This commit updates the curl scripts that interact with Kibana's
`api/logstash/pipeline/*` endpoint. Additionally, it adds the header to any curl
that interacts with the elasticsearch API as well.
2024-12-13 09:22:45 -08:00
kaisecheng
65495263d4
[CI] remove 8.15 DRA (#16795) 2024-12-13 13:44:33 +00:00
João Duarte
264283889e
update logstash_releases to account for 8.17.0 GA (#16785) 2024-12-12 17:32:05 +01:00
kaisecheng
5bff2ad436
[CI] benchmark readme (#16783)
- add instruction to run benchmark in v8 with `xpack.monitoring.allow_legacy_collection`
- remove scripted field 5m_num from dashboards
2024-12-12 11:09:22 +00:00
Cas Donoghue
095fbbb992
Add breaking changes docs for input-elasticsearch (#16744)
This commit follows the pattern established in
https://github.com/elastic/logstash/pull/16701 for indicating obsolete ssl
settings in logstash core plugins.
2024-12-09 13:24:07 -08:00
João Duarte
e36cacedc8
ensure inputSize state value is reset during buftok.flush (#16760) 2024-12-09 09:10:54 -08:00
Ry Biesemeyer
202d07cbbf
ensure jackson overrides are available to static initializers (#16719)
Moves the application of jackson defaults overrides into pure java, and
applies them statically _before_ the `org.logstash.ObjectMappers` has a chance
to start initializing object mappers that rely on the defaults.

We replace the runner's invocation (which was too late to be fully applied) with
a _verification_ that the configured defaults have been applied.
2024-12-04 14:27:26 -08:00
Rob Bavey
ab19769521
Pin date dependency to 3.3.3 (#16755)
Resolves: #16095, #16754
2024-12-04 14:39:50 -05:00
Mashhur
4d9942d68a
Update usage of beats-input obsoleted SSL params in the core. (#16753) 2024-12-04 11:12:21 -08:00
Rob Bavey
e3265d93e8
Pin jar-dependencies to 0.4.1 (#16747)
Pin jar-dependencies to `0.4.1`, until https://github.com/jruby/jruby/issues/7262
is resolved.
2024-12-03 17:35:00 -05:00
João Duarte
af76c45e65
Update logstash_releases.json to account for 7.17.26 GA (#16746) 2024-12-03 15:25:55 +00:00
João Duarte
1851fe6b2d
Update logstash_releases.json to account for GA of 8.15.5 (#16734) 2024-11-27 14:09:22 +00:00
mmahacek
d913e2ae3d
Docs: Troubleshooting update for JDK bug handling cgroups v1 (#16721)
---------
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-11-27 11:08:20 +00:00
Rob Bavey
ccde1eb8fb
Add SSL Breaking changes for logstash-output-http to breaking changes… (#16701)
* Add SSL Breaking changes for logstash-output-http to breaking changes doc
* Add missing discrete tags
* Improve formatting of plugin breaking changes

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-11-26 16:23:44 -05:00
Rob Bavey
0e58e417ee
Increase timeout for docs PR link action (#16718)
Update the timeout from 15 minutes to 30 minutes to try and fix the inline docs previews feature, and clean up comments
2024-11-22 11:57:15 -05:00
Karen Metts
615545027f
Doc: Roll 6.3 breaking changes under 6.0 series (#16706) 2024-11-22 11:16:02 -05:00
Cas Donoghue
eb7e1253e0
Revert "Bugfix for BufferedTokenizer to completely consume lines in case of lines bigger then sizeLimit (#16482)" (#16715)
This reverts commit 85493ce864.
2024-11-21 09:18:59 -08:00
João Duarte
aff8d1cce7
update logstash_release with 8.16.1 and 8.16.2-SNAPSHOT (#16711) 2024-11-21 10:03:50 +00:00
João Duarte
2f0e10468d
Update logstash_releases.json to account for 8.17 Feature Freeze (#16709) 2024-11-21 09:35:33 +00:00
github-actions[bot]
0dd64a9d63
PipelineBusV2 deadlock proofing (#16671) (#16682)
* pipeline bus: add deadlock test for unlisten/unregisterSender

* pipeline bus: eliminate deadlock

Moves the sync-to-notify out of `AddressStateMapping#mutate`'s effectively-synchronous
block to eliminate a race condition where unlistening to an address
and unregistering a sender could deadlock.

It is safe to notify an AddressState's attached input without exclusive access
to the AddressState, because notifying an input that has since been disconnected
is net zero harm.

(cherry picked from commit 8af6343a26)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-11-20 13:26:13 -08:00
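The shape of that fix, sketched with illustrative names (the real code lives in PipelineBusV2's AddressStateMapping):

```
import java.util.Optional;
import java.util.function.Consumer;

// Illustrative sketch: mutate the address state while holding its lock, but
// capture the input to notify and invoke it only after the lock is released,
// so unlisten/unregisterSender can no longer deadlock against each other.
final class AddressStateSketch {
    private final Object lock = new Object();
    private volatile Runnable attachedInputListener; // hypothetical placeholder

    void mutate(Consumer<AddressStateSketch> mutation) {
        Runnable toNotify;
        synchronized (lock) {
            mutation.accept(this);
            toNotify = attachedInputListener;   // capture only, don't call yet
        }
        // Safe without the lock: notifying an input that has since been
        // disconnected is net zero harm.
        Optional.ofNullable(toNotify).ifPresent(Runnable::run);
    }
}
```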
Mashhur
15b203448a
[Health API E2E] Align on agent python version and avoid pip install. (#16704)
* Pip install is unnecessary and sometimes hangs with a prompt dialog. This commit aligns on the agent python version, which has pip installed by default.

* Update .buildkite/scripts/health-report-tests/main.sh

Improve readability.

Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>

---------

Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>
2024-11-20 10:26:02 -08:00
Cas Donoghue
7b3d23b9d5
Replace removed yaml module (#16703)
In https://github.com/elastic/logstash/pull/16586 the module include was
removed. This causes failures in the script when the module is referenced. This
commit re-enables the include for the yaml module.
2024-11-20 10:13:30 -08:00
João Duarte
977efbddde
Update branches.json to include 8.15, 8.16 and 8.17 (#16698) 2024-11-20 16:37:10 +00:00
Cas Donoghue
e0ed994ab1
Update license checker with new logger dependency (#16695)
A new transitive dependency on the `logger` gem has been added through sinatra 4.1.0. Update the
license checker to ensure this is accounted for.
2024-11-20 11:49:28 +00:00
Andrea Selva
d4fb06e498
Introduce a new flag to explicitly permit legacy monitoring (#16586)
Introduce a new flag setting `xpack.monitoring.allow_legacy_collection` which eventually enables the legacy monitoring collector.

Update the method that tests if monitoring is enabled so that it also considers `xpack.monitoring.allow_legacy_collection` to determine whether the `monitoring.*` settings are valid or not.
By default it's false; the user has to intentionally enable it to continue using the legacy monitoring settings.


---------

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-11-19 08:52:28 +01:00
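For illustration, opting back in would look something like this in `logstash.yml` (the new flag comes from this commit; the accompanying legacy `xpack.monitoring.*` settings shown are the usual ones and are assumed here):

```
# opt back in to the legacy collector explicitly (defaults to false)
xpack.monitoring.allow_legacy_collection: true
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]
```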
Andrea Selva
a94659cf82
Default buffer type to 'heap' for 9.0 (#16500)
Switch the default value of `pipeline.buffer.type` to use heap memory instead of direct memory.

Change the default value of the setting `pipeline.buffer.type` from direct to heap and update the documentation accordingly.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-11-13 16:58:56 +01:00
João Duarte
2a23680cfd
Update logstash_releases.json for new branching strategy (#16585)
See naming rationale in https://github.com/logstash-plugins/.ci/pull/63#issue-2597373955
After 8.16 is GA, the "releases" entry should become:

```json
  "releases": {
    "7.current": "7.17.24",
    "8.current": "8.16.0",
    "8.previous": "8.15.3"
  },
```

For snapshots we'll also test against "main", "8.next", and "8.future". The labels are:

- `main`: main branch
-  `8.future`: the future 8.x release, i.e. current version of the 8.x branch
- `8.next`: the short lived period between a minor's FF - when the new branch is cut from 8.x - and GA
- `8.current`: the most recent 8.x release
- `8.previous`: the previous, but still supported, 8.x release
2024-11-13 11:04:02 +00:00
Cas Donoghue
ff8c154c4d
Extend ruby linting tasks to handle file inputs (#16660)
This commit extends the gradle and rake tasks to pass through a list of files
for rubocop to lint. This allows more specificity and fine grained control for
linting when the consumer of the tasks only wishes to lint a select few files.
2024-11-12 12:46:52 -08:00
Yehor Shvedov
74d87c9ea0
Update catalog-info file with correct system property (#16669) 2024-11-12 21:40:17 +02:00
kaisecheng
5826c6f902
Use UBI as base image (#16599)
Logstash Docker images, full and OSS, now use the UBI image as their base, replacing the previous Ubuntu base.

- change the base image of `full` and `oss` to ubi
- Set locale to C.UTF-8
- remove ubi flavour
- use go image to build env2yaml
- remove redundant and refactor steps
- add support to build image in mac aarch64
- allow customizing ELASTIC_VERSION and LOCAL_ARTIFACTS for test purpose
2024-11-08 16:25:12 +00:00
Ellie
d9ead9a8db
Remove link to deleted cluster (#16658) 2024-11-08 09:03:47 -05:00
Nicole Albee
046ea1f5a8
For custom java plugins, set the platform = 'java'. (#16628) 2024-11-06 08:48:21 +00:00
João Duarte
efbee31461
Update .ruby-version to jruby-9.4.9.0 (#16642) 2024-11-06 08:22:37 +00:00
kaisecheng
5847d77331
skip allow_superuser in Windows OS (#16629)
As the user id is always zero on Windows,
this commit excludes the check for running as root on Windows.
2024-11-05 15:37:19 +00:00
Nicole Albee
113585d4a5
Anchor the -java match pattern at the end of the string. (#16626)
This fixes the offline install problem of the logstash-input-java_filter_example plugin.
2024-11-05 14:21:15 +00:00
João Duarte
6703aec476
bump jruby to 9.4.9.0 (#16634) 2024-11-05 13:49:07 +00:00
kaisecheng
849f431033
fix Windows java not found log (#16633) 2024-11-05 10:15:51 +00:00
Andrea Selva
852149be2e
Update JDK to latest in versions.yml (#16627)
Update JDK to version 21.0.5+11
2024-11-04 16:40:20 +01:00
kaisecheng
8ce58b8355
run acceptance tests as non-root (#16624) 2024-11-01 19:47:08 +00:00
kaisecheng
00da72378b
add bootstrap to docker build to fix missing jars (#16622)
The DRA build failed because the required jars were missing, as they had been removed during the Docker build process.
2024-11-01 15:45:35 +00:00
Karen Metts
0006937e46
Doc: Reset breaking changes and release notes for 9.0 (#16603) 2024-10-31 18:06:52 -04:00
João Duarte
9eced9a106
reduce effort during build of docker images (#16619)
there's no need to build JDK-less and Windows tarballs for docker images,
so this change simplifies the build process.

It should reduce the time needed to build docker images.
2024-10-31 16:00:25 +00:00
João Duarte
472e27a014
make docker build and gradle tasks more friendly towards ci output (#16618) 2024-10-31 16:00:11 +00:00
kaisecheng
db59cd0fbd
set allow_superuser to false as default (#16558)
- set allow_superuser to false by default for v9
- change the buildkite image of ruby unit test to non-root
2024-10-31 13:33:00 +00:00
Andrea Selva
c602b851bf
[CI] Change agent for JDK availability check and add schedule also for 8.x (#16614)
Switch execution agent of JDK availability check pipeline from vm-agent to container-agent.
Moves the schedule definition from the `Logstash Pipeline Scheduler` pipeline into the pipeline definition, adding a schedule also for `8.x` branch.
2024-10-30 12:13:28 +01:00
Andrea Selva
5d523aa5c8
Fix bad reference to a variable (#16615) 2024-10-30 11:44:41 +01:00
Andrea Selva
ed5874bc27
Use jvm catalog for reproducible builds and expose new pipeline to check JDK availability (#16602)
Updates the existing `createElasticCatalogDownloadUrl` method to use the precise version retrieved from `versions.yml` to download the JDK, instead of using the latest of the major version. This makes the build reproducible again.
Defines a new Gradle `checkNewJdkVersion` task to check if there is a new JDK version available in the JVM catalog matching the same major version as the current branch.
Creates a new Buildkite pipeline to execute a `bash` script that runs the Gradle task; it also updates `catalog-info.yaml` with the new pipeline and a trigger to execute it every week.
2024-10-29 10:55:15 +01:00
João Duarte
ca19f0029e
make max inflight warning global to all pipelines (#16597)
The current max inflight error message focuses on a single pipeline and on a maximum amount of 10k events regardless of the heap size.

The new warning will take into account all loaded pipelines and also consider the heap size, giving a warning if the total number of events consumes 10% or more of the total heap.

For the purpose of the warning, events are assumed to be 2KB, as that is a normal size for a small log entry.
2024-10-25 14:44:33 +01:00
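The arithmetic of the new warning, as a small sketch (illustrative names; the real code derives the inflight total from all loaded pipelines):

```
public final class InflightWarningSketch {
    private static final long ASSUMED_EVENT_BYTES = 2 * 1024; // 2KB per event

    // totalInflightEvents ~ sum over pipelines of (workers * batch_size)
    static boolean shouldWarn(long totalInflightEvents) {
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        long estimatedBytes = totalInflightEvents * ASSUMED_EVENT_BYTES;
        return estimatedBytes >= maxHeapBytes / 10; // 10% or more of the heap
    }

    public static void main(String[] args) {
        // e.g. two pipelines with 12 workers * 10_000 batch_size each
        System.out.println(shouldWarn(2L * 12 * 10_000));
    }
}
```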
Edmo Vamerlatti Costa
93b0913fd9
Updated CI releases inventory after 7.17.25 (#16589) 2024-10-22 15:40:02 +02:00
kaisecheng
566bdf66fc
remove http.* settings (#16552)
The commit removes the deprecated settings http.port, http.host, http.enabled, and http.environment
2024-10-18 12:15:38 +01:00
kaisecheng
467ab3f44b
Enable log.format.json.fix_duplicate_message_fields by default (#16578)
Set `log.format.json.fix_duplicate_message_fields` to `true` as default
to avoid collision of the message field in log lines when log.format is JSON
2024-10-17 16:22:30 +01:00
Edmo Vamerlatti Costa
dcafa0835e
Updated CI releases inventory after 8.15.3 (#16581) 2024-10-17 16:55:31 +02:00
João Duarte
daf979c189
default to jdk 21 on all ci/build tasks (#16464) 2024-10-17 15:47:52 +01:00
kaisecheng
3f0ad12d06
add http.* deprecation log (#16538)
- refactor deprecated alias to support obsoleted version
- add deprecation log for http.* config
2024-10-17 14:07:51 +01:00
Andrea Selva
b6f16c8b81
Adds a JMH benchmark to test BufferedTokenizerExt class (#16564)
Adds a JMH benchmark to measure the performance of BufferedTokenizerExt.
Also updates the Gradle build script to remove CMS GC flags and fix deprecations for Gradle 9.0.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-10-16 16:55:52 +02:00
Andrea Selva
85493ce864
Bugfix for BufferedTokenizer to completely consume lines in case of lines bigger then sizeLimit (#16482)
Fixes the behaviour of the tokenizer so that it works properly when buffer full conditions are met.

Updates BufferedTokenizerExt so that it can accumulate token fragments coming from different data segments. When a "buffer full" condition is matched, it records this state in a local field so that on the next data segment it can consume all the token fragments up to the next token delimiter.
Updates the accumulation variable from a RubyArray containing strings to a StringBuilder which contains the head token, while the remaining token fragments are stored in the input array.
Furthermore, it translates the `buftok_spec` tests into JUnit tests.
2024-10-16 16:48:25 +02:00
João Duarte
ab77d36daa
ensure minitar 1.x is used instead of 0.x (#16565) 2024-10-16 12:37:35 +01:00
Andrea Selva
63706c1a36
Marked ArcSight Module as deprecated in documentation. (#16553)
Updates the ArcSight Module guide to mark the module as deprecated.
2024-10-16 08:37:13 +02:00
kaisecheng
cb3b7c01dc
Remove event_api.tags.illegal for v9 (#16461)
This commit removed `--event_api.tags.illegal` option
Fix: #16356
2024-10-15 22:27:36 +01:00
Rob Bavey
3f2a659289
Update branches.json to point to 8.16 branch (#16560) 2024-10-15 15:43:16 -04:00
Mashhur
dfd256e307
[Health API] Add 1-min, 5-min backpressure and multipipeline test cases. (#16550)
* Health API: Add 1-min and 5-min backpressure cases and improve Logstash termination logic.

* Apply suggestions from code review

Uncomment accidentally commented sources.

* Update .buildkite/scripts/health-report-tests/tests/slow-start.yaml

No need to wait for LS startup when using slow start scenario.

* Apply suggestions from code review

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

* Standardize YAML structure and rename wait time to wait_seconds

---------

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-10-15 08:32:17 -07:00
kaisecheng
b571e8f3e3
remove deprecated modules netflow, fb_apache and azure (#16514)
This commit removes files related to netflow, fb_apache and azure modules
Fix: #16357
2024-10-15 14:03:53 +01:00
Karen Metts
fc119df24a
Doc: Update 8.15.0 release notes to future proof link (#16554) 2024-10-14 11:10:15 -04:00
Ry Biesemeyer
937a9ea49f
docs: add health details for flow/worker_utilization (#16544)
* docs: add health details for flow/worker_utilization

* plus-append to indent flow details under flow
2024-10-11 15:10:11 -07:00
kaisecheng
8cd0fa8767
refactor log for event_api.tags.illegal (#16545)
- add `obsoleted_version` and remove `deprecated_msg` from `deprecated_option` for consistent warning message
2024-10-11 21:22:29 +01:00
Andrea Selva
6064587bc4
Keeps global settings aligned across entities used in the test for StatsEventFactory
Fixes a potentially flaky test due to the shared (LogStash::SETTINGS) fixture across the test base.


Forward-ports the commit 609155a61b used to fix the non-clean backport PR #16531 of #16525 to 8.x.

LogStash::SETTINGS is used in the constructor of LogStash::Inputs::Metrics::StatsEventFactory to query the value of api.enabled. This PR keeps the value for the setting provided to the Agent constructor and to the StatsEventFactory updated.
2024-10-11 15:26:53 +02:00
Ry Biesemeyer
a931b2cde6
Flow worker utilization probe (#16532)
* flow: refactor pipeline refs to keep worker flows separate

* health: add worker_utilization probe

pipeline is:
  - RED "completely blocked" when last_5_minutes >= 99.999
  - YELLOW "nearly blocked" when last_5_minutes > 95
    - and includes "recovering" info when last_1_minute < 80
  - YELLOW "completely blocked" when last_1_minute >= 99.999
  - YELLOW "nearly blocked" when last_1_minute > 95

* tests: improve coverage of PipelineIndicator probes

* Apply suggestions from code review
2024-10-10 17:56:22 -07:00
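Read as a decision ladder, the probe's thresholds map onto status like this (a sketch of the rules listed above; the healthy fall-through is an assumption, not stated in the commit):

```
public final class WorkerUtilizationStatus {
    // Sketch of the worker_utilization thresholds from the commit message.
    static String status(double last1m, double last5m) {
        if (last5m >= 99.999) return "RED: completely blocked";
        if (last5m > 95) {
            return last1m < 80 ? "YELLOW: nearly blocked (recovering)"
                               : "YELLOW: nearly blocked";
        }
        if (last1m >= 99.999) return "YELLOW: completely blocked";
        if (last1m > 95)      return "YELLOW: nearly blocked";
        return "healthy"; // fall-through assumption, not spelled out in the commit
    }
}
```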
Ry Biesemeyer
065769636b
health: add logstash.forceApiStatus: green escape hatch (#16535) 2024-10-10 16:35:06 -07:00
Mashhur
4037adfc4a
Health api minor followups (#16533)
* Utilize default agent for Health API CI. Call python scripts directly from the CI step.

* Change BK agent to support both Java and python. Install pip manually and send env vars to subprocess.
2024-10-10 14:57:41 -07:00
github-actions[bot]
7f7af057f0
Feature: health report api (#16520) (#16523)
* [health] bootstrap HealthObserver from agent to API (#16141)

* [health] bootstrap HealthObserver from agent to API

* specs: mocked agent needs health observer

* add license headers

* Merge `main` into `feature/health-report-api` (#16397)

* Add GH vault plugin bot to allowed list (#16301)

* regenerate webserver test certificates (#16331)

* correctly handle stack overflow errors during pipeline compilation (#16323)

This commit improves error handling when pipelines that are too big hit the Xss limit and throw a StackOverflowError. Currently the exception is printed outside of the logger, and doesn’t even show if log.format is json, leaving the user to wonder what happened.

A couple of thoughts on the way this is implemented:

* There should be a first barrier to handle pipelines that are too large based on the PipelineIR compilation. The barrier would use the detection of Xss to determine how big a pipeline could be. This however doesn't reduce the need to still handle a StackOverflow if it happens.
* The catching of StackOverflowError could also be done on the WorkerLoop. However I'd suggest that this is unrelated to the Worker initialization itself, it just so happens that compiledPipeline.buildExecution is computed inside the WorkerLoop class for performance reasons. So I'd prefer logging to not come from the existing catch, but from a dedicated catch clause.

Solves #16320

* Doc: Reposition worker-utilization in doc (#16335)

* settings: add support for observing settings after post-process hooks (#16339)

Because logging configuration occurs after loading the `logstash.yml`
settings, deprecation logs from `LogStash::Settings::DeprecatedAlias#set` are
effectively emitted to a null logger and lost.

By re-emitting after the post-process hooks, we can ensure that they make
their way to the deprecation log. This change adds support for any setting
that responds to `Object#observe_post_process` to receive it after all
post-processing hooks have been executed.

Resolves: elastic/logstash#16332

* fix line used to determine ES is up (#16349)

* add retries to snyk buildkite job (#16343)

* Fix 8.13.1 release notes (#16363)

make a note of the fix that went to 8.13.1: #16026

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Update logstash_releases.json (#16347)

* [Bugfix] Resolve the array and char (single | double quote) escaped values of ${ENV} (#16365)

* Properly resolve the values from ENV vars if a literal array string is provided via an ENV var.

* Docker acceptance test for persisting keys and using actual values in the docker container.

* Review suggestion.

Simplify the code by stripping whitespace before `gsub`, no need to check comma and split.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Doc: Add SNMP integration to breaking changes (#16374)

* deprecate java less-than 17 (#16370)

* Exclude substitution refinement on pipelines.yml (#16375)

* Exclude substitution refinement on pipelines.yml (applies to ENV vars and logstash.yml, where env2yaml saves vars)

* Safety integration test for pipeline config.string containing ENV vars.

* Doc: Forwardport 8.15.0 release notes to main (#16388)

* Removing 8.14 from ci/branches.json as we have 8.15. (#16390)

---------

Co-authored-by: ev1yehor <146825775+ev1yehor@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Squashed merge from 8.x

* Failure injector plugin implementation. (#16466)

* Test-purpose-only failure injector integration plugins (filter and output) implementation. Add unit tests and include license notes.

* Fix the `degrate` method name typo.

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* Add explanation to the config params and rebuild plugin gem.

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* Health report integration tests bootstrapper and initial tests implementation (#16467)

* Health Report integration tests bootstrapper and initial slow start scenario implementation.

* Apply suggestions from code review

Renaming expectation check method name.

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

* Changed to branch concept, YAML structure simplified as changed to Dict.

* Apply suggestions from code review

Reflect `help_url` to the integration test.

---------

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

* health api: expose `GET /_health_report` with pipelines/*/status probe (#16398)

Adds a `GET /_health_report` endpoint with per-pipeline status probes, and wires the
resulting report status into the other API responses, replacing their hard-coded `green`
with a meaningful status indication.

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* docs: health report API, and diagnosis links (feature-targeted) (#16518)

* docs: health report API, and diagnosis links

* Remove plus-for-passthrough markers

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* merge 8.x into feature branch... (#16519)

* Add GH vault plugin bot to allowed list (#16301)

* regenerate webserver test certificates (#16331)

* correctly handle stack overflow errors during pipeline compilation (#16323)

This commit improves error handling when pipelines that are too big hit the Xss limit and throw a StackOverflowError. Currently the exception is printed outside of the logger, and doesn’t even show if log.format is json, leaving the user to wonder what happened.

A couple of thoughts on the way this is implemented:

* There should be a first barrier to handle pipelines that are too large based on the PipelineIR compilation. The barrier would use the detection of Xss to determine how big a pipeline could be. This however doesn't reduce the need to still handle a StackOverflow if it happens.
* The catching of StackOverflowError could also be done on the WorkerLoop. However I'd suggest that this is unrelated to the Worker initialization itself, it just so happens that compiledPipeline.buildExecution is computed inside the WorkerLoop class for performance reasons. So I'd prefer logging to not come from the existing catch, but from a dedicated catch clause.

Solves #16320

* Doc: Reposition worker-utilization in doc (#16335)

* settings: add support for observing settings after post-process hooks (#16339)

Because logging configuration occurs after loading the `logstash.yml`
settings, deprecation logs from `LogStash::Settings::DeprecatedAlias#set` are
effectively emitted to a null logger and lost.

By re-emitting after the post-process hooks, we can ensure that they make
their way to the deprecation log. This change adds support for any setting
that responds to `Object#observe_post_process` to receive it after all
post-processing hooks have been executed.

Resolves: elastic/logstash#16332

* fix line used to determine ES is up (#16349)

* add retries to snyk buildkite job (#16343)

* Fix 8.13.1 release notes (#16363)

make a note of the fix that went to 8.13.1: #16026

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Update logstash_releases.json (#16347)

* [Bugfix] Resolve the array and char (single | double quote) escaped values of ${ENV} (#16365)

* Properly resolve the values from ENV vars if a literal array string is provided via an ENV var.

* Docker acceptance test for persisting keys and using actual values in the docker container.

* Review suggestion.

Simplify the code by stripping whitespace before `gsub`, no need to check comma and split.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Doc: Add SNMP integration to breaking changes (#16374)

* deprecate java less-than 17 (#16370)

* Exclude substitution refinement on pipelines.yml (#16375)

* Exclude substitution refinement on pipelines.yml (applies to ENV vars and logstash.yml, where env2yaml saves vars)

* Safety integration test for pipeline config.string containing ENV vars.

* Doc: Forwardport 8.15.0 release notes to main (#16388)

* Removing 8.14 from ci/branches.json as we have 8.15. (#16390)

* Increase Jruby -Xmx to avoid OOM during zip task in DRA (#16408)

Fix: #16406

* Generate Dataset code with meaningful fields names (#16386)

This PR is intended to help Logstash developers or users who want to better understand the code that's autogenerated to model a pipeline, assigning more meaningful names to the Dataset subclasses' fields.

Updates `FieldDefinition` to receive the name of the field from construction methods, so that it can be used during the code generation phase instead of the existing incremental `field%n`.
Updates `ClassFields` to propagate the explicit field name down to the `FieldDefinitions`.
Updates the `DatasetCompiler` code that adds fields to `ClassFields` to assign proper names to the generated Dataset's fields.

* Implements safe evaluation of conditional expressions, logging the error without killing the pipeline (#16322)

This PR protects if statements against expression evaluation errors, cancels the event under processing, and logs it.
This avoids crashing a pipeline that encounters a runtime error during event condition evaluation, and makes it possible to debug the root cause by reporting the offending event and removing it from the current processing batch.

Translates the `org.jruby.exceptions.TypeError`, `IllegalArgumentException`, and `org.jruby.exceptions.ArgumentError` that can happen during `EventCondition` evaluation into a custom `ConditionalEvaluationError`, which bubbles up the AST tree nodes and is caught in the `SplitDataset` node.
Updates the generation of the `SplitDataset` so that the execution of the `filterEvents` method inside the compute body is try-catch guarded and defers handling to an instance of `AbstractPipelineExt.ConditionalEvaluationListener`. In this particular case the error management consists of just logging the offending Event.

---------

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Update logstash_releases.json (#16426)

* Release notes for 8.15.1 (#16405) (#16427)

* Update release notes for 8.15.1

* update release note

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Kaise Cheng <kaise.cheng@elastic.co>
(cherry picked from commit 2fca7e39e8)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Fix ConditionalEvaluationError so that it does not include the event that errored in its serialized form, because this class is never expected to be serialized. (#16429) (#16430)

Make the inner field of ConditionalEvaluationError transient so it is skipped during serialization.

(cherry picked from commit bb7ecc203f)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* use gnu tar compatible minitar to generate tar artifact (#16432) (#16434)

Using VERSION_QUALIFIER when building the tarball distribution will fail since Ruby's TarWriter implements the older POSIX88 version of tar and paths will be longer than 100 characters.

For the long paths being used in Logstash's plugins, mainly due to nested folders from jar-dependencies, we need the tarball to follow either the 2001 ustar format or gnu tar, which is implemented by the minitar gem.

(cherry picked from commit 69f0fa54ca)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* account for the 8.x in DRA publishing task (#16436) (#16440)

the current DRA publishing task computes the branch from the version
contained in the version.yml

This is done by taking the major.minor and confirming that a branch
exists with that name.

However this pattern won't be applicable for 8.x, as that branch
currently points to 8.16.0 and there is no 8.16 branch.

This commit falls back to reading the buildkite injected
BUILDKITE_BRANCH variable.

(cherry picked from commit 17dba9f829)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Fixes the issue where LS wipes out all quotes from docker env variables. (#16456) (#16459)

* Fixes the issue where LS wipes out all quotes from docker env variables. This is an issue when running LS on docker with CONFIG_STRING, which needs quotes in env variables to be preserved.

* Add a docker acceptance integration test.

(cherry picked from commit 7c64c7394b)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Known issue for 8.15.1 related to env vars references (#16455) (#16469)

(cherry picked from commit b54caf3fd8)

Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>

* bump .ruby_version to jruby-9.4.8.0 (#16477) (#16480)

(cherry picked from commit 51cca7320e)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Release notes for 8.15.2 (#16471) (#16478)

Co-authored-by: andsel <selva.andre@gmail.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 01dc76f3b5)

* Change LogStash::Util::SubstitutionVariables#replace_placeholders refine argument to optional (#16485) (#16488)

(cherry picked from commit 8368c00367)

Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>

* Use jruby-9.4.8.0 in exhaustive CIs. (#16489) (#16491)

(cherry picked from commit fd1de39005)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Don't use an older JRuby with oraclelinux-7 (#16499) (#16501)

A recent PR (elastic/ci-agent-images/pull/932) modernized the VM images
and removed JRuby 9.4.5.0 and some older versions.

This ended up breaking exhaustive test on Oracle Linux 7 that hard coded
JRuby 9.4.5.0.

PR https://github.com/elastic/logstash/pull/16489 worked around the
problem by pinning to the new JRuby, but actually we don't
need the conditional anymore since the original issue
https://github.com/jruby/jruby/issues/7579#issuecomment-1425885324 has
been resolved and none of our releasable branches (apart from 7.17 which
uses `9.2.20.1`) specify `9.3.x.y` in `/.ruby-version`.

Therefore, this commit removes conditional setting of JRuby for
OracleLinux 7 agents in exhaustive tests (and relies on whatever
`/.ruby-version` defines).

(cherry picked from commit 07c01f8231)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>

* Improve pipeline bootstrap error logs (#16495) (#16504)

This PR adds the cause error details to the pipeline converge state error logs

(cherry picked from commit e84fb458ce)

Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>

* Logstash Health Report Tests Buildkite pipeline setup. (#16416) (#16511)

(cherry picked from commit 5195332bc6)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Make health report test runner script executable. (#16446) (#16512)

(cherry picked from commit 2ebf2658ff)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Backport PR #16423 to 8.x: DLQ-ing events that trigger an conditional evaluation error. (#16493)

* DLQ-ing events that trigger a conditional evaluation error. (#16423)

When a conditional evaluation encounters an error in the expression, the event that triggered the issue is sent to the pipeline's DLQ, if enabled for the executing pipeline.

This PR builds on the work done in #16322: the `ConditionalEvaluationListener`, which receives notifications about if-statement evaluation failures, is improved to also send the event to the DLQ (if enabled in the pipeline) rather than just logging it.

(cherry picked from commit b69d993d71)

* Fixed warning about a non-serializable field DeadLetterQueueWriter in serializable AbstractPipelineExt

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* add deprecation log for `--event_api.tags.illegal` (#16507) (#16515)

- move `--event_api.tags.illegal` from option to deprecated_option
- add deprecation log when the flag is explicitly used
relates: #16356

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
(cherry picked from commit a4eddb8a2a)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

---------

Co-authored-by: ev1yehor <146825775+ev1yehor@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>
Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>

---------

Co-authored-by: ev1yehor <146825775+ev1yehor@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>
Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
(cherry picked from commit 7eb5185b4e)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-10-10 11:17:14 -07:00
Andrea Selva
648472106f
[test] Fix xpack test to check for http_address stats only if the webserver is enabled (#16525)
Set the 'api.enabled' setting to reflect the flag webserver_enabled and consequently test for http_address presence in settings iff the web server is enabled.
2024-10-10 18:57:34 +02:00
github-actions[bot]
dc24f02972
Fix QA failure introduced by Health API changes and update rspec dependency of the QA package. (#16521) (#16522)
* Update rspec dependency of the QA package.

* Update qa/Gemfile

Align on rspec 3.13.x

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>

* Fix the QA test failure caused by reflecting the Health Report status in the Node stats.

---------

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
(cherry picked from commit 1e5105fcd8)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-10-09 15:28:02 -07:00
kaisecheng
a4eddb8a2a
add deprecation log for --event_api.tags.illegal (#16507)
- move `--event_api.tags.illegal` from option to deprecated_option
- add deprecation log when the flag is explicitly used
relates: #16356

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-10-08 13:49:59 +01:00
João Duarte
3480c32b6e
install jdk 11 in agent for snyk 7.17 scanning (#16510) 2024-10-07 13:30:25 +01:00
Andrea Selva
5d4825f000
Avoid to access Java DeprecatedAlias value other than Ruby's one (#16506)
Update Settings to_hash method to also skip Java DeprecatedAlias and not just the Ruby one.
With PR #15679, org.logstash.settings.DeprecatedAlias was introduced; it mirrors the behaviour of the Ruby class Setting::DeprecatedAlias. The equality check in Logstash::Settings, as described in #16505 (comment), is implemented by comparing the maps.
The conversion of Settings to the corresponding maps filtered out the Ruby implementation of DeprecatedAlias but not the Java one.
This PR adds the Java one to the filter list as well.
2024-10-04 16:46:00 +02:00
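A runnable pure-Ruby sketch of that filtering; the class names below are stand-ins for the real Ruby and Java implementations:

```ruby
Setting = Struct.new(:name, :value)
class RubyDeprecatedAlias < Setting; end # stand-in for Setting::DeprecatedAlias
class JavaDeprecatedAlias < Setting; end # stand-in for org.logstash.settings.DeprecatedAlias

def settings_to_hash(settings)
  settings.reject { |s| s.class.name.end_with?("DeprecatedAlias") }
          .to_h { |s| [s.name, s.value] }
end

all = [
  Setting.new("api.enabled", true),
  RubyDeprecatedAlias.new("http.host", "0.0.0.0"),
  JavaDeprecatedAlias.new("http.port", 9600)
]
p settings_to_hash(all) # => {"api.enabled"=>true}
```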
João Duarte
5aabeda5fd
fix snapshot branch detection for snyk (#16484)
* handle two 8. branches
2024-10-04 09:13:12 +01:00
Edmo Vamerlatti Costa
e84fb458ce
Improve pipeline bootstrap error logs (#16495)
This PR adds the cause errors details on the pipeline converge state error logs
2024-10-03 11:08:42 +02:00
Dimitrios Liappis
60670087cb
[ci] Skip slack for retries JDK matrix jobs (#16316)
With this commit we shush slack alerts for JDK matrix CI jobs that
succeed after (automatic) retries.
2024-10-03 10:29:58 +03:00
Dimitrios Liappis
07c01f8231
Don't use an older JRuby with oraclelinux-7 (#16499)
A recent PR (elastic/ci-agent-images/pull/932) modernized the VM images
and removed JRuby 9.4.5.0 and some older versions.

This ended up breaking exhaustive test on Oracle Linux 7 that hard coded
JRuby 9.4.5.0.

PR https://github.com/elastic/logstash/pull/16489 worked around the
problem by pinning to the new JRuby, but actually we don't
need the conditional anymore since the original issue
https://github.com/jruby/jruby/issues/7579#issuecomment-1425885324 has
been resolved and none of our releasable branches (apart from 7.17 which
uses `9.2.20.1`) specify `9.3.x.y` in `/.ruby-version`.

Therefore, this commit removes conditional setting of JRuby for
OracleLinux 7 agents in exhaustive tests (and relies on whatever
`/.ruby-version` defines).
2024-10-02 19:07:16 +03:00
Andrea Selva
4e49adc6f3
Fix jdk21 warnings (#16496)
Suppress some warnings when compiling with JDK 21:

- this-escape: `this` is used before the instance is completely initialised.
- avoid a non-serialisable DeadLetterQueueWriter field on a serialisable instance.
2024-10-02 15:28:37 +02:00
Andrea Selva
b69d993d71
DLQ-ing events that trigger a conditional evaluation error. (#16423)
When a conditional evaluation encounters an error in the expression, the event that triggered the issue is sent to the pipeline's DLQ, if enabled for the executing pipeline.

This PR builds on the work done in #16322: the `ConditionalEvaluationListener`, which receives notifications about if-statement evaluation failures, is improved to also send the event to the DLQ (if enabled in the pipeline) rather than just logging it.
2024-10-02 12:23:54 +02:00
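The shipped listener is Java; as a hedged illustration, the routing decision looks roughly like this Ruby sketch (all names assumed):

```ruby
class ConditionalEvaluationListener
  def initialize(dlq_writer = nil)
    @dlq_writer = dlq_writer
  end

  # Called when an if-statement's condition raises during evaluation.
  def notify(event, error)
    reason = "condition evaluation error: #{error.message}"
    if @dlq_writer
      @dlq_writer.call(event, reason) # route the offending event to the DLQ
    else
      warn "#{reason}; event=#{event.inspect}" # no DLQ configured: log only
    end
  end
end

to_dlq = ->(event, why) { puts "DLQ <- #{event} (#{why})" }
ConditionalEvaluationListener.new(to_dlq).notify({ "message" => "boom" }, RuntimeError.new("bad comparison"))
```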
Mashhur
fd1de39005
Use jruby-9.4.8.0 in exhaustive CIs. (#16489) 2024-10-02 09:30:22 +01:00
Andrea Selva
61de60fe26
[Spacetime] Reimplement config Setting classe in java (#15679)
Reimplement the root Ruby Setting class in Java and use it from the Ruby one, moving the original Ruby class to a shell that wraps the Java instance.
In particular, create a new hierarchy (for now just the `Setting`, `Coercible`, and `Boolean` classes) symmetric to the Ruby one, also moving over the setting-deprecation feature. In this way the new `org.logstash.settings.Boolean` is syntactically and semantically equivalent to the old Ruby Boolean class, which it replaces.
2024-10-02 09:09:47 +02:00
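The wrapping pattern can be sketched as follows. This assumes a JRuby session with logstash-core on the classpath, and the delegate method names are guesses rather than the shipped API:

```ruby
# Assumed sketch, not the shipped class: the Ruby setting keeps its old
# public surface but delegates storage and validation to its Java twin.
class BooleanSetting
  def initialize(name, default)
    @java_setting = org.logstash.settings.Boolean.new(name, default) # JRuby Java interop
  end

  def value
    @java_setting.value
  end

  def set(new_value)
    @java_setting.set(new_value)
  end
end
```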
Edmo Vamerlatti Costa
8368c00367
Change LogStash::Util::SubstitutionVariables#replace_placeholders refine argument to optional (#16485) 2024-10-01 11:33:35 -07:00
github-actions[bot]
2fe91226eb
Release notes for 8.15.2 (#16471) (#16486)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: andsel <selva.andre@gmail.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 01dc76f3b5)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-10-01 13:28:26 -04:00
Andrea Selva
f35e10d792
Updated CI releases inventory after 8.15.2 (#16474) 2024-09-26 18:25:32 +02:00
João Duarte
5c57adebb9
simplify snyk scanning (#16475)
* remove docker image scanning as that's handled by infosec
* run buildkite job on a docker image instead of vm (no need to test docker any more)
2024-09-25 14:38:52 +01:00
João Duarte
51cca7320e
bump .ruby_version to jruby-9.4.8.0 (#16477) 2024-09-25 13:15:14 +01:00
kaisecheng
0ef4c7da32
[ci] fix wrong queue type in benchmark marathon (#16465) 2024-09-20 19:58:20 +01:00
Luca Belluccini
b54caf3fd8
Known issue for 8.15.1 related to env vars references (#16455) 2024-09-19 13:32:14 -04:00
kaisecheng
3e98cb1625
[CI] fix benchmark marathon (#16447)
- split main.sh into core.sh and main.sh
- rename all.sh to marathon.sh
- fix vault expiry issue for long-running tasks
- fix being unable to call the main function
- update saved object runtime field `release` to return true when `version` contains "SNAPSHOT"
2024-09-17 22:29:02 +01:00
Mashhur
7c64c7394b
Fixes the issue where LS wipes out all quotes from docker env variables. (#16456)
* Fixes the issue where LS wipes out all quotes from docker env variables. This is an issue when running LS on docker with CONFIG_STRING, which needs quotes in env variables to be preserved.

* Add a docker acceptance integration test.
2024-09-17 06:46:19 -07:00
kaisecheng
4e82655cd5
remove ingest-converter (#16453)
Removes the Ingest Converter tool.
2024-09-16 15:57:51 +01:00
Andrea Selva
1ec37b7c41
Drop JDK 11 support (#16443)
If a user runs Logstash with a host-provided JDK rather than the one bundled with the Logstash distribution (for example, by setting a specific LS_JAVA_HOME) and that JDK is older than 17, Logstash refuses to start. The user has to provide at least JDK 17, or unset LS_JAVA_HOME and let Logstash use the bundled JDK.

Updates the jvm.options and JvmOptionsParser to remove support for JDK 11. If the options parser identifies that the running JVM is less than 17, it refuses to start.

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-09-13 17:33:16 +02:00
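The gate itself reduces to a version comparison. Here is a runnable Ruby stand-in for the logic (the real check lives in the Java JvmOptionsParser):

```ruby
MINIMUM_JAVA_MAJOR = 17

def assert_supported_jvm!(java_spec_version)
  parts = java_spec_version.split(".")
  major = parts.first.to_i
  major = parts[1].to_i if major == 1 # handle legacy "1.8"-style versions
  return if major >= MINIMUM_JAVA_MAJOR
  abort "Logstash requires Java #{MINIMUM_JAVA_MAJOR} or newer, but found #{java_spec_version}"
end

assert_supported_jvm!("21")   # passes silently
# assert_supported_jvm!("11") # would abort with an explanatory message
```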
Mashhur
2ebf2658ff
Make health report test runner script executable. (#16446) 2024-09-12 13:24:53 -07:00
kaisecheng
5452cccf76
[CI] benchmark dashboard and pipeline for testing against multiple versions (#16421)
- add benchmark dashboard and related saved objects
- add one buildkite pipeline to test against multiple versions
- remove null fields in json
- add `FLOG_FILE_CNT`, `VAULT_PATH`, `TAGS`
2024-09-12 19:45:23 +01:00
Mashhur
5195332bc6
Logstash Health Report Tests Buildkite pipeline setup. (#16416) 2024-09-10 11:14:14 -07:00
kaisecheng
701108f88b
update ci release 7.17.24 (#16439) 2024-09-10 12:20:50 +01:00
João Duarte
17dba9f829
account for the 8.x in DRA publishing task (#16436)
the current DRA publishing task computes the branch from the version
contained in the version.yml

This is done by taking the major.minor and confirming that a branch
exists with that name.

However this pattern won't be applicable for 8.x, as that branch
currently points to 8.16.0 and there is no 8.16 branch.

This commit falls back to reading the buildkite injected
BUILDKITE_BRANCH variable.
2024-09-10 10:55:34 +01:00
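A hedged sketch of that fallback logic; the function and its inputs are illustrative:

```ruby
# Derive the branch from version.yml's major.minor, but prefer the
# Buildkite-injected branch when no matching release branch exists.
def release_branch(version, existing_branches, buildkite_branch)
  candidate = version.split(".")[0, 2].join(".") # "8.16.0" -> "8.16"
  existing_branches.include?(candidate) ? candidate : buildkite_branch
end

p release_branch("8.15.3", ["8.15", "8.x"], "8.15") # => "8.15" (branch exists)
p release_branch("8.16.0", ["8.15", "8.x"], "8.x")  # => "8.x" (fallback)
```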
João Duarte
f60e987173
bump to 9.0.0 and adapt CI accordingly (#16428) 2024-09-09 13:46:00 +01:00
João Duarte
69f0fa54ca
use gnu tar compatible minitar to generate tar artifact (#16432)
Using VERSION_QUALIFIER when building the tarball distribution will fail since Ruby's TarWriter implements the older POSIX88 version of tar and paths will be longer than 100 characters.

For the long paths being used in Logstash's plugins, mainly due to nested folders from jar-dependencies, we need the tarball to follow either the 2001 ustar format or gnu tar, which is implemented by the minitar gem.
2024-09-09 11:33:44 +01:00
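For reference, a minimal use of the minitar gem looks like this (paths are illustrative, and it assumes `gem install minitar`):

```ruby
require "zlib"
require "minitar"

# Minitar emits GNU-tar-compatible entries, so paths longer than 100
# characters (e.g. deeply nested jar-dependency folders) survive intact.
Minitar.pack("build/logstash", Zlib::GzipWriter.new(File.open("logstash-dist.tar.gz", "wb")))
```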
Andrea Selva
bb7ecc203f
Fix ConditionalEvaluationError so that it does not include the event that errored in its serialized form, because this class is never expected to be serialized. (#16429)
Make the inner field of ConditionalEvaluationError transient so it is skipped during serialization.
2024-09-06 12:09:58 +02:00
github-actions[bot]
58b6a0ac77
Release notes for 8.15.1 (#16405) (#16427)
* Update release notes for 8.15.1

* update release note

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Kaise Cheng <kaise.cheng@elastic.co>
(cherry picked from commit 2fca7e39e8)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-09-05 17:12:58 +01:00
kaisecheng
285d13a515
Update logstash_releases.json (#16426) 2024-09-05 17:10:52 +01:00
Andrea Selva
b88e23702c
Implements safe evaluation of conditional expressions, logging the error without killing the pipeline (#16322)
This PR protects if statements against expression evaluation errors, cancels the event under processing, and logs it.
This avoids crashing a pipeline that encounters a runtime error during event condition evaluation, and makes it possible to debug the root cause by reporting the offending event and removing it from the current processing batch.

Translates the `org.jruby.exceptions.TypeError`, `IllegalArgumentException`, and `org.jruby.exceptions.ArgumentError` that can happen during `EventCondition` evaluation into a custom `ConditionalEvaluationError`, which bubbles up the AST tree nodes and is caught in the `SplitDataset` node.
Updates the generation of the `SplitDataset` so that the execution of the `filterEvents` method inside the compute body is try-catch guarded and defers handling to an instance of `AbstractPipelineExt.ConditionalEvaluationListener`. In this particular case the error management consists of just logging the offending Event.


---------

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-09-05 10:57:10 +02:00
Andrea Selva
ac034a14ee
Generate Dataset code with meaningful fields names (#16386)
This PR is intended to help Logstash developers or users who want to better understand the code that's autogenerated to model a pipeline, assigning more meaningful names to the Dataset subclasses' fields.

Updates `FieldDefinition` to receive the name of the field from construction methods, so that it can be used during the code generation phase instead of the existing incremental `field%n`.
Updates `ClassFields` to propagate the explicit field name down to the `FieldDefinitions`.
Updates the `DatasetCompiler` code that adds fields to `ClassFields` to assign proper names to the generated Dataset's fields.
2024-09-04 11:10:29 +02:00
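The naming rule reduces to a one-line fallback, sketched here in Ruby (the real code generator is Java):

```ruby
def dataset_field_name(explicit_name, index)
  explicit_name || format("field%d", index) # fall back to the old field%n scheme
end

p dataset_field_name("filter_grok", 3) # => "filter_grok"
p dataset_field_name(nil, 3)           # => "field3"
```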
kaisecheng
6e93b30c7f
Increase Jruby -Xmx to avoid OOM during zip task in DRA (#16408)
Fix: #16406
2024-08-28 11:10:21 +01:00
Mashhur
b2796afc92
Removing 8.14 from ci/branches.json as we have 8.15. (#16390) 2024-08-19 12:49:34 -07:00
Karen Metts
d4519711a6
Doc: Forwardport 8.15.0 release notes to main (#16388) 2024-08-14 09:00:37 -04:00
Mashhur
e104704830
Exclude substitution refinement on pipelines.yml (#16375)
* Exclude substitution refinement on pipelines.yml (applies to ENV vars and logstash.yml, where env2yaml saves vars)

* Safety integration test for pipeline config.string containing ENV vars.
2024-08-09 09:33:01 -07:00
Ry Biesemeyer
3d13ebe33e
deprecate java less-than 17 (#16370) 2024-08-09 08:58:11 +01:00
Karen Metts
2db2a224ed
Doc: Add SNMP integration to breaking changes (#16374) 2024-08-08 11:06:48 -04:00
Mashhur
62ef8a0847
[Bugfix] Resolve the array and char (single | double quote) escaped values of ${ENV} (#16365)
* Properly resolve the values from ENV vars if a literal array string is provided via an ENV var.

* Docker acceptance test for persisting keys and using actual values in the docker container.

* Review suggestion.

Simplify the code by stripping whitespace before `gsub`, no need to check comma and split.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-08-06 11:09:26 -07:00
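A runnable Ruby approximation of the resulting behaviour; the regex and helper are illustrative, not `SubstitutionVariables` itself:

```ruby
# Strip whitespace before gsub so a literal array string coming from an
# ENV var (e.g. TAGS='["a", "b"]') survives placeholder resolution intact.
def resolve_placeholders(value, env)
  value.strip.gsub(/\$\{(\w+)\}/) { env.fetch(Regexp.last_match(1), "") }
end

env = { "TAGS" => '["alpha", "beta"]' }
p resolve_placeholders(" ${TAGS} ", env) # => "[\"alpha\", \"beta\"]"
```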
Andrea Selva
09a2827802
Update logstash_releases.json (#16347) 2024-07-30 16:17:10 +01:00
João Duarte
03841cace3
Fix 8.13.1 release notes (#16363)
make a note of the fix that went to 8.13.1: #16026

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-07-30 09:19:13 +01:00
João Duarte
629d8fe5a8
add retries to snyk buildkite job (#16343) 2024-07-29 12:00:43 +01:00
João Duarte
90f303e401
fix line used to determine ES is up (#16349) 2024-07-24 16:48:42 +02:00
Ry Biesemeyer
c633ad2568
settings: add support for observing settings after post-process hooks (#16339)
Because logging configuration occurs after loading the `logstash.yml`
settings, deprecation logs from `LogStash::Settings::DeprecatedAlias#set` are
effectively emitted to a null logger and lost.

By re-emitting after the post-process hooks, we can ensure that they make
their way to the deprecation log. This change adds support for any setting
that responds to `Object#observe_post_process` to receive it after all
post-processing hooks have been executed.

Resolves: elastic/logstash#16332
2024-07-24 10:22:34 +01:00
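A pure-Ruby sketch of the re-emission mechanism; the class and hook wiring are stand-ins for the real settings framework:

```ruby
class DeprecatedAliasSetting
  def initialize(name)
    @name = name
    @was_set = false
  end

  def set(value)
    @value = value
    @was_set = true # too early: the deprecation logger may not exist yet
  end

  def observe_post_process
    warn "DEPRECATED: #{@name} was used" if @was_set # logger is live by now
  end
end

settings = [DeprecatedAliasSetting.new("http.host")]
settings.first.set("0.0.0.0") # happens while logging is still unconfigured
# ... post-process hooks run, logging gets configured ...
settings.each { |s| s.observe_post_process if s.respond_to?(:observe_post_process) }
```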
Karen Metts
eff9b540df
Doc: Reposition worker-utilization in doc (#16335) 2024-07-19 12:34:42 -04:00
João Duarte
8f2dae618c
correctly handle stack overflow errors during pipeline compilation (#16323)
This commit improves error handling when pipelines that are too big hit the Xss limit and throw a StackOverflowError. Currently the exception is printed outside of the logger, and doesn’t even show if log.format is json, leaving the user to wonder what happened.

A couple of thoughts on the way this is implemented:

* There should be a first barrier to handle pipelines that are too large based on the PipelineIR compilation. The barrier would use the detection of Xss to determine how big a pipeline could be. This however doesn't reduce the need to still handle a StackOverflow if it happens.
* The catching of StackOverflowError could also be done on the WorkerLoop. However I'd suggest that this is unrelated to the Worker initialization itself, it just so happens that compiledPipeline.buildExecution is computed inside the WorkerLoop class for performance reasons. So I'd prefer logging to not come from the existing catch, but from a dedicated catch clause.

Solves #16320
2024-07-18 10:08:38 +01:00
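A pure-Ruby stand-in for the failure mode and the fix (MRI raises SystemStackError where the JVM raises StackOverflowError):

```ruby
def compile(depth)
  compile(depth + 1) # stand-in for deeply recursive pipeline compilation
end

begin
  compile(0)
rescue SystemStackError
  # Catch the overflow at the compilation boundary and emit a proper log
  # line instead of dumping a raw stack trace outside the logger.
  warn '{"level":"ERROR","message":"pipeline too big to compile; increase -Xss or split it"}'
end
```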
João Duarte
c30aa1c7f5
regenerate webserver test certificates (#16331) 2024-07-17 10:43:57 +01:00
ev1yehor
e065088cd8
Add GH vault plugin bot to allowed list (#16301) 2024-07-16 14:38:56 +03:00
github-actions[bot]
758098cdcd
Release notes for 8.14.3 (#16312) (#16318)
* Update release notes for 8.14.3

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
(cherry picked from commit a60c7cb95e)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-07-11 18:12:35 +02:00
Andrea Selva
01b08c7640
Update logstash_releases.json after 8.14.3 (#16306) 2024-07-11 18:10:59 +02:00
Karen Metts
9c6550a0df
Doc: Update headers for plugins (LSR) (#16277) 2024-07-10 11:22:46 -04:00
Dimitrios Liappis
f728c44a0a
Remove Debian 10 from CI (#16300)
This commit removes Debian 10 (Buster), which has been EOL
since July 1, 2024[^1], from CI.

Relates https://github.com/elastic/ingest-dev/issues/2872
2024-07-10 15:17:10 +03:00
Ry Biesemeyer
66aeeeef83
Json normalization performance (#16313)
* licenses: allow elv2, standard abbreviation for Elastic License version 2

* json-dump: reduce unicode normalization cost

Since the underlying JrJackson now properly (and efficiently) encodes the
UTF-8 transcode of whichever strings it is given, we no longer need to
pre-normalize to UTF-8 in ruby _except_ when the string is flagged as BINARY
because we have alternate behaviour to preserve valid UTF-8 sequences.

By emitting a _copy_ of binary-flagged strings that have been re-flagged as
UTF-8, we allow the downstream (efficient) encoding operation in jrjackson
to produce equivalent behaviour at much lower cost.

* cleanup: remove orphan unicode normalizer
2024-07-09 14:12:21 -07:00
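A runnable Ruby sketch of that binary-flagged special case; the method name is invented for illustration:

```ruby
def jsonable_string(str)
  return str unless str.encoding == Encoding::BINARY
  str.dup.force_encoding(Encoding::UTF_8) # emit a copy: the caller's string is untouched
end

raw = "caf\xC3\xA9".dup.force_encoding(Encoding::BINARY)
utf = jsonable_string(raw)
p utf.encoding # => #<Encoding:UTF-8>
p raw.encoding # => #<Encoding:ASCII-8BIT>
```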
kaisecheng
2404bad9a9
[CI] fix benchmark to pull snapshot version (#16308)
- fixes the CI benchmark script to always run against the latest snapshot version
- uses `/v1/versions/$VERSION/builds/latest` to get the latest build id

Fixes: #16307

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-07-08 22:20:59 +01:00
Dimitrios Liappis
ea0c16870f
Add Ubuntu 24.04 to CI (#16299)
Now that we have custom VM images for Ubuntu 24.04, this commit adds
CI for Ubuntu 24.04.

This is a revert of #16279
2024-07-08 14:43:55 +03:00
Dimitrios Liappis
db06ec415a
Remove CentOS 7 from CI (#16293)
CentOS 7 has been EOL since June 30, 2024[^1]. All repositories and mirrors are
now unreachable.

This commit removes CentOS 7 from CI jobs using it.

Relates https://github.com/elastic/ingest-dev/issues/3520

[^1]: https://www.redhat.com/en/topics/linux/centos-linux-eol
2024-07-04 14:13:16 +03:00
Ry Biesemeyer
a63d8a831d
bump ci releases for 8.14.2 (#16287) 2024-07-04 10:08:22 +01:00
Ry Biesemeyer
b51b5392e1
Release notes for 8.14.2 (#16266) (#16286)
---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-07-04 09:11:46 +01:00
Ry Biesemeyer
e3271db946
add flow-informed tuning guidance (#16265)
* docs: sentence-case headings

* docs-style: one-line-per-sentence asciidoc convention

* docs: add flow-informed tuning guidance

* docs: clarify `pipeline.batch.delay`

* Apply suggestions from code review

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Update docs/static/performance-checklist.asciidoc

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-07-03 15:33:37 -07:00
João Duarte
9872159c71
bump version to 8.16.0 (#16281) 2024-07-03 13:08:59 +01:00
João Duarte
83506eabe7
add 8.15 and remove 8.13 from CI testing (#16282) 2024-07-03 13:08:50 +01:00
1953 changed files with 28410 additions and 59559 deletions


@@ -35,48 +35,71 @@ steps:
automatic:
- limit: 3
- label: ":lab_coat: Integration Tests / part 1"
key: "integration-tests-part-1"
- label: ":lab_coat: Integration Tests / part 1-of-3"
key: "integration-tests-part-1-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 0
ci/integration_tests.sh split 0 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: Integration Tests / part 2"
key: "integration-tests-part-2"
- label: ":lab_coat: Integration Tests / part 2-of-3"
key: "integration-tests-part-2-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 1
ci/integration_tests.sh split 1 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 1"
key: "integration-tests-qa-part-1"
- label: ":lab_coat: Integration Tests / part 3-of-3"
key: "integration-tests-part-3-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 2 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 1-of-3"
key: "integration-tests-qa-part-1-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 0
ci/integration_tests.sh split 0 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 2"
key: "integration-tests-qa-part-2"
- label: ":lab_coat: IT Persistent Queues / part 2-of-3"
key: "integration-tests-qa-part-2-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 1
ci/integration_tests.sh split 1 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 3-of-3"
key: "integration-tests-qa-part-3-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 2 3
retry:
automatic:
- limit: 3


@@ -0,0 +1,11 @@
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-16"
diskSizeGb: 100
diskType: pd-ssd
steps:
- label: "Benchmark Marathon"
command: .buildkite/scripts/benchmark/marathon.sh


@@ -4,8 +4,12 @@ steps:
- label: ":pipeline: Generate steps"
command: |
set -euo pipefail
echo "--- Building [${WORKFLOW_TYPE}] artifacts"
echo "--- Building [$${WORKFLOW_TYPE}] artifacts"
python3 -m pip install pyyaml
echo "--- Building dynamic pipeline steps"
python3 .buildkite/scripts/dra/generatesteps.py | buildkite-agent pipeline upload
python3 .buildkite/scripts/dra/generatesteps.py > steps.yml
echo "--- Printing dynamic pipeline steps"
cat steps.yml
echo "--- Uploading dynamic pipeline steps"
cat steps.yml | buildkite-agent pipeline upload


@@ -0,0 +1,20 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-4"
diskSizeGb: 64
steps:
- group: ":logstash: Health API integration tests"
key: "testing-phase"
steps:
- label: "main branch"
key: "integ-tests-on-main-branch"
command:
- .buildkite/scripts/health-report-tests/main.sh
retry:
automatic:
- limit: 3


@@ -0,0 +1,14 @@
steps:
- label: "JDK Availability check"
key: "jdk-availability-check"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "4"
memory: "6Gi"
ephemeralStorage: "100Gi"
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
export GRADLE_OPTS="-Xmx2g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info"
ci/check_jdk_version_availability.sh


@@ -5,7 +5,7 @@
env:
DEFAULT_MATRIX_OS: "ubuntu-2204"
DEFAULT_MATRIX_JDK: "adoptiumjdk_17"
DEFAULT_MATRIX_JDK: "adoptiumjdk_21"
steps:
- input: "Test Parameters"
@@ -18,6 +18,8 @@ steps:
multiple: true
default: "${DEFAULT_MATRIX_OS}"
options:
- label: "Ubuntu 24.04"
value: "ubuntu-2404"
- label: "Ubuntu 22.04"
value: "ubuntu-2204"
- label: "Ubuntu 20.04"
@@ -26,14 +28,10 @@ steps:
value: "debian-12"
- label: "Debian 11"
value: "debian-11"
- label: "Debian 10"
value: "debian-10"
- label: "RHEL 9"
value: "rhel-9"
- label: "RHEL 8"
value: "rhel-8"
- label: "CentOS 7"
value: "centos-7"
- label: "Oracle Linux 8"
value: "oraclelinux-8"
- label: "Oracle Linux 7"
@@ -62,20 +60,12 @@ steps:
value: "adoptiumjdk_21"
- label: "Adoptium JDK 17 (Eclipse Temurin)"
value: "adoptiumjdk_17"
- label: "Adoptium JDK 11 (Eclipse Temurin)"
value: "adoptiumjdk_11"
- label: "OpenJDK 21"
value: "openjdk_21"
- label: "OpenJDK 17"
value: "openjdk_17"
- label: "OpenJDK 11"
value: "openjdk_11"
- label: "Zulu 21"
value: "zulu_21"
- label: "Zulu 17"
value: "zulu_17"
- label: "Zulu 11"
value: "zulu_11"
- wait: ~
if: build.source != "schedule" && build.source != "trigger_job"


@@ -5,7 +5,7 @@
"pipeline_slug": "logstash-pull-request-pipeline",
"allow_org_users": true,
"allowed_repo_permissions": ["admin", "write"],
"allowed_list": ["dependabot[bot]", "mergify[bot]", "github-actions[bot]"],
"allowed_list": ["dependabot[bot]", "mergify[bot]", "github-actions[bot]", "elastic-vault-github-plugin-prod[bot]"],
"set_commit_status": true,
"build_on_commit": true,
"build_on_comment": true,
@@ -14,7 +14,11 @@
"skip_ci_labels": [ ],
"skip_target_branches": [ ],
"skip_ci_on_only_changed": [
"^docs/"
"^.github/",
"^docs/",
"^.mergify.yml$",
"^.pre-commit-config.yaml",
"\\.md$"
],
"always_require_ci_on_changed": [ ]
}


@@ -22,10 +22,12 @@ steps:
- label: ":rspec: Ruby unit tests"
key: "ruby-unit-tests"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "4"
memory: "8Gi"
ephemeralStorage: "100Gi"
# Run as a non-root user
imageUID: "1002"
retry:
automatic:
- limit: 3
@@ -79,8 +81,8 @@ steps:
manual:
allowed: true
- label: ":lab_coat: Integration Tests / part 1"
key: "integration-tests-part-1"
- label: ":lab_coat: Integration Tests / part 1-of-3"
key: "integration-tests-part-1-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@@ -95,10 +97,10 @@ steps:
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 0
ci/integration_tests.sh split 0 3
- label: ":lab_coat: Integration Tests / part 2"
key: "integration-tests-part-2"
- label: ":lab_coat: Integration Tests / part 2-of-3"
key: "integration-tests-part-2-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@@ -113,10 +115,28 @@ steps:
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 1
ci/integration_tests.sh split 1 3
- label: ":lab_coat: IT Persistent Queues / part 1"
key: "integration-tests-qa-part-1"
- label: ":lab_coat: Integration Tests / part 3-of-3"
key: "integration-tests-part-3-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as a non-root user
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 2 3
- label: ":lab_coat: IT Persistent Queues / part 1-of-3"
key: "integration-tests-qa-part-1-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@@ -132,10 +152,10 @@ steps:
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 0
ci/integration_tests.sh split 0 3
- label: ":lab_coat: IT Persistent Queues / part 2"
key: "integration-tests-qa-part-2"
- label: ":lab_coat: IT Persistent Queues / part 2-of-3"
key: "integration-tests-qa-part-2-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@@ -151,7 +171,26 @@ steps:
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 1
ci/integration_tests.sh split 1 3
- label: ":lab_coat: IT Persistent Queues / part 3-of-3"
key: "integration-tests-qa-part-3-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as non root (logstash) user. UID is hardcoded in image.
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 2 3
- label: ":lab_coat: x-pack unit tests"
key: "x-pack-unit-tests"


@@ -0,0 +1,22 @@
## Steps to set up GCP instance to run benchmark script
- Create an instance "n2-standard-16" with Ubuntu image
- Install docker
  - `sudo snap install docker`
  - `sudo usermod -a -G docker $USER`
- Install jq
- Install vault
  - `sudo snap install vault`
  - `vault login --method github`
  - `vault kv get -format json secret/ci/elastic-logstash/benchmark`
- Setup Elasticsearch index mapping and alias with `setup/*`
- Import Kibana dashboard with `save-objects/*`
- Run the benchmark script
  - Send data to your own Elasticsearch. Customise `VAULT_PATH="secret/ci/elastic-logstash/your/path"`
  - Run the script `main.sh`
  - or run in background `nohup bash -x main.sh > log.log 2>&1 &`
## Notes
- Benchmarks should only be compared using the same hardware setup.
- Please do not send the test metrics to the benchmark cluster. You can set `VAULT_PATH` to send data and metrics to your own server.
- Run `all.sh` as a calibration run, which gives you a baseline of performance across different versions.
- [#16586](https://github.com/elastic/logstash/pull/16586) allows legacy monitoring using the configuration `xpack.monitoring.allow_legacy_collection: true`, which is not recognized in version 8. To run benchmarks in version 8, use the script of the corresponding branch (e.g. `8.16`) instead of `main` in buildkite.


@@ -3,6 +3,7 @@ pipeline.workers: ${WORKER}
pipeline.batch.size: ${BATCH_SIZE}
queue.type: ${QTYPE}
xpack.monitoring.allow_legacy_collection: true
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: ${MONITOR_ES_USER}
xpack.monitoring.elasticsearch.password: ${MONITOR_ES_PW}


@@ -0,0 +1 @@
f74f1a28-25e9-494f-ba41-ca9f13d4446d


@@ -0,0 +1,315 @@
#!/usr/bin/env bash
set -eo pipefail
SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
CONFIG_PATH="$SCRIPT_PATH/config"
source "$SCRIPT_PATH/util.sh"
usage() {
echo "Usage: $0 [FB_CNT] [QTYPE] [CPU] [MEM]"
echo "Example: $0 4 {persisted|memory|all} 2 2"
exit 1
}
parse_args() {
while [[ "$#" -gt 0 ]]; do
if [ -z "$FB_CNT" ]; then
FB_CNT=$1
elif [ -z "$QTYPE" ]; then
case $1 in
all | persisted | memory)
QTYPE=$1
;;
*)
echo "Error: wrong queue type $1"
usage
;;
esac
elif [ -z "$CPU" ]; then
CPU=$1
elif [ -z "$MEM" ]; then
MEM=$1
else
echo "Error: Too many arguments"
usage
fi
shift
done
# set default value
# number of filebeat
FB_CNT=${FB_CNT:-4}
# all | persisted | memory
QTYPE=${QTYPE:-all}
CPU=${CPU:-4}
MEM=${MEM:-4}
XMX=$((MEM / 2))
IFS=','
# worker multiplier: 1,2,4
MULTIPLIERS="${MULTIPLIERS:-1,2,4}"
read -ra MULTIPLIERS <<< "$MULTIPLIERS"
BATCH_SIZES="${BATCH_SIZES:-500}"
read -ra BATCH_SIZES <<< "$BATCH_SIZES"
# tags to json array
read -ra TAG_ARRAY <<< "$TAGS"
JSON_TAGS=$(printf '"%s",' "${TAG_ARRAY[@]}" | sed 's/,$//')
JSON_TAGS="[$JSON_TAGS]"
IFS=' '
echo "filebeats: $FB_CNT, cpu: $CPU, mem: $MEM, Queue: $QTYPE, worker multiplier: ${MULTIPLIERS[@]}, batch size: ${BATCH_SIZES[@]}"
}
get_secret() {
VAULT_PATH=${VAULT_PATH:-secret/ci/elastic-logstash/benchmark}
VAULT_DATA=$(vault kv get -format json $VAULT_PATH)
BENCHMARK_ES_HOST=$(echo $VAULT_DATA | jq -r '.data.es_host')
BENCHMARK_ES_USER=$(echo $VAULT_DATA | jq -r '.data.es_user')
BENCHMARK_ES_PW=$(echo $VAULT_DATA | jq -r '.data.es_pw')
MONITOR_ES_HOST=$(echo $VAULT_DATA | jq -r '.data.monitor_es_host')
MONITOR_ES_USER=$(echo $VAULT_DATA | jq -r '.data.monitor_es_user')
MONITOR_ES_PW=$(echo $VAULT_DATA | jq -r '.data.monitor_es_pw')
}
pull_images() {
echo "--- Pull docker images"
if [[ -n "$LS_VERSION" ]]; then
# pull image if it doesn't exist in local
[[ -z $(docker images -q docker.elastic.co/logstash/logstash:$LS_VERSION) ]] && docker pull "docker.elastic.co/logstash/logstash:$LS_VERSION"
else
# pull the latest snapshot logstash image
# select the SNAPSHOT artifact with the highest semantic version number
LS_VERSION=$( curl --retry-all-errors --retry 5 --retry-delay 1 -s "https://storage.googleapis.com/artifacts-api/snapshots/main.json" | jq -r '.version' )
BUILD_ID=$(curl --retry-all-errors --retry 5 --retry-delay 1 -s "https://storage.googleapis.com/artifacts-api/snapshots/main.json" | jq -r '.build_id')
ARCH=$(arch)
IMAGE_URL="https://snapshots.elastic.co/${BUILD_ID}/downloads/logstash/logstash-$LS_VERSION-docker-image-$ARCH.tar.gz"
IMAGE_FILENAME="$LS_VERSION.tar.gz"
echo "Download $LS_VERSION from $IMAGE_URL"
[[ ! -e $IMAGE_FILENAME ]] && curl -fsSL --retry-max-time 60 --retry 3 --retry-delay 5 -o "$IMAGE_FILENAME" "$IMAGE_URL"
[[ -z $(docker images -q docker.elastic.co/logstash/logstash:$LS_VERSION) ]] && docker load -i "$IMAGE_FILENAME"
fi
# pull filebeat image
FB_DEFAULT_VERSION="8.13.4"
FB_VERSION=${FB_VERSION:-$FB_DEFAULT_VERSION}
docker pull "docker.elastic.co/beats/filebeat:$FB_VERSION"
}
generate_logs() {
FLOG_FILE_CNT=${FLOG_FILE_CNT:-4}
SINGLE_SIZE=524288000
TOTAL_SIZE="$((FLOG_FILE_CNT * SINGLE_SIZE))"
FLOG_PATH="$SCRIPT_PATH/flog"
mkdir -p $FLOG_PATH
if [[ ! -e "$FLOG_PATH/log${FLOG_FILE_CNT}.log" ]]; then
echo "--- Generate logs in background. log: ${FLOG_FILE_CNT}, each size: 500mb"
docker run -d --name=flog --rm -v $FLOG_PATH:/go/src/data mingrammer/flog -t log -w -o "/go/src/data/log.log" -b $TOTAL_SIZE -p $SINGLE_SIZE
fi
}
check_logs() {
echo "--- Check log generation"
local cnt=0
until [[ -e "$FLOG_PATH/log${FLOG_FILE_CNT}.log" || $cnt -gt 600 ]]; do
echo "wait 30s" && sleep 30
cnt=$((cnt + 30))
done
ls -lah $FLOG_PATH
}
start_logstash() {
LS_CONFIG_PATH=$SCRIPT_PATH/ls/config
mkdir -p $LS_CONFIG_PATH
cp $CONFIG_PATH/pipelines.yml $LS_CONFIG_PATH/pipelines.yml
cp $CONFIG_PATH/logstash.yml $LS_CONFIG_PATH/logstash.yml
cp $CONFIG_PATH/uuid $LS_CONFIG_PATH/uuid
LS_JAVA_OPTS=${LS_JAVA_OPTS:--Xmx${XMX}g}
docker run -d --name=ls --net=host --cpus=$CPU --memory=${MEM}g -e LS_JAVA_OPTS="$LS_JAVA_OPTS" \
-e QTYPE="$QTYPE" -e WORKER="$WORKER" -e BATCH_SIZE="$BATCH_SIZE" \
-e BENCHMARK_ES_HOST="$BENCHMARK_ES_HOST" -e BENCHMARK_ES_USER="$BENCHMARK_ES_USER" -e BENCHMARK_ES_PW="$BENCHMARK_ES_PW" \
-e MONITOR_ES_HOST="$MONITOR_ES_HOST" -e MONITOR_ES_USER="$MONITOR_ES_USER" -e MONITOR_ES_PW="$MONITOR_ES_PW" \
-v $LS_CONFIG_PATH/logstash.yml:/usr/share/logstash/config/logstash.yml:ro \
-v $LS_CONFIG_PATH/pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro \
-v $LS_CONFIG_PATH/uuid:/usr/share/logstash/data/uuid:ro \
docker.elastic.co/logstash/logstash:$LS_VERSION
}
start_filebeat() {
for ((i = 0; i < FB_CNT; i++)); do
FB_PATH="$SCRIPT_PATH/fb${i}"
mkdir -p $FB_PATH
cp $CONFIG_PATH/filebeat.yml $FB_PATH/filebeat.yml
docker run -d --name=fb$i --net=host --user=root \
-v $FB_PATH/filebeat.yml:/usr/share/filebeat/filebeat.yml \
-v $SCRIPT_PATH/flog:/usr/share/filebeat/flog \
docker.elastic.co/beats/filebeat:$FB_VERSION filebeat -e --strict.perms=false
done
}
capture_stats() {
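# read headline flow metrics and resource usage from the latest node stats snapshot ($NS_JSON) and print a one-line summary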
CURRENT=$(jq -r '.flow.output_throughput.current' $NS_JSON)
local eps_1m=$(jq -r '.flow.output_throughput.last_1_minute' $NS_JSON)
local eps_5m=$(jq -r '.flow.output_throughput.last_5_minutes' $NS_JSON)
local worker_util=$(jq -r '.pipelines.main.flow.worker_utilization.last_1_minute' $NS_JSON)
local worker_concurr=$(jq -r '.pipelines.main.flow.worker_concurrency.last_1_minute' $NS_JSON)
local cpu_percent=$(jq -r '.process.cpu.percent' $NS_JSON)
local heap=$(jq -r '.jvm.mem.heap_used_in_bytes' $NS_JSON)
local non_heap=$(jq -r '.jvm.mem.non_heap_used_in_bytes' $NS_JSON)
local q_event_cnt=$(jq -r '.pipelines.main.queue.events_count' $NS_JSON)
local q_size=$(jq -r '.pipelines.main.queue.queue_size_in_bytes' $NS_JSON)
TOTAL_EVENTS_OUT=$(jq -r '.pipelines.main.events.out' $NS_JSON)
printf "current: %s, 1m: %s, 5m: %s, worker_utilization: %s, worker_concurrency: %s, cpu: %s, heap: %s, non-heap: %s, q_events: %s, q_size: %s, total_events_out: %s\n" \
$CURRENT $eps_1m $eps_5m $worker_util $worker_concurr $cpu_percent $heap $non_heap $q_event_cnt $q_size $TOTAL_EVENTS_OUT
}
aggregate_stats() {
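# jqmax/jqavg are helpers sourced from util.sh: max/average of a jq path across all node stats snapshots of this run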
local file_glob="$SCRIPT_PATH/$NS_DIR/${QTYPE:0:1}_w${WORKER}b${BATCH_SIZE}_*.json"
MAX_EPS_1M=$( jqmax '.flow.output_throughput.last_1_minute' "$file_glob" )
MAX_EPS_5M=$( jqmax '.flow.output_throughput.last_5_minutes' "$file_glob" )
MAX_WORKER_UTIL=$( jqmax '.pipelines.main.flow.worker_utilization.last_1_minute' "$file_glob" )
MAX_WORKER_CONCURR=$( jqmax '.pipelines.main.flow.worker_concurrency.last_1_minute' "$file_glob" )
MAX_Q_EVENT_CNT=$( jqmax '.pipelines.main.queue.events_count' "$file_glob" )
MAX_Q_SIZE=$( jqmax '.pipelines.main.queue.queue_size_in_bytes' "$file_glob" )
AVG_CPU_PERCENT=$( jqavg '.process.cpu.percent' "$file_glob" )
AVG_VIRTUAL_MEM=$( jqavg '.process.mem.total_virtual_in_bytes' "$file_glob" )
AVG_HEAP=$( jqavg '.jvm.mem.heap_used_in_bytes' "$file_glob" )
AVG_NON_HEAP=$( jqavg '.jvm.mem.non_heap_used_in_bytes' "$file_glob" )
}
send_summary() {
echo "--- Send summary to Elasticsearch"
# build json
local timestamp
timestamp=$(date -u +"%Y-%m-%dT%H:%M:%S")
SUMMARY="{\"timestamp\": \"$timestamp\", \"version\": \"$LS_VERSION\", \"cpu\": \"$CPU\", \"mem\": \"$MEM\", \"workers\": \"$WORKER\", \"batch_size\": \"$BATCH_SIZE\", \"queue_type\": \"$QTYPE\""
not_empty "$TOTAL_EVENTS_OUT" && SUMMARY="$SUMMARY, \"total_events_out\": \"$TOTAL_EVENTS_OUT\""
not_empty "$MAX_EPS_1M" && SUMMARY="$SUMMARY, \"max_eps_1m\": \"$MAX_EPS_1M\""
not_empty "$MAX_EPS_5M" && SUMMARY="$SUMMARY, \"max_eps_5m\": \"$MAX_EPS_5M\""
not_empty "$MAX_WORKER_UTIL" && SUMMARY="$SUMMARY, \"max_worker_utilization\": \"$MAX_WORKER_UTIL\""
not_empty "$MAX_WORKER_CONCURR" && SUMMARY="$SUMMARY, \"max_worker_concurrency\": \"$MAX_WORKER_CONCURR\""
not_empty "$AVG_CPU_PERCENT" && SUMMARY="$SUMMARY, \"avg_cpu_percentage\": \"$AVG_CPU_PERCENT\""
not_empty "$AVG_HEAP" && SUMMARY="$SUMMARY, \"avg_heap\": \"$AVG_HEAP\""
not_empty "$AVG_NON_HEAP" && SUMMARY="$SUMMARY, \"avg_non_heap\": \"$AVG_NON_HEAP\""
not_empty "$AVG_VIRTUAL_MEM" && SUMMARY="$SUMMARY, \"avg_virtual_memory\": \"$AVG_VIRTUAL_MEM\""
not_empty "$MAX_Q_EVENT_CNT" && SUMMARY="$SUMMARY, \"max_queue_events\": \"$MAX_Q_EVENT_CNT\""
not_empty "$MAX_Q_SIZE" && SUMMARY="$SUMMARY, \"max_queue_bytes_size\": \"$MAX_Q_SIZE\""
not_empty "$TAGS" && SUMMARY="$SUMMARY, \"tags\": $JSON_TAGS"
SUMMARY="$SUMMARY}"
tee summary.json << EOF
{"index": {}}
$SUMMARY
EOF
# send to ES
local resp
local err_status
resp=$(curl -s -X POST -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" "$BENCHMARK_ES_HOST/benchmark_summary/_bulk" -H 'Content-Type: application/json' --data-binary @"summary.json")
echo "$resp"
err_status=$(echo "$resp" | jq -r ".errors")
if [[ "$err_status" == "true" ]]; then
echo "Failed to send summary"
exit 1
fi
}
# $1: snapshot index
node_stats() {
NS_JSON="$SCRIPT_PATH/$NS_DIR/${QTYPE:0:1}_w${WORKER}b${BATCH_SIZE}_$1.json" # m_w8b1000_0.json
# curl inside container because docker on mac cannot resolve localhost to host network interface
docker exec -i ls curl localhost:9600/_node/stats > "$NS_JSON" 2> /dev/null
}
# $1: index
snapshot() {
node_stats $1
capture_stats
}
create_directory() {
NS_DIR="fb${FB_CNT}c${CPU}m${MEM}" # fb4c4m4
mkdir -p "$SCRIPT_PATH/$NS_DIR"
}
queue() {
for QTYPE in "persisted" "memory"; do
worker
done
}
worker() {
for m in "${MULTIPLIERS[@]}"; do
WORKER=$((CPU * m))
batch
done
}
batch() {
for BATCH_SIZE in "${BATCH_SIZES[@]}"; do
run_pipeline
stop_pipeline
done
}
run_pipeline() {
echo "--- Run pipeline. queue type: $QTYPE, worker: $WORKER, batch size: $BATCH_SIZE"
start_logstash
start_filebeat
docker ps
echo "(0) sleep 3m" && sleep 180
snapshot "0"
for i in {1..8}; do
echo "($i) sleep 30s" && sleep 30
snapshot "$i"
# print docker logs when the ingestion rate is zero
# strip the '.' from the rate so it can be compared as an integer
[[ $(max -g "${CURRENT/./}" "0") -eq 0 ]] &&
docker logs fb0 &&
docker logs ls
done
aggregate_stats
send_summary
}
stop_pipeline() {
echo "--- Stop Pipeline"
for ((i = 0; i < FB_CNT; i++)); do
docker stop fb$i
docker rm fb$i
done
docker stop ls
docker rm ls
curl -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" -X DELETE $BENCHMARK_ES_HOST/_data_stream/logs-generic-default
echo " data stream deleted "
# TODO: clean page caches, reduce memory fragmentation
# https://github.com/elastic/logstash/pull/16191#discussion_r1647050216
}
clean_up() {
# stop log generation if it has not finished yet
[[ -n $(docker ps | grep flog) ]] && docker stop flog || true
# remove image
docker image rm docker.elastic.co/logstash/logstash:$LS_VERSION
}

View file

@ -15,9 +15,8 @@ set -eo pipefail
# - The script sends a summary of EPS and resource usage to index `benchmark_summary`
# *******************************************************
SCRIPT_PATH="$(cd "$(dirname "$0")"; pwd)"
CONFIG_PATH="$SCRIPT_PATH/config"
source "$SCRIPT_PATH/util.sh"
SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
source "$SCRIPT_PATH/core.sh"
## usage:
## main.sh FB_CNT QTYPE CPU MEM
@ -36,271 +35,9 @@ source "$SCRIPT_PATH/util.sh"
## MEM=4 # number of GB for Logstash container
## QTYPE=memory # queue type to test {persisted|memory|all}
## FB_CNT=4 # number of filebeats to use in benchmark
usage() {
echo "Usage: $0 [FB_CNT] [QTYPE] [CPU] [MEM]"
echo "Example: $0 4 {persisted|memory|all} 2 2"
exit 1
}
parse_args() {
while [[ "$#" -gt 0 ]]; do
if [ -z "$FB_CNT" ]; then
FB_CNT=$1
elif [ -z "$QTYPE" ]; then
case $1 in
all | persisted | memory)
QTYPE=$1
;;
*)
echo "Error: wrong queue type $1"
usage
;;
esac
elif [ -z "$CPU" ]; then
CPU=$1
elif [ -z "$MEM" ]; then
MEM=$1
else
echo "Error: Too many arguments"
usage
fi
shift
done
# set default value
# number of filebeats
FB_CNT=${FB_CNT:-4}
# all | persisted | memory
QTYPE=${QTYPE:-all}
CPU=${CPU:-4}
MEM=${MEM:-4}
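# give the Logstash JVM half of the container memory as heap (integer division): MEM=4 -> XMX=2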
XMX=$((MEM / 2))
IFS=','
# worker multiplier: 1,2,4
MULTIPLIERS="${MULTIPLIERS:-1,2,4}"
read -ra MULTIPLIERS <<< "$MULTIPLIERS"
BATCH_SIZES="${BATCH_SIZES:-500}"
read -ra BATCH_SIZES <<< "$BATCH_SIZES"
IFS=' '
echo "filebeats: $FB_CNT, cpu: $CPU, mem: $MEM, Queue: $QTYPE, worker multiplier: ${MULTIPLIERS[@]}, batch size: ${BATCH_SIZES[@]}"
}
get_secret() {
VAULT_PATH=secret/ci/elastic-logstash/benchmark
VAULT_DATA=$(vault kv get -format json $VAULT_PATH)
BENCHMARK_ES_HOST=$(echo $VAULT_DATA | jq -r '.data.es_host')
BENCHMARK_ES_USER=$(echo $VAULT_DATA | jq -r '.data.es_user')
BENCHMARK_ES_PW=$(echo $VAULT_DATA | jq -r '.data.es_pw')
MONITOR_ES_HOST=$(echo $VAULT_DATA | jq -r '.data.monitor_es_host')
MONITOR_ES_USER=$(echo $VAULT_DATA | jq -r '.data.monitor_es_user')
MONITOR_ES_PW=$(echo $VAULT_DATA | jq -r '.data.monitor_es_pw')
}
pull_images() {
echo "--- Pull docker images"
# pull the latest snapshot logstash image
if [[ -n "$LS_VERSION" ]]; then
docker pull "docker.elastic.co/logstash/logstash:$LS_VERSION"
else
LS_VERSION=$( curl --retry-all-errors --retry 5 --retry-delay 1 -s https://artifacts-api.elastic.co/v1/versions | jq -r ".versions[-1]" )
BUILD_ID=$( curl --retry-all-errors --retry 5 --retry-delay 1 -s https://artifacts-api.elastic.co/v1/branches/master/builds | jq -r ".builds[0]" )
ARCH=$(arch)
IMAGE_URL="https://snapshots.elastic.co/${BUILD_ID}/downloads/logstash/logstash-$LS_VERSION-docker-image-$ARCH.tar.gz"
IMAGE_FILENAME="$LS_VERSION.tar.gz"
echo "Download $LS_VERSION from $IMAGE_URL"
[[ ! -e $IMAGE_FILENAME ]] && curl -fsSL --retry-max-time 60 --retry 3 --retry-delay 5 -o "$IMAGE_FILENAME" "$IMAGE_URL"
[[ -z $(docker images -q docker.elastic.co/logstash/logstash:$LS_VERSION) ]] && docker load -i "$IMAGE_FILENAME"
fi
# pull filebeat image
FB_DEFAULT_VERSION="8.13.4"
FB_VERSION=${FB_VERSION:-$FB_DEFAULT_VERSION}
docker pull "docker.elastic.co/beats/filebeat:$FB_VERSION"
}
generate_logs() {
FLOG_PATH="$SCRIPT_PATH/flog"
mkdir -p $FLOG_PATH
if [[ ! -e "$FLOG_PATH/log4.log" ]]; then
echo "--- Generate logs in background. log: 5, size: 500mb"
docker run -d --name=flog --rm -v $FLOG_PATH:/go/src/data mingrammer/flog -t log -w -o "/go/src/data/log.log" -b 2621440000 -p 524288000
fi
}
check_logs() {
echo "--- Check log generation"
local cnt=0
until [[ -e "$FLOG_PATH/log4.log" || $cnt -gt 600 ]]; do
echo "wait 30s" && sleep 30
cnt=$((cnt + 30))
done
ls -lah $FLOG_PATH
}
start_logstash() {
LS_CONFIG_PATH=$SCRIPT_PATH/ls/config
mkdir -p $LS_CONFIG_PATH
cp $CONFIG_PATH/pipelines.yml $LS_CONFIG_PATH/pipelines.yml
cp $CONFIG_PATH/logstash.yml $LS_CONFIG_PATH/logstash.yml
LS_JAVA_OPTS=${LS_JAVA_OPTS:--Xmx${XMX}g}
docker run -d --name=ls --net=host --cpus=$CPU --memory=${MEM}g -e LS_JAVA_OPTS="$LS_JAVA_OPTS" \
-e QTYPE="$QTYPE" -e WORKER="$WORKER" -e BATCH_SIZE="$BATCH_SIZE" \
-e BENCHMARK_ES_HOST="$BENCHMARK_ES_HOST" -e BENCHMARK_ES_USER="$BENCHMARK_ES_USER" -e BENCHMARK_ES_PW="$BENCHMARK_ES_PW" \
-e MONITOR_ES_HOST="$MONITOR_ES_HOST" -e MONITOR_ES_USER="$MONITOR_ES_USER" -e MONITOR_ES_PW="$MONITOR_ES_PW" \
-v $LS_CONFIG_PATH/logstash.yml:/usr/share/logstash/config/logstash.yml:ro \
-v $LS_CONFIG_PATH/pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro \
docker.elastic.co/logstash/logstash:$LS_VERSION
}
start_filebeat() {
for ((i = 0; i < FB_CNT; i++)); do
FB_PATH="$SCRIPT_PATH/fb${i}"
mkdir -p $FB_PATH
cp $CONFIG_PATH/filebeat.yml $FB_PATH/filebeat.yml
docker run -d --name=fb$i --net=host --user=root \
-v $FB_PATH/filebeat.yml:/usr/share/filebeat/filebeat.yml \
-v $SCRIPT_PATH/flog:/usr/share/filebeat/flog \
docker.elastic.co/beats/filebeat:$FB_VERSION filebeat -e --strict.perms=false
done
}
capture_stats() {
CURRENT=$(jq -r '.flow.output_throughput.current' $NS_JSON)
local eps_1m=$(jq -r '.flow.output_throughput.last_1_minute' $NS_JSON)
local eps_5m=$(jq -r '.flow.output_throughput.last_5_minutes' $NS_JSON)
local worker_util=$(jq -r '.pipelines.main.flow.worker_utilization.last_1_minute' $NS_JSON)
local worker_concurr=$(jq -r '.pipelines.main.flow.worker_concurrency.last_1_minute' $NS_JSON)
local cpu_percent=$(jq -r '.process.cpu.percent' $NS_JSON)
local heap=$(jq -r '.jvm.mem.heap_used_in_bytes' $NS_JSON)
local non_heap=$(jq -r '.jvm.mem.non_heap_used_in_bytes' $NS_JSON)
local q_event_cnt=$(jq -r '.pipelines.main.queue.events_count' $NS_JSON)
local q_size=$(jq -r '.pipelines.main.queue.queue_size_in_bytes' $NS_JSON)
TOTAL_EVENTS_OUT=$(jq -r '.pipelines.main.events.out' $NS_JSON)
printf "current: %s, 1m: %s, 5m: %s, worker_utilization: %s, worker_concurrency: %s, cpu: %s, heap: %s, non-heap: %s, q_events: %s, q_size: %s, total_events_out: %s\n" \
$CURRENT $eps_1m $eps_5m $worker_util $worker_concurr $cpu_percent $heap $non_heap $q_event_cnt $q_size $TOTAL_EVENTS_OUT
}
aggregate_stats() {
local file_glob="$SCRIPT_PATH/$NS_DIR/${QTYPE:0:1}_w${WORKER}b${BATCH_SIZE}_*.json"
MAX_EPS_1M=$( jqmax '.flow.output_throughput.last_1_minute' "$file_glob" )
MAX_EPS_5M=$( jqmax '.flow.output_throughput.last_5_minutes' "$file_glob" )
MAX_WORKER_UTIL=$( jqmax '.pipelines.main.flow.worker_utilization.last_1_minute' "$file_glob" )
MAX_WORKER_CONCURR=$( jqmax '.pipelines.main.flow.worker_concurrency.last_1_minute' "$file_glob" )
MAX_Q_EVENT_CNT=$( jqmax '.pipelines.main.queue.events_count' "$file_glob" )
MAX_Q_SIZE=$( jqmax '.pipelines.main.queue.queue_size_in_bytes' "$file_glob" )
AVG_CPU_PERCENT=$( jqavg '.process.cpu.percent' "$file_glob" )
AVG_VIRTUAL_MEM=$( jqavg '.process.mem.total_virtual_in_bytes' "$file_glob" )
AVG_HEAP=$( jqavg '.jvm.mem.heap_used_in_bytes' "$file_glob" )
AVG_NON_HEAP=$( jqavg '.jvm.mem.non_heap_used_in_bytes' "$file_glob" )
}
send_summary() {
echo "--- Send summary to Elasticsearch"
timestamp=$(date -u +"%Y-%m-%dT%H:%M:%S")
tee summary.json << EOF
{"index": {}}
{"timestamp": "$timestamp", "version": "$LS_VERSION", "cpu": "$CPU", "mem": "$MEM", "workers": "$WORKER", "batch_size": "$BATCH_SIZE", "queue_type": "$QTYPE", "total_events_out": "$TOTAL_EVENTS_OUT", "max_eps_1m": "$MAX_EPS_1M", "max_eps_5m": "$MAX_EPS_5M", "max_worker_utilization": "$MAX_WORKER_UTIL", "max_worker_concurrency": "$MAX_WORKER_CONCURR", "avg_cpu_percentage": "$AVG_CPU_PERCENT", "avg_heap": "$AVG_HEAP", "avg_non_heap": "$AVG_NON_HEAP", "avg_virtual_memory": "$AVG_VIRTUAL_MEM", "max_queue_events": "$MAX_Q_EVENT_CNT", "max_queue_bytes_size": "$MAX_Q_SIZE"}
EOF
curl -X POST -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" "$BENCHMARK_ES_HOST/benchmark_summary/_bulk" -H 'Content-Type: application/json' --data-binary @"summary.json"
echo ""
}
# $1: snapshot index
node_stats() {
NS_JSON="$SCRIPT_PATH/$NS_DIR/${QTYPE:0:1}_w${WORKER}b${BATCH_SIZE}_$1.json" # m_w8b1000_0.json
# curl inside container because docker on mac cannot resolve localhost to host network interface
docker exec -it ls curl localhost:9600/_node/stats > "$NS_JSON" 2> /dev/null
}
# $1: index
snapshot() {
node_stats $1
capture_stats
}
create_directory() {
NS_DIR="fb${FB_CNT}c${CPU}m${MEM}" # fb4c4m4
mkdir -p "$SCRIPT_PATH/$NS_DIR"
}
queue() {
for QTYPE in "persisted" "memory"; do
worker
done
}
worker() {
for m in "${MULTIPLIERS[@]}"; do
WORKER=$((CPU * m))
batch
done
}
batch() {
for BATCH_SIZE in "${BATCH_SIZES[@]}"; do
run_pipeline
stop_pipeline
done
}
run_pipeline() {
echo "--- Run pipeline. queue type: $QTYPE, worker: $WORKER, batch size: $BATCH_SIZE"
start_logstash
start_filebeat
docker ps
echo "(0) sleep 3m" && sleep 180
snapshot "0"
for i in {1..8}; do
echo "($i) sleep 30s" && sleep 30
snapshot "$i"
# print docker logs when the ingestion rate is zero
# strip the '.' from the rate so it can be compared as an integer
[[ $(max -g "${CURRENT/./}" "0") -eq 0 ]] &&
docker logs fb0 &&
docker logs ls
done
aggregate_stats
send_summary
}
stop_pipeline() {
echo "--- Stop Pipeline"
for ((i = 0; i < FB_CNT; i++)); do
docker stop fb$i
docker rm fb$i
done
docker stop ls
docker rm ls
curl -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" -X DELETE $BENCHMARK_ES_HOST/_data_stream/logs-generic-default
echo " data stream deleted "
# TODO: clean page caches, reduce memory fragmentation
# https://github.com/elastic/logstash/pull/16191#discussion_r1647050216
}
## FLOG_FILE_CNT=4 # number of files to generate for ingestion
## VAULT_PATH=secret/path # vault path pointing to Elasticsearch credentials. The default value points to the benchmark cluster.
## TAGS=test,other # comma-separated tags.
main() {
parse_args "$@"
get_secret
@ -316,8 +53,7 @@ main() {
worker
fi
# stop log generation if it has not finished yet
[[ -n $(docker ps | grep flog) ]] && docker stop flog || true
clean_up
}
main "$@"
main "$@"

View file

@ -0,0 +1,44 @@
#!/usr/bin/env bash
set -eo pipefail
# *******************************************************
# Run benchmark for versions that have flow metrics
# When the hardware changes, run the marathon task to establish a new baseline.
# Usage:
# nohup bash -x all.sh > log.log 2>&1 &
# Accept env vars:
# STACK_VERSIONS=8.15.0,8.15.1,8.16.0-SNAPSHOT # versions to test, as a comma-separated string
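#   e.g. STACK_VERSIONS=8.15.0,8.16.0-SNAPSHOT nohup bash -x all.sh > log.log 2>&1 &   (versions here are illustrative)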
# *******************************************************
SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
source "$SCRIPT_PATH/core.sh"
parse_stack_versions() {
IFS=','
STACK_VERSIONS="${STACK_VERSIONS:-8.6.0,8.7.0,8.8.0,8.9.0,8.10.0,8.11.0,8.12.0,8.13.0,8.14.0,8.15.0}"
read -ra STACK_VERSIONS <<< "$STACK_VERSIONS"
}
main() {
parse_stack_versions
parse_args "$@"
get_secret
generate_logs
check_logs
USER_QTYPE="$QTYPE"
for V in "${STACK_VERSIONS[@]}" ; do
LS_VERSION="$V"
QTYPE="$USER_QTYPE"
pull_images
create_directory
if [[ $QTYPE == "all" ]]; then
queue
else
worker
fi
done
}
main "$@"

View file

@ -0,0 +1,8 @@
## 20241210
Remove scripted field `5m_num` from dashboards
## 20240912
Updated runtime field `release` to return `true` when `version` contains "SNAPSHOT"
## 20240912
Initial dashboards

View file

@ -0,0 +1,14 @@
benchmark_objects.ndjson contains the following resources:
- Dashboards
- daily snapshot
- released versions
- Data Views
- benchmark
- runtime fields
  | Field Name   | Type    | Comment                                                                       |
  |--------------|---------|-------------------------------------------------------------------------------|
  | versions_num | long    | converts semantic versioning to a number for graph sorting                    |
  | release      | boolean | `true` for released versions, `false` for snapshots; used for graph filtering |

To import objects to Kibana, navigate to Stack Management > Saved Objects and click Import.
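Alternatively, the import can be scripted against Kibana's saved objects API; a minimal sketch, where `KIBANA_URL`, `KIBANA_USER`, and `KIBANA_PW` are assumed credentials for the benchmark cluster:
```shell
# import dashboards, data views, and runtime fields in one call
curl -X POST "$KIBANA_URL/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  -u "$KIBANA_USER:$KIBANA_PW" \
  --form file=@benchmark_objects.ndjson
```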

File diff suppressed because one or more lines are too long

View file

@ -0,0 +1,6 @@
POST /_aliases
{
"actions": [
{ "add": { "index": "benchmark_summary_v2", "alias": "benchmark_summary" } }
]
}
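A quick way to confirm the alias resolves to the new index; a sketch using the same host and credentials as the benchmark scripts:
```shell
# list the concrete indices behind the benchmark_summary alias
curl -s -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" "$BENCHMARK_ES_HOST/_alias/benchmark_summary"
```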

View file

@ -0,0 +1,179 @@
PUT /benchmark_summary_v2/_mapping
{
"properties": {
"avg_cpu_percentage": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"avg_heap": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"avg_non_heap": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"avg_virtual_memory": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"batch_size": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"cpu": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_eps_1m": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_eps_5m": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_queue_bytes_size": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_queue_events": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_worker_concurrency": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_worker_utilization": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"mem": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"queue_type": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"tag": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"timestamp": {
"type": "date"
},
"total_events_out": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"version": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"workers": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"tags" : {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
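This mapping assumes the `benchmark_summary_v2` index already exists; a minimal sketch to create it first (host and credentials as in the benchmark scripts):
```shell
# create the target index, then apply the mapping above with PUT /benchmark_summary_v2/_mapping
curl -s -X PUT -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" "$BENCHMARK_ES_HOST/benchmark_summary_v2"
```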

View file

@ -30,3 +30,12 @@ jqavg() {
jqmax() {
jq -r "$1 | select(. != null)" $2 | jq -s . | jq 'max'
}
# return true if $1 is non empty and not "null"
not_empty() {
if [[ -n "$1" && "$1" != "null" ]]; then
return 0
else
return 1
fi
}

View file

@ -19,7 +19,7 @@ changed_files=$(git diff --name-only $previous_commit)
if [[ -n "$changed_files" ]] && [[ -z "$(echo "$changed_files" | grep -vE "$1")" ]]; then
echo "All files compared to the previous commit [$previous_commit] match the specified regex: [$1]"
echo "Files changed:"
git diff --name-only HEAD^
git --no-pager diff --name-only HEAD^
exit 0
else
exit 1

View file

@ -0,0 +1,29 @@
#!/usr/bin/env bash
# ********************************************************
# Source this script to get the QUALIFIED_VERSION env var
# or execute it to receive the qualified version on stdout
# ********************************************************
set -euo pipefail
export QUALIFIED_VERSION="$(
# Extract the version number from the version.yml file
# e.g.: 8.6.0
printf '%s' "$(awk -F':' '{ if ("logstash" == $1) { gsub(/^ | $/,"",$2); printf $2; exit } }' versions.yml)"
# Qualifier is passed from CI as an optional field and specifies the version postfix
# in case of alpha or beta releases for staging builds only:
# e.g: 8.0.0-alpha1
printf '%s' "${VERSION_QUALIFIER:+-${VERSION_QUALIFIER}}"
# add the SNAPSHOT tag unless WORKFLOW_TYPE=="staging" or RELEASE=="1"
if [[ ! ( "${WORKFLOW_TYPE:-}" == "staging" || "${RELEASE:+$RELEASE}" == "1" ) ]]; then
printf '%s' "-SNAPSHOT"
fi
)"
# if invoked directly, output the QUALIFIED_VERSION to stdout
if [[ "$0" == "${BASH_SOURCE:-${ZSH_SCRIPT:-}}" ]]; then
printf '%s' "${QUALIFIED_VERSION}"
fi
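A short usage sketch; the path is inferred from how the DRA scripts reference `../common/qualified-version.sh`, so treat it as an assumption:
```shell
# source it to export QUALIFIED_VERSION into the current shell ...
source .buildkite/scripts/common/qualified-version.sh
echo "$QUALIFIED_VERSION"

# ... or execute it and capture the version from stdout
VERSION="$(.buildkite/scripts/common/qualified-version.sh)"
```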

View file

@ -12,7 +12,7 @@ set -eo pipefail
# https://github.com/elastic/ingest-dev/issues/2664
# *******************************************************
ACTIVE_BRANCHES_URL="https://raw.githubusercontent.com/elastic/logstash/main/ci/branches.json"
ACTIVE_BRANCHES_URL="https://storage.googleapis.com/artifacts-api/snapshots/branches.json"
EXCLUDE_BRANCHES_ARRAY=()
BRANCHES=()
@ -63,7 +63,7 @@ exclude_branches_to_array
set -u
set +e
# pull releaseable branches from $ACTIVE_BRANCHES_URL
readarray -t ELIGIBLE_BRANCHES < <(curl --retry-all-errors --retry 5 --retry-delay 5 -fsSL $ACTIVE_BRANCHES_URL | jq -r '.branches[].branch')
readarray -t ELIGIBLE_BRANCHES < <(curl --retry-all-errors --retry 5 --retry-delay 5 -fsSL $ACTIVE_BRANCHES_URL | jq -r '.branches[]')
if [[ $? -ne 0 ]]; then
echo "There was an error downloading or parsing the json output from [$ACTIVE_BRANCHES_URL]. Exiting."
exit 1

View file

@ -1,13 +1,13 @@
{
"#comment": "This file lists all custom vm images. We use it to make decisions about randomized CI jobs.",
"linux": {
"ubuntu": ["ubuntu-2204", "ubuntu-2004"],
"debian": ["debian-12", "debian-11", "debian-10"],
"ubuntu": ["ubuntu-2404", "ubuntu-2204", "ubuntu-2004"],
"debian": ["debian-12", "debian-11"],
"rhel": ["rhel-9", "rhel-8"],
"oraclelinux": ["oraclelinux-8", "oraclelinux-7"],
"rocky": ["rocky-linux-8"],
"amazonlinux": ["amazonlinux-2023"],
"opensuse": ["opensuse-leap-15"]
},
"windows": ["windows-2022", "windows-2019", "windows-2016"]
"windows": ["windows-2025", "windows-2022", "windows-2019", "windows-2016"]
}

View file

@ -7,63 +7,28 @@ echo "####################################################################"
source ./$(dirname "$0")/common.sh
# WORKFLOW_TYPE is an externally configured CI environment variable that can assume "snapshot" or "staging" values
info "Building artifacts for the $WORKFLOW_TYPE workflow ..."
case "$WORKFLOW_TYPE" in
snapshot)
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
rake artifact:docker || error "artifact:docker build failed."
rake artifact:docker_oss || error "artifact:docker_oss build failed."
rake artifact:docker_wolfi || error "artifact:docker_wolfi build failed."
rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
else
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker || error "artifact:docker build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker_oss || error "artifact:docker_oss build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker_wolfi || error "artifact:docker_wolfi build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
# Qualifier is passed from CI as an optional field and specifies the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
STACK_VERSION=${STACK_VERSION}-SNAPSHOT
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
: # no-op
;;
staging)
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
RELEASE=1 rake artifact:docker || error "artifact:docker build failed."
RELEASE=1 rake artifact:docker_oss || error "artifact:docker_oss build failed."
RELEASE=1 rake artifact:docker_wolfi || error "artifact:docker_wolfi build failed."
RELEASE=1 rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
RELEASE=1 rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
else
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker || error "artifact:docker build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker_oss || error "artifact:docker_oss build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker_wolfi || error "artifact:docker_wolfi build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
# Qualifier is passed from CI as an optional field and specifies the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
export RELEASE=1
;;
*)
error "Workflow (WORKFLOW_TYPE variable) is not set, exiting..."
;;
esac
rake artifact:docker || error "artifact:docker build failed."
rake artifact:docker_oss || error "artifact:docker_oss build failed."
rake artifact:docker_wolfi || error "artifact:docker_wolfi build failed."
rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
STACK_VERSION="$(./$(dirname "$0")/../common/qualified-version.sh)"
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
info "Saving tar.gz for docker images"
save_docker_tarballs "${ARCH}" "${STACK_VERSION}"
@ -73,10 +38,6 @@ for file in build/logstash-*; do shasum $file;done
info "Uploading DRA artifacts in buildkite's artifact store ..."
# Note: the deb, rpm, and tar.gz AARCH64 files generated have already been uploaded by build_packages.sh
images="logstash logstash-oss logstash-wolfi"
if [ "$ARCH" != "aarch64" ]; then
# No logstash-ubi8 for AARCH64
images="logstash logstash-oss logstash-wolfi logstash-ubi8"
fi
for image in ${images}; do
buildkite-agent artifact upload "build/$image-${STACK_VERSION}-docker-image-${ARCH}.tar.gz"
done
@ -84,7 +45,7 @@ done
# Upload 'docker-build-context.tar.gz' files only when building x86_64; otherwise they would be
# overwritten when building aarch64 (and vice versa).
if [ "$ARCH" != "aarch64" ]; then
for image in logstash logstash-oss logstash-wolfi logstash-ubi8 logstash-ironbank; do
for image in logstash logstash-oss logstash-wolfi logstash-ironbank; do
buildkite-agent artifact upload "build/${image}-${STACK_VERSION}-docker-build-context.tar.gz"
done
fi

View file

@ -7,39 +7,25 @@ echo "####################################################################"
source ./$(dirname "$0")/common.sh
# WORKFLOW_TYPE is an externally configured CI environment variable that can assume "snapshot" or "staging" values
info "Building artifacts for the $WORKFLOW_TYPE workflow ..."
case "$WORKFLOW_TYPE" in
snapshot)
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
else
# Qualifier is passed from CI as an optional field and specifies the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
STACK_VERSION=${STACK_VERSION}-SNAPSHOT
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
: # no-op
;;
staging)
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
RELEASE=1 SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
else
# Qualifier is passed from CI as an optional field and specifies the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
export RELEASE=1
;;
*)
error "Workflow (WORKFLOW_TYPE variable) is not set, exiting..."
;;
esac
SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
STACK_VERSION="$(./$(dirname "$0")/../common/qualified-version.sh)"
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
info "Generated Artifacts"
for file in build/logstash-*; do shasum $file;done

View file

@ -11,10 +11,6 @@ function save_docker_tarballs {
local arch="${1:?architecture required}"
local version="${2:?stack-version required}"
local images="logstash logstash-oss logstash-wolfi"
if [ "${arch}" != "aarch64" ]; then
# No logstash-ubi8 for AARCH64
images="logstash logstash-oss logstash-wolfi logstash-ubi8"
fi
for image in ${images}; do
tar_file="${image}-${version}-docker-image-${arch}.tar"
@ -29,16 +25,16 @@ function save_docker_tarballs {
# Since we are using the system jruby, we need to make sure our jvm process
# uses at least 1g of memory. If we don't do this we can get OOM issues when
# installing gems. See https://github.com/elastic/logstash/issues/5179
export JRUBY_OPTS="-J-Xmx1g"
export JRUBY_OPTS="-J-Xmx4g"
# Extract the version number from the version.yml file
# e.g.: 8.6.0
# The suffix part like alpha1 etc is managed by the optional VERSION_QUALIFIER_OPT environment variable
# The suffix part like alpha1 etc is managed by the optional VERSION_QUALIFIER environment variable
STACK_VERSION=`cat versions.yml | sed -n 's/^logstash\:[[:space:]]\([[:digit:]]*\.[[:digit:]]*\.[[:digit:]]*\)$/\1/p'`
info "Agent is running on architecture [$(uname -i)]"
export VERSION_QUALIFIER_OPT=${VERSION_QUALIFIER_OPT:-""}
export VERSION_QUALIFIER=${VERSION_QUALIFIER:-""}
export DRA_DRY_RUN=${DRA_DRY_RUN:-""}
if [[ ! -z $DRA_DRY_RUN && $BUILDKITE_STEP_KEY == "logstash_publish_dra" ]]; then

View file

@ -3,6 +3,8 @@ import sys
import yaml
YAML_HEADER = '# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json\n'
def to_bk_key_friendly_string(key):
"""
Convert and return key to an acceptable format for Buildkite's key: field
@ -28,6 +30,8 @@ def package_x86_step(branch, workflow_type):
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$PATH"
eval "$(rbenv init -)"
.buildkite/scripts/dra/build_packages.sh
artifact_paths:
- "**/*.hprof"
'''
return step
@ -42,6 +46,8 @@ def package_x86_docker_step(branch, workflow_type):
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-16"
diskSizeGb: 200
artifact_paths:
- "**/*.hprof"
command: |
export WORKFLOW_TYPE="{workflow_type}"
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$PATH"
@ -61,6 +67,8 @@ def package_aarch64_docker_step(branch, workflow_type):
imagePrefix: platform-ingest-logstash-ubuntu-2204-aarch64
instanceType: "m6g.4xlarge"
diskSizeGb: 200
artifact_paths:
- "**/*.hprof"
command: |
export WORKFLOW_TYPE="{workflow_type}"
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$PATH"
@ -106,6 +114,7 @@ def build_steps_to_yaml(branch, workflow_type):
if __name__ == "__main__":
try:
workflow_type = os.environ["WORKFLOW_TYPE"]
version_qualifier = os.environ.get("VERSION_QUALIFIER", "")
except KeyError:
print(f"Missing env variable WORKFLOW_TYPE. Use export WORKFLOW_TYPE=<staging|snapshot>.\nExiting.")
exit(1)
@ -114,18 +123,25 @@ if __name__ == "__main__":
structure = {"steps": []}
# Group defining parallel steps that build and save artifacts
group_key = to_bk_key_friendly_string(f"logstash_dra_{workflow_type}")
if workflow_type.upper() == "SNAPSHOT" and len(version_qualifier)>0:
structure["steps"].append({
"label": f"no-op pipeline because prerelease builds (VERSION_QUALIFIER is set to [{version_qualifier}]) don't support the [{workflow_type}] workflow",
"command": ":",
"skip": "VERSION_QUALIFIER (prerelease builds) not supported with SNAPSHOT DRA",
})
else:
# Group defining parallel steps that build and save artifacts
group_key = to_bk_key_friendly_string(f"logstash_dra_{workflow_type}")
structure["steps"].append({
"group": f":Build Artifacts - {workflow_type.upper()}",
"key": group_key,
"steps": build_steps_to_yaml(branch, workflow_type),
})
structure["steps"].append({
"group": f":Build Artifacts - {workflow_type.upper()}",
"key": group_key,
"steps": build_steps_to_yaml(branch, workflow_type),
})
# Final step: pull artifacts built above and publish them via the release-manager
structure["steps"].extend(
yaml.safe_load(publish_dra_step(branch, workflow_type, depends_on=group_key)),
)
# Final step: pull artifacts built above and publish them via the release-manager
structure["steps"].extend(
yaml.safe_load(publish_dra_step(branch, workflow_type, depends_on=group_key)),
)
print('# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json\n' + yaml.dump(structure, Dumper=yaml.Dumper, sort_keys=False))
print(YAML_HEADER + yaml.dump(structure, Dumper=yaml.Dumper, sort_keys=False))

View file

@ -7,7 +7,9 @@ echo "####################################################################"
source ./$(dirname "$0")/common.sh
PLAIN_STACK_VERSION=$STACK_VERSION
# DRA_BRANCH can be used for manually testing packaging with PRs
# e.g. define `DRA_BRANCH="main"` and `RUN_SNAPSHOT="true"` under Options/Environment Variables in the Buildkite UI after clicking New Build
BRANCH="${DRA_BRANCH:="${BUILDKITE_BRANCH:=""}"}"
# This is the branch selector that needs to be passed to the release-manager.
# It has to be the name of the branch from which the artifacts originate.
@ -15,29 +17,24 @@ RELEASE_VER=`cat versions.yml | sed -n 's/^logstash\:[[:space:]]\([[:digit:]]*\.
if [ -n "$(git ls-remote --heads origin $RELEASE_VER)" ] ; then
RELEASE_BRANCH=$RELEASE_VER
else
RELEASE_BRANCH=main
RELEASE_BRANCH="${BRANCH:="main"}"
fi
echo "RELEASE BRANCH: $RELEASE_BRANCH"
if [ -n "$VERSION_QUALIFIER_OPT" ]; then
# Qualifier is passed from CI as an optional field and specifies the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
PLAIN_STACK_VERSION="${PLAIN_STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
VERSION_QUALIFIER="${VERSION_QUALIFIER:=""}"
case "$WORKFLOW_TYPE" in
snapshot)
STACK_VERSION=${STACK_VERSION}-SNAPSHOT
:
;;
staging)
;;
*)
error "Worklflow (WORKFLOW_TYPE variable) is not set, exiting..."
error "Workflow (WORKFLOW_TYPE variable) is not set, exiting..."
;;
esac
info "Uploading artifacts for ${WORKFLOW_TYPE} workflow on branch: ${RELEASE_BRANCH}"
info "Uploading artifacts for ${WORKFLOW_TYPE} workflow on branch: ${RELEASE_BRANCH} for version: ${STACK_VERSION} with version_qualifier: ${VERSION_QUALIFIER}"
if [ "$RELEASE_VER" != "7.17" ]; then
# Version 7.17.x doesn't generate ARM artifacts for Darwin
@ -45,17 +42,12 @@ if [ "$RELEASE_VER" != "7.17" ]; then
:
fi
# Deleting ubi8 for aarch64 for the time being. This image itself is not being built, and it is not expected
# by the release manager.
# See https://github.com/elastic/infra/blob/master/cd/release/release-manager/project-configs/8.5/logstash.gradle
# for more details.
# TODO filter it out when uploading artifacts instead
rm -f build/logstash-ubi8-${STACK_VERSION}-docker-image-aarch64.tar.gz
info "Downloaded ARTIFACTS sha report"
for file in build/logstash-*; do shasum $file;done
mv build/distributions/dependencies-reports/logstash-${STACK_VERSION}.csv build/distributions/dependencies-${STACK_VERSION}.csv
FINAL_VERSION="$(./$(dirname "$0")/../common/qualified-version.sh)"
mv build/distributions/dependencies-reports/logstash-${FINAL_VERSION}.csv build/distributions/dependencies-${FINAL_VERSION}.csv
# set required permissions on artifacts and directory
chmod -R a+r build/*
@ -73,6 +65,22 @@ release_manager_login
# ensure the latest image has been pulled
docker pull docker.elastic.co/infra/release-manager:latest
echo "+++ :clipboard: Listing DRA artifacts for version [$STACK_VERSION], branch [$RELEASE_BRANCH], workflow [$WORKFLOW_TYPE], QUALIFIER [$VERSION_QUALIFIER]"
docker run --rm \
--name release-manager \
-e VAULT_ROLE_ID \
-e VAULT_SECRET_ID \
--mount type=bind,readonly=false,src="$PWD",target=/artifacts \
docker.elastic.co/infra/release-manager:latest \
cli list \
--project logstash \
--branch "${RELEASE_BRANCH}" \
--commit "$(git rev-parse HEAD)" \
--workflow "${WORKFLOW_TYPE}" \
--version "${STACK_VERSION}" \
--artifact-set main \
--qualifier "${VERSION_QUALIFIER}"
info "Running the release manager ..."
# collect the artifacts for use with the unified build
@ -88,8 +96,9 @@ docker run --rm \
--branch ${RELEASE_BRANCH} \
--commit "$(git rev-parse HEAD)" \
--workflow "${WORKFLOW_TYPE}" \
--version "${PLAIN_STACK_VERSION}" \
--version "${STACK_VERSION}" \
--artifact-set main \
--qualifier "${VERSION_QUALIFIER}" \
${DRA_DRY_RUN} | tee rm-output.txt
# extract the summary URL from a release manager output line like:

View file

@ -10,7 +10,7 @@ from ruamel.yaml.scalarstring import LiteralScalarString
VM_IMAGES_FILE = ".buildkite/scripts/common/vm-images.json"
VM_IMAGE_PREFIX = "platform-ingest-logstash-multi-jdk-"
ACCEPTANCE_LINUX_OSES = ["ubuntu-2204", "ubuntu-2004", "debian-11", "debian-10", "rhel-8", "oraclelinux-7", "rocky-linux-8", "opensuse-leap-15", "amazonlinux-2023"]
ACCEPTANCE_LINUX_OSES = ["ubuntu-2404", "ubuntu-2204", "ubuntu-2004", "debian-11", "rhel-8", "oraclelinux-7", "rocky-linux-8", "opensuse-leap-15", "amazonlinux-2023"]
CUR_PATH = os.path.dirname(os.path.abspath(__file__))
@ -147,11 +147,6 @@ rake artifact:deb artifact:rpm
set -eo pipefail
source .buildkite/scripts/common/vm-agent-multi-jdk.sh
source /etc/os-release
if [[ "$$(echo $$ID_LIKE | tr '[:upper:]' '[:lower:]')" =~ (rhel|fedora) && "$${VERSION_ID%.*}" -le 7 ]]; then
# jruby-9.3.10.0 unavailable on centos-7 / oel-7, see https://github.com/jruby/jruby/issues/7579#issuecomment-1425885324 / https://github.com/jruby/jruby/issues/7695
# we only need a working jruby to run the acceptance test framework -- the packages have been prebuilt in a previous stage
rbenv local jruby-9.4.5.0
fi
ci/acceptance_tests.sh"""),
}
steps.append(step)
@ -160,7 +155,7 @@ ci/acceptance_tests.sh"""),
def acceptance_docker_steps()-> list[typing.Any]:
steps = []
for flavor in ["full", "oss", "ubi8", "wolfi"]:
for flavor in ["full", "oss", "ubi", "wolfi"]:
steps.append({
"label": f":docker: {flavor} flavor acceptance",
"agents": gcp_agent(vm_name="ubuntu-2204", image_prefix="family/platform-ingest-logstash"),

View file

@ -0,0 +1,18 @@
## Description
This package contains integration tests for the Health Report API.
Export `LS_BRANCH` to run on a specific branch. By default, it uses the main branch.
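For example, to run against the latest 8.x release branch (the bootstrapper resolves `major.x` to the newest `major.minor`):
```shell
export LS_BRANCH=8.x
```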
## How to run the Health Report Integration test?
### Prerequisites
Make sure you have python installed. Install the integration test dependencies with the following command:
```shell
python3 -m pip install -r .buildkite/scripts/health-report-tests/requirements.txt
```
### Run the integration tests
```shell
python3 .buildkite/scripts/health-report-tests/main.py
```
### Troubleshooting
- If you get a `WARNING: pip is configured with locations that require TLS/SSL,...` message, make sure you have python >=3.12.4 installed.

View file

@ -0,0 +1,111 @@
"""
Health Report Integration test bootstrapper with Python script
- A script to resolve Logstash version if not provided
- Download LS docker image and spin up
- When tests finished, teardown the Logstash
"""
import os
import subprocess
import time
import util
import yaml
class Bootstrap:
ELASTIC_STACK_RELEASED_VERSION_URL = "https://storage.googleapis.com/artifacts-api/releases/current/"
def __init__(self) -> None:
f"""
A constructor of the {Bootstrap}.
Returns:
Resolves Logstash branch considering provided LS_BRANCH
Checks out git branch
"""
logstash_branch = os.environ.get("LS_BRANCH")
if logstash_branch is None:
# version is not specified, use the main branch, no need to git checkout
print(f"LS_BRANCH is not specified, using main branch.")
else:
# LS_BRANCH accepts the latest major as major.x or a specific branch as X.Y
if logstash_branch.find(".x") == -1:
print(f"Using specified branch: {logstash_branch}")
util.git_check_out_branch(logstash_branch)
else:
major_version = logstash_branch.split(".")[0]
if major_version and major_version.isnumeric():
resolved_version = self.__resolve_latest_stack_version_for(major_version)
minor_version = resolved_version.split(".")[1]
branch = major_version + "." + minor_version
print(f"Using resolved branch: {branch}")
util.git_check_out_branch(branch)
else:
raise ValueError(f"Invalid value set to LS_BRANCH. Please set it properly (e.g. 8.x or 9.0) and "
f"rerun.")
def __resolve_latest_stack_version_for(self, major_version: str) -> str:
resp = util.call_url_with_retry(self.ELASTIC_STACK_RELEASED_VERSION_URL + major_version)
release_version = resp.text.strip()
print(f"Resolved latest version for {major_version} is {release_version}.")
if release_version == "":
raise ValueError(f"Cannot resolve latest version for {major_version} major")
return release_version
def install_plugin(self, plugin_path: str) -> None:
util.run_or_raise_error(
["bin/logstash-plugin", "install", plugin_path],
f"Failed to install {plugin_path}")
def build_logstash(self):
print(f"Building Logstash...")
util.run_or_raise_error(
["./gradlew", "clean", "bootstrap", "assemble", "installDefaultGems"],
"Failed to build Logstash")
print(f"Logstash has successfully built.")
def apply_config(self, config: dict) -> None:
with open(os.getcwd() + "/.buildkite/scripts/health-report-tests/config/pipelines.yml", 'w') as pipelines_file:
yaml.dump(config, pipelines_file)
def run_logstash(self, full_start_required: bool) -> subprocess.Popen:
# --config.reload.automatic keeps the instance active,
# which is helpful when testing pipeline crash cases
config_path = os.getcwd() + "/.buildkite/scripts/health-report-tests/config"
process = subprocess.Popen(["bin/logstash", "--config.reload.automatic", "--path.settings", config_path,
"-w 1"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, shell=False)
if process.poll() is not None:
print(f"Logstash failed to run, check the the config and logs, then rerun.")
return None
# Read stdout and stderr in real-time
logs = []
for stdout_line in iter(process.stdout.readline, ""):
logs.append(stdout_line.strip())
# we don't wait for Logstash to fully start, as we also test slow pipeline start scenarios
if full_start_required is False and "Starting pipeline" in stdout_line:
break
if full_start_required is True and "Pipeline started" in stdout_line:
break
if "Logstash shut down" in stdout_line or "Logstash stopped" in stdout_line:
print(f"Logstash couldn't spin up.")
print(logs)
return None
print(f"Logstash is running with PID: {process.pid}.")
return process
def stop_logstash(self, process: subprocess.Popen):
start_time = time.time() # in seconds
process.terminate()
for stdout_line in iter(process.stdout.readline, ""):
# print(f"STDOUT: {stdout_line.strip()}")
if "Logstash shut down" in stdout_line or "Logstash stopped" in stdout_line:
print(f"Logstash stopped.")
return None
# the shutdown watcher keeps running, so bound the wait by elapsed time
if time.time() - start_time > 60:
print(f"Logstash didn't stop within 1min, sending SIGKILL.")
process.kill()
if time.time() - start_time > 70:
print(f"Logstash didn't stop after 70s, giving up.")
return None

View file

@ -0,0 +1 @@
# Intentionally left blank

View file

@ -0,0 +1,70 @@
import yaml
from typing import Any, List, Dict
class ConfigValidator:
REQUIRED_KEYS = {
"root": ["name", "config", "conditions", "expectation"],
"config": ["pipeline.id", "config.string"],
"conditions": ["full_start_required", "wait_seconds"],
"expectation": ["status", "symptom", "indicators"],
"indicators": ["pipelines"],
"pipelines": ["status", "symptom", "indicators"],
"DYNAMIC": ["status", "symptom", "diagnosis", "impacts", "details"], # pipeline-id is a DYNAMIC
"details": ["status"],
"status": ["state"]
}
def __init__(self):
self.yaml_content = None
def __has_valid_keys(self, data: Any, key_path: str, repeated: bool) -> bool:
# we reached a leaf value
if isinstance(data, (str, bool, int, float)):
return True
# there are two 'indicators' sections; for the repeated ones we go one level deeper
first_key = next(iter(data))
data = data[first_key] if repeated and key_path == "indicators" else data
if isinstance(data, dict):
# pipeline ids are dynamic keys
required = self.REQUIRED_KEYS.get("DYNAMIC" if repeated and key_path == "indicators" else key_path, [])
repeated = not repeated if key_path == "indicators" else repeated
for key in required:
if key not in data:
print(f"Missing key '{key}' in '{key_path}'")
return False
else:
dic_keys_result = self.__has_valid_keys(data[key], key, repeated)
if dic_keys_result is False:
return False
elif isinstance(data, list):
for item in data:
list_keys_result = self.__has_valid_keys(item, key_path, repeated)
if list_keys_result is False:
return False
return True
def load(self, file_path: str) -> None:
"""Load the YAML file content into self.yaml_content."""
self.yaml_content = None
try:
with open(file_path, 'r') as file:
self.yaml_content = yaml.safe_load(file)
except yaml.YAMLError as exc:
print(f"Error in YAML file: {exc}")
self.yaml_content = None
def is_valid(self) -> bool:
"""Validate the entire YAML structure."""
if self.yaml_content is None:
print(f"YAML content is empty.")
return False
if not isinstance(self.yaml_content, dict):
print(f"YAML structure is not as expected, it should start with a Dict.")
return False
result = self.__has_valid_keys(self.yaml_content, "root", False)
return result

View file

@ -0,0 +1,16 @@
"""
A class to provide information about Logstash node stats.
"""
import util
class LogstashHealthReport:
LOGSTASH_HEALTH_REPORT_URL = "http://localhost:9600/_health_report"
def __init__(self):
pass
def get(self):
response = util.call_url_with_retry(self.LOGSTASH_HEALTH_REPORT_URL)
return response.json()

View file

@ -0,0 +1,89 @@
"""
Main entry point of the LS health report API integration test suites
"""
import glob
import os
import time
import traceback
import yaml
from bootstrap import Bootstrap
from scenario_executor import ScenarioExecutor
from config_validator import ConfigValidator
class BootstrapContextManager:
def __init__(self):
pass
def __enter__(self):
print(f"Starting Logstash Health Report Integration test.")
self.bootstrap = Bootstrap()
self.bootstrap.build_logstash()
plugin_path = os.getcwd() + "/qa/support/logstash-integration-failure_injector/logstash-integration" \
"-failure_injector-*.gem"
matching_files = glob.glob(plugin_path)
if len(matching_files) == 0:
raise ValueError(f"Could not find logstash-integration-failure_injector plugin.")
self.bootstrap.install_plugin(matching_files[0])
print(f"logstash-integration-failure_injector successfully installed.")
return self.bootstrap
def __exit__(self, exc_type, exc_value, exc_traceback):
if exc_type is not None:
print(traceback.format_exception(exc_type, exc_value, exc_traceback))
def main():
with BootstrapContextManager() as bootstrap:
scenario_executor = ScenarioExecutor()
config_validator = ConfigValidator()
working_dir = os.getcwd()
scenario_files_path = working_dir + "/.buildkite/scripts/health-report-tests/tests/*.yaml"
scenario_files = glob.glob(scenario_files_path)
for scenario_file in scenario_files:
print(f"Validating {scenario_file} scenario file.")
config_validator.load(scenario_file)
if config_validator.is_valid() is False:
print(f"{scenario_file} scenario file is not valid.")
return
else:
print(f"Validation succeeded.")
has_failed_scenario = False
for scenario_file in scenario_files:
with open(scenario_file, 'r') as file:
# scenario_content: Dict[str, Any] = None
scenario_content = yaml.safe_load(file)
print(f"Testing `{scenario_content.get('name')}` scenario.")
scenario_name = scenario_content['name']
is_full_start_required = scenario_content.get('conditions').get('full_start_required')
wait_seconds = scenario_content.get('conditions').get('wait_seconds')
config = scenario_content['config']
if config is not None:
bootstrap.apply_config(config)
expectations = scenario_content.get("expectation")
process = bootstrap.run_logstash(is_full_start_required)
if process is not None:
if wait_seconds is not None:
print(f"Test requires to wait for `{wait_seconds}` seconds.")
time.sleep(wait_seconds) # wait for Logstash to start
try:
scenario_executor.on(scenario_name, expectations)
except Exception as e:
print(e)
has_failed_scenario = True
bootstrap.stop_logstash(process)
if has_failed_scenario:
# fail intentionally for visibility
raise Exception("Some scenarios failed, check the log for details.")
if __name__ == "__main__":
main()

View file

@ -0,0 +1,16 @@
#!/bin/bash
set -euo pipefail
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:/opt/buildkite-agent/.java/bin:$PATH"
export JAVA_HOME="/opt/buildkite-agent/.java"
export PYENV_VERSION="3.11.5"
eval "$(rbenv init -)"
eval "$(pyenv init -)"
echo "--- Installing dependencies"
python3 -m pip install -r .buildkite/scripts/health-report-tests/requirements.txt
echo "--- Running tests"
python3 .buildkite/scripts/health-report-tests/main.py

View file

@ -0,0 +1,2 @@
requests==2.32.3
pyyaml==6.0.2

View file

@ -0,0 +1,67 @@
"""
A class to execute the given scenario for Logstash Health Report integration test
"""
import time
from logstash_health_report import LogstashHealthReport
class ScenarioExecutor:
logstash_health_report_api = LogstashHealthReport()
def __init__(self):
pass
def __has_intersection(self, expects, results):
# TODO: this logic is tied to the current Health API response shape
# there is no guarantee this method runs correctly when given multiple expects and results
# every expect is required to exist in results
for expect in expects:
for result in results:
if result.get('help_url') and "health-report-pipeline-" not in result.get('help_url'):
return False
if not all(key in result and result[key] == value for key, value in expect.items()):
return False
return True
def __get_difference(self, differences: list, expectations: dict, reports: dict) -> dict:
for key in expectations.keys():
if type(expectations.get(key)) != type(reports.get(key)):
differences.append(f"Scenario expectation and Health API report structure differs for {key}.")
return differences
if isinstance(expectations.get(key), str):
if expectations.get(key) != reports.get(key):
differences.append({key: {"expected": expectations.get(key), "got": reports.get(key)}})
continue
elif isinstance(expectations.get(key), dict):
self.__get_difference(differences, expectations.get(key), reports.get(key))
elif isinstance(expectations.get(key), list):
if not self.__has_intersection(expectations.get(key), reports.get(key)):
differences.append({key: {"expected": expectations.get(key), "got": reports.get(key)}})
return differences
def __is_expected(self, expectations: dict) -> None:
reports = self.logstash_health_report_api.get()
differences = self.__get_difference([], expectations, reports)
if differences:
print("Differences found in 'expectation' section between YAML content and stats:")
for diff in differences:
print(f"Difference: {diff}")
return False
else:
return True
def on(self, scenario_name: str, expectations: dict) -> None:
# retry checking the expectations
attempts = 5
while self.__is_expected(expectations) is False:
attempts = attempts - 1
if attempts == 0:
break
time.sleep(1)
if attempts == 0:
raise Exception(f"{scenario_name} failed.")
else:
print(f"Scenario `{scenario_name}` expectaion meets the health report stats.")

View file

@ -0,0 +1,32 @@
name: "Abnormally terminated pipeline"
config:
- pipeline.id: abnormally-terminated-pp
config.string: |
input { heartbeat { interval => 1 } }
filter { failure_injector { crash_at => filter } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 5
expectation:
status: "red"
symptom: "1 indicator is unhealthy (`pipelines`)"
indicators:
pipelines:
status: "red"
symptom: "1 indicator is unhealthy (`abnormally-terminated-pp`)"
indicators:
abnormally-terminated-pp:
status: "red"
symptom: "The pipeline is unhealthy; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is not running, likely because it has encountered an error"
action: "view logs to determine the cause of abnormal pipeline shutdown"
impacts:
- description: "the pipeline is not currently processing"
impact_areas: ["pipeline_execution"]
details:
status:
state: "TERMINATED"

View file

@ -0,0 +1,38 @@
name: "Backpressured in 1min pipeline"
config:
- pipeline.id: backpressure-1m-pp
config.string: |
input { heartbeat { interval => 0.1 } }
filter { failure_injector { degrade_at => [filter] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 70 # give more seconds to make sure time is over the threshold, 1m in this case
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
indicators:
pipelines:
status: "yellow"
symptom: "1 indicator is concerning (`backpressure-1m-pp`)"
indicators:
backpressure-1m-pp:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- id: "logstash:health:pipeline:flow:worker_utilization:diagnosis:1m-blocked"
cause: "pipeline workers have been completely blocked for at least one minute"
action: "address bottleneck or add resources"
impacts:
- id: "logstash:health:pipeline:flow:impact:blocked_processing"
severity: 2
description: "the pipeline is blocked"
impact_areas: ["pipeline_execution"]
details:
status:
state: "RUNNING"
flow:
worker_utilization:
last_1_minute: 100.0

View file

@ -0,0 +1,39 @@
name: "Backpressured in 5min pipeline"
config:
- pipeline.id: backpressure-5m-pp
config.string: |
input { heartbeat { interval => 0.1 } }
filter { failure_injector { degrade_at => [filter] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 310 # give more seconds to make sure time is over the threshold, 5m in this case
expectation:
status: "red"
symptom: "1 indicator is unhealthy (`pipelines`)"
indicators:
pipelines:
status: "red"
symptom: "1 indicator is unhealthy (`backpressure-5m-pp`)"
indicators:
backpressure-5m-pp:
status: "red"
symptom: "The pipeline is unhealthy; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- id: "logstash:health:pipeline:flow:worker_utilization:diagnosis:5m-blocked"
cause: "pipeline workers have been completely blocked for at least five minutes"
action: "address bottleneck or add resources"
impacts:
- id: "logstash:health:pipeline:flow:impact:blocked_processing"
severity: 1
description: "the pipeline is blocked"
impact_areas: ["pipeline_execution"]
details:
status:
state: "RUNNING"
flow:
worker_utilization:
last_1_minute: 100.0
last_5_minutes: 100.0

View file

@ -0,0 +1,67 @@
name: "Multi pipeline"
config:
- pipeline.id: slow-start-pp-multipipeline
config.string: |
input { heartbeat {} }
filter { failure_injector { degrade_at => [register] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
- pipeline.id: normally-terminated-pp-multipipeline
config.string: |
input { generator { count => 1 } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
- pipeline.id: abnormally-terminated-pp-multipipeline
config.string: |
input { heartbeat { interval => 1 } }
filter { failure_injector { crash_at => filter } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: false
wait_seconds: 10
expectation:
status: "red"
symptom: "1 indicator is unhealthy (`pipelines`)"
indicators:
pipelines:
status: "red"
symptom: "1 indicator is unhealthy (`abnormally-terminated-pp-multipipeline`) and 2 indicators are concerning (`slow-start-pp-multipipeline`, `normally-terminated-pp-multipipeline`)"
indicators:
slow-start-pp-multipipeline:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is loading"
action: "if pipeline does not come up quickly, you may need to check the logs to see if it is stalled"
impacts:
- impact_areas: ["pipeline_execution"]
details:
status:
state: "LOADING"
normally-terminated-pp-multipipeline:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline has finished running because its inputs have been closed and events have been processed"
action: "if you expect this pipeline to run indefinitely, you will need to configure its inputs to continue receiving or fetching events"
impacts:
- impact_areas: [ "pipeline_execution" ]
details:
status:
state: "FINISHED"
abnormally-terminated-pp-multipipeline:
status: "red"
symptom: "The pipeline is unhealthy; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is not running, likely because it has encountered an error"
action: "view logs to determine the cause of abnormal pipeline shutdown"
impacts:
- description: "the pipeline is not currently processing"
impact_areas: [ "pipeline_execution" ]
details:
status:
state: "TERMINATED"

View file

@ -0,0 +1,30 @@
name: "Successfully terminated pipeline"
config:
- pipeline.id: normally-terminated-pp
config.string: |
input { generator { count => 1 } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 5
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
indicators:
pipelines:
status: "yellow"
symptom: "1 indicator is concerning (`normally-terminated-pp`)"
indicators:
normally-terminated-pp:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline has finished running because its inputs have been closed and events have been processed"
action: "if you expect this pipeline to run indefinitely, you will need to configure its inputs to continue receiving or fetching events"
impacts:
- impact_areas: ["pipeline_execution"]
details:
status:
state: "FINISHED"

View file

@ -0,0 +1,31 @@
name: "Slow start pipeline"
config:
- pipeline.id: slow-start-pp
config.string: |
input { heartbeat {} }
filter { failure_injector { degrade_at => [register] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: false
wait_seconds: 0
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
indicators:
pipelines:
status: "yellow"
symptom: "1 indicator is concerning (`slow-start-pp`)"
indicators:
slow-start-pp:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is loading"
action: "if pipeline does not come up quickly, you may need to check the logs to see if it is stalled"
impacts:
- impact_areas: ["pipeline_execution"]
details:
status:
state: "LOADING"

View file

@ -0,0 +1,36 @@
import os
import requests
import subprocess
from requests.adapters import HTTPAdapter, Retry
def call_url_with_retry(url: str, max_retries: int = 5, delay: int = 1) -> requests.Response:
f"""
Calls the given {url} with maximum of {max_retries} retries with {delay} delay.
"""
schema = "https://" if "https://" in url else "http://"
session = requests.Session()
# retry on the most common transient failures, e.g. request timeout (408) and gateway errors (502, 503, 504)
retries = Retry(total=max_retries, backoff_factor=delay, status_forcelist=[408, 502, 503, 504])
session.mount(schema, HTTPAdapter(max_retries=retries))
return session.get(url)
def git_check_out_branch(branch_name: str) -> None:
f"""
Checks out specified branch or fails with error if checkout operation fails.
"""
run_or_raise_error(["git", "checkout", branch_name],
"Error occurred while checking out the " + branch_name + " branch")
def run_or_raise_error(commands: list, error_message: str) -> None:
"""
Executes the given commands and raises an Exception if the operation fails.
"""
result = subprocess.run(commands, env=os.environ.copy(), universal_newlines=True, stdout=subprocess.PIPE)
if result.returncode != 0:
full_error_message = (error_message + ", output: " + result.stdout) \
if result.stdout else error_message
raise Exception(full_error_message)

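The retry helper above delegates to urllib3's `Retry` through a mounted `HTTPAdapter`: with `backoff_factor=delay`, successive retries sleep roughly `delay * 2**(n-1)` seconds, and only the listed status codes are retried. A hypothetical invocation (the URL is an example only):

```python
# Hypothetical usage of call_url_with_retry; raises on a non-2xx final answer.
response = call_url_with_retry("https://storage.googleapis.com/artifacts-api/snapshots/branches.json")
response.raise_for_status()
print(response.json())
```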
View file

@ -4,6 +4,7 @@ from dataclasses import dataclass, field
import os
import sys
import typing
from functools import partial
from ruamel.yaml import YAML
from ruamel.yaml.scalarstring import LiteralScalarString
@ -177,17 +178,15 @@ class LinuxJobs(Jobs):
super().__init__(os=os, jdk=jdk, group_key=group_key, agent=agent)
def all_jobs(self) -> list[typing.Callable[[], JobRetValues]]:
return [
self.init_annotation,
self.java_unit_test,
self.ruby_unit_test,
self.integration_tests_part_1,
self.integration_tests_part_2,
self.pq_integration_tests_part_1,
self.pq_integration_tests_part_2,
self.x_pack_unit_tests,
self.x_pack_integration,
]
jobs=list()
jobs.append(self.init_annotation)
jobs.append(self.java_unit_test)
jobs.append(self.ruby_unit_test)
jobs.extend(self.integration_test_parts(3))
jobs.extend(self.pq_integration_test_parts(3))
jobs.append(self.x_pack_unit_tests)
jobs.append(self.x_pack_integration)
return jobs
def prepare_shell(self) -> str:
jdk_dir = f"/opt/buildkite-agent/.java/{self.jdk}"
@ -259,17 +258,14 @@ ci/unit_tests.sh ruby
retry=copy.deepcopy(ENABLED_RETRIES),
)
def integration_tests_part_1(self) -> JobRetValues:
return self.integration_tests(part=1)
def integration_test_parts(self, parts) -> list[partial[JobRetValues]]:
return [partial(self.integration_tests, part=idx+1, parts=parts) for idx in range(parts)]
def integration_tests_part_2(self) -> JobRetValues:
return self.integration_tests(part=2)
def integration_tests(self, part: int) -> JobRetValues:
step_name_human = f"Integration Tests - {part}"
step_key = f"{self.group_key}-integration-tests-{part}"
def integration_tests(self, part: int, parts: int) -> JobRetValues:
step_name_human = f"Integration Tests - {part}/{parts}"
step_key = f"{self.group_key}-integration-tests-{part}-of-{parts}"
test_command = f"""
ci/integration_tests.sh split {part-1}
ci/integration_tests.sh split {part-1} {parts}
"""
return JobRetValues(
@ -281,18 +277,15 @@ ci/integration_tests.sh split {part-1}
retry=copy.deepcopy(ENABLED_RETRIES),
)
def pq_integration_tests_part_1(self) -> JobRetValues:
return self.pq_integration_tests(part=1)
def pq_integration_test_parts(self, parts) -> list[partial[JobRetValues]]:
return [partial(self.pq_integration_tests, part=idx+1, parts=parts) for idx in range(parts)]
def pq_integration_tests_part_2(self) -> JobRetValues:
return self.pq_integration_tests(part=2)
def pq_integration_tests(self, part: int) -> JobRetValues:
step_name_human = f"IT Persistent Queues - {part}"
step_key = f"{self.group_key}-it-persistent-queues-{part}"
def pq_integration_tests(self, part: int, parts: int) -> JobRetValues:
step_name_human = f"IT Persistent Queues - {part}/{parts}"
step_key = f"{self.group_key}-it-persistent-queues-{part}-of-{parts}"
test_command = f"""
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split {part-1}
ci/integration_tests.sh split {part-1} {parts}
"""
return JobRetValues(

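The change above is the fix for the broken JDK matrix pipeline: the hardcoded `integration_tests_part_1`/`_part_2` methods give way to `functools.partial`, so the split count becomes a single parameter. A standalone sketch of the pattern (the job builder here is a stand-in, not the real `JobRetValues` code):

```python
from functools import partial

def integration_tests(part: int, parts: int) -> str:
    # Stand-in for the real job builder; returns the command it would schedule.
    return f"ci/integration_tests.sh split {part - 1} {parts}"

def integration_test_parts(parts: int) -> list:
    # One zero-argument callable per split, mirroring the refactor above.
    return [partial(integration_tests, part=idx + 1, parts=parts) for idx in range(parts)]

for job in integration_test_parts(3):
    print(job())  # split 0 3, split 1 3, split 2 3
```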
View file

@ -4,7 +4,7 @@ set -e
install_java() {
# TODO: let's think about regularly creating a custom image for Logstash which may align on version.yml definitions
sudo apt update && sudo apt install -y openjdk-17-jdk && sudo apt install -y openjdk-17-jre
sudo apt update && sudo apt install -y openjdk-21-jdk && sudo apt install -y openjdk-21-jre
}
install_java

View file

@ -4,22 +4,13 @@ set -e
TARGET_BRANCHES=("main")
install_java() {
# TODO: let's think about using BK agent which has Java installed
# Current caveat is Logstash BK agent doesn't support docker operations in it
sudo apt update && sudo apt install -y openjdk-17-jdk && sudo apt install -y openjdk-17-jre
install_java_11() {
curl -L -s "https://github.com/adoptium/temurin11-binaries/releases/download/jdk-11.0.24%2B8/OpenJDK11U-jdk_x64_linux_hotspot_11.0.24_8.tar.gz" | tar -zxf -
}
# Resolves the branches we are going to track
resolve_latest_branches() {
source .buildkite/scripts/snyk/resolve_stack_version.sh
for SNAPSHOT_VERSION in "${SNAPSHOT_VERSIONS[@]}"
do
IFS='.'
read -a versions <<< "$SNAPSHOT_VERSION"
version=${versions[0]}.${versions[1]}
TARGET_BRANCHES+=("$version")
done
}
# Build the specific Logstash branch to generate the Gemfile.lock file that Snyk scans
@ -42,7 +33,7 @@ download_auth_snyk() {
report() {
REMOTE_REPO_URL=$1
echo "Reporting $REMOTE_REPO_URL branch."
if [ "$REMOTE_REPO_URL" != "main" ]; then
if [ "$REMOTE_REPO_URL" != "main" ] && [ "$REMOTE_REPO_URL" != "8.x" ]; then
MAJOR_VERSION=$(echo "$REMOTE_REPO_URL"| cut -d'.' -f 1)
REMOTE_REPO_URL="$MAJOR_VERSION".latest
echo "Using '$REMOTE_REPO_URL' remote repo url."
@ -55,13 +46,18 @@ report() {
./snyk monitor --prune-repeated-subdependencies --all-projects --org=logstash --remote-repo-url="$REMOTE_REPO_URL" --target-reference="$REMOTE_REPO_URL" --detection-depth=6 --exclude=qa,tools,devtools,requirements.txt --project-tags=branch="$TARGET_BRANCH",git_head="$GIT_HEAD" || :
}
install_java
resolve_latest_branches
download_auth_snyk
# clone Logstash repo, build and report
for TARGET_BRANCH in "${TARGET_BRANCHES[@]}"
do
if [ "$TARGET_BRANCH" == "7.17" ]; then
echo "Installing and configuring JDK11."
export OLD_PATH=$PATH
install_java_11
export PATH=$PWD/jdk-11.0.24+8/bin:$PATH
fi
git reset --hard HEAD # reset if any generated files appeared
# check if target branch exists
echo "Checking out $TARGET_BRANCH branch."
@ -71,70 +67,10 @@ do
else
echo "$TARGET_BRANCH branch doesn't exist."
fi
done
# Scan Logstash docker images and report
REPOSITORY_BASE_URL="docker.elastic.co/logstash/"
report_docker_image() {
image=$1
project_name=$2
platform=$3
echo "Reporting $image to Snyk started..."
docker pull "$image"
if [[ $platform != null ]]; then
./snyk container monitor "$image" --org=logstash --platform="$platform" --project-name="$project_name" --project-tags=version="$version" || :
else
./snyk container monitor "$image" --org=logstash --project-name="$project_name" --project-tags=version="$version" || :
if [ "$TARGET_BRANCH" == "7.17" ]; then
# reset state
echo "Removing JDK11 installation."
rm -rf jdk-11.0.24+8
export PATH=$OLD_PATH
fi
}
report_docker_images() {
version=$1
echo "Version value: $version"
image=$REPOSITORY_BASE_URL"logstash:$version-SNAPSHOT"
snyk_project_name="logstash-$version-SNAPSHOT"
report_docker_image "$image" "$snyk_project_name"
image=$REPOSITORY_BASE_URL"logstash-oss:$version-SNAPSHOT"
snyk_project_name="logstash-oss-$version-SNAPSHOT"
report_docker_image "$image" "$snyk_project_name"
image=$REPOSITORY_BASE_URL"logstash:$version-SNAPSHOT-arm64"
snyk_project_name="logstash-$version-SNAPSHOT-arm64"
report_docker_image "$image" "$snyk_project_name" "linux/arm64"
image=$REPOSITORY_BASE_URL"logstash:$version-SNAPSHOT-amd64"
snyk_project_name="logstash-$version-SNAPSHOT-amd64"
report_docker_image "$image" "$snyk_project_name" "linux/amd64"
image=$REPOSITORY_BASE_URL"logstash-oss:$version-SNAPSHOT-arm64"
snyk_project_name="logstash-oss-$version-SNAPSHOT-arm64"
report_docker_image "$image" "$snyk_project_name" "linux/arm64"
image=$REPOSITORY_BASE_URL"logstash-oss:$version-SNAPSHOT-amd64"
snyk_project_name="logstash-oss-$version-SNAPSHOT-amd64"
report_docker_image "$image" "$snyk_project_name" "linux/amd64"
}
resolve_version_and_report_docker_images() {
git reset --hard HEAD # reset if any generated files appeared
git checkout "$1"
# parse version (ex: 8.8.2 from 8.8 branch, or 8.9.0 from main branch)
versions_file="$PWD/versions.yml"
version=$(awk '/logstash:/ { print $2 }' "$versions_file")
report_docker_images "$version"
}
# resolve docker artifact and report
#for TARGET_BRANCH in "${TARGET_BRANCHES[@]}"
#do
# if git show-ref --quiet refs/heads/"$TARGET_BRANCH"; then
# echo "Using $TARGET_BRANCH branch for docker images."
# resolve_version_and_report_docker_images "$TARGET_BRANCH"
# else
# echo "$TARGET_BRANCH branch doesn't exist."
# fi
#done
done

View file

@ -6,14 +6,9 @@
set -e
VERSION_URL="https://raw.githubusercontent.com/elastic/logstash/main/ci/logstash_releases.json"
VERSION_URL="https://storage.googleapis.com/artifacts-api/snapshots/branches.json"
echo "Fetching versions from $VERSION_URL"
VERSIONS=$(curl --silent $VERSION_URL)
SNAPSHOT_KEYS=$(echo "$VERSIONS" | jq -r '.snapshots | .[]')
readarray -t TARGET_BRANCHES < <(curl --retry-all-errors --retry 5 --retry-delay 5 -fsSL $VERSION_URL | jq -r '.branches[]')
echo "${TARGET_BRANCHES[@]}"
SNAPSHOT_VERSIONS=()
while IFS= read -r line; do
SNAPSHOT_VERSIONS+=("$line")
echo "Resolved snapshot version: $line"
done <<< "$SNAPSHOT_KEYS"

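The rewritten script reads the branch list straight from the snapshots manifest rather than deriving it from version keys. A hedged Python equivalent of the `curl | jq -r '.branches[]'` pipeline:

```python
import requests

VERSION_URL = "https://storage.googleapis.com/artifacts-api/snapshots/branches.json"

# Mirrors: curl --retry-all-errors --retry 5 -fsSL $VERSION_URL | jq -r '.branches[]'
response = requests.get(VERSION_URL, timeout=10)
response.raise_for_status()
target_branches = response.json()["branches"]
print(" ".join(target_branches))
```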
View file

@ -1,12 +1,14 @@
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-4"
diskSizeGb: 120
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "2"
memory: "4Gi"
ephemeralStorage: "64Gi"
steps:
# reports main, previous (ex: 7.latest) and current (ex: 8.latest) release branches to Snyk
- label: ":hammer: Report to Snyk"
command:
- .buildkite/scripts/snyk/report.sh
- .buildkite/scripts/snyk/report.sh
retry:
automatic:
- limit: 3

View file

@ -2,7 +2,7 @@
env:
DEFAULT_MATRIX_OS: "windows-2022"
DEFAULT_MATRIX_JDK: "adoptiumjdk_17"
DEFAULT_MATRIX_JDK: "adoptiumjdk_21"
steps:
- input: "Test Parameters"
@ -15,6 +15,8 @@ steps:
multiple: true
default: "${DEFAULT_MATRIX_OS}"
options:
- label: "Windows 2025"
value: "windows-2025"
- label: "Windows 2022"
value: "windows-2022"
- label: "Windows 2019"
@ -33,20 +35,14 @@ steps:
value: "adoptiumjdk_21"
- label: "Adoptium JDK 17 (Eclipse Temurin)"
value: "adoptiumjdk_17"
- label: "Adoptium JDK 11 (Eclipse Temurin)"
value: "adoptiumjdk_11"
- label: "OpenJDK 21"
value: "openjdk_21"
- label: "OpenJDK 17"
value: "openjdk_17"
- label: "OpenJDK 11"
value: "openjdk_11"
- label: "Zulu 21"
value: "zulu_21"
- label: "Zulu 17"
value: "zulu_17"
- label: "Zulu 11"
value: "zulu_11"
- wait: ~
if: build.source != "schedule" && build.source != "trigger_job"

42
.ci/Makefile Normal file
View file

@ -0,0 +1,42 @@
.SILENT:
MAKEFLAGS += --no-print-directory
.SHELLFLAGS = -euc
SHELL = /bin/bash
#######################
## Templates
#######################
## Mergify template
define MERGIFY_TMPL
- name: backport patches to $(BRANCH) branch
conditions:
- merged
- base=main
- label=$(BACKPORT_LABEL)
actions:
backport:
branches:
- "$(BRANCH)"
endef
# Add mergify entry for the new backport label
.PHONY: mergify
export MERGIFY_TMPL
mergify: BACKPORT_LABEL=$${BACKPORT_LABEL} BRANCH=$${BRANCH} PUSH_BRANCH=$${PUSH_BRANCH}
mergify:
@echo ">> mergify"
echo "$$MERGIFY_TMPL" >> ../.mergify.yml
git add ../.mergify.yml
git status
if [ ! -z "$$(git status --porcelain)" ]; then \
git commit -m "mergify: add $(BACKPORT_LABEL) rule"; \
git push origin $(PUSH_BRANCH) ; \
fi
# Create GitHub backport label
.PHONY: backport-label
backport-label: BACKPORT_LABEL=$${BACKPORT_LABEL}
backport-label:
@echo ">> backport-label"
gh label create $(BACKPORT_LABEL) --description "Automated backport with mergify" --color 0052cc --force

View file

@ -1,2 +1,2 @@
LS_BUILD_JAVA=adoptiumjdk_17
LS_RUNTIME_JAVA=adoptiumjdk_17
LS_BUILD_JAVA=adoptiumjdk_21
LS_RUNTIME_JAVA=adoptiumjdk_21

View file

@ -21,10 +21,6 @@ analyze:
type: gradle
target: 'dependencies-report:'
path: .
- name: ingest-converter
type: gradle
target: 'ingest-converter:'
path: .
- name: logstash-core
type: gradle
target: 'logstash-core:'

View file

@ -21,6 +21,5 @@ to reproduce locally
**Failure history**:
<!--
Link to build stats and possible indication of when this started failing and how often it fails
<https://build-stats.elastic.co/app/kibana>
-->
**Failure excerpt**:

18
.github/dependabot.yml vendored Normal file
View file

@ -0,0 +1,18 @@
---
version: 2
updates:
- package-ecosystem: "github-actions"
directories:
- '/'
- '/.github/actions/*'
schedule:
interval: "weekly"
day: "sunday"
time: "22:00"
reviewers:
- "elastic/observablt-ci"
- "elastic/observablt-ci-contractors"
groups:
github-actions:
patterns:
- "*"

View file

@ -1,33 +0,0 @@
name: Docs Preview Link
on:
pull_request_target:
types: [opened, synchronize]
paths:
- docs/**
- docsk8s/**
jobs:
docs-preview-link:
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- id: wait-for-status
uses: autotelic/action-wait-for-status-check@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
owner: elastic
# when running with on: pull_request_target we get the PR base ref by default
ref: ${{ github.event.pull_request.head.sha }}
statusName: "buildkite/docs-build-pr"
# https://elasticsearch-ci.elastic.co/job/elastic+logstash+pull-request+build-docs
# usually finishes in ~ 20 minutes
timeoutSeconds: 900
intervalSeconds: 30
- name: Add Docs Preview link in PR Comment
if: steps.wait-for-status.outputs.state == 'success'
uses: thollander/actions-comment-pull-request@v1
with:
message: |
:page_with_curl: **DOCS PREVIEW** :sparkles: https://logstash_bk_${{ github.event.number }}.docs-preview.app.elstc.co/diff
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

22
.github/workflows/backport-active.yml vendored Normal file
View file

@ -0,0 +1,22 @@
name: Backport to active branches
on:
pull_request_target:
types: [closed]
branches:
- main
permissions:
pull-requests: write
contents: read
jobs:
backport:
# Only run if the PR was merged (not just closed) and has one of the backport labels
if: |
github.event.pull_request.merged == true &&
contains(toJSON(github.event.pull_request.labels.*.name), 'backport-active-')
runs-on: ubuntu-latest
steps:
- uses: elastic/oblt-actions/github/backport-active@v1

19
.github/workflows/docs-build.yml vendored Normal file
View file

@ -0,0 +1,19 @@
name: docs-build
on:
push:
branches:
- main
pull_request_target: ~
merge_group: ~
jobs:
docs-preview:
uses: elastic/docs-builder/.github/workflows/preview-build.yml@main
with:
path-pattern: docs/**
permissions:
deployments: write
id-token: write
contents: read
pull-requests: read

14
.github/workflows/docs-cleanup.yml vendored Normal file
View file

@ -0,0 +1,14 @@
name: docs-cleanup
on:
pull_request_target:
types:
- closed
jobs:
docs-preview:
uses: elastic/docs-builder/.github/workflows/preview-cleanup.yml@main
permissions:
contents: none
id-token: write
deployments: write

View file

@ -0,0 +1,23 @@
name: mergify backport labels copier
on:
pull_request:
types:
- opened
permissions:
contents: read
jobs:
mergify-backport-labels-copier:
runs-on: ubuntu-latest
if: startsWith(github.head_ref, 'mergify/bp/')
permissions:
# Add GH labels
pull-requests: write
# See https://github.com/cli/cli/issues/6274
repository-projects: read
steps:
- uses: elastic/oblt-actions/mergify/labels-copier@v1
with:
excluded-labels-regex: "^backport-*"

View file

@ -1,49 +0,0 @@
name: Backport PR to another branch
on:
issue_comment:
types: [created]
permissions:
pull-requests: write
contents: write
jobs:
pr_commented:
name: PR comment
if: github.event.issue.pull_request
runs-on: ubuntu-latest
steps:
- uses: actions-ecosystem/action-regex-match@v2
id: regex-match
with:
text: ${{ github.event.comment.body }}
regex: '^@logstashmachine backport (main|[x0-9\.]+)$'
- if: ${{ steps.regex-match.outputs.group1 == '' }}
run: exit 1
- name: Fetch logstash-core team member list
uses: tspascoal/get-user-teams-membership@v1
id: checkUserMember
with:
username: ${{ github.actor }}
organization: elastic
team: logstash
GITHUB_TOKEN: ${{ secrets.READ_ORG_SECRET_JSVD }}
- name: Is user not a core team member?
if: ${{ steps.checkUserMember.outputs.isTeamMember == 'false' }}
run: exit 1
- name: checkout repo content
uses: actions/checkout@v2
with:
fetch-depth: 0
ref: 'main'
- run: git config --global user.email "43502315+logstashmachine@users.noreply.github.com"
- run: git config --global user.name "logstashmachine"
- name: setup python
uses: actions/setup-python@v2
with:
python-version: 3.8
- run: |
mkdir ~/.elastic && echo ${{ github.token }} >> ~/.elastic/github.token
- run: pip install requests
- name: run backport
run: python devtools/backport ${{ steps.regex-match.outputs.group1 }} ${{ github.event.issue.number }} --remote=origin --yes

18
.github/workflows/pre-commit.yml vendored Normal file
View file

@ -0,0 +1,18 @@
name: pre-commit
on:
pull_request:
push:
branches:
- main
- 8.*
- 9.*
permissions:
contents: read
jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: elastic/oblt-actions/pre-commit@v1

View file

@ -25,9 +25,13 @@ jobs:
version_bumper:
name: Bump versions
runs-on: ubuntu-latest
env:
INPUTS_BRANCH: "${{ inputs.branch }}"
INPUTS_BUMP: "${{ inputs.bump }}"
BACKPORT_LABEL: "backport-${{ inputs.branch }}"
steps:
- name: Fetch logstash-core team member list
uses: tspascoal/get-user-teams-membership@v1
uses: tspascoal/get-user-teams-membership@57e9f42acd78f4d0f496b3be4368fc5f62696662 #v3.0.0
with:
username: ${{ github.actor }}
organization: elastic
@ -37,14 +41,14 @@ jobs:
if: ${{ steps.checkUserMember.outputs.isTeamMember == 'false' }}
run: exit 1
- name: checkout repo content
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.event.inputs.branch }}
ref: ${{ env.INPUTS_BRANCH }}
- run: git config --global user.email "43502315+logstashmachine@users.noreply.github.com"
- run: git config --global user.name "logstashmachine"
- run: ./gradlew clean installDefaultGems
- run: ./vendor/jruby/bin/jruby -S bundle update --all --${{ github.event.inputs.bump }} --strict
- run: ./vendor/jruby/bin/jruby -S bundle update --all --${{ env.INPUTS_BUMP }} --strict
- run: mv Gemfile.lock Gemfile.jruby-*.lock.release
- run: echo "T=$(date +%s)" >> $GITHUB_ENV
- run: echo "BRANCH=update_lock_${T}" >> $GITHUB_ENV
@ -53,8 +57,21 @@ jobs:
git add .
git status
if [[ -z $(git status --porcelain) ]]; then echo "No changes. We're done."; exit 0; fi
git commit -m "Update ${{ github.event.inputs.bump }} plugin versions in gemfile lock" -a
git commit -m "Update ${{ env.INPUTS_BUMP }} plugin versions in gemfile lock" -a
git push origin $BRANCH
- name: Update mergify (minor only)
if: ${{ inputs.bump == 'minor' }}
continue-on-error: true
run: make -C .ci mergify BACKPORT_LABEL=$BACKPORT_LABEL BRANCH=$INPUTS_BRANCH PUSH_BRANCH=$BRANCH
- name: Create Pull Request
run: |
curl -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -X POST -d "{\"title\": \"bump lock file for ${{ github.event.inputs.branch }}\",\"head\": \"${BRANCH}\",\"base\": \"${{ github.event.inputs.branch }}\"}" https://api.github.com/repos/elastic/logstash/pulls
curl -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -X POST -d "{\"title\": \"bump lock file for ${{ env.INPUTS_BRANCH }}\",\"head\": \"${BRANCH}\",\"base\": \"${{ env.INPUTS_BRANCH }}\"}" https://api.github.com/repos/elastic/logstash/pulls
- name: Create GitHub backport label (Mergify) (minor only)
if: ${{ inputs.bump == 'minor' }}
continue-on-error: true
run: make -C .ci backport-label BACKPORT_LABEL=$BACKPORT_LABEL
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

132
.mergify.yml Normal file
View file

@ -0,0 +1,132 @@
commands_restrictions:
backport:
conditions:
- or:
- sender-permission>=write
- sender=github-actions[bot]
defaults:
actions:
backport:
title: "[{{ destination_branch }}] (backport #{{ number }}) {{ title }}"
assignees:
- "{{ author }}"
labels:
- "backport"
pull_request_rules:
# - name: ask to resolve conflict
# conditions:
# - conflict
# actions:
# comment:
# message: |
# This pull request is now in conflicts. Could you fix it @{{author}}? 🙏
# To fixup this pull request, you can check out it locally. See documentation: https://help.github.com/articles/checking-out-pull-requests-locally/
# ```
# git fetch upstream
# git checkout -b {{head}} upstream/{{head}}
# git merge upstream/{{base}}
# git push upstream {{head}}
# ```
- name: notify the backport policy
conditions:
- -label~=^backport
- base=main
actions:
comment:
message: |
This pull request does not have a backport label. Could you fix it @{{author}}? 🙏
To fixup this pull request, you need to add the backport labels for the needed
branches, such as:
* `backport-8./d` is the label to automatically backport to the `8./d` branch. `/d` is the digit.
* If no backport is necessary, please add the `backport-skip` label
- name: remove backport-skip label
conditions:
- label~=^backport-\d
actions:
label:
remove:
- backport-skip
- name: notify the backport has not been merged yet
conditions:
- -merged
- -closed
- author=mergify[bot]
- "#check-success>0"
- schedule=Mon-Mon 06:00-10:00[Europe/Paris]
actions:
comment:
message: |
This pull request has not been merged yet. Could you please review and merge it @{{ assignee | join(', @') }}? 🙏
- name: backport patches to 8.16 branch
conditions:
- merged
- base=main
- label=backport-8.16
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "8.16"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
- name: backport patches to 8.17 branch
conditions:
- merged
- base=main
- label=backport-8.17
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "8.17"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
- name: backport patches to 8.18 branch
conditions:
- merged
- base=main
- label=backport-8.18
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "8.18"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
- name: backport patches to 8.19 branch
conditions:
- merged
- base=main
- label=backport-8.19
actions:
backport:
branches:
- "8.19"
- name: backport patches to 9.0 branch
conditions:
- merged
- base=main
- label=backport-9.0
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "9.0"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"

6
.pre-commit-config.yaml Normal file
View file

@ -0,0 +1,6 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
hooks:
- id: check-merge-conflict
args: ['--assume-in-merge']

View file

@ -1 +1 @@
jruby-9.3.10.0
jruby-9.4.9.0

View file

@ -13,7 +13,7 @@ gem "ruby-maven-libs", "~> 3", ">= 3.9.6.1"
gem "logstash-output-elasticsearch", ">= 11.14.0"
gem "polyglot", require: false
gem "treetop", require: false
gem "faraday", "~> 1", :require => false # due elasticsearch-transport (elastic-transport) depending faraday '~> 1'
gem "minitar", "~> 1", :group => :build
gem "childprocess", "~> 4", :group => :build
gem "fpm", "~> 1", ">= 1.14.1", :group => :build # compound due to bugfix https://github.com/jordansissel/fpm/pull/1856
gem "gems", "~> 1", :group => :build
@ -26,6 +26,8 @@ gem "stud", "~> 0.0.22", :group => :build
gem "fileutils", "~> 1.7"
gem "rubocop", :group => :development
# rubocop-ast 1.43.0 carries a dep on `prism` which requires native c extensions
gem 'rubocop-ast', '= 1.42.0', :group => :development
gem "belzebuth", :group => :development
gem "benchmark-ips", :group => :development
gem "ci_reporter_rspec", "~> 1", :group => :development
@ -39,5 +41,9 @@ gem "simplecov", "~> 0.22.0", :group => :development
gem "simplecov-json", require: false, :group => :development
gem "jar-dependencies", "= 0.4.1" # Gem::LoadError with jar-dependencies 0.4.2
gem "murmurhash3", "= 0.1.6" # Pins until version 0.1.7-java is released
gem "date", "= 3.3.3"
gem "thwait"
gem "bigdecimal", "~> 3.1"
gem "psych", "5.2.2"
gem "cgi", "0.3.7" # Pins until a new jruby version with updated cgi is released
gem "uri", "0.12.3" # Pins until a new jruby version with updated cgi is released

File diff suppressed because it is too large

View file

@ -20,7 +20,6 @@ supported platforms, from [downloads page](https://www.elastic.co/downloads/logs
- [Logstash Forum](https://discuss.elastic.co/c/logstash)
- [Logstash Documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
- [#logstash on freenode IRC](https://webchat.freenode.net/?channels=logstash)
- [Logstash Product Information](https://www.elastic.co/products/logstash)
- [Elastic Support](https://www.elastic.co/subscriptions)

View file

@ -1,10 +0,0 @@
@echo off
setlocal enabledelayedexpansion
cd /d "%~dp0\.."
for /f %%i in ('cd') do set RESULT=%%i
"%JAVACMD%" -cp "!RESULT!\tools\ingest-converter\build\libs\ingest-converter.jar;*" ^
org.logstash.ingest.Pipeline %*
endlocal

View file

@ -1,4 +0,0 @@
#!/usr/bin/env bash
java -cp "$(cd `dirname $0`/..; pwd)"'/tools/ingest-converter/build/libs/ingest-converter.jar:*' \
org.logstash.ingest.Pipeline "$@"

View file

@ -6,7 +6,7 @@ set params='%*'
if "%1" == "-V" goto version
if "%1" == "--version" goto version
call "%~dp0setup.bat" || exit /b 1
1>&2 (call "%~dp0setup.bat") || exit /b 1
if errorlevel 1 (
if not defined nopauseonerror (
pause

View file

@ -186,8 +186,8 @@ setup_vendored_jruby() {
}
setup() {
setup_java
setup_vendored_jruby
>&2 setup_java
>&2 setup_vendored_jruby
}
ruby_exec() {

View file

@ -16,7 +16,7 @@ for %%i in ("%LS_HOME%\logstash-core\lib\jars\*.jar") do (
call :concat "%%i"
)
"%JAVACMD%" "%JAVA_OPTS%" -cp "%CLASSPATH%" org.logstash.ackedqueue.PqCheck %*
"%JAVACMD%" %JAVA_OPTS% org.logstash.ackedqueue.PqCheck %*
:concat
IF not defined CLASSPATH (

View file

@ -16,7 +16,7 @@ for %%i in ("%LS_HOME%\logstash-core\lib\jars\*.jar") do (
call :concat "%%i"
)
"%JAVACMD%" %JAVA_OPTS% -cp "%CLASSPATH%" org.logstash.ackedqueue.PqRepair %*
"%JAVACMD%" %JAVA_OPTS% org.logstash.ackedqueue.PqRepair %*
:concat
IF not defined CLASSPATH (

View file

@ -42,7 +42,7 @@ if defined LS_JAVA_HOME (
)
if not exist "%JAVACMD%" (
echo could not find java; set JAVA_HOME or ensure java is in PATH 1>&2
echo could not find java; set LS_JAVA_HOME or ensure java is in PATH 1>&2
exit /b 1
)

View file

@ -101,6 +101,7 @@ allprojects {
"--add-opens=java.base/java.lang=ALL-UNNAMED",
"--add-opens=java.base/java.util=ALL-UNNAMED"
]
maxHeapSize = "2g"
//https://stackoverflow.com/questions/3963708/gradle-how-to-display-test-results-in-the-console-in-real-time
testLogging {
// set options for log level LIFECYCLE
@ -145,7 +146,6 @@ subprojects {
}
version = versionMap['logstash-core']
String artifactVersionsApi = "https://artifacts-api.elastic.co/v1/versions"
tasks.register("configureArchitecture") {
String arch = System.properties['os.arch']
@ -171,33 +171,28 @@ tasks.register("configureArtifactInfo") {
description "Set the url to download stack artifacts for select stack version"
doLast {
def versionQualifier = System.getenv('VERSION_QUALIFIER')
if (versionQualifier) {
version = "$version-$versionQualifier"
}
def splitVersion = version.split('\\.')
int major = splitVersion[0].toInteger()
int minor = splitVersion[1].toInteger()
String branch = "${major}.${minor}"
String fallbackMajorX = "${major}.x"
boolean isFallBackPreviousMajor = minor - 1 < 0
String fallbackBranch = isFallBackPreviousMajor ? "${major-1}.x" : "${major}.${minor-1}"
def qualifiedVersion = ""
boolean isReleaseBuild = System.getenv('RELEASE') == "1" || versionQualifier
String apiResponse = artifactVersionsApi.toURL().text
def dlVersions = new JsonSlurper().parseText(apiResponse)
String qualifiedVersion = dlVersions['versions'].grep(isReleaseBuild ? ~/^${version}$/ : ~/^${version}-SNAPSHOT/)[0]
if (qualifiedVersion == null) {
if (!isReleaseBuild) {
project.ext.set("useProjectSpecificArtifactSnapshotUrl", true)
project.ext.set("stackArtifactSuffix", "${version}-SNAPSHOT")
return
for (b in [branch, fallbackMajorX, fallbackBranch]) {
def url = "https://storage.googleapis.com/artifacts-api/snapshots/${b}.json"
try {
def snapshotInfo = new JsonSlurper().parseText(url.toURL().text)
qualifiedVersion = snapshotInfo.version
println "ArtifactInfo version: ${qualifiedVersion}"
break
} catch (Exception e) {
println "Failed to fetch branch ${branch} from ${url}: ${e.message}"
}
throw new GradleException("could not find the current artifact from the artifact-api ${artifactVersionsApi} for ${version}")
}
// find latest reference to last build
String buildsListApi = "${artifactVersionsApi}/${qualifiedVersion}/builds/"
apiResponse = buildsListApi.toURL().text
def dlBuilds = new JsonSlurper().parseText(apiResponse)
def stackBuildVersion = dlBuilds["builds"][0]
project.ext.set("artifactApiVersionedBuildUrl", "${artifactVersionsApi}/${qualifiedVersion}/builds/${stackBuildVersion}")
project.ext.set("stackArtifactSuffix", qualifiedVersion)
project.ext.set("useProjectSpecificArtifactSnapshotUrl", false)
project.ext.set("artifactApiVersion", qualifiedVersion)
}
}
@ -333,7 +328,6 @@ tasks.register("assembleTarDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -350,7 +344,6 @@ tasks.register("assembleOssTarDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -365,7 +358,6 @@ tasks.register("assembleZipDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -382,7 +374,6 @@ tasks.register("assembleOssZipDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -417,7 +408,7 @@ def qaBuildPath = "${buildDir}/qa/integration"
def qaVendorPath = "${qaBuildPath}/vendor"
tasks.register("installIntegrationTestGems") {
dependsOn unpackTarDistribution
dependsOn assembleTarDistribution
def gemfilePath = file("${projectDir}/qa/integration/Gemfile")
inputs.files gemfilePath
inputs.files file("${projectDir}/qa/integration/integration_tests.gemspec")
@ -440,23 +431,13 @@ tasks.register("downloadFilebeat") {
doLast {
download {
String beatVersion = project.ext.get("stackArtifactSuffix")
String downloadedFilebeatName = "filebeat-${beatVersion}-${project.ext.get("beatsArchitecture")}"
String beatsVersion = project.ext.get("artifactApiVersion")
String downloadedFilebeatName = "filebeat-${beatsVersion}-${project.ext.get("beatsArchitecture")}"
project.ext.set("unpackedFilebeatName", downloadedFilebeatName)
if (project.ext.get("useProjectSpecificArtifactSnapshotUrl")) {
def res = SnapshotArtifactURLs.packageUrls("beats", beatVersion, downloadedFilebeatName)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("filebeatDownloadLocation", "${projectDir}/build/${downloadedFilebeatName}.tar.gz")
} else {
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/beats/packages/${downloadedFilebeatName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: buildUrls["package"]["url"])
project.ext.set("filebeatDownloadLocation", "${projectDir}/build/${downloadedFilebeatName}.tar.gz")
}
def res = SnapshotArtifactURLs.packageUrls("beats", beatsVersion, downloadedFilebeatName)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("filebeatDownloadLocation", "${projectDir}/build/${downloadedFilebeatName}.tar.gz")
src project.ext.filebeatSnapshotUrl
onlyIfNewer true
@ -492,20 +473,12 @@ tasks.register("checkEsSHA") {
description "Download ES version remote's fingerprint file"
doLast {
String esVersion = project.ext.get("stackArtifactSuffix")
String esVersion = project.ext.get("artifactApiVersion")
String downloadedElasticsearchName = "elasticsearch-${esVersion}-${project.ext.get("esArchitecture")}"
String remoteSHA
if (project.ext.get("useProjectSpecificArtifactSnapshotUrl")) {
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
remoteSHA = res.packageShaUrl
} else {
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/elasticsearch/packages/${downloadedElasticsearchName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
remoteSHA = buildUrls.package.sha_url.toURL().text
}
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
remoteSHA = res.packageShaUrl
def localESArchive = new File("${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
if (localESArchive.exists()) {
@ -539,25 +512,14 @@ tasks.register("downloadEs") {
doLast {
download {
String esVersion = project.ext.get("stackArtifactSuffix")
String esVersion = project.ext.get("artifactApiVersion")
String downloadedElasticsearchName = "elasticsearch-${esVersion}-${project.ext.get("esArchitecture")}"
project.ext.set("unpackedElasticsearchName", "elasticsearch-${esVersion}")
if (project.ext.get("useProjectSpecificArtifactSnapshotUrl")) {
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("elasticsearchDownloadLocation", "${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
} else {
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/elasticsearch/packages/${downloadedElasticsearchName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: buildUrls["package"]["url"])
project.ext.set("elasticsearchDownloadLocation", "${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
}
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("elasticsearchDownloadLocation", "${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
src project.ext.elasticsearchSnapshotURL
onlyIfNewer true
@ -597,7 +559,8 @@ project(":logstash-integration-tests") {
systemProperty 'org.logstash.integration.specs', rubyIntegrationSpecs
environment "FEATURE_FLAG", System.getenv('FEATURE_FLAG')
workingDir integrationTestPwd
dependsOn = [installIntegrationTestGems, copyProductionLog4jConfiguration]
dependsOn installIntegrationTestGems
dependsOn copyProductionLog4jConfiguration
}
}
@ -740,46 +703,66 @@ class JDKDetails {
return createElasticCatalogDownloadUrl()
}
private String createElasticCatalogDownloadUrl() {
// Ask details to catalog https://jvm-catalog.elastic.co/jdk and return the url to download the JDK
// arch x86_64 never used, only aarch64 if macos
// throws an error if the local version in versions.yml doesn't match the latest from the JVM catalog.
void checkLocalVersionMatchingLatest() {
// retrieve the metadata from remote
def url = "https://jvm-catalog.elastic.co/jdk/latest_adoptiumjdk_${major}_${osName}"
// Append the cpu's arch only if Mac on aarch64; the other OSes don't have a CPU extension
if (arch == "aarch64") {
url += "_${arch}"
}
println "Retrieving JDK from catalog..."
def catalogMetadataUrl = URI.create(url).toURL()
def catalogConnection = catalogMetadataUrl.openConnection()
catalogConnection.requestMethod = 'GET'
assert catalogConnection.responseCode == 200
def metadataRetrieved = catalogConnection.content.text
println "Retrieved!"
def catalogMetadata = new JsonSlurper().parseText(metadataRetrieved)
return catalogMetadata.url
}
private String createAdoptDownloadUrl() {
String releaseName = major > 8 ?
"jdk-${revision}+${build}" :
"jdk${revision}u${build}"
String vendorOsName = vendorOsName(osName)
switch (vendor) {
case "adoptium":
return "https://api.adoptium.net/v3/binary/version/${releaseName}/${vendorOsName}/${arch}/jdk/hotspot/normal/adoptium"
default:
throw RuntimeException("Can't handle vendor: ${vendor}")
if (catalogMetadata.version != revision || catalogMetadata.revision != build) {
throw new GradleException("Found new jdk version. Please update version.yml to ${catalogMetadata.version} build ${catalogMetadata.revision}")
}
}
private String vendorOsName(String osName) {
if (osName == "darwin")
return "mac"
return osName
private String createElasticCatalogDownloadUrl() {
// Ask details to catalog https://jvm-catalog.elastic.co/jdk and return the url to download the JDK
// arch x86_64 is default, aarch64 if macos or linux
def url = "https://jvm-catalog.elastic.co/jdk/adoptiumjdk-${revision}+${build}-${osName}"
// Append the cpu's arch only if not x86_64, which is the default
if (arch == "aarch64") {
url += "-${arch}"
}
println "Retrieving JDK from catalog..."
def catalogMetadataUrl = URI.create(url).toURL()
def catalogConnection = catalogMetadataUrl.openConnection()
catalogConnection.requestMethod = 'GET'
if (catalogConnection.responseCode != 200) {
println "Can't find adoptiumjdk ${revision} for ${osName} on Elastic JVM catalog"
throw new GradleException("JVM not present on catalog")
}
def metadataRetrieved = catalogConnection.content.text
println "Retrieved!"
def catalogMetadata = new JsonSlurper().parseText(metadataRetrieved)
validateMetadata(catalogMetadata)
return catalogMetadata.url
}
// Verify that the artifact metadata corresponds to the request; if not, throws an error
private void validateMetadata(Map metadata) {
if (metadata.version != revision) {
throw new GradleException("Expected to retrieve a JDK for version ${revision} but received: ${metadata.version}")
}
if (!isSameArchitecture(metadata.architecture)) {
throw new GradleException("Expected to retrieve a JDK for architecture ${arch} but received: ${metadata.architecture}")
}
}
private boolean isSameArchitecture(String metadataArch) {
if (arch == 'x64') {
return metadataArch == 'x86_64'
}
return metadataArch == arch
}
private String parseJdkArchitecture(String jdkArch) {
@ -791,16 +774,22 @@ class JDKDetails {
return "aarch64"
break
default:
throw RuntimeException("Can't handle CPU architechture: ${jdkArch}")
throw new GradleException("Can't handle CPU architechture: ${jdkArch}")
}
}
}
tasks.register("lint") {
// Calls rake's 'lint' task
description = "Lint Ruby source files. Use -PrubySource=file1.rb,file2.rb to specify files"
dependsOn installDevelopmentGems
doLast {
rake(projectDir, buildDir, 'lint:report')
if (project.hasProperty("rubySource")) {
// Split the comma-separated files and pass them as separate arguments
def files = project.property("rubySource").split(",")
rake(projectDir, buildDir, "lint:report", *files)
} else {
rake(projectDir, buildDir, "lint:report")
}
}
}
@ -840,6 +829,15 @@ tasks.register("downloadJdk", Download) {
}
}
tasks.register("checkNewJdkVersion") {
def versionYml = new Yaml().load(new File("$projectDir/versions.yml").text)
// use Linux x86_64 as canary platform
def jdkDetails = new JDKDetails(versionYml, "linux", "x86_64")
// throws Gradle exception if local and remote doesn't match
jdkDetails.checkLocalVersionMatchingLatest()
}
tasks.register("deleteLocalJdk", Delete) {
// CLI project properties: -Pjdk_bundle_os=[windows|linux|darwin]
String osName = selectOsType()

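The `configureArtifactInfo` rewrite above drops the artifacts-api builds lookup and instead walks a fallback chain of branch manifests — `X.Y`, then `X.x`, then the previous branch — taking the first manifest that resolves. A hedged Python sketch of that resolution order (same manifest URL as the Gradle code):

```python
import json
from urllib.request import urlopen

def resolve_snapshot_version(version: str):
    # Fallback chain mirroring the Gradle logic: "X.Y", "X.x", then the
    # previous branch ("(X-1).x" when minor is 0, otherwise "X.(Y-1)").
    major, minor = (int(part) for part in version.split(".")[:2])
    fallback = f"{major - 1}.x" if minor == 0 else f"{major}.{minor - 1}"
    for branch in (f"{major}.{minor}", f"{major}.x", fallback):
        url = f"https://storage.googleapis.com/artifacts-api/snapshots/{branch}.json"
        try:
            with urlopen(url) as response:
                return json.load(response)["version"]
        except Exception as exc:
            # Keep trying the next fallback, as the build script does.
            print(f"Failed to fetch branch {branch} from {url}: {exc}")
    return None
```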
View file

@ -32,6 +32,8 @@ spec:
- resource:logstash-linux-jdk-matrix-pipeline
- resource:logstash-windows-jdk-matrix-pipeline
- resource:logstash-benchmark-pipeline
- resource:logstash-health-report-tests-pipeline
- resource:logstash-jdk-availability-check-pipeline
# ***********************************
# Declare serverless IT pipeline
@ -140,7 +142,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: buildkite
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -183,7 +185,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: buildkite
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -234,7 +236,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: buildkite
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -292,7 +294,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: buildkite
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -343,7 +345,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: buildkite
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -386,7 +388,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: buildkite
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -405,6 +407,7 @@ spec:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
@ -436,7 +439,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: buildkite
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -452,7 +455,8 @@ spec:
build_pull_requests: false
build_tags: false
trigger_mode: code
filter_condition: 'build.branch !~ /^backport.*$/'
filter_condition: >-
build.branch !~ /^backport.*$/ && build.branch !~ /^mergify\/bp\/.*$/
filter_enabled: true
cancel_intermediate_builds: false
skip_intermediate_builds: false
@ -491,7 +495,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: buildkite
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -533,7 +537,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: buildkite
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -613,7 +617,7 @@ spec:
kind: Pipeline
metadata:
name: logstash-benchmark-pipeline
description: ':logstash: The Benchmark pipeline'
description: ':running: The Benchmark pipeline for snapshot version'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/benchmark_pipeline.yml"
@ -642,4 +646,160 @@ spec:
# *******************************
# SECTION END: Benchmark pipeline
# *******************************
# ***********************************
# SECTION START: Benchmark Marathon
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-benchmark-marathon-pipeline
description: Buildkite pipeline for benchmarking multi-version
links:
- title: 'Logstash Benchmark Marathon'
url: https://buildkite.com/elastic/logstash-benchmark-marathon-pipeline
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: logstash-benchmark-marathon-pipeline
description: ':running: The Benchmark Marathon for multi-version'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/benchmark_marathon_pipeline.yml"
maximum_timeout_in_minutes: 480
provider_settings:
trigger_mode: none # don't trigger jobs from github activity
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'false'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
# *******************************
# SECTION END: Benchmark Marathon
# *******************************
# ***********************************
# Declare Health Report Tests pipeline
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-health-report-tests-pipeline
description: Buildkite pipeline for the Logstash Health Report Tests
links:
- title: ':logstash: Logstash Health Report Tests (Daily, Auto) pipeline'
url: https://buildkite.com/elastic/logstash-health-report-tests-pipeline
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: logstash-health-report-tests-pipeline
description: ':logstash: Logstash Health Report tests :pipeline:'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/health_report_tests_pipeline.yml"
maximum_timeout_in_minutes: 30 # usually tests last max ~17mins
provider_settings:
trigger_mode: none # don't trigger jobs from github activity
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
schedules:
Daily Health Report tests on main branch:
branch: main
cronline: 30 20 * * *
message: Daily trigger of Health Report Tests Pipeline
# *******************************
# SECTION END: Health Report Tests pipeline
# *******************************
# ***********************************
# Declare JDK check pipeline
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-jdk-availability-check-pipeline
description: ":logstash: check availability of new JDK version"
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: logstash-jdk-availability-check-pipeline
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/jdk_availability_check_pipeline.yml"
maximum_timeout_in_minutes: 10
provider_settings:
trigger_mode: none # don't trigger jobs from github activity
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
schedules:
Weekly JDK availability check (main):
branch: main
cronline: 0 2 * * 1 # every Monday@2AM UTC
message: Weekly trigger of JDK update availability pipeline per branch
env:
PIPELINES_TO_TRIGGER: 'logstash-jdk-availability-check-pipeline'
Weekly JDK availability check (8.x):
branch: 8.x
cronline: 0 2 * * 1 # every Monday@2AM UTC
message: Weekly trigger of JDK update availability pipeline per branch
env:
PIPELINES_TO_TRIGGER: 'logstash-jdk-availability-check-pipeline'
# *******************************
# SECTION END: JDK check pipeline
# *******************************

View file

@ -19,7 +19,7 @@ function get_package_type {
# uses at least 1g of memory. If we don't do this we can get OOM issues when
# installing gems. See https://github.com/elastic/logstash/issues/5179
export JRUBY_OPTS="-J-Xmx1g"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.console=plain -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export OSS=true
if [ -n "$BUILD_JAVA_HOME" ]; then

View file

@ -1,17 +0,0 @@
{
"notice": "This file is not maintained outside of the main branch and should only be used for tooling.",
"branches": [
{
"branch": "main"
},
{
"branch": "8.13"
},
{
"branch": "8.14"
},
{
"branch": "7.17"
}
]
}

View file

@ -0,0 +1,7 @@
#!/usr/bin/env bash
set -eo pipefail
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
echo "Checking local JDK version against latest remote from JVM catalog"
./gradlew checkNewJdkVersion

View file

@ -6,7 +6,7 @@ set -x
# uses at least 1g of memory. If we don't do this we can get OOM issues when
# installing gems. See https://github.com/elastic/logstash/issues/5179
export JRUBY_OPTS="-J-Xmx1g"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.console=plain -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
if [ -n "$BUILD_JAVA_HOME" ]; then
GRADLE_OPTS="$GRADLE_OPTS -Dorg.gradle.java.home=$BUILD_JAVA_HOME"
@ -15,7 +15,6 @@ fi
# Can run either a specific flavor, or all flavors -
# eg `ci/acceptance_tests.sh oss` will run tests for open source container
# `ci/acceptance_tests.sh full` will run tests for the default container
# `ci/acceptance_tests.sh ubi8` will run tests for the ubi8 based container
# `ci/acceptance_tests.sh wolfi` will run tests for the wolfi based container
# `ci/acceptance_tests.sh` will run tests for all containers
SELECTED_TEST_SUITE=$1
@ -49,23 +48,13 @@ if [[ $SELECTED_TEST_SUITE == "oss" ]]; then
elif [[ $SELECTED_TEST_SUITE == "full" ]]; then
echo "--- Building $SELECTED_TEST_SUITE docker images"
cd $LS_HOME
rake artifact:docker
rake artifact:build_docker_full
echo "--- Acceptance: Installing dependencies"
cd $QA_DIR
bundle install
echo "--- Acceptance: Running the tests"
bundle exec rspec docker/spec/full/*_spec.rb
elif [[ $SELECTED_TEST_SUITE == "ubi8" ]]; then
echo "--- Building $SELECTED_TEST_SUITE docker images"
cd $LS_HOME
rake artifact:docker_ubi8
echo "--- Acceptance: Installing dependencies"
cd $QA_DIR
bundle install
echo "--- Acceptance: Running the tests"
bundle exec rspec docker/spec/ubi8/*_spec.rb
elif [[ $SELECTED_TEST_SUITE == "wolfi" ]]; then
echo "--- Building $SELECTED_TEST_SUITE docker images"
cd $LS_HOME

View file

@ -19,24 +19,15 @@ if [[ $1 = "setup" ]]; then
exit 0
elif [[ $1 == "split" ]]; then
cd qa/integration
glob1=(specs/*spec.rb)
glob2=(specs/**/*spec.rb)
all_specs=("${glob1[@]}" "${glob2[@]}")
# Source shared function for splitting integration tests
source "$(dirname "${BASH_SOURCE[0]}")/partition-files.lib.sh"
specs0=${all_specs[@]::$((${#all_specs[@]} / 2 ))}
specs1=${all_specs[@]:$((${#all_specs[@]} / 2 ))}
cd ../..
if [[ $2 == 0 ]]; then
echo "Running the first half of integration specs: $specs0"
./gradlew runIntegrationTests -PrubyIntegrationSpecs="$specs0" --console=plain
elif [[ $2 == 1 ]]; then
echo "Running the second half of integration specs: $specs1"
./gradlew runIntegrationTests -PrubyIntegrationSpecs="$specs1" --console=plain
else
echo "Error, must specify 0 or 1 after the split. For example ci/integration_tests.sh split 0"
exit 1
fi
index="${2:?index}"
count="${3:-2}"
specs=($(cd qa/integration; partition_files "${index}" "${count}" < <(find specs -name '*_spec.rb') ))
echo "Running integration tests partition[${index}] of ${count}: ${specs[*]}"
./gradlew runIntegrationTests -PrubyIntegrationSpecs="${specs[*]}" --console=plain
elif [[ ! -z $@ ]]; then
echo "Running integration tests 'rspec $@'"

View file

@ -1,13 +1,15 @@
{
"releases": {
"5.x": "5.6.16",
"6.x": "6.8.23",
"7.x": "7.17.22",
"8.x": "8.14.1"
"7.current": "7.17.28",
"8.previous": "8.17.5",
"8.current": "8.18.0"
},
"snapshots": {
"7.x": "7.17.23-SNAPSHOT",
"8.x": "8.14.2-SNAPSHOT",
"main": "8.15.0-SNAPSHOT"
"7.current": "7.17.29-SNAPSHOT",
"8.previous": "8.17.6-SNAPSHOT",
"8.current": "8.18.1-SNAPSHOT",
"8.next": "8.19.0-SNAPSHOT",
"9.next": "9.0.1-SNAPSHOT",
"main": "9.1.0-SNAPSHOT"
}
}

78
ci/partition-files.lib.sh Executable file
View file

@ -0,0 +1,78 @@
#!/bin/bash
# partition_files returns a consistent partition of the filenames given on stdin
# Usage: partition_files <partition_index> <partition_count=2> < <(ls files)
# partition_index: the zero-based index of the partition to select `[0,partition_count)`
# partition_count: the number of partitions `[2,#files]`
partition_files() (
set -e
local files
# ensure files is consistently sorted and distinct
IFS=$'\n' read -ra files -d '' <<<"$(cat - | sort | uniq)" || true
local partition_index="${1:?}"
local partition_count="${2:?}"
_error () { >&2 echo "ERROR: ${1:-UNSPECIFIED}"; exit 1; }
# safeguard against nonsense invocations
if (( ${#files[@]} < 2 )); then
_error "#files(${#files[@]}) must be at least 2 in order to partition"
elif ( ! [[ "${partition_count}" =~ ^[0-9]+$ ]] ) || (( partition_count < 2 )) || (( partition_count > ${#files[@]})); then
_error "partition_count(${partition_count}) must be a number that is at least 2 and not greater than #files(${#files[@]})"
elif ( ! [[ "${partition_index}" =~ ^[0-9]+$ ]] ) || (( partition_index < 0 )) || (( partition_index >= $partition_count )) ; then
_error "partition_index(${partition_index}) must be a number that is greater 0 and less than partition_count(${partition_count})"
fi
# round-robin: emit the files that fall in our selected partition
for index in "${!files[@]}"; do
partition="$(( index % partition_count ))"
if (( partition == partition_index )); then
echo "${files[$index]}"
fi
done
)
if [[ "$0" == "${BASH_SOURCE[0]}" ]]; then
if [[ "$1" == "test" ]]; then
status=0
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
file_list="$( cd "${SCRIPT_DIR}"; find . -type f )"
# for any legal partitioning into N partitions, we ensure that
# the combined output of `partition_files I N` where `I` is all numbers in
# the range `[0,N)` produces no repeats and no omissions, even if the
# input list is not consistently ordered.
for n in $(seq 2 $(wc -l <<<"${file_list}")); do
result=""
for i in $(seq 0 $(( n - 1 ))); do
for file in $(partition_files $i $n <<<"$( shuf <<<"${file_list}" )"); do
result+="${file}"$'\n'
done
done
repeated="$( uniq --repeated <<<"$( sort <<<"${result}" )" )"
if (( $(printf "${repeated}" | wc -l) > 0 )); then
status=1
echo "[n=${n}]FAIL(repeated):"$'\n'"${repeated}"
fi
missing=$( comm -23 <(sort <<<"${file_list}") <( sort <<<"${result}" ) )
if (( $(printf "${missing}" | wc -l) > 0 )); then
status=1
echo "[n=${n}]FAIL(omitted):"$'\n'"${missing}"
fi
done
if (( status > 0 )); then
echo "There were failures. The input list was:"
echo "${file_list}"
fi
exit "${status}"
else
partition_files $@
fi
fi

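The helper's guarantee — no repeats, no omissions, regardless of input order — comes from sorting first and then assigning file `i` to partition `i mod count`. The same contract in a few lines of Python, for comparison:

```python
def partition_files(files, partition_index: int, partition_count: int):
    # Sort and de-duplicate for a consistent order, then keep every file
    # whose index lands in our partition (round-robin assignment).
    ordered = sorted(set(files))
    return [f for i, f in enumerate(ordered) if i % partition_count == partition_index]

specs = ["b_spec.rb", "a_spec.rb", "c_spec.rb"]
halves = partition_files(specs, 0, 2) + partition_files(specs, 1, 2)
assert sorted(halves) == sorted(set(specs))  # disjoint and exhaustive
```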
View file

@ -28,7 +28,9 @@ build_logstash() {
}
index_test_data() {
curl -X POST -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/$INDEX_NAME/_bulk" -H 'Content-Type: application/json' --data-binary @"$CURRENT_DIR/test_data/book.json"
curl -X POST -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/$INDEX_NAME/_bulk" \
-H 'x-elastic-product-origin: logstash' \
-H 'Content-Type: application/json' --data-binary @"$CURRENT_DIR/test_data/book.json"
}
# $1: check function


@@ -7,7 +7,8 @@ export PIPELINE_NAME='gen_es'
# update pipeline and check response code
index_pipeline() {
RESP_CODE=$(curl -s -w "%{http_code}" -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/_logstash/pipeline/$1" -H 'Content-Type: application/json' -d "$2")
RESP_CODE=$(curl -s -w "%{http_code}" -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/_logstash/pipeline/$1" \
-H 'x-elastic-product-origin: logstash' -H 'Content-Type: application/json' -d "$2")
if [[ $RESP_CODE -ge '400' ]]; then
echo "failed to update pipeline for Central Pipeline Management. Got $RESP_CODE from Elasticsearch"
exit 1
@@ -34,7 +35,7 @@ check_plugin() {
}
delete_pipeline() {
curl -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -X DELETE "$ES_ENDPOINT/_logstash/pipeline/$PIPELINE_NAME" -H 'Content-Type: application/json';
curl -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' -X DELETE "$ES_ENDPOINT/_logstash/pipeline/$PIPELINE_NAME" -H 'Content-Type: application/json';
}
cpm_clean_up_and_get_result() {


@@ -6,10 +6,12 @@ source ./$(dirname "$0")/common.sh
deploy_ingest_pipeline() {
PIPELINE_RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/_ingest/pipeline/integration-logstash_test.events-default" \
-H 'Content-Type: application/json' \
-H 'x-elastic-product-origin: logstash' \
--data-binary @"$CURRENT_DIR/test_data/ingest_pipeline.json")
TEMPLATE_RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/_index_template/logs-serverless-default-template" \
-H 'Content-Type: application/json' \
-H 'x-elastic-product-origin: logstash' \
--data-binary @"$CURRENT_DIR/test_data/index_template.json")
# ingest pipeline is likely to be there from the last run
@@ -29,7 +31,7 @@ check_integration_filter() {
}
get_doc_msg_length() {
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/logs-$INDEX_NAME.004-default/_search?size=1" | jq '.hits.hits[0]._source.message | length'
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/logs-$INDEX_NAME.004-default/_search?size=1" -H 'x-elastic-product-origin: logstash' | jq '.hits.hits[0]._source.message | length'
}
# ensure no double run of ingest pipeline


@@ -9,7 +9,7 @@ check_named_index() {
}
get_data_stream_count() {
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/logs-$INDEX_NAME.001-default/_count" | jq '.count // 0'
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' "$ES_ENDPOINT/logs-$INDEX_NAME.001-default/_count" | jq '.count // 0'
}
compare_data_stream_count() {


@@ -10,7 +10,7 @@ export EXIT_CODE="0"
create_pipeline() {
RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME" \
-H 'Content-Type: application/json' -H 'kbn-xsrf: logstash' \
-H 'Content-Type: application/json' -H 'kbn-xsrf: logstash' -H 'x-elastic-product-origin: logstash' \
--data-binary @"$CURRENT_DIR/test_data/$PIPELINE_NAME.json")
if [[ RESP_CODE -ge '400' ]]; then
@@ -20,7 +20,8 @@ create_pipeline() {
}
get_pipeline() {
RESP_BODY=$(curl -s -X GET -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME")
RESP_BODY=$(curl -s -X GET -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' \
"$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME") \
SOURCE_BODY=$(cat "$CURRENT_DIR/test_data/$PIPELINE_NAME.json")
RESP_PIPELINE_NAME=$(echo "$RESP_BODY" | jq -r '.id')
@@ -41,7 +42,8 @@ get_pipeline() {
}
list_pipeline() {
RESP_BODY=$(curl -s -X GET -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$KB_ENDPOINT/api/logstash/pipelines" | jq --arg name "$PIPELINE_NAME" '.pipelines[] | select(.id==$name)' )
RESP_BODY=$(curl -s -X GET -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' \
"$KB_ENDPOINT/api/logstash/pipelines" | jq --arg name "$PIPELINE_NAME" '.pipelines[] | select(.id==$name)' )
if [[ -z "$RESP_BODY" ]]; then
EXIT_CODE=$(( EXIT_CODE + 1 ))
echo "Fail to list pipeline."
@@ -49,7 +51,8 @@ list_pipeline() {
}
delete_pipeline() {
RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X DELETE -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME" \
RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X DELETE -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' \
"$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME" \
-H 'Content-Type: application/json' -H 'kbn-xsrf: logstash' \
--data-binary @"$CURRENT_DIR/test_data/$PIPELINE_NAME.json")


@@ -40,7 +40,7 @@ stop_metricbeat() {
}
get_monitor_count() {
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/$INDEX_NAME/_count" | jq '.count // 0'
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' "$ES_ENDPOINT/$INDEX_NAME/_count" | jq '.count // 0'
}
compare_monitor_count() {


@@ -6,7 +6,7 @@ set -ex
source ./$(dirname "$0")/common.sh
get_monitor_count() {
curl -s -H "Authorization: ApiKey $LS_ROLE_API_KEY_ENCODED" "$ES_ENDPOINT/.monitoring-logstash-7-*/_count" | jq '.count'
curl -s -H "Authorization: ApiKey $LS_ROLE_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' "$ES_ENDPOINT/.monitoring-logstash-7-*/_count" | jq '.count'
}
compare_monitor_count() {


@@ -16,10 +16,6 @@
##
################################################################
## GC configuration
11-13:-XX:+UseConcMarkSweepGC
11-13:-XX:CMSInitiatingOccupancyFraction=75
11-13:-XX:+UseCMSInitiatingOccupancyOnly
## Locale
# Set the locale language
@@ -34,7 +30,7 @@
## basic
# set the I/O temp directory
#-Djava.io.tmpdir=$HOME
#-Djava.io.tmpdir=${HOME}
# set to headless, just in case
-Djava.awt.headless=true
@@ -59,11 +55,7 @@
#-XX:HeapDumpPath=${LOGSTASH_HOME}/heapdump.hprof
## GC logging
#-Xlog:gc*,gc+age=trace,safepoint:file=@loggc@:utctime,pid,tags:filecount=32,filesize=64m
# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${LS_GC_LOG_FILE}
#-Xlog:gc*,gc+age=trace,safepoint:file=${LS_GC_LOG_FILE}:utctime,pid,tags:filecount=32,filesize=64m
# Entropy source for randomness
-Djava.security.egd=file:/dev/urandom
@@ -79,11 +71,11 @@
# text values with sizes greater than this limit will be treated as invalid.
# This value should be higher than `logstash.jackson.stream-read-constraints.max-number-length`.
# The jackson library defaults to 20000000 or 20MB, whereas Logstash defaults to 200MB or 200000000 characters.
-Dlogstash.jackson.stream-read-constraints.max-string-length=200000000
#-Dlogstash.jackson.stream-read-constraints.max-string-length=200000000
#
# Sets the maximum number length (in chars or bytes, depending on input context).
# The jackson library defaults to 1000, whereas Logstash defaults to 10000.
-Dlogstash.jackson.stream-read-constraints.max-number-length=10000
#-Dlogstash.jackson.stream-read-constraints.max-number-length=10000
#
# Sets the maximum nesting depth. The depth is a count of objects and arrays that have not
# been closed, `{` and `[` respectively.
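
Since the two `-Dlogstash.jackson.*` defaults above are now commented out, Logstash falls back to its built-in limits. A hedged sketch of raising one of them at startup without editing jvm.options (the 300000000 figure and the pipeline path are arbitrary examples):

# LS_JAVA_OPTS is appended to the JVM options Logstash computes at startup
export LS_JAVA_OPTS="-Dlogstash.jackson.stream-read-constraints.max-string-length=300000000"
bin/logstash -f my-pipeline.conf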


@@ -154,7 +154,7 @@ appender.deprecation_rolling.policies.size.size = 100MB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 30
logger.deprecation.name = org.logstash.deprecation, deprecation
logger.deprecation.name = org.logstash.deprecation
logger.deprecation.level = WARN
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_plain_rolling
logger.deprecation.additivity = false


@@ -181,38 +181,6 @@
#
# api.auth.basic.password_policy.mode: WARN
#
# ------------ Module Settings ---------------
# Define modules here. Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
# - name: MODULE_NAME
# var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
# var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have a label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id: <identifier>
#
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
@@ -314,13 +282,13 @@
# * json
#
# log.format: plain
# log.format.json.fix_duplicate_message_fields: false
# log.format.json.fix_duplicate_message_fields: true
#
# path.logs:
#
# ------------ Other Settings --------------
#
# Allow or block running Logstash as superuser (default: true)
# Allow or block running Logstash as superuser (default: true). Windows is excluded from this check.
# allow_superuser: false
#
# Where to find custom plugins
@@ -331,13 +299,15 @@
# pipeline.separate_logs: false
#
# Determine where to allocate memory buffers, for plugins that leverage them.
# Default to direct, optionally can be switched to heap to select Java heap space.
# pipeline.buffer.type: direct
# Defaults to heap, but can be switched to direct if you prefer using direct memory space instead.
# pipeline.buffer.type: heap
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
# Flag to allow the legacy internal monitoring (default: false)
#xpack.monitoring.allow_legacy_collection: false
#xpack.monitoring.enabled: false
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
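
For anyone depending on the previous defaults shown above, a sketch of restoring them explicitly, assuming the stock config layout (the values are illustrative, not a recommendation):

# opt back into direct buffers and legacy internal monitoring
cat >> config/logstash.yml <<'EOF'
pipeline.buffer.type: direct
xpack.monitoring.allow_legacy_collection: true
xpack.monitoring.enabled: true
EOF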


@@ -1,232 +0,0 @@
#!/usr/bin/env python3
"""Cherry pick and backport a PR"""
from __future__ import print_function
from builtins import input
import sys
import os
import argparse
from os.path import expanduser
import re
from subprocess import check_call, call, check_output
import requests
import json
usage = """
Example usage:
./devtools/backport 7.16 2565 6490604aa0cf7fa61932a90700e6ca988fc8a527
In case of backporting errors, fix them, then run
git cherry-pick --continue
./devtools/backport 7.16 2565 6490604aa0cf7fa61932a90700e6ca988fc8a527 --continue
This script does the following:
* cleans up both from_branch and to_branch (warning: drops local changes)
* creates a temporary branch named something like "branch_2565"
* calls the git cherry-pick command in this branch
* after fixing the merge errors (if needed), pushes the branch to your
remote
* it will attempt to create a PR for you using the GitHub API, but requires
the GitHub token, with the public_repo scope, available in `~/.elastic/github.token`.
Keep in mind this token also has to be authorized for the Elastic organization
and to work with SSO.
(see https://help.github.com/en/articles/authorizing-a-personal-access-token-for-use-with-saml-single-sign-on)
Note that you need to take the commit hashes from `git log` on the
from_branch; copying the IDs from GitHub doesn't work if the PR was
squashed.
"""
def main():
"""Main"""
parser = argparse.ArgumentParser(
description="Creates a PR for cherry-picking commits",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=usage)
parser.add_argument("to_branch",
help="To branch (e.g 7.x)")
parser.add_argument("pr_number",
help="The PR number being merged (e.g. 2345)")
parser.add_argument("commit_hashes", metavar="hash", nargs="*",
help="The commit hashes to cherry pick." +
" You can specify multiple.")
parser.add_argument("--yes", action="store_true",
help="Assume yes. Warning: discards local changes.")
parser.add_argument("--continue", action="store_true",
help="Continue after fixing merging errors.")
parser.add_argument("--from_branch", default="main",
help="From branch")
parser.add_argument("--diff", action="store_true",
help="Display the diff before pushing the PR")
parser.add_argument("--remote", default="",
help="Which remote to push the backport branch to")
#parser.add_argument("--zube-team", default="",
# help="Team the PR belongs to")
#parser.add_argument("--keep-backport-label", action="store_true",
# help="Preserve label needs_backport in original PR")
args = parser.parse_args()
print(args)
create_pr(parser, args)
def create_pr(parser, args):
info("Checking if GitHub API token is available in `~/.elastic/github.token`")
token = get_github_token()
tmp_branch = "backport_{}_{}".format(args.pr_number, args.to_branch)
if not vars(args)["continue"]:
if not args.yes and input("This will destroy all local changes. " +
"Continue? [y/n]: ") != "y":
return 1
info("Destroying local changes...")
check_call("git reset --hard", shell=True)
check_call("git clean -df", shell=True)
check_call("git fetch", shell=True)
info("Checkout of {} to backport from....".format(args.from_branch))
check_call("git checkout {}".format(args.from_branch), shell=True)
check_call("git pull", shell=True)
info("Checkout of {} to backport to...".format(args.to_branch))
check_call("git checkout {}".format(args.to_branch), shell=True)
check_call("git pull", shell=True)
info("Creating backport branch {}...".format(tmp_branch))
call("git branch -D {} > /dev/null".format(tmp_branch), shell=True)
check_call("git checkout -b {}".format(tmp_branch), shell=True)
if len(args.commit_hashes) == 0:
if token:
session = github_session(token)
base = "https://api.github.com/repos/elastic/logstash"
original_pr = session.get(base + "/pulls/" + args.pr_number).json()
merge_commit = original_pr['merge_commit_sha']
if not merge_commit:
info("Could not auto resolve merge commit - PR isn't merged yet")
return 1
info("Merge commit detected from PR: {}".format(merge_commit))
commit_hashes = merge_commit
else:
info("GitHub API token not available. " +
"Please manually specify commit hash(es) argument(s)\n")
parser.print_help()
return 1
else:
commit_hashes = "{}".format(" ").join(args.commit_hashes)
info("Cherry-picking {}".format(commit_hashes))
if call("git cherry-pick -x {}".format(commit_hashes), shell=True) != 0:
info("Looks like you have cherry-pick errors.")
info("Fix them, then run: ")
info(" git cherry-pick --continue")
info(" {} --continue".format(" ".join(sys.argv)))
return 1
if len(check_output("git status -s", shell=True).strip()) > 0:
info("Looks like you have uncommitted changes." +
" Please execute first: git cherry-pick --continue")
return 1
if len(check_output("git log HEAD...{}".format(args.to_branch),
shell=True).strip()) == 0:
info("No commit to push")
return 1
if args.diff:
call("git diff {}".format(args.to_branch), shell=True)
if input("Continue? [y/n]: ") != "y":
info("Aborting cherry-pick.")
return 1
info("Ready to push branch.")
remote = args.remote
if not remote:
remote = input("To which remote should I push? (your fork): ")
info("Pushing branch {} to remote {}".format(tmp_branch, remote))
call("git push {} :{} > /dev/null".format(remote, tmp_branch), shell=True)
check_call("git push --set-upstream {} {}".format(remote, tmp_branch), shell=True)
if not token:
info("GitHub API token not available.\n" +
"Manually create a PR by following this URL: \n\t" +
"https://github.com/elastic/logstash/compare/{}...{}:{}?expand=1"
.format(args.to_branch, remote, tmp_branch))
else:
info("Automatically creating a PR for you...")
session = github_session(token)
base = "https://api.github.com/repos/elastic/logstash"
original_pr = session.get(base + "/pulls/" + args.pr_number).json()
# get the github username from the remote where we pushed
remote_url = check_output("git remote get-url {}".format(remote), shell=True)
remote_user = re.search("github.com[:/](.+)/logstash", str(remote_url)).group(1)
# create PR
request = session.post(base + "/pulls", json=dict(
title="Backport PR #{} to {}: {}".format(args.pr_number, args.to_branch, original_pr["title"]),
head=remote_user + ":" + tmp_branch,
base=args.to_branch,
body="**Backport PR #{} to {} branch, original message:**\n\n---\n\n{}"
.format(args.pr_number, args.to_branch, original_pr["body"])
))
if request.status_code > 299:
info("Creating PR failed: {}".format(request.json()))
sys.exit(1)
new_pr = request.json()
# add labels
labels = ["backport"]
# get the version (vX.Y.Z) we are backporting to
version = get_version(os.getcwd())
if version:
labels.append(version)
session.post(
base + "/issues/{}/labels".format(new_pr["number"]), json=labels)
"""
if not args.keep_backport_label:
# remove needs backport label from the original PR
session.delete(base + "/issues/{}/labels/needs_backport".format(args.pr_number))
"""
# Set a version label on the original PR
if version:
session.post(
base + "/issues/{}/labels".format(args.pr_number), json=[version])
info("Done. PR created: {}".format(new_pr["html_url"]))
info("Please go and check it and add the review tags")
def get_version(base_dir):
#pattern = re.compile(r'(const\s|)\w*(v|V)ersion\s=\s"(?P<version>.*)"')
with open(os.path.join(base_dir, "versions.yml"), "r") as f:
for line in f:
if line.startswith('logstash:'):
return "v" + line.split(':')[-1].strip()
#match = pattern.match(line)
#if match:
# return match.group('version')
def get_github_token():
try:
token = open(expanduser("~/.elastic/github.token"), "r").read().strip()
except:
token = False
return token
def github_session(token):
session = requests.Session()
session.headers.update({"Authorization": "token " + token})
return session
def info(msg):
print("\nINFO: {}".format(msg))
if __name__ == "__main__":
sys.exit(main())
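
For reference, the flow this now-removed script automated reduces to a handful of git commands. A minimal bash sketch using the numbers from the usage example above ("myfork" stands in for your own remote):

# manual equivalent of: ./devtools/backport 7.16 2565 6490604aa0cf7fa61932a90700e6ca988fc8a527
git fetch
git checkout 7.16 && git pull
git checkout -b backport_2565_7.16
git cherry-pick -x 6490604aa0cf7fa61932a90700e6ca988fc8a527
# resolve conflicts if needed, then: git cherry-pick --continue
git push --set-upstream myfork backport_2565_7.16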


@@ -1,163 +0,0 @@
#!/usr/bin/env python3
"""Cherry pick and backport a PR"""
from __future__ import print_function
from builtins import input
import sys
import os
import argparse
from os.path import expanduser
import re
from subprocess import check_call, call, check_output
import requests
usage = """
Example usage:
./dev-tools/create local_branch
This script does the following:
* cleans up local_branch (warning: drops local changes)
* rebases the branch against main
* it will attempt to create a PR for you using the GitHub API, but requires
the GitHub token, with the public_repo scope, available in `~/.elastic/github.token`.
Keep in mind this token also has to be authorized for the Elastic organization
and to work with SSO.
(see https://help.github.com/en/articles/authorizing-a-personal-access-token-for-use-with-saml-single-sign-on)
Note that you need to take the commit hashes from `git log` on the
from_branch; copying the IDs from GitHub doesn't work if the PR was
squashed.
"""
def main():
"""Main"""
parser = argparse.ArgumentParser(
description="Creates a new PR from a branch",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=usage)
parser.add_argument("local_branch",
help="Branch to Create a PR for")
parser.add_argument("--to_branch", default="main",
help="Which remote to push the backport branch to")
parser.add_argument("--yes", action="store_true",
help="Assume yes. Warning: discards local changes.")
parser.add_argument("--continue", action="store_true",
help="Continue after fixing merging errors.")
parser.add_argument("--diff", action="store_true",
help="Display the diff before pushing the PR")
parser.add_argument("--remote", default="",
help="Which remote to push the backport branch to")
args = parser.parse_args()
print(args)
create_pr(args)
def create_pr(args):
if not vars(args)["continue"]:
if not args.yes and input("This will destroy all local changes. " +
"Continue? [y/n]: ") != "y":
return 1
info("Destroying local changess...")
check_call("git reset --hard", shell=True)
check_call("git clean -df", shell=True)
#check_call("git fetch", shell=True)
info("Checkout of {} to create a PR....".format(args.local_branch))
check_call("git checkout {}".format(args.local_branch), shell=True)
check_call("git rebase {}".format(args.to_branch), shell=True)
if args.diff:
call("git diff {}".format(args.to_branch), shell=True)
if input("Continue? [y/n]: ") != "y":
info("Aborting PR creation...")
return 1
info("Ready to push branch and create PR...")
remote = args.remote
if not remote:
remote = input("To which remote should I push? (your fork): ")
info("Pushing branch {} to remote {}".format(args.local_branch, remote))
call("git push {} :{} > /dev/null".format(remote, args.local_branch),
shell=True)
check_call("git push --set-upstream {} {}"
.format(remote, args.local_branch), shell=True)
info("Checking if GitHub API token is available in `~/.elastic/github.token`")
try:
token = open(expanduser("~/.elastic/github.token"), "r").read().strip()
except:
token = False
if not token:
info("GitHub API token not available.\n" +
"Manually create a PR by following this URL: \n\t" +
"https://github.com/elastic/logstash/compare/{}...{}:{}?expand=1"
.format(args.to_branch, remote, args.local_branch))
else:
info("Automatically creating a PR for you...")
base = "https://api.github.com/repos/elastic/logstash"
session = requests.Session()
session.headers.update({"Authorization": "token " + token})
# get the github username from the remote where we pushed
remote_url = check_output("git remote get-url {}".format(remote),
shell=True)
remote_user = re.search("github.com[:/](.+)/logstash", str(remote_url)).group(1)
### TODO:
title = input("Title: ")
body = input("Description: ")
# create PR
request = session.post(base + "/pulls", json=dict(
title=title,
head=remote_user + ":" + args.local_branch,
base=args.to_branch,
body=body
))
if request.status_code > 299:
info("Creating PR failed: {}".format(request.json()))
sys.exit(1)
new_pr = request.json()
"""
# add labels
labels = ["backport"]
# get the version we are backported to
version = get_version(os.getcwd())
if version:
labels.append("v" + version)
session.post(
base + "/issues/{}/labels".format(new_pr["number"]), json=labels)
# Set a version label on the original PR
if version:
session.post(
base + "/issues/{}/labels".format(args.pr_number), json=[version])
"""
info("Done. PR created: {}".format(new_pr["html_url"]))
info("Please go and check it and add the review tags")
def get_version(base_dir):
#pattern = re.compile(r'(const\s|)\w*(v|V)ersion\s=\s"(?P<version>.*)"')
with open(os.path.join(base_dir, "versions.yml"), "r") as f:
for line in f:
if line.startswith('logstash:'):
return line.split(':')[-1].strip()
#match = pattern.match(line)
#if match:
# return match.group('version')
def info(msg):
print("\nINFO: {}".format(msg))
if __name__ == "__main__":
sys.exit(main())
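
The PR-creation step both of these removed scripts share boils down to a single GitHub API call. A hedged curl sketch of the same request (the title, head, and body values are placeholders):

# POST /repos/{owner}/{repo}/pulls with the token the scripts read from disk
TOKEN="$(cat ~/.elastic/github.token)"
curl -s -X POST "https://api.github.com/repos/elastic/logstash/pulls" \
  -H "Authorization: token $TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"title":"Backport PR #2565 to 7.16","head":"myuser:backport_2565_7.16","base":"7.16","body":"Description"}'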

Some files were not shown because too many files have changed in this diff.