Compare commits


563 commits

Author SHA1 Message Date
Karen Metts
b519cf4213
Doc: Remove local k8s files (#17547) 2025-04-17 19:20:41 -04:00
Karen Metts
f91f5a692d
Remove ADOC preview link (#17496) 2025-04-17 12:15:45 -04:00
João Duarte
005358ffb4
set major-version to "9.x" used only for the package installation section (#17562)
closes #17561
2025-04-17 17:13:50 +01:00
Victor Martinez
6646400637
mergify: support 8.19 and remove support for 8.x (#17567) 2025-04-16 20:07:34 +02:00
Colleen McGinnis
47d430d4fb
Doc: Fix image paths for docs-assembler (#17566) 2025-04-16 09:46:48 -04:00
Cas Donoghue
49cf7acad0
Update logstash_releases.json after 8.17.5 (#17560)
* Update logstash_releases.json after 8.17.5

* Update 9.next  post 9.0.0 release

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2025-04-15 11:18:37 -07:00
Andrea Selva
dae2b61ab2
Update logstash_releases.json after 8.18.0 (#17506) 2025-04-15 12:02:11 +02:00
Yehor Shvedov
86042f8c98
New way of backporting to active branches using gh action (#17551) 2025-04-15 00:39:38 +03:00
George Wallace
2c95068e04
Update index.md (#17548) 2025-04-10 16:08:27 -04:00
Karen Metts
187c925cc8
Doc: Update installation info (#17532) 2025-04-09 17:04:17 -04:00
Cas Donoghue
8e6e183adc
Ensure elasticsearch logs and data dirs exist before startup (#17531)
With a recent change in ES (https://github.com/elastic/elasticsearch/pull/125449),
configuring path.data or path.logs to directories that do not exist causes ES to
fail to start up. This commit ensures those directories exist. The
teardown script already ensures they are removed: 712b37e1df/qa/integration/services/elasticsearch_teardown.sh (L26-L27)
2025-04-09 13:32:32 -07:00
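The directory fix above can be sketched in a few lines of Ruby (the helper name and paths are hypothetical, not the actual QA service code):

```ruby
require 'fileutils'
require 'tmpdir'

# Hypothetical helper: ensure ES path.data and path.logs directories exist
# before startup; mkdir_p is a no-op if they are already present.
def ensure_es_dirs(base)
  %w[data logs].map { |d| File.join(base, d) }
               .each { |path| FileUtils.mkdir_p(path) }
end

base = Dir.mktmpdir
ensure_es_dirs(base)
```

Creating the directories up front keeps the teardown script's removal step symmetric with setup.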
Rye Biesemeyer
712b37e1df
setting: enforce non-nullable (restore 8.15.x behavior) (#17522) 2025-04-09 09:28:38 -07:00
João Duarte
815fa8be1c
updates to docker image template based on feedback (#17494)
* change base images to ubi9-minimal
* do all env2yaml related copying in 1 COPY
* use -trimpath in go build
* move other COPY to end of dockerfile
* don't run package manager upgrade
* FROM and AS with same case
* ENV x=y instead of ENV x y
* remove indirect config folder
2025-04-09 16:17:39 +01:00
Mashhur
b9bac5dfc6
[Monitoring LS] Recommends collecting metricsets to fully visualize metrics on dashboards. (#17479)
* Disabling some metricsets may cause dashboards to display metrics only partially. This change recommends collecting all metricsets to fully visualize metrics on dashboards.

* Paraphrasing sentences and grammar correction.

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>

* Update docs/reference/serverless-monitoring-with-elastic-agent.md

* Make recommendation as a tip.

---------

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-04-08 15:03:53 -07:00
Dimitrios Liappis
b9469e0726
Fix JDK matrix pipeline after configurable IT split (#17461)
PR #17219 introduced configurable split quantities for IT tests, which
resulted in broken JDK matrix pipelines (e.g. as seen via the elastic
internal link: https://buildkite.com/elastic/logstash-linux-jdk-matrix-pipeline/builds/444),
reporting the following error:

```
  File "/buildkite/builds/bk-agent-prod-k8s-1743469287077752648/elastic/logstash-linux-jdk-matrix-pipeline/.buildkite/scripts/jdk-matrix-tests/generate-steps.py", line 263
    def integration_tests(self, part: int, parts: int) -> JobRetValues:
    ^^^
SyntaxError: invalid syntax
There was a problem rendering the pipeline steps.
Exiting now.
```

This commit fixes the above problem, which had already been fixed in #17642, using a more
idiomatic approach.

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-04-08 15:34:07 +03:00
Karen Metts
d66a2cf758
Release notes, deprecations, breaking for 9.0.0 (#17507) 2025-04-07 19:09:14 -04:00
Karen Metts
e13fcadad8
Doc: Incorporate field ref deep dive content (#17484)
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-04-07 15:04:49 -04:00
Andrea Selva
cb4c234aee
Update uri gem required by Logstash (#17495) 2025-04-07 10:32:12 +02:00
kaisecheng
eeb2162ae4
pin cgi to 0.3.7 (#17487) 2025-04-03 18:34:58 +01:00
Mashhur
ae3b3ed17c
Remove tech preview from agent driven LS monitoring pages. (#17482)
* Remove tech preview from agent driven LS monitoring pages.

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-04-02 10:43:42 -07:00
Rob Bavey
3c6cbbf35b
Breaking changes for 9.0 (#17380) 2025-04-01 23:19:30 -04:00
Rob Bavey
5a052b33f9
Fix standalone agent access for agent-driven monitoring (#17386)
* Fix standalone agent access for agent-driven monitoring

Change incorrect dedicated instructions, and base them on those from running Elastic Agent
in standalone mode.


---------

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-04-01 14:50:14 -04:00
Karen Metts
7913f91340
Doc: Move 9.0 pre-release notes to release notes (#17439) 2025-04-01 10:16:51 -04:00
João Duarte
5f5b4bb3c3
Fix persistent-queues.md PQ sizing multiplication factors #17451 (#17452)
closes #17431 for main/9.0 branches
2025-04-01 13:22:50 +01:00
Andrea Selva
422cd4e06b
Fix syntax in BK CI script (#17462) 2025-04-01 12:26:07 +02:00
Victor Martinez
26af21df85
ci(backport): remove former approach (#17347) 2025-03-31 17:45:31 +02:00
Victor Martinez
a539695830
github-actions: enable dependabot (#17421) 2025-03-30 10:06:18 +02:00
Mashhur
4b88773726
Do not pipe Git output into a pager. (#17442) 2025-03-28 10:56:39 -07:00
Colleen McGinnis
e5bebcea17
remove reliance on redirects (#17440) 2025-03-28 12:45:19 -05:00
Karen Metts
e2c6254c81
Doc: Remove plugin docs from logstash core (#17405)
Co-authored-by: Colleen McGinnis <colleen.mcginnis@elastic.co>
2025-03-27 11:02:45 -04:00
Karen Metts
add7b3f4d3
Doc: Upgrade content improvements (#17403)
* Change upgrade prereq from 8.17 to 8.18
2025-03-26 13:25:31 -04:00
Cas Donoghue
6de59f2c02
Pin rubocop-ast development gem due to new dep on prism (#17407)
The rubocop-ast gem just introduced a new dependency on prism.
 - https://rubygems.org/gems/rubocop-ast/versions/1.43.0

In our install-default-gem rake task we are seeing issues trying to build native
extensions. Upstream JRuby is seeing a similar problem (at least it is the same
failure mode): https://github.com/jruby/jruby/pull/8415

This commit pins rubocop-ast to 1.42.0, which is the last version that did not
have an explicit prism dependency.
2025-03-26 08:48:28 -07:00
Andrea Selva
075fdb4152
Limit memory consumption in test on overflow (#17373)
Updates only test code, to be able to run a test that consumes a lot of memory if:
- the physical memory is bigger than the requested Java heap
- the JDK version is greater than or equal to 21.

The reason to limit the JDK version is that on a 16GB machine the G1GC in JDK 21 is more efficient than the one in previous JDKs and lets the test complete with a 10GB heap, while on JDK 17 it consistently fails with an OOM error.
2025-03-26 11:29:09 +01:00
Andrea Selva
6b277ccf0d
Update logstash_releases.json after 8.17.4 (#17396) 2025-03-26 09:50:25 +01:00
Andrea Selva
f76edcea5e
Update logstash_releases.json after 8.16.6 (#17394) 2025-03-26 09:01:27 +01:00
Rob Bavey
f705a9de48
Fix Elasticsearch output SSL settings (#17391)
Replace removed Elasticsearch output SSL settings with the latest values
2025-03-24 11:45:14 -04:00
Kaarina Tungseth
7ac0423de0
Updates navigation titles and descriptions for release notes (#17381) 2025-03-21 11:58:31 -05:00
Colleen McGinnis
284272b137
[docs] Miscellaneous docs clean up (#17372)
* remove unused substitutions
* move images
* validate build
2025-03-21 11:11:24 -04:00
Karen Metts
960d997e9f
Doc: Fix upgrade TOC structure (#17361) 2025-03-21 10:28:39 -04:00
Rye Biesemeyer
3e0f488df2
tests: make integration split quantity configurable (#17219)
* tests: make integration split quantity configurable

Refactors shared splitter bash function to take a list of files on stdin
and split into a configurable number of partitions, emitting only those from
the currently-selected partition to stdout.

Also refactors the only caller in the integration_tests launcher script to
accept an optional partition_count parameter (defaulting to `2` for backward-
compatibility), to provide the list of specs to the function's stdin, and to
output relevant information about the quantity of partition splits and which
was selected.

* ci: run integration tests in 3 parts
2025-03-19 16:37:27 -07:00
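The splitter behavior described above can be sketched as a round-robin partition; this is a simplified Ruby stand-in for the shared bash function, which actually reads file names from stdin:

```ruby
# Select the 1-indexed `part` of `parts` partitions from a list of specs,
# distributing entries round-robin so partitions stay balanced.
def partition(specs, part, parts)
  specs.select.with_index { |_, i| i % parts == part - 1 }
end

all = %w[a_spec b_spec c_spec d_spec e_spec]
partition(all, 1, 3)  # => ["a_spec", "d_spec"]
```

Defaulting `parts` to 2 at the call site would preserve the backward compatibility the commit mentions.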
Yehor Shvedov
7683983168
Disable support of OpenJDK 17 (#17338) 2025-03-20 01:34:33 +02:00
Andrea Selva
afde43f918
Added test to verify the int overflow happens (#17353)
Use long instead of int type to keep the length of the first token.

The size limit validation requires summing two integers: the length of the accumulated chars so far plus the length of the next fragment's head part. If either of the two sizes is close to the max integer, the sum overflows, and the test can fail at 9c0e50faac/logstash-core/src/main/java/org/logstash/common/BufferedTokenizerExt.java (L123).

To fall into this case, sizeLimit must be bigger than 2^31 bytes (2GB) and data fragments without any line delimiter must be pushed to the tokenizer with a total size close to 2^31 bytes.
2025-03-19 16:50:04 +01:00
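Ruby integers do not overflow, so here is a sketch that emulates Java's 32-bit wrap-around to illustrate why summing two lengths near the max `int` misbehaves (constant and method names are illustrative):

```ruby
INT32_MAX = 2**31 - 1

# Emulate Java `int` wrap-around arithmetic on a sum.
def int32_sum(a, b)
  ((a + b + 2**31) % 2**32) - 2**31
end

accumulated   = INT32_MAX - 10  # chars buffered so far
fragment_head = 100             # next fragment's head length

int32_sum(accumulated, fragment_head)  # wraps to a negative value
accumulated + fragment_head            # a `long`-style sum stays positive
```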
Andrea Selva
9c0e50faac
Use org.logstash.common.Util to hashing by default to SHA256 (#17346)
Removes the usage of Apache Commons Codec MessageDigest in favor of the internal Util class, which embodies hashing methods.
2025-03-19 09:15:25 +01:00
Rye Biesemeyer
10b5a84f84
add ci shared qualified-version script (#17311)
* ci: add shareable script for generating qualified version

* ci: use shared script to generate qualified version
2025-03-18 15:41:33 -07:00
Andrea Selva
787fd2c62f
Removed unused configHash computation that can be replaced by PipelineConfig.configHash() (#17336)
Removed the unused configHash computation happening in AbstractPipeline (used only in tests), replacing it with a PipelineConfig.configHash() invocation
2025-03-18 08:58:24 +01:00
Karen Metts
193af6a272
Doc: Refine new MD docs (WIP) (#17325) 2025-03-17 17:14:40 -04:00
Mashhur
8d10baa957
Internal collection doc update to reflect enabling legacy collection. (#17326) 2025-03-14 14:50:42 -07:00
Karen Metts
ea99e1db58
Doc: Update TOC order in API docs (#17324) 2025-03-14 15:47:16 -04:00
kaisecheng
4779d5e250
[Doc] OpenAPI spec (#17292)
This commit migrates the API docs to the OpenAPI spec

- expand Node Info `/_node/<types>`
- expand Node Stats 
- add health report 
- break down the share part to components
- add examples
- update authentication

Co-authored-by: Lisa Cawley <lcawley@elastic.co>
2025-03-14 17:31:39 +00:00
Cas Donoghue
964468f922
Require shellwords in artifact rake task (#17319)
The https://github.com/elastic/logstash/pull/17310 PR changed the rake task for
artifact creation to use shellwords from the standard library. The acceptance tests
need to explicitly load that library. This commit updates the rake file to handle
loading the required code.
2025-03-13 18:38:52 -07:00
Cas Donoghue
0d931a502a
Surface failures from nested rake/shell tasks (#17310)
Previously, when rake would shell out, the output would be lost. This
made debugging CI logs difficult. This commit updates the stack with
improved message surfacing on error.
2025-03-13 15:20:38 -07:00
Mashhur
e748488e4a
Upgrade elasticsearch-ruby client. (#17161)
* Upgrade elasticsearch-ruby client.

* Fix the basic auth option removed by Faraday and apply the ES client module name change.
2025-03-13 10:03:07 -07:00
Cas Donoghue
d916972877
Shareable function for partitioning integration tests (#17223)
For the fedramp high work https://github.com/elastic/logstash/pull/17038/files a
use case for multiple scripts consuming the partitioning functionality emerged.
As we look to more advanced partitioning we want to ensure that the
functionality will be consumable from multiple scripts.

See https://github.com/elastic/logstash/pull/17219#issuecomment-2698650296
2025-03-12 09:30:45 -07:00
Rob Bavey
bff0d5c40f
Remove automatic backport-8.x label creation (#17290)
This commit removes the automatic creation of the `backport-8.x` label on any commit
2025-03-06 17:55:01 -05:00
Colleen McGinnis
cb6886814c
Doc: Fix external links (#17288) 2025-03-06 13:38:31 -05:00
kaisecheng
feb2b92ba2
[CI] Health report integration tests use the new artifacts-api (#17274)
migrate to the new artifacts-api
2025-03-06 16:17:43 +00:00
kaisecheng
d61a83abbe
[CI] benchmark uses the new artifacts-api (#17224)
migrate benchmark to the new artifacts-api
2025-03-06 16:16:19 +00:00
Andrea Selva
07a3c8e73b
Reimplement LogStash::Numeric setting in Java (#17127)
Reimplements `LogStash::Setting::Numeric` Ruby setting class into the `org.logstash.settings.NumericSetting` and exposes it through `java_import` as `LogStash::Setting::NumericSetting`.
Updates the rspec tests:
- verifies that `java.lang.IllegalArgumentException` is thrown instead of `ArgumentError`, because that is the kind of exception thrown by the Java code during verification.
2025-03-06 10:44:22 +01:00
Ry Biesemeyer
a736178d59
Pluginmanager install preserve (#17267)
* tests: integration tests for pluginmanager install --preserve

* fix regression where pluginmanager's install --preserve flag didn't
2025-03-05 20:38:59 -08:00
Mashhur
b993bec499
Temporarily disable mergify conflict process. (#17258) 2025-03-05 12:52:00 -08:00
Rob Bavey
ba5f21576c
Fix pqcheck and pqrepair on Windows (#17210)
A recent change to pqcheck attempted to address an issue where the
pqcheck would not run on Windows machines when located in a folder containing
a space, such as "C:\program files\elastic\logstash". While this fixed an
issue with spaces in folders, it introduced a new issue related to Java options,
and the pqcheck was still unable to run on Windows.

This PR attempts to address the issue by removing the quotes around the Java options,
which caused the option parsing to fail, and instead removes the explicit setting of
the classpath - the use of `set CLASSPATH=` in the `:concat` function is sufficient
to set the classpath, and should also fix the spaces issue

Fixes: #17209
2025-03-05 15:48:20 -05:00
Mashhur
1e06eea86e
Additional cleanup changes to ls2ls integ tests (#17246)
* Additional cleanup changes to ls2ls integ tests: replace heartbeat-input with the reload option, set queue drain to get consistent results.
2025-03-05 12:24:23 -08:00
Rob Bavey
7446e6bf6a
Update logstash_releases.json (#17253)
Update Logstash releases file with new 8.17 and 8.16 releases
2025-03-05 15:11:44 -05:00
Victor Martinez
f4ca06cfed
mergify: support for some backport aliases (#17217) 2025-03-05 20:46:06 +01:00
Ry Biesemeyer
73ffa243bf
tests: ls2ls delay checking until events have been processed (#17167)
* tests: ls2ls delay checking until events have been processed

* Make sure upstream sends expected number of events before checking the expectation with downstream. Remove unnecessary or duplicated logics from the spec.

* Add exception handling in `wait_for_rest_api` to make wait for LS REST API retriable.

---------

Co-authored-by: Mashhur <mashhur.sattorov@elastic.co>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2025-03-05 11:36:52 -08:00
kaisecheng
0a745686f6
gradle task migrate to the new artifacts-api (#17232)
This commit migrates gradle task to the new artifacts-api

- remove dependency on staging artifacts
- all builds use snapshot artifacts
- resolve version from current branch, major.x, previous minor,
   with priority given in that order.

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-03-05 17:12:52 +00:00
Victor Martinez
7d1458fad3
ci: filter mergify backports for the Exhaustive tests pipeline (#17227) 2025-03-05 17:07:55 +01:00
Karen Metts
0a3a2a302c
Doc: Running LS in ECK (#17225) 2025-03-05 09:09:58 -05:00
kaisecheng
34416fd971
[test] ls2ls spec adds heartbeat to keep alive (#17228)
Adds heartbeat-input to the LS to LS integration test config to flush the last batch from PQ
2025-03-05 12:29:28 +00:00
Victor Martinez
c95430b586
mergify: add backport rules for 8.18 and 9.0 (#17168) 2025-03-05 08:57:03 +01:00
Cas Donoghue
062154494a
Improve warning for insufficient file resources for PQ max_bytes (#16656)
This commit refactors the `PersistedQueueConfigValidator` class to provide a
more detailed, accurate and actionable warning when pipeline's PQ configs are at
risk of running out of disk space. See
https://github.com/elastic/logstash/issues/14839 for design considerations. The
highlights of the changes include accurately determining the free resources on a
filesystem disk and then providing a breakdown of the usage for each of the
paths configured for a queue.
2025-03-04 13:27:10 -08:00
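The essence of the warning can be sketched like this (queue configs and sizes are hypothetical; the real validator inspects actual filesystem free space per queue path):

```ruby
GB = 1024**3

# Hypothetical PQ configs sharing one filesystem.
queues = [
  { path: '/data/queue/p1', max_bytes: 10 * GB },
  { path: '/data/queue/p2', max_bytes: 8 * GB },
]
free_bytes = 12 * GB

required = queues.sum { |q| q[:max_bytes] }
if required > free_bytes
  # Per-path breakdown makes the warning actionable.
  breakdown = queues.map { |q| "#{q[:path]}=#{q[:max_bytes] / GB}GB" }.join(', ')
  warn "PQ may run out of disk: need #{required / GB}GB, free #{free_bytes / GB}GB (#{breakdown})"
end
```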
Ry Biesemeyer
8c96913807
Pluginmanager clean after mutate (#17203)
* pluginmanager: always clean after mutate

* pluginmanager: don't skip updating plugins installed with --version

* pr feedback
2025-03-04 10:59:35 -08:00
Liam Thompson
53d39adb21
[DOCS] Fix doc link (#17213) 2025-03-04 11:38:22 -05:00
Colleen McGinnis
50671709e3
add missing mapped_page (#17208) 2025-03-03 12:29:35 -06:00
Colleen McGinnis
24fd2a6c75
clean up cross-repo links (#17190) 2025-03-03 12:37:47 -05:00
kaisecheng
86785815bd
Fix empty node stats pipelines (#17185)
Fixed an issue where the `/_node/stats` API displayed empty pipeline metrics 
when X-Pack monitoring was enabled
2025-02-28 21:38:35 +00:00
João Duarte
a4cf2bcc52
add missing Makefile tasks to build oss and wolfi images from build context tarballs (#17189) 2025-02-28 20:54:46 +00:00
kaisecheng
f562f37df2
Update z_rubycheck.rake to no longer inject Xmx1g
This allows the environment variable JRUBY_OPTS to be used for setting properties like Xmx
original pr: #16420
2025-02-28 15:22:34 +00:00
João Duarte
fecfc7c602
add env2yaml source files to build context tarball (#17151)
* build full docker image from dockerfiles during docker acceptance tests
2025-02-28 11:34:06 +00:00
kaisecheng
2d69d06809
use UBI9 as base image (#17156)
- the base image change from ubi8 to ubi9
- remove installation of curl
2025-02-28 09:29:19 +00:00
Ry Biesemeyer
0f81816311
qa: don't bypass plugin manager tests on linux (#17171)
* qa: don't bypass plugin manager tests on linux

* add gradle task to build gem fixtures for integration tests
2025-02-27 13:24:04 -08:00
Ry Biesemeyer
793e8c0b45
plugin manager: add --no-expand flag for list command (#17124)
* plugin manager: add --no-expand flag for list command

Allows us to avoid expanding aliases and integration plugins

* spec: escape expected output in regexp
2025-02-27 07:24:56 -08:00
Victor Martinez
d40386a335
mergify: support backports automation with labels (#16937) 2025-02-27 15:16:51 +01:00
Colleen McGinnis
3115c78bf8
[docs] Migrate docs from AsciiDoc to Markdown (#17159)
* delete asciidoc files
* add migrated files
2025-02-26 14:19:48 -05:00
Colleen McGinnis
884ae815b5
add the new ci checks (#17158) 2025-02-26 14:19:25 -05:00
João Duarte
823dcd25fa
Update logstash_releases.json to include Logstash 7.17.28 (#17150) 2025-02-25 16:17:37 +00:00
Dimitrios Liappis
4d52b7258d
Add Windows 2025 to CI (#17133)
This commit adds Windows 2025 to the Windows JDK matrix and exhaustive tests pipelines.
2025-02-24 15:29:35 +02:00
Cas Donoghue
227c0d8150
Update container acceptance tests with stdout/stderr changes (#17138)
In https://github.com/elastic/logstash/pull/17125 jvm setup was redirected to
stderr to avoid polluting stdout. This test was actually having to do some
additional processing to parse that information. Now that we have split the
destinations the tests can be simplified to look for the data they are trying to
validate on the appropriate stream.
2025-02-21 10:40:43 -08:00
Ry Biesemeyer
91258c3f98
entrypoint: avoid polluting stdout (#17125)
routes output from setup-related functions to stderr, so that stdout can
include only the output of the actual program.
2025-02-20 12:55:40 -08:00
Cas Donoghue
e8e24a0397
Fix acceptance test assertions for updated plugin remove (#17126)
This commit updates the acceptance tests to expect messages in the updated
format for removing plugins. See https://github.com/elastic/logstash/pull/17030
for change.
2025-02-20 07:19:36 -08:00
Cas Donoghue
e094054c0e
Fix acceptance test assertions for updated plugin remove (#17122)
This commit updates the acceptance tests to expect messages in the updated
format for removing plugins. See https://github.com/elastic/logstash/pull/17030
for change.
2025-02-19 15:44:29 -08:00
Ry Biesemeyer
089558801e
plugins: improve remove command to support multiple plugins (#17030)
Removal works in a single pass by finding plugins that would have unmet
dependencies if all of the specified plugins were to be removed, and
proceeding with the removal only if no conflicts were created.

> ~~~
> ╭─{ rye@perhaps:~/src/elastic/logstash@main (pluginmanager-remove-multiple ✘) }
> ╰─● bin/logstash-plugin remove logstash-input-syslog logstash-filter-grok
> Using system java: /Users/rye/.jenv/shims/java
> Resolving dependencies......
> Successfully removed logstash-input-syslog
> Successfully removed logstash-filter-grok
> [success (00:00:05)]
~~~
2025-02-19 11:17:20 -08:00
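The single-pass conflict check can be sketched as follows (the dependency data and function name are illustrative):

```ruby
# Return the plugins that would be left with unmet dependencies if
# everything in `to_remove` were uninstalled; removal proceeds only
# when this list is empty.
def removal_conflicts(dependencies, to_remove)
  dependencies.reject { |name, _| to_remove.include?(name) }
              .select { |_, deps| (deps & to_remove).any? }
              .keys
end

deps = {
  'logstash-filter-grok'   => ['logstash-patterns-core'],
  'logstash-input-syslog'  => ['logstash-filter-grok'],
  'logstash-patterns-core' => [],
}
removal_conflicts(deps, ['logstash-patterns-core'])  # => ["logstash-filter-grok"]
removal_conflicts(deps, ['logstash-input-syslog'])   # => []
```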
Ry Biesemeyer
9abad6609c
spec: improve ls2ls spec (#17114)
* spec: improve ls2ls spec

 - fixes upstream/downstream convention
   - upstream: the sending logstash (has an LS output)
   - downstream: the receiving logstash (has an LS input)
 - helper `run_logstash_instance` yields the `LogstashService` instance
   and handles the teardown.
 - pass the pipeline id and node name to the LS instances via command line
   flags to make logging easier to differentiate
 - use the generator input's sequence id to ensure that the _actual_ events
   generated are received by the downstream pipeline

* start with port-offset 100

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2025-02-18 21:53:35 -08:00
João Duarte
637f447b88
allow concurrent Batch deserialization (#17050)
Currently the deserialization is behind the readBatch's lock, so any large batch will take time deserializing, causing any other Queue writer (e.g. netty executor threads) and any other Queue reader (pipeline worker) to block.

This commit moves the deserialization out of the lock, allowing multiple pipeline workers to deserialize batches concurrently.

- add intermediate batch-holder from `Queue` methods
- make the intermediate batch-holder a private inner class of `Queue` with a descriptive name `SerializedBatchHolder`

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2025-02-17 19:01:44 +00:00
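The locking change can be sketched with a Ruby `Mutex`, using `Marshal` as a stand-in for the queue's serialization (heavily simplified; the real code operates on PQ pages in Java):

```ruby
# A toy queue of serialized events protected by a lock.
LOCK  = Mutex.new
STORE = [Marshal.dump({ 'id' => 1 }), Marshal.dump({ 'id' => 2 })]

def read_batch(count)
  # Fast path under the lock: just take the serialized bytes.
  serialized = LOCK.synchronize { STORE.shift(count) }
  # Slow path outside the lock: workers can deserialize concurrently
  # without blocking other queue readers and writers.
  serialized.map { |bytes| Marshal.load(bytes) }
end
```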
kaisecheng
e896cd727d
CPM handle 404 response gracefully with user-friendly log (#17052)
Starting from es-output 12.0.2, a 404 response is treated as an error. Previously, central pipeline management considered 404 as an empty pipeline, not an error.

This commit restores the expected behavior by handling 404 gracefully and logs a user-friendly message.
It also removes the redundant cache of pipeline in CPM

Fixes: #17035
2025-02-17 11:08:19 +00:00
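The restored behavior can be sketched as follows (the response shape is hypothetical, not the actual es-output client API):

```ruby
# Treat a 404 from the pipelines endpoint as "no pipelines configured",
# logging a friendly message instead of raising.
def fetch_pipelines(response)
  case response[:status]
  when 200
    response[:body]
  when 404
    warn 'No centrally managed pipelines found (404); treating as empty.'
    {}
  else
    raise "unexpected response status: #{response[:status]}"
  end
end
```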
Ry Biesemeyer
d20eb4dbcb
qa: use clean expansion of LS tarball per fixture instance (#17082)
* qa: use clean expansion of LS tarball per fixture instance

Because QA tests can _modify_ the Logstash installation (e.g. those that
invoke the plugin manager), it is important that the service wrapper
begins with a clean expansion of the logstash tarball.

* qa: enable safe reuse of ls_home in ls_to_ls tests
2025-02-14 07:53:52 -08:00
Dimitrios Liappis
78c34465dc
Allow capturing heap dumps in DRA BK jobs (#17081)
This commit allows Buildkite to capture any heap dumps produced
during DRA builds.
2025-02-13 18:13:17 +02:00
Dimitrios Liappis
8cd38499b5
Use centralized source of truth for active branches (#17063)
This commit simplifies the DRA process in Logstash by removing the need to maintain a separate file for the active branches, and instead rely on a centrally maintained file containing source of truth.
While at it, we refactor/simplify the creation of an array with the versions in `.buildkite/scripts/snyk/resolve_stack_version.sh`.
2025-02-12 16:17:52 +02:00
Ry Biesemeyer
a847ef7764
Update logstash_releases.json (#17055)
- 8.18 branch was cut 2025-01-29; add 8.next and shift 8.future
 - 8.16.4 and 8.17.2 were released 2025-02-11; shift forward
2025-02-11 10:54:21 -08:00
kaisecheng
5573b5ad77
fix logstash-keystore to accept spaces in values when added via stdin (#17039)
This commit preserves spaces in values, ensuring that multi-word strings are stored as intended.
Prior to this change, `logstash-keystore` incorrectly handled values containing spaces, 
causing only the first word to be stored.
2025-02-07 21:30:11 +00:00
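The bug and the fix can be illustrated with a one-line comparison (simplified; the real keystore reads stdin through its own CLI code):

```ruby
line = "my secret value\n"  # what the user types for one keystore value

naive = line.split(/\s+/).first  # pre-fix parsing keeps only "my"
fixed = line.chomp               # reading the whole line keeps "my secret value"
```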
Dimitrios Liappis
c7204fd7d6
Don't honor VERSION_QUALIFIER if set but empty (#17032)
PR #17006 revealed that the `VERSION_QUALIFIER` env var gets honored in
various scripts when present but empty.
This shouldn't be the case as the DRA process is designed to gracefully
ignore empty values for this variable.

This commit changes various ruby scripts to not treat "" as truthy.
Bash scripts (used by CI etc.) are already ok with this as part of
refactorings done in #16907.

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-02-07 13:05:23 +02:00
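The Ruby-side change boils down to not treating an empty string as truthy; a sketch (function name hypothetical):

```ruby
# Return the qualifier only when it is present AND non-empty;
# "" behaves the same as unset.
def version_qualifier(env)
  q = env['VERSION_QUALIFIER']
  q.nil? || q.strip.empty? ? nil : q
end
```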
Mashhur
e23da7985c
Release note placeholder might be empty, making parsing lines nil tolerant. (#17026) 2025-02-05 11:07:56 -08:00
Andrea Selva
1c8cf546c2
Fix BufferedTokenizer to properly resume after a buffer full condition respecting the encoding of the input string (#16968)
Permits effective use of the tokenizer also in contexts where a line is bigger than the limit.
Fixes an issue related to the token size limit error: when the offending token was bigger than the input fragment, the tokenizer was unable to recover the token stream from the first delimiter after the offending token and messed things up, losing part of the tokens.

## How it solves the problem
This is a second take at fixing the processing of tokens from the tokenizer after a buffer full error. The first try (#16482) was rolled back due to the encoding error (#16694).
The first try failed to return the tokens in the same encoding as the input.
This PR does a couple of things:
- accumulates the tokens, so that after a full condition it can resume with the next tokens after the offending one.
- respects the encoding of the input string. Uses the `concat` method instead of `addAll`, which avoids converting RubyString to String and back to RubyString. When returning the head `StringBuilder` it enforces the encoding of the input charset.
2025-02-05 10:25:08 +01:00
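A simplified Ruby sketch of the resume behavior (the real implementation is Java and also enforces the input encoding, which this toy version ignores):

```ruby
# Toy line tokenizer: oversized tokens are discarded, and extraction
# resumes at the first delimiter after the offending token.
class TinyTokenizer
  def initialize(limit)
    @buf = +''
    @limit = limit
    @dropping = false
  end

  def extract(data)
    *tokens, tail = (@buf << data).split("\n", -1)
    @buf = tail || +''
    if @dropping && !tokens.empty?
      tokens.shift            # drop the tail of the offending token
      @dropping = false
    end
    if @buf.size > @limit     # buffer full: discard until next delimiter
      @buf = +''
      @dropping = true
    end
    tokens.reject { |t| t.size > @limit }
  end
end
```

Here an 8-char fragment against a limit of 5 is discarded, and the token after the next delimiter is recovered intact.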
Mashhur
32cc85b9a7
Add short living 9.0 next and update main in CI release version definition. (#17008) 2025-01-31 10:32:16 -08:00
Mashhur
14c16de0c5
Core version bump to 9.1.0 (#16991) 2025-01-30 15:13:58 -08:00
Mashhur
786911fa6d
Add 9.0 branch to the CI branches definition (#16997) 2025-01-30 11:37:30 -08:00
Jan Calanog
2172879989
github-action: Add AsciiDoc freeze warning (#16969) 2025-01-30 10:39:27 -05:00
João Duarte
51ab5d85d2
upgrade jdk to 21.0.6+7 (#16932) 2025-01-30 11:17:04 +00:00
Mashhur
7378b85f41
Adding elastic_integration upgrade guidelines. (#16979)
* Adding elastic_integration upgrade guidelines.
2025-01-29 15:20:47 -08:00
Rob Bavey
70a6c9aea6
[wip] Changing upgrade docs to refer to 9.0 instead of 8.0 (#16977) 2025-01-29 17:33:17 -05:00
Rob Bavey
8a41a4e0e5
Remove sample breaking change from breaking changes doc (#16978) 2025-01-29 17:31:37 -05:00
Andrea Selva
6660395f4d
Update branches.json for 8.18 (#16981)
Add 8.18 to CI inventory branches
2025-01-29 19:04:16 +01:00
João Duarte
d3093e4b44
integration tests: switch log input to filestream in filebeat (#16983)
Log input has been deprecated in filebeat 9.0.0 and throws an error if it's present in the configuration.
This commit switches the configuration to the "filestream" input.
2025-01-29 13:51:09 +00:00
Ry Biesemeyer
6943df5570
plugin manager: add --level=[major|minor|patch] (default: minor) (#16899)
* plugin manager: add `--level=[major|minor|patch]` (default: `minor`)

* docs: plugin manager update `--level` behavior

* Update docs/static/plugin-manager.asciidoc

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* docs: plugin update major as subheading

* docs: intention-first in major plugin updates

* Update docs/static/plugin-manager.asciidoc

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-28 08:33:37 -08:00
Victor Martinez
1fda320ed9
ci(buildkite): exclude files/folders that are not tested in Buildkite (#16929) 2025-01-28 11:54:29 +01:00
kaisecheng
2be4812118
add openssl command to wolfi image (#16966)
This commit added openssl command to logstash-wolfi image
Fixes: #16965
2025-01-27 18:58:39 +00:00
kaisecheng
3f41828ebb
remove irrelevant warning for internal pipeline (#16938)
This commit removes an irrelevant warning for internal pipelines, such as the monitoring pipeline.
The monitoring pipeline is expected to have one worker, so the warning is not useful.

Fixes: #13298
2025-01-27 16:44:12 +00:00
João Duarte
c8a6566877
fix user and password detection from environment's uri (#16955) 2025-01-27 11:38:42 +00:00
Andrea Selva
03b11e9827
Reimplement LogStash::String setting in Java (#16576)
Reimplements `LogStash::Setting::String` Ruby setting class into the `org.logstash.settings.SettingString` and exposes it through `java_import` as `LogStash::Setting::SettingString`.
Updates the rspec tests in two ways:
- the logging mock is now converted to a real Log4J appender that spies on log lines, which are later verified
- verifies that `java.lang.IllegalArgumentException` is thrown instead of `ArgumentError`, because that is the kind of exception thrown by the Java code during verification.
2025-01-24 16:56:53 +01:00
kaisecheng
dc740b46ca
[doc] fix the necessary privileges of central pipeline management (#16902)
CPM requires two roles logstash_admin and logstash_system

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-23 11:04:19 +00:00
Karen Metts
f66e00ac10
Doc: Remove extra symbol to fix formatting error (#16926) 2025-01-22 18:21:25 -05:00
João Duarte
52b7fb0ae6
fix jars installer for new maven and pin psych to 5.2.2 (#16919)
Handles maven output that can carry "garbage" information after the jar's name. This patch deletes that extra information; it also pins psych to 5.2.2 until JRuby ships with snakeyaml-engine 2.9 and jar-dependencies 0.5.2.

Related to: https://github.com/jruby/jruby/issues/8579
2025-01-22 15:35:50 +00:00
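The cleanup can be sketched like this (the sample line is invented for illustration, not actual maven output):

```ruby
# Maven can append extra info after the jar path; keep only up to ".jar".
line = '/home/user/.m2/repository/foo/foo-1.0.jar -- module foo [auto]'
jar  = line[/\A.*?\.jar/]
# => "/home/user/.m2/repository/foo/foo-1.0.jar"
```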
kaisecheng
d4ba08c358
Update logstash_releases.json (#16917) 2025-01-21 11:29:26 +00:00
Dimitrios Liappis
9385cfac5a
Use --qualifier in release manager (#16907)
This commit uses the new --qualifier parameter in the release manager
for publishing dra artifacts. Additionally, simplifies the expected
variables to rely on a simple `VERSION_QUALIFIER`.

Snapshot builds are skipped when VERSION_QUALIFIER is set.
Finally, for helping to test DRA PRs, we also allow passing the `DRA_BRANCH`  option/env var
to override BUILDKITE_BRANCH.

Closes https://github.com/elastic/ingest-dev/issues/4856
2025-01-20 13:55:17 +02:00
Andrea Selva
58e6dac94b
Increase Xmx used by JRuby during Rake execution to 4Gb (#16911) 2025-01-19 18:02:23 +01:00
Karen Metts
92d7210146
Doc: WPS integration (#16910) 2025-01-17 13:20:53 -05:00
kaisecheng
cd729b7682
Add %{{TIME_NOW}} pattern for sprintf (#16906)
* Add a new pattern %{{TIME_NOW}} to `event.sprintf` to generate a fresh timestamp.
The timestamp is represented as a string in the default ISO 8601 format

For example,
```
input {
    heartbeat {
    add_field => { "heartbeat_time" => "%{{TIME_NOW}}" }
    }
}
```
2025-01-17 16:51:15 +00:00
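The substitution shown in the config above can be sketched in plain Ruby (the actual implementation lives in `event.sprintf` in Logstash core; millisecond precision here is an assumption):

```ruby
require 'time'

# Replace every %{{TIME_NOW}} placeholder with a fresh ISO 8601 timestamp.
def substitute_time_now(template)
  template.gsub('%{{TIME_NOW}}') { Time.now.utc.iso8601(3) }
end

substitute_time_now('heartbeat_time=%{{TIME_NOW}}')
```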
João Duarte
9a2cd015d4
inject VERSION_QUALIFIER into artifacts (#16904)
VERSION_QUALIFIER was already observed in rake artifacts task but only to influence the name of the artifact.

This commit ensures that the qualifier is also displayed in the cli and in the http api.
2025-01-17 10:18:35 +00:00
kaisecheng
ff44b7cc20
Update logstash_releases.json (#16901) 2025-01-14 15:46:29 +00:00
Ry Biesemeyer
348f1627a5
remove pipeline bus v1 (#16895) 2025-01-10 15:55:25 -08:00
Cas Donoghue
356ecb3705
Replace/remove references to defunct freenode instance (#16873)
The preferred channel for communication about LS is the elastic discussion
forum, this commit updates the source code and readme files to reflect that.
2025-01-10 14:28:35 -08:00
Mashhur
ae8ad28aaa
Add beats and elastic-agent SSL changes to 9.0 breaking change doc. (#16742) 2025-01-10 09:54:04 -08:00
Mashhur
db34116c46
Doc: Environment variables cannot be in config.string comments. (#16689)
* Doc: Environment variables cannot be in config.string comments due to substitution logic.

* Apply suggestions from code review

Suggestion to revise the statements.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

Make sentences present tense.

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-10 09:53:23 -08:00
Mashhur
a215101032
Validate the size limit in BufferedTokenizer. (#16882) 2025-01-09 15:53:07 -08:00
Karen Metts
d978e07f2c
Logstash API spec - first pass (#16546)
Co-authored-by: lcawl <lcawley@elastic.co>
2025-01-09 16:27:06 -05:00
Mashhur
47d04d06b2
Initialize flow metrics if pipeline metric.collect params is enabled. (#16881) 2025-01-09 12:30:49 -08:00
Ry Biesemeyer
4554749da2
Test touchups (#16884)
* test: improved and clearer stream read constraints validation

* test: remove unused  var
2025-01-09 12:28:45 -08:00
Ry Biesemeyer
16392908e2
jackson stream read constraints: code-based defaults (#16854)
* Revert "Apply Jackson stream read constraints defaults at runtime (#16832)"

This reverts commit cc608eb88b.

* jackson stream read constraints: code-based defaults

refactors stream read constraints to couple default values with their
associated overrides, which allows us to have more descriptive logging
that includes provenance of the value that has been applied.
2025-01-09 07:18:25 -08:00
kaisecheng
dae7fd93db
remove enterprise search from default distribution (#16818)
Elastic App Search and Elastic Workplace Search are deprecated and removed from v9
- remove enterprise_search integration plugin from default plugins
- add breaking change doc

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-09 12:15:41 +00:00
Cas Donoghue
274c212d9d
Ensure plugin config marked :deprecated logs to deprecation logger (#16833)
Previously when the `:deprecated` modifier was used in the plugin config DSL a
log message was sent at `:warn` level to the main logger. This commit updates
that message to be routed *only* to the deprecation logger.
2025-01-06 12:18:25 -08:00
Karen Metts
e2b322e8c1
Doc: Message Logstash module deprecations and removal (#16840) 2025-01-06 11:17:58 -05:00
kaisecheng
ef36df6b81
Respect environment variables in jvm.options (#16834)
JvmOptionsParser adds support for ${VAR:default} syntax when parsing jvm.options
- allow dynamic resolution of environment variables in the jvm.options file
- enables fallback to default value when the environment variable is not set
2025-01-03 23:04:28 +00:00
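The `${VAR:default}` behavior can be sketched in a few lines of Ruby (an illustration of the described substitution, not the actual `JvmOptionsParser`, which is Java; treating a missing variable with no default as an empty string is an assumption of this sketch):

```ruby
# Resolve ${VAR} and ${VAR:default} placeholders against an env hash,
# mirroring the jvm.options substitution behavior described above.
def resolve_jvm_option(line, env = ENV)
  line.gsub(/\$\{(\w+)(?::([^}]*))?\}/) do
    name, default = Regexp.last_match(1), Regexp.last_match(2)
    env.fetch(name) { default.to_s }
  end
end

# With HEAP_SIZE unset, the default applies:
resolve_jvm_option('-Xmx${HEAP_SIZE:1g}', {})                    # => "-Xmx1g"
# With the variable set, its value wins:
resolve_jvm_option('-Xmx${HEAP_SIZE:1g}', 'HEAP_SIZE' => '4g')   # => "-Xmx4g"
```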
kaisecheng
de6a6c5b0f
Add pipeline metrics to Node Stats API (#16839)
This commit introduces three new metrics per pipeline in the Node Stats API:
- workers
- batch_size
- batch_delay

```
{
  ...
  pipelines: {
    main: {
      events: {...}, 
      flow: {...}, 
      plugins: {...}, 
      reloads: {...}, 
      queue: {...}, 
      pipeline: {
        workers: 12,
        batch_size: 125,
        batch_delay: 5,
      }, 
    }
  }
  ...
}
```
2025-01-03 20:48:14 +00:00
Cas Donoghue
531f795037
Ssl standardization breaking changes docs (#16844)
* Add logstash-filter-elasticsearch obsolete ssl section to breaking changes doc

Adds information about the SSL setting obsolescence for the Elasticsearch filter to the 9.0 breaking changes doc

* Fix headers

Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>

* Add logstash-output-tcp obsolete ssl section to breaking changes doc

* Fix Asciidoctor doc links

* Add logstash-input-tcp obsolete ssl section to breaking changes doc

* Fix asciidoc plugin links

* Add logstash-filter-http obsolete ssl section to breaking changes doc

* Add logstash-input-http obsolete ssl section to breaking changes doc

* Add logstash-input-http_poller obsolete ssl section to breaking changes doc

* Add logstash-input-elastic_serverless_forwarder ssl conf to breaking changes

---------

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-01-03 11:26:46 -08:00
Cas Donoghue
cc608eb88b
Apply Jackson stream read constraints defaults at runtime (#16832)
When Logstash 8.12.0 added increased Jackson stream read constraints to
jvm.options, assumptions about the existence of that file's contents
were invalidated. This led to issues like #16683.

This change ensures Logstash applies defaults from config at runtime:
- MAX_STRING_LENGTH: 200_000_000
- MAX_NUMBER_LENGTH: 10_000
- MAX_NESTING_DEPTH: 1_000

These match the jvm.options defaults and are applied even when config
is missing. Config values still override these defaults when present.
2025-01-02 14:52:39 -08:00
Ry Biesemeyer
01c8e8bb55
Avoid lock when ecs_compatibility is explicitly specified (#16786)
Because a `break` escapes a `begin`...`end` block, we must not use `break`; otherwise the explicitly set value would not be memoized, and we would not avoid lock contention.

> ~~~ ruby
> def fake_sync(&block)
>   puts "FAKE_SYNC:enter"
>   val = yield
>   puts "FAKE_SYNC:return(#{val})"
>   return val
> ensure
>   puts "FAKE_SYNC:ensure"
> end
> 
> fake_sync do
>   @ivar = begin
>     puts("BE:begin")
>   	break :break
>   	
>   	val = :ret
>   	puts("BE:return(#{val})")
>   	val
>   ensure
>     puts("BE:ensure")
>   end
> end
> ~~~

Note: no `FAKE_SYNC:return`:

> ~~~
> ╭─{ rye@perhaps:~/src/elastic/logstash (main ✔) }
> ╰─● ruby break-esc.rb
> FAKE_SYNC:enter
> BE:begin
> BE:ensure
> FAKE_SYNC:ensure
> [success]
> ~~~
2024-12-19 15:48:05 -08:00
kaisecheng
ae75636e17
update ironbank image to ubi9/9.5 (#16819) 2024-12-19 17:24:01 +00:00
kaisecheng
05789744d2
Remove the Arcsight module and the modules framework (#16794)
Remove all module related code
- remove arcsight module
- remove module framework
- remove module tests
- remove module configs
2024-12-19 14:28:54 +00:00
kaisecheng
03ddf12893
[doc] use UBI8 as base image (#16812) 2024-12-17 18:16:48 +00:00
João Duarte
6e0d235c9d
update releases file with 8.16.2 GA (#16807) 2024-12-17 10:22:12 +00:00
Karen Metts
e1f4e772dc
Doc: Update security docs to replace obsolete cacert setting (#16798) 2024-12-16 15:25:43 -05:00
João Duarte
e6e0f9f6eb
give more memory to tests. 1gb instead of 512mb (#16764) 2024-12-16 10:41:05 +00:00
Cas Donoghue
2d51cc0ba9
Document obsolete settings for elasticsearch output plugin (#16787)
Update breaking changes doc with the standardized ssl settings for the
logstash-output-elasticsearch plugin.
2024-12-13 11:21:07 -08:00
Cas Donoghue
188d9e7ed8
Update serverless tests to include product origin header (#16766)
This commit updates the curl scripts that interact with Kibana's
`api/logstash/pipeline/*` endpoint. Additionally, it adds the header to any curl
that interacts with the elasticsearch API.
2024-12-13 09:22:45 -08:00
kaisecheng
65495263d4
[CI] remove 8.15 DRA (#16795) 2024-12-13 13:44:33 +00:00
João Duarte
264283889e
update logstash_releases to account for 8.17.0 GA (#16785) 2024-12-12 17:32:05 +01:00
kaisecheng
5bff2ad436
[CI] benchmark readme (#16783)
- add instruction to run benchmark in v8 with `xpack.monitoring.allow_legacy_collection`
- remove scripted field 5m_num from dashboards
2024-12-12 11:09:22 +00:00
Cas Donoghue
095fbbb992
Add breaking changes docs for input-elasticsearch (#16744)
This commit follows the pattern established in
https://github.com/elastic/logstash/pull/16701 for indicating obsolete ssl
settings in logstash core plugins.
2024-12-09 13:24:07 -08:00
João Duarte
e36cacedc8
ensure inputSize state value is reset during buftok.flush (#16760) 2024-12-09 09:10:54 -08:00
Ry Biesemeyer
202d07cbbf
ensure jackson overrides are available to static initializers (#16719)
Moves the application of jackson defaults overrides into pure java, and
applies them statically _before_ the `org.logstash.ObjectMappers` has a chance
to start initializing object mappers that rely on the defaults.

We replace the runner's invocation (which was too late to be fully applied) with
a _verification_ that the configured defaults have been applied.
2024-12-04 14:27:26 -08:00
Rob Bavey
ab19769521
Pin date dependency to 3.3.3 (#16755)
Resolves: #16095, #16754
2024-12-04 14:39:50 -05:00
Mashhur
4d9942d68a
Update usage of beats-input obsoleted SSL params in the core. (#16753) 2024-12-04 11:12:21 -08:00
Rob Bavey
e3265d93e8
Pin jar-dependencies to 0.4.1 (#16747)
Pin jar-dependencies to `0.4.1`, until https://github.com/jruby/jruby/issues/7262
is resolved.
2024-12-03 17:35:00 -05:00
João Duarte
af76c45e65
Update logstash_releases.json to account for 7.17.26 GA (#16746) 2024-12-03 15:25:55 +00:00
João Duarte
1851fe6b2d
Update logstash_releases.json to account for GA of 8.15.5 (#16734) 2024-11-27 14:09:22 +00:00
mmahacek
d913e2ae3d
Docs: Troubleshooting update for JDK bug handling cgroups v1 (#16721)
---------
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-11-27 11:08:20 +00:00
Rob Bavey
ccde1eb8fb
Add SSL Breaking changes for logstash-output-http to breaking changes… (#16701)
* Add SSL Breaking changes for logstash-output-http to breaking changes doc
* Add missing discrete tags
* Improve formatting of plugin breaking changes

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-11-26 16:23:44 -05:00
Rob Bavey
0e58e417ee
Increase timeout for docs PR link action (#16718)
Update the timeout from 15 minutes to 30 minutes to try to fix the inline docs preview feature, and clean up comments
2024-11-22 11:57:15 -05:00
Karen Metts
615545027f
Doc: Roll 6.3 breaking changes under 6.0 series (#16706) 2024-11-22 11:16:02 -05:00
Cas Donoghue
eb7e1253e0
Revert "Bugfix for BufferedTokenizer to completely consume lines in case of lines bigger then sizeLimit (#16482)" (#16715)
This reverts commit 85493ce864.
2024-11-21 09:18:59 -08:00
João Duarte
aff8d1cce7
update logstash_release with 8.16.1 and 8.16.2-SNAPSHOT (#16711) 2024-11-21 10:03:50 +00:00
João Duarte
2f0e10468d
Update logstash_releases.json to account for 8.17 Feature Freeze (#16709) 2024-11-21 09:35:33 +00:00
github-actions[bot]
0dd64a9d63
PipelineBusV2 deadlock proofing (#16671) (#16682)
* pipeline bus: add deadlock test for unlisten/unregisterSender

* pipeline bus: eliminate deadlock

Moves the sync-to-notify out of the `AddressStateMapping#mutate`'s effective
synchronous block to eliminate a race condition where unlistening to an address
and unregistering a sender could deadlock.

It is safe to notify an AddressState's attached input without exclusive access
to the AddressState, because notifying an input that has since been disconnected
is net zero harm.

(cherry picked from commit 8af6343a26)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-11-20 13:26:13 -08:00
Mashhur
15b203448a
[Health API E2E] Align on agent python version and avoid pip install. (#16704)
* Pip install is unnecessary and sometimes hangs with prompt dialog. This commit aligns on agent python version which has a default pip installed.

* Update .buildkite/scripts/health-report-tests/main.sh

Improve readability.

Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>

---------

Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>
2024-11-20 10:26:02 -08:00
Cas Donoghue
7b3d23b9d5
Replace removed yaml module (#16703)
In https://github.com/elastic/logstash/pull/16586 the module include was
removed. This causes failures in the script when the module is referenced. This
commit re-enables the include for the yaml module.
2024-11-20 10:13:30 -08:00
João Duarte
977efbddde
Update branches.json to include 8.15, 8.16 and 8.17 (#16698) 2024-11-20 16:37:10 +00:00
Cas Donoghue
e0ed994ab1
Update license checker with new logger dependency (#16695)
A new transitive dependency on the `logger` gem has been added through sinatra 4.1.0. Update the
license checker to ensure this is accounted for.
2024-11-20 11:49:28 +00:00
Andrea Selva
d4fb06e498
Introduce a new flag to explicitly permit legacy monitoring (#16586)
Introduce a new flag setting `xpack.monitoring.allow_legacy_collection` which enables the legacy monitoring collector.

Update the method that tests whether monitoring is enabled so that it also considers `xpack.monitoring.allow_legacy_collection` when determining whether the `monitoring.*` settings are valid.
By default it is false; the user has to intentionally enable it to continue using the legacy monitoring settings.


---------

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-11-19 08:52:28 +01:00
Andrea Selva
a94659cf82
Default buffer type to 'heap' for 9.0 (#16500)
Switch the default value of `pipeline.buffer.type` to use the heap memory instead of direct one.

Change the default value of the setting `pipeline.buffer.type` from direct to heap and update consequently the documentation.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-11-13 16:58:56 +01:00
João Duarte
2a23680cfd
Update logstash_releases.json for new branching strategy (#16585)
See naming rationale in https://github.com/logstash-plugins/.ci/pull/63#issue-2597373955
After 8.16 is GA, the "releases" entry should become:

```json
  "releases": {
    "7.current": "7.17.24",
    "8.current": "8.16.0",
    "8.previous": "8.15.3"
  },
```

For snapshots we'll also test against "main", "8.next", and "8.future". The labels are:

- `main`: main branch
-  `8.future`: the future 8.x release, i.e. current version of the 8.x branch
- `8.next`: the short lived period between a minor's FF - when the new branch is cut from 8.x - and GA
- `8.current`: the most recent 8.x release
- `8.previous`: the previous, but still supported, 8.x release
2024-11-13 11:04:02 +00:00
Cas Donoghue
ff8c154c4d
Extend ruby linting tasks to handle file inputs (#16660)
This commit extends the gradle and rake tasks to pass through a list of files
for rubocop to lint. This allows more specificity and fine grained control for
linting when the consumer of the tasks only wishes to lint a select few files.
2024-11-12 12:46:52 -08:00
Yehor Shvedov
74d87c9ea0
Update catalog-info file with correct system property (#16669) 2024-11-12 21:40:17 +02:00
kaisecheng
5826c6f902
Use UBI as base image (#16599)
Logstash Docker images, full and OSS, now use a UBI image as their base, replacing the previous Ubuntu base.

- change the base image of `full` and `oss` to ubi
- Set locale to C.UTF-8
- remove ubi flavour
- use go image to build env2yaml
- remove redundant and refactor steps
- add support to build image in mac aarch64
- allow customizing ELASTIC_VERSION and LOCAL_ARTIFACTS for test purpose
2024-11-08 16:25:12 +00:00
Ellie
d9ead9a8db
Remove link to deleted cluster (#16658) 2024-11-08 09:03:47 -05:00
Nicole Albee
046ea1f5a8
For custom java plugins, set the platform = 'java'. (#16628) 2024-11-06 08:48:21 +00:00
João Duarte
efbee31461
Update .ruby-version to jruby-9.4.9.0 (#16642) 2024-11-06 08:22:37 +00:00
kaisecheng
5847d77331
skip allow_superuser in Windows OS (#16629)
Because the user id is always zero on Windows,
this commit excludes the check for running as root on Windows.
2024-11-05 15:37:19 +00:00
Nicole Albee
113585d4a5
Anchor the -java match pattern at the end of the string. (#16626)
This fixes the offline install problem of the logstash-input-java_filter_example plugin.
2024-11-05 14:21:15 +00:00
João Duarte
6703aec476
bump jruby to 9.4.9.0 (#16634) 2024-11-05 13:49:07 +00:00
kaisecheng
849f431033
fix Windows java not found log (#16633) 2024-11-05 10:15:51 +00:00
Andrea Selva
852149be2e
Update JDK to latest in versions.yml (#16627)
Update JDK to version 21.0.5+11
2024-11-04 16:40:20 +01:00
kaisecheng
8ce58b8355
run acceptance tests as non-root (#16624) 2024-11-01 19:47:08 +00:00
kaisecheng
00da72378b
add bootstrap to docker build to fix missing jars (#16622)
The DRA build failed because the required jars were missing, as they had been removed during the Docker build process.
2024-11-01 15:45:35 +00:00
Karen Metts
0006937e46
Doc: Reset breaking changes and release notes for 9.0 (#16603) 2024-10-31 18:06:52 -04:00
João Duarte
9eced9a106
reduce effort during build of docker images (#16619)
There's no need to build jdk-less and windows tarballs for docker images,
so this change simplifies the build process.

It should reduce the time needed to build docker images.
2024-10-31 16:00:25 +00:00
João Duarte
472e27a014
make docker build and gradle tasks more friendly towards ci output (#16618) 2024-10-31 16:00:11 +00:00
kaisecheng
db59cd0fbd
set allow_superuser to false as default (#16558)
- set allow_superuser as false by default for v9
- change the buildkite image of ruby unit test to non-root
2024-10-31 13:33:00 +00:00
Andrea Selva
c602b851bf
[CI] Change agent for JDK availability check and add schedule also for 8.x (#16614)
Switch execution agent of JDK availability check pipeline from vm-agent to container-agent.
Moves the schedule definition from the `Logstash Pipeline Scheduler` pipeline into the pipeline definition, adding a schedule also for `8.x` branch.
2024-10-30 12:13:28 +01:00
Andrea Selva
5d523aa5c8
Fix bad reference to a variable (#16615) 2024-10-30 11:44:41 +01:00
Andrea Selva
ed5874bc27
Use jvm catalog for reproducible builds and expose new pipeline to check JDK availability (#16602)
Updates the existing `createElasticCatalogDownloadUrl` method to use the precise version retrieved from `versions.yml` to download the JDK, instead of the latest of the major version. This makes the build reproducible again.
Defines a new Gradle `checkNewJdkVersion` task to check if there is a new JDK version available from JVM catalog matching the same major of the current branch. 
Creates a new Buildkite pipeline to execute a `bash` script to run the Gradle task; plus it also update the `catalog-info.yaml` with the new pipeline and a trigger to execute every week.
2024-10-29 10:55:15 +01:00
João Duarte
ca19f0029e
make max inflight warning global to all pipelines (#16597)
The current max inflight error message focuses on a single pipeline and on a maximum amount of 10k events regardless of the heap size.

The new warning will take into account all loaded pipelines and also consider the heap size, giving a warning if the total number of events consumes 10% or more of the total heap.

For the purpose of the warning, events are assumed to be 2KB, as that is a normal size for a small log entry.
2024-10-25 14:44:33 +01:00
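Under the commit's stated assumptions (events estimated at 2KB, warning when inflight events may consume 10% or more of the heap), the global check could be sketched as follows; the pipeline field names and the use of `workers * batch_size` as the inflight estimate are illustrative, not the actual implementation:

```ruby
EVENT_SIZE_BYTES = 2 * 1024  # assumed average size of a small log entry (2KB)
HEAP_FRACTION    = 0.10      # warn when estimated inflight bytes reach 10% of heap

# Total inflight events across all loaded pipelines: sum of workers * batch_size.
def inflight_warning?(pipelines, heap_bytes)
  total_events = pipelines.sum { |p| p[:workers] * p[:batch_size] }
  total_events * EVENT_SIZE_BYTES >= heap_bytes * HEAP_FRACTION
end

pipelines = [{ workers: 12, batch_size: 125 }, { workers: 4, batch_size: 1000 }]
# 5500 events * 2KB ~= 11MB vs ~107MB (10% of 1GiB): no warning.
inflight_warning?(pipelines, 1024**3)  # => false
```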
Edmo Vamerlatti Costa
93b0913fd9
Updated CI releases inventory after 7.17.25 (#16589) 2024-10-22 15:40:02 +02:00
kaisecheng
566bdf66fc
remove http.* settings (#16552)
The commit removes the deprecated settings http.port, http.host, http.enabled, and http.environment
2024-10-18 12:15:38 +01:00
kaisecheng
467ab3f44b
Enable log.format.json.fix_duplicate_message_fields by default (#16578)
Set `log.format.json.fix_duplicate_message_fields` to `true` as default
to avoid collision of the message field in log lines when log.format is JSON
2024-10-17 16:22:30 +01:00
Edmo Vamerlatti Costa
dcafa0835e
Updated CI releases inventory after 8.15.3 (#16581) 2024-10-17 16:55:31 +02:00
João Duarte
daf979c189
default to jdk 21 on all ci/build tasks (#16464) 2024-10-17 15:47:52 +01:00
kaisecheng
3f0ad12d06
add http.* deprecation log (#16538)
- refactor deprecated alias to support obsoleted version
- add deprecation log for http.* config
2024-10-17 14:07:51 +01:00
Andrea Selva
b6f16c8b81
Adds a JMH benchmark to test BufferedTokenizerExt class (#16564)
Adds a JMH benchmark to measure the performance of BufferedTokenizerExt.
Update also Gradle build script to remove CMS GC flags and fix deprecations for Gradle 9.0.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-10-16 16:55:52 +02:00
Andrea Selva
85493ce864
Bugfix for BufferedTokenizer to completely consume lines in case of lines bigger than sizeLimit (#16482)
Fixes the behaviour of the tokenizer to be able to work properly when buffer full conditions are met.

Updates BufferedTokenizerExt so that it can accumulate token fragments coming from different data segments. When a "buffer full" condition is matched, it records this state in a local field so that on the next data segment it can consume all the token fragments up to the next token delimiter.
Updates the accumulation variable from a RubyArray containing strings to a StringBuilder which contains the head token, while the remaining token fragments are stored in the input array.
Furthermore it translates the `buftok_spec` tests into JUnit tests.
2024-10-16 16:48:25 +02:00
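The described behavior can be illustrated with a small Ruby tokenizer (a sketch only — the real `BufferedTokenizerExt` is Java and uses a StringBuilder accumulator; the class and method names here are made up): tokens larger than the size limit are dropped, and trailing fragments of an oversized token are consumed until the next delimiter.

```ruby
# Delimiter-based tokenizer with a size limit: oversized tokens are
# discarded, including their fragments spread across data segments.
class SizeLimitedTokenizer
  def initialize(delimiter: "\n", size_limit: 10)
    @delimiter, @size_limit = delimiter, size_limit
    @buffer = +""        # accumulated unfinished fragment
    @discarding = false  # true while consuming the tail of an oversized token
  end

  def extract(data)
    parts = (@buffer + data).split(@delimiter, -1)
    @buffer = parts.pop || +""  # last element is the unfinished fragment
    tokens = []
    parts.each do |token|
      if @discarding
        @discarding = false     # delimiter reached: the oversized token ends here
      elsif token.size <= @size_limit
        tokens << token
      end
    end
    if @buffer.size > @size_limit
      @buffer = +""             # drop the fragment and keep discarding
      @discarding = true
    end
    tokens
  end
end
```

For example, with `size_limit: 5`, feeding `"toolongtoken"` then `"stilltoolong\nok\n"` yields only `["ok"]`: the oversized token's fragments are consumed up to the delimiter and dropped.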
João Duarte
ab77d36daa
ensure minitar 1.x is used instead of 0.x (#16565) 2024-10-16 12:37:35 +01:00
Andrea Selva
63706c1a36
Marked ArcSight Module as deprecated in documentation. (#16553)
Updates guide for ArcSight Module to deprecate it.
2024-10-16 08:37:13 +02:00
kaisecheng
cb3b7c01dc
Remove event_api.tags.illegal for v9 (#16461)
This commit removed `--event_api.tags.illegal` option
Fix: #16356
2024-10-15 22:27:36 +01:00
Rob Bavey
3f2a659289
Update branches.json to point to 8.16 branch (#16560) 2024-10-15 15:43:16 -04:00
Mashhur
dfd256e307
[Health API] Add 1-min, 5-min backpressure and multipipeline test cases. (#16550)
* Health API: Add 1-min and 5-min backpressure cases and improve Logstash termination logic.

* Apply suggestions from code review

Uncomment accidentally commented sources.

* Update .buildkite/scripts/health-report-tests/tests/slow-start.yaml

No need to wait for LS startup when using slow start scenario.

* Apply suggestions from code review

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

* Standardize YAML structure and rename wait time to wait_seconds

---------

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-10-15 08:32:17 -07:00
kaisecheng
b571e8f3e3
remove deprecated modules netflow, fb_apache and azure (#16514)
This commit removes files related to netflow, fb_apache and azure modules
Fix: #16357
2024-10-15 14:03:53 +01:00
Karen Metts
fc119df24a
Doc: Update 8.15.0 release notes to future proof link (#16554) 2024-10-14 11:10:15 -04:00
Ry Biesemeyer
937a9ea49f
docs: add health details for flow/worker_utilization (#16544)
* docs: add health details for flow/worker_utilization

* plus-append to indent flow details under flow
2024-10-11 15:10:11 -07:00
kaisecheng
8cd0fa8767
refactor log for event_api.tags.illegal (#16545)
- add `obsoleted_version` and remove `deprecated_msg` from `deprecated_option` for consistent warning message
2024-10-11 21:22:29 +01:00
Andrea Selva
6064587bc4
Keeps global settings aligned across entities used in the test for StatsEventFactory
Fixes a potentially flaky test due to a shared (LogStash::SETTINGS) fixture across the test base.


Forward port the commit 609155a61b used to fix the non clean backport PR #16531 of #16525 to 8.x.

LogStash::SETTINGS is used in the constructor of LogStash::Inputs::Metrics::StatsEventFactory to query the value of api.enabled. This PR keeps the value of the setting updated for both the Agent constructor and the StatsEventFactory.
2024-10-11 15:26:53 +02:00
Ry Biesemeyer
a931b2cde6
Flow worker utilization probe (#16532)
* flow: refactor pipeline refs to keep worker flows separate

* health: add worker_utilization probe

pipeline is:
  - RED "completely blocked" when last_5_minutes >= 99.999
  - YELLOW "nearly blocked" when last_5_minutes > 95
    - and includes "recovering" info when last_1_minute < 80
  - YELLOW "completely blocked" when last_1_minute >= 99.999
  - YELLOW "nearly blocked" when last_1_minute > 95

* tests: improve coverage of PipelineIndicator probes

* Apply suggestions from code review
2024-10-10 17:56:22 -07:00
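The threshold rules listed above can be sketched as a simple classifier (an illustration of the documented conditions — the field names, return shape, and the `:green`/"ok" fallback are assumptions, not the actual probe code):

```ruby
# Map worker_utilization flow readings to a health status, following
# the precedence of the rules in the commit message above.
def worker_utilization_status(last_1_minute:, last_5_minutes:)
  if last_5_minutes >= 99.999
    { status: :red, detail: "completely blocked" }
  elsif last_5_minutes > 95
    detail = "nearly blocked"
    detail += " (recovering)" if last_1_minute < 80
    { status: :yellow, detail: detail }
  elsif last_1_minute >= 99.999
    { status: :yellow, detail: "completely blocked" }
  elsif last_1_minute > 95
    { status: :yellow, detail: "nearly blocked" }
  else
    { status: :green, detail: "ok" }
  end
end

worker_utilization_status(last_1_minute: 100.0, last_5_minutes: 100.0)
# => { status: :red, detail: "completely blocked" }
```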
Ry Biesemeyer
065769636b
health: add logstash.forceApiStatus: green escape hatch (#16535) 2024-10-10 16:35:06 -07:00
Mashhur
4037adfc4a
Health api minor followups (#16533)
* Utilize default agent for Health API CI. Call python scripts directly from the CI step.

* Change BK agent to support both Java and python. Install pip manually and send env vars to subprocess.
2024-10-10 14:57:41 -07:00
github-actions[bot]
7f7af057f0
Feature: health report api (#16520) (#16523)
* [health] bootstrap HealthObserver from agent to API (#16141)

* [health] bootstrap HealthObserver from agent to API

* specs: mocked agent needs health observer

* add license headers

* Merge `main` into `feature/health-report-api` (#16397)

* Add GH vault plugin bot to allowed list (#16301)

* regenerate webserver test certificates (#16331)

* correctly handle stack overflow errors during pipeline compilation (#16323)

This commit improves error handling when pipelines that are too big hit the Xss limit and throw a StackOverflowError. Currently the exception is printed outside of the logger, and doesn’t even show if log.format is json, leaving the user to wonder what happened.

A couple of thoughts on the way this is implemented:

* There should be a first barrier to handle pipelines that are too large based on the PipelineIR compilation. The barrier would use the detection of Xss to determine how big a pipeline could be. This however doesn't reduce the need to still handle a StackOverflow if it happens.
* The catching of StackOverflowError could also be done on the WorkerLoop. However I'd suggest that this is unrelated to the Worker initialization itself, it just so happens that compiledPipeline.buildExecution is computed inside the WorkerLoop class for performance reasons. So I'd prefer logging to not come from the existing catch, but from a dedicated catch clause.

Solves #16320

* Doc: Reposition worker-utilization in doc (#16335)

* settings: add support for observing settings after post-process hooks (#16339)

Because logging configuration occurs after loading the `logstash.yml`
settings, deprecation logs from `LogStash::Settings::DeprecatedAlias#set` are
effectively emitted to a null logger and lost.

By re-emitting after the post-process hooks, we can ensure that they make
their way to the deprecation log. This change adds support for any setting
that responds to `Object#observe_post_process` to receive it after all
post-processing hooks have been executed.

Resolves: elastic/logstash#16332

* fix line used to determine ES is up (#16349)

* add retries to snyk buildkite job (#16343)

* Fix 8.13.1 release notes (#16363)

make a note of the fix that went to 8.13.1: #16026

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Update logstash_releases.json (#16347)

* [Bugfix] Resolve the array and char (single | double quote) escaped values of ${ENV} (#16365)

* Properly resolve the values from ENV vars if literal array string provided with ENV var.

* Docker acceptance test for persisting  keys and use actual values in docker container.

* Review suggestion.

Simplify the code by stripping whitespace before `gsub`, no need to check comma and split.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Doc: Add SNMP integration to breaking changes (#16374)

* deprecate java less-than 17 (#16370)

* Exclude substitution refinement on pipelines.yml (#16375)

* Exclude substitution refinement on pipelines.yml (applies on ENV vars and logstash.yml where env2yaml saves vars)

* Safety integration test for pipeline config.string contains ENV .

* Doc: Forwardport 8.15.0 release notes to main (#16388)

* Removing 8.14 from ci/branches.json as we have 8.15. (#16390)

---------

Co-authored-by: ev1yehor <146825775+ev1yehor@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Squashed merge from 8.x

* Failure injector plugin implementation. (#16466)

* Test purpose only failure injector integration (filter and output) plugins implementation. Add unit tests and include license notes.

* Fix the degrate method name typo.

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* Add explanation to the config params and rebuild plugin gem.

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* Health report integration tests bootstrapper and initial tests implementation (#16467)

* Health Report integration tests bootstrapper and initial slow start scenario implementation.

* Apply suggestions from code review

Renaming expectation check method name.

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

* Changed to branch concept, YAML structure simplified as changed to Dict.

* Apply suggestions from code review

Reflect `help_url` to the integration test.

---------

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

* health api: expose `GET /_health_report` with pipelines/*/status probe (#16398)

Adds a `GET /_health_report` endpoint with per-pipeline status probes, and wires the
resulting report status into the other API responses, replacing their hard-coded `green`
with a meaningful status indication.

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* docs: health report API, and diagnosis links (feature-targeted) (#16518)

* docs: health report API, and diagnosis links

* Remove plus-for-passthrough markers

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* merge 8.x into feature branch... (#16519)

* Add GH vault plugin bot to allowed list (#16301)

* regenerate webserver test certificates (#16331)

* correctly handle stack overflow errors during pipeline compilation (#16323)

This commit improves error handling when pipelines that are too big hit the Xss limit and throw a StackOverflowError. Currently the exception is printed outside of the logger, and doesn’t even show if log.format is json, leaving the user to wonder what happened.

A couple of thoughts on the way this is implemented:

* There should be a first barrier to handle pipelines that are too large based on the PipelineIR compilation. The barrier would use the detection of Xss to determine how big a pipeline could be. This however doesn't reduce the need to still handle a StackOverflow if it happens.
* The catching of StackOverflowError could also be done on the WorkerLoop. However I'd suggest that this is unrelated to the Worker initialization itself, it just so happens that compiledPipeline.buildExecution is computed inside the WorkerLoop class for performance reasons. So I'd prefer logging to not come from the existing catch, but from a dedicated catch clause.

Solves #16320

* Doc: Reposition worker-utilization in doc (#16335)

* settings: add support for observing settings after post-process hooks (#16339)

Because logging configuration occurs after loading the `logstash.yml`
settings, deprecation logs from `LogStash::Settings::DeprecatedAlias#set` are
effectively emitted to a null logger and lost.

By re-emitting after the post-process hooks, we can ensure that they make
their way to the deprecation log. This change adds support for any setting
that responds to `Object#observe_post_process` to receive it after all
post-processing hooks have been executed.

Resolves: elastic/logstash#16332
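The mechanism above can be sketched in a few lines of plain Ruby. Class and method names here are illustrative only, not the actual Logstash API; the point is the duck-typed `observe_post_process` hook that re-notifies settings after all post-process hooks have run:

```ruby
class DeprecatedAliasSetting
  attr_reader :emitted

  def initialize
    @emitted = []
  end

  def set(value)
    @value = value
    # At this point logging is not yet configured; this message would be lost.
    @emitted << "deprecation (pre-logging): #{@value}"
  end

  def observe_post_process
    # Re-emitted after post-process hooks, when the deprecation logger exists.
    @emitted << "deprecation (re-emitted): #{@value}"
  end
end

class SettingsSketch
  def initialize(settings)
    @settings = settings
  end

  def post_process
    # ... post-process hooks run here (logging configuration among them) ...
    @settings.each do |s|
      s.observe_post_process if s.respond_to?(:observe_post_process)
    end
  end
end
```

Any setting lacking `observe_post_process` is simply skipped, so the hook is opt-in per setting.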

* fix line used to determine ES is up (#16349)

* add retries to snyk buildkite job (#16343)

* Fix 8.13.1 release notes (#16363)

make a note of the fix that went to 8.13.1: #16026

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Update logstash_releases.json (#16347)

* [Bugfix] Resolve the array and char (single | double quote) escaped values of ${ENV} (#16365)

* Properly resolve the values from ENV vars if literal array string provided with ENV var.

* Docker acceptance test for persisting keys and using actual values in the docker container.

* Review suggestion.

Simplify the code by stripping whitespace before `gsub`; there is no need to check for commas and split.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
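A hypothetical sketch of the behaviour this fix targets (not the actual Logstash implementation): after substituting `${ENV}` placeholders, a value that resolves to a literal array string is coerced into an array, stripping whitespace before processing:

```ruby
# Resolve ${VAR} placeholders from an env hash; if the resolved value looks
# like a literal array string such as '["a", "b"]', split it into an array,
# removing surrounding quotes from each element.
def resolve_env(value, env)
  resolved = value.gsub(/\$\{(\w+)\}/) { env.fetch(Regexp.last_match(1)) }
  stripped = resolved.strip
  if stripped.start_with?("[") && stripped.end_with?("]")
    stripped[1..-2].split(",").map { |item| item.strip.gsub(/\A['"]|['"]\z/, "") }
  else
    stripped
  end
end
```

Non-array values pass through unchanged, so only literal array strings are treated specially.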

* Doc: Add SNMP integration to breaking changes (#16374)

* deprecate java less-than 17 (#16370)

* Exclude substitution refinement on pipelines.yml (#16375)

* Exclude substitution refinement on pipelines.yml (applies to ENV vars and logstash.yml, where env2yaml saves vars)

* Safety integration test for a pipeline config.string that contains ENV vars.

* Doc: Forwardport 8.15.0 release notes to main (#16388)

* Removing 8.14 from ci/branches.json as we have 8.15. (#16390)

* Increase Jruby -Xmx to avoid OOM during zip task in DRA (#16408)

Fix: #16406

* Generate Dataset code with meaningful fields names (#16386)

This PR is intended to help Logstash developers or users who want to better understand the code that's autogenerated to model a pipeline, assigning more meaningful names to the Dataset subclasses' fields.

Updates `FieldDefinition` to receive the name of the field from construction methods, so that it can be used during the code generation phase, instead of the existing incremental `field%n`.
Updates `ClassFields` to propagate the explicit field name down to the `FieldDefinitions`.
Updates the `DatasetCompiler` code that adds fields to `ClassFields` to assign proper names to the generated Dataset's fields.

* Implements safe evaluation of conditional expressions, logging the error without killing the pipeline (#16322)

This PR protects if statements against expression evaluation errors, cancelling the event under processing and logging it.
This avoids crashing a pipeline that encounters a runtime error during event condition evaluation, and makes it possible to debug the root cause by reporting the offending event and removing it from the current processing batch.

Translates the `org.jruby.exceptions.TypeError`, `IllegalArgumentException`, and `org.jruby.exceptions.ArgumentError` that can happen during `EventCondition` evaluation into a custom `ConditionalEvaluationError` which bubbles up through the AST tree nodes. It's caught in the `SplitDataset` node.
Updates the generation of the `SplitDataset` so that the execution of the `filterEvents` method inside the compute body is try-catch guarded, deferring error handling to an instance of `AbstractPipelineExt.ConditionalEvaluationListener`. In this particular case the error management consists of just logging the offending Event.

---------

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Update logstash_releases.json (#16426)

* Release notes for 8.15.1 (#16405) (#16427)

* Update release notes for 8.15.1

* update release note

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Kaise Cheng <kaise.cheng@elastic.co>
(cherry picked from commit 2fca7e39e8)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Fix ConditionalEvaluationError to not include the event that errored in its serialized form, because this class is never expected to be serialized. (#16429) (#16430)

Make the inner field of ConditionalEvaluationError transient so it is skipped during serialization.

(cherry picked from commit bb7ecc203f)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* use gnu tar compatible minitar to generate tar artifact (#16432) (#16434)

Using VERSION_QUALIFIER when building the tarball distribution will fail, since Ruby's TarWriter implements the older POSIX.1-1988 version of tar, which cannot store paths longer than 100 characters.

For the long paths used in Logstash's plugins, mainly due to nested folders from jar-dependencies, we need the tarball to follow either the 2001 ustar format or GNU tar, which is what the minitar gem implements.

(cherry picked from commit 69f0fa54ca)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* account for the 8.x in DRA publishing task (#16436) (#16440)

the current DRA publishing task computes the branch from the version
contained in the version.yml

This is done by taking the major.minor and confirming that a branch
exists with that name.

However this pattern won't be applicable for 8.x, as that branch
currently points to 8.16.0 and there is no 8.16 branch.

This commit falls back to reading the buildkite injected
BUILDKITE_BRANCH variable.

(cherry picked from commit 17dba9f829)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Fixes the issue where LS wipes out all quotes from docker env variables. (#16456) (#16459)

* Fixes the issue where LS wipes out all quotes from docker env variables. This is an issue when running LS on docker with CONFIG_STRING, which needs to keep quotes within env variables.

* Add a docker acceptance integration test.

(cherry picked from commit 7c64c7394b)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Known issue for 8.15.1 related to env vars references (#16455) (#16469)

(cherry picked from commit b54caf3fd8)

Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>

* bump .ruby_version to jruby-9.4.8.0 (#16477) (#16480)

(cherry picked from commit 51cca7320e)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Release notes for 8.15.2 (#16471) (#16478)

Co-authored-by: andsel <selva.andre@gmail.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 01dc76f3b5)

* Change LogStash::Util::SubstitutionVariables#replace_placeholders refine argument to optional (#16485) (#16488)

(cherry picked from commit 8368c00367)

Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>

* Use jruby-9.4.8.0 in exhaustive CIs. (#16489) (#16491)

(cherry picked from commit fd1de39005)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Don't use an older JRuby with oraclelinux-7 (#16499) (#16501)

A recent PR (elastic/ci-agent-images/pull/932) modernized the VM images
and removed JRuby 9.4.5.0 and some older versions.

This ended up breaking exhaustive test on Oracle Linux 7 that hard coded
JRuby 9.4.5.0.

PR https://github.com/elastic/logstash/pull/16489 worked around the
problem by pinning to the new JRuby, but actually we don't
need the conditional anymore since the original issue
https://github.com/jruby/jruby/issues/7579#issuecomment-1425885324 has
been resolved and none of our releasable branches (apart from 7.17 which
uses `9.2.20.1`) specify `9.3.x.y` in `/.ruby-version`.

Therefore, this commit removes conditional setting of JRuby for
OracleLinux 7 agents in exhaustive tests (and relies on whatever
`/.ruby-version` defines).

(cherry picked from commit 07c01f8231)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>

* Improve pipeline bootstrap error logs (#16495) (#16504)

This PR adds the details of the causing errors to the pipeline converge state error logs

(cherry picked from commit e84fb458ce)

Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>

* Logstash Health Report Tests Buildkite pipeline setup. (#16416) (#16511)

(cherry picked from commit 5195332bc6)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Make health report test runner script executable. (#16446) (#16512)

(cherry picked from commit 2ebf2658ff)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Backport PR #16423 to 8.x: DLQ-ing events that trigger a conditional evaluation error. (#16493)

* DLQ-ing events that trigger a conditional evaluation error. (#16423)

When a conditional evaluation encounters an error in the expression, the event that triggered the issue is sent to the pipeline's DLQ, if enabled for the executing pipeline.

This PR builds on the work done in #16322: the `ConditionalEvaluationListener`, which receives notifications about if-statement evaluation failures, is improved to also send the event to the DLQ (if enabled in the pipeline) rather than just logging it.

(cherry picked from commit b69d993d71)

* Fixed warning about the non-serializable field DeadLetterQueueWriter in the serializable AbstractPipelineExt

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* add deprecation log for `--event_api.tags.illegal` (#16507) (#16515)

- move `--event_api.tags.illegal` from option to deprecated_option
- add deprecation log when the flag is explicitly used
relates: #16356

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
(cherry picked from commit a4eddb8a2a)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

---------

Co-authored-by: ev1yehor <146825775+ev1yehor@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>
Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>

---------

Co-authored-by: ev1yehor <146825775+ev1yehor@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>
Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
(cherry picked from commit 7eb5185b4e)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-10-10 11:17:14 -07:00
Andrea Selva
648472106f
[test] Fix xpack test to check for http_address stats only if the webserver is enabled (#16525)
Set the 'api.enabled' setting to reflect the flag webserver_enabled and consequently test for http_address presence in settings iff the web server is enabled.
2024-10-10 18:57:34 +02:00
github-actions[bot]
dc24f02972
Fix QA failure introduced by Health API changes and update rspec dependency of the QA package. (#16521) (#16522)
* Update rspec dependency of the QA package.

* Update qa/Gemfile

Align on rspec 3.13.x

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>

* Fix the QA test failure caused after reflecting Health Report status to the Node stats.

---------

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
(cherry picked from commit 1e5105fcd8)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-10-09 15:28:02 -07:00
kaisecheng
a4eddb8a2a
add deprecation log for --event_api.tags.illegal (#16507)
- move `--event_api.tags.illegal` from option to deprecated_option
- add deprecation log when the flag is explicitly used
relates: #16356

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-10-08 13:49:59 +01:00
João Duarte
3480c32b6e
install jdk 11 in agent for snyk 7.17 scanning (#16510) 2024-10-07 13:30:25 +01:00
Andrea Selva
5d4825f000
Avoid to access Java DeprecatedAlias value other than Ruby's one (#16506)
Update the Settings to_hash method to also skip the Java DeprecatedAlias, not just the Ruby one.
PR #15679 introduced org.logstash.settings.DeprecatedAlias, which mirrors the behaviour of the Ruby class Setting::DeprecatedAlias. The equality check in Logstash::Settings, as described in #16505 (comment), is implemented by comparing the maps.
The conversion of Settings to the corresponding maps filtered out the Ruby implementation of DeprecatedAlias but not the Java one.
This PR adds the Java one to the filter list as well.
2024-10-04 16:46:00 +02:00
João Duarte
5aabeda5fd
fix snapshot branch detection for snyk (#16484)
* handle two 8. branches
2024-10-04 09:13:12 +01:00
Edmo Vamerlatti Costa
e84fb458ce
Improve pipeline bootstrap error logs (#16495)
This PR adds the cause errors details on the pipeline converge state error logs
2024-10-03 11:08:42 +02:00
Dimitrios Liappis
60670087cb
[ci] Skip slack for retries JDK matrix jobs (#16316)
With this commit we shush slack alerts for JDK matrix CI jobs that
succeed after (automatic) retries.
2024-10-03 10:29:58 +03:00
Dimitrios Liappis
07c01f8231
Don't use an older JRuby with oraclelinux-7 (#16499)
A recent PR (elastic/ci-agent-images/pull/932) modernized the VM images
and removed JRuby 9.4.5.0 and some older versions.

This ended up breaking exhaustive test on Oracle Linux 7 that hard coded
JRuby 9.4.5.0.

PR https://github.com/elastic/logstash/pull/16489 worked around the
problem by pinning to the new JRuby, but actually we don't
need the conditional anymore since the original issue
https://github.com/jruby/jruby/issues/7579#issuecomment-1425885324 has
been resolved and none of our releasable branches (apart from 7.17 which
uses `9.2.20.1`) specify `9.3.x.y` in `/.ruby-version`.

Therefore, this commit removes conditional setting of JRuby for
OracleLinux 7 agents in exhaustive tests (and relies on whatever
`/.ruby-version` defines).
2024-10-02 19:07:16 +03:00
Andrea Selva
4e49adc6f3
Fix jdk21 warnings (#16496)
Suppress some warnings compared with JDK 21

- this-escape uses this before it is completely initialised.
- avoid a non-serialisable DeadLetterQueueWriter field in a serialisable instance.
2024-10-02 15:28:37 +02:00
Andrea Selva
b69d993d71
DLQ-ing events that trigger a conditional evaluation error. (#16423)
When a conditional evaluation encounters an error in the expression, the event that triggered the issue is sent to the pipeline's DLQ, if enabled for the executing pipeline.

This PR builds on the work done in #16322: the `ConditionalEvaluationListener`, which receives notifications about if-statement evaluation failures, is improved to also send the event to the DLQ (if enabled in the pipeline) rather than just logging it.
2024-10-02 12:23:54 +02:00
Mashhur
fd1de39005
Use jruby-9.4.8.0 in exhaustive CIs. (#16489) 2024-10-02 09:30:22 +01:00
Andrea Selva
61de60fe26
[Spacetime] Reimplement config Setting class in java (#15679)
Reimplement the root Ruby Setting class in Java and use it from the Ruby one, turning the original Ruby class into a shell that wraps the Java instance.
In particular, create a new hierarchy symmetric to the Ruby one (for now just the `Setting`, `Coercible` and `Boolean` classes), also moving over the setting-deprecation feature. In this way the new `org.logstash.settings.Boolean` is syntactically and semantically equivalent to the old Ruby Boolean class, which it replaces.
2024-10-02 09:09:47 +02:00
Edmo Vamerlatti Costa
8368c00367
Change LogStash::Util::SubstitutionVariables#replace_placeholders refine argument to optional (#16485) 2024-10-01 11:33:35 -07:00
github-actions[bot]
2fe91226eb
Release notes for 8.15.2 (#16471) (#16486)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: andsel <selva.andre@gmail.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 01dc76f3b5)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-10-01 13:28:26 -04:00
Andrea Selva
f35e10d792
Updated CI releases inventory after 8.15.2 (#16474) 2024-09-26 18:25:32 +02:00
João Duarte
5c57adebb9
simplify snyk scanning (#16475)
* remove docker image scanning as that's handled by infosec
* run buildkite job on a docker image instead of vm (no need to test docker any more)
2024-09-25 14:38:52 +01:00
João Duarte
51cca7320e
bump .ruby_version to jruby-9.4.8.0 (#16477) 2024-09-25 13:15:14 +01:00
kaisecheng
0ef4c7da32
[ci] fix wrong queue type in benchmark marathon (#16465) 2024-09-20 19:58:20 +01:00
Luca Belluccini
b54caf3fd8
Known issue for 8.15.1 related to env vars references (#16455) 2024-09-19 13:32:14 -04:00
kaisecheng
3e98cb1625
[CI] fix benchmark marathon (#16447)
- split main.sh into core.sh and main.sh
- rename all.sh to marathon.sh
- fix vault expiry issue for long-running tasks
- fix being unable to call the main function
- update the saved object runtime field `release` to return true when `version` contains "SNAPSHOT"
2024-09-17 22:29:02 +01:00
Mashhur
7c64c7394b
Fixes the issue where LS wipes out all quotes from docker env variables. (#16456)
* Fixes the issue where LS wipes out all quotes from docker env variables. This is an issue when running LS on docker with CONFIG_STRING, which needs to keep quotes within env variables.

* Add a docker acceptance integration test.
2024-09-17 06:46:19 -07:00
kaisecheng
4e82655cd5
remove ingest-converter (#16453)
Removed the tool Ingest Converter
2024-09-16 15:57:51 +01:00
Andrea Selva
1ec37b7c41
Drop JDK 11 support (#16443)
If a user runs Logstash with a host-provided JDK rather than the one bundled with the Logstash distribution, for example by setting a specific LS_JAVA_HOME, and that JDK is older than JDK 17, then Logstash refuses to start. The user has to provide at least JDK 17, or unset LS_JAVA_HOME and let Logstash use the bundled JDK.

Updates jvm.options and JvmOptionsParser to remove support for JDK 11. If the options parser identifies that the running JVM is older than 17, it refuses to start.

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-09-13 17:33:16 +02:00
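The real check lives in the Java JvmOptionsParser; a minimal Ruby sketch of such a version gate (hypothetical helper, assuming a `java.specification.version`-style string) looks like:

```ruby
MINIMUM_JAVA_MAJOR = 17

# Returns true if the JVM spec version is acceptable, raises otherwise.
def check_java!(spec_version)
  parts = spec_version.split(".")
  major = parts.first.to_i
  major = parts[1].to_i if major == 1 # legacy "1.8.0"-style version strings
  return true if major >= MINIMUM_JAVA_MAJOR

  raise "Logstash requires Java #{MINIMUM_JAVA_MAJOR} or later, found #{spec_version}. " \
        "Provide a newer JDK via LS_JAVA_HOME, or unset it to use the bundled JDK."
end
```

Failing fast at startup with an actionable message beats a cryptic bytecode error later in the run.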
Mashhur
2ebf2658ff
Make health report test runner script executable. (#16446) 2024-09-12 13:24:53 -07:00
kaisecheng
5452cccf76
[CI] benchmark dashboard and pipeline for testing against multiple versions (#16421)
- add benchmark dashboard and related saved objects
- add one buildkite pipeline to test against multiple versions
- remove null field in json
- add `FLOG_FILE_CNT`, `VAULT_PATH`, `TAGS`
2024-09-12 19:45:23 +01:00
Mashhur
5195332bc6
Logstash Health Report Tests Buildkite pipeline setup. (#16416) 2024-09-10 11:14:14 -07:00
kaisecheng
701108f88b
update ci release 7.17.24 (#16439) 2024-09-10 12:20:50 +01:00
João Duarte
17dba9f829
account for the 8.x in DRA publishing task (#16436)
the current DRA publishing task computes the branch from the version
contained in the version.yml

This is done by taking the major.minor and confirming that a branch
exists with that name.

However this pattern won't be applicable for 8.x, as that branch
currently points to 8.16.0 and there is no 8.16 branch.

This commit falls back to reading the buildkite injected
BUILDKITE_BRANCH variable.
2024-09-10 10:55:34 +01:00
João Duarte
f60e987173
bump to 9.0.0 and adapt CI accordingly (#16428) 2024-09-09 13:46:00 +01:00
João Duarte
69f0fa54ca
use gnu tar compatible minitar to generate tar artifact (#16432)
Using VERSION_QUALIFIER when building the tarball distribution will fail, since Ruby's TarWriter implements the older POSIX.1-1988 version of tar, which cannot store paths longer than 100 characters.

For the long paths used in Logstash's plugins, mainly due to nested folders from jar-dependencies, we need the tarball to follow either the 2001 ustar format or GNU tar, which is what the minitar gem implements.
2024-09-09 11:33:44 +01:00
Andrea Selva
bb7ecc203f
Fix ConditionalEvaluationError to not include the event that errored in its serialized form, because this class is never expected to be serialized. (#16429)
Make the inner field of ConditionalEvaluationError transient so it is skipped during serialization.
2024-09-06 12:09:58 +02:00
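The Java fix marks the field `transient`; the Ruby analogue of excluding a field from serialization is overriding `marshal_dump`/`marshal_load`. A sketch with illustrative names (not the Logstash classes):

```ruby
class EvaluationErrorInfo
  attr_reader :message, :event

  def initialize(message, event)
    @message = message
    @event = event # potentially large or non-serializable; skipped below
  end

  # Only the message survives serialization; the event is deliberately
  # excluded, like a transient field in Java.
  def marshal_dump
    [@message]
  end

  def marshal_load(data)
    @message = data.first
    @event = nil
  end
end
```

Because the event is never handed to Marshal, even an unserializable payload (a Proc, say) no longer breaks or bloats the dump.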
github-actions[bot]
58b6a0ac77
Release notes for 8.15.1 (#16405) (#16427)
* Update release notes for 8.15.1

* update release note

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Kaise Cheng <kaise.cheng@elastic.co>
(cherry picked from commit 2fca7e39e8)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-09-05 17:12:58 +01:00
kaisecheng
285d13a515
Update logstash_releases.json (#16426) 2024-09-05 17:10:52 +01:00
Andrea Selva
b88e23702c
Implements safe evaluation of conditional expressions, logging the error without killing the pipeline (#16322)
This PR protects if statements against expression evaluation errors, cancelling the event under processing and logging it.
This avoids crashing a pipeline that encounters a runtime error during event condition evaluation, and makes it possible to debug the root cause by reporting the offending event and removing it from the current processing batch.

Translates the `org.jruby.exceptions.TypeError`, `IllegalArgumentException`, and `org.jruby.exceptions.ArgumentError` that can happen during `EventCondition` evaluation into a custom `ConditionalEvaluationError` which bubbles up through the AST tree nodes. It's caught in the `SplitDataset` node.
Updates the generation of the `SplitDataset` so that the execution of the `filterEvents` method inside the compute body is try-catch guarded, deferring error handling to an instance of `AbstractPipelineExt.ConditionalEvaluationListener`. In this particular case the error management consists of just logging the offending Event.


---------

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-09-05 10:57:10 +02:00
Andrea Selva
ac034a14ee
Generate Dataset code with meaningful fields names (#16386)
This PR is intended to help Logstash developers or users who want to better understand the code that's autogenerated to model a pipeline, assigning more meaningful names to the Dataset subclasses' fields.

Updates `FieldDefinition` to receive the name of the field from construction methods, so that it can be used during the code generation phase, instead of the existing incremental `field%n`.
Updates `ClassFields` to propagate the explicit field name down to the `FieldDefinitions`.
Updates the `DatasetCompiler` code that adds fields to `ClassFields` to assign proper names to the generated Dataset's fields.
2024-09-04 11:10:29 +02:00
kaisecheng
6e93b30c7f
Increase Jruby -Xmx to avoid OOM during zip task in DRA (#16408)
Fix: #16406
2024-08-28 11:10:21 +01:00
Mashhur
b2796afc92
Removing 8.14 from ci/branches.json as we have 8.15. (#16390) 2024-08-19 12:49:34 -07:00
Karen Metts
d4519711a6
Doc: Forwardport 8.15.0 release notes to main (#16388) 2024-08-14 09:00:37 -04:00
Mashhur
e104704830
Exclude substitution refinement on pipelines.yml (#16375)
* Exclude substitution refinement on pipelines.yml (applies to ENV vars and logstash.yml, where env2yaml saves vars)

* Safety integration test for a pipeline config.string that contains ENV vars.
2024-08-09 09:33:01 -07:00
Ry Biesemeyer
3d13ebe33e
deprecate java less-than 17 (#16370) 2024-08-09 08:58:11 +01:00
Karen Metts
2db2a224ed
Doc: Add SNMP integration to breaking changes (#16374) 2024-08-08 11:06:48 -04:00
Mashhur
62ef8a0847
[Bugfix] Resolve the array and char (single | double quote) escaped values of ${ENV} (#16365)
* Properly resolve the values from ENV vars if literal array string provided with ENV var.

* Docker acceptance test for persisting keys and using actual values in the docker container.

* Review suggestion.

Simplify the code by stripping whitespace before `gsub`; there is no need to check for commas and split.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-08-06 11:09:26 -07:00
Andrea Selva
09a2827802
Update logstash_releases.json (#16347) 2024-07-30 16:17:10 +01:00
João Duarte
03841cace3
Fix 8.13.1 release notes (#16363)
make a note of the fix that went to 8.13.1: #16026

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-07-30 09:19:13 +01:00
João Duarte
629d8fe5a8
add retries to snyk buildkite job (#16343) 2024-07-29 12:00:43 +01:00
João Duarte
90f303e401
fix line used to determine ES is up (#16349) 2024-07-24 16:48:42 +02:00
Ry Biesemeyer
c633ad2568
settings: add support for observing settings after post-process hooks (#16339)
Because logging configuration occurs after loading the `logstash.yml`
settings, deprecation logs from `LogStash::Settings::DeprecatedAlias#set` are
effectively emitted to a null logger and lost.

By re-emitting after the post-process hooks, we can ensure that they make
their way to the deprecation log. This change adds support for any setting
that responds to `Object#observe_post_process` to receive it after all
post-processing hooks have been executed.

Resolves: elastic/logstash#16332
2024-07-24 10:22:34 +01:00
Karen Metts
eff9b540df
Doc: Reposition worker-utilization in doc (#16335) 2024-07-19 12:34:42 -04:00
João Duarte
8f2dae618c
correctly handle stack overflow errors during pipeline compilation (#16323)
This commit improves error handling when pipelines that are too big hit the Xss limit and throw a StackOverflowError. Currently the exception is printed outside of the logger, and doesn't even show up when log.format is json, leaving the user to wonder what happened.

A couple of thoughts on the way this is implemented:

* There should be a first barrier to handle pipelines that are too large based on the PipelineIR compilation. The barrier would use the detection of Xss to determine how big a pipeline could be. This however doesn't reduce the need to still handle a StackOverflow if it happens.
* The catching of StackOverflowError could also be done on the WorkerLoop. However I'd suggest that this is unrelated to the Worker initialization itself; it just so happens that compiledPipeline.buildExecution is computed inside the WorkerLoop class for performance reasons. So I'd prefer logging to not come from the existing catch, but from a dedicated catch clause.

Solves #16320
2024-07-18 10:08:38 +01:00
João Duarte
c30aa1c7f5
regenerate webserver test certificates (#16331) 2024-07-17 10:43:57 +01:00
ev1yehor
e065088cd8
Add GH vault plugin bot to allowed list (#16301) 2024-07-16 14:38:56 +03:00
github-actions[bot]
758098cdcd
Release notes for 8.14.3 (#16312) (#16318)
* Update release notes for 8.14.3

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
(cherry picked from commit a60c7cb95e)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-07-11 18:12:35 +02:00
Andrea Selva
01b08c7640
Update logstash_releases.json after 8.14.3 (#16306) 2024-07-11 18:10:59 +02:00
Karen Metts
9c6550a0df
Doc: Update headers for plugins (LSR) (#16277) 2024-07-10 11:22:46 -04:00
Dimitrios Liappis
f728c44a0a
Remove Debian 10 from CI (#16300)
This commit removes Debian 10 (Buster) which is EOL
since July 1 2024[^1] from CI.

Relates https://github.com/elastic/ingest-dev/issues/2872
2024-07-10 15:17:10 +03:00
Ry Biesemeyer
66aeeeef83
Json normalization performance (#16313)
* licenses: allow elv2, standard abbreviation for Elastic License version 2

* json-dump: reduce unicode normalization cost

Since the underlying JrJackson now properly (and efficiently) encodes the
UTF-8 transcode of whichever strings it is given, we no longer need to
pre-normalize to UTF-8 in ruby _except_ when the string is flagged as BINARY
because we have alternate behaviour to preserve valid UTF-8 sequences.

By emitting a _copy_ of binary-flagged strings that have been re-flagged as
UTF-8, we allow the downstream (efficient) encoding operation in jrjackson
to produce equivalent behaviour at much lower cost.

* cleanup: remove orphan unicode normalizer
2024-07-09 14:12:21 -07:00
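The re-flagging trick described in the json-dump commit above can be shown in a few lines of plain Ruby (hypothetical helper name, not the Logstash code): a BINARY-flagged string whose bytes are in fact valid UTF-8 is emitted as a copy re-flagged as UTF-8, so the downstream encoder can handle it cheaply.

```ruby
# Return a UTF-8-flagged copy of a binary string when its bytes are valid
# UTF-8; otherwise return the original so alternate binary handling applies.
def prepare_for_json(str)
  return str unless str.encoding == Encoding::BINARY

  utf8_copy = str.dup.force_encoding(Encoding::UTF_8)
  utf8_copy.valid_encoding? ? utf8_copy : str
end
```

`force_encoding` only re-tags the bytes (no transcoding), which is why making a copy is cheap and the original stays untouched.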
kaisecheng
2404bad9a9
[CI] fix benchmark to pull snapshot version (#16308)
- fixes the CI benchmark script to always runs against the latest snapshot version
- uses `/v1/versions/$VERSION/builds/latest` to get the latest build id

Fixes: #16307

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-07-08 22:20:59 +01:00
Dimitrios Liappis
ea0c16870f
Add Ubuntu 24.04 to CI (#16299)
Now that we have custom VM images for Ubuntu 24.04, this commit adds
CI for Ubuntu 24.04.

This is a revert of #16279
2024-07-08 14:43:55 +03:00
Dimitrios Liappis
db06ec415a
Remove CentOS 7 from CI (#16293)
CentOS 7 is EOL since June 30 2024[^1]. All repositories and mirrors are
now unreachable.

This commit removes CentOS 7 from CI jobs using it.

Relates https://github.com/elastic/ingest-dev/issues/3520

[^1]: https://www.redhat.com/en/topics/linux/centos-linux-eol
2024-07-04 14:13:16 +03:00
Ry Biesemeyer
a63d8a831d
bump ci releases for 8.14.2 (#16287) 2024-07-04 10:08:22 +01:00
Ry Biesemeyer
b51b5392e1
Release notes for 8.14.2 (#16266) (#16286)
---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-07-04 09:11:46 +01:00
Ry Biesemeyer
e3271db946
add flow-informed tuning guidance (#16265)
* docs: sentence-case headings

* docs-style: one-line-per-sentence asciidoc convention

* docs: add flow-informed tuning guidance

* docs: clarify `pipeline.batch.delay`

* Apply suggestions from code review

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Update docs/static/performance-checklist.asciidoc

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-07-03 15:33:37 -07:00
João Duarte
9872159c71
bump version to 8.16.0 (#16281) 2024-07-03 13:08:59 +01:00
João Duarte
83506eabe7
add 8.15 and remove 8.13 from CI testing (#16282) 2024-07-03 13:08:50 +01:00
João Duarte
121b1c9632
update jruby to 9.4.8.0 (#16278)
https://www.jruby.org/2024/07/02/jruby-9-4-8-0.html

> Fixed a bug in the bytecode JIT causing patterns to execute incorrect branches. #8283, #8284
> jruby-openssl is updated to 0.15.0, with updated Bouncy Castle libraries to avoid CVEs in older versions.
> uri is updated to 0.12.2, mitigating CVE
> net-ftp is updated to 0.3.7 with restored functionality on JRuby.

Exhaustive test suite: https://buildkite.com/elastic/logstash-exhaustive-tests-pipeline/builds/580
2024-07-02 19:57:55 +01:00
João Duarte
a046d3f273
Revert "add ubuntu 24.04 to CI (#16263)" (#16279)
This reverts commit a0bcd61ad3.
2024-07-02 17:45:50 +01:00
João Duarte
a0bcd61ad3
add ubuntu 24.04 to CI (#16263) 2024-07-02 14:34:58 +01:00
Dimitrios Liappis
7080ec5427
Add retries to aarch64 CI pipeline (#16271)
Add retries in the aarch64 CI pipeline to reduce noise from transient
network failures.

Closes https://github.com/elastic/ingest-dev/issues/3510
2024-07-01 12:49:26 +03:00
Karen Metts
095733c409
Doc: Add ecs and datastream requirement for intg filter (#16268) 2024-06-28 19:25:39 -04:00
Edmo Vamerlatti Costa
784fa186c8
Ensure pipeline metrics are cleared on the pipeline shutdown (#16264)
This commit fixes the configuration reload process to clean up the pipeline's metric store, so it does not retain references to failed pipeline components.
2024-06-28 13:13:39 +02:00
Mashhur
0cfe6b0801
Add RubyEvent#dup support and unit test case to keep Json#dump(Event) safe. (#16255)
* Add RubyEvent#dup support and unit test case to keep Json#dump(Event) safe.


Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>

---------

Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
2024-06-27 13:08:56 -07:00
João Duarte
0e1d67eda9
produce wolfi docker image in ci (#16252) 2024-06-26 13:50:47 +01:00
Alex S
bc0b9556bd
Add quotes to fix path handling in pqcheck.bat (#16205) 2024-06-26 11:05:17 +01:00
Mashhur
e6682c94b9
Pin fileutils version to 1.7+ (#16250)
* Pin fileutils version to 1.7+

* Add fileutils license notice.
2024-06-25 12:14:09 -07:00
Ry Biesemeyer
0ec16ca398
Unicode pipeline and plugin ids (#15971)
* fix: restore support for unicode pipeline- and plugin-id's

JRuby's `Ruby#newSymbol(String)` throws an exception when provided a `String`
that contains characters outside of lower-ASCII because JRuby internals expect
"the incoming String to be one of our mangled ISO-8859-1 strings" as noted in
a comment on jruby/jruby#6217.

Instead, we use `Ruby#newString(String)` to create a new `RubyString` (which
works properly), and then rely on `RubyString#intern` to get our `RubySymbol`.

This fixes a regression introduced in the 8.7 series, in which pipeline ids
are consistently represented as ruby symbols in the metrics store, and ensures
a similar issue does not exist when specifying a plugin id that contains
characters above the lower-ASCII plane.

* fix: use properly-encoded RubySymbol in PipelineConfig

We cannot rely on `RubySymbol#toString` to produce a properly-encoded `String`
when the string contains characters above the lower-ASCII plane, because the
result is effectively a binary ruby-internal marshal of the bytes that only
holds when the symbol contains lower-ASCII.

Instead, we can use the internally-memoizing `RubySymbol#name` to get a
properly-encoded `RubyString`, and `RubyString#asJavaString()` to get a
properly-encoded java-`String`.

* fix: properly serialize unicode pipeline names in API output

Jackson's JSON serializer leaks the JRuby-internal byte structure of Symbols,
which only aligns with the byte-structure of the symbol's actual string when
that string is wholly-comprised of lower-ASCII characters.

By pre-converting Symbols to Strings, we ensure that the result is readable
and useful.

* spec: bypass monitoring specs for unicode pipeline ids when PQ enabled
2024-06-25 08:35:28 -07:00
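The fix above targets JRuby internals (`Ruby#newString` plus `RubyString#intern` instead of `Ruby#newSymbol`), which can't be reproduced without a JRuby runtime; as a rough illustration only, plain Ruby's `String#intern` shows the intended behaviour — a symbol built from a properly-encoded string round-trips characters above the lower-ASCII plane:

```ruby
# Illustrative sketch only, NOT the actual fix: in MRI, interning a
# UTF-8 string yields a symbol whose string form is unchanged.
pipeline_id = "管道-main"      # a hypothetical unicode pipeline id
sym = pipeline_id.intern       # build the symbol from an encoded String
puts sym.to_s == pipeline_id   # round-trips without byte mangling
```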
kaisecheng
440aa98e48
[CI] Benchmark pipeline (#16191)
Add a buildkite pipeline to run benchmarks.
The script benchmarks by running Filebeats (docker) -> Logstash (docker) -> ES Cloud.
Logstash metrics and benchmark results are sent to the same ES Cloud.
- Secrets store in vault `secret/ci/elastic-logstash/benchmark`
- Use flog (docker) to generate ~2GB logs
- Pull the snapshot docker image of the main branch every day
- Logstash runs two pipelines, main and node_stats
  - The main pipeline handles beats ingestion, sending data to the data stream `logs-generic-default`
    - It runs for all combinations. (pq + mq) x worker x batch size
    - Each test runs for ~7 minutes
  - The node_stats pipeline retrieves /_node/stats API every 30s and sends it to the data stream `metrics-nodestats-logstash`
- The script sends a summary of EPS and resource usage to index `benchmark_summary`

The buildkite pipeline accepts ENV variables to customize the test
| Variable Name   | Default Value       | Comment                                            |
|-----------------|---------------------|----------------------------------------------------|
| FB_VERSION      | 8.13.4              | docker tag                                         |
| LS_VERSION      |                     | docker tag                                         |
| LS_JAVA_OPTS    | -Xmx2g              | by default, Xmx is set to half of memory           |
| MULTIPLIERS     | 2,4,6               | determine the number of workers (cpu * multiplier) |
| BATCH_SIZES     | 125,1000            |                                                    |
| CPU             | 4                   | number of cpu for Logstash container               |
| MEM             | 4                   | number of GB for Logstash container                |
| QTYPE           | memory              | queue type to test -- persisted; memory; all       |
| FB_CNT          | 4                   | number of filebeats to use in benchmark            |

To check the result
- `vault read secret/ci/elastic-logstash/benchmark` to get the host and credentials
- `curl -u "$ES_USER:$ES_PW" "$ES_HOST/benchmark_summary/_search"`

Fixes: https://github.com/elastic/ingest-dev/issues/3377
2024-06-21 22:48:34 +01:00
Ry Biesemeyer
92909cb1c4
json: remove unnecessary dup/freeze in serialization (#16213) 2024-06-20 09:15:49 -07:00
github-actions[bot]
ca1403009c
Forwardport PR #16212 to main: Release notes for 8.14.1 (#16214)
* Release notes for 8.14.1 (#16212)

* Update release notes for 8.14.1

* Snip generated context

* Manually fill release notes for Elastic Integration filter

* Reword release notes from core to be user-centric

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
(cherry picked from commit f9d6b42a7e)

* add known-issue note to 8.14.0

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
2024-06-20 08:56:38 -07:00
Andrea Selva
321e407e53
Avoid logging file-not-found errors when DLQ segments are removed concurrently between writer and reader. (#16204)
* Rework the logic that deletes the eldest DLQ segments to be more resilient to file-not-found errors, and avoid logging warning messages about conditions the user can't do anything to resolve.

* Fixed the test case: when the path points to a file that doesn't exist, always rely on the path name comparator. Reworked the code for simplicity, removing the need for the tri-state variable
2024-06-20 08:52:19 -07:00
Andrea Selva
ed930f820d
Avoid mocking the value returned in global SETTINGS constant. (#16245)
This is a refactoring of a test fixture.
Avoid mocking the value returned in the global SETTINGS constant. Instead, use the local settings map instance used in subject creation.
2024-06-20 14:25:53 +02:00
ev1yehor
0d385a9611
Update pull-requests.json (#16220) 2024-06-20 13:52:35 +03:00
João Duarte
13a8c4f1ae
remove version pinning from rexml (#16224) 2024-06-19 13:27:15 +01:00
Ry Biesemeyer
801f0f441e
Geoip database management cache invalidation (#16222)
* geoip: failing specs demonstrating elastic/logstash#16221

* geoip: invalidate cached db state when receiving updates/expiries
2024-06-18 15:11:25 -07:00
João Duarte
1484614405
Wolfi-based image flavor (#16189)
* Add wolfi as an option to the build process
* Add docker acceptance tests for the wolfi image
* Change how tests are done on the java process, due to "ps -C" not being available on wolfi

replaces and closes https://github.com/elastic/logstash/pull/16116

Co-authored-by: Andres Rodriguez <andreserl@gmail.com>
2024-06-17 15:48:02 +01:00
Ry Biesemeyer
0f6fa5c8fb
p2p: adds opt-in pipeline bus with less synchronization (#16194)
* p2p: extract interface from v1 pipeline bus

* p2p: extract pipeline push to abstract

* p2p: add opt-in unblocked "v2" implementation

Adds a v2 implementation that does not synchronize on the sender so that
multiple workers can send events through a common `pipeline` output instance
simultaneously.

In this implementation, an `AddressStateMapping` provides synchronized
mutation and cleanup of the underlying `AddressState`, and allows only
queryable mutable views (`AddressState.ReadOnly`) to escape encapsulation.

The implementation also holds an identity-keyed mapping from `PipelineOutput`s
to the set of `AddressState.ReadOnly`s it is registered as a sender for, so
that they can be quickly resolved at runtime.

* p2p: more tests for pipeline restart behaviour

* p2p: make v2 pipeline bus the default
2024-06-17 07:35:54 -07:00
Andrea Selva
fab345881a
Introduce filesystem signalling from DLQ read to writer to update byte size metric accordingly when the reader uses clean_consumed (#16195)
Updates the DLQ reader to create a notification file (`.deleted_segment`) which signals when a segment is deleted as a consequence of `clean_consumed` being set. Updates the DLQ writer to have a filesystem watch so that it can receive the reader's signal and update the exposed metric, loading the size by listing FS segment occupation.
2024-06-17 14:27:39 +02:00
Mashhur
948a0edf1a
Logstash monitoring doc improvements. (#16208)
* Logstash monitoring doc improvements.

---------

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-06-13 09:08:08 -07:00
João Duarte
7f424c1f5d
Update logstash_releases.json to account for 7.17.22 (#16185)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-06-13 17:35:06 +02:00
Edmo Vamerlatti Costa
881f7605f1
Bump logstash-releases.json after 8.14.1 release (#16217) 2024-06-12 08:53:41 -07:00
Edmo Vamerlatti Costa
23221caddb
Pin rexml gem version to 3.2.6 (#16209)
This commit pinned the `rexml` gem version to `3.2.6`
2024-06-10 17:56:37 +02:00
Andrea Selva
efa83787a5
Revert PR #16050
The PR was created to skip resolving environment variable references in comments present in the “config.string” pipelines defined in the pipelines.yml file.
However, it introduced a bug where env var references in the values of settings like pipeline.batch.size or queue.max_bytes are no longer resolved.
For now we’ll revert this PR and create a fix that handles both problems.
2024-06-06 20:24:45 +01:00
github-actions[bot]
70d0f1d022
Release notes for 8.14.0 (#16155) (#16198)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 78fb379282)
2024-06-05 13:11:15 -04:00
Rob Bavey
14afa851de
Bump logstash-releases.json after 8.14.0 release (#16197) 2024-06-05 09:15:44 -04:00
Andrea Selva
5c7d416798
Update log4j rollover to configure time retention (#16179)
Updates the plain, json and pipeline appenders in the default config/log4j2.properties to define a delete rule, executed during the rollover strategy, which deletes compressed log archives older than 7 days.
Updates the documentation describing the logging configuration to explain how the rollover works, how to configure the strategy, and in particular how to set up a space-limitation condition on the rollover.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-06-05 09:56:35 +02:00
Edmo Vamerlatti Costa
d0606ff098
Bundle logstash-integration-snmp and remove input-snmp and input-snmptrap as default (#16180)
This commit bundles the logstash-integration-snmp plugin and removes logstash-input-snmp and logstash-input-snmptrap from the defaults (#16180)
2024-06-03 10:44:26 +02:00
Karen Metts
e2acb4d6bd
Add incomplete integration plugins to metadata.json as skip (#16174) 2024-05-31 11:44:58 -04:00
kaisecheng
1d4038b27f
Add initial buildkite pipeline for Benchmark (#16190)
skeleton pipeline for benchmark
2024-05-31 15:17:50 +01:00
João Duarte
2a7f059754
Upgrade jrjackson to 0.4.20 (#16153)
* Upgrade jrjackson to 0.4.19

* Update versions.yml
2024-05-22 08:56:09 -07:00
Ry Biesemeyer
ea930861ef
PQ: avoid blocking writer when precisely full (#16176)
* pq: avoid blocking writer when queue is precisely full

A PQ is considered full (and therefore needs to block before releasing the
writer) when its persisted size on disk _exceeds_ its `queue.max_bytes`
capacity.

This removes an edge-case preemptive block when the persisted size after
writing an event _meets_ its `queue.max_bytes` precisely AND its current
head page has insufficient room to also accept a hypothetical future event.

Fixes: elastic/logstash#16172

* docs: PQ `queue.max_bytes` cannot be less than `queue.page_capacity`
2024-05-22 08:23:18 -07:00
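The edge case above reduces to the "full" predicate on persisted size; a minimal sketch, not the actual PQ implementation (names and sizes are illustrative):

```ruby
# A PQ blocks the writer only when the persisted size strictly exceeds
# queue.max_bytes; a write that lands exactly on the limit no longer blocks.
def queue_full?(persisted_bytes, max_bytes)
  persisted_bytes > max_bytes
end

puts queue_full?(1024, 1024)  # precisely full: writer is not blocked
puts queue_full?(1025, 1024)  # over capacity: writer blocks
```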
Mashhur
d0bdc33fac
Regenerate dependencies report and add strscan. (#16169)
* Regenerate dependencies report and add strscan.

* Apply suggestions from code review

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Rename strscan notice file.

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-05-17 08:10:03 -07:00
Karen Metts
73fb21b4ac
Update plugins-metadata.json (#16137) 2024-05-17 10:17:20 -04:00
Mashhur
979d30d701
Handle non-unicode payload in Logstash. (#16072)
* A logic to handle non-unicode payload in Logstash.

* Well tested and code organized version of the logic.

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>

* Upgrade jrjackson to 0.4.20

* Code review: simplify the logic with a standard String#encode interface with replace option.

Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>

---------

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
2024-05-16 10:42:06 -07:00
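The review note above mentions the standard `String#encode` interface with the replace option; a minimal sketch of that approach, assuming the goal is a valid-UTF-8 copy of an arbitrary byte payload:

```ruby
# Replace invalid/unconvertible bytes with U+FFFD instead of raising.
raw  = "caf\xE9".b   # binary payload that is not valid UTF-8
safe = raw.encode("UTF-8", invalid: :replace, undef: :replace,
                  replace: "\uFFFD")
puts safe.valid_encoding?   # => true
```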
Luca Belluccini
53d9480176
[DOC] Remove reference to puppet LS module (#12356)
As the module has not been maintained since 2018 and was community supported, I would like to remove it from the documentation.
2024-05-16 13:36:52 -04:00
Mashhur
439fbebe2d
Update catalog-info.yaml to properly define dependencies type. (#16163) 2024-05-15 09:35:02 -07:00
Andrea Selva
977ef89a7d
Updated to 21.0.3+9 (#16055)
Updated bundled JDK to 21
2024-05-15 18:29:58 +02:00
Andres Rodriguez
f781578c3e
Update catalog-info.yaml (#16161)
Add lifecycle option
2024-05-14 10:45:43 -04:00
Mashhur
734405dcbe
Replace stack traces with logger in DSL. (#16159) 2024-05-13 13:45:56 -07:00
Andrea Selva
2eebfd8f0e
Adds section to describe intended usage of pipeline.buffer.type (#16083)
Adds section to describe the intended usage of and impact on memory sizing.

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-05-13 12:27:57 +02:00
Andrea Selva
8e8a5b08d1
Updated target and source Java definitions in build.gradle because deprecated and planned for removal in next major (#16156)
Updated target and source Java definitions in build.gradle because they are deprecated and planned for removal in the next major.
2024-05-10 09:14:25 +02:00
Andrea Selva
1e431b47c3
Update releases inventory after 8.13.4 (#16147) 2024-05-10 09:13:06 +02:00
Andres Rodriguez
21148f160e
Set correct pipeline ownership (#16149)
Set correct buildkite pipeline ownership.
2024-05-09 09:46:21 -04:00
Jonas L. B
0d6ba8d1bd
Allow comments in hashes and before EOF (#16058)
In the grammar definitions for hashes, `whitespace` was replaced with `cs` to allow either whitespace _or_ comments. 
Additionally, the grammar definition for comments was previously required to end with a newline, now it can end with a newline _or_ EOF, using the "not anything" treetop rule `!.`.

Co-authored-by: Jonas Lundholm Bertelsen <jonas.lundholm.bertelsen@beumer.com>
2024-05-08 14:07:26 +02:00
João Duarte
3068934c6f
upgrade java_input_example to 1.0.3 (#16152)
follow up to logstash-plugins/logstash-input-java_input_example@090142d
2024-05-08 13:05:42 +01:00
João Duarte
0d6117173f
update multiple dependencies (#16136)
This upgrades multiple java libraries:

* snakeyaml
* shadow
* gradle
* guava
* commons-io
* commons-logging
* commons-codec
* commons-compress
* commons-lang3
* commons-csv
* log4j
* google-java-format
* httpclient
* httpcore
* javassist
* jackson
* jackson-databind
* wiremock-standalone

Gems:

* rack
* sinatra
* octokit
* gems
* rake
* webmock

Also upgrades Java to 17.0.11+9.

Leftover upgrades:

* commons-csv 1.8 breaks license checker
* janino 3.1.12 breaks java tests
* log4j 2.21.0 breaks java compilation
2024-05-08 09:13:41 +01:00
github-actions[bot]
001fea6431
Release notes for 8.13.4 (#16144) (#16150)
Refined release notes for 8.13.4

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: andsel <selva.andre@gmail.com>
(cherry picked from commit 49a65324a8)
2024-05-07 18:07:30 -04:00
João Duarte
6dd2770058
remove unused root Dockerfiles (#16140) 2024-05-07 17:49:47 +01:00
Ry Biesemeyer
9e452d2e54
Update junit 4 13 (#16138)
* test-deps: update junit to latest 4.13

* test-deps: address deprecation of ExpectedException

* test-deps: use org.junit.Assert.assertThrows
2024-05-03 13:49:16 -07:00
João Duarte
973c2ba3aa
force ruby-maven-libs constraint to >= 3.9.6.1 (#16130) 2024-05-02 20:56:32 +01:00
kaisecheng
359a24a3ef
Release notes for 8.13.3 (#16108) (#16128)
* Update release notes for 8.13.3

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-05-02 15:16:56 +01:00
kaisecheng
bf8846d4ae
Update logstash_releases.json (#16123) 2024-05-02 11:45:06 +01:00
João Duarte
4350855e7b
upgrade jruby to 9.4.7.0 (#16125) 2024-05-02 09:59:04 +01:00
Andrea Selva
830733d758
Provide opt-in flag to avoid fields name clash when log format is json (#15969)
Adds a log.format.json.fix_duplicate_message_fields feature flag to rename clashing fields when the json logging format (log.format) is selected.
When two message fields clash in a structured log message, the second is renamed by attaching a _1 suffix to the field name.
By default the feature is disabled; the user must explicitly enable the behaviour.

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2024-04-17 16:37:05 +02:00
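The renaming described above (the second clashing field gets a `_1` suffix) can be sketched as follows; this is an illustration of the rule, not the shipped implementation:

```ruby
# Deduplicate clashing keys by suffixing repeats with _1, _2, ...
def dedupe_fields(pairs)
  seen = Hash.new(0)
  pairs.each_with_object({}) do |(k, v), out|
    n = seen[k]
    seen[k] += 1
    out[n.zero? ? k : "#{k}_#{n}"] = v
  end
end

dedupe_fields([["message", "from core"], ["message", "from event"]])
# => {"message"=>"from core", "message_1"=>"from event"}
```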
kaisecheng
6bf1f80a82
bump version to 8.15.0 (#16088) 2024-04-17 09:16:10 +01:00
kaisecheng
f4d33cb904
add 8.14 branch for ci (#16086) 2024-04-17 09:15:58 +01:00
Karen Metts
cb45cd28cc
Doc: Remove include statements for screenshots not rendering properly (#15981) 2024-04-16 17:24:58 -04:00
Rob Bavey
b9b9ad9395
Use dpkg --print-architecture in Dockerfile to ascertain architecture (#16076)
* Use `dpkg --print-architecture` in Dockerfile to ascertain architecture
2024-04-12 15:47:34 -04:00
Andrea Selva
cc1eca904e
Split LS_JAVA_OPTS content when it contains multiple options (#16079)
Bugfix to correctly parse Java options when the environment variable LS_JAVA_OPTS contains multiple definitions separated by a space character.

Adapts the parsing of the `LS_JAVA_OPTS` environment variable to split on spaces the various definitions it can contain.
2024-04-12 15:18:29 +02:00
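A minimal sketch of the splitting described above (the variable content is hypothetical):

```ruby
# Split the env var on spaces so each JVM option is parsed on its own.
ls_java_opts = "-Xms2g -Xmx2g -Dio.netty.noPreferDirect=true"
options = ls_java_opts.split(" ")
puts options.inspect
# => ["-Xms2g", "-Xmx2g", "-Dio.netty.noPreferDirect=true"]
```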
Mashhur
9483ee04c6
Fix the exception behavior when config.string contains ${VAR} in the comments. (#16050)
* Wipe out comment lines if the config contains comments.

* Remove the substitution-variable processing when loading the YAML; instead, align on the generic approach, which happens in LSCL during pipeline compilation.

* Update logstash-core/src/main/java/org/logstash/config/ir/PipelineConfig.java

Put the logging config back as it is being used with composed configs.
2024-04-11 07:32:28 -07:00
kaisecheng
84886ac4be
remove 8.12 from CI (#16077) 2024-04-11 12:56:29 +01:00
Andrea Selva
afa646fbcb
Introduce a new setting to give preference to Java heap or direct space buffer allocation type (#16054)
Introduce a new setting named `pipeline.buffer.type`, which can be set to `direct` or `heap` to enable allocation on the Java heap.
The processing of the setting is done in `LogStash::Runner#execute` and sets the Java property considered by Netty to disable direct allocation: `io.netty.noPreferDirect`.
However, if that system property is already configured explicitly by the user (because it is set in `jvm.options` or `LS_JAVA_OPTS`), the setting does not take effect and a warning is logged, respecting the user's choice.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-04-10 15:23:47 +02:00
github-actions[bot]
2ec54b42e2
Release notes for 8.13.2 (#16068) (#16071)
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
(cherry picked from commit 2bef137dc4)
2024-04-08 12:23:34 -04:00
Mashhur
4a379be6d5
Fix the git branch check for snyk bk jobs (#16062)
* Replace 'git show-ref' with 'git rev-parse' to fix the issue where show-ref is not working as expected.
* Use git checkout instead of 'git rev-parse'.
* Apply prune dependencies recommended for big projects (like we have multi gradle projects) by Snyk.
* Apply prune repeated dependency option directly to snyk monitor.
* Avoid the exit, continue scanning to the end.
* Remove the debugging.
2024-04-08 11:34:26 +01:00
Rob Bavey
e5b2b3d92b
Update Dockerfile to select appropriate architecture on build box (#16053)
This commit adds logic to copy the appropriate env2yaml file to the Docker image
* Clean up env2yaml folder

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-04-04 18:15:54 -04:00
Andrea Selva
6eb99437e4
Update releases inventory after 8.13.1 (#16048) 2024-04-04 10:39:17 +02:00
Andrea Selva
6a04854e4c
JDK 21 move (#15719)
Adaptations to run Logstash on JDK 21:

- Java 8 support is obsolete and will be removed.
- Thread's `getId` (not final) replaced by final `threadId` https://bugs.openjdk.org/browse/JDK-8017617
- Verify the warnings "this-escape" when a constructor use other method or pass around `this` reference to other methods https://bugs.openjdk.org/browse/JDK-8015831
- URL constructor is deprecated, use `<uri_instance>.toURL()` (since JDK 20)
-  Manages new (since JDK 20) `G1 Concurrent GC` MX Bean, [ref](https://github.com/elastic/logstash/pull/15719#issuecomment-1946367785)
2024-04-03 17:08:12 +02:00
github-actions[bot]
327fbe134a
Release notes for 8.13.1 (#16046) (#16049)
Forward port release notes for 8.13.1 on main
2024-04-02 08:59:18 +02:00
Mashhur
dd1f6dd160
Fix integration tests caused by #16026. (#16038) 2024-03-28 09:08:00 -07:00
João Duarte
f6940f5a34
Update critical_vulnerability_scan.yml (#16039)
Change trigger from pull_request_target to pull_request, as the former uses the base branch instead of the PR source code.
This allows simplification of the checkout action (also took the opportunity to bump from v2 to v4).
2024-03-28 15:03:07 +00:00
kaisecheng
a7779664af
Modernize ironbank Dockerfile (#16022)
- remove golang assets (go1.17.8.linux-amd64.tar.gz)
- remove yaml lib assets (v2.3.0.tar.gz)
- use go container to build env2yaml
- remove unnecessary layers
- remove HEALTHCHECK
- switch yum to dnf

Fixes: elastic/ingest-dev#3008
2024-03-28 11:15:01 +00:00
Mashhur
e429795039
Save names that come through ENV vars to let Logstash decide whether to use the value from the keystore or ENV. (#16026)
* Save names that came through ENV vars to let Logstash decide whether to use the keystore or ENV value.

* Apply suggestions from code review to simplify array declaration.

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

---------

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-03-27 09:22:27 -07:00
João Duarte
e8597cbb77
Introduce vulnerability scanner that fails PRs on critical vulnerabilities (#16028)
This github action leverages https://github.com/anchore/grype to scan a tarball artifact
2024-03-27 16:04:50 +00:00
João Duarte
96e4838e43
Fix exclusion of old ruby-maven-libs 3.3.9 jars (#16032) 2024-03-27 14:40:03 +00:00
Karen Metts
36b7cb8ab3
Doc: Remove beta tag from integrations filter doc (#16021) 2024-03-26 12:22:08 -04:00
Andrea Selva
c4bd54de58
Update releases inventory after 7.17.19 (#16024) 2024-03-26 15:53:34 +01:00
kaisecheng
716c33f4b5
update ci release for 8.13.0 (#16030) 2024-03-26 10:53:44 +00:00
github-actions[bot]
bae6210c21
Release notes for 8.13.0 (#15978) (#16027)
Co-authored-by: Kaise Cheng <kaise.cheng@elastic.co>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Co-authored-by: Mashhur <mashhur.sattorov@elastic.co>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 270c8ca110)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-03-26 09:22:00 +00:00
João Duarte
59bd376360
upgrade ruby-maven-libs to 3.8.9 (#15894)
Given that JRuby comes with ruby-maven-libs 3.3.9 this commit upgrades the gem to 3.8.9 and ensures files from 3.3.9 are not included in the distribution.
2024-03-18 14:30:13 +01:00
carrychair
d1e624b81c
remove repetitions of "the" word (#15987)
Signed-off-by: carrychair <linghuchong404@gmail.com>
2024-03-17 10:53:59 +00:00
Karen Metts
92b20bc184
Doc: Remove extra + to force page regen (#16000) 2024-03-12 11:27:07 -04:00
Rob Bavey
32052a263f
Remove curl installation step, as curl-minimal is already provided in ubi9 (#15998) 2024-03-12 09:50:52 -04:00
João Duarte
43614ede50
ensure order of jvm options from file and env vars is respected (#15997)
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-03-12 09:18:35 +00:00
Ry Biesemeyer
483059e378
geoip: avoid crash cleaning non-existing managed dbs (#15986)
* geoip: avoid crash cleaning non-existing managed dbs

* Update x-pack/spec/geoip_database_management/manager_spec.rb

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-03-11 11:56:51 -07:00
Dimitrios Liappis
c0c213d17e
Split java/ruby unit test steps on Windows (#15888)
As a follow up to #15861 this commit splits the current unit tests step
for the Windows JDK matrix pipeline to two that run
Java and Ruby unit tests separately.

Closes https://github.com/elastic/logstash/issues/15566
2024-03-11 09:27:11 +02:00
Rob Bavey
b640e7e851
Add arm64 support for env2yaml (#15980)
* Add arm64 support for env2yaml

This commit builds env2yaml in arm64 and amd64 flavors, and uses
$TARGETARCH in the Dockerfile to ensure that the correct version is used
when building for alternative architectures

Fixes: #15913

* Add env2yaml executables to build context

* Split `COPY_FILES` for readability

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-03-08 10:47:01 -05:00
Mashhur
dec53ba325
Imap plugin is no longer a default plugin. (#15425) 2024-03-08 07:25:46 -08:00
Mashhur
19637143e6
Fix the Bootstrap check test failure on Windows. (#15975) 2024-03-06 10:10:35 -08:00
github-actions[bot]
3070a637cf
Release notes for 8.12.2 (#15956) (#15972)
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
(cherry picked from commit 2c89bcfc7b)
2024-02-23 08:30:20 -05:00
Andrea Selva
834c779c5a
Remove dlq writer dead code add some logs (#15965)
Remove unused method createFlushScheduler and add some debug logs to the finalizeSegment method to better follow what happens during its execution in case of problems.
2024-02-22 14:06:53 +01:00
Andrea Selva
18bbb3156c
Add shutdown step of DLQ flusher scheduled service (#15964)
This PR adds a shutdown method to the SchedulerService class used to handle actions to be executed on a certain cadence. In particular is used to execute scheduled finalization of DLQ head segment.
Updates the close method of the DLQ writer to invoke this additional shutdown on the service instance.
2024-02-22 12:33:23 +01:00
Andrea Selva
ff37e1e0d3
Fix failing DLQ test due to time scheduling (#15960)
Adds a time-burning condition to avoid a time collision which, under certain circumstances, would fail the test.
The sealing of a segment happens if the segment is considered as stale, which requires 2 conditions:

- the segment must have received a write.
- the time of the last write must exceed the flush interval.

In this failing test, the flush interval is set to ZERO because of the synchronicity of the test, to avoid time dependency. However, with coarse-grained timer resolution, it could happen that the last write coincides with the time of the stale check, failing the seal condition.
2024-02-22 11:28:31 +01:00
João Duarte
14dc3d24a4
bump jruby to 9.4.6.0 and remove ffi workaround (#15961) 2024-02-22 09:47:42 +00:00
Andrea Selva
c30fe46c80
Added :jvm-options-parser subproject to the javaTests task (#15957)
Adds a global new task named javaTests which groups both :logstash-core:javaTests and :jvm-options-parser:test to include the JvmOptionParser unit test in javaTests phase.
2024-02-19 16:22:03 +01:00
Pavel Zorin
e5ca8601ba
[CI] [Sonar] Explicitly specified test sources for sonar (#15954)
* [CI] [Sonar] Explicitly specified test sources for sonar

* [CI] removed wildcards from sonar.tests
2024-02-15 14:10:54 +01:00
Dimitrios Liappis
54f73e5d22
Follow up to #15900 -- fix remaining acceptance tests (#15907)
PR #15900 missed a few more places where Logstash is installed without a
working minimal pipeline config being added.
This commit fixes that and stabilizes all acceptance tests, thus
minimizing the need for time-consuming BK retries of the corresponding
steps.

Relates #15900
Relates https://github.com/elastic/logstash/issues/15784
2024-02-15 11:33:17 +02:00
Dimitrios Liappis
eedccea33f
Fix packaging service check failures (#15946)
This commit tightens the checks for the status
output of the Logstash OS service to specifically
scan for `org.logstash.Logstash` rather than
only the jdk path.

The reason is that the startup script first runs
an options parser, and then the logstash process
itself, both referencing the JDK path.

Closes https://github.com/elastic/ingest-dev/issues/2950
2024-02-15 10:01:47 +02:00
Mashhur
3c9db658bc
Package elastic_integration plugin. (#15769)
* Exclude plugins feature in OSS distributions.

* Set elastic_integration plugin default.

* Remove non-OSS plugins after installing default plugins.

* Testing local can't find gem bundler (= 2.3.26) issue.

* Include the extract-non-OSS-plugins logic in docker build operations.

* Only default plugins can be excluded from OSS distros.

* Simplification: instead of a conditional check, use an intersection to build the OSS excluded-plugin list.

* Gem and specification files still remain after removing the plugin. This change removes the leftover files.

* Rename oss-exclude to skip-oss to align namings with other params.

* Make intersection method simpler.

* [Test] Temporary excluding elastic integration plugin from default plugin list.

* Sets the elastic_integration plugin back as a default. When removing locally installed gems, Gem::Specification doesn't recognize the gem. We have Bundle::setup in the removal logic but it causes an issue when we re-use the bundle.

* Test the build order; the remove-plugin-from-cache logic seems invalid since we don't pack the cache.
2024-02-14 07:10:42 -08:00
kaisecheng
22a10b5b92
bump version to 8.14.0 (#15935) 2024-02-14 09:18:48 +00:00
kaisecheng
34ebdc120f
add 8.13 branch (#15938) 2024-02-13 17:39:10 +00:00
Andres Rodriguez
8eb08e1382
Add Alma 8, Alma 9, and Rocky Linux 9 to the JDK matrix (#15941) 2024-02-13 11:01:21 -05:00
Ry Biesemeyer
dc8b5b2c86
ci: provide metadata about releases that are in-flight (#15924)
Our shared CI infrastructure relies on this file being here with a specific
format, expanding a given `ELASTIC_STACK_VERSION` environment variable and
an optional `SNAPSHOT` environment variable into the information necessary
to target either the most recent published release or a snapshot representing
a candidate for the _next_ possible release in the series.

When we cut a new minor branch from `main` (at feature freeze), the tip of
that branch contains unreleased code that is in-flight for the next release.
We need to have a way to reference artifacts from this in-flight release,
instead of skipping over them while a minor release is in flight.

This change affects the _semantics_ of what is tested during a minor stack
release's feature-freeze period, ensuring that automated tests throughout
the Logstash ecosystem continue to run against the pending release with NO
changes required in those plugins.

It also introduces a new `snapshots.main` entry, which refers to snapshot
artifacts built from the un-feature-frozen `main` branch, so that plugins
can _optionally_ test against the _next_ minor in advance of its feature
freeze, as they currently do unintentionally in builds that target
 `ELASTIC_STACK_VERSION=8.x SNAPSHOT=true`.
2024-02-12 07:04:06 -08:00
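The expansion this commit describes can be sketched roughly as follows; the file layout and key names (`releases`, `snapshots`, `8.current`, `main`) are illustrative assumptions, not the file's exact schema:

```python
# Illustrative sketch of how CI might expand ELASTIC_STACK_VERSION (plus an
# optional SNAPSHOT flag) into a concrete artifact version, using a
# releases-metadata file. The key names and versions below are assumptions.
import json

RELEASES_JSON = """
{
  "releases":  {"7.current": "7.17.16", "8.current": "8.11.3"},
  "snapshots": {"7.current": "7.17.17-SNAPSHOT",
                "8.next":    "8.12.0-SNAPSHOT",
                "main":      "8.13.0-SNAPSHOT"}
}
"""
metadata = json.loads(RELEASES_JSON)

def resolve(stack_version: str, snapshot: bool) -> str:
    """Expand an alias such as '8.current' or 'main' to a concrete version."""
    table = metadata["snapshots"] if snapshot else metadata["releases"]
    return table[stack_version]

print(resolve("8.current", snapshot=False))  # -> 8.11.3
print(resolve("main", snapshot=True))        # -> 8.13.0-SNAPSHOT
```

With a table like this, an in-flight minor release only needs a new `snapshots` entry to become targetable by plugin builds, with no changes in the plugins themselves.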
Dimitrios Liappis
be3f75e346
Clean up left over scripts after CI migration (#15926)
Following the CI migration from Jenkins to Buildkite, this commit
removes a number of left over helper scripts that aren't needed
anymore.

Closes https://github.com/elastic/ingest-dev/issues/2850
2024-02-12 11:26:28 +02:00
kaisecheng
4e6815b1ea
add openssl to ubi image (#15929)
Logstash on ECK requires openssl command to build TLS keystore.
This commit adds `microdnf install -y openssl` to ensure the command exists in ubi image.
2024-02-09 18:40:02 +00:00
João Duarte
de01eb6ee3
Remove jinja2 in favor of erb templates (#15142)
This commit removes the jinja2 templates and consequently the dependency on Python
2024-02-09 13:49:58 +00:00
Andrea Selva
52ce3ff8f6
Set Netty's maxOrder options to previous default (#15925)
Updates Netty's configuration of maxOrder to a previously proven value, if not already customised by the user.

Adds a step to the JvmOption parsing tool, which is used to compose the JVM options string to pass down to Logstash at startup.
The added step reworks the parsed options to set the allocator max order (-Dio.netty.allocator.maxOrder=11) so that the maximum pooled buffer is up to 16MB rather than 4MB.
This option is added only if it's not yet specified by the user
2024-02-09 13:49:23 +01:00
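The arithmetic behind the 16MB figure above can be checked quickly; the 8 KiB page size is an assumption based on Netty's documented pooled-allocator default:

```python
# Netty's pooled allocator sizes its chunks (the largest pooled buffers) as
# pageSize << maxOrder. The 8 KiB page size is Netty's default.
PAGE_SIZE = 8 * 1024  # bytes

def max_chunk_size(max_order: int) -> int:
    return PAGE_SIZE << max_order

assert max_chunk_size(11) == 16 * 1024 * 1024  # maxOrder=11 -> 16 MiB chunks
assert max_chunk_size(9) == 4 * 1024 * 1024    # maxOrder=9 would cap chunks at 4 MiB
```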
João Duarte
5c3e64d591
introduce go.mod for env2yaml (#15921)
Update the env2yaml to have a go.mod instead of relying on disabling go modules, otherwise building with golang 1.22 will fail in the future.
This change also directly uses the golang image to build the binary removing the need for an intermediate image.
2024-02-08 18:12:11 +00:00
Nassim Kammah
db193f56c4
Update docs-preview link (#15918)
Following the migration from Jenkins to Buildkite, docs previews are now available at <repo>_bk_<PR>.

More context in https://github.com/elastic/docs/pull/2898
2024-02-08 17:20:58 +01:00
Andrea Selva
4e98aa8117
Fix the latest 8.12 version to the actual latest released (#15915)
* Fix the latest 8.12 version to the actual latest released
* Update version for 7.17


Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-02-07 18:02:51 +01:00
Mashhur
c75a2d84b1
Update CI release file after releases. (#15911) 2024-02-07 06:41:02 -08:00
Ry Biesemeyer
38e8c5d3f9
flow_metrics: pull worker_utilization up to pipeline-level (#15912) 2024-02-06 11:50:34 -08:00
github-actions[bot]
529ed2eddc
Release notes for 8.12.1 (#15885) (#15910)
* Update release notes for 8.12.1

* Revise 8.12.1 release notes.

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Mashhur <mashhur.sattorov@elastic.co>
(cherry picked from commit 1b8648b365)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-02-06 10:44:34 -08:00
Dimitrios Liappis
b66dc7f460
Fix service startup with acceptance tests (#15900)
This commit fixes the startup of the Logstash service during packaging
tests by adding a minimal pipeline config. Without it, the service was
flapping from start to start and vice versa causing test flakiness.

Relates https://github.com/elastic/logstash/issues/15784
2024-02-06 17:38:12 +02:00
Dimitrios Liappis
2fc3f4c21f
Add retries to acceptance/docker steps in BK (#15901)
Similarly to #15874, this commit adds retries
to another group, the acceptance/docker to reduce
build noise from transient issues.
2024-02-06 15:10:13 +02:00
Dimitrios Liappis
fedcf58c48
Add Debian 12 to CI (#15895)
This commit adds Debian 12 (Bookworm) to the
Linux JDK matrix pipeline and Compat Phase of the
exhaustive pipeline respectively.

Relates https://github.com/elastic/ingest-dev/issues/2871
2024-02-05 18:49:30 +02:00
Dimitrios Liappis
8ac55184b8
Allow running Java+Ruby tests on Windows separately (#15861)
This commit allows separate running of Java and Ruby tests on Windows i.e. the same way as we currently do on unix (unit_tests.sh) via a cli argument.
If no argument has been supplied, both tests are run (as it does now).

The wrapper script is also rewritten from an old batch-style script to PowerShell.

This work allows us to split the existing Windows CI job in a subsequent PR to separate steps, as we currently do on Linux.

Relates: https://github.com/elastic/logstash/issues/15566
2024-02-01 10:04:25 +02:00
Karen Metts
d2bdc05f99
Doc: Update plugin headers to deeplink into matrix (#15870)
Co-authored by: Alessandro Stoltenberg <alessandro.stoltenberg@elastic.co>
2024-01-30 13:35:16 -05:00
Dimitrios Liappis
3b747d86b8
Add retries to JDK matrix pipeline steps (#15877)
This commit adds retries to the steps of the Linux + Windows JDK matrix
pipeline steps to avoid notification noise due to transient network
errors.
2024-01-30 18:02:57 +02:00
Dimitrios Liappis
88a32cca81
Add BK retries to exhaustive/compat steps (#15874)
As a follow up to #15787 we also add Buildkite retries for the
exhaustive pipeline / compatibility group steps to prevent
failures due to flakiness.
2024-01-30 14:33:33 +02:00
Dimitrios Liappis
0f9b6f9e87
Fix typo with aarch64 pipeline name to trigger (#15873)
This commit fixes a bug introduced in PR#15850 and uses the correct
pipeline name when triggering the aarch64 pipeline on schedule.
2024-01-30 10:10:01 +02:00
Dimitrios Liappis
70c3fcdcbe
[ci] Skip slack alerts if retried tasks succeeded (#15869)
Currently for pipelines that have steps with auto retries, if there
is a failure of a step, even if the subsequent retry succeeds, we
received Slack alerts.
This commits uses new functionality available in the Buildkite PR Bot
to avoid Slack notifications in those cases and changes the settings
for all pipelines where we have, or expect to have, retries.
2024-01-29 13:52:00 +02:00
João Duarte
905ddd267b
add base64 notice and update generated NOTICE.txt (#15867) 2024-01-29 11:08:50 +00:00
Mashhur
eb9859276e
Update stack versions for CI. (#15864) 2024-01-28 21:02:30 -08:00
Dimitrios Liappis
0563a13665
Fix typo in catalog-info with exhaustive schema (#15858) 2024-01-28 19:40:48 +02:00
Dimitrios Liappis
5ee75803f8
Scheduled runs of exhaustive and aarch64 pipelines (#15850)
This commit adds a schedule to run the exhaustive pipeline
(biweekly, every other Wednesday @2AM UTC) and the aarch64
(weekly, every Monday@2AM UTC).

Closes https://github.com/elastic/ingest-dev/issues/2852
2024-01-28 19:30:02 +02:00
Mashhur
7063365739
Revert "[docs] Simplify and improve Logstash monitoring docs. (#15847)" (#15862)
This reverts commit 9317052088.
2024-01-26 14:18:41 -08:00
Andrea Selva
8657ce9d21
Re-enabled collectd codec in test_plugins script (#15857) 2024-01-26 17:31:31 +01:00
Dimitrios Liappis
3f5b44a1ad
Remove Ubuntu 18.04 from CI jobs (#15855)
Relates https://github.com/elastic/ingest-dev/issues/2849
2024-01-26 17:14:41 +02:00
Mashhur
9317052088
[docs] Simplify and imrove Logstash monitoring docs. (#15847) 2024-01-25 17:28:36 -08:00
Karen Metts
cefd553a7d
Doc: Update extending integrations docs to point to plugin (#15747) 2024-01-25 11:02:47 -05:00
Dimitrios Liappis
0d808ed708
Retries for serverless-integration-testing pipeline (#15851)
This commit adds (up to 3) retries for all steps of the `serverless-integration-testing`
pipeline as a stop-gap measure to prevent network related transient failures.
2024-01-25 17:24:00 +02:00
Andrea Selva
e4ef87923d
Describe how to troubleshoot connectivity problems in Azure Blob Storage at HTTP level (#15835)
* Describe how to troubleshoot connectivity problems at HTTP level with Azure Blob Storage

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-01-24 09:46:41 +01:00
Andrea Selva
1395d953b2
Updates bundled JDK (#15840)
* Updates bundled JDK

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-01-24 09:46:18 +01:00
Dimitrios Liappis
c33afd4cd0
Mute DLQ test on Windows (#15843)
This commit mutes the DLQ test:
`testDLQWriterFlusherRemovesExpiredSegmentWhenCurrentHeadSegmentIsEmpty`
when running on Windows.

Closes https://github.com/elastic/logstash/issues/15768
2024-01-24 09:53:56 +02:00
Dimitrios Liappis
d74fea4b55
Fix IT tests after version bumps (#15827)
This commit fixes IT failures that frequently occur after
version bumps due to missing unified release snapshot builds for
the new version.

This commit uses project specific DRA snapshot URLs for ES and Filebeat
in all cases apart from release builds.

Closes #2825
2024-01-23 15:13:51 +02:00
Dimitrios Liappis
15e19a96c2
Fix acceptance/packaging upgrade test near a release (#15826)
The current mechanism of discovering the latest released version per
branch (via ARTIFACTS_API) isn't foolproof near the time of a new
release, as it may pick a version that hasn't been released
yet. This leads to failures[^1] of the packaging upgrade tests, as we
attempt to download a package file that doesn't exist yet.

This commit switches to an API that is more up to date regarding
the release version truth.

[^1]: https://buildkite.com/elastic/logstash-exhaustive-tests-pipeline/builds/125#018d319b-9a33-4306-b7f2-5b41937a8881/1033-1125
2024-01-22 20:58:32 +02:00
Dimitrios Liappis
c5cb1fe2ed
Annotate successful DRA builds with summary URL (#15820)
This commit makes the generated DRA URL easily accessible via
a Buildkite annotation.

Closes https://github.com/elastic/ingest-dev/issues/2608
2024-01-22 16:37:18 +02:00
Dimitrios Liappis
52d9e6286e
Update active branches used by CI (#15816)
Relates https://github.com/elastic/logstash/pull/15814
2024-01-18 15:35:18 +02:00
Nassim Kammah
9256de43c3
Remove Nassim Kammah from list of maintainers (#15709) 2024-01-18 14:05:25 +01:00
Dimitrios Liappis
fc09ad4112
Fix flaky logstash-plugin IT test (#15803)
This commit fixes the flaky IT test:
`install non bundle plugin successfully installs the plugin with debug enabled`
by being a bit more lenient with the output which can get garbled by Bundler.

Closes #15801
2024-01-18 14:59:08 +02:00
Pavel Zorin
2c83a52380
[CI] Send Java and ruby tests to sonarqube simultaneously (#15810)
* Ruby code coverage with SimpleCov json formatter

* [CI] Send Java and ruby tests to sonarqube simultaneously

* Enabled COVERAGE for ruby tests

* Enabled COVERAGE for ruby tests

* Enabled COVERAGE for ruby tests

* Enabled COVERAGE for ruby tests

* Enabled COVERAGE for ruby tests

* Added compiled classes to artifacts

* Test change

* Removed test changes

* Returned back ENABLE_SONARQUBE condition

* Removed debug line

* Disable Ruby coverage if ENABLE_SONARQUBE is not true

* Run sonar scan on pull requests and on push to main

* Run sonar scan on release branches
2024-01-17 19:04:37 +00:00
github-actions[bot]
20b298e350
Release notes for 8.12.0 (#15804) (#15812)
---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit d5d363208b)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-01-17 13:55:42 -05:00
Dimitrios Liappis
16c4d8827e
Enable packaging tests in exhaustive pipeline (#15781)
This commit adds the packaging tests (that were refactored in #15696
to not rely on Vagrant) in a new "acceptance" group.

Relates: https://github.com/elastic/ingest-dev/issues/1722
2024-01-12 17:52:06 +02:00
kaisecheng
081d8fc15c
[Doc] kerberos debug instructions (#15779)
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-01-12 13:11:12 +00:00
Dimitrios Liappis
fca1fccb66
Add Docker acceptance to exhaustive BK pipeline (#15748)
This commit adds the Docker acceptance tests in the acceptance phase
of the exhaustive tests pipeline.

- Relates: https://github.com/elastic/ingest-dev/issues/1722
2024-01-12 09:02:45 +02:00
Karen Metts
968fb24450
Doc: Add monitoring for serverless (#15636) 2024-01-11 15:42:38 -05:00
Dimitrios Liappis
a0e3b35c5f
Don't trigger exhaustive tests with backports (#15791)
Having enabled branch pushes as triggers for the exhaustive test
pipeline, we have an unwanted side effect: when using
PR automation (@logstashmachine) to create backports, the temporary
branches also triggered exhaustive tests.

This commit skips triggering this pipeline when upstream branches
starting with `backport` get pushed.
2024-01-11 17:12:47 +02:00
Andrea Selva
4a2104b1a4
Update releases inventory after 8.11.4 (#15783) 2024-01-11 15:54:41 +01:00
Dimitrios Liappis
739e8a3ef0
Add retries to PR pipeline steps (#15787)
There is occasional flakiness mainly with IT tests requiring us to
manually retry such failures when we raise PR (or the first
group of the exhaustive suite, which runs the same steps).

This commit adds up to 3 retries for all the steps of the PR
pipeline.
2024-01-11 15:48:45 +02:00
github-actions[bot]
5b98f7d63d
Release notes for 8.11.4 (#15753) (#15777)
* Update release notes for 8.11.4

(cherry picked from commit 93e786fd0f)
2024-01-10 11:53:09 +01:00
Andrea Selva
50a589493c
Bump Puma lower version constraint to >= 6.4.2 (#15773) 2024-01-10 09:52:56 +00:00
Dimitrios Liappis
c8726b79f3
Run BK exhaustive pipeline when code is pushed (#15738)
This commit enables running the exhaustive tests Buildkite pipeline
(i.e. the equivalent of the `main` Jenkins tests); the trigger is
code events, i.e. direct pushes, merge commits and creation of new branches.

CI is skipped if changes are only related to files under `docs/`.
2024-01-10 10:18:19 +02:00
Edmo Vamerlatti Costa
a21ced0946
Add system properties to configure Jackson's stream read constraints (#15720)
This commit added a few jvm.options properties to configure the Jackson read constraints defaults (Maximum Number value length, Maximum String value length, and Maximum Nesting depth).
2024-01-08 17:48:11 +01:00
Dimitrios Liappis
9f1d55c6a2
Pin childprocess gem to major version 4 (#15758)
This commit pins the `childprocess` gem to major version `4`, since version `5.0.0` (via https://github.com/enkessler/childprocess/pull/175) seems to have broken JRuby support for spawning.

Closes https://github.com/elastic/logstash/issues/15757

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-01-08 18:08:46 +02:00
Dimitrios Liappis
cebe4a7537
Refactor qa/acceptance tests to get away from vagrant (#15696)
This commit modernizes the qa/acceptance (packaging) test framework by
moving away from Vagrant and having the tests operate locally.

As we are migrating to Buildkite, the expectation is that those tests
will run on dedicated vms thus removing the necessity of vagrant.

Relates: https://github.com/elastic/ingest-dev/issues/1722
2024-01-08 09:40:58 +02:00
Mashhur
73c87a4151
Add elastic_integration plugin to the plugin meta list. (#15692)
* Add elastic_integration plugin as a non-default plugin. This will be enough to pick it up in the doc generation.
2024-01-04 09:39:55 -08:00
Andrea Selva
76d1677187
Fixes a typo in plugin's organization name (#15743)
Fixes a typo in string definition for Logstash's plugins organization.
2024-01-04 17:11:55 +01:00
Dimitrios Liappis
286088915f
JUnit result annotation for Buildkite PR jobs (#15741)
This commit adds annotations for Java unit tests (in the pull request pipeline) helping
identify failing unit tests quickly.
2024-01-04 17:21:07 +02:00
Andrea Selva
6dc8423186
Add filter elastic_integration to be tested in test_plugins script (#15723)
Add the `logstash-filter-elasticsearch_integration` plugin to the test_plugins pipeline list.
In doing so, reworked the definition of plugins: not just specifying a plugin name but also adding support level, type, and organization. This is necessary for plugins whose repository is not part of the `logstash-plugins` GitHub organization, and at the same time permits easier management of the different axes that partition the plugin landscape.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-01-04 16:00:45 +01:00
Edmo Vamerlatti Costa
41ec183f09
Fix logstash-keystore multiple keys operations with command flags (#15737)
This commit fixes how the keystore tool handle the command's options, including validation for unknown options, and adding the --stdin flag to the add command.
2024-01-03 18:57:57 +01:00
Andrea Selva
2be661b9bc
Re-enable json filter and netflow codec plugins (#15722)
- Re-enabled logstash-filter-json and logstash-codec-netflow plugins into the test_plugins BK pipeline.
- Moved tier2 codecs to be executed in a separate VM.
- Commented out the logstash-codec-collectd plugin until it is fixed
2024-01-02 17:20:28 +01:00
Andrea Selva
f643e8606d
Update to gradle 8.5 (#15707)
Update the Gradle wrapper to version 8.5
2024-01-02 13:28:34 +01:00
Dimitrios Liappis
a8b64a32e9
Skip input step when triggering JDK matrix jobs (#15733)
PR#15729 missed the input step. As a result, when the job is triggered
the steps are executed, but the pause icon still shows in the job,
requiring a manual unblock.[^1]

This commit also skips the input step when the job is triggered from
the scheduler pipeline.

[^1]: https://buildkite.com/elastic/logstash-linux-jdk-matrix-pipeline/builds/86
2024-01-02 13:06:03 +02:00
Dimitrios Liappis
82ac474b13
Don't block triggered JDK matrix Buildkite jobs (#15729)
The recent PRs #15668 and #15705 refactored jobs with a custom schedule
to leverage a centralized trigger pipeline.

An unexpected side effect of this is that the conditional for the wait
step doesn't work anymore.

This commit skips the wait step when the JDK matrix pipelines get triggered
from another pipeline.
2024-01-02 11:29:52 +02:00
Dimitrios Liappis
4a7b31ea6d
[ci] No Slack alerts for pipeline scheduling job (#15728)
As a follow up to #15705, this commit shushes Slack alerts for the
common Buildkite scheduling pipeline. This is because the pipeline
scheduler just triggers other pipelines which have their own dedicated
alerts setup, therefore we want to avoid duplicate alerts when
there is a failure with one of the triggered pipelines.
2024-01-02 11:14:54 +02:00
Dimitrios Liappis
9538338abb
[ci] Add testing phase to exhaustive tests suite (#15711)
This is the second part of the migration of the exhaustive/main
Jenkins Job to Buildkite. So far we've migrated the "compatibility
phase" and this commit adds the "testing phase"[^1], which is essentially
the same set of tests that we run on PR jobs.

Relates https://github.com/elastic/ingest-dev/issues/1722
Depends https://github.com/elastic/logstash/pull/15708

[^1]: For more details, refer to the sequence diagram in https://github.com/elastic/ingest-dev/issues/1722#issuecomment-1824378635
2023-12-21 09:42:14 +02:00
Mashhur
0049717394
So far we have been practicing Snyk scanning of docker images. Now we can comment out the docker scan logic and address the collected issues later with more experiments. (#15702) 2023-12-20 10:51:20 -08:00
Dimitrios Liappis
03d7b59f2a
[ci] Reusable unit + IT test steps for Buildkite (#15708)
This commit is a pre-requisite for adding unit + IT tests in a
dedicated phase of the Exhaustive tests pipeline.

It refactors the tests currently used by PR jobs, so that they become
reusable.
2023-12-20 18:22:09 +02:00
Felix Jansson
69c145d4ee
Add log.format to the logstash.yml (#15049)
* Added log.format in the logstash.yml (#11290)

* Update config/logstash.yml

---------

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2023-12-20 14:40:59 +00:00
Andrea Selva
48b0af1206
Replace Gradle's report.enabled setting to report's required property (#15706) 2023-12-20 12:40:10 +01:00
Dimitrios Liappis
33347ddddf
Common scheduler pipeline for Buildkite (#15705)
As an improvement from #15668 / #15700, rather than having one
dedicated side-car scheduling job per pipeline, we move to a single
scheduling job. Various pipelines that need triggering with different
schedules now live under each schedule in the new pipeline.

This reduces the amount of jobs we have to maintain in yaml.
2023-12-20 11:47:54 +02:00
Dimitrios Liappis
06ceef8e13
[ci] Allow schedule to trigger several pipelines (#15703)
This commit enhances the functionality introduced in #15668 and #15700
by allowing a single Buildkite scheduling job to trigger several
pipelines, in addition to multiple branches which it already does.

We rename the env var PIPELINE_TO_TRIGGER to PIPELINES_TO_TRIGGER
which now supports comma separate values.

This enhancement can be useful for pipelines like JDK matrix which
have variants (Linux and Windows) that we want to trigger with a single
scheduling job, thus reducing unnecessary entries in catalog-info.
2023-12-20 10:22:46 +02:00
Dimitrios Liappis
42497edebb
Bug fix for trigger pipeline Buildkite pipeline (#15700)
This commit fixes a few bugs introduced in #15668 related to paths for
the calling script. We also stop limiting the execution only from the
main branch (to facilitate e.g. tests from PRs) and, finally, remove
the async clause, which is not needed, since by default BK steps are
run in parallel.
2023-12-19 13:33:46 +02:00
Dimitrios Liappis
c2eaecce8e
[ci] Remove hardcoded branches from DRA schedule (#15668)
This commit is the first making use of #15627 to remove hard coded
branches for the DRA Snapshot build schedule.

With this pattern, we will only need to keep `ci/branches.json` up to date,
as versions evolve, and not need to update/maintain hard coded branches
in `catalog-info.yaml` anymore.

Once this is verified working, we'll add a corresponding schedule
pipeline (in `catalog.info`) for the JDK matrix job.

Relates: https://github.com/elastic/ingest-dev/issues/2664
2023-12-19 11:42:28 +02:00
Dimitrios Liappis
f062fef603
[ci] Add Java 21 option for multi JDK CI pipeline (#15691)
Now that CI VM images are pre-provisioned with various flavors of
Java 21, we add the option for the corresponding CI job.
Adoptium 17 remains the default pre-selected option.

Relates https://github.com/elastic/ci-agent-images/pull/463
2023-12-18 16:56:48 +02:00
Andrea Selva
6b22cc8dd1
Separate scheduling of segments flushes from time (#15680)
Introduces a new interface named SchedulerService to abstract away from the ScheduledExecutorService that executes the DLQ segment flushes. Abstracting from time benefits testing: the test doesn't have to wait for things to happen; they can instead happen synchronously.
2023-12-18 11:07:02 +01:00
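The testing benefit described above can be illustrated with a minimal Python rendition; these names mirror the commit's description, not Logstash's actual Java types:

```python
# Hide scheduling behind an interface so tests can run "scheduled" work
# synchronously instead of waiting on real timers. Illustrative sketch only.
from abc import ABC, abstractmethod
from typing import Callable

class SchedulerService(ABC):
    @abstractmethod
    def schedule(self, task: Callable[[], None]) -> None:
        ...

class SynchronousScheduler(SchedulerService):
    """Test double: runs the task immediately, so tests never wait."""
    def schedule(self, task: Callable[[], None]) -> None:
        task()

flushed = []
SynchronousScheduler().schedule(lambda: flushed.append("segment-0"))
print(flushed)  # -> ['segment-0']
```

In production a real implementation would delegate to a timer-backed executor; the test swaps in the synchronous one and asserts on the result immediately.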
João Duarte
aa05205a9e
Update logstash_releases.json to include 7.17.16 and 8.11.3 (#15688) 2023-12-12 18:23:46 +00:00
João Duarte
9e93ded8ea
Update catalog-info.yaml to include testing for 8.12 branch (#15666)
Don't remove 8.11 yet as we may have 8.11.x maintenance releases until 8.12.0 is GA.
2023-12-12 17:30:26 +00:00
github-actions[bot]
64ef307a73
Release notes for 8.11.3 (#15681) (#15685)
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit a45349ea47)
2023-12-12 10:28:39 -05:00
Andrea Selva
241c03274c
Avoid running integration tests while testing all supported plugins (#15511)
Update the test_plugins pipeline script to execute only the unit tests.
Use the vendored JRuby in every Ruby-related duty, such as running `bundler` and `gem`.
Temporarily comments out plugins that need to be fixed and already fail on their Travis CI.
Executes the testing of tier1 input plugins in a VM instead of the Buildkite agent.
2023-12-11 10:11:03 +01:00
github-actions[bot]
042e5d4c2d
Release notes for 8.11.2 (#15660) forwardport to main (#15676)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
(cherry picked from commit 4e3e09c5d6)
2023-12-07 17:17:32 -05:00
Mashhur
6a44d9df68
Update JRuby core to 9.4.5.0 of rubyUtils.gralde (#15671) 2023-12-07 11:29:16 -08:00
Karen Metts
6c2e123a24
Doc: Add docs for extending integrations with filter-elastic_integrations (#15518) 2023-12-07 14:21:23 -05:00
Dimitrios Liappis
365ca4e2d6
Add new 8.12 branch to CI (#15667)
Now that we have a new 8.12 branch[^1], add it to CI definitions.

[^1]: https://github.com/elastic/logstash/tree/8.12
2023-12-07 15:59:08 +02:00
João Duarte
6c12e6f4c2
bump version to 8.13.0 (#15665) 2023-12-06 23:00:47 +00:00
Karen Metts
0954a687f0
Doc: Update Logstash intro and security overview for serverless (#15313) 2023-12-06 17:00:01 -05:00
Andrea Selva
eddd91454f
Shutdown DLQ segments flusher only if it has been started (#15649)
In DLQ unit testing, sometimes the DLQ writer is started explicitly without starting the segment flushers. In such cases the test's logs contain exceptions which could lead one to think that the test fails silently.

Avoid invoking the scheduledFlusher's shutdown when it's not started (such behaviour is present only in tests).
2023-12-05 09:06:24 +01:00
Dimitrios Liappis
a26b1d399f
[ci] Skip Logstash PR checks when docs change (#15650)
Changes made under `docs/` are unrelated to the PR tests triggered from
this repo[^1]

This commit skips regular PR tests when changes are made (only) under the `docs/` directory.

Note that the current docs-specific Jenkins Job is getting migrated
at an org level[^2] to Buildkite, and this change will be merged soon.

Relates:

- https://github.com/elastic/ingest-dev/issues/2703

[^1]: via `.buildkite/pull_requests_pipeline.yml`
[^2]: https://github.com/elastic/docs/blob/master/.buildkite/pull-requests.org-wide.json
2023-12-04 19:14:57 +02:00
Dimitrios Liappis
d42b938f81
[ci] Compatibility tests for Exhaustive suite (#15641)
This commit adds the compatibility tier for the Exhaustive tests suite.
Specifically, we introduce two new groups (running in parallel) for Linux and Windows compat tests.
Linux picks one OS per family from [^1], and Windows likewise picks one of the three available choices from the same file.

We also support manual override, if user chooses to, by setting `LINUX_OS` or `WINDOWS_OS` as env vars in the Buildkite build prompt (in this case there is no randomization, and only one OS can be defined for Linux and Windows respectively).

For example:
```
LINUX_OS=rhel-9
WINDOWS_OS=windows=216
```

Relates:

- https://github.com/elastic/ingest-dev/issues/1722

[^1]: 4d6bd955e6/.buildkite/scripts/common/vm-images.json
2023-11-30 20:28:19 +02:00
Dimitrios Liappis
0b1d15e250
Add list of active branches for automation (#15627)
This commit introduces a json file that contains a list of active
branches. It's essentially the same thing done in the Elasticsearch
repo in https://github.com/elastic/elasticsearch/pull/99026 and helps
simplify and automate Buildkite scheduled jobs.

Relates:

- https://github.com/elastic/ingest-dev/issues/2664
- https://elasticco.atlassian.net/browse/ENGPRD-318
- https://github.com/elastic/ingest-dev/issues/2663
2023-11-30 16:55:46 +02:00
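A scheduled job could consume such a file along these lines; the schema shown is an assumption modeled on the Elasticsearch file the commit references:

```python
# Reading an active-branches file of the kind this commit introduces; the
# exact schema of ci/branches.json is assumed here for illustration.
import json

BRANCHES_JSON = '{"branches": [{"branch": "main"}, {"branch": "8.12"}, {"branch": "7.17"}]}'

def active_branches(raw: str) -> list:
    """Return the branch names a scheduled pipeline should fan out to."""
    return [entry["branch"] for entry in json.loads(raw)["branches"]]

print(active_branches(BRANCHES_JSON))  # -> ['main', '8.12', '7.17']
```

Keeping the branch list in one data file means only that file needs updating as versions evolve, instead of every scheduled pipeline definition.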
kaisecheng
05392ad16e
Added missing method of logger wrapper for puma (#15640)
This commit fixes no method error when node stats API got
invalid API path, which triggers puma to print error using stderr

Fix: #15639
2023-11-30 13:53:18 +00:00
Edmo Vamerlatti Costa
5543e3c3b2
Add support to add and remove multiple keystore keys in a single operation (#15612)
This commit added support for adding and removing multiple keystore keys in a single operation. It also fixed the empty-value validation for editing existing key values and added ASCII validation for values.
2023-11-30 10:21:51 +01:00
github-actions[bot]
8b9acf12fa
Release notes for 8.11.1 (#15572) forwardport to main (#15637)
(cherry picked from commit 3d4e7e121d)
2023-11-29 18:12:10 -05:00
Dimitrios Liappis
6446bba962
[ci] Switch to Adoptium distro for JDK 11 (#15629)
AdoptOpenJDK is nowadays Adoptium, so we replace it in favor of
the latter which is actively maintained.

Relates https://github.com/elastic/logstash/pull/15628
2023-11-29 11:37:27 +02:00
Mashhur
6785038435
Logstash core integration tests for Logstash to Logstash communication. (#15541)
* Logstash core integration tests for Logstash to Logstash communication.

* Clean up: Logstash core integration tests for Logstash to Logstash communication.
2023-11-28 10:02:22 -08:00
Dimitrios Liappis
66d37412e8
[ci] Switch to Adoptium JDK 17 for Jenkins CI (#15628)
The last remaining Jenkins job prior to BK migration is for
exhaustive tests. The compatibility phase seems to be failing
since 57dc14c92
with Java 17.0.2

This commit switches from OpenJDK 17 (whose last release was 17.0.2)
to AdoptiumJDK 17 which actively receives updates and is bundled
in the custom images used by Jenkins.
2023-11-28 19:28:44 +02:00
Dimitrios Liappis
f0019bf33c
[ci] Fix scheduled JDK matrix CI jobs (#15623)
This commit fixes failed scheduled JDK matrix CI jobs, that
can't access the default values for the OS and JDK from the input
steps, as observed in [^1] and [^2].

[^1]: https://buildkite.com/elastic/logstash-linux-jdk-matrix-pipeline/builds/53#018c1371-b760-4c28-9203-340c0a1df150
[^2]: https://buildkite.com/elastic/logstash-windows-jdk-matrix-pipeline/builds/35#018c1371-b72e-48b4-b707-ce103eb6039c
2023-11-28 16:00:32 +02:00
Karen Metts
906c2513c3
Doc: Improvements to monitoring with agent (#15619) 2023-11-27 14:18:06 -05:00
Mashhur
1e65f53d68
Use proper BK agent and simplify some operations in Snyk report pipeline. (#15610)
* Use Java installed BK agent and remove unnecessary git clone operation since repo is already cloned.

* Switch back to normal VM since Logstash BK agent doesn't support docker operations.
2023-11-27 09:31:25 -08:00
Dimitrios Liappis
db50983ab5
[ci] Initial Exhaustive tests Buildkite pipeline (#15607)
This commit adds a skeleton Buildkite pipeline for the Exhaustive tests
suite.
2023-11-27 11:06:19 +02:00
Dimitrios Liappis
ade2fb0687
[ci] Initial Exhaustive tests Buildkite resource (#15608)
This commit adds a new resource for the Exhaustive tests Buildkite
pipeline.

Relates:

- https://github.com/elastic/ingest-dev/issues/1722
- https://github.com/elastic/logstash/pull/15607
2023-11-27 11:06:02 +02:00
Dimitrios Liappis
2cbb4500dc
[ci] Fix slack alerts for Linux JDK matrix pipeline (#15609)
This commit enables slack alerts for the Linux part of the JDK
matrix pipeline, which was missed in the previous PR#15593.
2023-11-24 18:03:01 +02:00
Dimitrios Liappis
f8bb0480fc
[ci] Explicit maximum timeout for pipelines (#15601)
This commit adds a maximum default (global) timeout for every pipeline
definition (now that it's possible to define this programmatically in
an RRE).

The default values have been chosen arbitrarily based on intuition
about how much (in the worst case) we should wait for each job to
run until we consider them stuck/failed.

While at it, we update the yaml schema for RREs to point to the
latest commit (rather than a pinned commit that doesn't reflect the
latest changes, e.g. `maximum_default_timeout`).

Relates #15380
2023-11-24 17:37:22 +02:00
github-actions[bot]
b7c31f3fda
swap dataformat-yaml with snakeyaml (#15599) (#15606)
This removes the dependency on jackson's dataformat-yaml. Since there's only a single place where this library is used in core: to load the plugin alias definition, the code can be replaced by the underlying snakeyaml.

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
(cherry picked from commit 93d8a9da32)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2023-11-22 12:07:10 +00:00
Edmo Vamerlatti Costa
57dc14c92c
Fix issue with Jackson 2.15: Can not write a field name, expecting a value (#15564)
This commit fixes the issue with Jackson > 2.15 and `log.format=json`: "Can not write a field name, expecting a value", by adding a default serializers for JRuby's objects.
2023-11-21 15:30:24 +01:00
Dimitrios Liappis
db8f87bf9d
[ci] Enable auto runs of JDK matrix pipelines (#15593)
This commit enables scheduled runs of the JDK matrix pipelines for
both Windows and Linux, once a week, Tuesdays at 1am UTC, using the
pipeline defaults for OS and JDK.
2023-11-21 16:11:54 +02:00
Dimitrios Liappis
e259e04e53
[ci] Use GCP prod image for Linux JDK matrix job (#15600)
So far we've been using images from the -qa GCP image project throughout
the development of the Logstash Linux JDK matrix pipeline for quicker
iteration.

As we have scheduled weekly builds of those images that promote to
prod[^1] we can now switch to the prod version of the GCP images.

[^1]: https://buildkite.com/elastic/ci-vm-images/builds/2888

Relates https://github.com/elastic/ingest-dev/issues/1725
2023-11-20 17:27:04 +02:00
Dimitrios Liappis
8fa3bd0d7f
[ci] Support amazonlinux for JDK matrix pipeline (#15595)
The last part of the Logstash JDK matrix CI migration from Jenkins to
Buildkite is AmazonLinux 2023.

While we have a working image[^1], this is the only step that requires
an agent that runs on AWS.

This commit refactors the builder to support GCP or AWS agents depending
on the OS.

[^1]: https://github.com/elastic/ci-agent-images/pull/441
2023-11-20 11:32:47 +02:00
Dimitrios Liappis
cd01abb1c7
[ci] Add yaml language server to pipelines (#15590)
Add missing yaml-language-server definition to Buildkite pipeline files
(static and dynamic generated) for consistency and to ease spotting
errors with editors.
2023-11-15 17:47:15 +02:00
Dimitrios Liappis
abc1384ea3
[ci] Add 90min timeout for supported plugins test (#15581)
This commit adds a global max timeout of 90min for the supported plugins
Buildkite pipeline. This prevents builds from hanging (for 24hrs, which is
the default).

Relates: #15380
2023-11-15 12:11:20 +02:00
kaisecheng
53a346c57f
[Doc] add reference to ECK (#15565)
add ECK reference to doc

Fixed: #15471

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2023-11-14 23:26:45 +00:00
Karen Metts
c060c00d7c
Doc: Add Elastic Agent collection (#15528)
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2023-11-14 13:55:39 -05:00
Dimitrios Liappis
ce63ea4a51
[ci] Fix image name for Rocky Linux 8 (#15584)
Fix typo for image name of Rocky Linux 8 for JDK matrix jobs.
2023-11-14 19:26:49 +02:00
João Duarte
6547f8c5c4
Update logstash_releases.json to include 7.17.15 and 8.11.1 (#15578) 2023-11-14 11:03:54 +00:00
verogo
496f18effc
[Buildkite] Add pipeline max timeout property (#15579)
Build's maximum_timeout_in_minutes and default_timeout_in_minutes are now available through the catalog-info.yaml file.
As this change was made manually before we implemented this in RRE/Terrazzo, they got reverted to the default value (0); thus, I am raising this PR to restore the value specified before the upgrade (120 mins).
2023-11-14 09:12:24 +02:00
Mashhur
8700179cf2
Update 8.x CI to 8.11 (#15551) 2023-11-12 05:10:56 +05:00
Dimitrios Liappis
0ede19a0e1
[ci] JDK matrix Buildkite pipelines (pt 2/Windows) (#15563)
This commit adds JDK matrix Buildkite pipelines for
Windows 2022, 2019 and 2016.

It also makes the groups easier to read (on both Linux and Windows
pipelines) by removing the os-jdk prefix from the job labels.

`testDLQWriterFlusherRemovesExpiredSegmentWhenCurrentHeadSegmentIsEmpty`
fails on Windows Buildkite agents and it's a test issue tracked in
https://github.com/elastic/logstash/issues/15562.

Relates:

- https://github.com/elastic/logstash/pull/15539
- https://github.com/elastic/ingest-dev/issues/1725
2023-11-10 17:25:13 +02:00
Dimitrios Liappis
f87651fee0
[ci] Remove 8.10 scheduled builds for DRA (#15544)
This commit removes 8.10 from the automated builds of DRA
artifacts, and should be merged once 8.11.0 is out.
2023-11-10 17:18:10 +02:00
Andrea Selva
5af14f4e1c
Fixed functional test in case the LS_JAVA_HOME is configured (#15535)
Adds filtering on the Logstash output message in an integration test when setting the LS_JAVA_HOME environment variable.
2023-11-10 11:01:42 +01:00
Dimitrios Liappis
956bf483f2
[ci] JDK matrix Buildkite pipelines (part 1) (#15539)
This commit is the first part of the migration of JDK matrix tests
from Jenkins to Buildkite. There will be two separate pipelines, for
Linux and Windows.

Linux is currently limited to Ubuntu 22.04 and 20.04, but
additional operating systems will be added outside of the Logstash
repository seamlessly through additional VM images.

Steps are created dynamically and the underlying script is meant to be
common for Linux and Windows. Windows is currently a stub and
will be added in a follow up PR.

Relates:

- https://github.com/elastic/ingest-dev/issues/1725
- https://github.com/elastic/ci-agent-images/pull/424
2023-11-09 09:53:27 +02:00
github-actions[bot]
c762deceae
Release notes for 8.11.0 (#15465) (#15548)
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit d4715a36c0)
2023-11-07 19:10:04 -05:00
Ry Biesemeyer
a7e2839a83
build meta: bump 8.x snapshot to 8.12 (#15546)
once 8.11 was cut, snapshot build for 8.x should be 8.12
2023-11-07 14:34:08 -08:00
Andrea Selva
7a055c34d1
Fixed definitions of licenses for some dependencies (#15540)
Update some dependencies' license definitions.
2023-11-07 17:12:28 +01:00
João Duarte
bd6189db8e
Update JRuby to 9.4.5.0 (#15531) 2023-11-07 13:41:04 +00:00
Dimitrios Liappis
07147b3e40
[ci] Split JDK matrix pipelines per OS (#15534)
This commit splits the generic Buildkite pipelines introduced
in #15520 for JDK tests to separate pipelines for Linux and Windows.
2023-11-07 15:22:37 +02:00
Dimitrios Liappis
b86ad3038a
[ci] Split JDK matrix resources per OS (#15533)
This commit splits the generic Buildkite catalog resource introduced
in #15519 for JDK tests to separate resources for Linux and Windows.
2023-11-07 14:19:47 +02:00
Mashhur
fa1382fd22
Update the Logstash to Logstash Native doc to reflect the multiple hosts usage. (#15512)
* Update the Logstash to Logstash Native doc to reflect the multiple hosts usage.

* Logstash to Logstash comm page, adding LS-to-LS native HA support.

* Apply suggestions from code review

Refining the context.

Co-authored-by: Andres Rodriguez <andreserl@gmail.com>

---------

Co-authored-by: Andres Rodriguez <andreserl@gmail.com>
2023-11-06 12:32:48 -08:00
Ry Biesemeyer
51886b9102
geoip: extract database manager to stand-alone feature (#15348)
* geoip: extract database manager to stand-alone feature

Introduces an Elastic-licensed GeoipDatabaseManagement tool that can be used
by ANY plugin running on Elastic-licensed Logstash to retrieve a subscription
to a GeoIP database that ensures EULA-compliance and frequent updates, and
migrates the previous Elastic-licensed code-in-Logstash-core extension to
the Geoip Filter to use this new tool, requiring ZERO changes to in-the-wild
versions of the plugin.

The implementation of the new tool follows the previous implementation as
closely as possible, but presents a new interface that ensures that a
consumer can ATOMICALLY subscribe to a database path without risk that the
subscriber will receive an update or expiry before it is finished applying
the initial value:

~~~ ruby
geoip_manager = LogStash::GeoipDatabaseManagement::Manager.instance
subscription = geoip_manager.subscribe('City')

subscription.observe(construct: ->(initial_dbinfo){ },
                     on_update: ->(updated_dbinfo){ },
                     on_expire: ->(       _      ){ })

subscription.release!
~~~

* docs: link in geoip database manager docs

* docs: reorganize pending 'geoip database management' feature

* docs: link to geoip pages from feature index

* geoip: add SubscriptionObserver "interface"

simplifies using Subscription#observe from Java

* geoip: fixup SubscriptionObserver after rename

* geoip: quacking like a SubscriptionObserver is enough

* geoip: simplify constants of legacy geoip filter extension

* geoip: bump logging level to debug for non-actionable log

* geoip: refine log message to omit non-actionable info

* re-enable invokedynamic (was disabled to avoid upstream bug)

* geoip: resolve testing fall-out from filter extension's "private" constants removal

* geoip: consistently use `DataPath#resolve` internally, too
2023-11-06 09:22:23 -08:00
Andres Rodriguez
e3584fd53e
Plugin Tests: Exit 1 on error (#15527)
The plugin tests were not correctly exiting when plugin errors were present. They now correctly exit 1 if there are plugins with errors.
2023-11-03 12:38:01 -04:00
Andres Rodriguez
cf71dae3ff
Add support to test unsupported plugins (#15526)
Add support to test unsupported plugins. Only enable input-rss for now.
2023-11-03 12:08:01 -04:00
Dimitrios Liappis
ccc41d76ff
[ci] Initial JDK matrix Buildkite pipeline (#15520)
This commit adds a skeleton Buildkite pipeline for the JDK matrix tests.

Relates:

- https://github.com/elastic/logstash/pull/15519
- https://github.com/elastic/ingest-dev/issues/1725
2023-11-02 17:51:50 +02:00
Dimitrios Liappis
5df18f1053
[ci] Initial jdk matrix Buildkite resource (#15519)
This commit adds a skeleton resource definition for a Buildkite
pipeline for JDK matrix tests.

Relates:
https://github.com/elastic/ingest-dev/issues/1725
https://github.com/elastic/logstash/pull/15520
2023-11-02 17:51:27 +02:00
Andrea Selva
73daec05ed
Download of JDK from the Elastic catalog instead of Adoptium (#15514)
* Adapted the JDK's download URL creation to interact with the Elastic catalog to get metadata, and return the catalog download link instead of directly pointing to the Adoptium API

* Silenced the JDK Download task's printing of the full URL
2023-10-30 16:24:27 +01:00
João Duarte
206362212a
Update JDK to 17.0.9+9 (#15509) 2023-10-27 10:20:26 +01:00
Rob Bavey
a398c93eec
Update Iron Bank base image to ubi9.2 (#15490) 2023-10-26 09:53:29 -04:00
Dimitrios Liappis
1b794dfcd6
[ci] Slack notifications for aarch64 pipeline (#15507)
This commit enables slack notifications for aarch64 Buildkite pipeline
failures.
2023-10-26 15:20:16 +03:00
Dimitrios Liappis
c384190718
[ci] aarch64 Buildkite pipeline part 2 (#15473)
This commit is the follow up PR after #15466, which migrates away
the remaining aarch64 acceptance test Jenkins jobs to Buildkite.

Relates:

- #15466
- https://github.com/elastic/ingest-dev/issues/1724
2023-10-25 18:08:56 +03:00
Andrea Selva
90964fb559
Updates call-site syntax for the i18n.t method to avoid a deprecated and prohibited format (#15500)
Updates invocations of the i18n.t method which are leftovers missed in the original Ruby 3.1 update PR #14861.

Without this, some error-reporting logs are hidden by an argument-mismatch error while translating the error message.
2023-10-25 11:42:46 +02:00
Dimitrios Liappis
16da966290
[ci] re-enable Java unit tests on aarch64 pipeline (#15492)
PR#15466 skipped the Java unit tests as on the `main` and `8.11`
branches they attempted to run sonar scans (which are only meant to
run for PRs).

This commit re-enables the Java unit tests, taking advantage of #15486,
disabling the sonar scan part of the test suite.
2023-10-24 18:05:14 +03:00
Dimitrios Liappis
b1ab2fad3e
Option to disable SonarQube with Java Tests (#15486)
This commit introduces a way to optionally disable SonarQube scanning
and coverage reports when running Java unit tests. The integration
was introduced in #15279, however, there are cases (e.g. running Java
unit tests outside of PRs) where we don't want this integration.

Disabling can be achieved by setting the env var `ENABLE_SONARQUBE`
to `false`.
2023-10-24 10:32:44 +03:00
Rob Bavey
57cc392d0e
Update ubi8 base image to 8.7 (#15487) 2023-10-23 12:47:27 -04:00
Dimitrios Liappis
36656de4f0
[ci] aarch64 Buildkite pipeline part 1 (#15466)
This commit is the first part of migrating away the aarch64 Jenkins
jobs to Buildkite. It adds a group of exhaustive test steps in the
aarch64 pipeline.

The java unit tests are temporarily disabled as they run SonarQube
scans which need to be associated with pull requests.

Relates:

https://github.com/elastic/ingest-dev/issues/1724
2023-10-23 15:01:14 +03:00
João Duarte
e626409695
Update JRuby, JDK and Jackson (#15477)
* update JDK to 17.0.8.1+1
* update JRuby to 9.4.4.0
* update jackson to 2.15.3
2023-10-23 11:17:01 +01:00
João Duarte
6046402a35
Remove pinning of psych 5.1.0 (#15474)
https://github.com/ruby/psych/issues/655 has been fixed with the release of 5.1.1.1
2023-10-18 16:02:37 +01:00
Dimitrios Liappis
0083738cde
[ci] Initial aarch64 Buildkite pipeline (#15460)
This commit adds a skeleton Buildkite pipeline for aarch64
exhaustive tests.

- https://github.com/elastic/logstash/pull/15459
- https://github.com/elastic/ingest-dev/issues/1724
2023-10-18 09:45:05 +03:00
Dimitrios Liappis
9aa291a2a1
[ci] Initial aarch64 Buildkite pipeline (#15459)
This commit adds a skeleton resource definition for a Buildkite
pipeline for aarch64 exhaustive tests.

Relates:
https://github.com/elastic/ingest-dev/issues/1724
2023-10-18 09:27:34 +03:00
Rob Bavey
860785bbb0
Release notes for 8.10.4 (#15435) (#15463)
* Update release notes for 8.10.4
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
Co-authored-by: Karen Metts <karen.metts@elastic.co>

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-10-17 09:54:08 -04:00
Dimitrios Liappis
9a924f45ef
[ci] Slack notifications for PR Buildkite builds (#15452)
This commit enables Slack notifications for the pull request Buildkite
pipeline.
2023-10-16 18:32:19 +03:00
Andres Rodriguez
235ba422cc
Change plugin test buildkite pipeline to run in parallel
Closes #15440
2023-10-16 10:42:05 -04:00
Dimitrios Liappis
1284c675d1
[ci] Enable Buildkite PR comments (#15453)
This commit enables automatically updated comments from Buildkite for
PRs. It posts comments to PRs whenever a Buildkite build finishes or
encounters its first failing step and keeps the comment updated.
2023-10-16 17:12:14 +03:00
Dimitrios Liappis
784ca0243e
[docs] Small typo fixes in README (#15429)
Fix a few typos in the Logstash README.md
2023-10-16 16:49:52 +03:00
Dimitrios Liappis
555a8ff8e3
Migrate remaining PR jobs to Buildkite #15444
This commit adds the remaining steps for the Buildkite pull request pipeline i.e.

- `IT Persistent Queues / part 1`
- `IT Persistent Queues / part 2`
- `x-pack unit tests`
- `x-pack integration`

Once merged we will be able to retire the corresponding Logstash pull request multijob.

Relates:

- https://github.com/elastic/logstash/pull/15438
- https://github.com/elastic/logstash/pull/15437
- https://github.com/elastic/ingest-dev/issues/1721
- https://github.com/elastic/logstash/pull/15279
2023-10-14 09:46:52 +03:00
Dimitrios Liappis
816d7e6b2b
[ci] Add PR it-tests (part 1+2) to Buildkite (#15438)
This commit adds integration tests to the
Buildkite PR pipeline.

Relates:

- https://github.com/elastic/logstash/pull/15437
- https://github.com/elastic/ingest-dev/issues/1721
- https://github.com/elastic/logstash/pull/15279
2023-10-13 17:01:01 +03:00
Dimitrios Liappis
e285425d54
[ci] Commit PR based Java unit tests for Buildkite (#15437)
This commit adds Java unit tests (including sonar scans) to the
Buildkite pull request pipeline.

Relates: https://github.com/elastic/ingest-dev/issues/1721
2023-10-13 10:48:10 +03:00
Mashhur
3a753e49e8
Update Logstash core versions in CI after release. (#15424) 2023-10-12 22:57:43 -07:00
Dimitrios Liappis
4ee74c9de6
[ci] Daily DRA snapshot builds with Buildkite (#15395)
This commit enables the snapshot job schedule for Buildkite jobs.
They are set to run after Jenkins (scheduled @midnight) so that any
remote/unknown chance of conflict with the release manager is limited.

While at it, we also enable slack notifications for failures to the
same channels as Jenkins.
2023-10-11 21:00:45 +03:00
João Duarte
401d166a89
pin psych 5.1.0 due to ruby/psych#655 (#15433)
The new psych 5.1.1 gem seems to not work when installed in JRuby 9.4 (used in main).

This change pins the version back to 5.1.0 until ruby/psych#655 is sorted.
2023-10-11 14:58:42 +01:00
Andrea Selva
a03a05b697
Defines a pipeline to run specs tests on supported plugins (#15380)
Defines a Buildkite pipeline to run specs tests on tier1 and tier2 plugins, using container images. The tasks are divided by plugin type for each tier, to run them in parallel. Once all tier1 testing is completed, tier2 starts.
2023-10-11 11:40:54 +02:00
Dimitrios Liappis
d4eccd256f
[ci] Don't trigger BK PR jobs on pushes (#15426)
Currently while PR jobs get triggered as expected in Buildkite via
the bot, they also get re-triggered after something gets pushed,
including e.g. the squash merge commit when the PR is merged.

This commit disables this behavior as documented in Buildkite[^1]
and by tightening our filter.

Relates:

- #15402
- https://github.com/elastic/ingest-dev/issues/1721

[^1]: https://buildkite.com/docs/apis/rest-api/pipelines
2023-10-11 11:45:30 +03:00
Dimitrios Liappis
fd2910a6ef
[ci] Improve PR step names for Buildkite (#15421)
This commit shortens the descriptions of the PR Buildkite steps for
better readability on the UI.
2023-10-11 09:29:30 +03:00
Dimitrios Liappis
bc5ad292d4
[ci] Fix agents in BK pipeline for PRs (#15415)
This commit fixes a typo introduced in #15413 and #15402
2023-10-10 18:55:40 +03:00
Andrea Selva
c0e812158c
Update Guava dependency to 32.1.2 (#15394) 2023-10-10 17:14:23 +02:00
Dimitrios Liappis
e1eb1978e5
[ci] Add missing repository in Buildkite PR (#15413)
PR #15402 missed the required repository property for the Buildkite
pull request resource.
2023-10-10 17:41:53 +03:00
Dimitrios Liappis
3bfd824f7c
[ci] Initial pull request jobs for buildkite (#15402)
This commit adds a Buildkite resource for pull requests and
the two simple jobs, license checking and ruby unit tests that are
already part of the Jenkins PR multi job setup.

As this is WiP, slack notifications aren't enabled.
2023-10-10 16:39:41 +03:00
Andres Rodriguez
64ddec5c9d
Fix a few lint format issues
Fix lint issues found by 'rake lint:format'
2023-10-10 09:00:54 -04:00
github-actions[bot]
be063e2eab
Release notes for 8.10.3 (#15390) (#15404)
* Add known imap and email plugin issues section to Logstash 8.10+ versions.

Co-authored-by: Mashhur <mashhur.sattorov@elastic.co>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit a88f82e77f)
2023-10-09 20:19:44 -04:00
github-actions[bot]
537f1b942d
Add bigdecimal > 3.1 dependency. (#15377) (#15384)
(cherry picked from commit 9680bbd82a)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2023-10-09 14:54:28 +01:00
Andrea Selva
2b7515cfee
Add definition for plugin's test pipeline to Buildkite catalog (#15382)
Add definition of pipeline to Buildkite catalog. The new pipeline executes the supported plugins' tests.
2023-10-05 11:32:24 +02:00
Dimitrios Liappis
94dfd4773b
[DRA] Remove wait step in buildkite (#15366)
DRA artifact builds support two optional parameters
`VERSION_QUALIFIER_OPT` and `DRA_DRY_RUN`.  The most important is
`VERSION_QUALIFIER_OPT` which should be provided when `alpha1` or
similar versions need to be built.

Currently, after clicking new build, the pipeline takes ~20s to
assemble the steps and then pauses the job, waiting for the user to
fill in these options (or just accept the empty defaults) and press continue.

I feel that this could be trappy behavior because the majority of the
use cases don't need it, and it's likely that a user manually clicks
build and forgets that they'd need to confirm these parameters later
on, left with a hanging build.

This commit makes the parameters optional. If needed, they should be
defined explicitly as Environment Variables in the New Build prompt,
after expanding the Options section.

The downside of this approach is that when needed, users need to
consult the documentation about the environment variable names.
2023-10-05 10:06:07 +03:00
Mashhur
0adcc09bf6
Bump core version to 8.12 (#15378) 2023-10-03 23:33:46 -07:00
Ry Biesemeyer
70081bbcac
deps: downgrade jruby, keep updated default-gem dependencies (forward-port #15283) (#15369)
* deps: downgrade jruby, keep updated default-gem dependencies (#15283)

forward-ports non-release-branch components of #15283 to `main`

* deps: downgrade jruby, keep updated default-gem dependencies

By downgrading JRuby to 9.4.2.0 we avoid the silent global crash of the
scheduler backing `Concurrent::TimerTask` that occurs when JRuby 9.4.3.0's
invokedynamic incorrectly promotes a method to run natively.

Upstream bug: https://github.com/jruby/jruby/issues/7904

Along with the downgrade of JRuby itself to 9.4.2.0, we cherry-pick the
updates to gems that were included in the latest JRuby 9.4.3.0 to ensure
we don't back out relevant fixes to stdlib.

We also remove a pinned-dependency on `racc` that is no longer relevant.

Resolves: https://github.com/elastic/logstash/issues/15282

* Imported the licenses for some gems

- cgi
- date
- ffi-binary-libfixposix
- io-console
- net-http
- net-protocol
- reline
- time
- timeout
- uri

* specs: avoid mocking global ::Gem::Dependency::new

* build: remove redundant dependsOn declaration

* deps: notice use of ffi-binary-libfixposix via Ruby license

this gem is tri-licensed `Ruby` / `EPL-2.0` / `LGPL-2.1-or-later` and
the Ruby license is preferred to EPL when available

---------

Co-authored-by: andsel <selva.andre@gmail.com>

* deps: add license notices for gems moved from default to bundled

---------

Co-authored-by: andsel <selva.andre@gmail.com>
2023-10-03 14:32:28 -07:00
2177 changed files with 40273 additions and 62179 deletions


@@ -0,0 +1,161 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json
agents:
provider: aws
imagePrefix: platform-ingest-logstash-ubuntu-2204-aarch64
instanceType: "m6g.4xlarge"
diskSizeGb: 200
steps:
- group: "Testing Phase"
key: "testing-phase"
steps:
- label: ":rspec: Ruby unit tests"
key: "ruby-unit-tests"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/unit_tests.sh ruby
retry:
automatic:
- limit: 3
- label: ":java: Java unit tests"
key: "java-unit-tests"
env:
# https://github.com/elastic/logstash/pull/15486 for background
ENABLE_SONARQUBE: "false"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/unit_tests.sh java
retry:
automatic:
- limit: 3
- label: ":lab_coat: Integration Tests / part 1-of-3"
key: "integration-tests-part-1-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 0 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: Integration Tests / part 2-of-3"
key: "integration-tests-part-2-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 1 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: Integration Tests / part 3-of-3"
key: "integration-tests-part-3-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 2 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 1-of-3"
key: "integration-tests-qa-part-1-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 0 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 2-of-3"
key: "integration-tests-qa-part-2-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 1 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 3-of-3"
key: "integration-tests-qa-part-3-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 2 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: x-pack unit tests"
key: "x-pack-unit-tests"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
x-pack/ci/unit_tests.sh
retry:
automatic:
- limit: 3
- label: ":lab_coat: x-pack integration"
key: "integration-tests-x-pack"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
x-pack/ci/integration_tests.sh
retry:
automatic:
- limit: 3
- group: "Acceptance Phase"
depends_on: "testing-phase"
key: "acceptance-phase"
steps:
- label: "Docker [{{matrix}}] flavor acceptance"
command:
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh && ci/docker_acceptance_tests.sh {{matrix}}
retry:
automatic:
- limit: 3
matrix:
- "full"
- "oss"
# *** TODO: enable after clarifying if acceptance tests really need vagrant on aarch64
# - label: "Acceptance tests on {{matrix.distribution}}"
# agents:
# provider: aws
# imagePrefix: platform-ingest-logstash-{{matrix.distribution}}-aarch64
# instanceType: "m6g.4xlarge"
# diskSizeGb: 200
# command:
# set -euo pipefail
# source .buildkite/scripts/common/vm-agent.sh && ci/acceptance_tests.sh {{matrix.suite}}
# matrix:
# setup:
# suite:
# - "debian"
# distribution:
# - "ubuntu-2204"


@@ -0,0 +1,11 @@
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-16"
diskSizeGb: 100
diskType: pd-ssd
steps:
- label: "Benchmark Marathon"
command: .buildkite/scripts/benchmark/marathon.sh


@@ -0,0 +1,14 @@
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-16"
diskSizeGb: 100
diskType: pd-ssd
steps:
- label: "Benchmark Snapshot"
retry:
automatic:
- limit: 3
command: .buildkite/scripts/benchmark/main.sh


@@ -1,33 +1,15 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json
steps:
- input: "Build parameters"
if: build.source != "schedule"
fields:
- text: "VERSION_QUALIFIER_OPT"
key: "VERSION_QUALIFIER_OPT"
default: ""
required: false
hint: "Optional version qualifier for built artifacts e.g.: alpha1,beta1"
- select: "DRA DRY-RUN"
key: "DRA_DRY_RUN"
required: false
default: ""
options:
- label: "True"
value: "--dry-run"
- label: "False"
value: ""
hint: "Whether the DRA release manager will actually publish artifacts, or run in dry-run mode."
- wait: ~
if: build.source != "schedule"
- label: ":pipeline: Generate steps"
command: |
set -euo pipefail
echo "--- Building [${WORKFLOW_TYPE}] artifacts"
echo "--- Building [$${WORKFLOW_TYPE}] artifacts"
python3 -m pip install pyyaml
echo "--- Building dynamic pipeline steps"
python3 .buildkite/scripts/dra/generatesteps.py | buildkite-agent pipeline upload
python3 .buildkite/scripts/dra/generatesteps.py > steps.yml
echo "--- Printing dynamic pipeline steps"
cat steps.yml
echo "--- Uploading dynamic pipeline steps"
cat steps.yml | buildkite-agent pipeline upload


@@ -0,0 +1,39 @@
steps:
- label: "Exhaustive tests pipeline"
command: |
#!/usr/bin/env bash
echo "--- Check for docs changes"
set +e
.buildkite/scripts/common/check-files-changed.sh '^docs/.*'
if [[ $$? -eq 0 ]]; then
echo "^^^ +++"
echo "Skipping running pipeline as all changes are related to docs."
exit 0
else
echo "Changes are not exclusively related to docs, continuing."
fi
set -eo pipefail
echo "--- Downloading prerequisites"
python3 -m pip install ruamel.yaml
curl -fsSL --retry-max-time 60 --retry 3 --retry-delay 5 -o /usr/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
chmod a+x /usr/bin/yq
echo "--- Printing generated dynamic steps"
set +e
python3 .buildkite/scripts/exhaustive-tests/generate-steps.py >pipeline_steps.yml
if [[ $$? -ne 0 ]]; then
echo "^^^ +++"
echo "There was a problem rendering the pipeline steps."
cat pipeline_steps.yml
echo "Exiting now."
exit 1
else
set -eo pipefail
cat pipeline_steps.yml | yq .
fi
set -eo pipefail
echo "--- Uploading steps to buildkite"
cat pipeline_steps.yml | buildkite-agent pipeline upload


@@ -0,0 +1,20 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-4"
diskSizeGb: 64
steps:
- group: ":logstash: Health API integration tests"
key: "testing-phase"
steps:
- label: "main branch"
key: "integ-tests-on-main-branch"
command:
- .buildkite/scripts/health-report-tests/main.sh
retry:
automatic:
- limit: 3


@@ -0,0 +1,14 @@
steps:
- label: "JDK Availability check"
key: "jdk-availability-check"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "4"
memory: "6Gi"
ephemeralStorage: "100Gi"
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
export GRADLE_OPTS="-Xmx2g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info"
ci/check_jdk_version_availability.sh


@@ -0,0 +1,96 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json
# Before adding an Operating System to the JDK matrix:
# - Ensure the image is available in https://github.com/elastic/ci-agent-images/blob/main/vm-images/platform-ingest/logstash-multi-jdk.yml
# - If image doesn't exist, review https://docs.elastic.dev/ingest-dev-docs/engineer-productivity/custom-ci-images
env:
DEFAULT_MATRIX_OS: "ubuntu-2204"
DEFAULT_MATRIX_JDK: "adoptiumjdk_21"
steps:
- input: "Test Parameters"
if: build.source != "schedule" && build.source != "trigger_job"
fields:
- select: "Operating System"
key: "matrix-os"
hint: "The operating system variant(s) to run on:"
required: true
multiple: true
default: "${DEFAULT_MATRIX_OS}"
options:
- label: "Ubuntu 24.04"
value: "ubuntu-2404"
- label: "Ubuntu 22.04"
value: "ubuntu-2204"
- label: "Ubuntu 20.04"
value: "ubuntu-2004"
- label: "Debian 12"
value: "debian-12"
- label: "Debian 11"
value: "debian-11"
- label: "RHEL 9"
value: "rhel-9"
- label: "RHEL 8"
value: "rhel-8"
- label: "Oracle Linux 8"
value: "oraclelinux-8"
- label: "Oracle Linux 7"
value: "oraclelinux-7"
- label: "Rocky Linux 8"
value: "rocky-linux-8"
- label: "Rocky Linux 9"
value: "rocky-linux-9"
- label: "Alma Linux 8"
value: "almalinux-8"
- label: "Alma Linux 9"
value: "almalinux-9"
- label: "Amazon Linux (2023)"
value: "amazonlinux-2023"
- label: "OpenSUSE Leap 15"
value: "opensuse-leap-15"
- select: "Java"
key: "matrix-jdk"
hint: "The JDK to test with:"
required: true
multiple: true
default: "${DEFAULT_MATRIX_JDK}"
options:
- label: "Adoptium JDK 21 (Eclipse Temurin)"
value: "adoptiumjdk_21"
- label: "Adoptium JDK 17 (Eclipse Temurin)"
value: "adoptiumjdk_17"
- label: "OpenJDK 21"
value: "openjdk_21"
- label: "Zulu 21"
value: "zulu_21"
- label: "Zulu 17"
value: "zulu_17"
- wait: ~
if: build.source != "schedule" && build.source != "trigger_job"
- command: |
set -euo pipefail
echo "--- Downloading prerequisites"
python3 -m pip install ruamel.yaml
echo "--- Printing generated dynamic steps"
export MATRIX_OSES="$(buildkite-agent meta-data get matrix-os --default=${DEFAULT_MATRIX_OS})"
export MATRIX_JDKS="$(buildkite-agent meta-data get matrix-jdk --default=${DEFAULT_MATRIX_JDK})"
set +eo pipefail
python3 .buildkite/scripts/jdk-matrix-tests/generate-steps.py >pipeline_steps.yml
if [[ $$? -ne 0 ]]; then
echo "^^^ +++"
echo "There was a problem rendering the pipeline steps."
cat pipeline_steps.yml
echo "Exiting now."
exit 1
else
set -eo pipefail
cat pipeline_steps.yml
fi
echo "--- Uploading steps to buildkite"
cat pipeline_steps.yml | buildkite-agent pipeline upload


@@ -0,0 +1,26 @@
{
"jobs": [
{
"enabled": true,
"pipeline_slug": "logstash-pull-request-pipeline",
"allow_org_users": true,
"allowed_repo_permissions": ["admin", "write"],
"allowed_list": ["dependabot[bot]", "mergify[bot]", "github-actions[bot]", "elastic-vault-github-plugin-prod[bot]"],
"set_commit_status": true,
"build_on_commit": true,
"build_on_comment": true,
"trigger_comment_regex": "^(?:(?:buildkite\\W+)?(?:build|test)\\W+(?:this|it))",
"always_trigger_comment_regex": "^(?:(?:buildkite\\W+)?(?:build|test)\\W+(?:this|it))",
"skip_ci_labels": [ ],
"skip_target_branches": [ ],
"skip_ci_on_only_changed": [
"^.github/",
"^docs/",
"^.mergify.yml$",
"^.pre-commit-config.yaml",
"\\.md$"
],
"always_require_ci_on_changed": [ ]
}
]
}


@ -0,0 +1,243 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json
steps:
- label: ":passport_control: License check"
key: "license-check"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "4"
memory: "6Gi"
ephemeralStorage: "100Gi"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
export JRUBY_OPTS="-J-Xmx1g"
export GRADLE_OPTS="-Xmx2g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info"
ci/license_check.sh -m 4G
- label: ":rspec: Ruby unit tests"
key: "ruby-unit-tests"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "4"
memory: "8Gi"
ephemeralStorage: "100Gi"
# Run as a non-root user
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/unit_tests.sh ruby
artifact_paths:
- "coverage/coverage.json"
- label: ":java: Java unit tests"
key: "java-unit-tests"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
retry:
automatic:
- limit: 3
env:
ENABLE_SONARQUBE: true
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/unit_tests.sh java
artifact_paths:
- "**/build/test-results/javaTests/TEST-*.xml"
- "**/jacocoTestReport.xml"
- "**/build/classes/**/*.*"
- label: ":sonarqube: Continuous Code Inspection"
if: |
build.pull_request.id != null ||
build.branch == "main" ||
build.branch =~ /^[0-9]+\.[0-9]+\$/
env:
VAULT_SONAR_TOKEN_PATH: "kv/ci-shared/platform-ingest/elastic/logstash/sonar-analyze-token"
agents:
image: "docker.elastic.co/cloud-ci/sonarqube/buildkite-scanner:latest"
command:
- "buildkite-agent artifact download --step ruby-unit-tests coverage/coverage.json ."
- "buildkite-agent artifact download --step java-unit-tests **/jacocoTestReport.xml ."
- "buildkite-agent artifact download --step java-unit-tests **/build/classes/**/*.* ."
- "/scan-source-code.sh"
depends_on:
- "ruby-unit-tests"
- "java-unit-tests"
retry:
manual:
allowed: true
- label: ":lab_coat: Integration Tests / part 1-of-3"
key: "integration-tests-part-1-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as a non-root user
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 0 3
- label: ":lab_coat: Integration Tests / part 2-of-3"
key: "integration-tests-part-2-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as a non-root user
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 1 3
- label: ":lab_coat: Integration Tests / part 3-of-3"
key: "integration-tests-part-3-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as a non-root user
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 2 3
- label: ":lab_coat: IT Persistent Queues / part 1-of-3"
key: "integration-tests-qa-part-1-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as the non-root (logstash) user. UID is hardcoded in the image.
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 0 3
- label: ":lab_coat: IT Persistent Queues / part 2-of-3"
key: "integration-tests-qa-part-2-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as the non-root (logstash) user. UID is hardcoded in the image.
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 1 3
- label: ":lab_coat: IT Persistent Queues / part 3-of-3"
key: "integration-tests-qa-part-3-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as the non-root (logstash) user. UID is hardcoded in the image.
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 2 3
- label: ":lab_coat: x-pack unit tests"
key: "x-pack-unit-tests"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as the non-root (logstash) user. UID is hardcoded in the image.
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
x-pack/ci/unit_tests.sh
- label: ":lab_coat: x-pack integration"
key: "integration-tests-x-pack"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as the non-root (logstash) user. UID is hardcoded in the image.
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
x-pack/ci/integration_tests.sh
- wait: ~
continue_on_failure: true
- label: "🏁 Annotate JUnit results"
# the plugin requires docker run, hence the use of a VM
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-2"
plugins:
- junit-annotate#v2.4.1:
artifacts: "**/TEST-*.xml"


@ -0,0 +1,22 @@
## Steps to set up a GCP instance to run the benchmark script
- Create an "n2-standard-16" instance with an Ubuntu image
- Install docker
- `sudo snap install docker`
- `sudo usermod -a -G docker $USER`
- Install jq
- Install vault
- `sudo snap install vault`
- `vault login --method github`
- `vault kv get -format json secret/ci/elastic-logstash/benchmark`
- Set up the Elasticsearch index mapping and alias with `setup/*`
- Import the Kibana dashboards with `save-objects/*`
- Run the benchmark script
- Send data to your own Elasticsearch. Customise `VAULT_PATH="secret/ci/elastic-logstash/your/path"`
- Run the script `main.sh`
- or run it in the background: `nohup bash -x main.sh > log.log 2>&1 &`
## Notes
- Benchmarks should only be compared using the same hardware setup.
- Please do not send the test metrics to the benchmark cluster. You can set `VAULT_PATH` to send data and metrics to your own server.
- Run `all.sh` as a calibration run, which gives you a performance baseline across versions.
- [#16586](https://github.com/elastic/logstash/pull/16586) allows legacy monitoring using the configuration `xpack.monitoring.allow_legacy_collection: true`, which is not recognized in version 8. To run benchmarks in version 8, use the script from the corresponding branch (e.g. `8.16`) instead of `main` in Buildkite.
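The `MULTIPLIERS` and `BATCH_SIZES` environment variables expand into a grid of pipeline runs (worker count = CPU × multiplier). A minimal sketch of that expansion, mirroring `parse_args`, `worker`, and `batch` in `core.sh` (the defaults shown are the script's own):

```shell
#!/usr/bin/env bash
# Sketch: expand CPU x MULTIPLIERS x BATCH_SIZES into the benchmark run matrix.
CPU="${CPU:-4}"
IFS=',' read -ra MULTIPLIERS <<< "${MULTIPLIERS:-1,2,4}"
IFS=',' read -ra BATCH_SIZES <<< "${BATCH_SIZES:-500}"
RUNS=()
for m in "${MULTIPLIERS[@]}"; do
  WORKER=$((CPU * m))                   # worker count = CPU * multiplier
  for BATCH_SIZE in "${BATCH_SIZES[@]}"; do
    RUNS+=("w${WORKER}b${BATCH_SIZE}")  # label shape used in the scripts' file names
  done
done
printf '%s\n' "${RUNS[@]}"
```

With the defaults this prints `w4b500`, `w8b500`, and `w16b500`; each label corresponds to one `run_pipeline` invocation per queue type.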


@ -0,0 +1,15 @@
http.enabled: false
filebeat.inputs:
- type: log
symlinks: true
paths:
- "/usr/share/filebeat/flog/*.log"
logging.level: info
output.logstash:
hosts:
- "localhost:5044"
ttl: 10ms
bulk_max_size: 2048
# queue.mem:
# events: 4096
# flush.min_events: 2048


@ -0,0 +1,10 @@
api.http.host: 0.0.0.0
pipeline.workers: ${WORKER}
pipeline.batch.size: ${BATCH_SIZE}
queue.type: ${QTYPE}
xpack.monitoring.allow_legacy_collection: true
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: ${MONITOR_ES_USER}
xpack.monitoring.elasticsearch.password: ${MONITOR_ES_PW}
xpack.monitoring.elasticsearch.hosts: ["${MONITOR_ES_HOST}"]


@ -0,0 +1,44 @@
- pipeline.id: main
config.string: |
input {
beats {
port => 5044
}
}
output {
elasticsearch {
hosts => [ "${BENCHMARK_ES_HOST}" ]
user => "${BENCHMARK_ES_USER}"
password => "${BENCHMARK_ES_PW}"
}
}
- pipeline.id: node_stats
config.string: |
input {
http_poller {
urls => {
NodeStats => {
method => get
url => "http://localhost:9600/_node/stats"
}
}
schedule => { every => "30s"}
codec => "json"
}
}
filter {
mutate {
remove_field => [ "host", "[pipelines][.monitoring-logstash]", "event" ]
add_field => { "[benchmark][label]" => "${QTYPE}_w${WORKER}b${BATCH_SIZE}" }
}
}
output {
elasticsearch {
hosts => [ "${BENCHMARK_ES_HOST}" ]
user => "${BENCHMARK_ES_USER}"
password => "${BENCHMARK_ES_PW}"
data_stream_type => "metrics"
data_stream_dataset => "nodestats"
data_stream_namespace => "logstash"
}
}


@ -0,0 +1 @@
f74f1a28-25e9-494f-ba41-ca9f13d4446d


@ -0,0 +1,315 @@
#!/usr/bin/env bash
set -eo pipefail
SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
CONFIG_PATH="$SCRIPT_PATH/config"
source "$SCRIPT_PATH/util.sh"
usage() {
echo "Usage: $0 [FB_CNT] [QTYPE] [CPU] [MEM]"
echo "Example: $0 4 {persisted|memory|all} 2 2"
exit 1
}
parse_args() {
while [[ "$#" -gt 0 ]]; do
if [ -z "$FB_CNT" ]; then
FB_CNT=$1
elif [ -z "$QTYPE" ]; then
case $1 in
all | persisted | memory)
QTYPE=$1
;;
*)
echo "Error: wrong queue type $1"
usage
;;
esac
elif [ -z "$CPU" ]; then
CPU=$1
elif [ -z "$MEM" ]; then
MEM=$1
else
echo "Error: Too many arguments"
usage
fi
shift
done
# set default values
# number of filebeats
FB_CNT=${FB_CNT:-4}
# all | persisted | memory
QTYPE=${QTYPE:-all}
CPU=${CPU:-4}
MEM=${MEM:-4}
XMX=$((MEM / 2))
IFS=','
# worker multiplier: 1,2,4
MULTIPLIERS="${MULTIPLIERS:-1,2,4}"
read -ra MULTIPLIERS <<< "$MULTIPLIERS"
BATCH_SIZES="${BATCH_SIZES:-500}"
read -ra BATCH_SIZES <<< "$BATCH_SIZES"
# tags to json array
read -ra TAG_ARRAY <<< "$TAGS"
JSON_TAGS=$(printf '"%s",' "${TAG_ARRAY[@]}" | sed 's/,$//')
JSON_TAGS="[$JSON_TAGS]"
IFS=' '
echo "filebeats: $FB_CNT, cpu: $CPU, mem: $MEM, Queue: $QTYPE, worker multiplier: ${MULTIPLIERS[@]}, batch size: ${BATCH_SIZES[@]}"
}
get_secret() {
VAULT_PATH=${VAULT_PATH:-secret/ci/elastic-logstash/benchmark}
VAULT_DATA=$(vault kv get -format json $VAULT_PATH)
BENCHMARK_ES_HOST=$(echo $VAULT_DATA | jq -r '.data.es_host')
BENCHMARK_ES_USER=$(echo $VAULT_DATA | jq -r '.data.es_user')
BENCHMARK_ES_PW=$(echo $VAULT_DATA | jq -r '.data.es_pw')
MONITOR_ES_HOST=$(echo $VAULT_DATA | jq -r '.data.monitor_es_host')
MONITOR_ES_USER=$(echo $VAULT_DATA | jq -r '.data.monitor_es_user')
MONITOR_ES_PW=$(echo $VAULT_DATA | jq -r '.data.monitor_es_pw')
}
pull_images() {
echo "--- Pull docker images"
if [[ -n "$LS_VERSION" ]]; then
# pull the image if it doesn't exist locally
[[ -z $(docker images -q docker.elastic.co/logstash/logstash:$LS_VERSION) ]] && docker pull "docker.elastic.co/logstash/logstash:$LS_VERSION"
else
# pull the latest snapshot logstash image
# select the SNAPSHOT artifact with the highest semantic version number
LS_VERSION=$( curl --retry-all-errors --retry 5 --retry-delay 1 -s "https://storage.googleapis.com/artifacts-api/snapshots/main.json" | jq -r '.version' )
BUILD_ID=$(curl --retry-all-errors --retry 5 --retry-delay 1 -s "https://storage.googleapis.com/artifacts-api/snapshots/main.json" | jq -r '.build_id')
ARCH=$(arch)
IMAGE_URL="https://snapshots.elastic.co/${BUILD_ID}/downloads/logstash/logstash-$LS_VERSION-docker-image-$ARCH.tar.gz"
IMAGE_FILENAME="$LS_VERSION.tar.gz"
echo "Download $LS_VERSION from $IMAGE_URL"
[[ ! -e $IMAGE_FILENAME ]] && curl -fsSL --retry-max-time 60 --retry 3 --retry-delay 5 -o "$IMAGE_FILENAME" "$IMAGE_URL"
[[ -z $(docker images -q docker.elastic.co/logstash/logstash:$LS_VERSION) ]] && docker load -i "$IMAGE_FILENAME"
fi
# pull filebeat image
FB_DEFAULT_VERSION="8.13.4"
FB_VERSION=${FB_VERSION:-$FB_DEFAULT_VERSION}
docker pull "docker.elastic.co/beats/filebeat:$FB_VERSION"
}
generate_logs() {
FLOG_FILE_CNT=${FLOG_FILE_CNT:-4}
SINGLE_SIZE=524288000
TOTAL_SIZE="$((FLOG_FILE_CNT * SINGLE_SIZE))"
FLOG_PATH="$SCRIPT_PATH/flog"
mkdir -p $FLOG_PATH
if [[ ! -e "$FLOG_PATH/log${FLOG_FILE_CNT}.log" ]]; then
echo "--- Generate logs in background. log: ${FLOG_FILE_CNT}, each size: 500mb"
docker run -d --name=flog --rm -v $FLOG_PATH:/go/src/data mingrammer/flog -t log -w -o "/go/src/data/log.log" -b $TOTAL_SIZE -p $SINGLE_SIZE
fi
}
check_logs() {
echo "--- Check log generation"
local cnt=0
until [[ -e "$FLOG_PATH/log${FLOG_FILE_CNT}.log" || $cnt -gt 600 ]]; do
echo "wait 30s" && sleep 30
cnt=$((cnt + 30))
done
ls -lah $FLOG_PATH
}
start_logstash() {
LS_CONFIG_PATH=$SCRIPT_PATH/ls/config
mkdir -p $LS_CONFIG_PATH
cp $CONFIG_PATH/pipelines.yml $LS_CONFIG_PATH/pipelines.yml
cp $CONFIG_PATH/logstash.yml $LS_CONFIG_PATH/logstash.yml
cp $CONFIG_PATH/uuid $LS_CONFIG_PATH/uuid
LS_JAVA_OPTS=${LS_JAVA_OPTS:--Xmx${XMX}g}
docker run -d --name=ls --net=host --cpus=$CPU --memory=${MEM}g -e LS_JAVA_OPTS="$LS_JAVA_OPTS" \
-e QTYPE="$QTYPE" -e WORKER="$WORKER" -e BATCH_SIZE="$BATCH_SIZE" \
-e BENCHMARK_ES_HOST="$BENCHMARK_ES_HOST" -e BENCHMARK_ES_USER="$BENCHMARK_ES_USER" -e BENCHMARK_ES_PW="$BENCHMARK_ES_PW" \
-e MONITOR_ES_HOST="$MONITOR_ES_HOST" -e MONITOR_ES_USER="$MONITOR_ES_USER" -e MONITOR_ES_PW="$MONITOR_ES_PW" \
-v $LS_CONFIG_PATH/logstash.yml:/usr/share/logstash/config/logstash.yml:ro \
-v $LS_CONFIG_PATH/pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro \
-v $LS_CONFIG_PATH/uuid:/usr/share/logstash/data/uuid:ro \
docker.elastic.co/logstash/logstash:$LS_VERSION
}
start_filebeat() {
for ((i = 0; i < FB_CNT; i++)); do
FB_PATH="$SCRIPT_PATH/fb${i}"
mkdir -p $FB_PATH
cp $CONFIG_PATH/filebeat.yml $FB_PATH/filebeat.yml
docker run -d --name=fb$i --net=host --user=root \
-v $FB_PATH/filebeat.yml:/usr/share/filebeat/filebeat.yml \
-v $SCRIPT_PATH/flog:/usr/share/filebeat/flog \
docker.elastic.co/beats/filebeat:$FB_VERSION filebeat -e --strict.perms=false
done
}
capture_stats() {
CURRENT=$(jq -r '.flow.output_throughput.current' $NS_JSON)
local eps_1m=$(jq -r '.flow.output_throughput.last_1_minute' $NS_JSON)
local eps_5m=$(jq -r '.flow.output_throughput.last_5_minutes' $NS_JSON)
local worker_util=$(jq -r '.pipelines.main.flow.worker_utilization.last_1_minute' $NS_JSON)
local worker_concurr=$(jq -r '.pipelines.main.flow.worker_concurrency.last_1_minute' $NS_JSON)
local cpu_percent=$(jq -r '.process.cpu.percent' $NS_JSON)
local heap=$(jq -r '.jvm.mem.heap_used_in_bytes' $NS_JSON)
local non_heap=$(jq -r '.jvm.mem.non_heap_used_in_bytes' $NS_JSON)
local q_event_cnt=$(jq -r '.pipelines.main.queue.events_count' $NS_JSON)
local q_size=$(jq -r '.pipelines.main.queue.queue_size_in_bytes' $NS_JSON)
TOTAL_EVENTS_OUT=$(jq -r '.pipelines.main.events.out' $NS_JSON)
printf "current: %s, 1m: %s, 5m: %s, worker_utilization: %s, worker_concurrency: %s, cpu: %s, heap: %s, non-heap: %s, q_events: %s, q_size: %s, total_events_out: %s\n" \
$CURRENT $eps_1m $eps_5m $worker_util $worker_concurr $cpu_percent $heap $non_heap $q_event_cnt $q_size $TOTAL_EVENTS_OUT
}
aggregate_stats() {
local file_glob="$SCRIPT_PATH/$NS_DIR/${QTYPE:0:1}_w${WORKER}b${BATCH_SIZE}_*.json"
MAX_EPS_1M=$( jqmax '.flow.output_throughput.last_1_minute' "$file_glob" )
MAX_EPS_5M=$( jqmax '.flow.output_throughput.last_5_minutes' "$file_glob" )
MAX_WORKER_UTIL=$( jqmax '.pipelines.main.flow.worker_utilization.last_1_minute' "$file_glob" )
MAX_WORKER_CONCURR=$( jqmax '.pipelines.main.flow.worker_concurrency.last_1_minute' "$file_glob" )
MAX_Q_EVENT_CNT=$( jqmax '.pipelines.main.queue.events_count' "$file_glob" )
MAX_Q_SIZE=$( jqmax '.pipelines.main.queue.queue_size_in_bytes' "$file_glob" )
AVG_CPU_PERCENT=$( jqavg '.process.cpu.percent' "$file_glob" )
AVG_VIRTUAL_MEM=$( jqavg '.process.mem.total_virtual_in_bytes' "$file_glob" )
AVG_HEAP=$( jqavg '.jvm.mem.heap_used_in_bytes' "$file_glob" )
AVG_NON_HEAP=$( jqavg '.jvm.mem.non_heap_used_in_bytes' "$file_glob" )
}
send_summary() {
echo "--- Send summary to Elasticsearch"
# build json
local timestamp
timestamp=$(date -u +"%Y-%m-%dT%H:%M:%S")
SUMMARY="{\"timestamp\": \"$timestamp\", \"version\": \"$LS_VERSION\", \"cpu\": \"$CPU\", \"mem\": \"$MEM\", \"workers\": \"$WORKER\", \"batch_size\": \"$BATCH_SIZE\", \"queue_type\": \"$QTYPE\""
not_empty "$TOTAL_EVENTS_OUT" && SUMMARY="$SUMMARY, \"total_events_out\": \"$TOTAL_EVENTS_OUT\""
not_empty "$MAX_EPS_1M" && SUMMARY="$SUMMARY, \"max_eps_1m\": \"$MAX_EPS_1M\""
not_empty "$MAX_EPS_5M" && SUMMARY="$SUMMARY, \"max_eps_5m\": \"$MAX_EPS_5M\""
not_empty "$MAX_WORKER_UTIL" && SUMMARY="$SUMMARY, \"max_worker_utilization\": \"$MAX_WORKER_UTIL\""
not_empty "$MAX_WORKER_CONCURR" && SUMMARY="$SUMMARY, \"max_worker_concurrency\": \"$MAX_WORKER_CONCURR\""
not_empty "$AVG_CPU_PERCENT" && SUMMARY="$SUMMARY, \"avg_cpu_percentage\": \"$AVG_CPU_PERCENT\""
not_empty "$AVG_HEAP" && SUMMARY="$SUMMARY, \"avg_heap\": \"$AVG_HEAP\""
not_empty "$AVG_NON_HEAP" && SUMMARY="$SUMMARY, \"avg_non_heap\": \"$AVG_NON_HEAP\""
not_empty "$AVG_VIRTUAL_MEM" && SUMMARY="$SUMMARY, \"avg_virtual_memory\": \"$AVG_VIRTUAL_MEM\""
not_empty "$MAX_Q_EVENT_CNT" && SUMMARY="$SUMMARY, \"max_queue_events\": \"$MAX_Q_EVENT_CNT\""
not_empty "$MAX_Q_SIZE" && SUMMARY="$SUMMARY, \"max_queue_bytes_size\": \"$MAX_Q_SIZE\""
not_empty "$TAGS" && SUMMARY="$SUMMARY, \"tags\": $JSON_TAGS"
SUMMARY="$SUMMARY}"
tee summary.json << EOF
{"index": {}}
$SUMMARY
EOF
# send to ES
local resp
local err_status
resp=$(curl -s -X POST -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" "$BENCHMARK_ES_HOST/benchmark_summary/_bulk" -H 'Content-Type: application/json' --data-binary @"summary.json")
echo "$resp"
err_status=$(echo "$resp" | jq -r ".errors")
if [[ "$err_status" == "true" ]]; then
echo "Failed to send summary"
exit 1
fi
}
# $1: snapshot index
node_stats() {
NS_JSON="$SCRIPT_PATH/$NS_DIR/${QTYPE:0:1}_w${WORKER}b${BATCH_SIZE}_$1.json" # m_w8b1000_0.json
# curl inside container because docker on mac cannot resolve localhost to host network interface
docker exec -i ls curl localhost:9600/_node/stats > "$NS_JSON" 2> /dev/null
}
# $1: index
snapshot() {
node_stats $1
capture_stats
}
create_directory() {
NS_DIR="fb${FB_CNT}c${CPU}m${MEM}" # fb4c4m4
mkdir -p "$SCRIPT_PATH/$NS_DIR"
}
queue() {
for QTYPE in "persisted" "memory"; do
worker
done
}
worker() {
for m in "${MULTIPLIERS[@]}"; do
WORKER=$((CPU * m))
batch
done
}
batch() {
for BATCH_SIZE in "${BATCH_SIZES[@]}"; do
run_pipeline
stop_pipeline
done
}
run_pipeline() {
echo "--- Run pipeline. queue type: $QTYPE, worker: $WORKER, batch size: $BATCH_SIZE"
start_logstash
start_filebeat
docker ps
echo "(0) sleep 3m" && sleep 180
snapshot "0"
for i in {1..8}; do
echo "($i) sleep 30s" && sleep 30
snapshot "$i"
# print docker log when ingestion rate is zero
# remove '.' in number and return max val
[[ $(max -g "${CURRENT/./}" "0") -eq 0 ]] &&
docker logs fb0 &&
docker logs ls
done
aggregate_stats
send_summary
}
stop_pipeline() {
echo "--- Stop Pipeline"
for ((i = 0; i < FB_CNT; i++)); do
docker stop fb$i
docker rm fb$i
done
docker stop ls
docker rm ls
curl -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" -X DELETE $BENCHMARK_ES_HOST/_data_stream/logs-generic-default
echo " data stream deleted "
# TODO: clean page caches, reduce memory fragmentation
# https://github.com/elastic/logstash/pull/16191#discussion_r1647050216
}
clean_up() {
# stop log generation if it has not finished yet
[[ -n $(docker ps | grep flog) ]] && docker stop flog || true
# remove image
docker image rm docker.elastic.co/logstash/logstash:$LS_VERSION
}


@ -0,0 +1,59 @@
#!/usr/bin/env bash
set -eo pipefail
# *******************************************************
# This script runs a benchmark through Filebeat (docker) -> Logstash (docker) -> ES Cloud.
# Logstash metrics and benchmark results are sent to the same ES Cloud.
# Highlights:
# - Use flog (docker) to generate ~2GB log
# - Pull the snapshot docker image of the main branch every day
# - Logstash runs two pipelines, main and node_stats
# - The main pipeline handles beats ingestion, sending data to the data stream `logs-generic-default`
# - It runs all combinations of (pq + mq) x worker x batch size
# - Each test runs for ~7 minutes
# - The node_stats pipeline retrieves Logstash /_node/stats every 30s and sends it to the data stream `metrics-nodestats-logstash`
# - The script sends a summary of EPS and resource usage to index `benchmark_summary`
# *******************************************************
SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
source "$SCRIPT_PATH/core.sh"
## usage:
## main.sh FB_CNT QTYPE CPU MEM
## main.sh 4 all 4 4 # default launch 4 filebeats to benchmark pq and mq
## main.sh 4 memory
## main.sh 4 persisted
## main.sh 4
## main.sh
## accept env vars:
## FB_VERSION=8.13.4 # docker tag
## LS_VERSION=8.15.0-SNAPSHOT # docker tag
## LS_JAVA_OPTS=-Xmx2g # by default, Xmx is set to half of memory
## MULTIPLIERS=1,2,4 # determine the number of workers (cpu * multiplier)
## BATCH_SIZES=125,500
## CPU=4 # number of cpu for Logstash container
## MEM=4 # number of GB for Logstash container
## QTYPE=memory # queue type to test {persisted|memory|all}
## FB_CNT=4 # number of filebeats to use in benchmark
## FLOG_FILE_CNT=4 # number of files to generate for ingestion
## VAULT_PATH=secret/path # vault path pointing to the Elasticsearch credentials. The default points to the benchmark cluster.
## TAGS=test,other # comma-separated tags
main() {
parse_args "$@"
get_secret
generate_logs
pull_images
check_logs
create_directory
if [[ $QTYPE == "all" ]]; then
queue
else
worker
fi
clean_up
}
main "$@"


@ -0,0 +1,44 @@
#!/usr/bin/env bash
set -eo pipefail
# *******************************************************
# Run benchmark for versions that have flow metrics
# When the hardware changes, run the marathon task to establish a new baseline.
# Usage:
# nohup bash -x all.sh > log.log 2>&1 &
# Accept env vars:
# STACK_VERSIONS=8.15.0,8.15.1,8.16.0-SNAPSHOT # versions to test, as a comma-separated string
# *******************************************************
SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
source "$SCRIPT_PATH/core.sh"
parse_stack_versions() {
IFS=','
STACK_VERSIONS="${STACK_VERSIONS:-8.6.0,8.7.0,8.8.0,8.9.0,8.10.0,8.11.0,8.12.0,8.13.0,8.14.0,8.15.0}"
read -ra STACK_VERSIONS <<< "$STACK_VERSIONS"
}
main() {
parse_stack_versions
parse_args "$@"
get_secret
generate_logs
check_logs
USER_QTYPE="$QTYPE"
for V in "${STACK_VERSIONS[@]}" ; do
LS_VERSION="$V"
QTYPE="$USER_QTYPE"
pull_images
create_directory
if [[ $QTYPE == "all" ]]; then
queue
else
worker
fi
done
}
main "$@"


@ -0,0 +1,8 @@
## 20241210
Remove scripted field `5m_num` from dashboards
## 20240912
Updated runtime field `release` to return `true` when `version` contains "SNAPSHOT"
## 20240912
Initial dashboards


@ -0,0 +1,14 @@
`benchmark_objects.ndjson` contains the following resources:
- Dashboards
  - daily snapshot
  - released versions
- Data Views
  - benchmark
    - runtime fields

| Field Name   | Type    | Comment                                                                                |
|--------------|---------|----------------------------------------------------------------------------------------|
| versions_num | long    | Converts semantic versioning to a number for graph sorting.                             |
| release      | boolean | `true` for released versions, `false` for snapshot versions. Used for graph filtering.  |

To import the objects into Kibana, navigate to Stack Management > Saved Objects and click Import.
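As an alternative to the UI, the NDJSON file can be loaded through Kibana's saved objects import API. A minimal sketch, where the Kibana URL and credential variables are assumptions and not defined in this repo:

```shell
#!/usr/bin/env bash
# Hypothetical helper: import benchmark_objects.ndjson via Kibana's
# saved-objects import API instead of the Stack Management UI.
# Assumes ES_USER / ES_PW are set to Kibana-capable credentials.
kibana_import() {
  local kibana_url="$1" ndjson="$2"
  curl -fsS -X POST "${kibana_url}/api/saved_objects/_import?overwrite=true" \
    -H "kbn-xsrf: true" \
    -u "${ES_USER}:${ES_PW}" \
    --form "file=@${ndjson}"
}
# Example invocation (assumed endpoint):
# kibana_import "https://my-kibana:5601" "save-objects/benchmark_objects.ndjson"
```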

File diff suppressed because one or more lines are too long


@ -0,0 +1,6 @@
POST /_aliases
{
"actions": [
{ "add": { "index": "benchmark_summary_v2", "alias": "benchmark_summary" } }
]
}


@ -0,0 +1,179 @@
PUT /benchmark_summary_v2/_mapping
{
"properties": {
"avg_cpu_percentage": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"avg_heap": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"avg_non_heap": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"avg_virtual_memory": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"batch_size": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"cpu": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_eps_1m": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_eps_5m": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_queue_bytes_size": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_queue_events": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_worker_concurrency": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_worker_utilization": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"mem": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"queue_type": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"tag": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"timestamp": {
"type": "date"
},
"total_events_out": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"version": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"workers": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"tags" : {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}


@ -0,0 +1,41 @@
#!/usr/bin/env bash
arch() { uname -m | sed -e "s|amd|x86_|" -e "s|arm|aarch|"; }
# return the min value of the arguments
# sort flags: g: float; h: human; d: dictionary; M: month
# usage:
#   min -g 3 2 5 1
#   max -g 1.5 5.2 2.5 1.2 5.7
#   max -g "null" "0"
#   min -h 25M 13G 99K 1098M
min() { printf "%s\n" "${@:2}" | sort "$1" | head -n1 ; }
max() { min "${1}r" "${@:2}" ; }
# return the average of values
# usage:
# jqavg '.process.cpu.percent' m_w8b1000_*.json
# $1: jq field
# $2: file path in glob pattern
jqavg() {
jq -r "$1 | select(. != null)" $2 | jq -s . | jq 'add / length'
}
# return the maximum of values
# usage:
# jqmax '.process.cpu.percent' m_w8b1000_*.json
# $1: jq field
# $2: file path in glob pattern
jqmax() {
jq -r "$1 | select(. != null)" $2 | jq -s . | jq 'max'
}
# return true if $1 is non empty and not "null"
not_empty() {
if [[ -n "$1" && "$1" != "null" ]]; then
return 0
else
return 1
fi
}


@ -0,0 +1,26 @@
#!/usr/bin/env bash
# **********************************************************
# Returns true if current checkout compared to parent commit
# has changes ONLY matching the argument regexp
#
# Used primarily to skip running the exhaustive pipeline
# when only docs changes have happened.
# ********************************************************
if [[ -z "$1" ]]; then
echo "Usage: $0 <regexp>"
exit 1
fi
previous_commit=$(git rev-parse HEAD^)
changed_files=$(git diff --name-only $previous_commit)
if [[ -n "$changed_files" ]] && [[ -z "$(echo "$changed_files" | grep -vE "$1")" ]]; then
echo "All files compared to the previous commit [$previous_commit] match the specified regex: [$1]"
echo "Files changed:"
git --no-pager diff --name-only HEAD^
exit 0
else
exit 1
fi


@ -0,0 +1,17 @@
#!/usr/bin/env bash
# ********************************************************
# This file contains prerequisite bootstrap invocations
# required for Logstash CI when using containerized agents
# ********************************************************
set -euo pipefail
if [[ $(whoami) == "logstash" ]]
then
export PATH="/home/logstash/.rbenv/bin:$PATH"
eval "$(rbenv init -)"
else
export PATH="/usr/local/rbenv/bin:$PATH"
eval "$(rbenv init -)"
fi


@ -0,0 +1,29 @@
#!/usr/bin/env bash
# ********************************************************
# Source this script to get the QUALIFIED_VERSION env var
# or execute it to receive the qualified version on stdout
# ********************************************************
set -euo pipefail
export QUALIFIED_VERSION="$(
# Extract the version number from the version.yml file
# e.g.: 8.6.0
printf '%s' "$(awk -F':' '{ if ("logstash" == $1) { gsub(/^ | $/,"",$2); printf $2; exit } }' versions.yml)"
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases for staging builds only:
# e.g: 8.0.0-alpha1
printf '%s' "${VERSION_QUALIFIER:+-${VERSION_QUALIFIER}}"
# add the SNAPSHOT tag unless WORKFLOW_TYPE=="staging" or RELEASE=="1"
if [[ ! ( "${WORKFLOW_TYPE:-}" == "staging" || "${RELEASE:+$RELEASE}" == "1" ) ]]; then
printf '%s' "-SNAPSHOT"
fi
)"
# if invoked directly, output the QUALIFIED_VERSION to stdout
if [[ "$0" == "${BASH_SOURCE:-${ZSH_SCRIPT:-}}" ]]; then
printf '%s' "${QUALIFIED_VERSION}"
fi


@ -0,0 +1,106 @@
#!/usr/bin/env bash
set -eo pipefail
# *******************************************************
# This script is used by schedule-type pipelines
# to automate triggering other pipelines (e.g. DRA) based
# on ci/branches.json
#
# See:
# https://elasticco.atlassian.net/browse/ENGPRD-318 /
# https://github.com/elastic/ingest-dev/issues/2664
# *******************************************************
ACTIVE_BRANCHES_URL="https://storage.googleapis.com/artifacts-api/snapshots/branches.json"
EXCLUDE_BRANCHES_ARRAY=()
BRANCHES=()
function install_yq() {
if ! [[ -x $(which yq) && $(yq --version) == *mikefarah* ]]; then
echo "--- Downloading prerequisites"
curl -fsSL --retry-max-time 60 --retry 3 --retry-delay 5 -o /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
chmod a+x /usr/local/bin/yq
fi
}
function parse_pipelines() {
IFS="," read -ra PIPELINES <<< "$PIPELINES_TO_TRIGGER"
}
# converts the (optional) $EXCLUDE_BRANCHES env var, containing a comma-separated branch string, to $EXCLUDE_BRANCHES_ARRAY
function exclude_branches_to_array() {
if [[ -n "$EXCLUDE_BRANCHES" ]]; then
IFS="," read -ra EXCLUDE_BRANCHES_ARRAY <<< "$EXCLUDE_BRANCHES"
fi
}
function check_if_branch_in_exclude_array() {
local branch=$1
local _excl_br
local ret_val="false"
for _excl_br in "${EXCLUDE_BRANCHES_ARRAY[@]}"; do
if [[ "$branch" == "$_excl_br" ]]; then
ret_val="true"
break
fi
done
echo $ret_val
}
if [[ -z $PIPELINES_TO_TRIGGER ]]; then
echo "^^^ +++"
echo "Required environment variable [PIPELINES_TO_TRIGGER] is missing."
echo "Exiting now."
exit 1
fi
parse_pipelines
exclude_branches_to_array
set -u
set +e
# pull releaseable branches from $ACTIVE_BRANCHES_URL
readarray -t ELIGIBLE_BRANCHES < <(curl --retry-all-errors --retry 5 --retry-delay 5 -fsSL $ACTIVE_BRANCHES_URL | jq -r '.branches[]')
if [[ $? -ne 0 ]]; then
echo "There was an error downloading or parsing the json output from [$ACTIVE_BRANCHES_URL]. Exiting."
exit 1
fi
set -e
if [[ ${#EXCLUDE_BRANCHES_ARRAY[@]} -eq 0 ]]; then
# no branch exclusions
BRANCHES=("${ELIGIBLE_BRANCHES[@]}")
else
# exclude any branches passed via optional $EXCLUDE_BRANCHES
for _branch in "${ELIGIBLE_BRANCHES[@]}"; do
res=$(check_if_branch_in_exclude_array $_branch)
if [[ "$res" == "false" ]]; then
BRANCHES+=("$_branch")
fi
done
fi
install_yq
echo 'steps:' >pipeline_steps.yaml
for PIPELINE in "${PIPELINES[@]}"; do
for BRANCH in "${BRANCHES[@]}"; do
cat >>pipeline_steps.yaml <<EOF
- trigger: $PIPELINE
label: ":testexecute: Triggering ${PIPELINE} / ${BRANCH}"
build:
branch: "$BRANCH"
message: ":testexecute: Scheduled build for ${BRANCH}"
EOF
done
done
echo "--- Printing generated steps"
yq . pipeline_steps.yaml
echo "--- Uploading steps to buildkite"
cat pipeline_steps.yaml | buildkite-agent pipeline upload
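The heredoc loop above emits one Buildkite trigger step per (pipeline, branch) pair. A minimal Python sketch of the same expansion (the pipeline and branch names here are illustrative, not taken from CI config):

```python
# Sketch: build the same trigger-step structure the bash heredoc loop emits.
# Pipeline/branch names below are illustrative placeholders.
def build_trigger_steps(pipelines, branches):
    steps = []
    for pipeline in pipelines:
        for branch in branches:
            steps.append({
                "trigger": pipeline,
                "label": f":testexecute: Triggering {pipeline} / {branch}",
                "build": {
                    "branch": branch,
                    "message": f":testexecute: Scheduled build for {branch}",
                },
            })
    return {"steps": steps}

pipeline_doc = build_trigger_steps(["example-pipeline"], ["main", "8.19"])
```

Serializing `pipeline_doc` to YAML yields the same shape the script pipes to `buildkite-agent pipeline upload`.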


@ -0,0 +1,16 @@
#!/usr/bin/env bash
# **************************************************************
# This file contains prerequisite bootstrap invocations
# required for Logstash CI when using custom multi-jdk VM images
# It is primarily used by the exhaustive BK pipeline.
# **************************************************************
set -euo pipefail
source .ci/java-versions.properties
export BUILD_JAVA_HOME=/opt/buildkite-agent/.java/$LS_BUILD_JAVA
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$BUILD_JAVA_HOME/bin:$PATH"
eval "$(rbenv init -)"


@ -0,0 +1,12 @@
#!/usr/bin/env bash
# ********************************************************
# This file contains prerequisite bootstrap invocations
# required for Logstash CI when using VM/baremetal agents
# ********************************************************
set -euo pipefail
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:/opt/buildkite-agent/.java/bin:$PATH"
export JAVA_HOME="/opt/buildkite-agent/.java"
eval "$(rbenv init -)"


@ -0,0 +1,13 @@
{
"#comment": "This file lists all custom vm images. We use it to make decisions about randomized CI jobs.",
"linux": {
"ubuntu": ["ubuntu-2404", "ubuntu-2204", "ubuntu-2004"],
"debian": ["debian-12", "debian-11"],
"rhel": ["rhel-9", "rhel-8"],
"oraclelinux": ["oraclelinux-8", "oraclelinux-7"],
"rocky": ["rocky-linux-8"],
"amazonlinux": ["amazonlinux-2023"],
"opensuse": ["opensuse-leap-15"]
},
"windows": ["windows-2025", "windows-2022", "windows-2019", "windows-2016"]
}
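The file's own comment says it drives randomized CI jobs: one image is picked per Linux family. A small sketch of that consumption (the inline JSON mirrors a subset of `vm-images.json`; the selection logic is an assumption based on the generator scripts that read this file):

```python
import json
import random

# Inline subset of vm-images.json for illustration.
VM_IMAGES = json.loads("""{
  "linux": {
    "ubuntu": ["ubuntu-2404", "ubuntu-2204", "ubuntu-2004"],
    "rhel": ["rhel-9", "rhel-8"]
  },
  "windows": ["windows-2022", "windows-2019"]
}""")

def randomized_linux_oses(images):
    # Pick one random image per Linux family.
    return [random.choice(family) for family in images["linux"].values()]

picked = randomized_linux_oses(VM_IMAGES)
```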


@ -7,59 +7,28 @@ echo "####################################################################"
source ./$(dirname "$0")/common.sh
# WORKFLOW_TYPE is a CI externally configured environment variable that could assume "snapshot" or "staging" values
info "Building artifacts for the $WORKFLOW_TYPE workflow ..."
case "$WORKFLOW_TYPE" in
snapshot)
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
rake artifact:docker || error "artifact:docker build failed."
rake artifact:docker_oss || error "artifact:docker_oss build failed."
rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
else
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker || error "artifact:docker build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker_oss || error "artifact:docker_oss build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
STACK_VERSION=${STACK_VERSION}-SNAPSHOT
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
: # no-op
;;
staging)
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
RELEASE=1 rake artifact:docker || error "artifact:docker build failed."
RELEASE=1 rake artifact:docker_oss || error "artifact:docker_oss build failed."
RELEASE=1 rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
RELEASE=1 rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
else
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker || error "artifact:docker build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker_oss || error "artifact:docker_oss build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
export RELEASE=1
;;
*)
error "Workflow (WORKFLOW_TYPE variable) is not set, exiting..."
;;
esac
rake artifact:docker || error "artifact:docker build failed."
rake artifact:docker_oss || error "artifact:docker_oss build failed."
rake artifact:docker_wolfi || error "artifact:docker_wolfi build failed."
rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
STACK_VERSION="$(./$(dirname "$0")/../common/qualified-version.sh)"
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
info "Saving tar.gz for docker images"
save_docker_tarballs "${ARCH}" "${STACK_VERSION}"
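The removed inline logic (and, presumably, the `qualified-version.sh` helper that replaces it) composes the version string from three parts: the base version, an optional prerelease qualifier, and a `-SNAPSHOT` suffix for snapshot workflows. A hedged sketch of that composition, inferred from the deleted branches of the `case` statement:

```python
# Sketch of the version-string composition the removed inline logic performed.
# The exact behavior of qualified-version.sh is an assumption here.
def qualified_version(base, qualifier="", workflow="staging"):
    version = base
    if qualifier:
        version += f"-{qualifier}"   # e.g. 9.0.0-alpha1
    if workflow == "snapshot":
        version += "-SNAPSHOT"       # e.g. 9.0.0-alpha1-SNAPSHOT
    return version

snapshot_version = qualified_version("9.0.0", qualifier="alpha1", workflow="snapshot")
```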
@ -68,11 +37,7 @@ for file in build/logstash-*; do shasum $file;done
info "Uploading DRA artifacts in buildkite's artifact store ..."
# Note the deb, rpm tar.gz AARCH64 files generated has already been loaded by the build_packages.sh
images="logstash logstash-oss"
if [ "$ARCH" != "aarch64" ]; then
# No logstash-ubi8 for AARCH64
images="logstash logstash-oss logstash-ubi8"
fi
images="logstash logstash-oss logstash-wolfi"
for image in ${images}; do
buildkite-agent artifact upload "build/$image-${STACK_VERSION}-docker-image-${ARCH}.tar.gz"
done
@ -80,7 +45,7 @@ done
# Upload 'docker-build-context.tar.gz' files only when build x86_64, otherwise they will be
# overwritten when building aarch64 (or viceversa).
if [ "$ARCH" != "aarch64" ]; then
for image in logstash logstash-oss logstash-ubi8 logstash-ironbank; do
for image in logstash logstash-oss logstash-wolfi logstash-ironbank; do
buildkite-agent artifact upload "build/${image}-${STACK_VERSION}-docker-build-context.tar.gz"
done
fi


@ -7,39 +7,25 @@ echo "####################################################################"
source ./$(dirname "$0")/common.sh
# WORKFLOW_TYPE is a CI externally configured environment variable that could assume "snapshot" or "staging" values
info "Building artifacts for the $WORKFLOW_TYPE workflow ..."
case "$WORKFLOW_TYPE" in
snapshot)
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
else
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
STACK_VERSION=${STACK_VERSION}-SNAPSHOT
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
: # no-op
;;
staging)
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
RELEASE=1 SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
else
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
export RELEASE=1
;;
*)
error "Workflow (WORKFLOW_TYPE variable) is not set, exiting..."
;;
esac
SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
STACK_VERSION="$(./$(dirname "$0")/../common/qualified-version.sh)"
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
info "Generated Artifacts"
for file in build/logstash-*; do shasum $file;done


@ -10,11 +10,7 @@ function error {
function save_docker_tarballs {
local arch="${1:?architecture required}"
local version="${2:?stack-version required}"
local images="logstash logstash-oss"
if [ "${arch}" != "aarch64" ]; then
# No logstash-ubi8 for AARCH64
images="logstash logstash-oss logstash-ubi8"
fi
local images="logstash logstash-oss logstash-wolfi"
for image in ${images}; do
tar_file="${image}-${version}-docker-image-${arch}.tar"
@ -29,19 +25,18 @@ function save_docker_tarballs {
# Since we are using the system jruby, we need to make sure our jvm process
uses at least 1g of memory. If we don't do this we can get OOM issues when
# installing gems. See https://github.com/elastic/logstash/issues/5179
export JRUBY_OPTS="-J-Xmx1g"
export JRUBY_OPTS="-J-Xmx4g"
# Extract the version number from the version.yml file
# e.g.: 8.6.0
# The suffix part like alpha1 etc is managed by the optional VERSION_QUALIFIER_OPT environment variable
# The suffix part like alpha1 etc is managed by the optional VERSION_QUALIFIER environment variable
STACK_VERSION=`cat versions.yml | sed -n 's/^logstash\:[[:space:]]\([[:digit:]]*\.[[:digit:]]*\.[[:digit:]]*\)$/\1/p'`
info "Agent is running on architecture [$(uname -i)]"
export VERSION_QUALIFIER_OPT=$(buildkite-agent meta-data get VERSION_QUALIFIER_OPT --default "")
export DRA_DRY_RUN=$(buildkite-agent meta-data get DRA_DRY_RUN --default "")
export VERSION_QUALIFIER=${VERSION_QUALIFIER:-""}
export DRA_DRY_RUN=${DRA_DRY_RUN:-""}
if [[ ! -z $DRA_DRY_RUN && $BUILDKITE_STEP_KEY == "logstash_publish_dra" ]]; then
info "Release manager will run in dry-run mode [$DRA_DRY_RUN]"
fi
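The `sed` one-liner in `common.sh` extracts the `logstash: X.Y.Z` entry from `versions.yml`. An equivalent Python sketch (the sample text is illustrative):

```python
import re

# Sketch: extract the logstash version from versions.yml content,
# mirroring the sed expression in common.sh.
def stack_version(versions_yml_text):
    match = re.search(r"^logstash:\s*(\d+\.\d+\.\d+)\s*$",
                      versions_yml_text, re.MULTILINE)
    if match is None:
        raise ValueError("no logstash version found in versions.yml")
    return match.group(1)

sample = "logstash: 8.6.0\nlogstash-docs: 8.6.0\n"
version = stack_version(sample)
```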


@ -3,6 +3,8 @@ import sys
import yaml
YAML_HEADER = '# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json\n'
def to_bk_key_friendly_string(key):
"""
Convert and return key to an acceptable format for Buildkite's key: field
@ -28,6 +30,8 @@ def package_x86_step(branch, workflow_type):
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$PATH"
eval "$(rbenv init -)"
.buildkite/scripts/dra/build_packages.sh
artifact_paths:
- "**/*.hprof"
'''
return step
@ -42,6 +46,8 @@ def package_x86_docker_step(branch, workflow_type):
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-16"
diskSizeGb: 200
artifact_paths:
- "**/*.hprof"
command: |
export WORKFLOW_TYPE="{workflow_type}"
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$PATH"
@ -61,6 +67,8 @@ def package_aarch64_docker_step(branch, workflow_type):
imagePrefix: platform-ingest-logstash-ubuntu-2204-aarch64
instanceType: "m6g.4xlarge"
diskSizeGb: 200
artifact_paths:
- "**/*.hprof"
command: |
export WORKFLOW_TYPE="{workflow_type}"
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$PATH"
@ -106,6 +114,7 @@ def build_steps_to_yaml(branch, workflow_type):
if __name__ == "__main__":
try:
workflow_type = os.environ["WORKFLOW_TYPE"]
version_qualifier = os.environ.get("VERSION_QUALIFIER", "")
except ImportError:
print(f"Missing env variable WORKFLOW_TYPE. Use export WORKFLOW_TYPE=<staging|snapshot>\n.Exiting.")
exit(1)
@ -114,18 +123,25 @@ if __name__ == "__main__":
structure = {"steps": []}
# Group defining parallel steps that build and save artifacts
group_key = to_bk_key_friendly_string(f"logstash_dra_{workflow_type}")
if workflow_type.upper() == "SNAPSHOT" and len(version_qualifier)>0:
structure["steps"].append({
"label": f"no-op pipeline because prerelease builds (VERSION_QUALIFIER is set to [{version_qualifier}]) don't support the [{workflow_type}] workflow",
"command": ":",
"skip": "VERSION_QUALIFIER (prerelease builds) not supported with SNAPSHOT DRA",
})
else:
# Group defining parallel steps that build and save artifacts
group_key = to_bk_key_friendly_string(f"logstash_dra_{workflow_type}")
structure["steps"].append({
"group": f":Build Artifacts - {workflow_type.upper()}",
"key": group_key,
"steps": build_steps_to_yaml(branch, workflow_type),
})
structure["steps"].append({
"group": f":Build Artifacts - {workflow_type.upper()}",
"key": group_key,
"steps": build_steps_to_yaml(branch, workflow_type),
})
# Final step: pull artifacts built above and publish them via the release-manager
structure["steps"].extend(
yaml.safe_load(publish_dra_step(branch, workflow_type, depends_on=group_key)),
)
# Final step: pull artifacts built above and publish them via the release-manager
structure["steps"].extend(
yaml.safe_load(publish_dra_step(branch, workflow_type, depends_on=group_key)),
)
print(yaml.dump(structure, Dumper=yaml.Dumper, sort_keys=False))
print(YAML_HEADER + yaml.dump(structure, Dumper=yaml.Dumper, sort_keys=False))


@ -7,7 +7,9 @@ echo "####################################################################"
source ./$(dirname "$0")/common.sh
PLAIN_STACK_VERSION=$STACK_VERSION
# DRA_BRANCH can be used for manually testing packaging with PRs
# e.g. define `DRA_BRANCH="main"` and `RUN_SNAPSHOT="true"` under Options/Environment Variables in the Buildkite UI after clicking new Build
BRANCH="${DRA_BRANCH:="${BUILDKITE_BRANCH:=""}"}"
# This is the branch selector that needs to be passed to the release-manager
# It has to be the name of the branch which originates the artifacts.
@ -15,29 +17,24 @@ RELEASE_VER=`cat versions.yml | sed -n 's/^logstash\:[[:space:]]\([[:digit:]]*\.
if [ -n "$(git ls-remote --heads origin $RELEASE_VER)" ] ; then
RELEASE_BRANCH=$RELEASE_VER
else
RELEASE_BRANCH=main
RELEASE_BRANCH="${BRANCH:="main"}"
fi
echo "RELEASE BRANCH: $RELEASE_BRANCH"
if [ -n "$VERSION_QUALIFIER_OPT" ]; then
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
PLAIN_STACK_VERSION="${PLAIN_STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
VERSION_QUALIFIER="${VERSION_QUALIFIER:=""}"
case "$WORKFLOW_TYPE" in
snapshot)
STACK_VERSION=${STACK_VERSION}-SNAPSHOT
:
;;
staging)
;;
*)
error "Worklflow (WORKFLOW_TYPE variable) is not set, exiting..."
error "Workflow (WORKFLOW_TYPE variable) is not set, exiting..."
;;
esac
info "Uploading artifacts for ${WORKFLOW_TYPE} workflow on branch: ${RELEASE_BRANCH}"
info "Uploading artifacts for ${WORKFLOW_TYPE} workflow on branch: ${RELEASE_BRANCH} for version: ${STACK_VERSION} with version_qualifier: ${VERSION_QUALIFIER}"
if [ "$RELEASE_VER" != "7.17" ]; then
# Version 7.17.x doesn't generate ARM artifacts for Darwin
@ -45,17 +42,12 @@ if [ "$RELEASE_VER" != "7.17" ]; then
:
fi
# Deleting ubi8 for aarch64 for the time being. This image itself is not being built, and it is not expected
# by the release manager.
# See https://github.com/elastic/infra/blob/master/cd/release/release-manager/project-configs/8.5/logstash.gradle
# for more details.
# TODO filter it out when uploading artifacts instead
rm -f build/logstash-ubi8-${STACK_VERSION}-docker-image-aarch64.tar.gz
info "Downloaded ARTIFACTS sha report"
for file in build/logstash-*; do shasum $file;done
mv build/distributions/dependencies-reports/logstash-${STACK_VERSION}.csv build/distributions/dependencies-${STACK_VERSION}.csv
FINAL_VERSION="$(./$(dirname "$0")/../common/qualified-version.sh)"
mv build/distributions/dependencies-reports/logstash-${FINAL_VERSION}.csv build/distributions/dependencies-${FINAL_VERSION}.csv
# set required permissions on artifacts and directory
chmod -R a+r build/*
@ -73,6 +65,22 @@ release_manager_login
# ensure the latest image has been pulled
docker pull docker.elastic.co/infra/release-manager:latest
echo "+++ :clipboard: Listing DRA artifacts for version [$STACK_VERSION], branch [$RELEASE_BRANCH], workflow [$WORKFLOW_TYPE], QUALIFIER [$VERSION_QUALIFIER]"
docker run --rm \
--name release-manager \
-e VAULT_ROLE_ID \
-e VAULT_SECRET_ID \
--mount type=bind,readonly=false,src="$PWD",target=/artifacts \
docker.elastic.co/infra/release-manager:latest \
cli list \
--project logstash \
--branch "${RELEASE_BRANCH}" \
--commit "$(git rev-parse HEAD)" \
--workflow "${WORKFLOW_TYPE}" \
--version "${STACK_VERSION}" \
--artifact-set main \
--qualifier "${VERSION_QUALIFIER}"
info "Running the release manager ..."
# collect the artifacts for use with the unified build
@ -88,9 +96,19 @@ docker run --rm \
--branch ${RELEASE_BRANCH} \
--commit "$(git rev-parse HEAD)" \
--workflow "${WORKFLOW_TYPE}" \
--version "${PLAIN_STACK_VERSION}" \
--version "${STACK_VERSION}" \
--artifact-set main \
${DRA_DRY_RUN}
--qualifier "${VERSION_QUALIFIER}" \
${DRA_DRY_RUN} | tee rm-output.txt
# extract the summary URL from a release manager output line like:
# Report summary-8.22.0.html can be found at https://artifacts-staging.elastic.co/logstash/8.22.0-ABCDEFGH/summary-8.22.0.html
SUMMARY_URL=$(grep -E '^Report summary-.* can be found at ' rm-output.txt | grep -oP 'https://\S+' | awk '{print $1}')
rm rm-output.txt
# and make it easily clickable as a Buildkite annotation
printf "**Summary link:** [${SUMMARY_URL}](${SUMMARY_URL})\n" | buildkite-agent annotate --style=success
info "Teardown logins"
$(dirname "$0")/docker-env-teardown.sh
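The `grep`/`awk` pipeline above pulls the summary URL from a release-manager output line. The same extraction as a Python sketch (the sample line is the one quoted in the script's comment):

```python
import re

# Sketch: extract the report summary URL from release-manager output,
# mirroring the grep/awk pipeline in the script.
def summary_url(rm_output):
    for line in rm_output.splitlines():
        match = re.match(r"Report summary-\S+ can be found at (https://\S+)", line)
        if match:
            return match.group(1)
    return None

sample = ("Report summary-8.22.0.html can be found at "
          "https://artifacts-staging.elastic.co/logstash/8.22.0-ABCDEFGH/summary-8.22.0.html")
url = summary_url(sample)
```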


@ -0,0 +1,219 @@
import json
import os
import random
import sys
import typing
from ruamel.yaml import YAML
from ruamel.yaml.scalarstring import LiteralScalarString
VM_IMAGES_FILE = ".buildkite/scripts/common/vm-images.json"
VM_IMAGE_PREFIX = "platform-ingest-logstash-multi-jdk-"
ACCEPTANCE_LINUX_OSES = ["ubuntu-2404", "ubuntu-2204", "ubuntu-2004", "debian-11", "rhel-8", "oraclelinux-7", "rocky-linux-8", "opensuse-leap-15", "amazonlinux-2023"]
CUR_PATH = os.path.dirname(os.path.abspath(__file__))
def slugify_bk_key(key: str) -> str:
"""
Convert and return key to an acceptable format for Buildkite's key: field
Only alphanumerics, dashes and underscores are allowed.
"""
mapping_table = str.maketrans({'.': '_', ' ': '_', '/': '_'})
return key.translate(mapping_table)
def testing_phase_steps() -> typing.Dict[str, typing.List[typing.Any]]:
with open(os.path.join(CUR_PATH, "..", "..", "pull_request_pipeline.yml")) as fp:
return YAML().load(fp)
def compat_linux_step(imagesuffix: str) -> dict[str, typing.Any]:
linux_command = LiteralScalarString("""#!/usr/bin/env bash
set -eo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/unit_tests.sh""")
return compat_step(imagesuffix, command=linux_command)
def compat_windows_step(imagesuffix: str) -> dict[str, typing.Any]:
windows_command = LiteralScalarString(r'''.\\ci\\unit_tests.ps1''')
return compat_step(imagesuffix, command=windows_command)
def compat_step(imagesuffix: str, command: LiteralScalarString) -> dict[str, typing.Any]:
step = {
"label": imagesuffix,
"key": slugify_bk_key(f"compat-linux-{imagesuffix}"),
"command": command,
"agents": {},
"retry": {"automatic": [{"limit": 3}]},
}
if "amazon" in imagesuffix.lower():
step["agents"] = {
"provider": "aws",
"imagePrefix": f"{VM_IMAGE_PREFIX}{imagesuffix}",
"instanceType": "m5.2xlarge",
"diskSizeGb": 200,
}
else:
step["agents"] = {
"provider": "gcp",
"imageProject": "elastic-images-prod",
"image": f"family/{VM_IMAGE_PREFIX}{imagesuffix}",
"machineType": "n2-standard-4",
"diskSizeGb": 200,
"diskType": "pd-ssd",
}
return step
def randomized_linux_oses() -> typing.List[str]:
with open(VM_IMAGES_FILE, "r") as fp:
all_oses = json.load(fp)
randomized_oses = []
for _, family_oses in all_oses["linux"].items():
randomized_oses.append(random.choice(family_oses))
return randomized_oses
def randomized_windows_os() -> str:
with open(VM_IMAGES_FILE, "r") as fp:
all_oses = json.load(fp)
return random.choice(all_oses["windows"])
def aws_agent(vm_name: str, instance_type: str, image_prefix: str = "platform-ingest-logstash-multi-jdk", disk_size_gb: int = 200) -> dict[str, typing.Any]:
return {
"provider": "aws",
"imagePrefix": f"{image_prefix}-{vm_name}",
"instanceType": instance_type,
"diskSizeGb": disk_size_gb,
}
def gcp_agent(vm_name: str, instance_type: str = "n2-standard-4", image_prefix: str = "family/platform-ingest-logstash-multi-jdk", disk_size_gb: int = 200) -> dict[str, typing.Any]:
return {
"provider": "gcp",
"imageProject": "elastic-images-prod",
"image": f"{image_prefix}-{vm_name}",
"machineType": instance_type,
"diskSizeGb": disk_size_gb,
"diskType": "pd-ssd",
}
def acceptance_linux_vms() -> typing.List[str]:
acceptance_linux_vms = os.getenv("ACCEPTANCE_LINUX_OSES")
if acceptance_linux_vms:
acceptance_linux_vms = acceptance_linux_vms.split(",")
else:
acceptance_linux_vms = ACCEPTANCE_LINUX_OSES
return acceptance_linux_vms
def acceptance_linux_steps() -> list[typing.Any]:
steps = []
build_artifacts_step = {
"label": "Build artifacts",
"key": "acceptance-build-artifacts",
# use the same agent as the one we use for building DRA artifacts
"agents": gcp_agent("ubuntu-2204", instance_type="n2-standard-16", image_prefix="family/platform-ingest-logstash"),
"command": LiteralScalarString("""#!/usr/bin/env bash
set -eo pipefail
source .buildkite/scripts/common/vm-agent.sh
echo "--- Building all artifacts"
./gradlew clean bootstrap
rake artifact:deb artifact:rpm
"""),
"artifact_paths": [
"build/*rpm",
"build/*deb",
"build/*tar.gz",
],
}
steps.append(build_artifacts_step)
for vm in acceptance_linux_vms():
step = {
"label": vm,
"key": slugify_bk_key(vm),
"agents": aws_agent(vm,instance_type="m5.4xlarge") if "amazonlinux" in vm else gcp_agent(vm),
"depends_on": "acceptance-build-artifacts",
"retry": {"automatic": [{"limit": 3}]},
"command": LiteralScalarString("""#!/usr/bin/env bash
set -eo pipefail
source .buildkite/scripts/common/vm-agent-multi-jdk.sh
source /etc/os-release
ci/acceptance_tests.sh"""),
}
steps.append(step)
return steps
def acceptance_docker_steps()-> list[typing.Any]:
steps = []
for flavor in ["full", "oss", "ubi", "wolfi"]:
steps.append({
"label": f":docker: {flavor} flavor acceptance",
"agents": gcp_agent(vm_name="ubuntu-2204", image_prefix="family/platform-ingest-logstash"),
"command": LiteralScalarString(f"""#!/usr/bin/env bash
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/docker_acceptance_tests.sh {flavor}"""),
"retry": {"automatic": [{"limit": 3}]},
})
return steps
if __name__ == "__main__":
LINUX_OS_ENV_VAR_OVERRIDE = os.getenv("LINUX_OS")
WINDOWS_OS_ENV_VAR_OVERRIDE = os.getenv("WINDOWS_OS")
compat_linux_steps = []
linux_test_oses = [LINUX_OS_ENV_VAR_OVERRIDE] if LINUX_OS_ENV_VAR_OVERRIDE else randomized_linux_oses()
for linux_os in linux_test_oses:
compat_linux_steps.append(compat_linux_step(linux_os))
windows_test_os = WINDOWS_OS_ENV_VAR_OVERRIDE or randomized_windows_os()
structure = {"steps": []}
structure["steps"].append({
"group": "Testing Phase",
"key": "testing-phase",
**testing_phase_steps(),
})
structure["steps"].append({
"group": "Compatibility / Linux",
"key": "compatibility-linux",
"depends_on": "testing-phase",
"steps": compat_linux_steps,
})
structure["steps"].append({
"group": "Compatibility / Windows",
"key": "compatibility-windows",
"depends_on": "testing-phase",
"steps": [compat_windows_step(imagesuffix=windows_test_os)],
})
structure["steps"].append({
"group": "Acceptance / Packaging",
"key": "acceptance-packaging",
"depends_on": ["testing-phase"],
"steps": acceptance_linux_steps(),
})
structure["steps"].append({
"group": "Acceptance / Docker",
"key": "acceptance-docker",
"depends_on": ["testing-phase"],
"steps": acceptance_docker_steps(),
})
print('# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json')
YAML().dump(structure, sys.stdout)


@ -0,0 +1,18 @@
## Description
This package for integration tests of the Health Report API.
Export `LS_BRANCH` to run on a specific branch. By default, it uses the main branch.
## How to run the Health Report Integration test?
### Prerequisites
Make sure you have Python installed. Install the integration test dependencies with the following command:
```shell
python3 -mpip install -r .buildkite/scripts/health-report-tests/requirements.txt
```
### Run the integration tests
```shell
python3 .buildkite/scripts/health-report-tests/main.py
```
### Troubleshooting
- If you see the `WARNING: pip is configured with locations that require TLS/SSL,...` message, make sure you have Python >=3.12.4 installed.


@ -0,0 +1,111 @@
"""
Health Report Integration test bootstrapper with Python script
- A script to resolve Logstash version if not provided
- Download LS docker image and spin up
- When tests finished, teardown the Logstash
"""
import os
import subprocess
import time
import util
import yaml
class Bootstrap:
ELASTIC_STACK_RELEASED_VERSION_URL = "https://storage.googleapis.com/artifacts-api/releases/current/"
def __init__(self) -> None:
f"""
A constructor of the {Bootstrap}.
Returns:
Resolves Logstash branch considering provided LS_BRANCH
Checks out git branch
"""
logstash_branch = os.environ.get("LS_BRANCH")
if logstash_branch is None:
# version is not specified, use the main branch, no need to git checkout
print(f"LS_BRANCH is not specified, using main branch.")
else:
# LS_BRANCH accepts major latest as a major.x or specific branch as X.Y
if logstash_branch.find(".x") == -1:
print(f"Using specified branch: {logstash_branch}")
util.git_check_out_branch(logstash_branch)
else:
major_version = logstash_branch.split(".")[0]
if major_version and major_version.isnumeric():
resolved_version = self.__resolve_latest_stack_version_for(major_version)
minor_version = resolved_version.split(".")[1]
branch = major_version + "." + minor_version
print(f"Using resolved branch: {branch}")
util.git_check_out_branch(branch)
else:
raise ValueError(f"Invalid value set to LS_BRANCH. Please set it properly (ex: 8.x or 9.0) and "
f"rerun")
def __resolve_latest_stack_version_for(self, major_version: str) -> str:
resp = util.call_url_with_retry(self.ELASTIC_STACK_RELEASED_VERSION_URL + major_version)
release_version = resp.text.strip()
print(f"Resolved latest version for {major_version} is {release_version}.")
if release_version == "":
raise ValueError(f"Cannot resolve latest version for {major_version} major")
return release_version
def install_plugin(self, plugin_path: str) -> None:
util.run_or_raise_error(
["bin/logstash-plugin", "install", plugin_path],
f"Failed to install {plugin_path}")
def build_logstash(self):
print(f"Building Logstash...")
util.run_or_raise_error(
["./gradlew", "clean", "bootstrap", "assemble", "installDefaultGems"],
"Failed to build Logstash")
print(f"Logstash built successfully.")
def apply_config(self, config: dict) -> None:
with open(os.getcwd() + "/.buildkite/scripts/health-report-tests/config/pipelines.yml", 'w') as pipelines_file:
yaml.dump(config, pipelines_file)
def run_logstash(self, full_start_required: bool) -> subprocess.Popen:
# --config.reload.automatic is to make instance active
# it is helpful when testing crash pipeline cases
config_path = os.getcwd() + "/.buildkite/scripts/health-report-tests/config"
process = subprocess.Popen(["bin/logstash", "--config.reload.automatic", "--path.settings", config_path,
"-w 1"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, shell=False)
if process.poll() is not None:
print(f"Logstash failed to run, check the config and logs, then rerun.")
return None
# Read stdout and stderr in real-time
logs = []
for stdout_line in iter(process.stdout.readline, ""):
logs.append(stdout_line.strip())
# we don't wait for Logstash to fully start, as we also test slow pipeline start scenarios
if full_start_required is False and "Starting pipeline" in stdout_line:
break
if full_start_required is True and "Pipeline started" in stdout_line:
break
if "Logstash shut down" in stdout_line or "Logstash stopped" in stdout_line:
print(f"Logstash couldn't spin up.")
print(logs)
return None
print(f"Logstash is running with PID: {process.pid}.")
return process
def stop_logstash(self, process: subprocess.Popen):
start_time = time.time() # in seconds
process.terminate()
for stdout_line in iter(process.stdout.readline, ""):
# print(f"STDOUT: {stdout_line.strip()}")
if "Logstash shut down" in stdout_line or "Logstash stopped" in stdout_line:
print(f"Logstash stopped.")
return None
# the shutdown watcher keeps running, so bound the wait by elapsed time
if time.time() - start_time > 60:
print(f"Logstash didn't stop in 1 min, sending SIGKILL.")
process.kill()
if time.time() - start_time > 70:
print(f"Logstash still didn't stop after 70s, exiting.")
return None


@ -0,0 +1 @@
# Intentionally left blank


@ -0,0 +1,70 @@
import yaml
from typing import Any, List, Dict
class ConfigValidator:
REQUIRED_KEYS = {
"root": ["name", "config", "conditions", "expectation"],
"config": ["pipeline.id", "config.string"],
"conditions": ["full_start_required", "wait_seconds"],
"expectation": ["status", "symptom", "indicators"],
"indicators": ["pipelines"],
"pipelines": ["status", "symptom", "indicators"],
"DYNAMIC": ["status", "symptom", "diagnosis", "impacts", "details"], # pipeline-id is a DYNAMIC
"details": ["status"],
"status": ["state"]
}
def __init__(self):
self.yaml_content = None
def __has_valid_keys(self, data: any, key_path: str, repeated: bool) -> bool:
# we reached the value
if isinstance(data, str) or isinstance(data, bool) or isinstance(data, int) or isinstance(data, float):
return True
# there are two indicators sections; for the repeated ones we go deeper
first_key = next(iter(data))
data = data[first_key] if repeated and key_path == "indicators" else data
if isinstance(data, dict):
# pipeline-id is a DYNAMIC
required = self.REQUIRED_KEYS.get("DYNAMIC" if repeated and key_path == "indicators" else key_path, [])
repeated = not repeated if key_path == "indicators" else repeated
for key in required:
if key not in data:
print(f"Missing key '{key}' in '{key_path}'")
return False
else:
dic_keys_result = self.__has_valid_keys(data[key], key, repeated)
if dic_keys_result is False:
return False
elif isinstance(data, list):
for item in data:
list_keys_result = self.__has_valid_keys(item, key_path, repeated)
if list_keys_result is False:
return False
return True
def load(self, file_path: str) -> None:
"""Load the YAML file content into self.yaml_content."""
self.yaml_content: [Dict[str, Any]] = None
try:
with open(file_path, 'r') as file:
self.yaml_content = yaml.safe_load(file)
except yaml.YAMLError as exc:
print(f"Error in YAML file: {exc}")
self.yaml_content = None
def is_valid(self) -> bool:
"""Validate the entire YAML structure."""
if self.yaml_content is None:
print(f"YAML content is empty.")
return False
if not isinstance(self.yaml_content, dict):
print(f"YAML structure is not as expected; it should start with a dict.")
return False
result = self.__has_valid_keys(self.yaml_content, "root", False)
return result


@ -0,0 +1,16 @@
"""
A class to provide information about Logstash node stats.
"""
import util
class LogstashHealthReport:
LOGSTASH_HEALTH_REPORT_URL = "http://localhost:9600/_health_report"
def __init__(self):
pass
def get(self):
response = util.call_url_with_retry(self.LOGSTASH_HEALTH_REPORT_URL)
return response.json()


@ -0,0 +1,89 @@
"""
Main entry point of the LS health report API integration test suites
"""
import glob
import os
import time
import traceback
import yaml
from bootstrap import Bootstrap
from scenario_executor import ScenarioExecutor
from config_validator import ConfigValidator
class BootstrapContextManager:
def __init__(self):
pass
def __enter__(self):
print(f"Starting Logstash Health Report Integration test.")
self.bootstrap = Bootstrap()
self.bootstrap.build_logstash()
plugin_path = os.getcwd() + "/qa/support/logstash-integration-failure_injector/logstash-integration" \
"-failure_injector-*.gem"
matching_files = glob.glob(plugin_path)
if len(matching_files) == 0:
raise ValueError(f"Could not find logstash-integration-failure_injector plugin.")
self.bootstrap.install_plugin(matching_files[0])
print(f"logstash-integration-failure_injector successfully installed.")
return self.bootstrap
def __exit__(self, exc_type, exc_value, exc_traceback):
if exc_type is not None:
print(traceback.format_exception(exc_type, exc_value, exc_traceback))
def main():
with BootstrapContextManager() as bootstrap:
scenario_executor = ScenarioExecutor()
config_validator = ConfigValidator()
working_dir = os.getcwd()
scenario_files_path = working_dir + "/.buildkite/scripts/health-report-tests/tests/*.yaml"
scenario_files = glob.glob(scenario_files_path)
for scenario_file in scenario_files:
print(f"Validating {scenario_file} scenario file.")
config_validator.load(scenario_file)
if config_validator.is_valid() is False:
print(f"{scenario_file} scenario file is not valid.")
return
else:
print(f"Validation succeeded.")
has_failed_scenario = False
for scenario_file in scenario_files:
with open(scenario_file, 'r') as file:
scenario_content = yaml.safe_load(file)
print(f"Testing `{scenario_content.get('name')}` scenario.")
scenario_name = scenario_content['name']
is_full_start_required = scenario_content.get('conditions').get('full_start_required')
wait_seconds = scenario_content.get('conditions').get('wait_seconds')
config = scenario_content['config']
if config is not None:
bootstrap.apply_config(config)
expectations = scenario_content.get("expectation")
process = bootstrap.run_logstash(is_full_start_required)
if process is not None:
if wait_seconds is not None:
print(f"Test requires waiting for `{wait_seconds}` seconds.")
time.sleep(wait_seconds) # wait for Logstash to start
try:
scenario_executor.on(scenario_name, expectations)
except Exception as e:
print(e)
has_failed_scenario = True
bootstrap.stop_logstash(process)
if has_failed_scenario:
# intentionally fail due to visibility
raise Exception("Some scenarios failed; check the log for details.")
if __name__ == "__main__":
main()


@ -0,0 +1,16 @@
#!/bin/bash
set -euo pipefail
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:/opt/buildkite-agent/.java/bin:$PATH"
export JAVA_HOME="/opt/buildkite-agent/.java"
export PYENV_VERSION="3.11.5"
eval "$(rbenv init -)"
eval "$(pyenv init -)"
echo "--- Installing dependencies"
python3 -m pip install -r .buildkite/scripts/health-report-tests/requirements.txt
echo "--- Running tests"
python3 .buildkite/scripts/health-report-tests/main.py


@ -0,0 +1,2 @@
requests==2.32.3
pyyaml==6.0.2


@ -0,0 +1,67 @@
"""
A class to execute the given scenario for Logstash Health Report integration test
"""
import time
from logstash_health_report import LogstashHealthReport
class ScenarioExecutor:
logstash_health_report_api = LogstashHealthReport()
def __init__(self):
pass
def __has_intersection(self, expects, results):
# TODO: this logic is aligned with the current Health API response shape
# there is no guarantee this method behaves correctly when given multiple expects and results
# every entry in expects is expected to appear in results
for expect in expects:
for result in results:
if result.get('help_url') and "health-report-pipeline-" not in result.get('help_url'):
return False
if not all(key in result and result[key] == value for key, value in expect.items()):
return False
return True
def __get_difference(self, differences: list, expectations: dict, reports: dict) -> dict:
for key in expectations.keys():
if type(expectations.get(key)) != type(reports.get(key)):
differences.append(f"Scenario expectation and Health API report structure differs for {key}.")
return differences
if isinstance(expectations.get(key), str):
if expectations.get(key) != reports.get(key):
differences.append({key: {"expected": expectations.get(key), "got": reports.get(key)}})
continue
elif isinstance(expectations.get(key), dict):
self.__get_difference(differences, expectations.get(key), reports.get(key))
elif isinstance(expectations.get(key), list):
if not self.__has_intersection(expectations.get(key), reports.get(key)):
differences.append({key: {"expected": expectations.get(key), "got": reports.get(key)}})
return differences
def __is_expected(self, expectations: dict) -> bool:
reports = self.logstash_health_report_api.get()
differences = self.__get_difference([], expectations, reports)
if differences:
print("Differences found in 'expectation' section between YAML content and stats:")
for diff in differences:
print(f"Difference: {diff}")
return False
else:
return True
def on(self, scenario_name: str, expectations: dict) -> None:
# retry the expectation check several times
attempts = 5
while self.__is_expected(expectations) is False:
attempts = attempts - 1
if attempts == 0:
break
time.sleep(1)
if attempts == 0:
raise Exception(f"{scenario_name} failed.")
else:
print(f"Scenario `{scenario_name}` expectation matches the health report stats.")


@ -0,0 +1,32 @@
name: "Abnormally terminated pipeline"
config:
- pipeline.id: abnormally-terminated-pp
config.string: |
input { heartbeat { interval => 1 } }
filter { failure_injector { crash_at => filter } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 5
expectation:
status: "red"
symptom: "1 indicator is unhealthy (`pipelines`)"
indicators:
pipelines:
status: "red"
symptom: "1 indicator is unhealthy (`abnormally-terminated-pp`)"
indicators:
abnormally-terminated-pp:
status: "red"
symptom: "The pipeline is unhealthy; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is not running, likely because it has encountered an error"
action: "view logs to determine the cause of abnormal pipeline shutdown"
impacts:
- description: "the pipeline is not currently processing"
impact_areas: ["pipeline_execution"]
details:
status:
state: "TERMINATED"


@ -0,0 +1,38 @@
name: "Backpressured in 1min pipeline"
config:
- pipeline.id: backpressure-1m-pp
config.string: |
input { heartbeat { interval => 0.1 } }
filter { failure_injector { degrade_at => [filter] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 70 # give more seconds to make sure time is over the threshold, 1m in this case
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
indicators:
pipelines:
status: "yellow"
symptom: "1 indicator is concerning (`backpressure-1m-pp`)"
indicators:
backpressure-1m-pp:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- id: "logstash:health:pipeline:flow:worker_utilization:diagnosis:1m-blocked"
cause: "pipeline workers have been completely blocked for at least one minute"
action: "address bottleneck or add resources"
impacts:
- id: "logstash:health:pipeline:flow:impact:blocked_processing"
severity: 2
description: "the pipeline is blocked"
impact_areas: ["pipeline_execution"]
details:
status:
state: "RUNNING"
flow:
worker_utilization:
last_1_minute: 100.0


@ -0,0 +1,39 @@
name: "Backpressured in 5min pipeline"
config:
- pipeline.id: backpressure-5m-pp
config.string: |
input { heartbeat { interval => 0.1 } }
filter { failure_injector { degrade_at => [filter] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 310 # give more seconds to make sure time is over the threshold, 5m in this case
expectation:
status: "red"
symptom: "1 indicator is unhealthy (`pipelines`)"
indicators:
pipelines:
status: "red"
symptom: "1 indicator is unhealthy (`backpressure-5m-pp`)"
indicators:
backpressure-5m-pp:
status: "red"
symptom: "The pipeline is unhealthy; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- id: "logstash:health:pipeline:flow:worker_utilization:diagnosis:5m-blocked"
cause: "pipeline workers have been completely blocked for at least five minutes"
action: "address bottleneck or add resources"
impacts:
- id: "logstash:health:pipeline:flow:impact:blocked_processing"
severity: 1
description: "the pipeline is blocked"
impact_areas: ["pipeline_execution"]
details:
status:
state: "RUNNING"
flow:
worker_utilization:
last_1_minute: 100.0
last_5_minutes: 100.0


@ -0,0 +1,67 @@
name: "Multi pipeline"
config:
- pipeline.id: slow-start-pp-multipipeline
config.string: |
input { heartbeat {} }
filter { failure_injector { degrade_at => [register] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
- pipeline.id: normally-terminated-pp-multipipeline
config.string: |
input { generator { count => 1 } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
- pipeline.id: abnormally-terminated-pp-multipipeline
config.string: |
input { heartbeat { interval => 1 } }
filter { failure_injector { crash_at => filter } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: false
wait_seconds: 10
expectation:
status: "red"
symptom: "1 indicator is unhealthy (`pipelines`)"
indicators:
pipelines:
status: "red"
symptom: "1 indicator is unhealthy (`abnormally-terminated-pp-multipipeline`) and 2 indicators are concerning (`slow-start-pp-multipipeline`, `normally-terminated-pp-multipipeline`)"
indicators:
slow-start-pp-multipipeline:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is loading"
action: "if pipeline does not come up quickly, you may need to check the logs to see if it is stalled"
impacts:
- impact_areas: ["pipeline_execution"]
details:
status:
state: "LOADING"
normally-terminated-pp-multipipeline:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline has finished running because its inputs have been closed and events have been processed"
action: "if you expect this pipeline to run indefinitely, you will need to configure its inputs to continue receiving or fetching events"
impacts:
- impact_areas: [ "pipeline_execution" ]
details:
status:
state: "FINISHED"
abnormally-terminated-pp-multipipeline:
status: "red"
symptom: "The pipeline is unhealthy; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is not running, likely because it has encountered an error"
action: "view logs to determine the cause of abnormal pipeline shutdown"
impacts:
- description: "the pipeline is not currently processing"
impact_areas: [ "pipeline_execution" ]
details:
status:
state: "TERMINATED"


@ -0,0 +1,30 @@
name: "Successfully terminated pipeline"
config:
- pipeline.id: normally-terminated-pp
config.string: |
input { generator { count => 1 } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 5
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
indicators:
pipelines:
status: "yellow"
symptom: "1 indicator is concerning (`normally-terminated-pp`)"
indicators:
normally-terminated-pp:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline has finished running because its inputs have been closed and events have been processed"
action: "if you expect this pipeline to run indefinitely, you will need to configure its inputs to continue receiving or fetching events"
impacts:
- impact_areas: ["pipeline_execution"]
details:
status:
state: "FINISHED"


@ -0,0 +1,31 @@
name: "Slow start pipeline"
config:
- pipeline.id: slow-start-pp
config.string: |
input { heartbeat {} }
filter { failure_injector { degrade_at => [register] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: false
wait_seconds: 0
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
indicators:
pipelines:
status: "yellow"
symptom: "1 indicator is concerning (`slow-start-pp`)"
indicators:
slow-start-pp:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is loading"
action: "if pipeline does not come up quickly, you may need to check the logs to see if it is stalled"
impacts:
- impact_areas: ["pipeline_execution"]
details:
status:
state: "LOADING"


@ -0,0 +1,36 @@
import os
import requests
import subprocess
from requests.adapters import HTTPAdapter, Retry
def call_url_with_retry(url: str, max_retries: int = 5, delay: int = 1) -> requests.Response:
"""
Calls the given url with a maximum of max_retries attempts and delay-based backoff between retries.
"""
scheme = "https://" if url.startswith("https://") else "http://"
session = requests.Session()
# retry on the most common transient failures: request timeout (408), bad gateway (502), etc.
retries = Retry(total=max_retries, backoff_factor=delay, status_forcelist=[408, 502, 503, 504])
session.mount(scheme, HTTPAdapter(max_retries=retries))
return session.get(url)
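The retry wiring above can be checked without touching the network by inspecting the mounted adapter; a small sketch (the URL below is only an illustrative mount key, not a live endpoint):

```python
import requests
from requests.adapters import HTTPAdapter, Retry

def make_retrying_session(max_retries: int = 5, delay: int = 1) -> requests.Session:
    # retry only on transient status codes, with backoff between attempts
    retries = Retry(total=max_retries, backoff_factor=delay,
                    status_forcelist=[408, 502, 503, 504])
    session = requests.Session()
    session.mount("http://", HTTPAdapter(max_retries=retries))
    session.mount("https://", HTTPAdapter(max_retries=retries))
    return session

session = make_retrying_session()
adapter = session.get_adapter("http://localhost:9600/_health_report")
print(adapter.max_retries.total)  # 5
```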
def git_check_out_branch(branch_name: str) -> None:
"""
Checks out the specified branch or raises an error if the checkout operation fails.
"""
run_or_raise_error(["git", "checkout", branch_name],
"Error occurred while checking out the " + branch_name + " branch")
def run_or_raise_error(commands: list, error_message: str) -> None:
"""
Executes the given commands and raises an Exception if the operation fails.
"""
result = subprocess.run(commands, env=os.environ.copy(), universal_newlines=True, stdout=subprocess.PIPE)
if result.returncode != 0:
# universal_newlines=True makes stdout text already, so no decode is needed
full_error_message = (error_message + ", output: " + result.stdout) if result.stdout else error_message
raise Exception(full_error_message)


@ -0,0 +1,398 @@
import abc
import copy
from dataclasses import dataclass, field
import os
import sys
import typing
from functools import partial
from ruamel.yaml import YAML
from ruamel.yaml.scalarstring import LiteralScalarString
ENABLED_RETRIES = {"automatic": [{"limit": 3}]}
@dataclass
class JobRetValues:
step_label: str
command: str
step_key: str
depends: str
agent: typing.Dict[typing.Any, typing.Any]
retry: typing.Optional[typing.Dict[typing.Any, typing.Any]] = None
artifact_paths: list = field(default_factory=list)
class GCPAgent:
def __init__(self, image: str, machineType: str, diskSizeGb: int = 200, diskType: str = "pd-ssd") -> None:
self.provider = "gcp"
self.imageProject = "elastic-images-prod"
self.image = image
self.machineType = machineType
self.diskSizeGb = diskSizeGb
self.diskType = diskType
def to_dict(self):
return {
"provider": self.provider,
"imageProject": self.imageProject,
"image": self.image,
"machineType": self.machineType,
"diskSizeGb": self.diskSizeGb,
"diskType": self.diskType,
}
class AWSAgent:
def __init__(self, imagePrefix: str, instanceType: str, diskSizeGb: int):
self.provider = "aws"
self.imagePrefix = imagePrefix
self.instanceType = instanceType
self.diskSizeGb = diskSizeGb
def to_dict(self):
return {
"provider": self.provider,
"imagePrefix": self.imagePrefix,
"instanceType": self.instanceType,
"diskSizeGb": self.diskSizeGb,
}
class DefaultAgent:
"""
Represents an empty agent definition which makes Buildkite use the default agent i.e. a container
"""
def __init__(self) -> None:
pass
def to_json(self) -> typing.Dict:
return {}
@dataclass
class BuildkiteEmojis:
running: str = ":bk-status-running:"
success: str = ":bk-status-passed:"
failed: str = ":bk-status-failed:"
def slugify_bk_key(key: str) -> str:
"""
Convert and return key to an acceptable format for Buildkite's key: field
Only alphanumerics, dashes and underscores are allowed.
"""
mapping_table = str.maketrans({'.': '_', ' ': '_', '/': '_'})
return key.translate(mapping_table)
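The translation table above maps dots, spaces, and slashes to underscores while leaving dashes and alphanumerics intact; for example:

```python
def slugify_bk_key(key: str) -> str:
    """Replace characters Buildkite disallows in step keys with underscores."""
    mapping_table = str.maketrans({'.': '_', ' ': '_', '/': '_'})
    return key.translate(mapping_table)

print(slugify_bk_key("ubuntu-2204/jdk 17.0.2"))  # ubuntu-2204_jdk_17_0_2
```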
def get_bk_metadata(key: str) -> typing.List[str]:
try:
return os.environ[key].split()
except KeyError:
print(f"Missing environment variable [{key}]. This should be set before calling this script using buildkite-agent meta-data get. Exiting.")
exit(1)
def bk_annotate(body: str, context: str, mode = "") -> str:
cmd = f"""buildkite-agent annotate --style=info --context={context} """
if mode:
cmd += f"--{mode} "
cmd += f"\"{body}\n\""
return cmd
class Jobs(abc.ABC):
def __init__(self, os: str, jdk: str, group_key: str, agent: typing.Union[GCPAgent, AWSAgent]):
self.os = os
self.jdk = jdk
self.group_key = group_key
self.init_annotation_key = f"{os}-{jdk}-initialize-annotation"
self.agent = agent
def init_annotation(self) -> JobRetValues:
"""
Command for creating the header of a new annotation for a group step
"""
body = f"### Group: {self.os} / {self.jdk}\n| **Status** | **Test** |\n| --- | ----|"
return JobRetValues(
step_label="Initialize annotation",
command=LiteralScalarString(bk_annotate(body=body, context=self.group_key)),
step_key=self.init_annotation_key,
depends="",
agent=DefaultAgent().to_json(),
)
@abc.abstractmethod
def all_jobs(self) -> list[typing.Callable[[], typing.Tuple[str, str]]]:
pass
class WindowsJobs(Jobs):
def __init__(self, os: str, jdk: str, group_key: str, agent: typing.Union[GCPAgent, AWSAgent]):
super().__init__(os=os, jdk=jdk, group_key=group_key, agent=agent)
def all_jobs(self) -> list[typing.Callable[[], JobRetValues]]:
return [
self.init_annotation,
self.java_unit_test,
self.ruby_unit_test,
]
def java_unit_test(self) -> JobRetValues:
step_name_human = "Java Unit Test"
step_key = f"{self.group_key}-java-unit-test"
test_command = rf'''.\\.buildkite\\scripts\\jdk-matrix-tests\\launch-command.ps1 -JDK "{self.jdk}" -StepNameHuman "{step_name_human}" -AnnotateContext "{self.group_key}" -CIScript ".\\ci\\unit_tests.ps1 java" -Annotate
'''
return JobRetValues(
step_label=step_name_human,
command=LiteralScalarString(test_command),
step_key=step_key,
depends=self.init_annotation_key,
artifact_paths=["build_reports.zip"],
agent=self.agent.to_dict(),
retry=copy.deepcopy(ENABLED_RETRIES),
)
def ruby_unit_test(self) -> JobRetValues:
step_name_human = "Ruby Unit Test"
step_key = f"{self.group_key}-ruby-unit-test"
test_command = rf'''.\\.buildkite\\scripts\\jdk-matrix-tests\\launch-command.ps1 -JDK "{self.jdk}" -StepNameHuman "{step_name_human}" -AnnotateContext "{self.group_key}" -CIScript ".\\ci\\unit_tests.ps1 ruby" -Annotate
'''
return JobRetValues(
step_label=step_name_human,
command=LiteralScalarString(test_command),
step_key=step_key,
depends=self.init_annotation_key,
artifact_paths=["build_reports.zip"],
agent=self.agent.to_dict(),
retry=copy.deepcopy(ENABLED_RETRIES),
)
class LinuxJobs(Jobs):
def __init__(self, os: str, jdk: str, group_key: str, agent: typing.Union[GCPAgent, AWSAgent]):
super().__init__(os=os, jdk=jdk, group_key=group_key, agent=agent)
def all_jobs(self) -> list[typing.Callable[[], JobRetValues]]:
jobs=list()
jobs.append(self.init_annotation)
jobs.append(self.java_unit_test)
jobs.append(self.ruby_unit_test)
jobs.extend(self.integration_test_parts(3))
jobs.extend(self.pq_integration_test_parts(3))
jobs.append(self.x_pack_unit_tests)
jobs.append(self.x_pack_integration)
return jobs
def prepare_shell(self) -> str:
jdk_dir = f"/opt/buildkite-agent/.java/{self.jdk}"
return f"""#!/usr/bin/env bash
set -euo pipefail
# unset generic JAVA_HOME
unset JAVA_HOME
# LS env vars for JDK matrix tests
export BUILD_JAVA_HOME={jdk_dir}
export RUNTIME_JAVA_HOME={jdk_dir}
export LS_JAVA_HOME={jdk_dir}
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$PATH"
eval "$(rbenv init -)"
"""
def failed_step_annotation(self, step_name_human) -> str:
return bk_annotate(body=f"| {BuildkiteEmojis.failed} | {step_name_human} |", context=self.group_key, mode="append")
def succeeded_step_annotation(self, step_name_human) -> str:
return bk_annotate(body=f"| {BuildkiteEmojis.success} | {step_name_human} |", context=self.group_key, mode="append")
def emit_command(self, step_name_human, test_command: str) -> str:
return LiteralScalarString(f"""
{self.prepare_shell()}
# temporarily disable immediate failure on errors, so that we can update the BK annotation
set +eo pipefail
{test_command}
if [[ $$? -ne 0 ]]; then
{self.failed_step_annotation(step_name_human)}
exit 1
else
{self.succeeded_step_annotation(step_name_human)}
fi
""")
def java_unit_test(self) -> JobRetValues:
step_name_human = "Java Unit Test"
step_key = f"{self.group_key}-java-unit-test"
test_command = '''
export ENABLE_SONARQUBE="false"
ci/unit_tests.sh java
'''
return JobRetValues(
step_label=step_name_human,
command=self.emit_command(step_name_human, test_command),
step_key=step_key,
depends=self.init_annotation_key,
agent=self.agent.to_dict(),
retry=copy.deepcopy(ENABLED_RETRIES),
)
def ruby_unit_test(self) -> JobRetValues:
step_name_human = "Ruby Unit Test"
step_key = f"{self.group_key}-ruby-unit-test"
test_command = """
ci/unit_tests.sh ruby
"""
return JobRetValues(
step_label=step_name_human,
command=self.emit_command(step_name_human, test_command),
step_key=step_key,
depends=self.init_annotation_key,
agent=self.agent.to_dict(),
retry=copy.deepcopy(ENABLED_RETRIES),
)
def integration_test_parts(self, parts) -> list[partial[JobRetValues]]:
return [partial(self.integration_tests, part=idx+1, parts=parts) for idx in range(parts)]
def integration_tests(self, part: int, parts: int) -> JobRetValues:
step_name_human = f"Integration Tests - {part}/{parts}"
step_key = f"{self.group_key}-integration-tests-{part}-of-{parts}"
test_command = f"""
ci/integration_tests.sh split {part-1} {parts}
"""
return JobRetValues(
step_label=step_name_human,
command=self.emit_command(step_name_human, test_command),
step_key=step_key,
depends=self.init_annotation_key,
agent=self.agent.to_dict(),
retry=copy.deepcopy(ENABLED_RETRIES),
)
def pq_integration_test_parts(self, parts) -> list[partial[JobRetValues]]:
return [partial(self.pq_integration_tests, part=idx+1, parts=parts) for idx in range(parts)]
def pq_integration_tests(self, part: int, parts: int) -> JobRetValues:
step_name_human = f"IT Persistent Queues - {part}/{parts}"
step_key = f"{self.group_key}-it-persistent-queues-{part}-of-{parts}"
test_command = f"""
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split {part-1} {parts}
"""
return JobRetValues(
step_label=step_name_human,
command=self.emit_command(step_name_human, test_command),
step_key=step_key,
depends=self.init_annotation_key,
agent=self.agent.to_dict(),
retry=copy.deepcopy(ENABLED_RETRIES),
)
def x_pack_unit_tests(self) -> JobRetValues:
step_name_human = "x-pack unit tests"
step_key = f"{self.group_key}-x-pack-unit-test"
test_command = """
x-pack/ci/unit_tests.sh
"""
return JobRetValues(
step_label=step_name_human,
command=self.emit_command(step_name_human, test_command),
step_key=step_key,
depends=self.init_annotation_key,
agent=self.agent.to_dict(),
retry=copy.deepcopy(ENABLED_RETRIES),
)
def x_pack_integration(self) -> JobRetValues:
step_name_human = "x-pack integration"
step_key = f"{self.group_key}-x-pack-integration"
test_command = """
x-pack/ci/integration_tests.sh
"""
return JobRetValues(
step_label=step_name_human,
command=self.emit_command(step_name_human, test_command),
step_key=step_key,
depends=self.init_annotation_key,
agent=self.agent.to_dict(),
retry=copy.deepcopy(ENABLED_RETRIES),
)
if __name__ == "__main__":
matrix_oses = get_bk_metadata(key="MATRIX_OSES")
matrix_jdkes = get_bk_metadata(key="MATRIX_JDKS")
pipeline_name = os.environ.get("BUILDKITE_PIPELINE_NAME", "").lower()
structure = {"steps": []}
for matrix_os in matrix_oses:
gcpAgent = GCPAgent(
image=f"family/platform-ingest-logstash-multi-jdk-{matrix_os}",
machineType="n2-standard-4",
diskSizeGb=200,
diskType="pd-ssd",
)
awsAgent = AWSAgent(
imagePrefix=f"platform-ingest-logstash-multi-jdk-{matrix_os}",
instanceType="m5.2xlarge",
diskSizeGb=200,
)
for matrix_jdk in matrix_jdkes:
group_name = f"{matrix_os}/{matrix_jdk}"
group_key = slugify_bk_key(group_name)
agent = awsAgent if "amazon" in matrix_os else gcpAgent
if "windows" in pipeline_name:
jobs = WindowsJobs(os=matrix_os, jdk=matrix_jdk, group_key=group_key, agent=agent)
else:
jobs = LinuxJobs(os=matrix_os, jdk=matrix_jdk, group_key=group_key, agent=agent)
group_steps = []
for job in jobs.all_jobs():
job_values = job()
step = {
"label": f"{job_values.step_label}",
"key": job_values.step_key,
}
if job_values.depends:
step["depends_on"] = job_values.depends
if job_values.agent:
step["agents"] = job_values.agent
if job_values.artifact_paths:
step["artifact_paths"] = job_values.artifact_paths
if job_values.retry:
step["retry"] = job_values.retry
step["command"] = job_values.command
group_steps.append(step)
structure["steps"].append({
"group": group_name,
"key": slugify_bk_key(group_name),
"steps": group_steps})
print('# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json')
YAML().dump(structure, sys.stdout)


@ -0,0 +1,51 @@
# ********************************************************
# This file contains prerequisite bootstrap invocations
# required for Logstash CI JDK matrix tests
# ********************************************************
param (
[string]$JDK,
[string]$CIScript,
[string]$StepNameHuman,
[string]$AnnotateContext,
[switch]$Annotate
)
# expand previous buildkite folded section (command invocation)
Write-Host "^^^ +++"
# unset generic JAVA_HOME
if (Test-Path env:JAVA_HOME) {
Remove-Item -Path env:JAVA_HOME
Write-Host "--- Environment variable 'JAVA_HOME' has been unset."
} else {
Write-Host "--- Environment variable 'JAVA_HOME' doesn't exist. Continuing."
}
# LS env vars for JDK matrix tests
$JAVA_CUSTOM_DIR = "C:\Users\buildkite\.java\$JDK"
$env:BUILD_JAVA_HOME = $JAVA_CUSTOM_DIR
$env:RUNTIME_JAVA_HOME = $JAVA_CUSTOM_DIR
$env:LS_JAVA_HOME = $JAVA_CUSTOM_DIR
Write-Host "--- Running test: $CIScript"
try {
Invoke-Expression $CIScript
if ($LASTEXITCODE -ne 0) {
throw "Test script $CIScript failed with a non-zero code: $LASTEXITCODE"
}
if ($Annotate) {
C:\buildkite-agent\bin\buildkite-agent.exe annotate --context="$AnnotateContext" --append "| :bk-status-passed: | $StepNameHuman |`n"
}
} catch {
# tests failed
Write-Host "^^^ +++"
if ($Annotate) {
C:\buildkite-agent\bin\buildkite-agent.exe annotate --context="$AnnotateContext" --append "| :bk-status-failed: | $StepNameHuman |`n"
Write-Host "--- Archiving test reports"
& "7z.exe" a -r .\build_reports.zip .\logstash-core\build\reports\tests
}
exit 1
}


@ -4,7 +4,7 @@ set -e
install_java() {
# TODO: let's think about regularly creating a custom image for Logstash which may align on version.yml definitions
sudo apt update && sudo apt install -y openjdk-17-jdk && sudo apt install -y openjdk-17-jre
sudo apt update && sudo apt install -y openjdk-21-jdk && sudo apt install -y openjdk-21-jre
}
install_java


@ -3,42 +3,23 @@
set -e
TARGET_BRANCHES=("main")
cd .buildkite/scripts
install_java() {
# TODO: let's think about regularly creating a custom image for Logstash which may align on version.yml definitions
sudo apt update && sudo apt install -y openjdk-17-jdk && sudo apt install -y openjdk-17-jre
install_java_11() {
curl -L -s "https://github.com/adoptium/temurin11-binaries/releases/download/jdk-11.0.24%2B8/OpenJDK11U-jdk_x64_linux_hotspot_11.0.24_8.tar.gz" | tar -zxf -
}
# Resolves the branches we are going to track
resolve_latest_branches() {
source snyk/resolve_stack_version.sh
for SNAPSHOT_VERSION in "${SNAPSHOT_VERSIONS[@]}"
do
IFS='.'
read -a versions <<< "$SNAPSHOT_VERSION"
version=${versions[0]}.${versions[1]}
TARGET_BRANCHES+=("$version")
done
}
# Clones the Logstash repo
clone_logstash_repo() {
echo "Cloning logstash repo..."
git clone https://github.com/elastic/logstash.git
source .buildkite/scripts/snyk/resolve_stack_version.sh
}
# Build Logstash specific branch to generate Gemlock file where Snyk scans
build_logstash() {
cd logstash
git reset --hard HEAD # reset if any generated files appeared
git checkout "$1"
./gradlew clean bootstrap assemble installDefaultGems && cd ..
./gradlew clean bootstrap assemble installDefaultGems
}
# Downloads snyk distribution
download_auth_snyk() {
cd logstash
echo "Downloading snyk..."
curl https://static.snyk.io/cli/latest/snyk-linux -o snyk
chmod +x ./snyk
@ -46,98 +27,50 @@ download_auth_snyk() {
vault_path=secret/ci/elastic-logstash/snyk-creds
SNYK_TOKEN=$(vault read -field=token "${vault_path}")
./snyk auth "$SNYK_TOKEN"
cd ..
}
# Reports vulnerabilities to the Snyk
report() {
echo "Reporting to Snyk..."
cd logstash
REMOTE_REPO_URL=$1
if [ "$REMOTE_REPO_URL" != "main" ]; then
echo "Reporting $REMOTE_REPO_URL branch."
if [ "$REMOTE_REPO_URL" != "main" ] && [ "$REMOTE_REPO_URL" != "8.x" ]; then
MAJOR_VERSION=$(echo "$REMOTE_REPO_URL" | cut -d'.' -f 1)
REMOTE_REPO_URL="$MAJOR_VERSION".latest
echo "Using '$REMOTE_REPO_URL' remote repo url."
fi
# adding git commit hash to Snyk tag to improve visibility
# for big projects Snyk recommends pruning dependencies
# https://support.snyk.io/hc/en-us/articles/360002061438-CLI-returns-the-error-Failed-to-get-Vulns
GIT_HEAD=$(git rev-parse --short HEAD 2> /dev/null)
./snyk monitor --all-projects --org=logstash --remote-repo-url="$REMOTE_REPO_URL" --target-reference="$REMOTE_REPO_URL" --detection-depth=6 --exclude=qa,tools,devtools,requirements.txt --project-tags=branch="$TARGET_BRANCH",git_head="$GIT_HEAD" && true
cd ..
./snyk monitor --prune-repeated-subdependencies --all-projects --org=logstash --remote-repo-url="$REMOTE_REPO_URL" --target-reference="$REMOTE_REPO_URL" --detection-depth=6 --exclude=qa,tools,devtools,requirements.txt --project-tags=branch="$TARGET_BRANCH",git_head="$GIT_HEAD" || :
}
install_java
resolve_latest_branches
clone_logstash_repo
download_auth_snyk
# clone Logstash repo, build and report
for TARGET_BRANCH in "${TARGET_BRANCHES[@]}"
do
echo "Using $TARGET_BRANCH branch."
build_logstash "$TARGET_BRANCH"
report "$TARGET_BRANCH"
done
report_docker_image() {
image=$1
project_name=$2
platform=$3
echo "Reporting $image to Snyk started..."
docker pull "$image"
if [[ $platform != null ]]; then
./snyk container monitor "$image" --org=logstash --platform="$platform" --project-name="$project_name" --project-tags=version="$version" && true
else
./snyk container monitor "$image" --org=logstash --project-name="$project_name" --project-tags=version="$version" && true
if [ "$TARGET_BRANCH" == "7.17" ]; then
echo "Installing and configuring JDK11."
export OLD_PATH=$PATH
install_java_11
export PATH=$PWD/jdk-11.0.24+8/bin:$PATH
fi
}
report_docker_images() {
version=$1
echo "Version value: $version"
image=$REPOSITORY_BASE_URL"logstash:$version-SNAPSHOT"
snyk_project_name="logstash-$version-SNAPSHOT"
report_docker_image "$image" "$snyk_project_name"
image=$REPOSITORY_BASE_URL"logstash-oss:$version-SNAPSHOT"
snyk_project_name="logstash-oss-$version-SNAPSHOT"
report_docker_image "$image" "$snyk_project_name"
image=$REPOSITORY_BASE_URL"logstash:$version-SNAPSHOT-arm64"
snyk_project_name="logstash-$version-SNAPSHOT-arm64"
report_docker_image "$image" "$snyk_project_name" "linux/arm64"
image=$REPOSITORY_BASE_URL"logstash:$version-SNAPSHOT-amd64"
snyk_project_name="logstash-$version-SNAPSHOT-amd64"
report_docker_image "$image" "$snyk_project_name" "linux/amd64"
image=$REPOSITORY_BASE_URL"logstash-oss:$version-SNAPSHOT-arm64"
snyk_project_name="logstash-oss-$version-SNAPSHOT-arm64"
report_docker_image "$image" "$snyk_project_name" "linux/arm64"
image=$REPOSITORY_BASE_URL"logstash-oss:$version-SNAPSHOT-amd64"
snyk_project_name="logstash-oss-$version-SNAPSHOT-amd64"
report_docker_image "$image" "$snyk_project_name" "linux/amd64"
}
resolve_version_and_report_docker_images() {
cd logstash
git reset --hard HEAD # reset if any generated files appeared
git checkout "$1"
# parse version (ex: 8.8.2 from 8.8 branch, or 8.9.0 from main branch)
versions_file="$PWD/versions.yml"
version=$(awk '/logstash:/ { print $2 }' "$versions_file")
report_docker_images "$version"
cd ..
}
REPOSITORY_BASE_URL="docker.elastic.co/logstash/"
# resolve docker artifact and report
for TARGET_BRANCH in "${TARGET_BRANCHES[@]}"
do
echo "Using $TARGET_BRANCH branch for docker images."
resolve_version_and_report_docker_images "$TARGET_BRANCH"
done
# check if target branch exists
echo "Checking out $TARGET_BRANCH branch."
if git checkout "$TARGET_BRANCH"; then
build_logstash
report "$TARGET_BRANCH"
else
echo "$TARGET_BRANCH branch doesn't exist."
fi
if [ "$TARGET_BRANCH" == "7.17" ]; then
# reset state
echo "Removing JDK11 installation."
rm -rf jdk-11.0.24+8
export PATH=$OLD_PATH
fi
done


@ -6,14 +6,9 @@
set -e
VERSION_URL="https://raw.githubusercontent.com/elastic/logstash/main/ci/logstash_releases.json"
VERSION_URL="https://storage.googleapis.com/artifacts-api/snapshots/branches.json"
echo "Fetching versions from $VERSION_URL"
VERSIONS=$(curl --silent $VERSION_URL)
SNAPSHOT_KEYS=$(echo "$VERSIONS" | jq -r '.snapshots | .[]')
readarray -t TARGET_BRANCHES < <(curl --retry-all-errors --retry 5 --retry-delay 5 -fsSL $VERSION_URL | jq -r '.branches[]')
echo "${TARGET_BRANCHES[@]}"
SNAPSHOT_VERSIONS=()
while IFS= read -r line; do
SNAPSHOT_VERSIONS+=("$line")
echo "Resolved snapshot version: $line"
done <<< "$SNAPSHOT_KEYS"
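`jq -r '.branches[]'` emits one branch name per line, and `readarray -t` collects those lines into a bash array; the mechanics in isolation, with `printf` standing in for the jq output:

```shell
# printf stands in for `jq -r '.branches[]'`, which prints one branch per line.
readarray -t TARGET_BRANCHES < <(printf '%s\n' main 9.0 8.19 8.18)
echo "${#TARGET_BRANCHES[@]} branches, first: ${TARGET_BRANCHES[0]}"
```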


@ -6,25 +6,52 @@ agents:
steps:
- label: "DLQ rspec integration test"
command: ./.buildkite/scripts/setup_java.sh && ./ci/serverless/dlq_rspec_tests.sh
retry:
automatic:
- limit: 3
- label: "es-output test"
command: ./.buildkite/scripts/setup_java.sh && ./ci/serverless/es_output_tests.sh
retry:
automatic:
- limit: 3
- label: "es-input test"
command: ./.buildkite/scripts/setup_java.sh && ./ci/serverless/es_input_tests.sh
retry:
automatic:
- limit: 3
- label: "es-filter test"
command: ./.buildkite/scripts/setup_java.sh && ./ci/serverless/es_filter_tests.sh
retry:
automatic:
- limit: 3
- label: "elastic_integration filter test"
command: ./.buildkite/scripts/setup_java.sh && ./ci/serverless/elastic_integration_filter_tests.sh
retry:
automatic:
- limit: 3
- label: "central pipeline management test"
command: ./.buildkite/scripts/setup_java.sh && ./ci/serverless/cpm_tests.sh
# Legacy monitoring is disabled. Serverless does not support /_monitoring/bulk, hence the test always fails to ingest metrics.
retry:
automatic:
- limit: 3
- label: "Logstash legacy monitoring test"
command: ./.buildkite/scripts/setup_java.sh && ./ci/serverless/monitoring_tests.sh
skip: true
# Kibana API is disabled as it is not available with the current configuration in QA
retry:
automatic:
- limit: 3
- label: "Kibana API test"
command: ./.buildkite/scripts/setup_java.sh && ./ci/serverless/kibana_api_tests.sh
skip: true
# Metricbeat stack monitoring is disabled
retry:
automatic:
- limit: 3
- label: "metricbeat test is disabled as metricbeat has not disabled /_ilm yet"
command: ./.buildkite/scripts/setup_java.sh && ./ci/serverless/metricbeat_monitoring_tests.sh
skip: true
skip: true
retry:
automatic:
- limit: 3


@ -1,10 +1,14 @@
agents:
provider: "gcp"
machineType: "n1-standard-4"
image: family/core-ubuntu-2204
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "2"
memory: "4Gi"
ephemeralStorage: "64Gi"
steps:
# reports main, previous (ex: 7.latest) and current (ex: 8.latest) release branches to Snyk
- label: ":hammer: Report to Snyk"
command:
- .buildkite/scripts/snyk/report.sh
- .buildkite/scripts/snyk/report.sh
retry:
automatic:
- limit: 3


@ -0,0 +1,77 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "200Gi"
steps:
- group: "Tier1 plugins test group"
key: "tier1_plugins"
steps:
- label: "Test input file plugin"
command: |
source .buildkite/scripts/common/vm-agent.sh
./ci/test_plugins.sh -p logstash-input-file
# Workaround for https://github.com/elastic/ingest-dev/issues/2676
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-4"
diskSizeGb: 120
- label: "Test rest of Tier1 inputs"
command: |
source .buildkite/scripts/common/vm-agent.sh
./ci/test_plugins.sh -p logstash-input-azure_event_hubs,logstash-input-beats,logstash-input-elasticsearch,logstash-input-generator,logstash-input-heartbeat,logstash-input-http,logstash-input-http_poller,logstash-input-redis,logstash-input-stdin,logstash-input-syslog,logstash-input-udp
# Workaround for https://github.com/elastic/ingest-dev/issues/2676
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-4"
diskSizeGb: 120
- label: "Test Tier1 filters"
command: ./ci/test_plugins.sh -t tier1 -k filter
- label: "Test Tier1 codecs"
command: ./ci/test_plugins.sh -t tier1 -k codec
- label: "Test Tier1 outputs"
command: ./ci/test_plugins.sh -t tier1 -k output
- label: "Test Tier1 integrations"
command: |
source .buildkite/scripts/common/vm-agent.sh
./ci/test_plugins.sh -t tier1 -k integration
# Workaround to avoid errors on chmod of files
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-4"
diskSizeGb: 120
- group: "Tier2 plugins test group"
key: "tier2_plugins"
steps:
- label: "Test Tier2 inputs"
command: ./ci/test_plugins.sh -t tier2 -k input
- label: "Test Tier2 filters"
command: ./ci/test_plugins.sh -t tier2 -k filter
- label: "Test Tier2 codecs"
command: |
source .buildkite/scripts/common/vm-agent.sh
./ci/test_plugins.sh -t tier2 -k codec
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-4"
diskSizeGb: 120
- label: "Test Tier2 outputs"
command: ./ci/test_plugins.sh -t tier2 -k output
- group: "Unsupported plugins test group"
key: "unsupported_plugins"
steps:
- label: "Test unsupported inputs"
command: ./ci/test_plugins.sh -t unsupported -k input


@ -0,0 +1,5 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json
steps:
- label: ":pipeline: Generate trigger steps for $PIPELINES_TO_TRIGGER"
command: ".buildkite/scripts/common/trigger-pipeline-generate-steps.sh"


@ -0,0 +1,73 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json
env:
DEFAULT_MATRIX_OS: "windows-2022"
DEFAULT_MATRIX_JDK: "adoptiumjdk_21"
steps:
- input: "Test Parameters"
if: build.source != "schedule" && build.source != "trigger_job"
fields:
- select: "Operating System"
key: "matrix-os"
hint: "The operating system variant(s) to run on:"
required: true
multiple: true
default: "${DEFAULT_MATRIX_OS}"
options:
- label: "Windows 2025"
value: "windows-2025"
- label: "Windows 2022"
value: "windows-2022"
- label: "Windows 2019"
value: "windows-2019"
- label: "Windows 2016"
value: "windows-2016"
- select: "Java"
key: "matrix-jdk"
hint: "The JDK to test with:"
required: true
multiple: true
default: "${DEFAULT_MATRIX_JDK}"
options:
- label: "Adoptium JDK 21 (Eclipse Temurin)"
value: "adoptiumjdk_21"
- label: "Adoptium JDK 17 (Eclipse Temurin)"
value: "adoptiumjdk_17"
- label: "OpenJDK 21"
value: "openjdk_21"
- label: "OpenJDK 17"
value: "openjdk_17"
- label: "Zulu 21"
value: "zulu_21"
- label: "Zulu 17"
value: "zulu_17"
- wait: ~
if: build.source != "schedule" && build.source != "trigger_job"
- command: |
set -euo pipefail
echo "--- Downloading prerequisites"
python3 -m pip install ruamel.yaml
echo "--- Printing generated dynamic steps"
export MATRIX_OSES="$(buildkite-agent meta-data get matrix-os --default=${DEFAULT_MATRIX_OS})"
export MATRIX_JDKS="$(buildkite-agent meta-data get matrix-jdk --default=${DEFAULT_MATRIX_JDK})"
set +eo pipefail
python3 .buildkite/scripts/jdk-matrix-tests/generate-steps.py >pipeline_steps.yml
if [[ $$? -ne 0 ]]; then
echo "^^^ +++"
echo "There was a problem rendering the pipeline steps."
cat pipeline_steps.yml
echo "Exiting now."
exit 1
else
set -eo pipefail
cat pipeline_steps.yml
fi
echo "--- Uploading steps to buildkite"
cat pipeline_steps.yml | buildkite-agent pipeline upload
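The doubled `$$?` above is Buildkite YAML escaping for the shell's `$?`, and the surrounding `set +eo pipefail` / `set -eo pipefail` dance exists so a failing generator can be inspected instead of killing the job. The pattern in miniature, with `false` as a stand-in for the step that may fail:

```shell
set -e            # errexit on, as at the top of the script
set +e            # temporarily allow failure so we can read $?
false             # stand-in for the generator step that may fail
status=$?
set -e            # restore errexit once the status is captured
echo "generator exit status: $status"
```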

.ci/Makefile

@ -0,0 +1,42 @@
.SILENT:
MAKEFLAGS += --no-print-directory
.SHELLFLAGS = -euc
SHELL = /bin/bash
#######################
## Templates
#######################
## Mergify template
define MERGIFY_TMPL
- name: backport patches to $(BRANCH) branch
conditions:
- merged
- base=main
- label=$(BACKPORT_LABEL)
actions:
backport:
branches:
- "$(BRANCH)"
endef
# Add mergify entry for the new backport label
.PHONY: mergify
export MERGIFY_TMPL
mergify: BACKPORT_LABEL=$${BACKPORT_LABEL} BRANCH=$${BRANCH} PUSH_BRANCH=$${PUSH_BRANCH}
mergify:
@echo ">> mergify"
echo "$$MERGIFY_TMPL" >> ../.mergify.yml
git add ../.mergify.yml
git status
if [ ! -z "$$(git status --porcelain)" ]; then \
git commit -m "mergify: add $(BACKPORT_LABEL) rule"; \
git push origin $(PUSH_BRANCH) ; \
fi
# Create GitHub backport label
.PHONY: backport-label
backport-label: BACKPORT_LABEL=$${BACKPORT_LABEL}
backport-label:
@echo ">> backport-label"
gh label create $(BACKPORT_LABEL) --description "Automated backport with mergify" --color 0052cc --force
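Rendered with, say, `BRANCH=8.19` and `BACKPORT_LABEL=backport-8.19` (hypothetical values), the `MERGIFY_TMPL` append behaves like this plain-shell sketch:

```shell
BRANCH="8.19"; BACKPORT_LABEL="backport-8.19"
# The make target does `echo "$$MERGIFY_TMPL" >> ../.mergify.yml`; here we
# append the same rendered rule to a scratch file instead.
: > /tmp/mergify_example.yml   # start from an empty scratch file
cat >> /tmp/mergify_example.yml <<EOF
  - name: backport patches to $BRANCH branch
    conditions:
      - merged
      - base=main
      - label=$BACKPORT_LABEL
    actions:
      backport:
        branches:
          - "$BRANCH"
EOF
grep -c "label=$BACKPORT_LABEL" /tmp/mergify_example.yml
```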


@ -1,2 +1,2 @@
LS_BUILD_JAVA=jdk17
LS_RUNTIME_JAVA=jdk17
LS_BUILD_JAVA=adoptiumjdk_21
LS_RUNTIME_JAVA=adoptiumjdk_21


@ -21,10 +21,6 @@ analyze:
type: gradle
target: 'dependencies-report:'
path: .
- name: ingest-converter
type: gradle
target: 'ingest-converter:'
path: .
- name: logstash-core
type: gradle
target: 'logstash-core:'


@ -21,6 +21,5 @@ to reproduce locally
**Failure history**:
<!--
Link to build stats and possible indication of when this started failing and how often it fails
<https://build-stats.elastic.co/app/kibana>
-->
**Failure excerpt**:

.github/dependabot.yml

@ -0,0 +1,18 @@
---
version: 2
updates:
- package-ecosystem: "github-actions"
directories:
- '/'
- '/.github/actions/*'
schedule:
interval: "weekly"
day: "sunday"
time: "22:00"
reviewers:
- "elastic/observablt-ci"
- "elastic/observablt-ci-contractors"
groups:
github-actions:
patterns:
- "*"


@ -1,33 +0,0 @@
name: Docs Preview Link
on:
pull_request_target:
types: [opened, synchronize]
paths:
- docs/**
- docsk8s/**
jobs:
docs-preview-link:
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- id: wait-for-status
uses: autotelic/action-wait-for-status-check@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
owner: elastic
# when running with on: pull_request_target we get the PR base ref by default
ref: ${{ github.event.pull_request.head.sha }}
statusName: "elasticsearch-ci/docs"
# https://elasticsearch-ci.elastic.co/job/elastic+logstash+pull-request+build-docs
# usually finishes in ~ 10 minutes
timeoutSeconds: 900
intervalSeconds: 30
- name: Add Docs Preview link in PR Comment
if: steps.wait-for-status.outputs.state == 'success'
uses: thollander/actions-comment-pull-request@v1
with:
message: |
:page_with_curl: **DOCS PREVIEW** :sparkles: https://logstash_${{ github.event.number }}.docs-preview.app.elstc.co/diff
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/backport-active.yml

@ -0,0 +1,22 @@
name: Backport to active branches
on:
pull_request_target:
types: [closed]
branches:
- main
permissions:
pull-requests: write
contents: read
jobs:
backport:
# Only run if the PR was merged (not just closed) and has one of the backport labels
if: |
github.event.pull_request.merged == true &&
contains(toJSON(github.event.pull_request.labels.*.name), 'backport-active-')
runs-on: ubuntu-latest
steps:
- uses: elastic/oblt-actions/github/backport-active@v1


@ -0,0 +1,24 @@
name: Scan for vulnerabilities
on:
pull_request:
types: [opened, synchronize]
workflow_dispatch:
jobs:
scan_image:
runs-on: ubuntu-latest
steps:
- name: checkout repo content
uses: actions/checkout@v4
- name: build tar distribution
run: ./gradlew clean assembleTarDistribution
- run: mkdir scan
- run: tar -zxf ../build/logstash-*.tar.gz
working-directory: ./scan
- name: scan image
uses: anchore/scan-action@v3
with:
path: "./scan"
fail-build: true
severity-cutoff: critical

.github/workflows/docs-build.yml

@ -0,0 +1,19 @@
name: docs-build
on:
push:
branches:
- main
pull_request_target: ~
merge_group: ~
jobs:
docs-preview:
uses: elastic/docs-builder/.github/workflows/preview-build.yml@main
with:
path-pattern: docs/**
permissions:
deployments: write
id-token: write
contents: read
pull-requests: read

.github/workflows/docs-cleanup.yml

@ -0,0 +1,14 @@
name: docs-cleanup
on:
pull_request_target:
types:
- closed
jobs:
docs-preview:
uses: elastic/docs-builder/.github/workflows/preview-cleanup.yml@main
permissions:
contents: none
id-token: write
deployments: write


@ -0,0 +1,23 @@
name: mergify backport labels copier
on:
pull_request:
types:
- opened
permissions:
contents: read
jobs:
mergify-backport-labels-copier:
runs-on: ubuntu-latest
if: startsWith(github.head_ref, 'mergify/bp/')
permissions:
# Add GH labels
pull-requests: write
# See https://github.com/cli/cli/issues/6274
repository-projects: read
steps:
- uses: elastic/oblt-actions/mergify/labels-copier@v1
with:
excluded-labels-regex: "^backport-*"


@ -1,49 +0,0 @@
name: Backport PR to another branch
on:
issue_comment:
types: [created]
permissions:
pull-requests: write
contents: write
jobs:
pr_commented:
name: PR comment
if: github.event.issue.pull_request
runs-on: ubuntu-latest
steps:
- uses: actions-ecosystem/action-regex-match@v2
id: regex-match
with:
text: ${{ github.event.comment.body }}
regex: '^@logstashmachine backport (main|[x0-9\.]+)$'
- if: ${{ steps.regex-match.outputs.group1 == '' }}
run: exit 1
- name: Fetch logstash-core team member list
uses: tspascoal/get-user-teams-membership@v1
id: checkUserMember
with:
username: ${{ github.actor }}
organization: elastic
team: logstash
GITHUB_TOKEN: ${{ secrets.READ_ORG_SECRET_JSVD }}
- name: Is user not a core team member?
if: ${{ steps.checkUserMember.outputs.isTeamMember == 'false' }}
run: exit 1
- name: checkout repo content
uses: actions/checkout@v2
with:
fetch-depth: 0
ref: 'main'
- run: git config --global user.email "43502315+logstashmachine@users.noreply.github.com"
- run: git config --global user.name "logstashmachine"
- name: setup python
uses: actions/setup-python@v2
with:
python-version: 3.8
- run: |
mkdir ~/.elastic && echo ${{ github.token }} >> ~/.elastic/github.token
- run: pip install requests
- name: run backport
run: python devtools/backport ${{ steps.regex-match.outputs.group1 }} ${{ github.event.issue.number }} --remote=origin --yes

.github/workflows/pre-commit.yml

@ -0,0 +1,18 @@
name: pre-commit
on:
pull_request:
push:
branches:
- main
- 8.*
- 9.*
permissions:
contents: read
jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: elastic/oblt-actions/pre-commit@v1


@ -25,9 +25,13 @@ jobs:
version_bumper:
name: Bump versions
runs-on: ubuntu-latest
env:
INPUTS_BRANCH: "${{ inputs.branch }}"
INPUTS_BUMP: "${{ inputs.bump }}"
BACKPORT_LABEL: "backport-${{ inputs.branch }}"
steps:
- name: Fetch logstash-core team member list
uses: tspascoal/get-user-teams-membership@v1
uses: tspascoal/get-user-teams-membership@57e9f42acd78f4d0f496b3be4368fc5f62696662 #v3.0.0
with:
username: ${{ github.actor }}
organization: elastic
@ -37,14 +41,14 @@ jobs:
if: ${{ steps.checkUserMember.outputs.isTeamMember == 'false' }}
run: exit 1
- name: checkout repo content
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.event.inputs.branch }}
ref: ${{ env.INPUTS_BRANCH }}
- run: git config --global user.email "43502315+logstashmachine@users.noreply.github.com"
- run: git config --global user.name "logstashmachine"
- run: ./gradlew clean installDefaultGems
- run: ./vendor/jruby/bin/jruby -S bundle update --all --${{ github.event.inputs.bump }} --strict
- run: ./vendor/jruby/bin/jruby -S bundle update --all --${{ env.INPUTS_BUMP }} --strict
- run: mv Gemfile.lock Gemfile.jruby-*.lock.release
- run: echo "T=$(date +%s)" >> $GITHUB_ENV
- run: echo "BRANCH=update_lock_${T}" >> $GITHUB_ENV
@ -53,8 +57,21 @@ jobs:
git add .
git status
if [[ -z $(git status --porcelain) ]]; then echo "No changes. We're done."; exit 0; fi
git commit -m "Update ${{ github.event.inputs.bump }} plugin versions in gemfile lock" -a
git commit -m "Update ${{ env.INPUTS_BUMP }} plugin versions in gemfile lock" -a
git push origin $BRANCH
- name: Update mergify (minor only)
if: ${{ inputs.bump == 'minor' }}
continue-on-error: true
run: make -C .ci mergify BACKPORT_LABEL=$BACKPORT_LABEL BRANCH=$INPUTS_BRANCH PUSH_BRANCH=$BRANCH
- name: Create Pull Request
run: |
curl -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -X POST -d "{\"title\": \"bump lock file for ${{ github.event.inputs.branch }}\",\"head\": \"${BRANCH}\",\"base\": \"${{ github.event.inputs.branch }}\"}" https://api.github.com/repos/elastic/logstash/pulls
curl -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -X POST -d "{\"title\": \"bump lock file for ${{ env.INPUTS_BRANCH }}\",\"head\": \"${BRANCH}\",\"base\": \"${{ env.INPUTS_BRANCH }}\"}" https://api.github.com/repos/elastic/logstash/pulls
- name: Create GitHub backport label (Mergify) (minor only)
if: ${{ inputs.bump == 'minor' }}
continue-on-error: true
run: make -C .ci backport-label BACKPORT_LABEL=$BACKPORT_LABEL
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.mergify.yml

@ -0,0 +1,132 @@
commands_restrictions:
backport:
conditions:
- or:
- sender-permission>=write
- sender=github-actions[bot]
defaults:
actions:
backport:
title: "[{{ destination_branch }}] (backport #{{ number }}) {{ title }}"
assignees:
- "{{ author }}"
labels:
- "backport"
pull_request_rules:
# - name: ask to resolve conflict
# conditions:
# - conflict
# actions:
# comment:
# message: |
# This pull request is now in conflicts. Could you fix it @{{author}}? 🙏
# To fixup this pull request, you can check out it locally. See documentation: https://help.github.com/articles/checking-out-pull-requests-locally/
# ```
# git fetch upstream
# git checkout -b {{head}} upstream/{{head}}
# git merge upstream/{{base}}
# git push upstream {{head}}
# ```
- name: notify the backport policy
conditions:
- -label~=^backport
- base=main
actions:
comment:
message: |
This pull request does not have a backport label. Could you fix it @{{author}}? 🙏
To fixup this pull request, you need to add the backport labels for the needed
branches, such as:
* `backport-8./d` is the label to automatically backport to the `8./d` branch. `/d` is the digit.
* If no backport is necessary, please add the `backport-skip` label
- name: remove backport-skip label
conditions:
- label~=^backport-\d
actions:
label:
remove:
- backport-skip
- name: notify the backport has not been merged yet
conditions:
- -merged
- -closed
- author=mergify[bot]
- "#check-success>0"
- schedule=Mon-Mon 06:00-10:00[Europe/Paris]
actions:
comment:
message: |
This pull request has not been merged yet. Could you please review and merge it @{{ assignee | join(', @') }}? 🙏
- name: backport patches to 8.16 branch
conditions:
- merged
- base=main
- label=backport-8.16
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "8.16"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
- name: backport patches to 8.17 branch
conditions:
- merged
- base=main
- label=backport-8.17
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "8.17"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
- name: backport patches to 8.18 branch
conditions:
- merged
- base=main
- label=backport-8.18
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "8.18"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
- name: backport patches to 8.19 branch
conditions:
- merged
- base=main
- label=backport-8.19
actions:
backport:
branches:
- "8.19"
- name: backport patches to 9.0 branch
conditions:
- merged
- base=main
- label=backport-9.0
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "9.0"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"

.pre-commit-config.yaml

@ -0,0 +1,6 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
hooks:
- id: check-merge-conflict
args: ['--assume-in-merge']
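The `check-merge-conflict` hook fails the commit when leftover conflict markers survive a merge; its core check is roughly equivalent to this grep sketch (not the hook's actual implementation):

```shell
# A file with the three classic conflict markers left in place.
printf '<<<<<<< HEAD\nours\n=======\ntheirs\n>>>>>>> topic\n' > /tmp/conflicted_example.txt
# Count lines that start like a conflict marker.
markers=$(grep -cE '^(<<<<<<< |=======$|>>>>>>> )' /tmp/conflicted_example.txt)
echo "$markers conflict marker lines"
```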


@ -1 +1 @@
jruby-9.3.10.0
jruby-9.4.9.0


@ -1,64 +0,0 @@
FROM ubuntu:bionic
RUN apt-get update && \
apt-get install -y zlib1g-dev build-essential vim rake git curl libssl-dev libreadline-dev libyaml-dev \
libxml2-dev libxslt-dev openjdk-11-jdk-headless curl iputils-ping netcat && \
apt-get clean
WORKDIR /root
RUN adduser --disabled-password --gecos "" --home /home/logstash logstash && \
mkdir -p /usr/local/share/ruby-build && \
mkdir -p /opt/logstash && \
mkdir -p /opt/logstash/data && \
mkdir -p /mnt/host && \
chown logstash:logstash /opt/logstash
USER logstash
WORKDIR /home/logstash
# used by the purge policy
LABEL retention="keep"
# Setup gradle wrapper. When running any `gradle` command, a `settings.gradle` is expected (and will soon be required).
# This section adds the gradle wrapper, `settings.gradle` and sets the permissions (setting the user to root for `chown`
# and working directory to allow this and then reverts back to the previous working directory and user.
COPY --chown=logstash:logstash gradlew /opt/logstash/gradlew
COPY --chown=logstash:logstash gradle/wrapper /opt/logstash/gradle/wrapper
COPY --chown=logstash:logstash settings.gradle /opt/logstash/settings.gradle
WORKDIR /opt/logstash
RUN for iter in `seq 1 10`; do ./gradlew wrapper --warning-mode all && exit_code=0 && break || exit_code=$? && echo "gradlew error: retry $iter in 10s" && sleep 10; done; exit $exit_code
WORKDIR /home/logstash
COPY versions.yml /opt/logstash/versions.yml
COPY LICENSE.txt /opt/logstash/LICENSE.txt
COPY NOTICE.TXT /opt/logstash/NOTICE.TXT
COPY licenses /opt/logstash/licenses
COPY CONTRIBUTORS /opt/logstash/CONTRIBUTORS
COPY Gemfile.template Gemfile.jruby-3.1.lock.* /opt/logstash/
COPY Rakefile /opt/logstash/Rakefile
COPY build.gradle /opt/logstash/build.gradle
COPY rubyUtils.gradle /opt/logstash/rubyUtils.gradle
COPY rakelib /opt/logstash/rakelib
COPY config /opt/logstash/config
COPY spec /opt/logstash/spec
COPY qa /opt/logstash/qa
COPY lib /opt/logstash/lib
COPY pkg /opt/logstash/pkg
COPY buildSrc /opt/logstash/buildSrc
COPY tools /opt/logstash/tools
COPY logstash-core /opt/logstash/logstash-core
COPY logstash-core-plugin-api /opt/logstash/logstash-core-plugin-api
COPY bin /opt/logstash/bin
COPY modules /opt/logstash/modules
COPY x-pack /opt/logstash/x-pack
COPY ci /opt/logstash/ci
USER root
RUN rm -rf build && \
mkdir -p build && \
chown -R logstash:logstash /opt/logstash
USER logstash
WORKDIR /opt/logstash
LABEL retention="prune"
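The `for iter in \`seq 1 10\` ... && break || exit_code=$?` line in the Dockerfile above is a retry loop: remember the last exit code, stop on the first success. A minimal sketch with a stand-in command that succeeds on its third try:

```shell
attempts=0
for iter in $(seq 1 5); do
  attempts=$((attempts + 1))
  # Stand-in for `./gradlew wrapper`: fails until the third attempt.
  [ "$attempts" -ge 3 ] && exit_code=0 && break || { exit_code=$?; }
done
echo "succeeded after $attempts attempts (exit $exit_code)"
```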


@ -1,22 +0,0 @@
#logstash-base image, use ci/docker_update_base_image.sh to push updates
FROM ubuntu:bionic
RUN apt-get update && \
apt-get install -y zlib1g-dev build-essential vim rake git curl libssl-dev libreadline-dev libyaml-dev \
libxml2-dev libxslt-dev openjdk-11-jdk-headless curl iputils-ping netcat && \
apt-get clean
WORKDIR /root
RUN adduser --disabled-password --gecos "" --home /home/logstash logstash && \
mkdir -p /usr/local/share/ruby-build && \
mkdir -p /opt/logstash && \
mkdir -p /mnt/host && \
chown logstash:logstash /opt/logstash
USER logstash
WORKDIR /home/logstash
RUN mkdir -p /opt/logstash/data
# used by the purge policy
LABEL retention="keep"


@ -9,27 +9,41 @@ gem "paquet", "~> 0.2"
gem "pleaserun", "~>0.0.28", require: false
gem "rake", "~> 13", require: false
gem "ruby-progressbar", "~> 1", require: false
gem "ruby-maven-libs", "~> 3", ">= 3.9.6.1"
gem "logstash-output-elasticsearch", ">= 11.14.0"
gem "polyglot", require: false
gem "treetop", require: false
gem "faraday", "~> 1", :require => false # due elasticsearch-transport (elastic-transport) depending faraday '~> 1'
gem "minitar", "~> 1", :group => :build
gem "childprocess", "~> 4", :group => :build
gem "fpm", "~> 1", ">= 1.14.1", :group => :build # compound due to bugfix https://github.com/jordansissel/fpm/pull/1856
gem "gems", "~> 1", :group => :build
gem "octokit", "~> 4.25", :group => :build
gem "rubyzip", "~> 1", :group => :build
gem "stud", "~> 0.0.22", :group => :build
# remove fileutils declaration when start using Ruby 3.2+, by default includes `fileutils-v1.7.0`
# (https://git.ruby-lang.org/ruby.git/commit/?h=ruby_3_2&id=05caafb4731c796890027cafedaac59dc108a23a)
# note that the reason to use 1.7.0 is due to https://github.com/logstash-plugins/logstash-integration-aws/issues/28
gem "fileutils", "~> 1.7"
gem "rubocop", :group => :development
# rubocop-ast 1.43.0 carries a dep on `prism` which requires native c extensions
gem 'rubocop-ast', '= 1.42.0', :group => :development
gem "belzebuth", :group => :development
gem "benchmark-ips", :group => :development
gem "ci_reporter_rspec", "~> 1", :group => :development
gem "flores", "~> 0.0.8", :group => :development
gem "json-schema", "~> 2", :group => :development
gem "logstash-devutils", "~> 2", :group => :development
gem "logstash-devutils", "~> 2.6.0", :group => :development
gem "rack-test", :require => "rack/test", :group => :development
gem "rspec", "~> 3.5", :group => :development
gem "webmock", "~> 3", :group => :development
gem "simplecov", "~> 0.22.0", :group => :development
gem "simplecov-json", require: false, :group => :development
gem "jar-dependencies", "= 0.4.1" # Gem::LoadError with jar-dependencies 0.4.2
gem "murmurhash3", "= 0.1.6" # Pins until version 0.1.7-java is released
gem "date", "= 3.3.3"
gem "thwait"
gem "bigdecimal", "~> 3.1"
gem "psych", "5.2.2"
gem "cgi", "0.3.7" # Pins until a new jruby version with updated cgi is released
gem "uri", "0.12.3" # Pins until a new jruby version with updated cgi is released

NOTICE.TXT

File diff suppressed because it is too large.


@ -20,7 +20,6 @@ supported platforms, from [downloads page](https://www.elastic.co/downloads/logs
- [Logstash Forum](https://discuss.elastic.co/c/logstash)
- [Logstash Documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
- [#logstash on freenode IRC](https://webchat.freenode.net/?channels=logstash)
- [Logstash Product Information](https://www.elastic.co/products/logstash)
- [Elastic Support](https://www.elastic.co/subscriptions)
@ -146,7 +145,7 @@ Run the doc build script from within the `docs` repo. For example:
## Testing
Most of the unit tests in Logstash are written using [rspec](http://rspec.info/) for the Ruby parts. For the Java parts, we use junit. For testing you can use the *test* `rake` tasks and the `bin/rspec` command, see instructions below:
Most of the unit tests in Logstash are written using [rspec](http://rspec.info/) for the Ruby parts. For the Java parts, we use [junit](https://junit.org). For testing you can use the *test* `rake` tasks and the `bin/rspec` command, see instructions below:
### Core tests
@ -253,10 +252,10 @@ All contributions are welcome: ideas, patches, documentation, bug reports,
complaints, and even something you drew up on a napkin.
Programming is not a required skill. Whatever you've seen about open source and
maintainers or community members saying "send patches or die" - you will not
maintainers or community members saying "send patches or die" - you will not
see that here.
It is more important to me that you are able to contribute.
It is more important that you are able to contribute.
For more information about contributing, see the
[CONTRIBUTING](./CONTRIBUTING.md) file.


@ -1,10 +0,0 @@
@echo off
setlocal enabledelayedexpansion
cd /d "%~dp0\.."
for /f %%i in ('cd') do set RESULT=%%i
"%JAVACMD%" -cp "!RESULT!\tools\ingest-converter\build\libs\ingest-converter.jar;*" ^
org.logstash.ingest.Pipeline %*
endlocal


@ -1,4 +0,0 @@
#!/usr/bin/env bash
java -cp "$(cd `dirname $0`/..; pwd)"'/tools/ingest-converter/build/libs/ingest-converter.jar:*' \
org.logstash.ingest.Pipeline "$@"


@ -6,7 +6,7 @@ set params='%*'
if "%1" == "-V" goto version
if "%1" == "--version" goto version
call "%~dp0setup.bat" || exit /b 1
1>&2 (call "%~dp0setup.bat") || exit /b 1
if errorlevel 1 (
if not defined nopauseonerror (
pause


@ -186,8 +186,8 @@ setup_vendored_jruby() {
}
setup() {
setup_java
setup_vendored_jruby
>&2 setup_java
>&2 setup_vendored_jruby
}
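Prefixing the calls with `>&2` routes setup chatter to stderr so stdout stays clean for whatever the launcher ultimately prints; in miniature (hypothetical function bodies):

```shell
setup_java() { echo "configuring java..."; }   # hypothetical noisy setup step
setup() { >&2 setup_java; }                    # chatter now goes to stderr
setup
echo "real pipeline output"                    # only this reaches stdout
```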
ruby_exec() {


@ -16,7 +16,7 @@ for %%i in ("%LS_HOME%\logstash-core\lib\jars\*.jar") do (
call :concat "%%i"
)
"%JAVACMD%" %JAVA_OPTS% -cp "%CLASSPATH%" org.logstash.ackedqueue.PqCheck %*
"%JAVACMD%" %JAVA_OPTS% org.logstash.ackedqueue.PqCheck %*
:concat
IF not defined CLASSPATH (


@ -16,7 +16,7 @@ for %%i in ("%LS_HOME%\logstash-core\lib\jars\*.jar") do (
call :concat "%%i"
)
"%JAVACMD%" %JAVA_OPTS% -cp "%CLASSPATH%" org.logstash.ackedqueue.PqRepair %*
"%JAVACMD%" %JAVA_OPTS% org.logstash.ackedqueue.PqRepair %*
:concat
IF not defined CLASSPATH (


@ -42,7 +42,7 @@ if defined LS_JAVA_HOME (
)
if not exist "%JAVACMD%" (
echo could not find java; set JAVA_HOME or ensure java is in PATH 1>&2
echo could not find java; set LS_JAVA_HOME or ensure java is in PATH 1>&2
exit /b 1
)


@ -19,8 +19,8 @@
buildscript {
ext {
snakeYamlVersion = '2.0'
shadowGradlePluginVersion = '7.0.0'
snakeYamlVersion = '2.2'
shadowGradlePluginVersion = '8.1.1'
}
repositories {
@ -37,8 +37,6 @@ buildscript {
plugins {
id "de.undercouch.download" version "4.0.4"
id "com.dorongold.task-tree" version "2.1.0"
// id "jacoco"
// id "org.sonarqube" version "4.3.0.3225"
}
apply plugin: 'de.undercouch.download'
@ -51,6 +49,7 @@ import org.logstash.gradle.tooling.ListProjectDependencies
import org.logstash.gradle.tooling.ExtractBundledJdkVersion
import org.logstash.gradle.tooling.SignAliasDefinitions
import org.logstash.gradle.tooling.ToolingUtils
import org.logstash.gradle.tooling.SnapshotArtifactURLs
allprojects {
group = 'org.logstash'
@ -58,8 +57,11 @@ allprojects {
apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'java-library'
project.sourceCompatibility = JavaVersion.VERSION_11
project.targetCompatibility = JavaVersion.VERSION_11
java {
sourceCompatibility = JavaVersion.VERSION_11
targetCompatibility = JavaVersion.VERSION_11
}
tasks.withType(JavaCompile).configureEach {
options.compilerArgs.add("-Xlint:all")
@ -99,8 +101,9 @@ allprojects {
"--add-opens=java.base/java.lang=ALL-UNNAMED",
"--add-opens=java.base/java.util=ALL-UNNAMED"
]
//https://stackoverflow.com/questions/3963708/gradle-how-to-display-test-results-in-the-console-in-real-time
testLogging {
maxHeapSize = "2g"
//https://stackoverflow.com/questions/3963708/gradle-how-to-display-test-results-in-the-console-in-real-time
testLogging {
// set options for log level LIFECYCLE
events "passed", "skipped", "failed", "standardOut"
showExceptions true
@ -143,7 +146,6 @@ subprojects {
}
version = versionMap['logstash-core']
String artifactVersionsApi = "https://artifacts-api.elastic.co/v1/versions"
tasks.register("configureArchitecture") {
String arch = System.properties['os.arch']
@ -169,27 +171,28 @@ tasks.register("configureArtifactInfo") {
description "Set the url to download stack artifacts for select stack version"
doLast {
def versionQualifier = System.getenv('VERSION_QUALIFIER')
if (versionQualifier) {
version = "$version-$versionQualifier"
def splitVersion = version.split('\\.')
int major = splitVersion[0].toInteger()
int minor = splitVersion[1].toInteger()
String branch = "${major}.${minor}"
String fallbackMajorX = "${major}.x"
boolean isFallBackPreviousMajor = minor - 1 < 0
String fallbackBranch = isFallBackPreviousMajor ? "${major-1}.x" : "${major}.${minor-1}"
def qualifiedVersion = ""
for (b in [branch, fallbackMajorX, fallbackBranch]) {
def url = "https://storage.googleapis.com/artifacts-api/snapshots/${b}.json"
try {
def snapshotInfo = new JsonSlurper().parseText(url.toURL().text)
qualifiedVersion = snapshotInfo.version
println "ArtifactInfo version: ${qualifiedVersion}"
break
} catch (Exception e) {
println "Failed to fetch branch ${branch} from ${url}: ${e.message}"
}
}
def isReleaseBuild = System.getenv('RELEASE') == "1" || versionQualifier
String apiResponse = artifactVersionsApi.toURL().text
def dlVersions = new JsonSlurper().parseText(apiResponse)
String qualifiedVersion = dlVersions['versions'].grep(isReleaseBuild ? ~/^${version}$/ : ~/^${version}-SNAPSHOT/)[0]
if (qualifiedVersion == null) {
throw new GradleException("could not find the current artifact from the artifact-api ${artifactVersionsApi} for ${version}")
}
// find latest reference to last build
String buildsListApi = "${artifactVersionsApi}/${qualifiedVersion}/builds/"
apiResponse = buildsListApi.toURL().text
def dlBuilds = new JsonSlurper().parseText(apiResponse)
def stackBuildVersion = dlBuilds["builds"][0]
project.ext.set("artifactApiVersionedBuildUrl", "${artifactVersionsApi}/${qualifiedVersion}/builds/${stackBuildVersion}")
project.ext.set("stackArtifactSuffix", qualifiedVersion)
project.ext.set("artifactApiVersion", qualifiedVersion)
}
}
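The branch-fallback ordering used above when resolving a qualified snapshot version (exact `major.minor` branch, then `major.x`, then the previous minor, or the previous major's `.x` when the minor is 0) can be sketched in Python. This is an illustrative helper, not part of the build; the function name is invented:

```python
def snapshot_branch_candidates(version: str) -> list[str]:
    """Candidate branches to probe for snapshot metadata, in the
    same order the configureArtifactInfo task tries them."""
    major, minor = (int(p) for p in version.split(".")[:2])
    branch = f"{major}.{minor}"
    fallback_major_x = f"{major}.x"
    # When minor is 0 there is no previous minor on this major,
    # so fall back to the previous major's .x branch.
    fallback = f"{major - 1}.x" if minor - 1 < 0 else f"{major}.{minor - 1}"
    return [branch, fallback_major_x, fallback]
```

For example, `snapshot_branch_candidates("9.0.1")` yields `["9.0", "9.x", "8.x"]`, matching the loop over `[branch, fallbackMajorX, fallbackBranch]` in the Gradle task.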
@ -325,7 +328,6 @@ tasks.register("assembleTarDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -342,7 +344,6 @@ tasks.register("assembleOssTarDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -357,7 +358,6 @@ tasks.register("assembleZipDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -374,7 +374,6 @@ tasks.register("assembleOssZipDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -409,7 +408,7 @@ def qaBuildPath = "${buildDir}/qa/integration"
def qaVendorPath = "${qaBuildPath}/vendor"
tasks.register("installIntegrationTestGems") {
dependsOn unpackTarDistribution
dependsOn assembleTarDistribution
def gemfilePath = file("${projectDir}/qa/integration/Gemfile")
inputs.files gemfilePath
inputs.files file("${projectDir}/qa/integration/integration_tests.gemspec")
@ -432,15 +431,12 @@ tasks.register("downloadFilebeat") {
doLast {
download {
String downloadedFilebeatName = "filebeat-${project.ext.get("stackArtifactSuffix")}-${project.ext.get("beatsArchitecture")}"
String beatsVersion = project.ext.get("artifactApiVersion")
String downloadedFilebeatName = "filebeat-${beatsVersion}-${project.ext.get("beatsArchitecture")}"
project.ext.set("unpackedFilebeatName", downloadedFilebeatName)
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/beats/packages/${downloadedFilebeatName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: buildUrls["package"]["url"])
def res = SnapshotArtifactURLs.packageUrls("beats", beatsVersion, downloadedFilebeatName)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("filebeatDownloadLocation", "${projectDir}/build/${downloadedFilebeatName}.tar.gz")
src project.ext.filebeatSnapshotUrl
@ -477,14 +473,12 @@ tasks.register("checkEsSHA") {
description "Download ES version remote's fingerprint file"
doLast {
String downloadedElasticsearchName = "elasticsearch-${project.ext.get("stackArtifactSuffix")}-${project.ext.get("esArchitecture")}"
String esVersion = project.ext.get("artifactApiVersion")
String downloadedElasticsearchName = "elasticsearch-${esVersion}-${project.ext.get("esArchitecture")}"
String remoteSHA
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/elasticsearch/packages/${downloadedElasticsearchName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
String remoteSHA = buildUrls.package.sha_url.toURL().text
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
remoteSHA = res.packageShaUrl
def localESArchive = new File("${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
if (localESArchive.exists()) {
@ -512,21 +506,19 @@ tasks.register("checkEsSHA") {
tasks.register("downloadEs") {
dependsOn = [configureArtifactInfo, checkEsSHA]
description "Download ES Snapshot for current branch version: ${version}"
description "Download ES Snapshot for current branch version: ${version}"
inputs.file("${projectDir}/versions.yml")
doLast {
download {
String downloadedElasticsearchName = "elasticsearch-${project.ext.get("stackArtifactSuffix")}-${project.ext.get("esArchitecture")}"
project.ext.set("unpackedElasticsearchName", "elasticsearch-${project.ext.get("stackArtifactSuffix")}")
String esVersion = project.ext.get("artifactApiVersion")
String downloadedElasticsearchName = "elasticsearch-${esVersion}-${project.ext.get("esArchitecture")}"
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/elasticsearch/packages/${downloadedElasticsearchName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
project.ext.set("unpackedElasticsearchName", "elasticsearch-${esVersion}")
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: buildUrls["package"]["url"])
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("elasticsearchDownloadLocation", "${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
src project.ext.elasticsearchSnapshotURL
@ -567,7 +559,8 @@ project(":logstash-integration-tests") {
systemProperty 'org.logstash.integration.specs', rubyIntegrationSpecs
environment "FEATURE_FLAG", System.getenv('FEATURE_FLAG')
workingDir integrationTestPwd
dependsOn = [installIntegrationTestGems, copyProductionLog4jConfiguration]
dependsOn installIntegrationTestGems
dependsOn copyProductionLog4jConfiguration
}
}
@ -707,22 +700,69 @@ class JDKDetails {
}
String createDownloadUrl() {
String releaseName = major > 8 ?
"jdk-${revision}+${build}":
"jdk${revision}u${build}"
String vendorOsName = vendorOsName(osName)
switch (vendor) {
case "adoptium":
return "https://api.adoptium.net/v3/binary/version/${releaseName}/${vendorOsName}/${arch}/jdk/hotspot/normal/adoptium"
default:
throw RuntimeException("Can't handle vendor: ${vendor}")
return createElasticCatalogDownloadUrl()
}
// throws an error if the local version in versions.yml doesn't match the latest from the JVM catalog.
void checkLocalVersionMatchingLatest() {
// retrieve the metadata from remote
def url = "https://jvm-catalog.elastic.co/jdk/latest_adoptiumjdk_${major}_${osName}"
def catalogMetadataUrl = URI.create(url).toURL()
def catalogConnection = catalogMetadataUrl.openConnection()
catalogConnection.requestMethod = 'GET'
assert catalogConnection.responseCode == 200
def metadataRetrieved = catalogConnection.content.text
def catalogMetadata = new JsonSlurper().parseText(metadataRetrieved)
if (catalogMetadata.version != revision || catalogMetadata.revision != build) {
throw new GradleException("Found new JDK version. Please update versions.yml to ${catalogMetadata.version} build ${catalogMetadata.revision}")
}
}
private String vendorOsName(String osName) {
if (osName == "darwin")
return "mac"
return osName
private String createElasticCatalogDownloadUrl() {
// Query the catalog at https://jvm-catalog.elastic.co/jdk and return the URL to download the JDK
// x86_64 is the default arch; aarch64 applies to macOS and Linux
def url = "https://jvm-catalog.elastic.co/jdk/adoptiumjdk-${revision}+${build}-${osName}"
// Append the CPU arch only when it is not x86_64, the default
if (arch == "aarch64") {
url += "-${arch}"
}
println "Retrieving JDK from catalog..."
def catalogMetadataUrl = URI.create(url).toURL()
def catalogConnection = catalogMetadataUrl.openConnection()
catalogConnection.requestMethod = 'GET'
if (catalogConnection.responseCode != 200) {
println "Can't find adoptiumjdk ${revision} for ${osName} in the Elastic JVM catalog"
throw new GradleException("JVM not present in catalog")
}
def metadataRetrieved = catalogConnection.content.text
println "Retrieved!"
def catalogMetadata = new JsonSlurper().parseText(metadataRetrieved)
validateMetadata(catalogMetadata)
return catalogMetadata.url
}
// Verify that the artifact metadata corresponds to the request; if not, throw an error
private void validateMetadata(Map metadata) {
if (metadata.version != revision) {
throw new GradleException("Expected to retrieve a JDK for version ${revision} but received: ${metadata.version}")
}
if (!isSameArchitecture(metadata.architecture)) {
throw new GradleException("Expected to retrieve a JDK for architecture ${arch} but received: ${metadata.architecture}")
}
}
private boolean isSameArchitecture(String metadataArch) {
if (arch == 'x64') {
return metadataArch == 'x86_64'
}
return metadataArch == arch
}
private String parseJdkArchitecture(String jdkArch) {
@ -734,16 +774,22 @@ class JDKDetails {
return "aarch64"
break
default:
throw RuntimeException("Can't handle CPU architecture: ${jdkArch}")
throw new GradleException("Can't handle CPU architecture: ${jdkArch}")
}
}
}
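The JDK catalog lookup URL constructed in `createElasticCatalogDownloadUrl` can be sketched in Python, assuming the URL scheme shown in the diff (x86_64 is the catalog default, so only aarch64 is appended). The function name is illustrative:

```python
def catalog_jdk_url(revision: str, build: str, os_name: str, arch: str) -> str:
    """Build the Elastic JVM catalog lookup URL the way the task does:
    adoptiumjdk-<revision>+<build>-<os>, with '-aarch64' appended only
    for the non-default architecture."""
    url = f"https://jvm-catalog.elastic.co/jdk/adoptiumjdk-{revision}+{build}-{os_name}"
    if arch == "aarch64":
        url += f"-{arch}"
    return url
```

For instance, a linux/x86_64 request resolves to `.../adoptiumjdk-<revision>+<build>-linux`, while a macos/aarch64 request gains the `-aarch64` suffix.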
tasks.register("lint") {
// Calls rake's 'lint' task
description = "Lint Ruby source files. Use -PrubySource=file1.rb,file2.rb to specify files"
dependsOn installDevelopmentGems
doLast {
rake(projectDir, buildDir, 'lint:report')
if (project.hasProperty("rubySource")) {
// Split the comma-separated files and pass them as separate arguments
def files = project.property("rubySource").split(",")
rake(projectDir, buildDir, "lint:report", *files)
} else {
rake(projectDir, buildDir, "lint:report")
}
}
}
@ -772,6 +818,7 @@ tasks.register("downloadJdk", Download) {
src project.ext.jdkURL
onlyIfNewer true
overwrite false
quiet true
inputs.file("${projectDir}/versions.yml")
outputs.file(project.ext.jdkDownloadLocation)
dest new File(project.ext.jdkDownloadLocation)
@ -782,6 +829,15 @@ tasks.register("downloadJdk", Download) {
}
}
tasks.register("checkNewJdkVersion") {
def versionYml = new Yaml().load(new File("$projectDir/versions.yml").text)
// use Linux x86_64 as canary platform
def jdkDetails = new JDKDetails(versionYml, "linux", "x86_64")
// throws a Gradle exception if the local and remote versions don't match
jdkDetails.checkLocalVersionMatchingLatest()
}
tasks.register("deleteLocalJdk", Delete) {
// CLI project properties: -Pjdk_bundle_os=[windows|linux|darwin]
String osName = selectOsType()
@ -841,6 +897,11 @@ tasks.register("extractBundledJdkVersion", ExtractBundledJdkVersion) {
osName = selectOsType()
}
tasks.register("javaTests") {
dependsOn ":logstash-core:javaTests"
dependsOn ":jvm-options-parser:test"
}
clean {
String jdkVersionFilename = tasks.findByName("extractBundledJdkVersion").outputFilename
delete "${projectDir}/${jdkVersionFilename}"
@ -863,4 +924,4 @@ if (System.getenv('OSS') != 'true') {
tasks.register("runXPackIntegrationTests") {
dependsOn copyPluginTestAlias
dependsOn ":logstash-xpack:rubyIntegrationTests"
}
}


@ -0,0 +1,28 @@
package org.logstash.gradle.tooling
import groovy.json.JsonSlurper
/**
* Helper class to obtain project-specific (e.g. Elasticsearch or beats/filebeat) snapshot-DRA artifacts.
* We use it in all cases apart from release builds.
* */
class SnapshotArtifactURLs {
/**
* Returns a map of the package and package SHA(512) URLs for a given project / version / downloadedPackageName
* */
static def packageUrls(String project, String projectVersion, String downloadedPackageName) {
String artifactSnapshotVersionsApiPrefix = "https://artifacts-snapshot.elastic.co"
// e.g. https://artifacts-snapshot.elastic.co/elasticsearch/latest/8.11.5-SNAPSHOT.json
String apiResponse = "${artifactSnapshotVersionsApiPrefix}/${project}/latest/${projectVersion}.json".toURL().text
def artifactUrls = new JsonSlurper().parseText(apiResponse)
String manifestUrl = artifactUrls["manifest_url"]
// e.g. https://artifacts-snapshot.elastic.co/elasticsearch/8.11.5-12345678/manifest-8.11.5-SNAPSHOT.json
apiResponse = manifestUrl.toURL().text
def packageArtifactUrls = new JsonSlurper().parseText(apiResponse)
String packageUrl = packageArtifactUrls["projects"]["${project}"]["packages"]["${downloadedPackageName}.tar.gz"]["url"]
String packageShaUrl = packageArtifactUrls["projects"]["${project}"]["packages"]["${downloadedPackageName}.tar.gz"]["sha_url"]
return ["packageUrl": packageUrl, "packageShaUrl": packageShaUrl]
}
}
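The two-step lookup above first fetches `latest/<version>.json` to find a `manifest_url`, then reads the package and SHA URLs out of the manifest. The manifest-parsing step can be sketched in Python against an already-parsed manifest dict (the function name is illustrative; the dict layout follows the class above):

```python
def package_urls(manifest: dict, project: str, package_name: str) -> dict:
    """Given a parsed snapshot manifest (the JSON behind manifest_url),
    extract the same packageUrl / packageShaUrl pair the Gradle helper
    returns for <package_name>.tar.gz."""
    entry = manifest["projects"][project]["packages"][f"{package_name}.tar.gz"]
    return {"packageUrl": entry["url"], "packageShaUrl": entry["sha_url"]}
```

Keeping the network fetch separate from the dict traversal like this makes the extraction step trivially testable with a sample manifest.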


@ -2,7 +2,7 @@
# Declare Backstage Component that represents the Logstash tool
# *************************************************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/e57ee3bed7a6f73077a3f55a38e76e40ec87a7cf/rre.schema.json
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
@ -21,19 +21,25 @@ metadata:
url: https://elastic.co/logstash
spec:
type: tool
owner: group:ingest-fp
owner: group:logstash
system: platform-ingest
lifecycle: production
dependsOn:
- resource:buildkite-logstash-serverless-integration-testing
- resource:logstash-snyk-report
- logstash-dra-snapshot-pipeline
- logstash-dra-staging-pipeline
- resource:logstash-dra-snapshot-pipeline
- resource:logstash-dra-staging-pipeline
- resource:logstash-linux-jdk-matrix-pipeline
- resource:logstash-windows-jdk-matrix-pipeline
- resource:logstash-benchmark-pipeline
- resource:logstash-health-report-tests-pipeline
- resource:logstash-jdk-availability-check-pipeline
# ***********************************
# Declare serverless IT pipeline
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/e57ee3bed7a6f73077a3f55a38e76e40ec87a7cf/rre.schema.json
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
@ -41,8 +47,8 @@ metadata:
description: Buildkite pipeline for the Serverless Integration Test
spec:
type: buildkite-pipeline
owner: group:ingest-fp
system: buildkite
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -51,14 +57,18 @@ spec:
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/serverless_integration_pipeline.yml"
maximum_timeout_in_minutes: 90
provider_settings:
trigger_mode: none # don't trigger jobs from github activity
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
@ -73,7 +83,7 @@ spec:
# Declare snyk-repo pipeline
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/e57ee3bed7a6f73077a3f55a38e76e40ec87a7cf/rre.schema.json
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
@ -81,8 +91,8 @@ metadata:
description: 'The logstash-snyk-report pipeline.'
spec:
type: buildkite-pipeline
owner: group:ingest-fp
system: buildkite
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -92,15 +102,21 @@ spec:
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/snyk_report_pipeline.yml"
maximum_timeout_in_minutes: 60
provider_settings:
trigger_mode: none # don't trigger jobs
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
schedules:
@ -114,7 +130,7 @@ spec:
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/e57ee3bed7a6f73077a3f55a38e76e40ec87a7cf/rre.schema.json
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
@ -126,38 +142,26 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: buildkite
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: logstash-dra-snapshot-pipeline
description: ':logstash: The DRA SNAPSHOT (Daily, Auto) pipeline'
description: ':logstash: The DRA SNAPSHOT pipeline'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/dra_pipeline.yml"
# TODO: uncomment out the schedule after testing + disabling Jenkins Job
# schedules:
# Daily 7_17:
# branch: '7.17'
# cronline: 30 01 * * *
# message: Daily SNAPSHOT build for 7.17
# Daily 8_10:
# branch: '8.10'
# cronline: 30 01 * * *
# message: Daily SNAPSHOT build for 8.10
# Daily main:
# branch: main
# cronline: 30 01 * * *
# message: Daily SNAPSHOT build for main
maximum_timeout_in_minutes: 120
skip_intermediate_builds: true
provider_settings:
trigger_mode: none
env:
WORKFLOW_TYPE: 'snapshot'
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'false' # don't alert during development
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
@ -169,7 +173,7 @@ spec:
access_level: READ_ONLY
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/e57ee3bed7a6f73077a3f55a38e76e40ec87a7cf/rre.schema.json
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
@ -181,7 +185,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: buildkite
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -191,14 +195,16 @@ spec:
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/dra_pipeline.yml"
maximum_timeout_in_minutes: 120
skip_intermediate_builds: true
provider_settings:
trigger_mode: none
env:
WORKFLOW_TYPE: 'staging'
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'false' # don't alert during development
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
@ -212,3 +218,588 @@ spec:
# ***********************************
# SECTION END: DRA pipelines
# ***********************************
# ***********************************
# SECTION START: Pull requests
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-pull-request-pipeline
description: 'Logstash Pull Request pipeline'
links:
- title: 'Logstash Pull Request pipeline'
url: https://buildkite.com/elastic/logstash-pull-request-pipeline
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: logstash-pull-request-pipeline
description: ':logstash: Testing for Logstash Pull Requests'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/pull_request_pipeline.yml"
maximum_timeout_in_minutes: 120
provider_settings:
build_pull_request_forks: false
build_pull_requests: true # requires filter_enabled and filter_condition settings as below when used with buildkite-pr-bot
build_branches: false
build_tags: false
filter_enabled: true
filter_condition: >-
build.creator.name == 'elasticmachine' && build.pull_request.id != null
cancel_intermediate_builds: true
skip_intermediate_builds: true
env:
ELASTIC_PR_COMMENTS_ENABLED: 'true'
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
# ***********************************
# SECTION END: Pull requests
# ***********************************
# *******************************
# SECTION START: aarch64 pipeline
# *******************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-aarch64-pipeline
description: 'Logstash aarch64 pipeline'
links:
- title: 'Logstash aarch64 pipeline'
url: https://buildkite.com/elastic/logstash-aarch64-pipeline
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: "Logstash aarch64 pipeline"
description: ':logstash: Exhaustive tests for the aarch64 architecture'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/aarch64_pipeline.yml"
maximum_timeout_in_minutes: 90
provider_settings:
trigger_mode: none
cancel_intermediate_builds: true
skip_intermediate_builds: true
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
# *****************************
# SECTION END: aarch64 pipeline
# *****************************
# *************************************************************
# SECTION START: JDK matrix tests (Linux and Windows) pipelines
# *************************************************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-linux-jdk-matrix-pipeline
description: 'Logstash Linux JDK matrix pipeline'
links:
- title: 'Logstash Linux JDK matrix pipeline'
url: https://buildkite.com/elastic/logstash-linux-jdk-matrix-pipeline
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: "Logstash Linux JDK matrix pipeline"
description: ':java: :linux: Test Logstash against a matrix of JDKs and Linux Distributions'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/linux_jdk_matrix_pipeline.yml"
maximum_timeout_in_minutes: 120
provider_settings:
trigger_mode: none
cancel_intermediate_builds: true
skip_intermediate_builds: true
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-windows-jdk-matrix-pipeline
description: 'Logstash Windows JDK matrix pipeline'
links:
- title: 'Logstash Windows JDK matrix pipeline'
url: https://buildkite.com/elastic/logstash-windows-jdk-matrix-pipeline
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: "Logstash Windows JDK matrix pipeline"
description: ':java: :windows: Test Logstash against a matrix of JDKs and Windows releases'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/windows_jdk_matrix_pipeline.yml"
maximum_timeout_in_minutes: 120
provider_settings:
trigger_mode: none
cancel_intermediate_builds: true
skip_intermediate_builds: true
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
# ***********************************************************
# SECTION END: JDK matrix tests (Linux and Windows) pipelines
# ***********************************************************
# ****************************************
# SECTION START: Exhaustive tests pipeline
# ****************************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-exhaustive-tests-pipeline
description: 'Logstash Exhaustive tests pipeline'
links:
- title: 'Logstash Exhaustive tests pipeline'
url: https://buildkite.com/elastic/logstash-exhaustive-tests-pipeline
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: "Logstash Exhaustive tests pipeline"
description: '🔍 Run exhaustive tests against Logstash using different operating systems'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/exhaustive_tests_pipeline.yml"
provider_settings:
build_branches: true
build_pull_request_forks: false
build_pull_requests: false
build_tags: false
trigger_mode: code
filter_condition: >-
build.branch !~ /^backport.*$/ && build.branch !~ /^mergify\/bp\/.*$/
filter_enabled: true
cancel_intermediate_builds: false
skip_intermediate_builds: false
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
# **************************************
# SECTION END: Exhaustive tests pipeline
# **************************************
# ********************************************
# Declare supported plugin tests pipeline
# ********************************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-supported-plugins-test-pipeline
description: "Execute spec tests of all supported tier1 and tier2 plugins using the current branch's Logstash"
links:
- title: 'Logstash supported plugins test pipeline'
url: https://buildkite.com/elastic/logstash-supported-plugins-test-pipeline
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: 'Logstash supported plugins test pipeline'
description: ':logstash: Supported plugins test pipeline'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/supported_plugins_test_pipeline.yml"
maximum_timeout_in_minutes: 90
skip_intermediate_builds: true
provider_settings:
trigger_mode: none
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
# **************************************************************
# SECTION START: Scheduler pipeline
# (Definitions for scheduled runs of various Logstash pipelines)
# **************************************************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-pipeline-scheduler
description: 'Scheduled runs of Logstash pipelines per release branch'
links:
- title: 'Scheduled runs of Logstash pipelines per release branch'
url: https://buildkite.com/elastic/logstash-pipeline-scheduler
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: Logstash Pipeline Scheduler
description: ':alarm_clock: Scheduled runs of Logstash pipelines per release branch'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/trigger_pipelines.yml"
maximum_timeout_in_minutes: 240
schedules:
Daily Snapshot DRA:
branch: main
cronline: 30 02 * * *
message: Daily trigger of Snapshot DRA Pipeline per branch
env:
PIPELINES_TO_TRIGGER: 'logstash-dra-snapshot-pipeline'
Weekly JDK matrix:
branch: main
cronline: 0 1 * * 2
message: Weekly trigger of JDK matrix pipelines per branch
env:
PIPELINES_TO_TRIGGER: 'logstash-linux-jdk-matrix-pipeline,logstash-windows-jdk-matrix-pipeline'
AARCH64 Tests:
branch: main
cronline: 0 2 * * 1 # every Monday@2AM UTC
message: Weekly trigger of AARCH64 pipeline per branch
env:
PIPELINES_TO_TRIGGER: 'logstash-aarch64-pipeline'
Exhaustive Tests:
branch: main
cronline: 0 3 * * 1%2 # every other Monday@3AM UTC, see https://buildkite.com/docs/pipelines/scheduled-builds#schedule-intervals-crontab-time-syntax
message: Biweekly trigger of Exhaustive pipeline for non-main branches
env:
PIPELINES_TO_TRIGGER: 'logstash-exhaustive-tests-pipeline'
EXCLUDE_BRANCHES: 'main'
skip_intermediate_builds: true
provider_settings:
trigger_mode: none
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'false'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
# *******************************
# SECTION END: Scheduler pipeline
# *******************************
# ***********************************
# Declare Benchmark pipeline
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
  name: logstash-benchmark-pipeline
  description: Buildkite pipeline for the Logstash benchmark
  links:
    - title: 'Logstash Benchmark (Daily, Auto) pipeline'
      url: https://buildkite.com/elastic/logstash-benchmark-pipeline
spec:
  type: buildkite-pipeline
  owner: group:logstash
  system: platform-ingest
  implementation:
    apiVersion: buildkite.elastic.dev/v1
    kind: Pipeline
    metadata:
      name: logstash-benchmark-pipeline
      description: ':running: The Benchmark pipeline for snapshot version'
    spec:
      repository: elastic/logstash
      pipeline_file: ".buildkite/benchmark_pipeline.yml"
      maximum_timeout_in_minutes: 90
      provider_settings:
        trigger_mode: none # don't trigger jobs from github activity
      env:
        ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'false'
        SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
        SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
        SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
      teams:
        ingest-fp:
          access_level: MANAGE_BUILD_AND_READ
        logstash:
          access_level: MANAGE_BUILD_AND_READ
        ingest-eng-prod:
          access_level: MANAGE_BUILD_AND_READ
        everyone:
          access_level: READ_ONLY
      schedules:
        Daily serverless test on core_serverless_test branch:
          branch: main
          cronline: 30 04 * * *
          message: Daily trigger of Benchmark Pipeline
# *******************************
# SECTION END: Benchmark pipeline
# *******************************
# ***********************************
# SECTION START: Benchmark Marathon
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
  name: logstash-benchmark-marathon-pipeline
  description: Buildkite pipeline for benchmarking multi-version
  links:
    - title: 'Logstash Benchmark Marathon'
      url: https://buildkite.com/elastic/logstash-benchmark-marathon-pipeline
spec:
  type: buildkite-pipeline
  owner: group:logstash
  system: platform-ingest
  implementation:
    apiVersion: buildkite.elastic.dev/v1
    kind: Pipeline
    metadata:
      name: logstash-benchmark-marathon-pipeline
      description: ':running: The Benchmark Marathon for multi-version'
    spec:
      repository: elastic/logstash
      pipeline_file: ".buildkite/benchmark_marathon_pipeline.yml"
      maximum_timeout_in_minutes: 480
      provider_settings:
        trigger_mode: none # don't trigger jobs from github activity
      env:
        ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'false'
        SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
        SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
        SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
      teams:
        ingest-fp:
          access_level: MANAGE_BUILD_AND_READ
        logstash:
          access_level: MANAGE_BUILD_AND_READ
        ingest-eng-prod:
          access_level: MANAGE_BUILD_AND_READ
        everyone:
          access_level: READ_ONLY
# *******************************
# SECTION END: Benchmark Marathon
# *******************************
# ***********************************
# Declare Health Report Tests pipeline
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
  name: logstash-health-report-tests-pipeline
  description: Buildkite pipeline for the Logstash Health Report Tests
  links:
    - title: ':logstash: Logstash Health Report Tests (Daily, Auto) pipeline'
      url: https://buildkite.com/elastic/logstash-health-report-tests-pipeline
spec:
  type: buildkite-pipeline
  owner: group:logstash
  system: platform-ingest
  implementation:
    apiVersion: buildkite.elastic.dev/v1
    kind: Pipeline
    metadata:
      name: logstash-health-report-tests-pipeline
      description: ':logstash: Logstash Health Report tests :pipeline:'
    spec:
      repository: elastic/logstash
      pipeline_file: ".buildkite/health_report_tests_pipeline.yml"
      maximum_timeout_in_minutes: 30 # usually tests last max ~17mins
      provider_settings:
        trigger_mode: none # don't trigger jobs from github activity
      env:
        ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
        SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
        SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
        SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
      teams:
        ingest-fp:
          access_level: MANAGE_BUILD_AND_READ
        logstash:
          access_level: MANAGE_BUILD_AND_READ
        ingest-eng-prod:
          access_level: MANAGE_BUILD_AND_READ
        everyone:
          access_level: READ_ONLY
      schedules:
        Daily Health Report tests on main branch:
          branch: main
          cronline: 30 20 * * *
          message: Daily trigger of Health Report Tests Pipeline
# *******************************
# SECTION END: Health Report Tests pipeline
# *******************************
# ***********************************
# Declare JDK check pipeline
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
  name: logstash-jdk-availability-check-pipeline
  description: ":logstash: check availability of new JDK version"
spec:
  type: buildkite-pipeline
  owner: group:logstash
  system: platform-ingest
  implementation:
    apiVersion: buildkite.elastic.dev/v1
    kind: Pipeline
    metadata:
      name: logstash-jdk-availability-check-pipeline
    spec:
      repository: elastic/logstash
      pipeline_file: ".buildkite/jdk_availability_check_pipeline.yml"
      maximum_timeout_in_minutes: 10
      provider_settings:
        trigger_mode: none # don't trigger jobs from github activity
      env:
        ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
        SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
        SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
        SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
      teams:
        logstash:
          access_level: MANAGE_BUILD_AND_READ
        ingest-eng-prod:
          access_level: MANAGE_BUILD_AND_READ
        everyone:
          access_level: READ_ONLY
      schedules:
        Weekly JDK availability check (main):
          branch: main
          cronline: 0 2 * * 1 # every Monday@2AM UTC
          message: Weekly trigger of JDK update availability pipeline per branch
          env:
            PIPELINES_TO_TRIGGER: 'logstash-jdk-availability-check-pipeline'
        Weekly JDK availability check (8.x):
          branch: 8.x
          cronline: 0 2 * * 1 # every Monday@2AM UTC
          message: Weekly trigger of JDK update availability pipeline per branch
          env:
            PIPELINES_TO_TRIGGER: 'logstash-jdk-availability-check-pipeline'
# *******************************
# SECTION END: JDK check pipeline
# *******************************


@@ -1,90 +1,58 @@
#!/usr/bin/env bash
set -e
set -x
set -eo pipefail
function get_package_type {
# determines OS packaging system; at the moment either rpm or deb
source /etc/os-release
if [[ $ID == "ubuntu" || $ID == "debian" || $ID_LIKE == "debian" ]]; then
PACKAGE_TYPE="deb"
elif [[ $ID_LIKE == *"rhel"* || $ID_LIKE == *"fedora"* || $ID_LIKE == *"suse"* ]]; then
PACKAGE_TYPE="rpm"
else
echo "^^^ +++ Unsupported Linux distribution [$ID]. Acceptance packaging tests only support deb or rpm based distributions. Exiting."
exit 1
fi
}
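The distribution check above can be exercised on its own; here is a minimal, hypothetical sketch that parameterizes the same branching on a mock os-release file (so it runs without reading the host's `/etc/os-release` — the function name and mock contents are illustrative, not part of the original script):

```shell
#!/usr/bin/env bash
# Standalone sketch of get_package_type's branching, reading a caller-supplied
# os-release-style file instead of /etc/os-release.
detect_package_type() {
  local ID="" ID_LIKE=""
  source "$1"   # assignments in the sourced file land in these locals
  if [[ $ID == "ubuntu" || $ID == "debian" || $ID_LIKE == "debian" ]]; then
    echo "deb"
  elif [[ $ID_LIKE == *"rhel"* || $ID_LIKE == *"fedora"* || $ID_LIKE == *"suse"* ]]; then
    echo "rpm"
  else
    return 1   # unsupported distribution
  fi
}

mock=$(mktemp)
printf 'ID=ubuntu\nID_LIKE=debian\n' > "$mock"
detect_package_type "$mock"    # prints: deb
printf 'ID=rocky\nID_LIKE="rhel centos fedora"\n' > "$mock"
detect_package_type "$mock"    # prints: rpm
rm -f "$mock"
```

Sourcing inside the function keeps the mock file's `ID`/`ID_LIKE` assignments scoped to the declared locals, so repeated calls with different mocks do not leak state.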
# Since we are using the system jruby, we need to make sure our jvm process
# uses at least 1g of memory, If we don't do this we can get OOM issues when
# installing gems. See https://github.com/elastic/logstash/issues/5179
export JRUBY_OPTS="-J-Xmx1g"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.console=plain -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export OSS=true
if [ -n "$BUILD_JAVA_HOME" ]; then
GRADLE_OPTS="$GRADLE_OPTS -Dorg.gradle.java.home=$BUILD_JAVA_HOME"
fi
SELECTED_TEST_SUITE=$1
# The acceptance test in our CI infrastructure doesn't clear the workspace between run
# this mean the lock of the Gemfile can be sticky from a previous run, before generating any package
# we will clear them out to make sure we use the latest version of theses files
# If we don't do this we will run into gem Conflict error.
[ -f Gemfile ] && rm Gemfile
[ -f Gemfile.lock ] && rm Gemfile.lock
# When running these tests in a Jenkins matrix, in parallel, once one Vagrant job is done, the Jenkins ProcessTreeKiller will kill any other Vagrant processes with the same
# BUILD_ID unless you set this magic flag: https://wiki.jenkins.io/display/JENKINS/ProcessTreeKiller
export BUILD_ID=dontKillMe
LS_HOME="$PWD"
QA_DIR="$PWD/qa"
# Always run the halt, even if the test times out or an exit is sent
cleanup() {
cd $LS_HOME
cd $QA_DIR
bundle check || bundle install
bundle exec rake qa:vm:halt
}
trap cleanup EXIT
get_package_type
# Cleanup any stale VMs from old jobs first
cleanup
if [[ $SELECTED_TEST_SUITE == $"redhat" ]]; then
echo "Generating the RPM, make sure you start with a clean environment before generating other packages."
cd $LS_HOME
# in CI (Buildkite), packaging artifacts are pre-built from a previous step
if [[ $BUILDKITE == true ]]; then
export LS_ARTIFACTS_PATH="$HOME/build"
echo "--- Downloading artifacts from \"build/*${PACKAGE_TYPE}\" to $LS_ARTIFACTS_PATH"
set -x
# also creates build/ under $HOME
buildkite-agent artifact download "build/*${PACKAGE_TYPE}" $HOME
set +x
echo "--- Running gradle"
./gradlew clean bootstrap
rake artifact:rpm
echo "Acceptance: Installing dependencies"
cd $QA_DIR
bundle install
echo "Acceptance: Running the tests"
bundle exec rake qa:vm:setup["redhat"]
bundle exec rake qa:vm:ssh_config
bundle exec rake qa:acceptance:redhat
bundle exec rake qa:vm:halt["redhat"]
elif [[ $SELECTED_TEST_SUITE == $"debian" ]]; then
echo "Generating the DEB, make sure you start with a clean environment before generating other packages."
cd $LS_HOME
else
echo "--- Detected a distribution that supports \033[33m[$PACKAGE_TYPE]\033[0m packages. Running gradle."
./gradlew clean bootstrap
rake artifact:deb
echo "Acceptance: Installing dependencies"
cd $QA_DIR
bundle install
echo "Acceptance: Running the tests"
bundle exec rake qa:vm:setup["debian"]
bundle exec rake qa:vm:ssh_config
bundle exec rake qa:acceptance:debian
bundle exec rake qa:vm:halt["debian"]
elif [[ $SELECTED_TEST_SUITE == $"all" ]]; then
echo "Building Logstash artifacts"
cd $LS_HOME
./gradlew clean bootstrap
rake artifact:all
echo "Acceptance: Installing dependencies"
cd $QA_DIR
bundle install
echo "Acceptance: Running the tests"
bundle exec rake qa:vm:setup
bundle exec rake qa:vm:ssh_config
bundle exec rake qa:acceptance:all
bundle exec rake qa:vm:halt
echo "--- Building Logstash artifacts"
rake artifact:$PACKAGE_TYPE
fi
echo "--- Acceptance: Installing dependencies"
cd $QA_DIR
bundle install
echo "--- Acceptance: Running the tests"
rake qa:acceptance:all
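Old and new versions of this script ultimately map a suite selector to a rake artifact task (with the fallback driven by `PACKAGE_TYPE` from `get_package_type`). A condensed, hypothetical sketch of that mapping — the helper name is illustrative and not part of the script:

```shell
#!/usr/bin/env bash
# Hypothetical condensation of the suite -> rake task selection shown above.
# PACKAGE_TYPE stands in for the value get_package_type would export.
artifact_task_for() {
  case "$1" in
    redhat) echo "artifact:rpm" ;;
    debian) echo "artifact:deb" ;;
    all)    echo "artifact:all" ;;
    *)      echo "artifact:${PACKAGE_TYPE:-unknown}" ;;   # new-style: build for detected package type
  esac
}

PACKAGE_TYPE=deb
artifact_task_for redhat   # prints: artifact:rpm
artifact_task_for ""       # prints: artifact:deb
```

The `*)` arm mirrors the new behavior at the end of the diff, where the suite argument is no longer consulted and `rake artifact:$PACKAGE_TYPE` is run directly.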


@@ -0,0 +1,7 @@
+#!/usr/bin/env bash
+set -eo pipefail
+export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
+echo "Checking local JDK version against latest remote from JVM catalog"
+./gradlew checkNewJdkVersion


@@ -6,7 +6,7 @@ set -x
 # uses at least 1g of memory, If we don't do this we can get OOM issues when
 # installing gems. See https://github.com/elastic/logstash/issues/5179
 export JRUBY_OPTS="-J-Xmx1g"
-export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
+export GRADLE_OPTS="-Xmx4g -Dorg.gradle.console=plain -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
 if [ -n "$BUILD_JAVA_HOME" ]; then
   GRADLE_OPTS="$GRADLE_OPTS -Dorg.gradle.java.home=$BUILD_JAVA_HOME"
@@ -15,7 +15,7 @@ fi
 # Can run either a specific flavor, or all flavors -
 # eg `ci/acceptance_tests.sh oss` will run tests for open source container
 # `ci/acceptance_tests.sh full` will run tests for the default container
-# `ci/acceptance_tests.sh ubi8` will run tests for the ubi8 based container
+# `ci/acceptance_tests.sh wolfi` will run tests for the wolfi based container
 # `ci/acceptance_tests.sh` will run tests for all containers
 SELECTED_TEST_SUITE=$1
@@ -36,44 +36,44 @@ echo "Building Logstash artifacts"
 cd $LS_HOME
 if [[ $SELECTED_TEST_SUITE == "oss" ]]; then
-  echo "building oss docker images"
+  echo "--- Building $SELECTED_TEST_SUITE docker images"
   cd $LS_HOME
   rake artifact:docker_oss
-  echo "Acceptance: Installing dependencies"
+  echo "--- Acceptance: Installing dependencies"
   cd $QA_DIR
   bundle install
-  echo "Acceptance: Running the tests"
+  echo "--- Acceptance: Running the tests"
   bundle exec rspec docker/spec/oss/*_spec.rb
 elif [[ $SELECTED_TEST_SUITE == "full" ]]; then
-  echo "building full docker images"
+  echo "--- Building $SELECTED_TEST_SUITE docker images"
   cd $LS_HOME
-  rake artifact:docker
-  echo "Acceptance: Installing dependencies"
+  rake artifact:build_docker_full
+  echo "--- Acceptance: Installing dependencies"
   cd $QA_DIR
   bundle install
-  echo "Acceptance: Running the tests"
+  echo "--- Acceptance: Running the tests"
   bundle exec rspec docker/spec/full/*_spec.rb
-elif [[ $SELECTED_TEST_SUITE == "ubi8" ]]; then
-  echo "building ubi8 docker images"
+elif [[ $SELECTED_TEST_SUITE == "wolfi" ]]; then
+  echo "--- Building $SELECTED_TEST_SUITE docker images"
   cd $LS_HOME
-  rake artifact:docker_ubi8
-  echo "Acceptance: Installing dependencies"
+  rake artifact:docker_wolfi
+  echo "--- Acceptance: Installing dependencies"
   cd $QA_DIR
   bundle install
-  echo "Acceptance: Running the tests"
-  bundle exec rspec docker/spec/ubi8/*_spec.rb
+  echo "--- Acceptance: Running the tests"
+  bundle exec rspec docker/spec/wolfi/*_spec.rb
 else
-  echo "Building all docker images"
+  echo "--- Building all docker images"
   cd $LS_HOME
   rake artifact:docker_only
-  echo "Acceptance: Installing dependencies"
+  echo "--- Acceptance: Installing dependencies"
   cd $QA_DIR
   bundle install
-  echo "Acceptance: Running the tests"
+  echo "--- Acceptance: Running the tests"
   bundle exec rspec docker/spec/**/*_spec.rb
 fi
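The if/elif chain in the new version of this script amounts to a small flavor-to-spec-glob selector, where an empty argument means "all flavors". A hypothetical sketch of just that dispatch (the function name is illustrative; the spec paths are taken from the rspec calls above):

```shell
#!/usr/bin/env bash
# Hypothetical selector mirroring ci/acceptance_tests.sh: map a flavor
# argument to the rspec spec glob that suite runs.
specs_for_flavor() {
  case "$1" in
    oss)   echo "docker/spec/oss/*_spec.rb" ;;
    full)  echo "docker/spec/full/*_spec.rb" ;;
    wolfi) echo "docker/spec/wolfi/*_spec.rb" ;;
    "")    echo "docker/spec/**/*_spec.rb" ;;   # no argument: every flavor
    *)     echo "unknown flavor: $1" >&2; return 1 ;;
  esac
}

specs_for_flavor wolfi   # prints: docker/spec/wolfi/*_spec.rb
```

Note the real script falls through to "all flavors" for any unrecognized argument; this sketch instead rejects unknown flavors, which is a stricter (assumed) behavior.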


@@ -1,4 +0,0 @@
-#!/bin/bash
-# we may pass "persistent_queues" to FEATURE_FLAG to enable PQ in the integration tests
-export DOCKER_ENV_OPTS="${DOCKER_ENV_OPTS} -e FEATURE_FLAG"
-ci/docker_run.sh logstash-integration-tests ci/integration_tests.sh $@

Some files were not shown because too many files have changed in this diff.