Compare commits

..

90 commits

Author SHA1 Message Date
github-actions[bot]
8fc4b75e66
bump lock file for 8.16 (#17153)
* Update patch plugin versions in gemfile lock

* Remove universal-java-17 to remain consistent with previous version

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2025-02-26 08:34:40 -05:00
github-actions[bot]
bbc80ec0ff
Add Windows 2025 to CI (#17133) (#17145)
This commit adds Windows 2025 to the Windows JDK matrix and exhaustive tests pipelines.

(cherry picked from commit 4d52b7258d)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
2025-02-24 16:48:53 +02:00
github-actions[bot]
4205eb7a75
allow concurrent Batch deserialization (#17050) (#17108)
Currently the deserialization is behind the readBatch's lock, so any large batch will take time deserializing, causing any other Queue writer (e.g. netty executor threads) and any other Queue reader (pipeline worker) to block.

This commit moves the deserialization out of the lock, allowing multiple pipeline workers to deserialize batches concurrently.

- add intermediate batch-holder from `Queue` methods
- make the intermediate batch-holder a private inner class of `Queue` with a descriptive name `SerializedBatchHolder`
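
As a rough sketch of the locking pattern described above (the `SerializedBatchHolder` name comes from the commit message; everything else is illustrative, not the actual `org.logstash.ackedqueue.Queue` code), the lock is held only to read the raw serialized elements, and deserialization happens outside it:

~~~ java
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

class QueueSketch {
    private final ReentrantLock lock = new ReentrantLock();

    /** Holds the raw serialized elements read under the lock; deserialized later without it. */
    record SerializedBatchHolder(List<byte[]> elements) {
        List<String> deserialize() {
            // stand-in for event deserialization
            return elements.stream().map(String::new).toList();
        }
    }

    private SerializedBatchHolder readSerializedBatch(int limit) {
        lock.lock();
        try {
            // ... read up to `limit` serialized elements from the queue pages ...
            return new SerializedBatchHolder(List.of());
        } finally {
            lock.unlock();   // the lock is released before any deserialization happens
        }
    }

    /** Pipeline workers call this; each worker deserializes its own batch concurrently. */
    public List<String> readBatch(int limit) {
        return readSerializedBatch(limit).deserialize();
    }
}
~~~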

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
(cherry picked from commit 637f447b88)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2025-02-17 22:22:50 +00:00
github-actions[bot]
4d04d6c7a4
CPM handle 404 response gracefully with user-friendly log (#17052) (#17100)
Starting from es-output 12.0.2, a 404 response is treated as an error. Previously, central pipeline management considered 404 as an empty pipeline, not an error.

This commit restores the expected behavior by handling 404 gracefully and logging a user-friendly message.
It also removes the redundant pipeline cache in CPM.

Fixes: #17035
(cherry picked from commit e896cd727d)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-02-17 13:51:22 +00:00
github-actions[bot]
a7ca91af8d
Allow capturing heap dumps in DRA BK jobs (#17081) (#17088)
This commit allows Buildkite to capture any heap dumps produced
during DRA builds.

(cherry picked from commit 78c34465dc)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
2025-02-14 09:38:11 +02:00
Dimitrios Liappis
f9606502ec
Fix conflicts (#17076) 2025-02-12 10:45:13 -08:00
github-actions[bot]
ce8d43f39b
Don't honor VERSION_QUALIFIER if set but empty (#17032) (#17071)
PR #17006 revealed that the `VERSION_QUALIFIER` env var gets honored in
various scripts when present but empty.
This shouldn't be the case as the DRA process is designed to gracefully
ignore empty values for this variable.

This commit changes various ruby scripts to not treat "" as truthy.
Bash scripts (used by CI etc.) are already ok with this as part of
refactorings done in #16907.

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
(cherry picked from commit c7204fd7d6)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
2025-02-12 15:55:29 +02:00
github-actions[bot]
69e554b799
inject VERSION_QUALIFIER into artifacts (#16904) (#17049) (#17066)
VERSION_QUALIFIER was already observed in the rake artifacts task, but only to influence the name of the artifact.

This commit ensures that the qualifier is also displayed in the CLI and in the HTTP API.

(cherry picked from commit 00f8b91c35)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2025-02-12 09:10:35 +00:00
Ry Biesemeyer
6dceb2453d
version bump 8.16.5 (#17053) 2025-02-11 08:33:49 -08:00
github-actions[bot]
168f969013
Release notes for 8.16.4 (#17043)
* Update release notes for 8.16.4

* humanize release notes 8.16.4

* Apply suggestions from code review

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-02-10 22:40:02 -08:00
github-actions[bot]
3a35abc24e
bump lock file for 8.16 (#17023)
* Update patch plugin versions in gemfile lock

* pull in minors from ES input and filter

* remove addition of java 17 platform

already covered by `java` platform

* pull in minor from http input to get fixed netty thread names

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2025-02-05 10:07:11 -08:00
github-actions[bot]
002d4897c4
Backport PR #16968 to 8.16: Fix BufferedTokenizer to properly resume after a buffer full condition respecting the encoding of the input string (#16968) (#17021)
Backport PR #16968 to 8.16 branch, original message:

----

Permits using the tokenizer effectively even in contexts where a line is bigger than the size limit.
Fixes an issue related to the token size limit error: when the offending token was bigger than the input fragment, the tokenizer was unable to recover the token stream from the first delimiter after the offending token and lost part of the tokens.

## How it solves the problem
This is a second take at fixing the processing of tokens from the tokenizer after a buffer full error. The first try (#16482) was rolled back due to the encoding error (#16694).
The first try failed to return the tokens in the same encoding as the input.
This PR does a couple of things:
- accumulates the tokens, so that after a buffer-full condition it can resume with the next tokens after the offending one.
- respects the encoding of the input string. Uses the `concat` method instead of `addAll`, which avoids converting RubyString to String and back to RubyString. When returning the head `StringBuilder`, it enforces the encoding of the input charset.
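
A simplified, String-based sketch of the resume-after-overflow idea (a hypothetical class, not the actual `BufferedTokenizerExt`; the real fix also re-applies the input charset to the returned head, which plain Java strings sidestep):

~~~ java
import java.util.ArrayList;
import java.util.List;

final class TokenizerSketch {
    private final StringBuilder head = new StringBuilder();
    private final int sizeLimit;
    private boolean bufferFull = false;

    TokenizerSketch(int sizeLimit) { this.sizeLimit = sizeLimit; }

    List<String> extract(String data) {
        List<String> tokens = new ArrayList<>();
        String[] pieces = data.split("\n", -1);            // last piece is an unterminated fragment
        for (int i = 0; i < pieces.length; i++) {
            boolean terminated = i < pieces.length - 1;     // a delimiter followed this piece
            if (bufferFull) {
                if (terminated) bufferFull = false;         // resume from the first delimiter after the offending token
                continue;                                    // drop fragments of the oversized token
            }
            head.append(pieces[i]);
            if (head.length() > sizeLimit) {
                head.setLength(0);
                bufferFull = !terminated;                    // keep skipping until a delimiter arrives
                continue;                                    // (the real implementation raises "input buffer full" here)
            }
            if (terminated) {
                tokens.add(head.toString());
                head.setLength(0);
            }
        }
        return tokens;
    }
}
~~~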

(cherry picked from commit 1c8cf546c2)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-02-05 12:15:37 +01:00
João Duarte
32e6def9f8
Upgrade JDK to 21.0.6+7 (#16990) 2025-01-30 15:57:06 +00:00
Mashhur
4e87a30c1c
Validate the size limit in BufferedTokenizer. (#16882) (#16892)
(cherry picked from commit a215101032)
2025-01-29 13:52:33 -08:00
github-actions[bot]
589f1b652d
plugin manager: add --level=[major|minor|patch] (default: minor) (#16899) (#16975)
* plugin manager: add `--level=[major|minor|patch]` (default: `minor`)

* docs: plugin manager update `--level` behavior

* Update docs/static/plugin-manager.asciidoc

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* docs: plugin update major as subheading

* docs: intention-first in major plugin updates

* Update docs/static/plugin-manager.asciidoc

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 6943df5570)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2025-01-28 17:18:54 -08:00
github-actions[bot]
430424aac8
[doc] fix the necessary privileges of central pipeline management (#16902) (#16976)
CPM requires two roles: logstash_admin and logstash_system

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit dc740b46ca)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-01-28 18:35:22 +00:00
kaisecheng
5ead3a9880
add openssl command to wolfi image (#16972) 2025-01-28 17:31:56 +00:00
github-actions[bot]
baa181ae39
remove irrelevant warning for internal pipeline (#16938) (#16964)
This commit removes the irrelevant warning for internal pipelines, such as the monitoring pipeline.
The monitoring pipeline is expected to run with a single worker, so the warning is not useful.

Fixes: #13298
(cherry picked from commit 3f41828ebb)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-01-27 17:30:08 +00:00
Andrea Selva
ed38254f45
[Backport 8.x] Reimplement LogStash::String setting in Java (#16576) (#16959) (#16961)
Non-clean backport of #16959 from 8.x to 8.16 (which is itself a non-clean backport of #16576 to 8.x)

Changes with respect to 8.x:
- in 8.x, `with_deprecated_alias` also receives a parameter specifying the version in which the setting is removed. In this case we removed the extra parameter from the method call: a0378c05cb/logstash-core/lib/logstash/environment.rb (L79)
- fixed the test by removing the check for the removal version: a0378c05cb/logstash-core/spec/logstash/settings/setting_with_deprecated_alias_spec.rb (L157)

----

Reimplements `LogStash::Setting::String` Ruby setting class into the `org.logstash.settings.SettingString` and exposes it through `java_import` as `LogStash::Setting::SettingString`.
Updates the rspec tests in two ways:
- the logging mock is replaced with a real Log4j appender that spies on log lines, which are later verified
- verifies that `java.lang.IllegalArgumentException` is thrown instead of `ArgumentError`, because that is the kind of exception thrown by the Java code during verification.

* Fixed the rename of NullableString to SettingNullableString

* Fixed runner test to use real spy logger from Java Settings instead of mock test double
2025-01-27 17:42:38 +01:00
github-actions[bot]
1adecfe3b7
fix user and password detection from environment's uri (#16955) (#16957)
(cherry picked from commit c8a6566877)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2025-01-27 11:51:07 +00:00
github-actions[bot]
f2a7b3022f
Increase Xmx used by JRuby during Rake execution to 4Gb (#16911) (#16944)
(cherry picked from commit 58e6dac94b)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2025-01-24 14:54:35 +01:00
Dimitrios Liappis
15e4e25048
Backport 16907 to 8.16: Use --qualifier in release manager (#16907) #16942
Backport of #16907 cherry picked from 9385cfa

This commit uses the new --qualifier parameter in the release manager
for publishing DRA artifacts. Additionally, it simplifies the expected
variables to rely on a single `VERSION_QUALIFIER`.

Snapshot builds are skipped when VERSION_QUALIFIER is set.
Finally, to help test DRA PRs, we also allow passing the `DRA_BRANCH` option/env var
to override BUILDKITE_BRANCH.

Closes https://github.com/elastic/ingest-dev/issues/4856
2025-01-24 14:52:58 +02:00
github-actions[bot]
2a4eb2da01
Doc: Remove extra symbol to fix formatting error (#16926) (#16936)
(cherry picked from commit f66e00ac10)

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-23 17:35:34 -05:00
github-actions[bot]
7f19931c77
fix jars installer for new maven and pin psych to 5.2.2 (#16919) (#16925)
Handle Maven output that can carry "garbage" information after the jar's name; this patch deletes that extra information. It also pins psych to 5.2.2 until JRuby ships with snakeyaml-engine 2.9 and jar-dependencies 0.5.2.

Related to: https://github.com/jruby/jruby/issues/8579

(cherry picked from commit 52b7fb0ae6)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2025-01-22 17:00:17 +00:00
kaisecheng
3ac74007f2
bump core 8.16.4 (#16915) 2025-01-21 10:10:45 +00:00
github-actions[bot]
a3f7aece28
Release notes for 8.16.3 (#16879)
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-21 01:43:23 +00:00
github-actions[bot]
946c78c941
Initialize flow metrics if pipeline metric.collect params is enabled. (#16881) (#16889)
(cherry picked from commit 47d04d06b2)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2025-01-09 13:31:56 -08:00
Mashhur
afeef994de
elastic_integration plugin version updated. (#16874) 2025-01-07 18:22:29 -08:00
github-actions[bot]
ad16facaae
bump lock file for 8.16 (#16869)
* Update patch plugin versions in gemfile lock

* Update Gemfile.jruby-3.1.lock.release

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2025-01-07 17:26:50 +00:00
github-actions[bot]
8716e77313
Doc: Add appropriate alternate for deprecated module in 8.x (#16856) (#16861)
(cherry picked from commit 4bf1fb514e)

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2025-01-06 16:15:25 -05:00
Karen Metts
ed4b2e2f6f
Doc: Add json_lines known issue to 8.16 release notes (#16837) 2025-01-06 15:33:14 -05:00
João Duarte
1b1a76b03f
Update logstash-input-azure_event_hubs to 1.5.1 (#16849) 2025-01-03 16:10:58 +00:00
github-actions[bot]
ff27fae426
Apply Jackson stream read constraints defaults at runtime (#16832) (#16847)
When Logstash 8.12.0 added increased Jackson stream read constraints to
jvm.options, assumptions about the existence of that file's contents
were invalidated. This led to issues like #16683.

This change ensures Logstash applies defaults from config at runtime:
- MAX_STRING_LENGTH: 200_000_000
- MAX_NUMBER_LENGTH: 10_000
- MAX_NESTING_DEPTH: 1_000

These match the jvm.options defaults and are applied even when config
is missing. Config values, when present, still override these defaults.
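
For reference, a minimal sketch of applying the values listed above with Jackson's `StreamReadConstraints` builder (available since Jackson 2.15); the actual Logstash wiring differs, so treat this as an illustration of the API rather than the exact code path:

~~~ java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.StreamReadConstraints;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonDefaultsSketch {
    public static ObjectMapper buildMapper() {
        StreamReadConstraints constraints = StreamReadConstraints.builder()
                .maxStringLength(200_000_000)   // MAX_STRING_LENGTH
                .maxNumberLength(10_000)        // MAX_NUMBER_LENGTH
                .maxNestingDepth(1_000)         // MAX_NESTING_DEPTH
                .build();
        JsonFactory factory = JsonFactory.builder()
                .streamReadConstraints(constraints)
                .build();
        return new ObjectMapper(factory);       // config values, when present, would still win
    }
}
~~~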

(cherry picked from commit cc608eb88b)

Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>
2025-01-02 15:36:24 -08:00
github-actions[bot]
911f553dbc
Update patch plugin versions in gemfile lock (#16843)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
2025-01-02 13:37:42 +00:00
github-actions[bot]
bd8a67c146
Avoid lock when ecs_compatibility is explicitly specified (#16786) (#16830)
Because a `break` escapes a `begin`...`end` block, we must not use `break`; this ensures that the explicitly-set value gets memoized, avoiding lock contention.

> ~~~ ruby
> def fake_sync(&block)
>   puts "FAKE_SYNC:enter"
>   val = yield
>   puts "FAKE_SYNC:return(#{val})"
>   return val
> ensure
>   puts "FAKE_SYNC:ensure"
> end
>
> fake_sync do
>   @ivar = begin
>     puts("BE:begin")
>   	break :break
>
>   	val = :ret
>   	puts("BE:return(#{val})")
>   	val
>   ensure
>     puts("BE:ensure")
>   end
> end
> ~~~

Note: no `FAKE_SYNC:return`:

> ~~~
> ╭─{ rye@perhaps:~/src/elastic/logstash (main ✔) }
> ╰─● ruby break-esc.rb
> FAKE_SYNC:enter
> BE:begin
> BE:ensure
> FAKE_SYNC:ensure
> [success]
> ~~~

(cherry picked from commit 01c8e8bb55)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-12-23 10:43:06 -08:00
github-actions[bot]
886a0c7ed7
update ironbank image to ubi9/9.5 (#16825) (#16827)
(cherry picked from commit dbb06c20cf)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-12-19 22:57:27 +00:00
João Duarte
2264a557ed
Fix docs dependencies link in 8.16.2 release notes (#16811) 2024-12-18 12:42:49 +00:00
João Duarte
11f433f7ec
bump to 8.16.3 (#16806) 2024-12-17 10:22:22 +00:00
github-actions[bot]
8497db1cd8
Release notes for 8.16.2 (#16799)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-12-17 09:32:42 +00:00
github-actions[bot]
5521023240
Doc: Update security docs to replace obsolete cacert setting (#16798) (#16804) 2024-12-16 16:35:08 -05:00
github-actions[bot]
0770fff960
give more memory to tests. 1gb instead of 512mb (#16764) (#16801)
(cherry picked from commit e6e0f9f6eb)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-12-16 11:59:34 +00:00
github-actions[bot]
2e7b1fe6fd
bump lock file for 8.16 (#16776) 2024-12-11 15:52:45 +00:00
github-actions[bot]
c0ae02e44d
Pin date dependency to 3.3.3 (#16755) (#16779)
Resolves: #16095, #16754
(cherry picked from commit ab19769521)

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2024-12-11 13:15:23 +00:00
github-actions[bot]
1f93dc5b65
ensure inputSize state value is reset during buftok.flush (#16760) (#16771)
(cherry picked from commit e36cacedc8)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-12-09 11:23:42 -08:00
github-actions[bot]
36feb29597
ensure jackson overrides are available to static initializers (#16719) (#16758)
Moves the application of jackson defaults overrides into pure java, and
applies them statically _before_ the `org.logstash.ObjectMappers` has a chance
to start initializing object mappers that rely on the defaults.

We replace the runner's invocation (which was too late to be fully applied) with
a _verification_ that the configured defaults have been applied.
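
A minimal illustration of the ordering constraint only (class and method names here are made up, not the actual logstash-core classes): the overrides run during static initialization, before any mapper is built, and the runner merely verifies them afterwards.

~~~ java
import com.fasterxml.jackson.databind.ObjectMapper;

final class StreamReadConstraintsDefaults {
    static void apply()  { /* apply the overrides, e.g. as in the earlier sketch */ }
    static void verify() { /* called by the runner; fails if the overrides are not in effect */ }
}

final class ObjectMappersSketch {
    static {
        StreamReadConstraintsDefaults.apply();          // runs before the mapper below is constructed
    }
    static final ObjectMapper JSON_MAPPER = new ObjectMapper();
}
~~~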

(cherry picked from commit 202d07cbbf)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-12-04 16:03:07 -08:00
github-actions[bot]
12645e9755
Pin jar-dependencies to 0.4.1 (#16747) (#16749)
Pin jar-dependencies to `0.4.1`, until https://github.com/jruby/jruby/issues/7262
is resolved.

(cherry picked from commit e3265d93e8)

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2024-12-04 10:10:03 -05:00
github-actions[bot]
eeec9e0621
bump lock file for 8.16 (#16726)
Also update license checker with new logger dependency (#16695) (#16700)

A new transitive dependency on the `logger` gem has been added through sinatra 4.1.0. Update the
license checker to ensure this is accounted for.

(cherry picked from commit e0ed994ab1)

Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Cas Donoghue <cas.donoghue@gmail.com>
2024-11-26 15:25:31 +00:00
João Duarte
0beea5ad52
bump version to 8.16.2 (#16710) 2024-11-21 09:42:13 +00:00
github-actions[bot]
8b97c052e6
Release notes for 8.16.1 (#16691)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-11-20 17:24:15 -05:00
Ry Biesemeyer
a769327be8
Revert "Bugfix for BufferedTokenizer to completely consume lines in case of lines bigger then sizeLimit (#16482) (#16580)" (#16688)
This reverts commit 2c982112ad.
2024-11-18 15:57:45 -08:00
github-actions[bot]
913d705e7c
PipelineBusV2 deadlock proofing (#16671) (#16680)
* pipeline bus: add deadlock test for unlisten/unregisterSender

* pipeline bus: eliminate deadlock

Moves the sync-to-notify out of the `AddressStateMapping#mutate`'s effectively
synchronous block to eliminate a race condition where unlistening to an address
and unregistering a sender could deadlock.

It is safe to notify an AddressState's attached input without exclusive access
to the AddressState, because notifying an input that has since been disconnected
is net zero harm.
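
An illustrative sketch of that change (simplified names, not the actual `PipelineBus`/`AddressState` code): the mutation happens under the lock, but the notification is issued only after the synchronized block is left.

~~~ java
final class AddressStateSketch {
    private final Object mutationLock = new Object();
    private volatile Runnable attachedInput;            // stand-in for the input to notify

    void attach(Runnable input) { attachedInput = input; }

    void mutateAndNotify(Runnable mutation) {
        Runnable toNotify;
        synchronized (mutationLock) {
            mutation.run();               // mutate the address state with exclusive access
            toNotify = attachedInput;     // capture, but do not notify, while holding the lock
        }
        if (toNotify != null) {
            toNotify.run();               // safe without the lock: notifying an input that has
        }                                 // since been disconnected is net-zero harm
    }
}
~~~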

(cherry picked from commit 8af6343a26)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-11-18 09:57:36 -08:00
github-actions[bot]
e5e6f5fc1a
Update patch plugin versions in gemfile lock (#16679)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
2024-11-18 12:20:58 -05:00
github-actions[bot]
2c982112ad
Bugfix for BufferedTokenizer to completely consume lines in case of lines bigger then sizeLimit (#16482) (#16580)
Fixes the behaviour of the tokenizer to be able to work properly when buffer full conditions are met.

Updates BufferedTokenizerExt so that it can accumulate token fragments coming from different data segments. When a "buffer full" condition is matched, it records this state in a local field so that on the next data segment it can consume all the token fragments up to the next token delimiter.
Changes the accumulation variable from a RubyArray containing strings to a StringBuilder which contains the head token; the remaining token fragments are stored in the input array.
Furthermore it translates the `buftok_spec` tests into JUnit tests.

(cherry picked from commit 85493ce864)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-11-12 15:18:29 -05:00
Karen Metts
15cdf5c63d
Doc: Realign release notes and add known issue (#16663) 2024-11-12 10:23:12 -05:00
Rob Bavey
ccab5448cb
Bump version to 8.16.1 (#16666) 2024-11-12 10:13:48 -05:00
github-actions[bot]
8b897915cd
Release notes for 8.16.0 (#16605)
* Update release notes for 8.16.0

* Refine release notes

* Apply suggestions from code review

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* Refine release notes

* Apply suggestions from code review

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* Apply suggestions from code review

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Added dependency update section for new jruby version

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: edmocosta <11836452+edmocosta@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
2024-11-11 15:15:25 -05:00
Andrea Selva
e4cb5c1ff7
[Backport 8.16] Update deprecation warning to provide the version the ArcSight modue is removed (#16648) 2024-11-06 15:24:18 +01:00
github-actions[bot]
0db1565e7d
Update .ruby-version to jruby-9.4.9.0 (#16642) (#16646)
(cherry picked from commit efbee31461)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-11-06 08:49:54 +00:00
github-actions[bot]
b3aad84b84
bump jruby to 9.4.9.0 (#16634) (#16640)
(cherry picked from commit 6703aec476)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-11-06 08:23:44 +00:00
github-actions[bot]
efe776eee0
bump lock file for 8.16 (#16600)
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-10-25 15:49:31 +01:00
github-actions[bot]
8c460d6a24
Release notes for 8.15.3 (#16527) (#16583)
* Update release notes for 8.15.3

* Refined release notes

* Apply suggestions from code review

* Update docs/static/releasenotes.asciidoc

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: edmocosta <11836452+edmocosta@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 2788841f5c)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-10-18 11:13:10 +02:00
github-actions[bot]
c2d6e9636f
Update minor plugin versions in gemfile lock (#16568)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
2024-10-16 17:11:56 +02:00
Rob Bavey
d32b98758e
Add lockfile from 8.15 branch (#16561) 2024-10-16 08:46:35 -04:00
github-actions[bot]
f25fcc303f
ensure minitar 1.x is used instead of 0.x (#16565) (#16567)
(cherry picked from commit ab77d36daa)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-10-16 13:35:52 +01:00
Andrea Selva
c1374a1d81
Log deprecation warn if memory buffer type not defined (#16498)
On the 8.x series, log a deprecation warning if the user didn't explicitly specify a selection for pipeline.buffer.type. Before this change the default was silently set to direct; after this change, if not explicitly defined, the default is still direct but a deprecation warning is logged.

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-10-15 16:03:41 +02:00
kaisecheng
4677cb22ed
add modules deprecation log for netflow, fb_apache and azure (#16548)
relates: #16357
2024-10-14 12:40:45 +01:00
github-actions[bot]
dc0739bdaf
refactor log for event_api.tags.illegal (#16545) (#16547)
- add `obsoleted_version` and remove `deprecated_msg` from `deprecated_option` for consistent warning message

(cherry picked from commit 8cd0fa8767)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-10-11 21:51:53 +01:00
github-actions[bot]
a1f4633b55
Backport PR #16525 to 8.x: [test] Fix xpack test to check for http_address stats only if the webserver is enabled (#16531)
* [test] Fix xpack test to check for http_address stats only if the webserver is enabled (#16525)

Set the 'api.enabled' setting to reflect the flag webserver_enabled and consequently test for http_address presence in settings iff the web server is enabled.

(cherry picked from commit 648472106f)

* Also update the global LogStash::SETTINGS 'api.enabled' setting value, because it is used in the constructor of StatsEventFactory and needs to be in sync with the settings provided to the Agent constructor

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-10-11 15:24:11 +02:00
github-actions[bot]
8c6832e3d9
Backport PR #16506 to 8.x: Avoid to access Java DeprecatedAlias value other than Ruby's one
Updates the Settings to_hash method to also skip the Java DeprecatedAlias, not just the Ruby one.
PR #15679 introduced org.logstash.settings.DeprecatedAlias, which mirrors the behaviour of the Ruby class Setting::DeprecatedAlias. The equality check in Logstash::Settings, as described in #16505 (comment), is implemented by comparing the maps.
The conversion of Settings to the corresponding maps filtered out the Ruby implementation of DeprecatedAlias but not the Java one.
This PR adds the Java one to the filter list as well.

(cherry picked from commit 5d4825f000)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-10-11 11:00:48 +02:00
github-actions[bot]
0594c8867f
Backport PR #15679 to 8.x: [Spacetime] Reimplement config Setting classe in java (#16490)
* [Spacetime] Reimplement config Setting classe in java (#15679)

Reimplements the root Ruby Setting class in Java and uses it from the Ruby one, turning the original Ruby class into a shell that wraps the Java instance.
In particular, it creates a new hierarchy symmetric to the Ruby one (at this time just the `Setting`, `Coercible` and `Boolean` classes), also moving over the setting-deprecation feature. In this way the new `org.logstash.settings.Boolean` is syntactically and semantically equivalent to the old Ruby Boolean class, which it replaces.

(cherry picked from commit 61de60fe26)

* Adds suppress warnings related to this-escape for Java Settings classes

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-10-11 08:54:03 +02:00
github-actions[bot]
26c2f61276
Flow worker utilization probe (#16532) (#16537)
* flow: refactor pipeline refs to keep worker flows separate

* health: add worker_utilization probe

pipeline is:
  - RED "completely blocked" when last_5_minutes >= 99.999
  - YELLOW "nearly blocked" when last_5_minutes > 95
    - and includes "recovering" info when last_1_minute < 80
  - YELLOW "completely blocked" when last_1_minute >= 99.999
  - YELLOW "nearly blocked" when last_1_minute > 95
  (a sketch of this threshold mapping appears at the end of this list)

* tests: improve coverage of PipelineIndicator probes

* Apply suggestions from code review
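
A hedged sketch of the threshold mapping from the list above; the real probe lives in logstash-core's health framework, so the names here are illustrative only.

~~~ java
enum Status { GREEN, YELLOW, RED }

final class WorkerUtilizationProbeSketch {
    /** Maps worker-utilization flow rates (percent) to a pipeline status. */
    static Status evaluate(double lastFiveMinutes, double lastOneMinute) {
        if (lastFiveMinutes >= 99.999) return Status.RED;     // completely blocked
        if (lastFiveMinutes > 95)      return Status.YELLOW;  // nearly blocked
                                                              // (would also carry "recovering" info when lastOneMinute < 80)
        if (lastOneMinute >= 99.999)   return Status.YELLOW;  // completely blocked, recent
        if (lastOneMinute > 95)        return Status.YELLOW;  // nearly blocked, recent
        return Status.GREEN;
    }
}
~~~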

(cherry picked from commit a931b2cde6)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-10-10 19:59:43 -07:00
github-actions[bot]
986a253db8
health: add logstash.forceApiStatus: green escape hatch (#16535) (#16536)
(cherry picked from commit 065769636b)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-10-10 17:21:35 -07:00
github-actions[bot]
ad7c61448f
Health api minor followups (#16533) (#16534)
* Utilize default agent for Health API CI. Call python scripts directly from the CI step.

* Change BK agent to support both Java and python. Install pip manually and send env vars to subprocess.

(cherry picked from commit 4037adfc4a)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-10-10 15:25:53 -07:00
Mashhur
1e5105fcd8
Fix QA failure introduced by Health API changes and update rspec dependency of the QA package. (#16521)
* Update rspec dependency of the QA package.

* Update qa/Gemfile

Align on rspec 3.13.x

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>

* Fix the QA test failure caused after reflecting Health Report status to the Node stats.

---------

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-10-09 14:47:01 -07:00
Ry Biesemeyer
7eb5185b4e
Feature: health report api (#16520)
* [health] bootstrap HealthObserver from agent to API (#16141)

* [health] bootstrap HealthObserver from agent to API

* specs: mocked agent needs health observer

* add license headers

* Merge `main` into `feature/health-report-api` (#16397)

* Add GH vault plugin bot to allowed list (#16301)

* regenerate webserver test certificates (#16331)

* correctly handle stack overflow errors during pipeline compilation (#16323)

This commit improves error handling when pipelines that are too big hit the Xss limit and throw a StackOverflowError. Currently the exception is printed outside of the logger, and doesn’t even show if log.format is json, leaving the user to wonder what happened.

A couple of thoughts on the way this is implemented:

* There should be a first barrier to handle pipelines that are too large based on the PipelineIR compilation. The barrier would use the detection of Xss to determine how big a pipeline could be. This however doesn't reduce the need to still handle a StackOverflow if it happens.
* The catching of StackOverflowError could also be done on the WorkerLoop. However I'd suggest that this is unrelated to the Worker initialization itself, it just so happens that compiledPipeline.buildExecution is computed inside the WorkerLoop class for performance reasons. So I'd prefer logging to not come from the existing catch, but from a dedicated catch clause.

Solves #16320

* Doc: Reposition worker-utilization in doc (#16335)

* settings: add support for observing settings after post-process hooks (#16339)

Because logging configuration occurs after loading the `logstash.yml`
settings, deprecation logs from `LogStash::Settings::DeprecatedAlias#set` are
effectively emitted to a null logger and lost.

By re-emitting after the post-process hooks, we can ensure that they make
their way to the deprecation log. This change adds support for any setting
that responds to `Object#observe_post_process` to receive it after all
post-processing hooks have been executed.

Resolves: elastic/logstash#16332

* fix line used to determine ES is up (#16349)

* add retries to snyk buildkite job (#16343)

* Fix 8.13.1 release notes (#16363)

make a note of the fix that went to 8.13.1: #16026

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Update logstash_releases.json (#16347)

* [Bugfix] Resolve the array and char (single | double quote) escaped values of ${ENV} (#16365)

* Properly resolve the values from ENV vars if literal array string provided with ENV var.

* Docker acceptance test for persisting  keys and use actual values in docker container.

* Review suggestion.

Simplify the code by stripping whitespace before `gsub`, no need to check comma and split.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Doc: Add SNMP integration to breaking changes (#16374)

* deprecate java less-than 17 (#16370)

* Exclude substitution refinement on pipelines.yml (#16375)

* Exclude substitution refinement on pipelines.yml (applies on ENV vars and logstash.yml where env2yaml saves vars)

* Safety integration test for pipeline config.string contains ENV .

* Doc: Forwardport 8.15.0 release notes to main (#16388)

* Removing 8.14 from ci/branches.json as we have 8.15. (#16390)

---------

Co-authored-by: ev1yehor <146825775+ev1yehor@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Squashed merge from 8.x

* Failure injector plugin implementation. (#16466)

* Test purpose only failure injector integration (filter and output) plugins implementation. Add unit tests and include license notes.

* Fix the degrate method name typo.

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* Add explanation to the config params and rebuild plugin gem.

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* Health report integration tests bootstrapper and initial tests implementation (#16467)

* Health Report integration tests bootstrapper and initial slow start scenario implementation.

* Apply suggestions from code review

Renaming expectation check method name.

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

* Changed to branch concept, YAML structure simplified as changed to Dict.

* Apply suggestions from code review

Reflect `help_url` to the integration test.

---------

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

* health api: expose `GET /_health_report` with pipelines/*/status probe (#16398)

Adds a `GET /_health_report` endpoint with per-pipeline status probes, and wires the
resulting report status into the other API responses, replacing their hard-coded `green`
with a meaningful status indication.
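
As a usage example, the endpoint can be queried like any other Logstash API endpoint; this sketch assumes the API is listening on its default port 9600 on localhost:

~~~ java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HealthReportExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9600/_health_report"))   // default Logstash API port
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());   // overall status plus per-pipeline indicators
    }
}
~~~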

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* docs: health report API, and diagnosis links (feature-targeted) (#16518)

* docs: health report API, and diagnosis links

* Remove plus-for-passthrough markers

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* merge 8.x into feature branch... (#16519)

* Add GH vault plugin bot to allowed list (#16301)

* regenerate webserver test certificates (#16331)

* correctly handle stack overflow errors during pipeline compilation (#16323)

This commit improves error handling when pipelines that are too big hit the Xss limit and throw a StackOverflowError. Currently the exception is printed outside of the logger, and doesn’t even show if log.format is json, leaving the user to wonder what happened.

A couple of thoughts on the way this is implemented:

* There should be a first barrier to handle pipelines that are too large based on the PipelineIR compilation. The barrier would use the detection of Xss to determine how big a pipeline could be. This however doesn't reduce the need to still handle a StackOverflow if it happens.
* The catching of StackOverflowError could also be done on the WorkerLoop. However I'd suggest that this is unrelated to the Worker initialization itself, it just so happens that compiledPipeline.buildExecution is computed inside the WorkerLoop class for performance reasons. So I'd prefer logging to not come from the existing catch, but from a dedicated catch clause.

Solves #16320

* Doc: Reposition worker-utilization in doc (#16335)

* settings: add support for observing settings after post-process hooks (#16339)

Because logging configuration occurs after loading the `logstash.yml`
settings, deprecation logs from `LogStash::Settings::DeprecatedAlias#set` are
effectively emitted to a null logger and lost.

By re-emitting after the post-process hooks, we can ensure that they make
their way to the deprecation log. This change adds support for any setting
that responds to `Object#observe_post_process` to receive it after all
post-processing hooks have been executed.

Resolves: elastic/logstash#16332

* fix line used to determine ES is up (#16349)

* add retries to snyk buildkite job (#16343)

* Fix 8.13.1 release notes (#16363)

make a note of the fix that went to 8.13.1: #16026

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Update logstash_releases.json (#16347)

* [Bugfix] Resolve the array and char (single | double quote) escaped values of ${ENV} (#16365)

* Properly resolve the values from ENV vars if literal array string provided with ENV var.

* Docker acceptance test for persisting  keys and use actual values in docker container.

* Review suggestion.

Simplify the code by stripping whitespace before `gsub`, no need to check comma and split.

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

---------

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Doc: Add SNMP integration to breaking changes (#16374)

* deprecate java less-than 17 (#16370)

* Exclude substitution refinement on pipelines.yml (#16375)

* Exclude substitution refinement on pipelines.yml (applies on ENV vars and logstash.yml where env2yaml saves vars)

* Safety integration test for pipeline config.string contains ENV .

* Doc: Forwardport 8.15.0 release notes to main (#16388)

* Removing 8.14 from ci/branches.json as we have 8.15. (#16390)

* Increase Jruby -Xmx to avoid OOM during zip task in DRA (#16408)

Fix: #16406

* Generate Dataset code with meaningful fields names (#16386)

This PR is intended to help Logstash developers or users that want to better understand the code that's autogenerated to model a pipeline, assigning more meaningful names to the Datasets subclasses' fields.

Updates `FieldDefinition` to receive the name of the field from construction methods, so that it can be used during the code generation phase, instead of the existing incremental `field%n`.
Updates `ClassFields` to propagate the explicit field name down to the `FieldDefinitions`.
Updates the `DatasetCompiler` code that adds fields to `ClassFields` to assign a proper name to the generated Dataset's fields.

* Implements safe evaluation of conditional expressions, logging the error without killing the pipeline (#16322)

This PR protects the if statements against expression evaluation errors, cancelling the event under processing and logging it.
This avoids crashing a pipeline that encounters a runtime error during event condition evaluation, and permits debugging the root cause by reporting the offending event and removing it from the current processing batch.

Translates the `org.jruby.exceptions.TypeError`, `IllegalArgumentException`, and `org.jruby.exceptions.ArgumentError` that could happen during `EventCondition` evaluation into a custom `ConditionalEvaluationError` which bubbles up through the AST tree nodes. It's caught in the `SplitDataset` node.
Updates the generation of the `SplitDataset` so that the execution of the `filterEvents` method inside the compute body is try-catch guarded and defers error handling to an instance of `AbstractPipelineExt.ConditionalEvaluationListener`. In this particular case, the error management consists of just logging the offending Event.
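
A rough sketch of the guarded evaluation described above (the listener and exception types here are simplified stand-ins, not the actual `SplitDataset`/`AbstractPipelineExt.ConditionalEvaluationListener` code):

~~~ java
import java.util.function.Predicate;

/** Stand-in for the listener that is told about conditional evaluation failures. */
interface EvaluationErrorListener {
    void handle(RuntimeException error, Object offendingEvent);   // e.g. log (and later, DLQ) the event
}

final class GuardedConditionSketch {
    /** Evaluates an if-statement condition without letting a runtime error kill the pipeline. */
    static boolean evaluate(Predicate<Object> condition, Object event, EvaluationErrorListener listener) {
        try {
            return condition.test(event);
        } catch (IllegalArgumentException | ClassCastException e) {   // stand-ins for the JRuby TypeError/ArgumentError
            listener.handle(e, event);    // report the offending event
            return false;                 // drop it from the current processing batch
        }
    }
}
~~~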


---------

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

* Update logstash_releases.json (#16426)

* Release notes for 8.15.1 (#16405) (#16427)

* Update release notes for 8.15.1

* update release note

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Kaise Cheng <kaise.cheng@elastic.co>
(cherry picked from commit 2fca7e39e8)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Fix ConditionalEvaluationError to not include the event that errored in its serialized form, because it's not expected that this class is ever serialized. (#16429) (#16430)

Make the inner field of ConditionalEvaluationError transient so it is skipped during serialization.

(cherry picked from commit bb7ecc203f)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* use gnu tar compatible minitar to generate tar artifact (#16432) (#16434)

Using VERSION_QUALIFIER when building the tarball distribution will fail since Ruby's TarWriter implements the older POSIX88 version of tar and paths will be longer than 100 characters.

For the long paths being used in Logstash's plugins, mainly due to nested folders from jar-dependencies, we need the tarball to follow either the 2001 ustar format or gnu tar, which is implemented by the minitar gem.

(cherry picked from commit 69f0fa54ca)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* account for the 8.x in DRA publishing task (#16436) (#16440)

the current DRA publishing task computes the branch from the version
contained in the version.yml

This is done by taking the major.minor and confirming that a branch
exists with that name.

However this pattern won't be applicable for 8.x, as that branch
currently points to 8.16.0 and there is no 8.16 branch.

This commit falls back to reading the buildkite injected
BUILDKITE_BRANCH variable.

(cherry picked from commit 17dba9f829)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Fixes the issue where LS wipes out all quotes from docker env variables. (#16456) (#16459)

* Fixes the issue where LS wipes out all quotes from docker env variables. This is an issue when running LS on docker with CONFIG_STRING, which needs to keep the quotes in the env variable.

* Add a docker acceptance integration test.

(cherry picked from commit 7c64c7394b)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Known issue for 8.15.1 related to env vars references (#16455) (#16469)

(cherry picked from commit b54caf3fd8)

Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>

* bump .ruby_version to jruby-9.4.8.0 (#16477) (#16480)

(cherry picked from commit 51cca7320e)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Release notes for 8.15.2 (#16471) (#16478)

Co-authored-by: andsel <selva.andre@gmail.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 01dc76f3b5)

* Change LogStash::Util::SubstitutionVariables#replace_placeholders refine argument to optional (#16485) (#16488)

(cherry picked from commit 8368c00367)

Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>

* Use jruby-9.4.8.0 in exhaustive CIs. (#16489) (#16491)

(cherry picked from commit fd1de39005)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Don't use an older JRuby with oraclelinux-7 (#16499) (#16501)

A recent PR (elastic/ci-agent-images/pull/932) modernized the VM images
and removed JRuby 9.4.5.0 and some older versions.

This ended up breaking exhaustive test on Oracle Linux 7 that hard coded
JRuby 9.4.5.0.

PR https://github.com/elastic/logstash/pull/16489 worked around the
problem by pinning to the new JRuby, but actually we don't
need the conditional anymore since the original issue
https://github.com/jruby/jruby/issues/7579#issuecomment-1425885324 has
been resolved and none of our releasable branches (apart from 7.17 which
uses `9.2.20.1`) specify `9.3.x.y` in `/.ruby-version`.

Therefore, this commit removes conditional setting of JRuby for
OracleLinux 7 agents in exhaustive tests (and relies on whatever
`/.ruby-version` defines).

(cherry picked from commit 07c01f8231)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>

* Improve pipeline bootstrap error logs (#16495) (#16504)

This PR adds the cause errors details on the pipeline converge state error logs

(cherry picked from commit e84fb458ce)

Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>

* Logstash Health Report Tests Buildkite pipeline setup. (#16416) (#16511)

(cherry picked from commit 5195332bc6)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Make health report test runner script executable. (#16446) (#16512)

(cherry picked from commit 2ebf2658ff)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

* Backport PR #16423 to 8.x: DLQ-ing events that trigger an conditional evaluation error. (#16493)

* DLQ-ing events that trigger an conditional evaluation error. (#16423)

When a conditional evaluation encounters an error in the expression, the event that triggered the issue is sent to the pipeline's DLQ, if enabled for the executing pipeline.

This PR builds on the work done in #16322: the `ConditionalEvaluationListener`, which receives notifications about if-statement evaluation failures, is improved to also send the event to the DLQ (if enabled in the pipeline) and not just log it.

(cherry picked from commit b69d993d71)

* Fixed warning about non serializable field DeadLetterQueueWriter in serializable AbstractPipelineExt

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>

* add deprecation log for `--event_api.tags.illegal` (#16507) (#16515)

- move `--event_api.tags.illegal` from option to deprecated_option
- add deprecation log when the flag is explicitly used
relates: #16356

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
(cherry picked from commit a4eddb8a2a)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>

---------

Co-authored-by: ev1yehor <146825775+ev1yehor@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>
Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>

---------

Co-authored-by: ev1yehor <146825775+ev1yehor@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
Co-authored-by: Andrea Selva <selva.andre@gmail.com>
Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>
Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
2024-10-09 09:48:12 -07:00
github-actions[bot]
c2c62fdce4
add deprecation log for --event_api.tags.illegal (#16507) (#16515)
- move `--event_api.tags.illegal` from option to deprecated_option
- add deprecation log when the flag is explicitly used
relates: #16356

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
(cherry picked from commit a4eddb8a2a)

Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-10-08 14:40:49 +01:00
github-actions[bot]
e854ac7bf5
Backport PR #16423 to 8.x: DLQ-ing events that trigger an conditional evaluation error. (#16493)
* DLQ-ing events that trigger an conditional evaluation error. (#16423)

When a conditional evaluation encounters an error in the expression, the event that triggered the issue is sent to the pipeline's DLQ, if enabled for the executing pipeline.

This PR builds on the work done in #16322: the `ConditionalEvaluationListener`, which receives notifications about if-statement evaluation failures, is improved to also send the event to the DLQ (if enabled in the pipeline) and not just log it.

(cherry picked from commit b69d993d71)

* Fixed warning about non serializable field DeadLetterQueueWriter in serializable AbstractPipelineExt

---------

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-10-08 13:45:15 +01:00
github-actions[bot]
3b751d9794
Make health report test runner script executable. (#16446) (#16512)
(cherry picked from commit 2ebf2658ff)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-10-05 07:32:15 -07:00
github-actions[bot]
2f3f6a9651
Logstash Health Report Tests Buildkite pipeline setup. (#16416) (#16511)
(cherry picked from commit 5195332bc6)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-10-05 07:30:51 -07:00
github-actions[bot]
d1155988c1
Improve pipeline bootstrap error logs (#16495) (#16504)
This PR adds the cause errors details on the pipeline converge state error logs

(cherry picked from commit e84fb458ce)

Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
2024-10-03 13:31:22 +02:00
github-actions[bot]
476b9216f2
Don't use an older JRuby with oraclelinux-7 (#16499) (#16501)
A recent PR (elastic/ci-agent-images/pull/932) modernized the VM images
and removed JRuby 9.4.5.0 and some older versions.

This ended up breaking exhaustive test on Oracle Linux 7 that hard coded
JRuby 9.4.5.0.

PR https://github.com/elastic/logstash/pull/16489 worked around the
problem by pinning to the new JRuby, but actually we don't
need the conditional anymore since the original issue
https://github.com/jruby/jruby/issues/7579#issuecomment-1425885324 has
been resolved and none of our releasable branches (apart from 7.17 which
uses `9.2.20.1`) specify `9.3.x.y` in `/.ruby-version`.

Therefore, this commit removes conditional setting of JRuby for
OracleLinux 7 agents in exhaustive tests (and relies on whatever
`/.ruby-version` defines).

(cherry picked from commit 07c01f8231)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
2024-10-02 19:57:58 +03:00
github-actions[bot]
2c024daecd
Use jruby-9.4.8.0 in exhaustive CIs. (#16489) (#16491)
(cherry picked from commit fd1de39005)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-10-02 09:33:31 +01:00
github-actions[bot]
eafcf577dd
Change LogStash::Util::SubstitutionVariables#replace_placeholders refine argument to optional (#16485) (#16488)
(cherry picked from commit 8368c00367)

Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
2024-10-01 12:16:03 -07:00
github-actions[bot]
8f20bd90c9
Release notes for 8.15.2 (#16471) (#16478)
Co-authored-by: andsel <selva.andre@gmail.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
(cherry picked from commit 01dc76f3b5)
2024-09-26 18:28:55 +02:00
github-actions[bot]
1ccfb161d7
bump .ruby_version to jruby-9.4.8.0 (#16477) (#16480)
(cherry picked from commit 51cca7320e)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-09-25 13:27:54 +01:00
github-actions[bot]
30803789fc
Known issue for 8.15.1 related to env vars references (#16455) (#16469)
(cherry picked from commit b54caf3fd8)

Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>
2024-09-19 14:34:35 -04:00
github-actions[bot]
14f52c0472
Fixes the issue where LS wipes out all quotes from docker env variables. (#16456) (#16459)
* Fixes the issue where LS wipes out all quotes from docker env variables. This is an issue when running LS on docker with CONFIG_STRING, which needs to keep the quotes in the env variable.

* Add a docker acceptance integration test.

(cherry picked from commit 7c64c7394b)

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-09-17 07:30:45 -07:00
github-actions[bot]
d2b19001de
account for the 8.x in DRA publishing task (#16436) (#16440)
the current DRA publishing task computes the branch from the version
contained in the version.yml

This is done by taking the major.minor and confirming that a branch
exists with that name.

However this pattern won't be applicable for 8.x, as that branch
currently points to 8.16.0 and there is no 8.16 branch.

This commit falls back to reading the buildkite injected
BUILDKITE_BRANCH variable.

(cherry picked from commit 17dba9f829)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-09-10 10:56:27 +01:00
github-actions[bot]
32e7a25a15
use gnu tar compatible minitar to generate tar artifact (#16432) (#16434)
Using VERSION_QUALIFIER when building the tarball distribution will fail since Ruby's TarWriter implements the older POSIX88 version of tar and paths will be longer than 100 characters.

For the long paths being used in Logstash's plugins, mainly due to nested folders from jar-dependencies, we need the tarball to follow either the 2001 ustar format or gnu tar, which is implemented by the minitar gem.

(cherry picked from commit 69f0fa54ca)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-09-09 11:35:21 +01:00
github-actions[bot]
5ef86a8aa1
Fix ConditionalEvaluationError to not include the event that errored in its serialized form, because it's not expected that this class is ever serialized. (#16429) (#16430)
Make the inner field of ConditionalEvaluationError transient so it is skipped during serialization.

(cherry picked from commit bb7ecc203f)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-09-06 11:40:42 +01:00
1839 changed files with 60907 additions and 21716 deletions


@@ -35,71 +35,48 @@ steps:
automatic:
- limit: 3
- label: ":lab_coat: Integration Tests / part 1-of-3"
key: "integration-tests-part-1-of-3"
- label: ":lab_coat: Integration Tests / part 1"
key: "integration-tests-part-1"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 0 3
ci/integration_tests.sh split 0
retry:
automatic:
- limit: 3
- label: ":lab_coat: Integration Tests / part 2-of-3"
key: "integration-tests-part-2-of-3"
- label: ":lab_coat: Integration Tests / part 2"
key: "integration-tests-part-2"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 1 3
ci/integration_tests.sh split 1
retry:
automatic:
- limit: 3
- label: ":lab_coat: Integration Tests / part 3-of-3"
key: "integration-tests-part-3-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 2 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 1-of-3"
key: "integration-tests-qa-part-1-of-3"
- label: ":lab_coat: IT Persistent Queues / part 1"
key: "integration-tests-qa-part-1"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 0 3
ci/integration_tests.sh split 0
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 2-of-3"
key: "integration-tests-qa-part-2-of-3"
- label: ":lab_coat: IT Persistent Queues / part 2"
key: "integration-tests-qa-part-2"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 1 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 3-of-3"
key: "integration-tests-qa-part-3-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 2 3
ci/integration_tests.sh split 1
retry:
automatic:
- limit: 3


@@ -1,11 +0,0 @@
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-16"
diskSizeGb: 100
diskType: pd-ssd
steps:
- label: "Benchmark Marathon"
command: .buildkite/scripts/benchmark/marathon.sh


@@ -1,14 +0,0 @@
steps:
- label: "JDK Availability check"
key: "jdk-availability-check"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "4"
memory: "6Gi"
ephemeralStorage: "100Gi"
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
export GRADLE_OPTS="-Xmx2g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info"
ci/check_jdk_version_availability.sh


@@ -5,7 +5,7 @@
env:
DEFAULT_MATRIX_OS: "ubuntu-2204"
DEFAULT_MATRIX_JDK: "adoptiumjdk_21"
DEFAULT_MATRIX_JDK: "adoptiumjdk_17"
steps:
- input: "Test Parameters"
@@ -60,12 +60,20 @@ steps:
value: "adoptiumjdk_21"
- label: "Adoptium JDK 17 (Eclipse Temurin)"
value: "adoptiumjdk_17"
- label: "Adoptium JDK 11 (Eclipse Temurin)"
value: "adoptiumjdk_11"
- label: "OpenJDK 21"
value: "openjdk_21"
- label: "OpenJDK 17"
value: "openjdk_17"
- label: "OpenJDK 11"
value: "openjdk_11"
- label: "Zulu 21"
value: "zulu_21"
- label: "Zulu 17"
value: "zulu_17"
- label: "Zulu 11"
value: "zulu_11"
- wait: ~
if: build.source != "schedule" && build.source != "trigger_job"


@@ -14,11 +14,7 @@
"skip_ci_labels": [ ],
"skip_target_branches": [ ],
"skip_ci_on_only_changed": [
"^.github/",
"^docs/",
"^.mergify.yml$",
"^.pre-commit-config.yaml",
"\\.md$"
"^docs/"
],
"always_require_ci_on_changed": [ ]
}


@@ -22,12 +22,10 @@ steps:
- label: ":rspec: Ruby unit tests"
key: "ruby-unit-tests"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "4"
memory: "8Gi"
ephemeralStorage: "100Gi"
# Run as a non-root user
imageUID: "1002"
retry:
automatic:
- limit: 3
@ -81,8 +79,8 @@ steps:
manual:
allowed: true
- label: ":lab_coat: Integration Tests / part 1-of-3"
key: "integration-tests-part-1-of-3"
- label: ":lab_coat: Integration Tests / part 1"
key: "integration-tests-part-1"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@ -97,10 +95,10 @@ steps:
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 0 3
ci/integration_tests.sh split 0
- label: ":lab_coat: Integration Tests / part 2-of-3"
key: "integration-tests-part-2-of-3"
- label: ":lab_coat: Integration Tests / part 2"
key: "integration-tests-part-2"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@ -115,28 +113,10 @@ steps:
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 1 3
ci/integration_tests.sh split 1
- label: ":lab_coat: Integration Tests / part 3-of-3"
key: "integration-tests-part-3-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as a non-root user
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 2 3
- label: ":lab_coat: IT Persistent Queues / part 1-of-3"
key: "integration-tests-qa-part-1-of-3"
- label: ":lab_coat: IT Persistent Queues / part 1"
key: "integration-tests-qa-part-1"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@ -152,10 +132,10 @@ steps:
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 0 3
ci/integration_tests.sh split 0
- label: ":lab_coat: IT Persistent Queues / part 2-of-3"
key: "integration-tests-qa-part-2-of-3"
- label: ":lab_coat: IT Persistent Queues / part 2"
key: "integration-tests-qa-part-2"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@ -171,26 +151,7 @@ steps:
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 1 3
- label: ":lab_coat: IT Persistent Queues / part 3-of-3"
key: "integration-tests-qa-part-3-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as non root (logstash) user. UID is hardcoded in image.
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 2 3
ci/integration_tests.sh split 1
- label: ":lab_coat: x-pack unit tests"
key: "x-pack-unit-tests"

View file

@ -1,22 +0,0 @@
## Steps to set up GCP instance to run benchmark script
- Create an instance "n2-standard-16" with Ubuntu image
- Install docker
- `sudo snap install docker`
- `sudo usermod -a -G docker $USER`
- Install jq
- Install vault
- `sudo snap install vault`
- `vault login --method github`
- `vault kv get -format json secret/ci/elastic-logstash/benchmark`
- Set up the Elasticsearch index mapping and alias with `setup/*`
- Import Kibana dashboard with `save-objects/*`
- Run the benchmark script
- Send data to your own Elasticsearch. Customise `VAULT_PATH="secret/ci/elastic-logstash/your/path"`
- Run the script `main.sh`
- or run in background `nohup bash -x main.sh > log.log 2>&1 &`
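- Example (hypothetical values combining the options above): `VAULT_PATH="secret/ci/elastic-logstash/your/path" nohup bash -x main.sh 4 persisted 2 2 > log.log 2>&1 &` runs 4 filebeats against a persisted queue on a 2 CPU / 2 GB Logstash container and sends results to your own cluster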
## Notes
- Benchmarks should only be compared using the same hardware setup.
- Please do not send the test metrics to the benchmark cluster. You can set `VAULT_PATH` to send data and metrics to your own server.
- Run `all.sh` as a calibration run; it gives you a baseline of performance across different versions.
- [#16586](https://github.com/elastic/logstash/pull/16586) allows legacy monitoring using the configuration `xpack.monitoring.allow_legacy_collection: true`, which is not recognized in version 8. To run benchmarks in version 8, use the script of the corresponding branch (e.g. `8.16`) instead of `main` in buildkite.

View file

@ -3,7 +3,6 @@ pipeline.workers: ${WORKER}
pipeline.batch.size: ${BATCH_SIZE}
queue.type: ${QTYPE}
xpack.monitoring.allow_legacy_collection: true
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: ${MONITOR_ES_USER}
xpack.monitoring.elasticsearch.password: ${MONITOR_ES_PW}

View file

@ -1 +0,0 @@
f74f1a28-25e9-494f-ba41-ca9f13d4446d

View file

@ -1,315 +0,0 @@
#!/usr/bin/env bash
set -eo pipefail
SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
CONFIG_PATH="$SCRIPT_PATH/config"
source "$SCRIPT_PATH/util.sh"
usage() {
echo "Usage: $0 [FB_CNT] [QTYPE] [CPU] [MEM]"
echo "Example: $0 4 {persisted|memory|all} 2 2"
exit 1
}
parse_args() {
while [[ "$#" -gt 0 ]]; do
if [ -z "$FB_CNT" ]; then
FB_CNT=$1
elif [ -z "$QTYPE" ]; then
case $1 in
all | persisted | memory)
QTYPE=$1
;;
*)
echo "Error: wrong queue type $1"
usage
;;
esac
elif [ -z "$CPU" ]; then
CPU=$1
elif [ -z "$MEM" ]; then
MEM=$1
else
echo "Error: Too many arguments"
usage
fi
shift
done
# set default value
# number of filebeat
FB_CNT=${FB_CNT:-4}
# all | persisted | memory
QTYPE=${QTYPE:-all}
CPU=${CPU:-4}
MEM=${MEM:-4}
XMX=$((MEM / 2))
IFS=','
# worker multiplier: 1,2,4
MULTIPLIERS="${MULTIPLIERS:-1,2,4}"
read -ra MULTIPLIERS <<< "$MULTIPLIERS"
BATCH_SIZES="${BATCH_SIZES:-500}"
read -ra BATCH_SIZES <<< "$BATCH_SIZES"
# tags to json array
read -ra TAG_ARRAY <<< "$TAGS"
JSON_TAGS=$(printf '"%s",' "${TAG_ARRAY[@]}" | sed 's/,$//')
JSON_TAGS="[$JSON_TAGS]"
IFS=' '
echo "filebeats: $FB_CNT, cpu: $CPU, mem: $MEM, Queue: $QTYPE, worker multiplier: ${MULTIPLIERS[@]}, batch size: ${BATCH_SIZES[@]}"
}
get_secret() {
VAULT_PATH=${VAULT_PATH:-secret/ci/elastic-logstash/benchmark}
VAULT_DATA=$(vault kv get -format json $VAULT_PATH)
BENCHMARK_ES_HOST=$(echo $VAULT_DATA | jq -r '.data.es_host')
BENCHMARK_ES_USER=$(echo $VAULT_DATA | jq -r '.data.es_user')
BENCHMARK_ES_PW=$(echo $VAULT_DATA | jq -r '.data.es_pw')
MONITOR_ES_HOST=$(echo $VAULT_DATA | jq -r '.data.monitor_es_host')
MONITOR_ES_USER=$(echo $VAULT_DATA | jq -r '.data.monitor_es_user')
MONITOR_ES_PW=$(echo $VAULT_DATA | jq -r '.data.monitor_es_pw')
}
pull_images() {
echo "--- Pull docker images"
if [[ -n "$LS_VERSION" ]]; then
# pull image if it doesn't exist in local
[[ -z $(docker images -q docker.elastic.co/logstash/logstash:$LS_VERSION) ]] && docker pull "docker.elastic.co/logstash/logstash:$LS_VERSION"
else
# pull the latest snapshot logstash image
# select the SNAPSHOT artifact with the highest semantic version number
LS_VERSION=$( curl --retry-all-errors --retry 5 --retry-delay 1 -s "https://storage.googleapis.com/artifacts-api/snapshots/main.json" | jq -r '.version' )
BUILD_ID=$(curl --retry-all-errors --retry 5 --retry-delay 1 -s "https://storage.googleapis.com/artifacts-api/snapshots/main.json" | jq -r '.build_id')
ARCH=$(arch)
IMAGE_URL="https://snapshots.elastic.co/${BUILD_ID}/downloads/logstash/logstash-$LS_VERSION-docker-image-$ARCH.tar.gz"
IMAGE_FILENAME="$LS_VERSION.tar.gz"
echo "Download $LS_VERSION from $IMAGE_URL"
[[ ! -e $IMAGE_FILENAME ]] && curl -fsSL --retry-max-time 60 --retry 3 --retry-delay 5 -o "$IMAGE_FILENAME" "$IMAGE_URL"
[[ -z $(docker images -q docker.elastic.co/logstash/logstash:$LS_VERSION) ]] && docker load -i "$IMAGE_FILENAME"
fi
# pull filebeat image
FB_DEFAULT_VERSION="8.13.4"
FB_VERSION=${FB_VERSION:-$FB_DEFAULT_VERSION}
docker pull "docker.elastic.co/beats/filebeat:$FB_VERSION"
}
generate_logs() {
FLOG_FILE_CNT=${FLOG_FILE_CNT:-4}
SINGLE_SIZE=524288000
TOTAL_SIZE="$((FLOG_FILE_CNT * SINGLE_SIZE))"
FLOG_PATH="$SCRIPT_PATH/flog"
mkdir -p $FLOG_PATH
if [[ ! -e "$FLOG_PATH/log${FLOG_FILE_CNT}.log" ]]; then
echo "--- Generate logs in background. log: ${FLOG_FILE_CNT}, each size: 500mb"
docker run -d --name=flog --rm -v $FLOG_PATH:/go/src/data mingrammer/flog -t log -w -o "/go/src/data/log.log" -b $TOTAL_SIZE -p $SINGLE_SIZE
fi
}
check_logs() {
echo "--- Check log generation"
local cnt=0
until [[ -e "$FLOG_PATH/log${FLOG_FILE_CNT}.log" || $cnt -gt 600 ]]; do
echo "wait 30s" && sleep 30
cnt=$((cnt + 30))
done
ls -lah $FLOG_PATH
}
start_logstash() {
LS_CONFIG_PATH=$SCRIPT_PATH/ls/config
mkdir -p $LS_CONFIG_PATH
cp $CONFIG_PATH/pipelines.yml $LS_CONFIG_PATH/pipelines.yml
cp $CONFIG_PATH/logstash.yml $LS_CONFIG_PATH/logstash.yml
cp $CONFIG_PATH/uuid $LS_CONFIG_PATH/uuid
LS_JAVA_OPTS=${LS_JAVA_OPTS:--Xmx${XMX}g}
docker run -d --name=ls --net=host --cpus=$CPU --memory=${MEM}g -e LS_JAVA_OPTS="$LS_JAVA_OPTS" \
-e QTYPE="$QTYPE" -e WORKER="$WORKER" -e BATCH_SIZE="$BATCH_SIZE" \
-e BENCHMARK_ES_HOST="$BENCHMARK_ES_HOST" -e BENCHMARK_ES_USER="$BENCHMARK_ES_USER" -e BENCHMARK_ES_PW="$BENCHMARK_ES_PW" \
-e MONITOR_ES_HOST="$MONITOR_ES_HOST" -e MONITOR_ES_USER="$MONITOR_ES_USER" -e MONITOR_ES_PW="$MONITOR_ES_PW" \
-v $LS_CONFIG_PATH/logstash.yml:/usr/share/logstash/config/logstash.yml:ro \
-v $LS_CONFIG_PATH/pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro \
-v $LS_CONFIG_PATH/uuid:/usr/share/logstash/data/uuid:ro \
docker.elastic.co/logstash/logstash:$LS_VERSION
}
start_filebeat() {
for ((i = 0; i < FB_CNT; i++)); do
FB_PATH="$SCRIPT_PATH/fb${i}"
mkdir -p $FB_PATH
cp $CONFIG_PATH/filebeat.yml $FB_PATH/filebeat.yml
docker run -d --name=fb$i --net=host --user=root \
-v $FB_PATH/filebeat.yml:/usr/share/filebeat/filebeat.yml \
-v $SCRIPT_PATH/flog:/usr/share/filebeat/flog \
docker.elastic.co/beats/filebeat:$FB_VERSION filebeat -e --strict.perms=false
done
}
capture_stats() {
CURRENT=$(jq -r '.flow.output_throughput.current' $NS_JSON)
local eps_1m=$(jq -r '.flow.output_throughput.last_1_minute' $NS_JSON)
local eps_5m=$(jq -r '.flow.output_throughput.last_5_minutes' $NS_JSON)
local worker_util=$(jq -r '.pipelines.main.flow.worker_utilization.last_1_minute' $NS_JSON)
local worker_concurr=$(jq -r '.pipelines.main.flow.worker_concurrency.last_1_minute' $NS_JSON)
local cpu_percent=$(jq -r '.process.cpu.percent' $NS_JSON)
local heap=$(jq -r '.jvm.mem.heap_used_in_bytes' $NS_JSON)
local non_heap=$(jq -r '.jvm.mem.non_heap_used_in_bytes' $NS_JSON)
local q_event_cnt=$(jq -r '.pipelines.main.queue.events_count' $NS_JSON)
local q_size=$(jq -r '.pipelines.main.queue.queue_size_in_bytes' $NS_JSON)
TOTAL_EVENTS_OUT=$(jq -r '.pipelines.main.events.out' $NS_JSON)
printf "current: %s, 1m: %s, 5m: %s, worker_utilization: %s, worker_concurrency: %s, cpu: %s, heap: %s, non-heap: %s, q_events: %s, q_size: %s, total_events_out: %s\n" \
$CURRENT $eps_1m $eps_5m $worker_util $worker_concurr $cpu_percent $heap $non_heap $q_event_cnt $q_size $TOTAL_EVENTS_OUT
}
aggregate_stats() {
local file_glob="$SCRIPT_PATH/$NS_DIR/${QTYPE:0:1}_w${WORKER}b${BATCH_SIZE}_*.json"
MAX_EPS_1M=$( jqmax '.flow.output_throughput.last_1_minute' "$file_glob" )
MAX_EPS_5M=$( jqmax '.flow.output_throughput.last_5_minutes' "$file_glob" )
MAX_WORKER_UTIL=$( jqmax '.pipelines.main.flow.worker_utilization.last_1_minute' "$file_glob" )
MAX_WORKER_CONCURR=$( jqmax '.pipelines.main.flow.worker_concurrency.last_1_minute' "$file_glob" )
MAX_Q_EVENT_CNT=$( jqmax '.pipelines.main.queue.events_count' "$file_glob" )
MAX_Q_SIZE=$( jqmax '.pipelines.main.queue.queue_size_in_bytes' "$file_glob" )
AVG_CPU_PERCENT=$( jqavg '.process.cpu.percent' "$file_glob" )
AVG_VIRTUAL_MEM=$( jqavg '.process.mem.total_virtual_in_bytes' "$file_glob" )
AVG_HEAP=$( jqavg '.jvm.mem.heap_used_in_bytes' "$file_glob" )
AVG_NON_HEAP=$( jqavg '.jvm.mem.non_heap_used_in_bytes' "$file_glob" )
}
send_summary() {
echo "--- Send summary to Elasticsearch"
# build json
local timestamp
timestamp=$(date -u +"%Y-%m-%dT%H:%M:%S")
SUMMARY="{\"timestamp\": \"$timestamp\", \"version\": \"$LS_VERSION\", \"cpu\": \"$CPU\", \"mem\": \"$MEM\", \"workers\": \"$WORKER\", \"batch_size\": \"$BATCH_SIZE\", \"queue_type\": \"$QTYPE\""
not_empty "$TOTAL_EVENTS_OUT" && SUMMARY="$SUMMARY, \"total_events_out\": \"$TOTAL_EVENTS_OUT\""
not_empty "$MAX_EPS_1M" && SUMMARY="$SUMMARY, \"max_eps_1m\": \"$MAX_EPS_1M\""
not_empty "$MAX_EPS_5M" && SUMMARY="$SUMMARY, \"max_eps_5m\": \"$MAX_EPS_5M\""
not_empty "$MAX_WORKER_UTIL" && SUMMARY="$SUMMARY, \"max_worker_utilization\": \"$MAX_WORKER_UTIL\""
not_empty "$MAX_WORKER_CONCURR" && SUMMARY="$SUMMARY, \"max_worker_concurrency\": \"$MAX_WORKER_CONCURR\""
not_empty "$AVG_CPU_PERCENT" && SUMMARY="$SUMMARY, \"avg_cpu_percentage\": \"$AVG_CPU_PERCENT\""
not_empty "$AVG_HEAP" && SUMMARY="$SUMMARY, \"avg_heap\": \"$AVG_HEAP\""
not_empty "$AVG_NON_HEAP" && SUMMARY="$SUMMARY, \"avg_non_heap\": \"$AVG_NON_HEAP\""
not_empty "$AVG_VIRTUAL_MEM" && SUMMARY="$SUMMARY, \"avg_virtual_memory\": \"$AVG_VIRTUAL_MEM\""
not_empty "$MAX_Q_EVENT_CNT" && SUMMARY="$SUMMARY, \"max_queue_events\": \"$MAX_Q_EVENT_CNT\""
not_empty "$MAX_Q_SIZE" && SUMMARY="$SUMMARY, \"max_queue_bytes_size\": \"$MAX_Q_SIZE\""
not_empty "$TAGS" && SUMMARY="$SUMMARY, \"tags\": $JSON_TAGS"
SUMMARY="$SUMMARY}"
tee summary.json << EOF
{"index": {}}
$SUMMARY
EOF
# send to ES
local resp
local err_status
resp=$(curl -s -X POST -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" "$BENCHMARK_ES_HOST/benchmark_summary/_bulk" -H 'Content-Type: application/json' --data-binary @"summary.json")
echo "$resp"
err_status=$(echo "$resp" | jq -r ".errors")
if [[ "$err_status" == "true" ]]; then
echo "Failed to send summary"
exit 1
fi
}
# $1: snapshot index
node_stats() {
NS_JSON="$SCRIPT_PATH/$NS_DIR/${QTYPE:0:1}_w${WORKER}b${BATCH_SIZE}_$1.json" # m_w8b1000_0.json
# curl inside container because docker on mac cannot resolve localhost to host network interface
docker exec -i ls curl localhost:9600/_node/stats > "$NS_JSON" 2> /dev/null
}
# $1: index
snapshot() {
node_stats $1
capture_stats
}
create_directory() {
NS_DIR="fb${FB_CNT}c${CPU}m${MEM}" # fb4c4m4
mkdir -p "$SCRIPT_PATH/$NS_DIR"
}
queue() {
for QTYPE in "persisted" "memory"; do
worker
done
}
worker() {
for m in "${MULTIPLIERS[@]}"; do
WORKER=$((CPU * m))
batch
done
}
batch() {
for BATCH_SIZE in "${BATCH_SIZES[@]}"; do
run_pipeline
stop_pipeline
done
}
run_pipeline() {
echo "--- Run pipeline. queue type: $QTYPE, worker: $WORKER, batch size: $BATCH_SIZE"
start_logstash
start_filebeat
docker ps
echo "(0) sleep 3m" && sleep 180
snapshot "0"
for i in {1..8}; do
echo "($i) sleep 30s" && sleep 30
snapshot "$i"
# print docker log when ingestion rate is zero
# remove '.' in number and return max val
[[ $(max -g "${CURRENT/./}" "0") -eq 0 ]] &&
docker logs fb0 &&
docker logs ls
done
aggregate_stats
send_summary
}
stop_pipeline() {
echo "--- Stop Pipeline"
for ((i = 0; i < FB_CNT; i++)); do
docker stop fb$i
docker rm fb$i
done
docker stop ls
docker rm ls
curl -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" -X DELETE $BENCHMARK_ES_HOST/_data_stream/logs-generic-default
echo " data stream deleted "
# TODO: clean page caches, reduce memory fragmentation
# https://github.com/elastic/logstash/pull/16191#discussion_r1647050216
}
clean_up() {
# stop log generation if it has not finished yet
[[ -n $(docker ps | grep flog) ]] && docker stop flog || true
# remove image
docker image rm docker.elastic.co/logstash/logstash:$LS_VERSION
}

View file

@ -15,8 +15,9 @@ set -eo pipefail
# - The script sends a summary of EPS and resource usage to index `benchmark_summary`
# *******************************************************
SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
source "$SCRIPT_PATH/core.sh"
SCRIPT_PATH="$(cd "$(dirname "$0")"; pwd)"
CONFIG_PATH="$SCRIPT_PATH/config"
source "$SCRIPT_PATH/util.sh"
## usage:
## main.sh FB_CNT QTYPE CPU MEM
@ -35,9 +36,272 @@ source "$SCRIPT_PATH/core.sh"
## MEM=4 # number of GB for Logstash container
## QTYPE=memory # queue type to test {persisted|memory|all}
## FB_CNT=4 # number of filebeats to use in benchmark
## FLOG_FILE_CNT=4 # number of files to generate for ingestion
## VAULT_PATH=secret/path # vault path pointing to Elasticsearch credentials. The default value points to the benchmark cluster.
## TAGS=test,other # tags with "," separator.
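## Example (hypothetical invocation; positional args follow `main.sh FB_CNT QTYPE CPU MEM`):
##   MULTIPLIERS=1,2 BATCH_SIZES=500,1000 TAGS=nightly ./main.sh 4 persisted 2 2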
usage() {
echo "Usage: $0 [FB_CNT] [QTYPE] [CPU] [MEM]"
echo "Example: $0 4 {persisted|memory|all} 2 2"
exit 1
}
parse_args() {
while [[ "$#" -gt 0 ]]; do
if [ -z "$FB_CNT" ]; then
FB_CNT=$1
elif [ -z "$QTYPE" ]; then
case $1 in
all | persisted | memory)
QTYPE=$1
;;
*)
echo "Error: wrong queue type $1"
usage
;;
esac
elif [ -z "$CPU" ]; then
CPU=$1
elif [ -z "$MEM" ]; then
MEM=$1
else
echo "Error: Too many arguments"
usage
fi
shift
done
# set default value
# number of filebeat
FB_CNT=${FB_CNT:-4}
# all | persisted | memory
QTYPE=${QTYPE:-all}
CPU=${CPU:-4}
MEM=${MEM:-4}
XMX=$((MEM / 2))
IFS=','
# worker multiplier: 1,2,4
MULTIPLIERS="${MULTIPLIERS:-1,2,4}"
read -ra MULTIPLIERS <<< "$MULTIPLIERS"
BATCH_SIZES="${BATCH_SIZES:-500}"
read -ra BATCH_SIZES <<< "$BATCH_SIZES"
IFS=' '
echo "filebeats: $FB_CNT, cpu: $CPU, mem: $MEM, Queue: $QTYPE, worker multiplier: ${MULTIPLIERS[@]}, batch size: ${BATCH_SIZES[@]}"
}
get_secret() {
VAULT_PATH=secret/ci/elastic-logstash/benchmark
VAULT_DATA=$(vault kv get -format json $VAULT_PATH)
BENCHMARK_ES_HOST=$(echo $VAULT_DATA | jq -r '.data.es_host')
BENCHMARK_ES_USER=$(echo $VAULT_DATA | jq -r '.data.es_user')
BENCHMARK_ES_PW=$(echo $VAULT_DATA | jq -r '.data.es_pw')
MONITOR_ES_HOST=$(echo $VAULT_DATA | jq -r '.data.monitor_es_host')
MONITOR_ES_USER=$(echo $VAULT_DATA | jq -r '.data.monitor_es_user')
MONITOR_ES_PW=$(echo $VAULT_DATA | jq -r '.data.monitor_es_pw')
}
pull_images() {
echo "--- Pull docker images"
# pull the latest snapshot logstash image
if [[ -n "$LS_VERSION" ]]; then
docker pull "docker.elastic.co/logstash/logstash:$LS_VERSION"
else
# select the SNAPSHOT artifact with the highest semantic version number
LS_VERSION=$( curl --retry-all-errors --retry 5 --retry-delay 1 -s https://artifacts-api.elastic.co/v1/versions | jq -r '.versions | map(select(endswith("-SNAPSHOT"))) | max_by(rtrimstr("-SNAPSHOT")|split(".")|map(tonumber))' )
BUILD_ID=$(curl --retry-all-errors --retry 5 --retry-delay 1 -s "https://artifacts-api.elastic.co/v1/versions/${LS_VERSION}/builds/latest" | jq -re '.build.build_id')
ARCH=$(arch)
IMAGE_URL="https://snapshots.elastic.co/${BUILD_ID}/downloads/logstash/logstash-$LS_VERSION-docker-image-$ARCH.tar.gz"
IMAGE_FILENAME="$LS_VERSION.tar.gz"
echo "Download $LS_VERSION from $IMAGE_URL"
[[ ! -e $IMAGE_FILENAME ]] && curl -fsSL --retry-max-time 60 --retry 3 --retry-delay 5 -o "$IMAGE_FILENAME" "$IMAGE_URL"
[[ -z $(docker images -q docker.elastic.co/logstash/logstash:$LS_VERSION) ]] && docker load -i "$IMAGE_FILENAME"
fi
# pull filebeat image
FB_DEFAULT_VERSION="8.13.4"
FB_VERSION=${FB_VERSION:-$FB_DEFAULT_VERSION}
docker pull "docker.elastic.co/beats/filebeat:$FB_VERSION"
}
generate_logs() {
FLOG_PATH="$SCRIPT_PATH/flog"
mkdir -p $FLOG_PATH
if [[ ! -e "$FLOG_PATH/log4.log" ]]; then
echo "--- Generate logs in background. log: 5, size: 500mb"
docker run -d --name=flog --rm -v $FLOG_PATH:/go/src/data mingrammer/flog -t log -w -o "/go/src/data/log.log" -b 2621440000 -p 524288000
fi
}
check_logs() {
echo "--- Check log generation"
local cnt=0
until [[ -e "$FLOG_PATH/log4.log" || $cnt -gt 600 ]]; do
echo "wait 30s" && sleep 30
cnt=$((cnt + 30))
done
ls -lah $FLOG_PATH
}
start_logstash() {
LS_CONFIG_PATH=$SCRIPT_PATH/ls/config
mkdir -p $LS_CONFIG_PATH
cp $CONFIG_PATH/pipelines.yml $LS_CONFIG_PATH/pipelines.yml
cp $CONFIG_PATH/logstash.yml $LS_CONFIG_PATH/logstash.yml
LS_JAVA_OPTS=${LS_JAVA_OPTS:--Xmx${XMX}g}
docker run -d --name=ls --net=host --cpus=$CPU --memory=${MEM}g -e LS_JAVA_OPTS="$LS_JAVA_OPTS" \
-e QTYPE="$QTYPE" -e WORKER="$WORKER" -e BATCH_SIZE="$BATCH_SIZE" \
-e BENCHMARK_ES_HOST="$BENCHMARK_ES_HOST" -e BENCHMARK_ES_USER="$BENCHMARK_ES_USER" -e BENCHMARK_ES_PW="$BENCHMARK_ES_PW" \
-e MONITOR_ES_HOST="$MONITOR_ES_HOST" -e MONITOR_ES_USER="$MONITOR_ES_USER" -e MONITOR_ES_PW="$MONITOR_ES_PW" \
-v $LS_CONFIG_PATH/logstash.yml:/usr/share/logstash/config/logstash.yml:ro \
-v $LS_CONFIG_PATH/pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro \
docker.elastic.co/logstash/logstash:$LS_VERSION
}
start_filebeat() {
for ((i = 0; i < FB_CNT; i++)); do
FB_PATH="$SCRIPT_PATH/fb${i}"
mkdir -p $FB_PATH
cp $CONFIG_PATH/filebeat.yml $FB_PATH/filebeat.yml
docker run -d --name=fb$i --net=host --user=root \
-v $FB_PATH/filebeat.yml:/usr/share/filebeat/filebeat.yml \
-v $SCRIPT_PATH/flog:/usr/share/filebeat/flog \
docker.elastic.co/beats/filebeat:$FB_VERSION filebeat -e --strict.perms=false
done
}
capture_stats() {
CURRENT=$(jq -r '.flow.output_throughput.current' $NS_JSON)
local eps_1m=$(jq -r '.flow.output_throughput.last_1_minute' $NS_JSON)
local eps_5m=$(jq -r '.flow.output_throughput.last_5_minutes' $NS_JSON)
local worker_util=$(jq -r '.pipelines.main.flow.worker_utilization.last_1_minute' $NS_JSON)
local worker_concurr=$(jq -r '.pipelines.main.flow.worker_concurrency.last_1_minute' $NS_JSON)
local cpu_percent=$(jq -r '.process.cpu.percent' $NS_JSON)
local heap=$(jq -r '.jvm.mem.heap_used_in_bytes' $NS_JSON)
local non_heap=$(jq -r '.jvm.mem.non_heap_used_in_bytes' $NS_JSON)
local q_event_cnt=$(jq -r '.pipelines.main.queue.events_count' $NS_JSON)
local q_size=$(jq -r '.pipelines.main.queue.queue_size_in_bytes' $NS_JSON)
TOTAL_EVENTS_OUT=$(jq -r '.pipelines.main.events.out' $NS_JSON)
printf "current: %s, 1m: %s, 5m: %s, worker_utilization: %s, worker_concurrency: %s, cpu: %s, heap: %s, non-heap: %s, q_events: %s, q_size: %s, total_events_out: %s\n" \
$CURRENT $eps_1m $eps_5m $worker_util $worker_concurr $cpu_percent $heap $non_heap $q_event_cnt $q_size $TOTAL_EVENTS_OUT
}
aggregate_stats() {
local file_glob="$SCRIPT_PATH/$NS_DIR/${QTYPE:0:1}_w${WORKER}b${BATCH_SIZE}_*.json"
MAX_EPS_1M=$( jqmax '.flow.output_throughput.last_1_minute' "$file_glob" )
MAX_EPS_5M=$( jqmax '.flow.output_throughput.last_5_minutes' "$file_glob" )
MAX_WORKER_UTIL=$( jqmax '.pipelines.main.flow.worker_utilization.last_1_minute' "$file_glob" )
MAX_WORKER_CONCURR=$( jqmax '.pipelines.main.flow.worker_concurrency.last_1_minute' "$file_glob" )
MAX_Q_EVENT_CNT=$( jqmax '.pipelines.main.queue.events_count' "$file_glob" )
MAX_Q_SIZE=$( jqmax '.pipelines.main.queue.queue_size_in_bytes' "$file_glob" )
AVG_CPU_PERCENT=$( jqavg '.process.cpu.percent' "$file_glob" )
AVG_VIRTUAL_MEM=$( jqavg '.process.mem.total_virtual_in_bytes' "$file_glob" )
AVG_HEAP=$( jqavg '.jvm.mem.heap_used_in_bytes' "$file_glob" )
AVG_NON_HEAP=$( jqavg '.jvm.mem.non_heap_used_in_bytes' "$file_glob" )
}
send_summary() {
echo "--- Send summary to Elasticsearch"
timestamp=$(date -u +"%Y-%m-%dT%H:%M:%S")
tee summary.json << EOF
{"index": {}}
{"timestamp": "$timestamp", "version": "$LS_VERSION", "cpu": "$CPU", "mem": "$MEM", "workers": "$WORKER", "batch_size": "$BATCH_SIZE", "queue_type": "$QTYPE", "total_events_out": "$TOTAL_EVENTS_OUT", "max_eps_1m": "$MAX_EPS_1M", "max_eps_5m": "$MAX_EPS_5M", "max_worker_utilization": "$MAX_WORKER_UTIL", "max_worker_concurrency": "$MAX_WORKER_CONCURR", "avg_cpu_percentage": "$AVG_CPU_PERCENT", "avg_heap": "$AVG_HEAP", "avg_non_heap": "$AVG_NON_HEAP", "avg_virtual_memory": "$AVG_VIRTUAL_MEM", "max_queue_events": "$MAX_Q_EVENT_CNT", "max_queue_bytes_size": "$MAX_Q_SIZE"}
EOF
curl -X POST -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" "$BENCHMARK_ES_HOST/benchmark_summary/_bulk" -H 'Content-Type: application/json' --data-binary @"summary.json"
echo ""
}
# $1: snapshot index
node_stats() {
NS_JSON="$SCRIPT_PATH/$NS_DIR/${QTYPE:0:1}_w${WORKER}b${BATCH_SIZE}_$1.json" # m_w8b1000_0.json
# curl inside container because docker on mac cannot resolve localhost to host network interface
docker exec -it ls curl localhost:9600/_node/stats > "$NS_JSON" 2> /dev/null
}
# $1: index
snapshot() {
node_stats $1
capture_stats
}
create_directory() {
NS_DIR="fb${FB_CNT}c${CPU}m${MEM}" # fb4c4m4
mkdir -p "$SCRIPT_PATH/$NS_DIR"
}
queue() {
for QTYPE in "persisted" "memory"; do
worker
done
}
worker() {
for m in "${MULTIPLIERS[@]}"; do
WORKER=$((CPU * m))
batch
done
}
batch() {
for BATCH_SIZE in "${BATCH_SIZES[@]}"; do
run_pipeline
stop_pipeline
done
}
run_pipeline() {
echo "--- Run pipeline. queue type: $QTYPE, worker: $WORKER, batch size: $BATCH_SIZE"
start_logstash
start_filebeat
docker ps
echo "(0) sleep 3m" && sleep 180
snapshot "0"
for i in {1..8}; do
echo "($i) sleep 30s" && sleep 30
snapshot "$i"
# print docker log when ingestion rate is zero
# remove '.' in number and return max val
[[ $(max -g "${CURRENT/./}" "0") -eq 0 ]] &&
docker logs fb0 &&
docker logs ls
done
aggregate_stats
send_summary
}
stop_pipeline() {
echo "--- Stop Pipeline"
for ((i = 0; i < FB_CNT; i++)); do
docker stop fb$i
docker rm fb$i
done
docker stop ls
docker rm ls
curl -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" -X DELETE $BENCHMARK_ES_HOST/_data_stream/logs-generic-default
echo " data stream deleted "
# TODO: clean page caches, reduce memory fragmentation
# https://github.com/elastic/logstash/pull/16191#discussion_r1647050216
}
main() {
parse_args "$@"
get_secret
@ -53,7 +317,8 @@ main() {
worker
fi
clean_up
# stop log generation if it has not finished yet
[[ -n $(docker ps | grep flog) ]] && docker stop flog || true
}
main "$@"
main "$@"

View file

@ -1,44 +0,0 @@
#!/usr/bin/env bash
set -eo pipefail
# *******************************************************
# Run benchmark for versions that have flow metrics
# When the hardware changes, run the marathon task to establish a new baseline.
# Usage:
# nohup bash -x all.sh > log.log 2>&1 &
# Accept env vars:
#    STACK_VERSIONS=8.15.0,8.15.1,8.16.0-SNAPSHOT # versions to test, as a comma-separated string
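#    e.g. a hypothetical calibration run over two versions:
#      STACK_VERSIONS=8.15.0,8.16.0-SNAPSHOT nohup bash -x all.sh > log.log 2>&1 &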
# *******************************************************
SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
source "$SCRIPT_PATH/core.sh"
parse_stack_versions() {
IFS=','
STACK_VERSIONS="${STACK_VERSIONS:-8.6.0,8.7.0,8.8.0,8.9.0,8.10.0,8.11.0,8.12.0,8.13.0,8.14.0,8.15.0}"
read -ra STACK_VERSIONS <<< "$STACK_VERSIONS"
}
main() {
parse_stack_versions
parse_args "$@"
get_secret
generate_logs
check_logs
USER_QTYPE="$QTYPE"
for V in "${STACK_VERSIONS[@]}" ; do
LS_VERSION="$V"
QTYPE="$USER_QTYPE"
pull_images
create_directory
if [[ $QTYPE == "all" ]]; then
queue
else
worker
fi
done
}
main "$@"

View file

@ -1,8 +0,0 @@
## 20241210
Remove scripted field `5m_num` from dashboards
## 20240912
Updated runtime field `release` to return `true` when `version` contains "SNAPSHOT"
## 20240912
Initial dashboards

View file

@ -1,14 +0,0 @@
benchmark_objects.ndjson contains the following resources
- Dashboards
- daily snapshot
- released versions
- Data Views
- benchmark
- runtime fields

  | Field Name   | Type    | Comment                                                                                |
  |--------------|---------|----------------------------------------------------------------------------------------|
  | versions_num | long    | convert semantic versioning to number for graph sorting                                 |
  | release      | boolean | `true` for released version. `false` for snapshot version. It is for graph filtering.   |

To import objects to Kibana, navigate to Stack Management > Saved Objects and click Import

File diff suppressed because one or more lines are too long

View file

@ -1,6 +0,0 @@
POST /_aliases
{
"actions": [
{ "add": { "index": "benchmark_summary_v2", "alias": "benchmark_summary" } }
]
}

View file

@ -1,179 +0,0 @@
PUT /benchmark_summary_v2/_mapping
{
"properties": {
"avg_cpu_percentage": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"avg_heap": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"avg_non_heap": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"avg_virtual_memory": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"batch_size": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"cpu": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_eps_1m": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_eps_5m": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_queue_bytes_size": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_queue_events": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_worker_concurrency": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_worker_utilization": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"mem": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"queue_type": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"tag": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"timestamp": {
"type": "date"
},
"total_events_out": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"version": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"workers": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"tags" : {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}

View file

@ -30,12 +30,3 @@ jqavg() {
jqmax() {
jq -r "$1 | select(. != null)" $2 | jq -s . | jq 'max'
}
# return true if $1 is non-empty and not "null"
not_empty() {
if [[ -n "$1" && "$1" != "null" ]]; then
return 0
else
return 1
fi
}
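# usage sketch (mirrors the calls made by core.sh's aggregate_stats and send_summary):
#   MAX_EPS_1M=$( jqmax '.flow.output_throughput.last_1_minute' "$file_glob" )
#   AVG_CPU_PERCENT=$( jqavg '.process.cpu.percent' "$file_glob" )
#   not_empty "$MAX_EPS_1M" && SUMMARY="$SUMMARY, \"max_eps_1m\": \"$MAX_EPS_1M\""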

View file

@ -19,7 +19,7 @@ changed_files=$(git diff --name-only $previous_commit)
if [[ -n "$changed_files" ]] && [[ -z "$(echo "$changed_files" | grep -vE "$1")" ]]; then
echo "All files compared to the previous commit [$previous_commit] match the specified regex: [$1]"
echo "Files changed:"
git --no-pager diff --name-only HEAD^
git diff --name-only HEAD^
exit 0
else
exit 1

View file

@ -1,29 +0,0 @@
#!/usr/bin/env bash
# ********************************************************
# Source this script to get the QUALIFIED_VERSION env var
# or execute it to receive the qualified version on stdout
# ********************************************************
set -euo pipefail
export QUALIFIED_VERSION="$(
# Extract the version number from the version.yml file
# e.g.: 8.6.0
printf '%s' "$(awk -F':' '{ if ("logstash" == $1) { gsub(/^ | $/,"",$2); printf $2; exit } }' versions.yml)"
# Qualifier is passed from CI as an optional field and specifies the version postfix
# in case of alpha or beta releases for staging builds only:
# e.g: 8.0.0-alpha1
printf '%s' "${VERSION_QUALIFIER:+-${VERSION_QUALIFIER}}"
# add the SNAPSHOT tag unless WORKFLOW_TYPE=="staging" or RELEASE=="1"
if [[ ! ( "${WORKFLOW_TYPE:-}" == "staging" || "${RELEASE:+$RELEASE}" == "1" ) ]]; then
printf '%s' "-SNAPSHOT"
fi
)"
# if invoked directly, output the QUALIFIED_VERSION to stdout
if [[ "$0" == "${BASH_SOURCE:-${ZSH_SCRIPT:-}}" ]]; then
printf '%s' "${QUALIFIED_VERSION}"
fi
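# Usage sketch (hypothetical callers; the first form mirrors how the DRA build scripts consume this script):
#   STACK_VERSION="$(./$(dirname "$0")/../common/qualified-version.sh)"  # execute: capture the version on stdout
#   source .buildkite/scripts/common/qualified-version.sh                # source: QUALIFIED_VERSION is exported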

View file

@ -26,7 +26,21 @@ rake artifact:docker_oss || error "artifact:docker_oss build failed."
rake artifact:docker_wolfi || error "artifact:docker_wolfi build failed."
rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
STACK_VERSION="$(./$(dirname "$0")/../common/qualified-version.sh)"
if [[ "$ARCH" != "aarch64" ]]; then
rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
if [[ "$WORKFLOW_TYPE" == "staging" ]] && [[ -n "$VERSION_QUALIFIER" ]]; then
# Qualifier is passed from CI as an optional field and specifies the version postfix
# in case of alpha or beta releases for staging builds only:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER}"
fi
if [[ "$WORKFLOW_TYPE" == "snapshot" ]]; then
STACK_VERSION="${STACK_VERSION}-SNAPSHOT"
fi
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
info "Saving tar.gz for docker images"
@ -38,6 +52,10 @@ for file in build/logstash-*; do shasum $file;done
info "Uploading DRA artifacts in buildkite's artifact store ..."
# Note the deb, rpm and tar.gz AARCH64 files generated have already been loaded by build_packages.sh
images="logstash logstash-oss logstash-wolfi"
if [ "$ARCH" != "aarch64" ]; then
# No logstash-ubi8 for AARCH64
images="logstash logstash-oss logstash-wolfi logstash-ubi8"
fi
for image in ${images}; do
buildkite-agent artifact upload "build/$image-${STACK_VERSION}-docker-image-${ARCH}.tar.gz"
done
@ -45,7 +63,7 @@ done
# Upload 'docker-build-context.tar.gz' files only when building x86_64, otherwise they will be
# overwritten when building aarch64 (or vice versa).
if [ "$ARCH" != "aarch64" ]; then
for image in logstash logstash-oss logstash-wolfi logstash-ironbank; do
for image in logstash logstash-oss logstash-wolfi logstash-ubi8 logstash-ironbank; do
buildkite-agent artifact upload "build/${image}-${STACK_VERSION}-docker-build-context.tar.gz"
done
fi

View file

@ -23,7 +23,17 @@ esac
SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
STACK_VERSION="$(./$(dirname "$0")/../common/qualified-version.sh)"
if [[ "$WORKFLOW_TYPE" == "staging" ]] && [[ -n "$VERSION_QUALIFIER" ]]; then
# Qualifier is passed from CI as an optional field and specifies the version postfix
# in case of alpha or beta releases for staging builds only:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER}"
fi
if [[ "$WORKFLOW_TYPE" == "snapshot" ]]; then
STACK_VERSION="${STACK_VERSION}-SNAPSHOT"
fi
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
info "Generated Artifacts"

View file

@ -11,6 +11,10 @@ function save_docker_tarballs {
local arch="${1:?architecture required}"
local version="${2:?stack-version required}"
local images="logstash logstash-oss logstash-wolfi"
if [ "${arch}" != "aarch64" ]; then
# No logstash-ubi8 for AARCH64
images="logstash logstash-oss logstash-wolfi logstash-ubi8"
fi
for image in ${images}; do
tar_file="${image}-${version}-docker-image-${arch}.tar"
@ -29,12 +33,12 @@ export JRUBY_OPTS="-J-Xmx4g"
# Extract the version number from the version.yml file
# e.g.: 8.6.0
# The suffix part like alpha1 etc is managed by the optional VERSION_QUALIFIER environment variable
# The suffix part like alpha1 etc is managed by the optional VERSION_QUALIFIER_OPT environment variable
STACK_VERSION=`cat versions.yml | sed -n 's/^logstash\:[[:space:]]\([[:digit:]]*\.[[:digit:]]*\.[[:digit:]]*\)$/\1/p'`
info "Agent is running on architecture [$(uname -i)]"
export VERSION_QUALIFIER=${VERSION_QUALIFIER:-""}
export VERSION_QUALIFIER_OPT=${VERSION_QUALIFIER_OPT:-""}
export DRA_DRY_RUN=${DRA_DRY_RUN:-""}
if [[ ! -z $DRA_DRY_RUN && $BUILDKITE_STEP_KEY == "logstash_publish_dra" ]]; then

View file

@ -42,10 +42,24 @@ if [ "$RELEASE_VER" != "7.17" ]; then
:
fi
# Deleting ubi8 for aarch64 for the time being. This image itself is not being built, and it is not expected
# by the release manager.
# See https://github.com/elastic/infra/blob/master/cd/release/release-manager/project-configs/8.5/logstash.gradle
# for more details.
# TODO filter it out when uploading artifacts instead
rm -f build/logstash-ubi8-${STACK_VERSION}-docker-image-aarch64.tar.gz
info "Downloaded ARTIFACTS sha report"
for file in build/logstash-*; do shasum $file;done
FINAL_VERSION="$(./$(dirname "$0")/../common/qualified-version.sh)"
FINAL_VERSION=$STACK_VERSION
if [[ -n "$VERSION_QUALIFIER" ]]; then
FINAL_VERSION="$FINAL_VERSION-${VERSION_QUALIFIER}"
fi
if [[ "$WORKFLOW_TYPE" == "snapshot" ]]; then
FINAL_VERSION="${STACK_VERSION}-SNAPSHOT"
fi
mv build/distributions/dependencies-reports/logstash-${FINAL_VERSION}.csv build/distributions/dependencies-${FINAL_VERSION}.csv

View file

@ -155,7 +155,7 @@ ci/acceptance_tests.sh"""),
def acceptance_docker_steps()-> list[typing.Any]:
steps = []
for flavor in ["full", "oss", "ubi", "wolfi"]:
for flavor in ["full", "oss", "ubi8", "wolfi"]:
steps.append({
"label": f":docker: {flavor} flavor acceptance",
"agents": gcp_agent(vm_name="ubuntu-2204", image_prefix="family/platform-ingest-logstash"),

View file

@ -6,13 +6,12 @@ Health Report Integration test bootstrapper with Python script
"""
import os
import subprocess
import time
import util
import yaml
class Bootstrap:
ELASTIC_STACK_RELEASED_VERSION_URL = "https://storage.googleapis.com/artifacts-api/releases/current/"
ELASTIC_STACK_VERSIONS_URL = "https://artifacts-api.elastic.co/v1/versions"
def __init__(self) -> None:
f"""
@ -43,13 +42,20 @@ class Bootstrap:
f"rerun again")
def __resolve_latest_stack_version_for(self, major_version: str) -> str:
resp = util.call_url_with_retry(self.ELASTIC_STACK_RELEASED_VERSION_URL + major_version)
release_version = resp.text.strip()
print(f"Resolved latest version for {major_version} is {release_version}.")
resolved_version = ""
response = util.call_url_with_retry(self.ELASTIC_STACK_VERSIONS_URL)
release_versions = response.json()["versions"]
for release_version in reversed(release_versions):
if release_version.find("SNAPSHOT") > 0:
continue
if release_version.split(".")[0] == major_version:
print(f"Resolved latest version for {major_version} is {release_version}.")
resolved_version = release_version
break
if release_version == "":
if resolved_version == "":
raise ValueError(f"Cannot resolve latest version for {major_version} major")
return release_version
return resolved_version
def install_plugin(self, plugin_path: str) -> None:
util.run_or_raise_error(
@ -93,19 +99,3 @@ class Bootstrap:
print(f"Logstash is running with PID: {process.pid}.")
return process
def stop_logstash(self, process: subprocess.Popen):
start_time = time.time() # in seconds
process.terminate()
for stdout_line in iter(process.stdout.readline, ""):
# print(f"STDOUT: {stdout_line.strip()}")
if "Logstash shut down" in stdout_line or "Logstash stopped" in stdout_line:
print(f"Logstash stopped.")
return None
# shutdown watcher keeps running; we should be good with considering time spent
if time.time() - start_time > 60:
print(f"Logstash didn't stop in 1min, sending SIGTERM signal.")
process.kill()
if time.time() - start_time > 70:
print(f"Logstash didn't stop over 1min, exiting.")
return None

View file

@ -6,11 +6,11 @@ class ConfigValidator:
REQUIRED_KEYS = {
"root": ["name", "config", "conditions", "expectation"],
"config": ["pipeline.id", "config.string"],
"conditions": ["full_start_required", "wait_seconds"],
"conditions": ["full_start_required"],
"expectation": ["status", "symptom", "indicators"],
"indicators": ["pipelines"],
"pipelines": ["status", "symptom", "indicators"],
"DYNAMIC": ["status", "symptom", "diagnosis", "impacts", "details"], # pipeline-id is a DYNAMIC
"DYNAMIC": ["status", "symptom", "diagnosis", "impacts", "details"],
"details": ["status"],
"status": ["state"]
}
@ -19,8 +19,7 @@ class ConfigValidator:
self.yaml_content = None
def __has_valid_keys(self, data: any, key_path: str, repeated: bool) -> bool:
# we reached the value
if isinstance(data, str) or isinstance(data, bool) or isinstance(data, int) or isinstance(data, float):
if isinstance(data, str) or isinstance(data, bool): # we reached values
return True
# we have two indicators section and for the next repeated ones, we go deeper

View file

@ -62,23 +62,21 @@ def main():
print(f"Testing `{scenario_content.get('name')}` scenario.")
scenario_name = scenario_content['name']
is_full_start_required = scenario_content.get('conditions').get('full_start_required')
wait_seconds = scenario_content.get('conditions').get('wait_seconds')
is_full_start_required = next(sub.get('full_start_required') for sub in
scenario_content.get('conditions') if 'full_start_required' in sub)
config = scenario_content['config']
if config is not None:
bootstrap.apply_config(config)
expectations = scenario_content.get("expectation")
process = bootstrap.run_logstash(is_full_start_required)
if process is not None:
if wait_seconds is not None:
print(f"Test requires to wait for `{wait_seconds}` seconds.")
time.sleep(wait_seconds) # wait for Logstash to start
try:
scenario_executor.on(scenario_name, expectations)
except Exception as e:
print(e)
has_failed_scenario = True
bootstrap.stop_logstash(process)
process.terminate()
time.sleep(5) # leave some window to terminate the process
if has_failed_scenario:
# intentionally fail due to visibility

View file

@ -4,13 +4,14 @@ set -euo pipefail
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:/opt/buildkite-agent/.java/bin:$PATH"
export JAVA_HOME="/opt/buildkite-agent/.java"
export PYENV_VERSION="3.11.5"
eval "$(rbenv init -)"
eval "$(pyenv init -)"
echo "--- Installing pip"
sudo apt-get install python3-pip -y
echo "--- Installing dependencies"
python3 -m pip install -r .buildkite/scripts/health-report-tests/requirements.txt
python3 -mpip install -r .buildkite/scripts/health-report-tests/requirements.txt
echo "--- Running tests"
python3 .buildkite/scripts/health-report-tests/main.py

View file

@ -12,12 +12,10 @@ class ScenarioExecutor:
pass
def __has_intersection(self, expects, results):
# TODO: this logic is aligned on current Health API response
# there is no guarantee that the method runs correctly if given multiple expects and results
# we expect every entry of `expects` to exist in `results`
for expect in expects:
for result in results:
if result.get('help_url') and "health-report-pipeline-" not in result.get('help_url'):
if result.get('help_url') and "health-report-pipeline-status.html#" not in result.get('help_url'):
return False
if not all(key in result and result[key] == value for key, value in expect.items()):
return False

View file

@ -8,8 +8,7 @@ config:
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 5
- full_start_required: true
expectation:
status: "red"
symptom: "1 indicator is unhealthy (`pipelines`)"
@ -23,10 +22,10 @@ expectation:
symptom: "The pipeline is unhealthy; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is not running, likely because it has encountered an error"
action: "view logs to determine the cause of abnormal pipeline shutdown"
- action: "view logs to determine the cause of abnormal pipeline shutdown"
impacts:
- description: "the pipeline is not currently processing"
impact_areas: ["pipeline_execution"]
- impact_areas: ["pipeline_execution"]
details:
status:
state: "TERMINATED"

View file

@ -1,38 +0,0 @@
name: "Backpressured in 1min pipeline"
config:
- pipeline.id: backpressure-1m-pp
config.string: |
input { heartbeat { interval => 0.1 } }
filter { failure_injector { degrade_at => [filter] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 70 # give more seconds to make sure time is over the threshold, 1m in this case
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
indicators:
pipelines:
status: "yellow"
symptom: "1 indicator is concerning (`backpressure-1m-pp`)"
indicators:
backpressure-1m-pp:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- id: "logstash:health:pipeline:flow:worker_utilization:diagnosis:1m-blocked"
cause: "pipeline workers have been completely blocked for at least one minute"
action: "address bottleneck or add resources"
impacts:
- id: "logstash:health:pipeline:flow:impact:blocked_processing"
severity: 2
description: "the pipeline is blocked"
impact_areas: ["pipeline_execution"]
details:
status:
state: "RUNNING"
flow:
worker_utilization:
last_1_minute: 100.0

View file

@ -1,39 +0,0 @@
name: "Backpressured in 5min pipeline"
config:
- pipeline.id: backpressure-5m-pp
config.string: |
input { heartbeat { interval => 0.1 } }
filter { failure_injector { degrade_at => [filter] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 310 # give more seconds to make sure time is over the threshold, 5m in this case
expectation:
status: "red"
symptom: "1 indicator is unhealthy (`pipelines`)"
indicators:
pipelines:
status: "red"
symptom: "1 indicator is unhealthy (`backpressure-5m-pp`)"
indicators:
backpressure-5m-pp:
status: "red"
symptom: "The pipeline is unhealthy; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- id: "logstash:health:pipeline:flow:worker_utilization:diagnosis:5m-blocked"
cause: "pipeline workers have been completely blocked for at least five minutes"
action: "address bottleneck or add resources"
impacts:
- id: "logstash:health:pipeline:flow:impact:blocked_processing"
severity: 1
description: "the pipeline is blocked"
impact_areas: ["pipeline_execution"]
details:
status:
state: "RUNNING"
flow:
worker_utilization:
last_1_minute: 100.0
last_5_minutes: 100.0

View file

@ -1,67 +0,0 @@
name: "Multi pipeline"
config:
- pipeline.id: slow-start-pp-multipipeline
config.string: |
input { heartbeat {} }
filter { failure_injector { degrade_at => [register] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
- pipeline.id: normally-terminated-pp-multipipeline
config.string: |
input { generator { count => 1 } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
- pipeline.id: abnormally-terminated-pp-multipipeline
config.string: |
input { heartbeat { interval => 1 } }
filter { failure_injector { crash_at => filter } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: false
wait_seconds: 10
expectation:
status: "red"
symptom: "1 indicator is unhealthy (`pipelines`)"
indicators:
pipelines:
status: "red"
symptom: "1 indicator is unhealthy (`abnormally-terminated-pp-multipipeline`) and 2 indicators are concerning (`slow-start-pp-multipipeline`, `normally-terminated-pp-multipipeline`)"
indicators:
slow-start-pp-multipipeline:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is loading"
action: "if pipeline does not come up quickly, you may need to check the logs to see if it is stalled"
impacts:
- impact_areas: ["pipeline_execution"]
details:
status:
state: "LOADING"
normally-terminated-pp-multipipeline:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline has finished running because its inputs have been closed and events have been processed"
action: "if you expect this pipeline to run indefinitely, you will need to configure its inputs to continue receiving or fetching events"
impacts:
- impact_areas: [ "pipeline_execution" ]
details:
status:
state: "FINISHED"
abnormally-terminated-pp-multipipeline:
status: "red"
symptom: "The pipeline is unhealthy; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is not running, likely because it has encountered an error"
action: "view logs to determine the cause of abnormal pipeline shutdown"
impacts:
- description: "the pipeline is not currently processing"
impact_areas: [ "pipeline_execution" ]
details:
status:
state: "TERMINATED"

View file

@ -7,8 +7,7 @@ config:
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 5
- full_start_required: true
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
@ -22,7 +21,7 @@ expectation:
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline has finished running because its inputs have been closed and events have been processed"
action: "if you expect this pipeline to run indefinitely, you will need to configure its inputs to continue receiving or fetching events"
- action: "if you expect this pipeline to run indefinitely, you will need to configure its inputs to continue receiving or fetching events"
impacts:
- impact_areas: ["pipeline_execution"]
details:

View file

@ -8,8 +8,7 @@ config:
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: false
wait_seconds: 0
- full_start_required: false
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
@ -23,7 +22,7 @@ expectation:
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is loading"
action: "if pipeline does not come up quickly, you may need to check the logs to see if it is stalled"
- action: "if pipeline does not come up quickly, you may need to check the logs to see if it is stalled"
impacts:
- impact_areas: ["pipeline_execution"]
details:

View file

@ -4,7 +4,6 @@ from dataclasses import dataclass, field
import os
import sys
import typing
from functools import partial
from ruamel.yaml import YAML
from ruamel.yaml.scalarstring import LiteralScalarString
@ -178,15 +177,17 @@ class LinuxJobs(Jobs):
super().__init__(os=os, jdk=jdk, group_key=group_key, agent=agent)
def all_jobs(self) -> list[typing.Callable[[], JobRetValues]]:
jobs=list()
jobs.append(self.init_annotation)
jobs.append(self.java_unit_test)
jobs.append(self.ruby_unit_test)
jobs.extend(self.integration_test_parts(3))
jobs.extend(self.pq_integration_test_parts(3))
jobs.append(self.x_pack_unit_tests)
jobs.append(self.x_pack_integration)
return jobs
return [
self.init_annotation,
self.java_unit_test,
self.ruby_unit_test,
self.integration_tests_part_1,
self.integration_tests_part_2,
self.pq_integration_tests_part_1,
self.pq_integration_tests_part_2,
self.x_pack_unit_tests,
self.x_pack_integration,
]
def prepare_shell(self) -> str:
jdk_dir = f"/opt/buildkite-agent/.java/{self.jdk}"
@ -258,14 +259,17 @@ ci/unit_tests.sh ruby
retry=copy.deepcopy(ENABLED_RETRIES),
)
def integration_test_parts(self, parts) -> list[partial[JobRetValues]]:
return [partial(self.integration_tests, part=idx+1, parts=parts) for idx in range(parts)]
def integration_tests_part_1(self) -> JobRetValues:
return self.integration_tests(part=1)
def integration_tests(self, part: int, parts: int) -> JobRetValues:
step_name_human = f"Integration Tests - {part}/{parts}"
step_key = f"{self.group_key}-integration-tests-{part}-of-{parts}"
def integration_tests_part_2(self) -> JobRetValues:
return self.integration_tests(part=2)
def integration_tests(self, part: int) -> JobRetValues:
step_name_human = f"Integration Tests - {part}"
step_key = f"{self.group_key}-integration-tests-{part}"
test_command = f"""
ci/integration_tests.sh split {part-1} {parts}
ci/integration_tests.sh split {part-1}
"""
return JobRetValues(
@ -277,15 +281,18 @@ ci/integration_tests.sh split {part-1} {parts}
retry=copy.deepcopy(ENABLED_RETRIES),
)
def pq_integration_test_parts(self, parts) -> list[partial[JobRetValues]]:
return [partial(self.pq_integration_tests, part=idx+1, parts=parts) for idx in range(parts)]
def pq_integration_tests_part_1(self) -> JobRetValues:
return self.pq_integration_tests(part=1)
def pq_integration_tests(self, part: int, parts: int) -> JobRetValues:
step_name_human = f"IT Persistent Queues - {part}/{parts}"
step_key = f"{self.group_key}-it-persistent-queues-{part}-of-{parts}"
def pq_integration_tests_part_2(self) -> JobRetValues:
return self.pq_integration_tests(part=2)
def pq_integration_tests(self, part: int) -> JobRetValues:
step_name_human = f"IT Persistent Queues - {part}"
step_key = f"{self.group_key}-it-persistent-queues-{part}"
test_command = f"""
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split {part-1} {parts}
ci/integration_tests.sh split {part-1}
"""
return JobRetValues(

View file

@ -4,7 +4,7 @@ set -e
install_java() {
# TODO: let's think about regularly creating a custom image for Logstash which may align on version.yml definitions
sudo apt update && sudo apt install -y openjdk-21-jdk && sudo apt install -y openjdk-21-jre
sudo apt update && sudo apt install -y openjdk-17-jdk && sudo apt install -y openjdk-17-jre
}
install_java

View file

@ -4,13 +4,22 @@ set -e
TARGET_BRANCHES=("main")
install_java_11() {
curl -L -s "https://github.com/adoptium/temurin11-binaries/releases/download/jdk-11.0.24%2B8/OpenJDK11U-jdk_x64_linux_hotspot_11.0.24_8.tar.gz" | tar -zxf -
install_java() {
# TODO: let's think about using BK agent which has Java installed
# Current caveat is that the Logstash BK agent doesn't support docker operations
sudo apt update && sudo apt install -y openjdk-17-jdk && sudo apt install -y openjdk-17-jre
}
# Resolves the branches we are going to track
resolve_latest_branches() {
source .buildkite/scripts/snyk/resolve_stack_version.sh
for SNAPSHOT_VERSION in "${SNAPSHOT_VERSIONS[@]}"
do
IFS='.'
read -a versions <<< "$SNAPSHOT_VERSION"
version=${versions[0]}.${versions[1]}
TARGET_BRANCHES+=("$version")
done
}
# Build the specific Logstash branch to generate the Gemfile.lock file that Snyk scans
@ -33,7 +42,7 @@ download_auth_snyk() {
report() {
REMOTE_REPO_URL=$1
echo "Reporting $REMOTE_REPO_URL branch."
if [ "$REMOTE_REPO_URL" != "main" ] && [ "$REMOTE_REPO_URL" != "8.x" ]; then
if [ "$REMOTE_REPO_URL" != "main" ]; then
MAJOR_VERSION=$(echo "$REMOTE_REPO_URL"| cut -d'.' -f 1)
REMOTE_REPO_URL="$MAJOR_VERSION".latest
echo "Using '$REMOTE_REPO_URL' remote repo url."
@ -46,18 +55,13 @@ report() {
./snyk monitor --prune-repeated-subdependencies --all-projects --org=logstash --remote-repo-url="$REMOTE_REPO_URL" --target-reference="$REMOTE_REPO_URL" --detection-depth=6 --exclude=qa,tools,devtools,requirements.txt --project-tags=branch="$TARGET_BRANCH",git_head="$GIT_HEAD" || :
}
install_java
resolve_latest_branches
download_auth_snyk
# clone Logstash repo, build and report
for TARGET_BRANCH in "${TARGET_BRANCHES[@]}"
do
if [ "$TARGET_BRANCH" == "7.17" ]; then
echo "Installing and configuring JDK11."
export OLD_PATH=$PATH
install_java_11
export PATH=$PWD/jdk-11.0.24+8/bin:$PATH
fi
git reset --hard HEAD # reset if any generated files appeared
# check if target branch exists
echo "Checking out $TARGET_BRANCH branch."
@ -67,10 +71,70 @@ do
else
echo "$TARGET_BRANCH branch doesn't exist."
fi
if [ "$TARGET_BRANCH" == "7.17" ]; then
# reset state
echo "Removing JDK11 installation."
rm -rf jdk-11.0.24+8
export PATH=$OLD_PATH
fi
done
# Scan Logstash docker images and report
REPOSITORY_BASE_URL="docker.elastic.co/logstash/"
report_docker_image() {
image=$1
project_name=$2
platform=$3
echo "Reporting $image to Snyk started..."
docker pull "$image"
if [[ $platform != null ]]; then
./snyk container monitor "$image" --org=logstash --platform="$platform" --project-name="$project_name" --project-tags=version="$version" || :
else
./snyk container monitor "$image" --org=logstash --project-name="$project_name" --project-tags=version="$version" || :
fi
}
report_docker_images() {
version=$1
echo "Version value: $version"
image=$REPOSITORY_BASE_URL"logstash:$version-SNAPSHOT"
snyk_project_name="logstash-$version-SNAPSHOT"
report_docker_image "$image" "$snyk_project_name"
image=$REPOSITORY_BASE_URL"logstash-oss:$version-SNAPSHOT"
snyk_project_name="logstash-oss-$version-SNAPSHOT"
report_docker_image "$image" "$snyk_project_name"
image=$REPOSITORY_BASE_URL"logstash:$version-SNAPSHOT-arm64"
snyk_project_name="logstash-$version-SNAPSHOT-arm64"
report_docker_image "$image" "$snyk_project_name" "linux/arm64"
image=$REPOSITORY_BASE_URL"logstash:$version-SNAPSHOT-amd64"
snyk_project_name="logstash-$version-SNAPSHOT-amd64"
report_docker_image "$image" "$snyk_project_name" "linux/amd64"
image=$REPOSITORY_BASE_URL"logstash-oss:$version-SNAPSHOT-arm64"
snyk_project_name="logstash-oss-$version-SNAPSHOT-arm64"
report_docker_image "$image" "$snyk_project_name" "linux/arm64"
image=$REPOSITORY_BASE_URL"logstash-oss:$version-SNAPSHOT-amd64"
snyk_project_name="logstash-oss-$version-SNAPSHOT-amd64"
report_docker_image "$image" "$snyk_project_name" "linux/amd64"
}
resolve_version_and_report_docker_images() {
git reset --hard HEAD # reset if any generated files appeared
git checkout "$1"
# parse version (ex: 8.8.2 from 8.8 branch, or 8.9.0 from main branch)
versions_file="$PWD/versions.yml"
version=$(awk '/logstash:/ { print $2 }' "$versions_file")
report_docker_images "$version"
}
# resolve docker artifact and report
#for TARGET_BRANCH in "${TARGET_BRANCHES[@]}"
#do
# if git show-ref --quiet refs/heads/"$TARGET_BRANCH"; then
# echo "Using $TARGET_BRANCH branch for docker images."
# resolve_version_and_report_docker_images "$TARGET_BRANCH"
# else
# echo "$TARGET_BRANCH branch doesn't exist."
# fi
#done

View file

@ -6,9 +6,14 @@
set -e
VERSION_URL="https://storage.googleapis.com/artifacts-api/snapshots/branches.json"
VERSION_URL="https://raw.githubusercontent.com/elastic/logstash/main/ci/logstash_releases.json"
echo "Fetching versions from $VERSION_URL"
readarray -t TARGET_BRANCHES < <(curl --retry-all-errors --retry 5 --retry-delay 5 -fsSL $VERSION_URL | jq -r '.branches[]')
echo "${TARGET_BRANCHES[@]}"
VERSIONS=$(curl --silent $VERSION_URL)
SNAPSHOT_KEYS=$(echo "$VERSIONS" | jq -r '.snapshots | .[]')
SNAPSHOT_VERSIONS=()
while IFS= read -r line; do
SNAPSHOT_VERSIONS+=("$line")
echo "Resolved snapshot version: $line"
done <<< "$SNAPSHOT_KEYS"
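A quick sketch of what that jq filter yields; the JSON mirrors the shape of ci/logstash_releases.json and the values are examples only:
    echo '{"snapshots": {"7.current": "7.17.29-SNAPSHOT", "main": "9.1.0-SNAPSHOT"}}' \
      | jq -r '.snapshots | .[]'
    # 7.17.29-SNAPSHOT
    # 9.1.0-SNAPSHOT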


@ -1,8 +1,9 @@
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "2"
memory: "4Gi"
ephemeralStorage: "64Gi"
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-4"
diskSizeGb: 120
steps:
# reports main, previous (ex: 7.latest) and current (ex: 8.latest) release branches to Snyk


@ -2,7 +2,7 @@
env:
DEFAULT_MATRIX_OS: "windows-2022"
DEFAULT_MATRIX_JDK: "adoptiumjdk_21"
DEFAULT_MATRIX_JDK: "adoptiumjdk_17"
steps:
- input: "Test Parameters"
@ -35,14 +35,20 @@ steps:
value: "adoptiumjdk_21"
- label: "Adoptium JDK 17 (Eclipse Temurin)"
value: "adoptiumjdk_17"
- label: "Adoptium JDK 11 (Eclipse Temurin)"
value: "adoptiumjdk_11"
- label: "OpenJDK 21"
value: "openjdk_21"
- label: "OpenJDK 17"
value: "openjdk_17"
- label: "OpenJDK 11"
value: "openjdk_11"
- label: "Zulu 21"
value: "zulu_21"
- label: "Zulu 17"
value: "zulu_17"
- label: "Zulu 11"
value: "zulu_11"
- wait: ~
if: build.source != "schedule" && build.source != "trigger_job"


@ -1,42 +0,0 @@
.SILENT:
MAKEFLAGS += --no-print-directory
.SHELLFLAGS = -euc
SHELL = /bin/bash
#######################
## Templates
#######################
## Mergify template
define MERGIFY_TMPL
- name: backport patches to $(BRANCH) branch
conditions:
- merged
- base=main
- label=$(BACKPORT_LABEL)
actions:
backport:
branches:
- "$(BRANCH)"
endef
# Add mergify entry for the new backport label
.PHONY: mergify
export MERGIFY_TMPL
mergify: BACKPORT_LABEL=$${BACKPORT_LABEL} BRANCH=$${BRANCH} PUSH_BRANCH=$${PUSH_BRANCH}
mergify:
@echo ">> mergify"
echo "$$MERGIFY_TMPL" >> ../.mergify.yml
git add ../.mergify.yml
git status
if [ ! -z "$$(git status --porcelain)" ]; then \
git commit -m "mergify: add $(BACKPORT_LABEL) rule"; \
git push origin $(PUSH_BRANCH) ; \
fi
# Create GitHub backport label
.PHONY: backport-label
backport-label: BACKPORT_LABEL=$${BACKPORT_LABEL}
backport-label:
@echo ">> backport-label"
gh label create $(BACKPORT_LABEL) --description "Automated backport with mergify" --color 0052cc --force
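These targets are driven entirely by make variables; a usage sketch with illustrative values, following the same invocation pattern the release-branch workflow uses later in this compare:
    # append a Mergify backport rule and push it on the lock-file branch
    make -C .ci mergify BACKPORT_LABEL=backport-8.16 BRANCH=8.16 PUSH_BRANCH=update_lock_1700000000
    # create the matching GitHub label
    make -C .ci backport-label BACKPORT_LABEL=backport-8.16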


@ -1,2 +1,2 @@
LS_BUILD_JAVA=adoptiumjdk_21
LS_RUNTIME_JAVA=adoptiumjdk_21
LS_BUILD_JAVA=adoptiumjdk_17
LS_RUNTIME_JAVA=adoptiumjdk_17


@ -21,6 +21,10 @@ analyze:
type: gradle
target: 'dependencies-report:'
path: .
- name: ingest-converter
type: gradle
target: 'ingest-converter:'
path: .
- name: logstash-core
type: gradle
target: 'logstash-core:'


@ -21,5 +21,6 @@ to reproduce locally
**Failure history**:
<!--
Link to build stats and possible indication of when this started failing and how often it fails
<https://build-stats.elastic.co/app/kibana>
-->
**Failure excerpt**:


@ -1,18 +0,0 @@
---
version: 2
updates:
- package-ecosystem: "github-actions"
directories:
- '/'
- '/.github/actions/*'
schedule:
interval: "weekly"
day: "sunday"
time: "22:00"
reviewers:
- "elastic/observablt-ci"
- "elastic/observablt-ci-contractors"
groups:
github-actions:
patterns:
- "*"


@ -0,0 +1,33 @@
name: Docs Preview Link
on:
pull_request_target:
types: [opened, synchronize]
paths:
- docs/**
- docsk8s/**
jobs:
docs-preview-link:
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- id: wait-for-status
uses: autotelic/action-wait-for-status-check@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
owner: elastic
# when running with on: pull_request_target we get the PR base ref by default
ref: ${{ github.event.pull_request.head.sha }}
statusName: "buildkite/docs-build-pr"
# https://elasticsearch-ci.elastic.co/job/elastic+logstash+pull-request+build-docs
# usually finishes in ~ 20 minutes
timeoutSeconds: 900
intervalSeconds: 30
- name: Add Docs Preview link in PR Comment
if: steps.wait-for-status.outputs.state == 'success'
uses: thollander/actions-comment-pull-request@v1
with:
message: |
:page_with_curl: **DOCS PREVIEW** :sparkles: https://logstash_bk_${{ github.event.number }}.docs-preview.app.elstc.co/diff
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@ -1,22 +0,0 @@
name: Backport to active branches
on:
pull_request_target:
types: [closed]
branches:
- main
permissions:
pull-requests: write
contents: read
jobs:
backport:
# Only run if the PR was merged (not just closed) and has one of the backport labels
if: |
github.event.pull_request.merged == true &&
contains(toJSON(github.event.pull_request.labels.*.name), 'backport-active-')
runs-on: ubuntu-latest
steps:
- uses: elastic/oblt-actions/github/backport-active@v1


@ -1,19 +0,0 @@
name: docs-build
on:
push:
branches:
- main
pull_request_target: ~
merge_group: ~
jobs:
docs-preview:
uses: elastic/docs-builder/.github/workflows/preview-build.yml@main
with:
path-pattern: docs/**
permissions:
deployments: write
id-token: write
contents: read
pull-requests: read


@ -1,14 +0,0 @@
name: docs-cleanup
on:
pull_request_target:
types:
- closed
jobs:
docs-preview:
uses: elastic/docs-builder/.github/workflows/preview-cleanup.yml@main
permissions:
contents: none
id-token: write
deployments: write


@ -1,23 +0,0 @@
name: mergify backport labels copier
on:
pull_request:
types:
- opened
permissions:
contents: read
jobs:
mergify-backport-labels-copier:
runs-on: ubuntu-latest
if: startsWith(github.head_ref, 'mergify/bp/')
permissions:
# Add GH labels
pull-requests: write
# See https://github.com/cli/cli/issues/6274
repository-projects: read
steps:
- uses: elastic/oblt-actions/mergify/labels-copier@v1
with:
excluded-labels-regex: "^backport-*"

.github/workflows/pr_backporter.yml

@ -0,0 +1,49 @@
name: Backport PR to another branch
on:
issue_comment:
types: [created]
permissions:
pull-requests: write
contents: write
jobs:
pr_commented:
name: PR comment
if: github.event.issue.pull_request
runs-on: ubuntu-latest
steps:
- uses: actions-ecosystem/action-regex-match@v2
id: regex-match
with:
text: ${{ github.event.comment.body }}
regex: '^@logstashmachine backport (main|[x0-9\.]+)$'
- if: ${{ steps.regex-match.outputs.group1 == '' }}
run: exit 1
- name: Fetch logstash-core team member list
uses: tspascoal/get-user-teams-membership@v1
id: checkUserMember
with:
username: ${{ github.actor }}
organization: elastic
team: logstash
GITHUB_TOKEN: ${{ secrets.READ_ORG_SECRET_JSVD }}
- name: Is user not a core team member?
if: ${{ steps.checkUserMember.outputs.isTeamMember == 'false' }}
run: exit 1
- name: checkout repo content
uses: actions/checkout@v2
with:
fetch-depth: 0
ref: 'main'
- run: git config --global user.email "43502315+logstashmachine@users.noreply.github.com"
- run: git config --global user.name "logstashmachine"
- name: setup python
uses: actions/setup-python@v2
with:
python-version: 3.8
- run: |
mkdir ~/.elastic && echo ${{ github.token }} >> ~/.elastic/github.token
- run: pip install requests
- name: run backport
run: python devtools/backport ${{ steps.regex-match.outputs.group1 }} ${{ github.event.issue.number }} --remote=origin --yes
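In effect, a comment from a logstash team member on a merged PR drives the script invocation; a sketch of the flow (the PR number is a placeholder):
    # comment body matching the regex above:
    #   @logstashmachine backport 8.16
    # leads to roughly:
    python devtools/backport 8.16 <PR_NUMBER> --remote=origin --yes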


@ -1,18 +0,0 @@
name: pre-commit
on:
pull_request:
push:
branches:
- main
- 8.*
- 9.*
permissions:
contents: read
jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: elastic/oblt-actions/pre-commit@v1


@ -25,13 +25,9 @@ jobs:
version_bumper:
name: Bump versions
runs-on: ubuntu-latest
env:
INPUTS_BRANCH: "${{ inputs.branch }}"
INPUTS_BUMP: "${{ inputs.bump }}"
BACKPORT_LABEL: "backport-${{ inputs.branch }}"
steps:
- name: Fetch logstash-core team member list
uses: tspascoal/get-user-teams-membership@57e9f42acd78f4d0f496b3be4368fc5f62696662 #v3.0.0
uses: tspascoal/get-user-teams-membership@v1
with:
username: ${{ github.actor }}
organization: elastic
@ -41,14 +37,14 @@ jobs:
if: ${{ steps.checkUserMember.outputs.isTeamMember == 'false' }}
run: exit 1
- name: checkout repo content
uses: actions/checkout@v4
uses: actions/checkout@v2
with:
fetch-depth: 0
ref: ${{ env.INPUTS_BRANCH }}
ref: ${{ github.event.inputs.branch }}
- run: git config --global user.email "43502315+logstashmachine@users.noreply.github.com"
- run: git config --global user.name "logstashmachine"
- run: ./gradlew clean installDefaultGems
- run: ./vendor/jruby/bin/jruby -S bundle update --all --${{ env.INPUTS_BUMP }} --strict
- run: ./vendor/jruby/bin/jruby -S bundle update --all --${{ github.event.inputs.bump }} --strict
- run: mv Gemfile.lock Gemfile.jruby-*.lock.release
- run: echo "T=$(date +%s)" >> $GITHUB_ENV
- run: echo "BRANCH=update_lock_${T}" >> $GITHUB_ENV
@ -57,21 +53,8 @@ jobs:
git add .
git status
if [[ -z $(git status --porcelain) ]]; then echo "No changes. We're done."; exit 0; fi
git commit -m "Update ${{ env.INPUTS_BUMP }} plugin versions in gemfile lock" -a
git commit -m "Update ${{ github.event.inputs.bump }} plugin versions in gemfile lock" -a
git push origin $BRANCH
- name: Update mergify (minor only)
if: ${{ inputs.bump == 'minor' }}
continue-on-error: true
run: make -C .ci mergify BACKPORT_LABEL=$BACKPORT_LABEL BRANCH=$INPUTS_BRANCH PUSH_BRANCH=$BRANCH
- name: Create Pull Request
run: |
curl -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -X POST -d "{\"title\": \"bump lock file for ${{ env.INPUTS_BRANCH }}\",\"head\": \"${BRANCH}\",\"base\": \"${{ env.INPUTS_BRANCH }}\"}" https://api.github.com/repos/elastic/logstash/pulls
- name: Create GitHub backport label (Mergify) (minor only)
if: ${{ inputs.bump == 'minor' }}
continue-on-error: true
run: make -C .ci backport-label BACKPORT_LABEL=$BACKPORT_LABEL
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
curl -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -X POST -d "{\"title\": \"bump lock file for ${{ github.event.inputs.branch }}\",\"head\": \"${BRANCH}\",\"base\": \"${{ github.event.inputs.branch }}\"}" https://api.github.com/repos/elastic/logstash/pulls
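For a concrete sense of the templated steps above, a bump value of "patch" resolves to roughly:
    ./vendor/jruby/bin/jruby -S bundle update --all --patch --strict
    mv Gemfile.lock Gemfile.jruby-*.lock.release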


@ -1,132 +0,0 @@
commands_restrictions:
backport:
conditions:
- or:
- sender-permission>=write
- sender=github-actions[bot]
defaults:
actions:
backport:
title: "[{{ destination_branch }}] (backport #{{ number }}) {{ title }}"
assignees:
- "{{ author }}"
labels:
- "backport"
pull_request_rules:
# - name: ask to resolve conflict
# conditions:
# - conflict
# actions:
# comment:
# message: |
# This pull request is now in conflicts. Could you fix it @{{author}}? 🙏
# To fixup this pull request, you can check out it locally. See documentation: https://help.github.com/articles/checking-out-pull-requests-locally/
# ```
# git fetch upstream
# git checkout -b {{head}} upstream/{{head}}
# git merge upstream/{{base}}
# git push upstream {{head}}
# ```
- name: notify the backport policy
conditions:
- -label~=^backport
- base=main
actions:
comment:
message: |
This pull request does not have a backport label. Could you fix it @{{author}}? 🙏
To fixup this pull request, you need to add the backport labels for the needed
branches, such as:
* `backport-8./d` is the label to automatically backport to the `8./d` branch. `/d` is the digit.
* If no backport is necessary, please add the `backport-skip` label
- name: remove backport-skip label
conditions:
- label~=^backport-\d
actions:
label:
remove:
- backport-skip
- name: notify the backport has not been merged yet
conditions:
- -merged
- -closed
- author=mergify[bot]
- "#check-success>0"
- schedule=Mon-Mon 06:00-10:00[Europe/Paris]
actions:
comment:
message: |
This pull request has not been merged yet. Could you please review and merge it @{{ assignee | join(', @') }}? 🙏
- name: backport patches to 8.16 branch
conditions:
- merged
- base=main
- label=backport-8.16
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "8.16"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
- name: backport patches to 8.17 branch
conditions:
- merged
- base=main
- label=backport-8.17
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "8.17"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
- name: backport patches to 8.18 branch
conditions:
- merged
- base=main
- label=backport-8.18
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "8.18"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
- name: backport patches to 8.19 branch
conditions:
- merged
- base=main
- label=backport-8.19
actions:
backport:
branches:
- "8.19"
- name: backport patches to 9.0 branch
conditions:
- merged
- base=main
- label=backport-9.0
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "9.0"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"


@ -1,6 +0,0 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
hooks:
- id: check-merge-conflict
args: ['--assume-in-merge']

File diff suppressed because it is too large.


@ -13,6 +13,7 @@ gem "ruby-maven-libs", "~> 3", ">= 3.9.6.1"
gem "logstash-output-elasticsearch", ">= 11.14.0"
gem "polyglot", require: false
gem "treetop", require: false
gem "faraday", "~> 1", :require => false # due elasticsearch-transport (elastic-transport) depending faraday '~> 1'
gem "minitar", "~> 1", :group => :build
gem "childprocess", "~> 4", :group => :build
gem "fpm", "~> 1", ">= 1.14.1", :group => :build # compound due to bugfix https://github.com/jordansissel/fpm/pull/1856
@ -26,8 +27,6 @@ gem "stud", "~> 0.0.22", :group => :build
gem "fileutils", "~> 1.7"
gem "rubocop", :group => :development
# rubocop-ast 1.43.0 carries a dep on `prism` which requires native c extensions
gem 'rubocop-ast', '= 1.42.0', :group => :development
gem "belzebuth", :group => :development
gem "benchmark-ips", :group => :development
gem "ci_reporter_rspec", "~> 1", :group => :development
@ -45,5 +44,3 @@ gem "date", "= 3.3.3"
gem "thwait"
gem "bigdecimal", "~> 3.1"
gem "psych", "5.2.2"
gem "cgi", "0.3.7" # Pins until a new jruby version with updated cgi is released
gem "uri", "0.12.3" # Pins until a new jruby version with updated cgi is released

File diff suppressed because it is too large.


@ -20,6 +20,7 @@ supported platforms, from [downloads page](https://www.elastic.co/downloads/logs
- [Logstash Forum](https://discuss.elastic.co/c/logstash)
- [Logstash Documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
- [#logstash on freenode IRC](https://webchat.freenode.net/?channels=logstash)
- [Logstash Product Information](https://www.elastic.co/products/logstash)
- [Elastic Support](https://www.elastic.co/subscriptions)

bin/ingest-convert.bat

@ -0,0 +1,10 @@
@echo off
setlocal enabledelayedexpansion
cd /d "%~dp0\.."
for /f %%i in ('cd') do set RESULT=%%i
"%JAVACMD%" -cp "!RESULT!\tools\ingest-converter\build\libs\ingest-converter.jar;*" ^
org.logstash.ingest.Pipeline %*
endlocal

bin/ingest-convert.sh

@ -0,0 +1,4 @@
#!/usr/bin/env bash
java -cp "$(cd `dirname $0`/..; pwd)"'/tools/ingest-converter/build/libs/ingest-converter.jar:*' \
org.logstash.ingest.Pipeline "$@"


@ -6,7 +6,7 @@ set params='%*'
if "%1" == "-V" goto version
if "%1" == "--version" goto version
1>&2 (call "%~dp0setup.bat") || exit /b 1
call "%~dp0setup.bat" || exit /b 1
if errorlevel 1 (
if not defined nopauseonerror (
pause


@ -186,8 +186,8 @@ setup_vendored_jruby() {
}
setup() {
>&2 setup_java
>&2 setup_vendored_jruby
setup_java
setup_vendored_jruby
}
ruby_exec() {


@ -16,7 +16,7 @@ for %%i in ("%LS_HOME%\logstash-core\lib\jars\*.jar") do (
call :concat "%%i"
)
"%JAVACMD%" %JAVA_OPTS% org.logstash.ackedqueue.PqCheck %*
"%JAVACMD%" "%JAVA_OPTS%" -cp "%CLASSPATH%" org.logstash.ackedqueue.PqCheck %*
:concat
IF not defined CLASSPATH (


@ -16,7 +16,7 @@ for %%i in ("%LS_HOME%\logstash-core\lib\jars\*.jar") do (
call :concat "%%i"
)
"%JAVACMD%" %JAVA_OPTS% org.logstash.ackedqueue.PqRepair %*
"%JAVACMD%" %JAVA_OPTS% -cp "%CLASSPATH%" org.logstash.ackedqueue.PqRepair %*
:concat
IF not defined CLASSPATH (


@ -42,7 +42,7 @@ if defined LS_JAVA_HOME (
)
if not exist "%JAVACMD%" (
echo could not find java; set LS_JAVA_HOME or ensure java is in PATH 1>&2
echo could not find java; set JAVA_HOME or ensure java is in PATH 1>&2
exit /b 1
)


@ -146,6 +146,7 @@ subprojects {
}
version = versionMap['logstash-core']
String artifactVersionsApi = "https://artifacts-api.elastic.co/v1/versions"
tasks.register("configureArchitecture") {
String arch = System.properties['os.arch']
@ -171,28 +172,33 @@ tasks.register("configureArtifactInfo") {
description "Set the url to download stack artifacts for select stack version"
doLast {
def splitVersion = version.split('\\.')
int major = splitVersion[0].toInteger()
int minor = splitVersion[1].toInteger()
String branch = "${major}.${minor}"
String fallbackMajorX = "${major}.x"
boolean isFallBackPreviousMajor = minor - 1 < 0
String fallbackBranch = isFallBackPreviousMajor ? "${major-1}.x" : "${major}.${minor-1}"
def qualifiedVersion = ""
for (b in [branch, fallbackMajorX, fallbackBranch]) {
def url = "https://storage.googleapis.com/artifacts-api/snapshots/${b}.json"
try {
def snapshotInfo = new JsonSlurper().parseText(url.toURL().text)
qualifiedVersion = snapshotInfo.version
println "ArtifactInfo version: ${qualifiedVersion}"
break
} catch (Exception e) {
println "Failed to fetch branch ${branch} from ${url}: ${e.message}"
}
def versionQualifier = System.getenv('VERSION_QUALIFIER')
if (versionQualifier) {
version = "$version-$versionQualifier"
}
project.ext.set("artifactApiVersion", qualifiedVersion)
boolean isReleaseBuild = System.getenv('RELEASE') == "1" || versionQualifier
String apiResponse = artifactVersionsApi.toURL().text
def dlVersions = new JsonSlurper().parseText(apiResponse)
String qualifiedVersion = dlVersions['versions'].grep(isReleaseBuild ? ~/^${version}$/ : ~/^${version}-SNAPSHOT/)[0]
if (qualifiedVersion == null) {
if (!isReleaseBuild) {
project.ext.set("useProjectSpecificArtifactSnapshotUrl", true)
project.ext.set("stackArtifactSuffix", "${version}-SNAPSHOT")
return
}
throw new GradleException("could not find the current artifact from the artifact-api ${artifactVersionsApi} for ${version}")
}
// find latest reference to last build
String buildsListApi = "${artifactVersionsApi}/${qualifiedVersion}/builds/"
apiResponse = buildsListApi.toURL().text
def dlBuilds = new JsonSlurper().parseText(apiResponse)
def stackBuildVersion = dlBuilds["builds"][0]
project.ext.set("artifactApiVersionedBuildUrl", "${artifactVersionsApi}/${qualifiedVersion}/builds/${stackBuildVersion}")
project.ext.set("stackArtifactSuffix", qualifiedVersion)
project.ext.set("useProjectSpecificArtifactSnapshotUrl", false)
}
}
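The branch snapshot lookup shown above can be tried from a shell; a sketch using the same URL shape, with an illustrative branch value:
    curl -fsSL "https://storage.googleapis.com/artifacts-api/snapshots/8.16.json" | jq -r '.version'
    # prints the qualified version the task records as artifactApiVersion (for example 8.16.6-SNAPSHOT)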
@ -328,6 +334,7 @@ tasks.register("assembleTarDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -344,6 +351,7 @@ tasks.register("assembleOssTarDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -358,6 +366,7 @@ tasks.register("assembleZipDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -374,6 +383,7 @@ tasks.register("assembleOssZipDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -408,7 +418,7 @@ def qaBuildPath = "${buildDir}/qa/integration"
def qaVendorPath = "${qaBuildPath}/vendor"
tasks.register("installIntegrationTestGems") {
dependsOn assembleTarDistribution
dependsOn unpackTarDistribution
def gemfilePath = file("${projectDir}/qa/integration/Gemfile")
inputs.files gemfilePath
inputs.files file("${projectDir}/qa/integration/integration_tests.gemspec")
@ -431,13 +441,23 @@ tasks.register("downloadFilebeat") {
doLast {
download {
String beatsVersion = project.ext.get("artifactApiVersion")
String downloadedFilebeatName = "filebeat-${beatsVersion}-${project.ext.get("beatsArchitecture")}"
String beatVersion = project.ext.get("stackArtifactSuffix")
String downloadedFilebeatName = "filebeat-${beatVersion}-${project.ext.get("beatsArchitecture")}"
project.ext.set("unpackedFilebeatName", downloadedFilebeatName)
def res = SnapshotArtifactURLs.packageUrls("beats", beatsVersion, downloadedFilebeatName)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("filebeatDownloadLocation", "${projectDir}/build/${downloadedFilebeatName}.tar.gz")
if (project.ext.get("useProjectSpecificArtifactSnapshotUrl")) {
def res = SnapshotArtifactURLs.packageUrls("beats", beatVersion, downloadedFilebeatName)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("filebeatDownloadLocation", "${projectDir}/build/${downloadedFilebeatName}.tar.gz")
} else {
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/beats/packages/${downloadedFilebeatName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: buildUrls["package"]["url"])
project.ext.set("filebeatDownloadLocation", "${projectDir}/build/${downloadedFilebeatName}.tar.gz")
}
src project.ext.filebeatSnapshotUrl
onlyIfNewer true
@ -473,12 +493,20 @@ tasks.register("checkEsSHA") {
description "Download ES version remote's fingerprint file"
doLast {
String esVersion = project.ext.get("artifactApiVersion")
String esVersion = project.ext.get("stackArtifactSuffix")
String downloadedElasticsearchName = "elasticsearch-${esVersion}-${project.ext.get("esArchitecture")}"
String remoteSHA
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
remoteSHA = res.packageShaUrl
if (project.ext.get("useProjectSpecificArtifactSnapshotUrl")) {
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
remoteSHA = res.packageShaUrl
} else {
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/elasticsearch/packages/${downloadedElasticsearchName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
remoteSHA = buildUrls.package.sha_url.toURL().text
}
def localESArchive = new File("${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
if (localESArchive.exists()) {
@ -512,14 +540,25 @@ tasks.register("downloadEs") {
doLast {
download {
String esVersion = project.ext.get("artifactApiVersion")
String esVersion = project.ext.get("stackArtifactSuffix")
String downloadedElasticsearchName = "elasticsearch-${esVersion}-${project.ext.get("esArchitecture")}"
project.ext.set("unpackedElasticsearchName", "elasticsearch-${esVersion}")
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("elasticsearchDownloadLocation", "${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
if (project.ext.get("useProjectSpecificArtifactSnapshotUrl")) {
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("elasticsearchDownloadLocation", "${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
} else {
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/elasticsearch/packages/${downloadedElasticsearchName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: buildUrls["package"]["url"])
project.ext.set("elasticsearchDownloadLocation", "${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
}
src project.ext.elasticsearchSnapshotURL
onlyIfNewer true
@ -559,8 +598,7 @@ project(":logstash-integration-tests") {
systemProperty 'org.logstash.integration.specs', rubyIntegrationSpecs
environment "FEATURE_FLAG", System.getenv('FEATURE_FLAG')
workingDir integrationTestPwd
dependsOn installIntegrationTestGems
dependsOn copyProductionLog4jConfiguration
dependsOn = [installIntegrationTestGems, copyProductionLog4jConfiguration]
}
}
@ -703,66 +741,46 @@ class JDKDetails {
return createElasticCatalogDownloadUrl()
}
// throws an error iff local version in versions.yml doesn't match the latest from JVM catalog.
void checkLocalVersionMatchingLatest() {
// retrieve the metadata from remote
private String createElasticCatalogDownloadUrl() {
// Ask details to catalog https://jvm-catalog.elastic.co/jdk and return the url to download the JDK
// arch x86_64 never used, only aarch64 if macos
def url = "https://jvm-catalog.elastic.co/jdk/latest_adoptiumjdk_${major}_${osName}"
// Append the cpu's arch only if Mac on aarch64, all the other OSes doesn't have CPU extension
if (arch == "aarch64") {
url += "_${arch}"
}
println "Retrieving JDK from catalog..."
def catalogMetadataUrl = URI.create(url).toURL()
def catalogConnection = catalogMetadataUrl.openConnection()
catalogConnection.requestMethod = 'GET'
assert catalogConnection.responseCode == 200
def metadataRetrieved = catalogConnection.content.text
def catalogMetadata = new JsonSlurper().parseText(metadataRetrieved)
if (catalogMetadata.version != revision || catalogMetadata.revision != build) {
throw new GradleException("Found new jdk version. Please update version.yml to ${catalogMetadata.version} build ${catalogMetadata.revision}")
}
}
private String createElasticCatalogDownloadUrl() {
// Ask details to catalog https://jvm-catalog.elastic.co/jdk and return the url to download the JDK
// arch x86_64 is default, aarch64 if macos or linux
def url = "https://jvm-catalog.elastic.co/jdk/adoptiumjdk-${revision}+${build}-${osName}"
// Append the cpu's arch only if not x86_64, which is the default
if (arch == "aarch64") {
url += "-${arch}"
}
println "Retrieving JDK from catalog..."
def catalogMetadataUrl = URI.create(url).toURL()
def catalogConnection = catalogMetadataUrl.openConnection()
catalogConnection.requestMethod = 'GET'
if (catalogConnection.responseCode != 200) {
println "Can't find adoptiumjdk ${revision} for ${osName} on Elastic JVM catalog"
throw new GradleException("JVM not present on catalog")
}
def metadataRetrieved = catalogConnection.content.text
println "Retrieved!"
def catalogMetadata = new JsonSlurper().parseText(metadataRetrieved)
validateMetadata(catalogMetadata)
return catalogMetadata.url
}
//Verify that the artifact metadata correspond to the request, if not throws an error
private void validateMetadata(Map metadata) {
if (metadata.version != revision) {
throw new GradleException("Expected to retrieve a JDK for version ${revision} but received: ${metadata.version}")
}
if (!isSameArchitecture(metadata.architecture)) {
throw new GradleException("Expected to retrieve a JDK for architecture ${arch} but received: ${metadata.architecture}")
private String createAdoptDownloadUrl() {
String releaseName = major > 8 ?
"jdk-${revision}+${build}" :
"jdk${revision}u${build}"
String vendorOsName = vendorOsName(osName)
switch (vendor) {
case "adoptium":
return "https://api.adoptium.net/v3/binary/version/${releaseName}/${vendorOsName}/${arch}/jdk/hotspot/normal/adoptium"
default:
throw RuntimeException("Can't handle vendor: ${vendor}")
}
}
private boolean isSameArchitecture(String metadataArch) {
if (arch == 'x64') {
return metadataArch == 'x86_64'
}
return metadataArch == arch
private String vendorOsName(String osName) {
if (osName == "darwin")
return "mac"
return osName
}
private String parseJdkArchitecture(String jdkArch) {
@ -774,22 +792,16 @@ class JDKDetails {
return "aarch64"
break
default:
throw new GradleException("Can't handle CPU architechture: ${jdkArch}")
throw RuntimeException("Can't handle CPU architechture: ${jdkArch}")
}
}
}
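Both JVM catalog lookups above have simple HTTP shapes; a sketch with illustrative version numbers (21 and 21.0.4+7 are assumptions, not values pinned in versions.yml):
    # latest adoptiumjdk build for a major version and OS
    curl -fsS "https://jvm-catalog.elastic.co/jdk/latest_adoptiumjdk_21_linux" | jq -r '.version, .revision'
    # a specific revision+build, as used to resolve the download URL
    curl -fsS "https://jvm-catalog.elastic.co/jdk/adoptiumjdk-21.0.4+7-linux" | jq -r '.url'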
tasks.register("lint") {
description = "Lint Ruby source files. Use -PrubySource=file1.rb,file2.rb to specify files"
// Calls rake's 'lint' task
dependsOn installDevelopmentGems
doLast {
if (project.hasProperty("rubySource")) {
// Split the comma-separated files and pass them as separate arguments
def files = project.property("rubySource").split(",")
rake(projectDir, buildDir, "lint:report", *files)
} else {
rake(projectDir, buildDir, "lint:report")
}
rake(projectDir, buildDir, 'lint:report')
}
}
@ -829,15 +841,6 @@ tasks.register("downloadJdk", Download) {
}
}
tasks.register("checkNewJdkVersion") {
def versionYml = new Yaml().load(new File("$projectDir/versions.yml").text)
// use Linux x86_64 as canary platform
def jdkDetails = new JDKDetails(versionYml, "linux", "x86_64")
// throws Gradle exception if local and remote doesn't match
jdkDetails.checkLocalVersionMatchingLatest()
}
tasks.register("deleteLocalJdk", Delete) {
// CLI project properties: -Pjdk_bundle_os=[windows|linux|darwin]
String osName = selectOsType()


@ -33,7 +33,6 @@ spec:
- resource:logstash-windows-jdk-matrix-pipeline
- resource:logstash-benchmark-pipeline
- resource:logstash-health-report-tests-pipeline
- resource:logstash-jdk-availability-check-pipeline
# ***********************************
# Declare serverless IT pipeline
@ -142,7 +141,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -185,7 +184,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -236,7 +235,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -294,7 +293,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -345,7 +344,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -388,7 +387,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -407,7 +406,6 @@ spec:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
@ -439,7 +437,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -455,8 +453,7 @@ spec:
build_pull_requests: false
build_tags: false
trigger_mode: code
filter_condition: >-
build.branch !~ /^backport.*$/ && build.branch !~ /^mergify\/bp\/.*$/
filter_condition: 'build.branch !~ /^backport.*$/'
filter_enabled: true
cancel_intermediate_builds: false
skip_intermediate_builds: false
@ -495,7 +492,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -537,7 +534,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -617,7 +614,7 @@ spec:
kind: Pipeline
metadata:
name: logstash-benchmark-pipeline
description: ':running: The Benchmark pipeline for snapshot version'
description: ':logstash: The Benchmark pipeline'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/benchmark_pipeline.yml"
@ -648,54 +645,6 @@ spec:
# SECTION END: Benchmark pipeline
# *******************************
# ***********************************
# SECTION START: Benchmark Marathon
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-benchmark-marathon-pipeline
description: Buildkite pipeline for benchmarking multi-version
links:
- title: 'Logstash Benchmark Marathon'
url: https://buildkite.com/elastic/logstash-benchmark-marathon-pipeline
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: logstash-benchmark-marathon-pipeline
description: ':running: The Benchmark Marathon for multi-version'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/benchmark_marathon_pipeline.yml"
maximum_timeout_in_minutes: 480
provider_settings:
trigger_mode: none # don't trigger jobs from github activity
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'false'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
# *******************************
# SECTION END: Benchmark Marathon
# *******************************
# ***********************************
# Declare Health Report Tests pipeline
# ***********************************
@ -722,7 +671,7 @@ spec:
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/health_report_tests_pipeline.yml"
maximum_timeout_in_minutes: 30 # usually tests last max ~17mins
maximum_timeout_in_minutes: 60
provider_settings:
trigger_mode: none # don't trigger jobs from github activity
env:
@ -747,59 +696,4 @@ spec:
# *******************************
# SECTION END: Health Report Tests pipeline
# *******************************
# ***********************************
# Declare JDK check pipeline
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-jdk-availability-check-pipeline
description: ":logstash: check availability of new JDK version"
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: logstash-jdk-availability-check-pipeline
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/jdk_availability_check_pipeline.yml"
maximum_timeout_in_minutes: 10
provider_settings:
trigger_mode: none # don't trigger jobs from github activity
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
schedules:
Weekly JDK availability check (main):
branch: main
cronline: 0 2 * * 1 # every Monday@2AM UTC
message: Weekly trigger of JDK update availability pipeline per branch
env:
PIPELINES_TO_TRIGGER: 'logstash-jdk-availability-check-pipeline'
Weekly JDK availability check (8.x):
branch: 8.x
cronline: 0 2 * * 1 # every Monday@2AM UTC
message: Weekly trigger of JDK update availability pipeline per branch
env:
PIPELINES_TO_TRIGGER: 'logstash-jdk-availability-check-pipeline'
# *******************************
# SECTION END: JDK check pipeline
# *******************************


@ -19,7 +19,7 @@ function get_package_type {
# uses at least 1g of memory, If we don't do this we can get OOM issues when
# installing gems. See https://github.com/elastic/logstash/issues/5179
export JRUBY_OPTS="-J-Xmx1g"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.console=plain -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export OSS=true
if [ -n "$BUILD_JAVA_HOME" ]; then

ci/branches.json

@ -0,0 +1,14 @@
{
"notice": "This file is not maintained outside of the main branch and should only be used for tooling.",
"branches": [
{
"branch": "main"
},
{
"branch": "8.15"
},
{
"branch": "7.17"
}
]
}


@ -1,7 +0,0 @@
#!/usr/bin/env bash
set -eo pipefail
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
echo "Checking local JDK version against latest remote from JVM catalog"
./gradlew checkNewJdkVersion


@ -6,7 +6,7 @@ set -x
# uses at least 1g of memory, If we don't do this we can get OOM issues when
# installing gems. See https://github.com/elastic/logstash/issues/5179
export JRUBY_OPTS="-J-Xmx1g"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.console=plain -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
if [ -n "$BUILD_JAVA_HOME" ]; then
GRADLE_OPTS="$GRADLE_OPTS -Dorg.gradle.java.home=$BUILD_JAVA_HOME"
@ -15,6 +15,7 @@ fi
# Can run either a specific flavor, or all flavors -
# eg `ci/acceptance_tests.sh oss` will run tests for open source container
# `ci/acceptance_tests.sh full` will run tests for the default container
# `ci/acceptance_tests.sh ubi8` will run tests for the ubi8 based container
# `ci/acceptance_tests.sh wolfi` will run tests for the wolfi based container
# `ci/acceptance_tests.sh` will run tests for all containers
SELECTED_TEST_SUITE=$1
@ -48,13 +49,23 @@ if [[ $SELECTED_TEST_SUITE == "oss" ]]; then
elif [[ $SELECTED_TEST_SUITE == "full" ]]; then
echo "--- Building $SELECTED_TEST_SUITE docker images"
cd $LS_HOME
rake artifact:build_docker_full
rake artifact:docker
echo "--- Acceptance: Installing dependencies"
cd $QA_DIR
bundle install
echo "--- Acceptance: Running the tests"
bundle exec rspec docker/spec/full/*_spec.rb
elif [[ $SELECTED_TEST_SUITE == "ubi8" ]]; then
echo "--- Building $SELECTED_TEST_SUITE docker images"
cd $LS_HOME
rake artifact:docker_ubi8
echo "--- Acceptance: Installing dependencies"
cd $QA_DIR
bundle install
echo "--- Acceptance: Running the tests"
bundle exec rspec docker/spec/ubi8/*_spec.rb
elif [[ $SELECTED_TEST_SUITE == "wolfi" ]]; then
echo "--- Building $SELECTED_TEST_SUITE docker images"
cd $LS_HOME


@ -19,15 +19,24 @@ if [[ $1 = "setup" ]]; then
exit 0
elif [[ $1 == "split" ]]; then
# Source shared function for splitting integration tests
source "$(dirname "${BASH_SOURCE[0]}")/partition-files.lib.sh"
cd qa/integration
glob1=(specs/*spec.rb)
glob2=(specs/**/*spec.rb)
all_specs=("${glob1[@]}" "${glob2[@]}")
index="${2:?index}"
count="${3:-2}"
specs=($(cd qa/integration; partition_files "${index}" "${count}" < <(find specs -name '*_spec.rb') ))
echo "Running integration tests partition[${index}] of ${count}: ${specs[*]}"
./gradlew runIntegrationTests -PrubyIntegrationSpecs="${specs[*]}" --console=plain
specs0=${all_specs[@]::$((${#all_specs[@]} / 2 ))}
specs1=${all_specs[@]:$((${#all_specs[@]} / 2 ))}
cd ../..
if [[ $2 == 0 ]]; then
echo "Running the first half of integration specs: $specs0"
./gradlew runIntegrationTests -PrubyIntegrationSpecs="$specs0" --console=plain
elif [[ $2 == 1 ]]; then
echo "Running the second half of integration specs: $specs1"
./gradlew runIntegrationTests -PrubyIntegrationSpecs="$specs1" --console=plain
else
echo "Error, must specify 0 or 1 after the split. For example ci/integration_tests.sh split 0"
exit 1
fi
elif [[ ! -z $@ ]]; then
echo "Running integration tests 'rspec $@'"


@ -1,15 +1,13 @@
{
"releases": {
"7.current": "7.17.28",
"8.previous": "8.17.5",
"8.current": "8.18.0"
"5.x": "5.6.16",
"6.x": "6.8.23",
"7.x": "7.17.23",
"8.x": "8.15.1"
},
"snapshots": {
"7.current": "7.17.29-SNAPSHOT",
"8.previous": "8.17.6-SNAPSHOT",
"8.current": "8.18.1-SNAPSHOT",
"8.next": "8.19.0-SNAPSHOT",
"9.next": "9.0.1-SNAPSHOT",
"main": "9.1.0-SNAPSHOT"
"7.x": "7.17.24-SNAPSHOT",
"8.x": "8.15.2-SNAPSHOT",
"main": "8.16.0-SNAPSHOT"
}
}


@ -1,78 +0,0 @@
#!/bin/bash
# partition_files returns a consistent partition of the filenames given on stdin
# Usage: partition_files <partition_index> <partition_count=2> < <(ls files)
# partition_index: the zero-based index of the partition to select `[0,partition_count)`
# partition_count: the number of partitions `[2,#files]`
partition_files() (
set -e
local files
# ensure files is consistently sorted and distinct
IFS=$'\n' read -ra files -d '' <<<"$(cat - | sort | uniq)" || true
local partition_index="${1:?}"
local partition_count="${2:?}"
_error () { >&2 echo "ERROR: ${1:-UNSPECIFIED}"; exit 1; }
# safeguard against nonsense invocations
if (( ${#files[@]} < 2 )); then
_error "#files(${#files[@]}) must be at least 2 in order to partition"
elif ( ! [[ "${partition_count}" =~ ^[0-9]+$ ]] ) || (( partition_count < 2 )) || (( partition_count > ${#files[@]})); then
_error "partition_count(${partition_count}) must be a number that is at least 2 and not greater than #files(${#files[@]})"
elif ( ! [[ "${partition_index}" =~ ^[0-9]+$ ]] ) || (( partition_index < 0 )) || (( partition_index >= $partition_count )) ; then
_error "partition_index(${partition_index}) must be a number that is greater 0 and less than partition_count(${partition_count})"
fi
# round-robbin emit those in our selected partition
for index in "${!files[@]}"; do
partition="$(( index % partition_count ))"
if (( partition == partition_index )); then
echo "${files[$index]}"
fi
done
)
if [[ "$0" == "${BASH_SOURCE[0]}" ]]; then
if [[ "$1" == "test" ]]; then
status=0
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
file_list="$( cd "${SCRIPT_DIR}"; find . -type f )"
# for any legal partitioning into N partitions, we ensure that
# the combined output of `partition_files I N` where `I` is all numbers in
# the range `[0,N)` produces no repeats and no omissions, even if the
# input list is not consistently ordered.
for n in $(seq 2 $(wc -l <<<"${file_list}")); do
result=""
for i in $(seq 0 $(( n - 1 ))); do
for file in $(partition_files $i $n <<<"$( shuf <<<"${file_list}" )"); do
result+="${file}"$'\n'
done
done
repeated="$( uniq --repeated <<<"$( sort <<<"${result}" )" )"
if (( $(printf "${repeated}" | wc -l) > 0 )); then
status=1
echo "[n=${n}]FAIL(repeated):"$'\n'"${repeated}"
fi
missing=$( comm -23 <(sort <<<"${file_list}") <( sort <<<"${result}" ) )
if (( $(printf "${missing}" | wc -l) > 0 )); then
status=1
echo "[n=${n}]FAIL(omitted):"$'\n'"${missing}"
fi
done
if (( status > 0 )); then
echo "There were failures. The input list was:"
echo "${file_list}"
fi
exit "${status}"
else
partition_files $@
fi
fi
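As the header comment documents, partition_files reads file names on stdin and prints one partition; a usage sketch with illustrative spec names:
    source ci/partition-files.lib.sh
    printf '%s\n' c_spec.rb a_spec.rb e_spec.rb b_spec.rb d_spec.rb | partition_files 0 3
    # input is sorted first (a, b, c, d, e) and dealt round-robin:
    # partition 0 -> a_spec.rb d_spec.rb; 1 -> b_spec.rb e_spec.rb; 2 -> c_spec.rb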


@ -28,9 +28,7 @@ build_logstash() {
}
index_test_data() {
curl -X POST -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/$INDEX_NAME/_bulk" \
-H 'x-elastic-product-origin: logstash' \
-H 'Content-Type: application/json' --data-binary @"$CURRENT_DIR/test_data/book.json"
curl -X POST -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/$INDEX_NAME/_bulk" -H 'Content-Type: application/json' --data-binary @"$CURRENT_DIR/test_data/book.json"
}
# $1: check function


@ -7,8 +7,7 @@ export PIPELINE_NAME='gen_es'
# update pipeline and check response code
index_pipeline() {
RESP_CODE=$(curl -s -w "%{http_code}" -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/_logstash/pipeline/$1" \
-H 'x-elastic-product-origin: logstash' -H 'Content-Type: application/json' -d "$2")
RESP_CODE=$(curl -s -w "%{http_code}" -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/_logstash/pipeline/$1" -H 'Content-Type: application/json' -d "$2")
if [[ $RESP_CODE -ge '400' ]]; then
echo "failed to update pipeline for Central Pipeline Management. Got $RESP_CODE from Elasticsearch"
exit 1
@ -35,7 +34,7 @@ check_plugin() {
}
delete_pipeline() {
curl -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' -X DELETE "$ES_ENDPOINT/_logstash/pipeline/$PIPELINE_NAME" -H 'Content-Type: application/json';
curl -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -X DELETE "$ES_ENDPOINT/_logstash/pipeline/$PIPELINE_NAME" -H 'Content-Type: application/json';
}
cpm_clean_up_and_get_result() {


@ -6,12 +6,10 @@ source ./$(dirname "$0")/common.sh
deploy_ingest_pipeline() {
PIPELINE_RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/_ingest/pipeline/integration-logstash_test.events-default" \
-H 'Content-Type: application/json' \
-H 'x-elastic-product-origin: logstash' \
--data-binary @"$CURRENT_DIR/test_data/ingest_pipeline.json")
TEMPLATE_RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/_index_template/logs-serverless-default-template" \
-H 'Content-Type: application/json' \
-H 'x-elastic-product-origin: logstash' \
--data-binary @"$CURRENT_DIR/test_data/index_template.json")
# ingest pipeline is likely be there from the last run
@ -31,7 +29,7 @@ check_integration_filter() {
}
get_doc_msg_length() {
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/logs-$INDEX_NAME.004-default/_search?size=1" -H 'x-elastic-product-origin: logstash' | jq '.hits.hits[0]._source.message | length'
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/logs-$INDEX_NAME.004-default/_search?size=1" | jq '.hits.hits[0]._source.message | length'
}
# ensure no double run of ingest pipeline


@ -9,7 +9,7 @@ check_named_index() {
}
get_data_stream_count() {
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' "$ES_ENDPOINT/logs-$INDEX_NAME.001-default/_count" | jq '.count // 0'
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/logs-$INDEX_NAME.001-default/_count" | jq '.count // 0'
}
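The '// 0' in the jq filter above supplies a default when the count field is absent or null; a quick sketch:
    echo '{"count": 42}'  | jq '.count // 0'   # -> 42
    echo '{"error": "x"}' | jq '.count // 0'   # -> 0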
compare_data_stream_count() {


@ -10,7 +10,7 @@ export EXIT_CODE="0"
create_pipeline() {
RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME" \
-H 'Content-Type: application/json' -H 'kbn-xsrf: logstash' -H 'x-elastic-product-origin: logstash' \
-H 'Content-Type: application/json' -H 'kbn-xsrf: logstash' \
--data-binary @"$CURRENT_DIR/test_data/$PIPELINE_NAME.json")
if [[ RESP_CODE -ge '400' ]]; then
@ -20,8 +20,7 @@ create_pipeline() {
}
get_pipeline() {
RESP_BODY=$(curl -s -X GET -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' \
"$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME") \
RESP_BODY=$(curl -s -X GET -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME")
SOURCE_BODY=$(cat "$CURRENT_DIR/test_data/$PIPELINE_NAME.json")
RESP_PIPELINE_NAME=$(echo "$RESP_BODY" | jq -r '.id')
@ -42,8 +41,7 @@ get_pipeline() {
}
list_pipeline() {
RESP_BODY=$(curl -s -X GET -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' \
"$KB_ENDPOINT/api/logstash/pipelines" | jq --arg name "$PIPELINE_NAME" '.pipelines[] | select(.id==$name)' )
RESP_BODY=$(curl -s -X GET -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$KB_ENDPOINT/api/logstash/pipelines" | jq --arg name "$PIPELINE_NAME" '.pipelines[] | select(.id==$name)' )
if [[ -z "$RESP_BODY" ]]; then
EXIT_CODE=$(( EXIT_CODE + 1 ))
echo "Fail to list pipeline."
@ -51,8 +49,7 @@ list_pipeline() {
}
delete_pipeline() {
RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X DELETE -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' \
"$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME" \
RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X DELETE -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME" \
-H 'Content-Type: application/json' -H 'kbn-xsrf: logstash' \
--data-binary @"$CURRENT_DIR/test_data/$PIPELINE_NAME.json")


@ -40,7 +40,7 @@ stop_metricbeat() {
}
get_monitor_count() {
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' "$ES_ENDPOINT/$INDEX_NAME/_count" | jq '.count // 0'
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/$INDEX_NAME/_count" | jq '.count // 0'
}
compare_monitor_count() {


@ -6,7 +6,7 @@ set -ex
source ./$(dirname "$0")/common.sh
get_monitor_count() {
curl -s -H "Authorization: ApiKey $LS_ROLE_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' "$ES_ENDPOINT/.monitoring-logstash-7-*/_count" | jq '.count'
curl -s -H "Authorization: ApiKey $LS_ROLE_API_KEY_ENCODED" "$ES_ENDPOINT/.monitoring-logstash-7-*/_count" | jq '.count'
}
compare_monitor_count() {


@ -16,6 +16,10 @@
##
################################################################
## GC configuration
11-13:-XX:+UseConcMarkSweepGC
11-13:-XX:CMSInitiatingOccupancyFraction=75
11-13:-XX:+UseCMSInitiatingOccupancyOnly
## Locale
# Set the locale language
@ -30,7 +34,7 @@
## basic
# set the I/O temp directory
#-Djava.io.tmpdir=${HOME}
#-Djava.io.tmpdir=$HOME
# set to headless, just in case
-Djava.awt.headless=true
@ -55,7 +59,11 @@
#-XX:HeapDumpPath=${LOGSTASH_HOME}/heapdump.hprof
## GC logging
#-Xlog:gc*,gc+age=trace,safepoint:file=${LS_GC_LOG_FILE}:utctime,pid,tags:filecount=32,filesize=64m
#-Xlog:gc*,gc+age=trace,safepoint:file=@loggc@:utctime,pid,tags:filecount=32,filesize=64m
# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${LS_GC_LOG_FILE}
# Entropy source for randomness
-Djava.security.egd=file:/dev/urandom
@ -71,11 +79,11 @@
# text values with sizes less than or equal to this limit will be treated as invalid.
# This value should be higher than `logstash.jackson.stream-read-constraints.max-number-length`.
# The jackson library defaults to 20000000 or 20MB, whereas Logstash defaults to 200MB or 200000000 characters.
#-Dlogstash.jackson.stream-read-constraints.max-string-length=200000000
-Dlogstash.jackson.stream-read-constraints.max-string-length=200000000
#
# Sets the maximum number length (in chars or bytes, depending on input context).
# The jackson library defaults to 1000, whereas Logstash defaults to 10000.
#-Dlogstash.jackson.stream-read-constraints.max-number-length=10000
-Dlogstash.jackson.stream-read-constraints.max-number-length=10000
#
# Sets the maximum nesting depth. The depth is a count of objects and arrays that have not
# been closed, `{` and `[` respectively.


@ -181,6 +181,38 @@
#
# api.auth.basic.password_policy.mode: WARN
#
# ------------ Module Settings ---------------
# Define modules here. Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
# - name: MODULE_NAME
# var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
# var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have an label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id: <identifier>
#
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
@ -282,13 +314,13 @@
# * json
#
# log.format: plain
# log.format.json.fix_duplicate_message_fields: true
# log.format.json.fix_duplicate_message_fields: false
#
# path.logs:
#
# ------------ Other Settings --------------
#
# Allow or block running Logstash as superuser (default: true). Windows are excluded from the checking
# Allow or block running Logstash as superuser (default: true)
# allow_superuser: false
#
# Where to find custom plugins
@ -299,15 +331,13 @@
# pipeline.separate_logs: false
#
# Determine where to allocate memory buffers, for plugins that leverage them.
# Defaults to heap,but can be switched to direct if you prefer using direct memory space instead.
# Default to direct, optionally can be switched to heap to select Java heap space.
# pipeline.buffer.type: heap
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
# Flag to allow the legacy internal monitoring (default: false)
#xpack.monitoring.allow_legacy_collection: false
#xpack.monitoring.enabled: false
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password

devtools/backport

@ -0,0 +1,232 @@
#!/usr/bin/env python3
"""Cherry pick and backport a PR"""
from __future__ import print_function
from builtins import input
import sys
import os
import argparse
from os.path import expanduser
import re
from subprocess import check_call, call, check_output
import requests
import json
usage = """
Example usage:
./devtools/backport 7.16 2565 6490604aa0cf7fa61932a90700e6ca988fc8a527
In case of backporting errors, fix them, then run
git cherry-pick --continue
./devtools/backport 7.16 2565 6490604aa0cf7fa61932a90700e6ca988fc8a527 --continue
This script does the following:
* cleanups both from_branch and to_branch (warning: drops local changes)
* creates a temporary branch named something like "branch_2565"
* calls the git cherry-pick command in this branch
* after fixing the merge errors (if needed), pushes the branch to your
remote
* it will attempt to create a PR for you using the GitHub API, but requires
the GitHub token, with the public_repo scope, available in `~/.elastic/github.token`.
Keep in mind this token has to also be authorized to the Elastic organization as
well as to work with SSO.
(see https://help.github.com/en/articles/authorizing-a-personal-access-token-for-use-with-saml-single-sign-on)
Note that you need to take the commit hashes from `git log` on the
from_branch, copying the IDs from Github doesn't work in case we squashed the
PR.
"""
def main():
"""Main"""
parser = argparse.ArgumentParser(
description="Creates a PR for cherry-picking commits",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=usage)
parser.add_argument("to_branch",
help="To branch (e.g 7.x)")
parser.add_argument("pr_number",
help="The PR number being merged (e.g. 2345)")
parser.add_argument("commit_hashes", metavar="hash", nargs="*",
help="The commit hashes to cherry pick." +
" You can specify multiple.")
parser.add_argument("--yes", action="store_true",
help="Assume yes. Warning: discards local changes.")
parser.add_argument("--continue", action="store_true",
help="Continue after fixing merging errors.")
parser.add_argument("--from_branch", default="main",
help="From branch")
parser.add_argument("--diff", action="store_true",
help="Display the diff before pushing the PR")
parser.add_argument("--remote", default="",
help="Which remote to push the backport branch to")
#parser.add_argument("--zube-team", default="",
# help="Team the PR belongs to")
#parser.add_argument("--keep-backport-label", action="store_true",
# help="Preserve label needs_backport in original PR")
args = parser.parse_args()
print(args)
create_pr(parser, args)
def create_pr(parser, args):
info("Checking if GitHub API token is available in `~/.elastic/github.token`")
token = get_github_token()
tmp_branch = "backport_{}_{}".format(args.pr_number, args.to_branch)
if not vars(args)["continue"]:
if not args.yes and input("This will destroy all local changes. " +
"Continue? [y/n]: ") != "y":
return 1
info("Destroying local changes...")
check_call("git reset --hard", shell=True)
check_call("git clean -df", shell=True)
check_call("git fetch", shell=True)
info("Checkout of {} to backport from....".format(args.from_branch))
check_call("git checkout {}".format(args.from_branch), shell=True)
check_call("git pull", shell=True)
info("Checkout of {} to backport to...".format(args.to_branch))
check_call("git checkout {}".format(args.to_branch), shell=True)
check_call("git pull", shell=True)
info("Creating backport branch {}...".format(tmp_branch))
call("git branch -D {} > /dev/null".format(tmp_branch), shell=True)
check_call("git checkout -b {}".format(tmp_branch), shell=True)
if len(args.commit_hashes) == 0:
if token:
session = github_session(token)
base = "https://api.github.com/repos/elastic/logstash"
original_pr = session.get(base + "/pulls/" + args.pr_number).json()
merge_commit = original_pr['merge_commit_sha']
if not merge_commit:
info("Could not auto resolve merge commit - PR isn't merged yet")
return 1
info("Merge commit detected from PR: {}".format(merge_commit))
commit_hashes = merge_commit
else:
info("GitHub API token not available. " +
"Please manually specify commit hash(es) argument(s)\n")
parser.print_help()
return 1
else:
commit_hashes = " ".join(args.commit_hashes)
info("Cherry-picking {}".format(commit_hashes))
if call("git cherry-pick -x {}".format(commit_hashes), shell=True) != 0:
info("Looks like you have cherry-pick errors.")
info("Fix them, then run: ")
info(" git cherry-pick --continue")
info(" {} --continue".format(" ".join(sys.argv)))
return 1
if len(check_output("git status -s", shell=True).strip()) > 0:
info("Looks like you have uncommitted changes." +
" Please execute first: git cherry-pick --continue")
return 1
if len(check_output("git log HEAD...{}".format(args.to_branch),
shell=True).strip()) == 0:
info("No commit to push")
return 1
if args.diff:
call("git diff {}".format(args.to_branch), shell=True)
if input("Continue? [y/n]: ") != "y":
info("Aborting cherry-pick.")
return 1
info("Ready to push branch.")
remote = args.remote
if not remote:
remote = input("To which remote should I push? (your fork): ")
info("Pushing branch {} to remote {}".format(tmp_branch, remote))
call("git push {} :{} > /dev/null".format(remote, tmp_branch), shell=True)
check_call("git push --set-upstream {} {}".format(remote, tmp_branch), shell=True)
if not token:
info("GitHub API token not available.\n" +
"Manually create a PR by following this URL: \n\t" +
"https://github.com/elastic/logstash/compare/{}...{}:{}?expand=1"
.format(args.to_branch, remote, tmp_branch))
else:
info("Automatically creating a PR for you...")
session = github_session(token)
base = "https://api.github.com/repos/elastic/logstash"
original_pr = session.get(base + "/pulls/" + args.pr_number).json()
# get the github username from the remote where we pushed
remote_url = check_output("git remote get-url {}".format(remote), shell=True)
remote_user = re.search("github.com[:/](.+)/logstash", str(remote_url)).group(1)
# create PR
request = session.post(base + "/pulls", json=dict(
title="Backport PR #{} to {}: {}".format(args.pr_number, args.to_branch, original_pr["title"]),
head=remote_user + ":" + tmp_branch,
base=args.to_branch,
body="**Backport PR #{} to {} branch, original message:**\n\n---\n\n{}"
.format(args.pr_number, args.to_branch, original_pr["body"])
))
if request.status_code > 299:
info("Creating PR failed: {}".format(request.json()))
sys.exit(1)
new_pr = request.json()
# add labels
labels = ["backport"]
# get the version (vX.Y.Z) we are backporting to
version = get_version(os.getcwd())
if version:
labels.append(version)
session.post(
base + "/issues/{}/labels".format(new_pr["number"]), json=labels)
"""
if not args.keep_backport_label:
# remove needs backport label from the original PR
session.delete(base + "/issues/{}/labels/needs_backport".format(args.pr_number))
"""
# Set a version label on the original PR
if version:
session.post(
base + "/issues/{}/labels".format(args.pr_number), json=[version])
info("Done. PR created: {}".format(new_pr["html_url"]))
info("Please go and check it and add the review tags")
def get_version(base_dir):
#pattern = re.compile(r'(const\s|)\w*(v|V)ersion\s=\s"(?P<version>.*)"')
with open(os.path.join(base_dir, "versions.yml"), "r") as f:
for line in f:
if line.startswith('logstash:'):
return "v" + line.split(':')[-1].strip()
#match = pattern.match(line)
#if match:
# return match.group('version')
def get_github_token():
try:
token = open(expanduser("~/.elastic/github.token"), "r").read().strip()
except OSError:
token = False
return token
def github_session(token):
session = requests.Session()
session.headers.update({"Authorization": "token " + token})
return session
def info(msg):
print("\nINFO: {}".format(msg))
if __name__ == "__main__":
sys.exit(main())

devtools/create_pr Executable file

@ -0,0 +1,163 @@
#!/usr/bin/env python3
"""Cherry pick and backport a PR"""
from __future__ import print_function
from builtins import input
import sys
import os
import argparse
from os.path import expanduser
import re
from subprocess import check_call, call, check_output
import requests
usage = """
Example usage:
./devtools/create_pr local_branch
This script does the following:
* cleans up local_branch (warning: drops local changes)
* rebases the branch against main
* it will attempt to create a PR for you using the GitHub API, but requires
the GitHub token, with the public_repo scope, available in `~/.elastic/github.token`.
Keep in mind this token has to also be authorized to the Elastic organization as
well as to work with SSO.
(see https://help.github.com/en/articles/authorizing-a-personal-access-token-for-use-with-saml-single-sign-on)
Note that you need to take the commit hashes from `git log` on the
from_branch; copying the IDs from GitHub doesn't work if the PR was squashed.
"""
def main():
"""Main"""
parser = argparse.ArgumentParser(
description="Creates a new PR from a branch",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=usage)
parser.add_argument("local_branch",
help="Branch to create a PR for")
parser.add_argument("--to_branch", default="main",
help="Branch to rebase onto and open the PR against (e.g. main)")
parser.add_argument("--yes", action="store_true",
help="Assume yes. Warning: discards local changes.")
parser.add_argument("--continue", action="store_true",
help="Continue after fixing merging errors.")
parser.add_argument("--diff", action="store_true",
help="Display the diff before pushing the PR")
parser.add_argument("--remote", default="",
help="Which remote to push the backport branch to")
args = parser.parse_args()
print(args)
create_pr(args)
def create_pr(args):
if not vars(args)["continue"]:
if not args.yes and input("This will destroy all local changes. " +
"Continue? [y/n]: ") != "y":
return 1
info("Destroying local changes...")
check_call("git reset --hard", shell=True)
check_call("git clean -df", shell=True)
#check_call("git fetch", shell=True)
info("Checkout of {} to create a PR....".format(args.local_branch))
check_call("git checkout {}".format(args.local_branch), shell=True)
check_call("git rebase {}".format(args.to_branch), shell=True)
if args.diff:
call("git diff {}".format(args.to_branch), shell=True)
if input("Continue? [y/n]: ") != "y":
info("Aborting PR creation...")
return 1
info("Ready to push branch and create PR...")
remote = args.remote
if not remote:
remote = input("To which remote should I push? (your fork): ")
info("Pushing branch {} to remote {}".format(args.local_branch, remote))
call("git push {} :{} > /dev/null".format(remote, args.local_branch),
shell=True)
check_call("git push --set-upstream {} {}"
.format(remote, args.local_branch), shell=True)
info("Checking if GitHub API token is available in `~/.elastic/github.token`")
try:
token = open(expanduser("~/.elastic/github.token"), "r").read().strip()
except OSError:
token = False
if not token:
info("GitHub API token not available.\n" +
"Manually create a PR by following this URL: \n\t" +
"https://github.com/elastic/logstash/compare/{}...{}:{}?expand=1"
.format(args.to_branch, remote, args.local_branch))
else:
info("Automatically creating a PR for you...")
base = "https://api.github.com/repos/elastic/logstash"
session = requests.Session()
session.headers.update({"Authorization": "token " + token})
# get the github username from the remote where we pushed
remote_url = check_output("git remote get-url {}".format(remote),
shell=True)
remote_user = re.search("github.com[:/](.+)/logstash", str(remote_url)).group(1)
### TODO:
title = input("Title: ")
body = input("Description: ")
# create PR
request = session.post(base + "/pulls", json=dict(
title=title,
head=remote_user + ":" + args.local_branch,
base=args.to_branch,
body=body
))
if request.status_code > 299:
info("Creating PR failed: {}".format(request.json()))
sys.exit(1)
new_pr = request.json()
"""
# add labels
labels = ["backport"]
# get the version we are backported to
version = get_version(os.getcwd())
if version:
labels.append("v" + version)
session.post(
base + "/issues/{}/labels".format(new_pr["number"]), json=labels)
# Set a version label on the original PR
if version:
session.post(
base + "/issues/{}/labels".format(args.pr_number), json=[version])
"""
info("Done. PR created: {}".format(new_pr["html_url"]))
info("Please go and check it and add the review tags")
def get_version(base_dir):
#pattern = re.compile(r'(const\s|)\w*(v|V)ersion\s=\s"(?P<version>.*)"')
with open(os.path.join(base_dir, "versions.yml"), "r") as f:
for line in f:
if line.startswith('logstash:'):
return line.split(':')[-1].strip()
#match = pattern.match(line)
#if match:
# return match.group('version')
def info(msg):
print("\nINFO: {}".format(msg))
if __name__ == "__main__":
sys.exit(main())


@ -2,7 +2,7 @@ SHELL=/bin/bash
ELASTIC_REGISTRY ?= docker.elastic.co
# Determine the version to build.
ELASTIC_VERSION ?= $(shell ../vendor/jruby/bin/jruby bin/elastic-version)
ELASTIC_VERSION := $(shell ../vendor/jruby/bin/jruby bin/elastic-version)
ifdef STAGING_BUILD_NUM
VERSION_TAG := $(ELASTIC_VERSION)-$(STAGING_BUILD_NUM)
@ -14,13 +14,9 @@ ifdef DOCKER_ARCHITECTURE
ARCHITECTURE := $(DOCKER_ARCHITECTURE)
else
ARCHITECTURE := $(shell uname -m)
# For MacOS
ifeq ($(ARCHITECTURE), arm64)
ARCHITECTURE := aarch64
endif
endif
IMAGE_FLAVORS ?= oss full wolfi
IMAGE_FLAVORS ?= oss full ubi8 wolfi
DEFAULT_IMAGE_FLAVOR ?= full
IMAGE_TAG := $(ELASTIC_REGISTRY)/logstash/logstash
@ -30,22 +26,31 @@ all: build-from-local-artifacts build-from-local-oss-artifacts public-dockerfile
# Build from artifacts on the local filesystem, using an http server (running
# in a container) to provide the artifacts to the Dockerfile.
build-from-local-full-artifacts: dockerfile
build-from-local-full-artifacts: dockerfile env2yaml
docker run --rm -d --name=$(HTTPD) \
-p 8000:8000 --expose=8000 -v $(ARTIFACTS_DIR):/mnt \
python:3 bash -c 'cd /mnt && python3 -m http.server'
timeout 120 bash -c 'until curl -s localhost:8000 > /dev/null; do sleep 1; done'
docker build --progress=plain --network=host -t $(IMAGE_TAG)-full:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-full data/logstash || \
docker build --network=host -t $(IMAGE_TAG)-full:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-full data/logstash || \
(docker kill $(HTTPD); false); \
docker tag $(IMAGE_TAG)-full:$(VERSION_TAG) $(IMAGE_TAG):$(VERSION_TAG);
docker kill $(HTTPD)
build-from-local-oss-artifacts: dockerfile
build-from-local-oss-artifacts: dockerfile env2yaml
docker run --rm -d --name=$(HTTPD) \
-p 8000:8000 --expose=8000 -v $(ARTIFACTS_DIR):/mnt \
python:3 bash -c 'cd /mnt && python3 -m http.server'
timeout 120 bash -c 'until curl -s localhost:8000 > /dev/null; do sleep 1; done'
docker build --progress=plain --network=host -t $(IMAGE_TAG)-oss:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-oss data/logstash || \
docker build --network=host -t $(IMAGE_TAG)-oss:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-oss data/logstash || \
(docker kill $(HTTPD); false);
-docker kill $(HTTPD)
build-from-local-ubi8-artifacts: dockerfile env2yaml
docker run --rm -d --name=$(HTTPD) \
-p 8000:8000 --expose=8000 -v $(ARTIFACTS_DIR):/mnt \
python:3 bash -c 'cd /mnt && python3 -m http.server'
timeout 120 bash -c 'until curl -s localhost:8000 > /dev/null; do sleep 1; done'
docker build --network=host -t $(IMAGE_TAG)-ubi8:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-ubi8 data/logstash || \
(docker kill $(HTTPD); false);
-docker kill $(HTTPD)
@ -54,14 +59,15 @@ build-from-local-wolfi-artifacts: dockerfile
-p 8000:8000 --expose=8000 -v $(ARTIFACTS_DIR):/mnt \
python:3 bash -c 'cd /mnt && python3 -m http.server'
timeout 120 bash -c 'until curl -s localhost:8000 > /dev/null; do sleep 1; done'
docker build --progress=plain --network=host -t $(IMAGE_TAG)-wolfi:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-wolfi data/logstash || \
docker build --network=host -t $(IMAGE_TAG)-wolfi:$(VERSION_TAG) -f $(ARTIFACTS_DIR)/Dockerfile-wolfi data/logstash || \
(docker kill $(HTTPD); false);
-docker kill $(HTTPD)
COPY_FILES := $(ARTIFACTS_DIR)/docker/config/pipelines.yml $(ARTIFACTS_DIR)/docker/config/logstash-oss.yml $(ARTIFACTS_DIR)/docker/config/logstash-full.yml
COPY_FILES += $(ARTIFACTS_DIR)/docker/config/log4j2.file.properties $(ARTIFACTS_DIR)/docker/config/log4j2.properties
COPY_FILES += $(ARTIFACTS_DIR)/docker/env2yaml/env2yaml.go $(ARTIFACTS_DIR)/docker/env2yaml/go.mod $(ARTIFACTS_DIR)/docker/env2yaml/go.sum
COPY_FILES += $(ARTIFACTS_DIR)/docker/pipeline/default.conf $(ARTIFACTS_DIR)/docker/bin/docker-entrypoint
COPY_FILES += $(ARTIFACTS_DIR)/docker/env2yaml/env2yaml-arm64
COPY_FILES += $(ARTIFACTS_DIR)/docker/env2yaml/env2yaml-amd64
$(ARTIFACTS_DIR)/docker/config/pipelines.yml: data/logstash/config/pipelines.yml
$(ARTIFACTS_DIR)/docker/config/logstash-oss.yml: data/logstash/config/logstash-oss.yml
@ -70,9 +76,8 @@ $(ARTIFACTS_DIR)/docker/config/log4j2.file.properties: data/logstash/config/log4
$(ARTIFACTS_DIR)/docker/config/log4j2.properties: data/logstash/config/log4j2.properties
$(ARTIFACTS_DIR)/docker/pipeline/default.conf: data/logstash/pipeline/default.conf
$(ARTIFACTS_DIR)/docker/bin/docker-entrypoint: data/logstash/bin/docker-entrypoint
$(ARTIFACTS_DIR)/docker/env2yaml/env2yaml.go: data/logstash/env2yaml/env2yaml.go
$(ARTIFACTS_DIR)/docker/env2yaml/go.mod: data/logstash/env2yaml/go.mod
$(ARTIFACTS_DIR)/docker/env2yaml/go.sum: data/logstash/env2yaml/go.sum
$(ARTIFACTS_DIR)/docker/env2yaml/env2yaml-arm64: data/logstash/env2yaml/env2yaml-arm64
$(ARTIFACTS_DIR)/docker/env2yaml/env2yaml-amd64: data/logstash/env2yaml/env2yaml-amd64
$(ARTIFACTS_DIR)/docker/%:
cp -f $< $@
@ -113,7 +118,7 @@ ironbank_docker_paths:
mkdir -p $(ARTIFACTS_DIR)/ironbank/scripts/go/src/env2yaml/vendor
mkdir -p $(ARTIFACTS_DIR)/ironbank/scripts/pipeline
public-dockerfiles: public-dockerfiles_oss public-dockerfiles_full public-dockerfiles_wolfi public-dockerfiles_ironbank
public-dockerfiles: public-dockerfiles_oss public-dockerfiles_full public-dockerfiles_ubi8 public-dockerfiles_wolfi public-dockerfiles_ironbank
public-dockerfiles_full: templates/Dockerfile.erb docker_paths $(COPY_FILES)
../vendor/jruby/bin/jruby -S erb -T "-"\
@ -129,13 +134,6 @@ public-dockerfiles_full: templates/Dockerfile.erb docker_paths $(COPY_FILES)
cp $(ARTIFACTS_DIR)/Dockerfile-full Dockerfile && \
tar -zcf ../logstash-$(VERSION_TAG)-docker-build-context.tar.gz Dockerfile bin config env2yaml pipeline
build-from-dockerfiles_full: public-dockerfiles_full
cd $(ARTIFACTS_DIR)/docker && \
mkdir -p dockerfile_build_full && cd dockerfile_build_full && \
tar -zxf ../../logstash-$(VERSION_TAG)-docker-build-context.tar.gz && \
sed 's/artifacts/snapshots/g' Dockerfile > Dockerfile.tmp && mv Dockerfile.tmp Dockerfile && \
docker build --progress=plain --network=host -t $(IMAGE_TAG)-dockerfile-full:$(VERSION_TAG) .
public-dockerfiles_oss: templates/Dockerfile.erb docker_paths $(COPY_FILES)
../vendor/jruby/bin/jruby -S erb -T "-"\
created_date="${BUILD_DATE}" \
@ -150,12 +148,19 @@ public-dockerfiles_oss: templates/Dockerfile.erb docker_paths $(COPY_FILES)
cp $(ARTIFACTS_DIR)/Dockerfile-oss Dockerfile && \
tar -zcf ../logstash-oss-$(VERSION_TAG)-docker-build-context.tar.gz Dockerfile bin config env2yaml pipeline
build-from-dockerfiles_oss: public-dockerfiles_oss
public-dockerfiles_ubi8: templates/Dockerfile.erb docker_paths $(COPY_FILES)
../vendor/jruby/bin/jruby -S erb -T "-"\
created_date="${BUILD_DATE}" \
elastic_version="${ELASTIC_VERSION}" \
arch="${ARCHITECTURE}" \
version_tag="${VERSION_TAG}" \
release="${RELEASE}" \
image_flavor="ubi8" \
local_artifacts="false" \
templates/Dockerfile.erb > "${ARTIFACTS_DIR}/Dockerfile-ubi8" && \
cd $(ARTIFACTS_DIR)/docker && \
mkdir -p dockerfile_build_oss && cd dockerfile_build_oss && \
tar -zxf ../../logstash-$(VERSION_TAG)-docker-build-context.tar.gz && \
sed 's/artifacts/snapshots/g' Dockerfile > Dockerfile.tmp && mv Dockerfile.tmp Dockerfile && \
docker build --progress=plain --network=host -t $(IMAGE_TAG)-dockerfile-oss:$(VERSION_TAG) .
cp $(ARTIFACTS_DIR)/Dockerfile-ubi8 Dockerfile && \
tar -zcf ../logstash-ubi8-$(VERSION_TAG)-docker-build-context.tar.gz Dockerfile bin config env2yaml pipeline
public-dockerfiles_wolfi: templates/Dockerfile.erb docker_paths $(COPY_FILES)
../vendor/jruby/bin/jruby -S erb -T "-"\
@ -171,14 +176,7 @@ public-dockerfiles_wolfi: templates/Dockerfile.erb docker_paths $(COPY_FILES)
cp $(ARTIFACTS_DIR)/Dockerfile-wolfi Dockerfile && \
tar -zcf ../logstash-wolfi-$(VERSION_TAG)-docker-build-context.tar.gz Dockerfile bin config env2yaml pipeline
build-from-dockerfiles_wolfi: public-dockerfiles_wolfi
cd $(ARTIFACTS_DIR)/docker && \
mkdir -p dockerfile_build_wolfi && cd dockerfile_build_wolfi && \
tar -zxf ../../logstash-$(VERSION_TAG)-docker-build-context.tar.gz && \
sed 's/artifacts/snapshots/g' Dockerfile > Dockerfile.tmp && mv Dockerfile.tmp Dockerfile && \
docker build --progress=plain --network=host -t $(IMAGE_TAG)-dockerfile-wolfi:$(VERSION_TAG) .
public-dockerfiles_ironbank: templates/hardening_manifest.yaml.erb templates/IronbankDockerfile.erb ironbank_docker_paths $(COPY_IRONBANK_FILES)
public-dockerfiles_ironbank: templates/hardening_manifest.yaml.erb templates/Dockerfile.erb ironbank_docker_paths $(COPY_IRONBANK_FILES)
../vendor/jruby/bin/jruby -S erb -T "-"\
elastic_version="${ELASTIC_VERSION}" \
templates/hardening_manifest.yaml.erb > $(ARTIFACTS_DIR)/ironbank/hardening_manifest.yaml && \
@ -190,11 +188,35 @@ public-dockerfiles_ironbank: templates/hardening_manifest.yaml.erb templates/Iro
release="${RELEASE}" \
image_flavor="ironbank" \
local_artifacts="false" \
templates/IronbankDockerfile.erb > "${ARTIFACTS_DIR}/Dockerfile-ironbank" && \
templates/Dockerfile.erb > "${ARTIFACTS_DIR}/Dockerfile-ironbank" && \
cd $(ARTIFACTS_DIR)/ironbank && \
cp $(ARTIFACTS_DIR)/Dockerfile-ironbank Dockerfile && \
tar -zcf ../logstash-ironbank-$(VERSION_TAG)-docker-build-context.tar.gz scripts Dockerfile hardening_manifest.yaml LICENSE README.md
# Push the image to the dedicated push endpoint at "push.docker.elastic.co"
push:
$(foreach FLAVOR, $(IMAGE_FLAVORS), \
docker tag $(IMAGE_TAG)-$(FLAVOR):$(VERSION_TAG) push.$(IMAGE_TAG)-$(FLAVOR):$(VERSION_TAG); \
docker push push.$(IMAGE_TAG)-$(FLAVOR):$(VERSION_TAG); \
docker rmi push.$(IMAGE_TAG)-$(FLAVOR):$(VERSION_TAG); \
)
# Also push the default version, with no suffix like '-oss' or '-full'
docker tag $(IMAGE_TAG):$(VERSION_TAG) push.$(IMAGE_TAG):$(VERSION_TAG);
docker push push.$(IMAGE_TAG):$(VERSION_TAG);
docker rmi push.$(IMAGE_TAG):$(VERSION_TAG);
# Compile "env2yaml", the helper for configuring logstash.yml via environment
# variables.
env2yaml:
docker run --rm \
-v "$(PWD)/data/logstash/env2yaml:/usr/src/env2yaml" \
-e GOARCH=arm64 -e GOOS=linux \
-w /usr/src/env2yaml golang:1 go build -o /usr/src/env2yaml/env2yaml-arm64
docker run --rm \
-v "$(PWD)/data/logstash/env2yaml:/usr/src/env2yaml" \
-e GOARCH=amd64 -e GOOS=linux \
-w /usr/src/env2yaml golang:1 go build -o /usr/src/env2yaml/env2yaml-amd64
# Generate the Dockerfiles from ERB templates.
dockerfile: templates/Dockerfile.erb
$(foreach FLAVOR, $(IMAGE_FLAVORS), \
@ -204,7 +226,7 @@ dockerfile: templates/Dockerfile.erb
arch="${ARCHITECTURE}" \
version_tag="${VERSION_TAG}" \
image_flavor="${FLAVOR}" \
local_artifacts="${LOCAL_ARTIFACTS}" \
local_artifacts="true" \
templates/Dockerfile.erb > "${ARTIFACTS_DIR}/Dockerfile-${FLAVOR}" ; \
)


@ -1,2 +1,2 @@
api.http.host: "0.0.0.0"
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]


@ -1 +1 @@
api.http.host: "0.0.0.0"
http.host: "0.0.0.0"


@ -16,13 +16,12 @@ package main
import (
"errors"
"fmt"
"gopkg.in/yaml.v2"
"io/ioutil"
"log"
"os"
"strings"
"gopkg.in/yaml.v2"
"fmt"
)
var validSettings = []string{
@ -50,6 +49,7 @@ var validSettings = []string{
"config.debug",
"config.support_escapes",
"config.field_reference.escape_style",
"event_api.tags.illegal",
"queue.type",
"path.queue",
"queue.page_capacity",
@ -65,9 +65,14 @@ var validSettings = []string{
"dead_letter_queue.storage_policy",
"dead_letter_queue.retain.age",
"path.dead_letter_queue",
"http.enabled", // DEPRECATED: prefer `api.enabled`
"http.environment", // DEPRECATED: prefer `api.environment`
"http.host", // DEPRECATED: prefer `api.http.host`
"http.port", // DEPRECATED: prefer `api.http.port`
"log.level",
"log.format",
"log.format.json.fix_duplicate_message_fields",
"modules",
"metric.collect",
"path.logs",
"path.plugins",
@ -82,7 +87,6 @@ var validSettings = []string{
"api.auth.basic.password_policy.include.symbol",
"allow_superuser",
"monitoring.cluster_uuid",
"xpack.monitoring.allow_legacy_collection",
"xpack.monitoring.enabled",
"xpack.monitoring.collection.interval",
"xpack.monitoring.elasticsearch.hosts",
@ -126,6 +130,8 @@ var validSettings = []string{
"xpack.management.elasticsearch.ssl.cipher_suites",
"xpack.geoip.download.endpoint",
"xpack.geoip.downloader.enabled",
"cloud.id",
"cloud.auth",
}
// Given a setting name, return a downcased version with delimiters removed.


@ -1,73 +1,147 @@
# This Dockerfile was generated from templates/Dockerfile.erb
<%# image_flavor 'full', 'oss', 'wolfi' -%>
<% if local_artifacts == 'false' -%>
<% url_root = 'https://artifacts.elastic.co/downloads/logstash' -%>
<% else -%>
<% url_root = 'http://localhost:8000' -%>
<% end -%>
<% if image_flavor == 'oss' -%>
<% tarball = "logstash-oss-#{elastic_version}-linux-#{arch}.tar.gz" -%>
<% license = 'Apache 2.0' -%>
<% else -%>
<% tarball = "logstash-#{elastic_version}-linux-#{arch}.tar.gz" -%>
<% license = 'Elastic License' -%>
<% end -%>
<% if image_flavor == 'full' || image_flavor == 'oss' -%>
<% base_image = 'redhat/ubi9-minimal:latest' -%>
<% go_image = 'golang:1.23' -%>
<% package_manager = 'microdnf' -%>
<% else -%>
<% base_image = 'docker.elastic.co/wolfi/chainguard-base' -%>
<% go_image = 'docker.elastic.co/wolfi/go:1.23' -%>
<% package_manager = 'apk' -%>
<% end -%>
<% locale = 'C.UTF-8' -%>
<% if image_flavor == 'wolfi' -%>
FROM docker.elastic.co/wolfi/go:1-dev as builder-env2yaml
# Build env2yaml
FROM <%= go_image %> AS builder-env2yaml
COPY env2yaml/env2yaml.go env2yaml/go.mod env2yaml/go.sum /tmp/go/src/env2yaml/
COPY env2yaml/env2yaml.go /tmp/go/src/env2yaml/env2yaml.go
COPY env2yaml/go.mod /tmp/go/src/env2yaml/go.mod
COPY env2yaml/go.sum /tmp/go/src/env2yaml/go.sum
WORKDIR /tmp/go/src/env2yaml
RUN go build -trimpath
RUN go build
<% end -%>
# Build main image
# Minimal distributions do not ship with en language packs.
FROM <%= base_image %>
<% if image_flavor == 'ironbank' -%>
<%# Start image_flavor 'ironbank' %>
ARG BASE_REGISTRY=registry1.dso.mil
ARG BASE_IMAGE=ironbank/redhat/ubi/ubi9
ARG BASE_TAG=9.5
ARG LOGSTASH_VERSION=<%= elastic_version %>
ARG GOLANG_VERSION=1.21.8
ENV ELASTIC_CONTAINER=true
# stage 1: build env2yaml
FROM ${BASE_REGISTRY}/google/golang/ubi9/golang-1.21:${GOLANG_VERSION} AS env2yaml
ENV GOPATH=/go
COPY scripts/go /go
USER root
RUN dnf-3 -y upgrade && dnf-3 install -y git && \
cd /go/src/env2yaml && \
go build
# Final stage
FROM ${BASE_REGISTRY}/${BASE_IMAGE}:${BASE_TAG}
ARG LOGSTASH_VERSION
ENV ELASTIC_CONTAINER true
ENV PATH=/usr/share/logstash/bin:$PATH
ENV LANG=<%= locale %> LC_ALL=<%= locale %>
WORKDIR /usr/share
# Install packages
RUN for iter in {1..10}; do \
<% if image_flavor == 'full' || image_flavor == 'oss' -%>
<%= package_manager %> update -y && \
<%= package_manager %> install -y procps findutils tar gzip && \
<%= package_manager %> install -y openssl && \
<%= package_manager %> install -y which shadow-utils && \
<%= package_manager %> clean all && \
<% else -%><%# 'wolfi' -%>
<%= package_manager %> add --no-cache curl bash openssl && \
<% end -%>
exit_code=0 && break || \
exit_code=$? && echo "packaging error: retry $iter in 10s" && \
<%= package_manager %> clean all && sleep 10; \
done; \
(exit $exit_code)
COPY --from=env2yaml /go/src/env2yaml/env2yaml /usr/local/bin/env2yaml
COPY scripts/config/* config/
COPY scripts/pipeline/default.conf pipeline/logstash.conf
COPY scripts/bin/docker-entrypoint /usr/local/bin/
COPY logstash-${LOGSTASH_VERSION}-linux-x86_64.tar.gz /tmp/logstash.tar.gz
# Provide a non-root user to run the process
# Add Logstash itself and set permissions
RUN dnf -y upgrade && \
dnf install -y procps findutils tar gzip which shadow-utils && \
dnf clean all && \
groupadd --gid 1000 logstash && \
adduser --uid 1000 --gid 1000 --home-dir /usr/share/logstash --no-create-home logstash && \
tar -zxf /tmp/logstash.tar.gz -C /usr/share/ && \
mv /usr/share/logstash-${LOGSTASH_VERSION} /usr/share/logstash && \
chown -R 1000:0 /usr/share/logstash && \
chown --recursive logstash:logstash /usr/share/logstash/ && \
chown -R logstash:root /usr/share/logstash config/ pipeline/ && \
chmod -R g=u /usr/share/logstash && \
mv config/* /usr/share/logstash/config && \
mv pipeline /usr/share/logstash/pipeline && \
mkdir /licenses/ && \
mv /usr/share/logstash/NOTICE.TXT /licenses/NOTICE.TXT && \
mv /usr/share/logstash/LICENSE.txt /licenses/LICENSE.txt && \
ln -s /usr/share/logstash /opt/logstash && \
chmod 0755 /usr/local/bin/docker-entrypoint && \
rmdir config && \
rm /tmp/logstash.tar.gz
<%# End image_flavor 'ironbank' %>
<% else -%>
<%# Start image_flavor 'full', 'oss', 'ubi8', 'wolfi' %>
<% if local_artifacts == 'false' -%>
<% url_root = 'https://artifacts.elastic.co/downloads/logstash' -%>
<% else -%>
<% url_root = 'http://localhost:8000' -%>
<% end -%>
<% if image_flavor == 'oss' -%>
<% tarball = "logstash-oss-#{elastic_version}-linux-$(arch).tar.gz" -%>
<% license = 'Apache 2.0' -%>
<% else -%>
<% tarball = "logstash-#{elastic_version}-linux-$(arch).tar.gz" -%>
<% license = 'Elastic License' -%>
<% end -%>
<% if image_flavor == 'ubi8' %>
<% base_image = 'docker.elastic.co/ubi8/ubi-minimal' -%>
<% package_manager = 'microdnf' -%>
<% arch_command = 'uname -m' -%>
# Minimal distributions do not ship with en language packs.
<% locale = 'C.UTF-8' -%>
<% elsif image_flavor == 'wolfi' %>
<% base_image = 'docker.elastic.co/wolfi/chainguard-base' -%>
<% package_manager = 'apk' -%>
<% arch_command = 'uname -m' -%>
# Minimal distributions do not ship with en language packs.
<% locale = 'C.UTF-8' -%>
<% else -%>
<% base_image = 'ubuntu:20.04' -%>
<% package_manager = 'apt-get' -%>
<% locale = 'en_US.UTF-8' -%>
<% arch_command = 'dpkg --print-architecture' -%>
<% end -%>
FROM <%= base_image %>
RUN for iter in {1..10}; do \
<% if image_flavor == 'wolfi' %>
<%= package_manager %> add --no-cache curl bash openssl && \
<% else -%>
<% if image_flavor == 'full' || image_flavor == 'oss' -%>
export DEBIAN_FRONTEND=noninteractive && \
<% end -%>
<%= package_manager %> update -y && \
<%= package_manager %> upgrade -y && \
<%= package_manager %> install -y procps findutils tar gzip && \
<% if image_flavor == 'ubi8' -%>
<%= package_manager %> install -y openssl && \
<% end -%>
<% if image_flavor == 'ubi8' -%>
<%= package_manager %> install -y which shadow-utils && \
<% else -%>
<%= package_manager %> install -y locales && \
<% end -%>
<% if image_flavor != 'ubi9' -%>
<%= package_manager %> install -y curl && \
<% end -%>
<%= package_manager %> clean all && \
<% if image_flavor == 'full' || image_flavor == 'oss' -%>
locale-gen 'en_US.UTF-8' && \
<%= package_manager %> clean metadata && \
<% end -%>
<% end -%>
exit_code=0 && break || exit_code=$? && \
echo "packaging error: retry $iter in 10s" && \
<%= package_manager %> clean all && \
<% if image_flavor == 'full' || image_flavor == 'oss' -%>
RUN groupadd --gid 1000 logstash && \
adduser --uid 1000 --gid 1000 \
--home "/usr/share/logstash" \
--no-create-home \
logstash && \
<% else -%><%# 'wolfi' -%>
<%= package_manager %> clean metadata && \
<% end -%>
sleep 10; done; \
(exit $exit_code)
# Provide a non-root user to run the process.
<% if image_flavor == 'wolfi' -%>
RUN addgroup -g 1000 logstash && \
adduser -u 1000 -G logstash \
--disabled-password \
@ -75,54 +149,95 @@ RUN addgroup -g 1000 logstash && \
--home "/usr/share/logstash" \
--shell "/sbin/nologin" \
--no-create-home \
logstash && \
logstash
<% else -%>
RUN groupadd --gid 1000 logstash && \
adduser --uid 1000 --gid 1000 --home /usr/share/logstash --no-create-home logstash
<% end -%>
curl -Lo - <%= url_root %>/<%= tarball %> | \
# Add Logstash itself.
RUN curl -Lo - <%= url_root %>/<%= tarball %> | \
tar zxf - -C /usr/share && \
mv /usr/share/logstash-<%= elastic_version %> /usr/share/logstash && \
chown --recursive logstash:logstash /usr/share/logstash/ && \
chown -R logstash:root /usr/share/logstash && \
chmod -R g=u /usr/share/logstash && \
mkdir /licenses && \
mkdir /licenses/ && \
mv /usr/share/logstash/NOTICE.TXT /licenses/NOTICE.TXT && \
mv /usr/share/logstash/LICENSE.txt /licenses/LICENSE.txt && \
find /usr/share/logstash -type d -exec chmod g+s {} \; && \
ln -s /usr/share/logstash /opt/logstash
COPY --from=builder-env2yaml /tmp/go/src/env2yaml/env2yaml /usr/local/bin/env2yaml
COPY --chown=logstash:root config/pipelines.yml config/log4j2.properties config/log4j2.file.properties /usr/share/logstash/config/
<% if image_flavor == 'oss' -%>
COPY --chown=logstash:root config/logstash-oss.yml /usr/share/logstash/config/logstash.yml
<% else -%><%# 'full', 'wolfi' -%>
COPY --chown=logstash:root config/logstash-full.yml /usr/share/logstash/config/logstash.yml
<% end -%>
COPY --chown=logstash:root pipeline/default.conf /usr/share/logstash/pipeline/logstash.conf
COPY --chmod=0755 bin/docker-entrypoint /usr/local/bin/
WORKDIR /usr/share/logstash
ENV ELASTIC_CONTAINER true
ENV PATH=/usr/share/logstash/bin:$PATH
# Provide a minimal configuration, so that simple invocations will provide
# a good experience.
<% if image_flavor == 'oss' -%>
COPY config/logstash-oss.yml config/logstash.yml
<% else -%>
COPY config/logstash-full.yml config/logstash.yml
<% end -%>
COPY config/pipelines.yml config/log4j2.properties config/log4j2.file.properties config/
COPY pipeline/default.conf pipeline/logstash.conf
RUN chown --recursive logstash:root config/ pipeline/
# Ensure Logstash gets the correct locale by default.
ENV LANG=<%= locale %> LC_ALL=<%= locale %>
<% if image_flavor == 'wolfi' -%>
COPY --from=builder-env2yaml /tmp/go/src/env2yaml/env2yaml /usr/local/bin/env2yaml
<% else -%>
COPY env2yaml/env2yaml-amd64 env2yaml/env2yaml-arm64 env2yaml/
# Copy over the appropriate env2yaml artifact
RUN env2yamlarch="$(<%= arch_command %>)"; \
case "${env2yamlarch}" in \
'x86_64'|'amd64') \
env2yamlarch=amd64; \
;; \
'aarch64'|'arm64') \
env2yamlarch=arm64; \
;; \
*) echo >&2 "error: unsupported architecture '$env2yamlarch'"; exit 1 ;; \
esac; \
mkdir -p /usr/local/bin; \
cp env2yaml/env2yaml-${env2yamlarch} /usr/local/bin/env2yaml; \
rm -rf env2yaml
<% end -%>
# Place the startup wrapper script.
COPY bin/docker-entrypoint /usr/local/bin/
RUN chmod 0755 /usr/local/bin/docker-entrypoint
<%# End image_flavor 'full', 'oss', 'ubi8', 'wolfi' %>
<% end -%>
USER 1000
EXPOSE 9600 5044
LABEL org.label-schema.build-date=<%= created_date %> \
org.label-schema.license="<%= license %>" \
<% if image_flavor != 'ironbank' -%>
LABEL org.label-schema.schema-version="1.0" \
org.label-schema.vendor="Elastic" \
org.opencontainers.image.vendor="Elastic" \
org.label-schema.name="logstash" \
org.label-schema.schema-version="1.0" \
org.opencontainers.image.title="logstash" \
org.label-schema.version="<%= elastic_version %>" \
org.opencontainers.image.version="<%= elastic_version %>" \
org.label-schema.url="https://www.elastic.co/products/logstash" \
org.label-schema.vcs-url="https://github.com/elastic/logstash" \
org.label-schema.vendor="Elastic" \
org.label-schema.version="<%= elastic_version %>" \
org.opencontainers.image.created=<%= created_date %> \
org.opencontainers.image.description="Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite 'stash.'" \
org.label-schema.license="<%= license %>" \
org.opencontainers.image.licenses="<%= license %>" \
org.opencontainers.image.title="logstash" \
org.opencontainers.image.vendor="Elastic" \
org.opencontainers.image.version="<%= elastic_version %>" \
org.opencontainers.image.description="Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite 'stash.'" \
org.label-schema.build-date=<%= created_date %> \
<% if image_flavor == 'ubi8' -%> license="<%= license %>" \
description="Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite 'stash.'" \
license="<%= license %>" \
maintainer="info@elastic.co" \
name="logstash" \
maintainer="info@elastic.co" \
summary="Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite 'stash.'" \
vendor="Elastic"
vendor="Elastic" \
<% end -%>
org.opencontainers.image.created=<%= created_date %>
<% end -%>
ENTRYPOINT ["/usr/local/bin/docker-entrypoint"]


@ -1,65 +0,0 @@
# This Dockerfile was generated from templates/IronbankDockerfile.erb
ARG BASE_REGISTRY=registry1.dso.mil
ARG BASE_IMAGE=ironbank/redhat/ubi/ubi9
ARG BASE_TAG=9.5
ARG LOGSTASH_VERSION=<%= elastic_version %>
ARG GOLANG_VERSION=1.21.8
# stage 1: build env2yaml
FROM ${BASE_REGISTRY}/google/golang/ubi9/golang-1.21:${GOLANG_VERSION} AS env2yaml
ENV GOPATH=/go
COPY scripts/go /go
USER root
RUN dnf-3 -y upgrade && dnf-3 install -y git && \
cd /go/src/env2yaml && \
go build
# Final stage
FROM ${BASE_REGISTRY}/${BASE_IMAGE}:${BASE_TAG}
ARG LOGSTASH_VERSION
ENV ELASTIC_CONTAINER true
ENV PATH=/usr/share/logstash/bin:$PATH
WORKDIR /usr/share
COPY --from=env2yaml /go/src/env2yaml/env2yaml /usr/local/bin/env2yaml
COPY scripts/config/* config/
COPY scripts/pipeline/default.conf pipeline/logstash.conf
COPY scripts/bin/docker-entrypoint /usr/local/bin/
COPY logstash-${LOGSTASH_VERSION}-linux-x86_64.tar.gz /tmp/logstash.tar.gz
RUN dnf -y upgrade && \
dnf install -y procps findutils tar gzip which shadow-utils && \
dnf clean all && \
groupadd --gid 1000 logstash && \
adduser --uid 1000 --gid 1000 --home-dir /usr/share/logstash --no-create-home logstash && \
tar -zxf /tmp/logstash.tar.gz -C /usr/share/ && \
mv /usr/share/logstash-${LOGSTASH_VERSION} /usr/share/logstash && \
chown -R 1000:0 /usr/share/logstash && \
chown --recursive logstash:logstash /usr/share/logstash/ && \
chown -R logstash:root /usr/share/logstash config/ pipeline/ && \
chmod -R g=u /usr/share/logstash && \
mv config/* /usr/share/logstash/config && \
mv pipeline /usr/share/logstash/pipeline && \
mkdir /licenses/ && \
mv /usr/share/logstash/NOTICE.TXT /licenses/NOTICE.TXT && \
mv /usr/share/logstash/LICENSE.txt /licenses/LICENSE.txt && \
ln -s /usr/share/logstash /opt/logstash && \
chmod 0755 /usr/local/bin/docker-entrypoint && \
rmdir config && \
rm /tmp/logstash.tar.gz
WORKDIR /usr/share/logstash
USER 1000
EXPOSE 9600 5044
ENTRYPOINT ["/usr/local/bin/docker-entrypoint"]


@ -1,46 +0,0 @@
project: 'Logstash'
cross_links:
- beats
- docs-content
- ecs
- elasticsearch
- integration-docs
- logstash-docs-md
- search-ui
toc:
- toc: reference
- toc: release-notes
- toc: extend
subs:
logstash-ref: "https://www.elastic.co/guide/en/logstash/current"
ecloud: "Elastic Cloud"
esf: "Elastic Serverless Forwarder"
ess: "Elasticsearch Service"
serverless-full: "Elastic Cloud Serverless"
serverless-short: "Serverless"
es-serverless: "Elasticsearch Serverless"
agent: "Elastic Agent"
fleet: "Fleet"
integrations: "Integrations"
stack: "Elastic Stack"
xpack: "X-Pack"
es: "Elasticsearch"
kib: "Kibana"
ls: "Logstash"
beats: "Beats"
filebeat: "Filebeat"
metricbeat: "Metricbeat"
winlogbeat: "Winlogbeat"
security: "X-Pack security"
security-features: "security features"
monitoring: "X-Pack monitoring"
monitor-features: "monitoring features"
stack-monitor-features: "Elastic Stack monitoring features"
ilm: "index lifecycle management"
ilm-cap: "Index lifecycle management"
ilm-init: "ILM"
dlm: "data lifecycle management"
dlm-init: "DLM"
stack-version: "9.0.0"
major-version: "9.x"
docker-repo: "docker.elastic.co/logstash/logstash"


@ -1,636 +0,0 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/logstash/current/codec-new-plugin.html
---
# How to write a Logstash codec plugin [codec-new-plugin]
To develop a new codec for Logstash, build a self-contained Ruby gem whose source code lives in its own GitHub repository. The Ruby gem can then be hosted and shared on RubyGems.org. You can use the example codec implementation as a starting point. (If you're unfamiliar with Ruby, you can find an excellent quickstart guide at [https://www.ruby-lang.org/en/documentation/quickstart/](https://www.ruby-lang.org/en/documentation/quickstart/).)
## Get started [_get_started_2]
Let's step through creating a codec plugin using the [example codec plugin](https://github.com/logstash-plugins/logstash-codec-example/).
### Create a GitHub repo for your new plugin [_create_a_github_repo_for_your_new_plugin_2]
Each Logstash plugin lives in its own GitHub repository. To create a new repository for your plugin:
1. Log in to GitHub.
2. Click the **Repositories** tab. You'll see a list of other repositories you've forked or contributed to.
3. Click the green **New** button in the upper right.
4. Specify the following settings for your new repo:
* **Repository name**: a unique name of the form `logstash-codec-pluginname`.
* **Public or Private**: your choice, but the repository must be Public if you want to submit it as an official plugin.
* **Initialize this repository with a README**: enables you to immediately clone the repository to your computer.
5. Click **Create Repository**.
### Use the plugin generator tool [_use_the_plugin_generator_tool_2]
You can create your own Logstash plugin in seconds! The `generate` subcommand of `bin/logstash-plugin` creates the foundation for a new Logstash plugin with templatized files. It creates the correct directory structure, gemspec files, and dependencies so you can start adding custom code to process data with Logstash.
For more information, see [Generating plugins](/reference/plugin-generator.md)
### Copy the codec code [_copy_the_codec_code]
Alternatively, you can use the examples repo we host on github.com
1. **Clone your plugin.** Replace `GITUSERNAME` with your github username, and `MYPLUGINNAME` with your plugin name.
* `git clone https://github.com/GITUSERNAME/logstash-``codec-MYPLUGINNAME.git`
* alternately, via ssh: `git clone git@github.com:GITUSERNAME/logstash``-codec-MYPLUGINNAME.git`
* `cd logstash-codec-MYPLUGINNAME`
2. **Clone the codec plugin example and copy it to your plugin branch.**
You don't want to include the example .git directory or its contents, so delete it before you copy the example.
* `cd /tmp`
* `git clone https://github.com/logstash-plugins/logstash``-codec-example.git`
* `cd logstash-codec-example`
* `rm -rf .git`
* `cp -R * /path/to/logstash-codec-mypluginname/`
3. **Rename the following files to match the name of your plugin.**
* `logstash-codec-example.gemspec`
* `example.rb`
* `example_spec.rb`
```txt
cd /path/to/logstash-codec-mypluginname
mv logstash-codec-example.gemspec logstash-codec-mypluginname.gemspec
mv lib/logstash/codecs/example.rb lib/logstash/codecs/mypluginname.rb
mv spec/codecs/example_spec.rb spec/codecs/mypluginname_spec.rb
```
Your file structure should look like this:
```txt
$ tree logstash-codec-mypluginname
├── Gemfile
├── LICENSE
├── README.md
├── Rakefile
├── lib
│   └── logstash
│   └── codecs
│   └── mypluginname.rb
├── logstash-codec-mypluginname.gemspec
└── spec
└── codecs
└── mypluginname_spec.rb
```
For more information about the Ruby gem file structure and an excellent walkthrough of the Ruby gem creation process, see [http://timelessrepo.com/making-ruby-gems](http://timelessrepo.com/making-ruby-gems)
### See what your plugin looks like [_see_what_your_plugin_looks_like_2]
Before we dive into the details, open up the plugin file in your favorite text editor and take a look.
```ruby
require "logstash/codecs/base"
require "logstash/codecs/line"
# Add any asciidoc formatted documentation here
class LogStash::Codecs::Example < LogStash::Codecs::Base
# This example codec will append a string to the message field
# of an event, either in the decoding or encoding methods
#
# This is only intended to be used as an example.
#
# input {
# stdin { codec => example }
# }
#
# or
#
# output {
# stdout { codec => example }
# }
config_name "example"
# Append a string to the message
config :append, :validate => :string, :default => ', Hello World!'
public
def register
@lines = LogStash::Codecs::Line.new
@lines.charset = "UTF-8"
end
public
def decode(data)
@lines.decode(data) do |line|
replace = { "message" => line["message"].to_s + @append }
yield LogStash::Event.new(replace)
end
end # def decode
public
def encode(event)
@on_event.call(event, event.get("message").to_s + @append + NL)
end # def encode
end # class LogStash::Codecs::Example
```
## Coding codec plugins [_coding_codec_plugins]
Now let's take a line-by-line look at the example plugin.
### `require` Statements [_require_statements_2]
Logstash codec plugins require parent classes defined in `logstash/codecs/base` and `logstash/namespace`:
```ruby
require "logstash/codecs/base"
require "logstash/namespace"
```
Of course, the plugin you build may depend on other code, or even gems. Just put them here along with these Logstash dependencies.
## Plugin Body [_plugin_body_2]
Let's go through the various elements of the plugin itself.
### `class` Declaration [_class_declaration_2]
The codec plugin class should be a subclass of `LogStash::Codecs::Base`:
```ruby
class LogStash::Codecs::Example < LogStash::Codecs::Base
```
The class name should closely mirror the plugin name, for example:
```ruby
LogStash::Codecs::Example
```
### `config_name` [_config_name_2]
```ruby
config_name "example"
```
This is the name your plugin will call inside the codec configuration block.
If you set `config_name "example"` in your plugin code, the corresponding Logstash configuration block would need to look like this:
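```txt
input {
  stdin { codec => example }
}
```
Here `stdin` is just a convenient host for the codec (this mirrors the usage shown in the example plugin's comments); any input or output that accepts a `codec` option can reference `example` in the same way.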
## Configuration Parameters [_configuration_parameters_2]
```ruby
config :variable_name, :validate => :variable_type, :default => "Default value", :required => boolean, :deprecated => boolean, :obsolete => string
```
The configuration, or `config` section allows you to define as many (or as few) parameters as are needed to enable Logstash to process events.
There are several configuration attributes:
* `:validate` - allows you to enforce passing a particular data type to Logstash for this configuration option, such as `:string`, `:password`, `:boolean`, `:number`, `:array`, `:hash`, `:path` (a file-system path), `:uri`, `:codec` (since 1.2.0), `:bytes`. Note that this also works as a coercion in that if you specify "true" for a boolean (even though technically a string), it will become a valid boolean in the config. This coercion works for the `:number` type as well, where "1.2" becomes a float and "22" is an integer.
* `:default` - lets you specify a default value for a parameter
* `:required` - whether or not this parameter is mandatory (a Boolean `true` or `false`)
* `:list` - whether or not this value should be a list of values. Will typecheck the list members, and convert scalars to one element lists. Note that this mostly obviates the array type, though if you need lists of complex objects that will be more suitable.
* `:deprecated` - informational (also a Boolean `true` or `false`)
* `:obsolete` - used to declare that a given setting has been removed and is no longer functioning. The idea is to provide an informed upgrade path to users who are still using a now-removed setting.
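For illustration only, here is a hedged sketch combining several of these attributes (the option names `target_field`, `max_lines`, and `tags_to_add` are made up and not part of the example plugin):
```ruby
# Required string option: Logstash refuses to start the plugin if this
# setting is missing from the configuration block.
config :target_field, :validate => :string, :required => true

# Optional number with a default; "5" in the config file is coerced to the
# integer 5, and "1.2" would be coerced to a float.
config :max_lines, :validate => :number, :default => 5

# List option; a single scalar value is converted to a one-element list.
config :tags_to_add, :validate => :string, :list => true, :default => []
```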
## Plugin Methods [_plugin_methods_2]
Logstash codecs must implement the `register` method, and the `decode` method or the `encode` method (or both).
### `register` Method [_register_method_2]
```ruby
public
def register
end # def register
```
The Logstash `register` method is like an `initialize` method. It was originally created to enforce having `super` called, preventing headaches for newbies. (Note: It may go away in favor of `initialize`, in conjunction with some enforced testing to ensure `super` is called.)
`public` means the method can be called anywhere, not just within the class. This is the default behavior for methods in Ruby, but it is specified explicitly here anyway.
You can also assign instance variables here (variables prepended by `@`). Configuration variables are now in scope as instance variables, like `@message`
### `decode` Method [_decode_method]
```ruby
public
def decode(data)
@lines.decode(data) do |line|
replace = { "message" => line["message"].to_s + @append }
yield LogStash::Event.new(replace)
end
end # def decode
```
The codec's `decode` method is where data coming in from an input is transformed into an event. There are complex examples like the [collectd](https://github.com/logstash-plugins/logstash-codec-collectd/blob/main/lib/logstash/codecs/collectd.rb#L386-L484) codec, and simpler examples like the [spool](https://github.com/logstash-plugins/logstash-codec-spool/blob/main/lib/logstash/codecs/spool.rb#L11-L16) codec.
There must be a `yield` statement as part of the `decode` method which will return decoded events to the pipeline.
### `encode` Method [_encode_method]
```ruby
public
def encode(event)
@on_event.call(event, event.get("message").to_s + @append + NL)
end # def encode
```
The `encode` method takes an event and serializes it (*encodes*) into another format. Good examples of `encode` methods include the simple [plain](https://github.com/logstash-plugins/logstash-codec-plain/blob/main/lib/logstash/codecs/plain.rb#L39-L46) codec, the slightly more involved [msgpack](https://github.com/logstash-plugins/logstash-codec-msgpack/blob/main/lib/logstash/codecs/msgpack.rb#L38-L46) codec, and even an [avro](https://github.com/logstash-plugins/logstash-codec-avro/blob/main/lib/logstash/codecs/avro.rb#L38-L45) codec.
In most cases, your `encode` method should have an `@on_event.call()` statement. This call will output data per event in the described way.
## Building the Plugin [_building_the_plugin_2]
At this point in the process you have coded your plugin and are ready to build a Ruby Gem from it. The following information will help you complete the process.
### External dependencies [_external_dependencies_2]
A `require` statement in Ruby is used to include necessary code. In some cases your plugin may require additional files. For example, the collectd plugin [uses](https://github.com/logstash-plugins/logstash-codec-collectd/blob/main/lib/logstash/codecs/collectd.rb#L148) the `types.db` file provided by collectd. In the main directory of your plugin, a file called `vendor.json` is where these files are described.
The `vendor.json` file contains an array of JSON objects, each describing a file dependency. This example comes from the [collectd](https://github.com/logstash-plugins/logstash-codec-collectd/blob/main/vendor.json) codec plugin:
```txt
[{
"sha1": "a90fe6cc53b76b7bdd56dc57950d90787cb9c96e",
"url": "http://collectd.org/files/collectd-5.4.0.tar.gz",
"files": [ "/src/types.db" ]
}]
```
* `sha1` is the sha1 signature used to verify the integrity of the file referenced by `url`.
* `url` is the address from where Logstash will download the file.
* `files` is an optional array of files to extract from the downloaded file. Note that while tar archives can use absolute or relative paths, treat them as absolute in this array. If `files` is not present, all files will be uncompressed and extracted into the vendor directory.
Another example of the `vendor.json` file is the [`geoip` filter](https://github.com/logstash-plugins/logstash-filter-geoip/blob/main/vendor.json)
The process used to download these dependencies is to call `rake vendor`. This will be discussed further in the testing section of this document.
Another kind of external dependency is on jar files. This will be described in the "Add a `gemspec` file" section.
### Deprecated features [_deprecated_features_2]
As a plugin evolves, an option or feature may no longer serve the intended purpose, and the developer may want to *deprecate* its usage. Deprecation warns users about the option's status, so they aren't caught by surprise when it is removed in a later release.
{{ls}} 7.6 introduced a *deprecation logger* to make handling those situations easier. You can use the [adapter](https://github.com/logstash-plugins/logstash-mixin-deprecation_logger_support) to ensure that your plugin can use the deprecation logger while still supporting older versions of {{ls}}. See the [readme](https://github.com/logstash-plugins/logstash-mixin-deprecation_logger_support/blob/main/README.md) for more information and for instructions on using the adapter.
Deprecations are noted in the `logstash-deprecation.log` file in the `log` directory.
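As a rough sketch only (assuming the adapter behaves as its README describes, and using a made-up `old_option` setting), a codec might emit a deprecation warning from `register` like this:
```ruby
require "logstash/codecs/base"
require "logstash/plugin_mixins/deprecation_logger_support"

class LogStash::Codecs::Example < LogStash::Codecs::Base
  # The adapter provides a `deprecation_logger` helper that writes to the
  # logstash-deprecation.log file on Logstash versions that support it.
  include LogStash::PluginMixins::DeprecationLoggerSupport

  config_name "example"

  # Hypothetical setting kept only for backwards compatibility.
  config :old_option, :validate => :string, :deprecated => true

  def register
    deprecation_logger.deprecated("`old_option` is deprecated; use `append` instead") if @old_option
  end
end
```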
### Add a Gemfile [_add_a_gemfile_2]
Gemfiles allow Ruby's Bundler to maintain the dependencies for your plugin. Currently, all we'll need is the Logstash gem, for testing, but if you require other gems, you should add them in here.
::::{tip}
See [Bundler's Gemfile page](http://bundler.io/gemfile.html) for more details.
::::
```ruby
source 'https://rubygems.org'
gemspec
gem "logstash", :github => "elastic/logstash", :branch => "master"
```
## Add a `gemspec` file [_add_a_gemspec_file_2]
Gemspecs define the Ruby gem which will be built and contain your plugin.
::::{tip}
More information can be found on the [Rubygems Specification page](http://guides.rubygems.org/specification-reference/).
::::
```ruby
Gem::Specification.new do |s|
s.name = 'logstash-codec-example'
s.version = '0.1.0'
s.licenses = ['Apache License (2.0)']
s.summary = "This codec does x, y, z in Logstash"
s.description = "This gem is a logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"
s.authors = ["Elastic"]
s.email = 'info@elastic.co'
s.homepage = "http://www.elastic.co/guide/en/logstash/current/index.html"
s.require_paths = ["lib"]
# Files
s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT']
# Tests
s.test_files = s.files.grep(%r{^(test|spec|features)/})
# Special flag to let us know this is actually a logstash plugin
s.metadata = { "logstash_plugin" => "true", "logstash_group" => "codec" }
# Gem dependencies
s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99"
s.add_development_dependency 'logstash-devutils'
end
```
It is appropriate to change these values to fit your plugin. In particular, `s.name` and `s.summary` should reflect your plugin's name and behavior.
`s.licenses` and `s.version` are also important and will come into play when you are ready to publish your plugin.
Logstash and all its plugins are licensed under [Apache License, version 2 ("ALv2")](https://github.com/elastic/logstash/blob/main/LICENSE.txt). If you make your plugin publicly available via [RubyGems.org](http://rubygems.org), please make sure to have this line in your gemspec:
* `s.licenses = ['Apache License (2.0)']`
The gem version, designated by `s.version`, helps track changes to plugins over time. You should use the [semver versioning](http://semver.org/) strategy for version numbers.
### Runtime and Development Dependencies [_runtime_and_development_dependencies_2]
At the bottom of the `gemspec` file is a section with a comment: `Gem dependencies`. This is where any other needed gems must be mentioned. If a gem is necessary for your plugin to function, it is a runtime dependency. If a gem is only used for testing, then it would be a development dependency.
::::{note}
You can also have versioning requirements for your dependencies—including other Logstash plugins:
```ruby
# Gem dependencies
s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99"
s.add_development_dependency 'logstash-devutils'
```
This gemspec has a runtime dependency on the logstash-core-plugin-api and requires that it have a version number greater than or equal to version 1.60 and less than or equal to version 2.99.
::::
::::{important}
All plugins have a runtime dependency on the `logstash-core-plugin-api` gem, and a development dependency on `logstash-devutils`.
::::
### Jar dependencies [_jar_dependencies_2]
In some cases, such as the [Elasticsearch output plugin](https://github.com/logstash-plugins/logstash-output-elasticsearch/blob/main/logstash-output-elasticsearch.gemspec#L22-L23), your code may depend on a jar file. In cases such as this, the dependency is added in the gemspec file in this manner:
```ruby
# Jar dependencies
s.requirements << "jar 'org.elasticsearch:elasticsearch', '5.0.0'"
s.add_runtime_dependency 'jar-dependencies'
```
With these both defined, the install process will search for the required jar file at [http://mvnrepository.com](http://mvnrepository.com) and download the specified version.
## Document your plugin [_document_your_plugin_2]
Documentation is an important part of your plugin. All plugin documentation is rendered and placed in the [Logstash Reference](/reference/index.md) and the [Versioned plugin docs](logstash-docs-md://vpr/integration-plugins.md).
See [Document your plugin](/extend/plugin-doc.md) for tips and guidelines.
## Add Tests [_add_tests_2]
Logstash loves tests. Lots of tests. If you're using your new codec plugin in a production environment, you'll want to have some tests to ensure you are not breaking any existing functionality.
::::{note}
A full exposition on RSpec is outside the scope of this document. Learn more about RSpec at [http://rspec.info](http://rspec.info)
::::
For help learning about tests and testing, look in the `spec/codecs/` directory of several other similar plugins.
## Clone and test! [_clone_and_test_2]
Now let's start with a fresh clone of the plugin, build it and run the tests.
* **Clone your plugin into a temporary location.** Replace `GITUSERNAME` with your github username, and `MYPLUGINNAME` with your plugin name.
* `git clone https://github.com/GITUSERNAME/logstash-``codec-MYPLUGINNAME.git`
* alternately, via ssh: `git clone git@github.com:GITUSERNAME/logstash-``codec-MYPLUGINNAME.git`
* `cd logstash-codec-MYPLUGINNAME`
Then, you'll need to install your plugin's dependencies with bundler:
```
bundle install
```
::::{important}
If your plugin has an external file dependency described in `vendor.json`, you must download that dependency before running or testing. You can do this by running:
```
rake vendor
```
::::
And finally, run the tests:
```
bundle exec rspec
```
You should see a success message, which looks something like this:
```
Finished in 0.034 seconds
1 example, 0 failures
```
Hooray! You're almost there! (Unless you saw failures… you should fix those first.)
## Building and Testing [_building_and_testing_2]
Now you're ready to build your (well-tested) plugin into a Ruby gem.
### Build [_build_2]
You already have all the necessary ingredients, so let's go ahead and run the build command:
```sh
gem build logstash-codec-example.gemspec
```
That's it! Your gem should be built and placed in the same path, with the name
```sh
logstash-codec-mypluginname-0.1.0.gem
```
The `s.version` number from your gemspec file will provide the gem version, in this case, `0.1.0`.
### Test installation [_test_installation_2]
You should test installing your plugin into a clean installation of Logstash. Download the latest version from the [Logstash downloads page](https://www.elastic.co/downloads/logstash/).
1. Untar it and cd into the directory:
```sh
curl -O https://download.elastic.co/logstash/logstash/logstash-9.0.0.tar.gz
tar xzvf logstash-9.0.0.tar.gz
cd logstash-9.0.0
```
2. Using the plugin tool, we can install the gem we just built.
* Replace `/my/logstash/plugins` with the correct path to the gem for your environment, and `0.1.0` with the correct version number from the gemspec file.
```sh
bin/logstash-plugin install /my/logstash/plugins/logstash-codec-example/logstash-codec-example-0.1.0.gem
```
* After running this, you should see feedback from Logstash that it was successfully installed:
```sh
validating /my/logstash/plugins/logstash-codec-example/logstash-codec-example-0.1.0.gem >= 0
Valid logstash plugin. Continuing...
Successfully installed 'logstash-codec-example' with version '0.1.0'
```
::::{tip}
You can also use the Logstash plugin tool to determine which plugins are currently available:
```sh
bin/logstash-plugin list
```
Depending on what you have installed, you might see a short or long list of plugins: inputs, codecs, filters and outputs.
::::
3. Now try running Logstash with a simple configuration passed in via the command-line, using the `-e` flag.
::::{note}
Your results will depend on what your codec plugin is designed to do.
::::
```sh
bin/logstash -e 'input { stdin{ codec => example{}} } output {stdout { codec => rubydebug }}'
```
The example codec plugin appends the contents of the `append` setting to each message (by default, ", Hello World!").
After starting Logstash, type something, for example "Random output string". The resulting output `message` field contents should be "Random output string, Hello World!":
```sh
Random output string
{
"message" => "Random output string, Hello World!",
"@version" => "1",
"@timestamp" => "2015-01-27T19:17:18.932Z",
"host" => "cadenza"
}
```
Feel free to experiment and test this by changing the `append` parameter:
```sh
bin/logstash -e 'input { stdin{ codec => example{ append => ", I am appending this!" }} } output {stdout { codec => rubydebug }}'
```
Congratulations! You've built, deployed and successfully run a Logstash codec.
## Submitting your plugin to [RubyGems.org](http://rubygems.org) and [logstash-plugins](https://github.com/logstash-plugins) [_submitting_your_plugin_to_rubygems_orghttprubygems_org_and_logstash_pluginshttpsgithub_comlogstash_plugins_2]
Logstash uses [RubyGems.org](http://rubygems.org) as its repository for all plugin artifacts. Once you have developed your new plugin, you can make it available to Logstash users by simply publishing it to RubyGems.org.
### Licensing [_licensing_2]
Logstash and all its plugins are licensed under [Apache License, version 2 ("ALv2")](https://github.com/elasticsearch/logstash/blob/main/LICENSE). If you make your plugin publicly available via [RubyGems.org](http://rubygems.org), please make sure to have this line in your gemspec:
* `s.licenses = ['Apache License (2.0)']`
### Publishing to [RubyGems.org](http://rubygems.org) [_publishing_to_rubygems_orghttprubygems_org_2]
To begin, you'll need an account on RubyGems.org
* [Sign-up for a RubyGems account](https://rubygems.org/sign_up).
After creating an account, [obtain](http://guides.rubygems.org/rubygems-org-api/#api-authorization) an API key from RubyGems.org. By default, RubyGems uses the file `~/.gem/credentials` to store your API key. These credentials will be used to publish the gem. Replace `username` and `password` with the credentials you created at RubyGems.org:
```sh
curl -u username:password https://rubygems.org/api/v1/api_key.yaml > ~/.gem/credentials
chmod 0600 ~/.gem/credentials
```
Before proceeding, make sure you have the right version in your gemspec file and commit your changes.
* `s.version = '0.1.0'`
To publish version 0.1.0 of your new logstash gem:
```sh
bundle install
bundle exec rake vendor
bundle exec rspec
bundle exec rake publish_gem
```
::::{note}
Executing `rake publish_gem`:
1. Reads the version from the gemspec file (`s.version = '0.1.0'`)
2. Checks in your local repository if a tag exists for that version. If the tag already exists, it aborts the process. Otherwise, it creates a new version tag in your local repository.
3. Builds the gem
4. Publishes the gem to RubyGems.org
::::
That's it! Your plugin is published! Logstash users can now install your plugin by running:
```sh
bin/logstash-plugin install logstash-codec-mypluginname
```
## Contributing your source code to [logstash-plugins](https://github.com/logstash-plugins) [_contributing_your_source_code_to_logstash_pluginshttpsgithub_comlogstash_plugins_2]
It is not required to contribute your source code to the [logstash-plugins](https://github.com/logstash-plugins) GitHub organization, but we always welcome new plugins!
### Benefits [_benefits_2]
Some of the many benefits of having your plugin in the logstash-plugins repository are:
* **Discovery.** Your plugin will appear in the [Logstash Reference](/reference/index.md), where Logstash users look first for plugins and documentation.
* **Documentation.** Your plugin documentation will automatically be added to the [Logstash Reference](/reference/index.md).
* **Testing.** With our testing infrastructure, your plugin will be continuously tested against current and future releases of Logstash. As a result, users will have the assurance that if incompatibilities arise, they will be quickly discovered and corrected.
### Acceptance Guidelines [_acceptance_guidelines_2]
* **Code Review.** Your plugin must be reviewed by members of the community for coherence, quality, readability, stability and security.
* **Tests.** Your plugin must contain tests to be accepted. These tests are also subject to code review for scope and completeness. It's ok if you don't know how to write tests; we will guide you. We are working on publishing a guide to creating tests for Logstash, which will make it easier. In the meantime, you can refer to [http://betterspecs.org/](http://betterspecs.org/) for examples.
To begin migrating your plugin to logstash-plugins, simply create a new [issue](https://github.com/elasticsearch/logstash/issues) in the Logstash repository. When the acceptance guidelines are completed, we will facilitate the move to the logstash-plugins organization using the recommended [github process](https://help.github.com/articles/transferring-a-repository/#transferring-from-a-user-to-an-organization).

---
mapped_pages:
- https://www.elastic.co/guide/en/logstash/current/community-maintainer.html
---
# Logstash Plugins Community Maintainer Guide [community-maintainer]
This document explains the responsibilities of new Maintainers. It was inspired by the [C4](http://rfc.zeromq.org/spec:22) document from the ZeroMQ project. This document is subject to change; suggestions through Pull Requests and issues are strongly encouraged.
## Contribution Guidelines [_contribution_guidelines]
For general guidance around contributing to Logstash Plugins, see the [*Contributing to Logstash*](/extend/index.md) section.
## Document Goals [_document_goals]
* To make participation in the Logstash plugins community easy, with positive feedback.
* To increase diversity.
* To reduce code review, merge, and release dependencies on the core team by providing support and tools to the Community and Maintainers.
* To support the natural life cycle of a plugin.
* To codify the roles and responsibilities of Maintainers and Contributors, with specific focus on patch testing, code review, merging, and release.
## Development Workflow [_development_workflow]
All Issues and Pull Requests must be tracked using the Github issue tracker.
The plugin uses the [Apache 2.0 license](http://www.apache.org/licenses/LICENSE-2.0). Maintainers should check whether a patch introduces code which has an incompatible license. Patch ownership and copyright is defined in the Elastic [Contributor License Agreement](https://www.elastic.co/contributor-agreement) (CLA).
### Terminology [_terminology_2]
A "Contributor" is a role a person assumes when providing a patch. Contributors will not have commit access to the repository. They need to sign the Elastic [Contributor License Agreement](https://www.elastic.co/contributor-agreement) before a patch can be reviewed. Contributors can add themselves to the plugin Contributor list.
A "Maintainer" is a role a person assumes when maintaining a plugin and keeping it healthy, including triaging issues, and reviewing and merging patches.
### Patch Requirements [_patch_requirements]
A patch is a minimal and accurate answer to exactly one identified and agreed upon problem. It must conform to the [code style guidelines](https://github.com/elastic/logstash/blob/main/STYLE.md) and must include RSpec tests that verify the fitness of the solution.
A patch will be automatically tested by a CI system that will report on the Pull Request status.
A patch CLA will be automatically verified and reported on the Pull Request status.
A patch commit message has a single short (fewer than 50 characters) first line summarizing the change, a blank second line, and any additional lines as necessary for change explanation and rationale.
A patch is mergeable when it satisfies the above requirements and has been reviewed positively by at least one other person.
### Development Process [_development_process]
A user will log an issue on the issue tracker describing the problem they face or observe with as much detail as possible.
To work on an issue, a Contributor forks the plugin repository and then works on their forked repository and submits a patch by creating a pull request back to the plugin.
Maintainers must not merge patches where the author has not signed the CLA.
Before a patch can be accepted it should be reviewed. Maintainers should merge accepted patches without delay.
Maintainers should not merge their own patches except in exceptional cases, such as non-responsiveness from other Maintainers or core team for an extended period (more than 2 weeks).
Reviewers' comments should not be based on personal preferences.
The Maintainers should label Issues and Pull Requests.
Maintainers should involve the core team if help is needed to reach consensus.
Review non-source changes such as documentation in the same way as source code changes.
### Branch Management [_branch_management]
The plugin has a main branch that always holds the latest in-progress version and should always build. Topic branches should be kept to a minimum.
### Changelog Management [_changelog_management]
Every plugin should have a changelog (CHANGELOG.md). If not, please create one. When changes are made to a plugin, make sure to include a changelog entry under the respective version to document the change. The changelog should be easily understood from a user point of view. As we iterate and release plugins rapidly, users use the changelog as a mechanism for deciding whether to update.
Changes that are not user facing should be tagged as `internal:`. For example:
```markdown
- internal: Refactored specs for better testing
- config: Default timeout configuration changed from 10s to 5s
```
#### Detailed format of CHANGELOG.md [_detailed_format_of_changelog_md]
Sharing a similar CHANGELOG.md format across plugins eases readability for users. Please see the following annotated example, and see a concrete example in [logstash-filter-date](https://raw.githubusercontent.com/logstash-plugins/logstash-filter-date/main/CHANGELOG.md).
```markdown
## 1.0.x <1>
- change description <2>
- tag: change description <3>
- tag1,tag2: change description <4>
- tag: Multi-line description <5>
must be indented and can use
additional markdown syntax
<6>
## 1.0.0 <7>
[...]
```
1. Latest version is the first line of CHANGELOG.md. Each version identifier should be a level-2 header using `##`
2. One change description is described as a list item using a dash `-` aligned under the version identifier
3. One change can be tagged by a word and suffixed by `:`.<br> Common tags are `bugfix`, `feature`, `doc`, `test` or `internal`.
4. One change can have multiple tags separated by a comma and suffixed by `:`
5. A multi-line change description must be properly indented
6. Please take care to **separate versions with an empty line**
7. Previous version identifier
### Continuous Integration [_continuous_integration]
Plugins are set up with automated continuous integration (CI) environments, and there should be a corresponding badge on each Github page. If it's missing, please contact the Logstash core team.
Every Pull Request opened automatically triggers a CI run. To conduct a manual run, comment “Jenkins, please test this.” on the Pull Request.
## Versioning Plugins [_versioning_plugins]
Logstash core and its plugins have separate product development lifecycles. Hence the versioning and release strategy for the core and plugins do not have to be aligned. In fact, this was one of our goals during the great separation of plugins work in Logstash 1.5.
At times, there will be changes in core API in Logstash, which will require mass update of plugins to reflect the changes in core. However, this does not happen frequently.
For plugins, we would like to adhere to a versioning and release strategy that can better inform our users about any breaking changes to the Logstash configuration formats and functionality.
Plugin releases follow a three-part numbering scheme, X.Y.Z, where X denotes a major release version that may break compatibility with existing configuration or functionality, Y denotes releases that include backward-compatible features, and Z denotes releases that include bug fixes and patches.
### Changing the version [_changing_the_version]
The version can be changed in the gemspec, and the change needs to be associated with a changelog entry. Following this, we can publish the gem to RubyGems.org manually. At this point, only the core developers can publish a gem.
### Labeling [_labeling]
Labeling is a critical aspect of maintaining plugins. All issues in GitHub should be labeled correctly so that labels can:
* Provide good feedback to users/developers
* Help prioritize changes
* Be used in release notes
Most labels are self-explanatory, but here's a quick recap of a few important labels:
* `bug`: Labels an issue as an unintentional defect
* `needs details`: If the issue reporter has provided incomplete details, please ask them for more info and label the issue as `needs details`.
* `missing cla`: Contributor License Agreement is missing and patch cannot be accepted without it
* `adopt me`: Ask for help from the community to take over this issue
* `enhancement`: New feature, not a bug fix
* `needs tests`: Patch has no tests, and cannot be accepted without unit/integration tests
* `docs`: Documentation related issue/PR
## Logging [_logging]
Although it's important not to bog down performance with excessive logging, debug-level logs can be immensely helpful when diagnosing and troubleshooting issues with Logstash. Please remember to liberally add debug logs wherever it makes sense, as users will be forever gracious.
```ruby
@logger.debug("Logstash loves debug logs!", :actions => actions)
```
## Contributor License Agreement (CLA) Guidance [_contributor_license_agreement_cla_guidance]
Why is a [CLA](https://www.elastic.co/contributor-agreement) required?
: We ask this of all Contributors in order to assure our users of the origin and continuing existence of the code. We are not asking Contributors to assign copyright to us, but to give us the right to distribute a Contributors code without restriction.
Please make sure the CLA is signed by every Contributor prior to reviewing PRs and commits.
: Contributors only need to sign the CLA once and should sign with the same email as used in Github. If a Contributor signs the CLA after a PR is submitted, they can refresh the automated CLA checker by posting another comment on the PR at least 5 minutes after signing.
## Need Help? [_need_help]
Ping @logstash-core on Github to get the attention of the Logstash core team.
## Community Administration [_community_administration]
The core team is there to support the plugin Maintainers and overall ecosystem.
Maintainers should propose Contributors to become a Maintainer.
Contributors and Maintainers should follow the Elastic Community [Code of Conduct](https://www.elastic.co/community/codeofconduct). The core team should block or ban "bad actors".

---
mapped_pages:
- https://www.elastic.co/guide/en/logstash/current/contribute-to-core.html
---
# Extending Logstash core [contribute-to-core]
We also welcome contributions and bug fixes to the Logstash core feature set.
Please read through our [contribution](https://github.com/elastic/logstash/blob/main/CONTRIBUTING.md) guide, and the Logstash [readme](https://github.com/elastic/logstash/blob/main/README.md) document.

---
mapped_pages:
- https://www.elastic.co/guide/en/logstash/current/contributing-patch-plugin.html
---
# Contributing a patch to a Logstash plugin [contributing-patch-plugin]
This section discusses the information you need to know to successfully contribute a patch to a Logstash plugin.
Each plugin defines its own configuration options. These control the behavior of the plugin to some degree. Configuration option definitions commonly include:
* Data validation
* Default value
* Any required flags
Plugins are subclasses of a Logstash base class. A plugin's base class defines common configuration and methods.
## Input plugins [contrib-patch-input]
Input plugins ingest data from an external source. An input plugin is always associated with a codec plugin. Input and codec plugins operate in conjunction to create a Logstash event and add that event to the processing queue. An input plugin is a subclass of the `LogStash::Inputs::Base` class.
### Input API [input-api]
`#register() -> nil`
: Required. This API sets up resources for the plugin, typically the connection to the external source.
`#run(queue) -> nil`
: Required. This API fetches or listens for source data, typically looping until stopped. Must handle errors inside the loop. Pushes any created events to the queue object specified in the method argument. Some inputs may receive batched data to minimize the external call overhead.
`#stop() -> nil`
: Optional. Stops external connections and cleans up.
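Put together, an input plugin skeleton looks roughly like the following. This is a sketch with illustrative names; the `interval` option and the `Stud.stoppable_sleep` helper are assumptions for the example, not requirements of the API.
```ruby
# Minimal input plugin sketch (illustrative names, not a real plugin)
require "logstash/inputs/base"
require "stud/interval"

class LogStash::Inputs::Example < LogStash::Inputs::Base
  config_name "example"
  config :interval, :validate => :number, :default => 5

  def register
    # set up resources here, e.g. the connection to the external source
  end

  def run(queue)
    # loop until Logstash asks the plugin to stop, pushing events onto the queue
    until stop?
      event = LogStash::Event.new("message" => "fetched data")
      decorate(event)
      queue << event
      Stud.stoppable_sleep(@interval) { stop? }
    end
  end

  def stop
    # close connections and clean up (optional)
  end
end
```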
## Codec plugins [contrib-patch-codec]
Codec plugins decode input data that has a specific structure, such as JSON input data. A codec plugin is a subclass of `LogStash::Codecs::Base`.
### Codec API [codec-api]
`#register() -> nil`
: Identical to the API of the same name for input plugins.
`#decode(data){|event| block} -> nil`
: Must be implemented. Used to create an Event from the raw data given in the method argument. Must handle errors. The caller must provide a Ruby block. The block is called with the created Event.
`#encode(event) -> nil`
: Required. Used to create a structured data object from the given Event. May handle errors. This method calls a block that was previously stored as @on_event with two arguments: the original event and the data object.
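A codec plugin skeleton built on these two methods might look like this. It is a sketch with illustrative names, modeled on the `append` behavior of the example codec discussed elsewhere in this guide.
```ruby
# Minimal codec plugin sketch (illustrative names)
require "logstash/codecs/base"

class LogStash::Codecs::Example < LogStash::Codecs::Base
  config_name "example"
  config :append, :validate => :string, :default => ", Hello World!"

  def register
  end

  def decode(data)
    # create an Event from the raw data and hand it to the caller's block
    yield LogStash::Event.new("message" => "#{data}#{@append}")
  end

  def encode(event)
    # pass the original event and the serialized data to the stored @on_event block
    @on_event.call(event, "#{event.get("message")}#{@append}\n")
  end
end
```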
## Filter plugins [contrib-patch-filter]
A mechanism to change, mutate or merge one or more Events. A filter plugin is a subclass of the `LogStash::Filters::Base` class.
### Filter API [filter-api]
`#register() -> nil`
: Identical to the API of the same name for input plugins.
`#filter(event) -> nil`
: Required. May handle errors. Used to apply a mutation function to the given event.
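A filter plugin skeleton is correspondingly small. This is a sketch with illustrative names; the `greeting` field is made up for the example.
```ruby
# Minimal filter plugin sketch (illustrative names)
require "logstash/filters/base"

class LogStash::Filters::Example < LogStash::Filters::Base
  config_name "example"
  config :message, :validate => :string, :default => "Hello World!"

  def register
  end

  def filter(event)
    # mutate the event, then record that the filter matched
    event.set("greeting", @message)
    filter_matched(event)
  end
end
```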
## Output plugins [contrib-patch-output]
A mechanism to send an event to an external destination. This process may require serialization. An output plugin is a subclass of the `LogStash::Outputs::Base` class.
### Output API [output-api]
`#register() -> nil`
: Identical to the API of the same name for input plugins.
`#receive(event) -> nil`
: Required. Must handle errors. Used to prepare the given event for transmission to the external destination. Some outputs may buffer the prepared events to batch transmit to the destination.
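An output plugin skeleton might look like the following. It is a sketch with illustrative names; writing to a local file merely stands in for a real external destination.
```ruby
# Minimal output plugin sketch (illustrative names)
require "logstash/outputs/base"

class LogStash::Outputs::Example < LogStash::Outputs::Base
  config_name "example"
  config :path, :validate => :string, :default => "/tmp/example.log"

  def register
    # set up whatever is needed to reach the destination
    @file = File.open(@path, "a")
  end

  def receive(event)
    # prepare and transmit the event; handle errors so one bad event
    # does not take down the pipeline worker
    @file.puts(event.to_json)
  rescue => e
    @logger.error("Failed to write event", :exception => e.message)
  end

  def close
    @file.close if @file
  end
end
```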
## Process [patch-process]
A bug or feature is identified. An issue is created in the plugin repository. A patch is created and a pull request (PR) is submitted. After review and possible rework the PR is merged and the plugin is published.
The [Community Maintainer Guide](/extend/community-maintainer.md) explains, in more detail, the process of getting a patch accepted, merged and published. The Community Maintainer Guide also details the roles that contributors and maintainers are expected to perform.
## Testing methodologies [test-methods]
### Test driven development [tdd]
Test-driven development (TDD) describes a methodology for using tests to guide the evolution of source code. For our purposes, we use only a part of it. Before writing the fix, we create tests that illustrate the bug by failing. We stop when we have written enough code to make the tests pass and submit the fix and tests as a patch. It is not necessary to write the tests before the fix, but it is very easy to write a passing test afterwards that may not actually verify that the fault is really fixed, especially if the fault can be triggered via multiple execution paths or varying input data.
### RSpec framework [rspec]
Logstash uses RSpec, a Ruby testing framework, to define and run the test suite. What follows is a summary of various sources.
```ruby
2 require "logstash/devutils/rspec/spec_helper"
3 require "logstash/plugin"
4
5 describe "outputs/riemann" do
6 describe "#register" do
7 let(:output) do
8 LogStash::Plugin.lookup("output", "riemann").new(configuration)
9 end
10
11 context "when no protocol is specified" do
12 let(:configuration) { Hash.new }
13
14 it "the method completes without error" do
15 expect {output.register}.not_to raise_error
16 end
17 end
18
19 context "when a bad protocol is specified" do
20 let(:configuration) { {"protocol" => "fake"} }
21
22 it "the method fails with error" do
23 expect {output.register}.to raise_error
24 end
25 end
26
27 context "when the tcp protocol is specified" do
28 let(:configuration) { {"protocol" => "tcp"} }
29
30 it "the method completes without error" do
31 expect {output.register}.not_to raise_error
32 end
33 end
34 end
35
36 describe "#receive" do
37 let(:output) do
38 LogStash::Plugin.lookup("output", "riemann").new(configuration)
39 end
40
41 context "when operating normally" do
42 let(:configuration) { Hash.new }
43 let(:event) do
44 data = {"message"=>"hello", "@version"=>"1",
45 "@timestamp"=>"2015-06-03T23:34:54.076Z",
46 "host"=>"vagrant-ubuntu-trusty-64"}
47 LogStash::Event.new(data)
48 end
49
50 before(:example) do
51 output.register
52 end
53
54 it "should accept the event" do
55 expect { output.receive event }.not_to raise_error
56 end
57 end
58 end
59 end
```
```ruby
describe(string){block} -> nil
describe(Class){block} -> nil
```
With RSpec, we are always describing the plugin method behavior. The describe block is added in logical sections and can accept either an existing class name or a string. The string used in line 5 is the plugin name. Line 6 is the register method, line 36 is the receive method. It is an RSpec convention to prefix instance methods with one hash and class methods with one dot.
```ruby
context(string){block} -> nil
```
In RSpec, context blocks define sections that group tests by a variation. The string should start with the word `when` and then detail the variation. See line 11. The tests in the context block should only be for that variation.
```ruby
let(symbol){block} -> nil
```
In RSpec, `let` blocks define resources for use in the test blocks. These resources are reinitialized for every test block. They are available as method calls inside the test block. Define `let` blocks in `describe` and `context` blocks, which scope the `let` block and any other nested blocks. You can use other `let` methods defined later within the `let` block body. See lines 7-9, which define the output resource and use the configuration method, defined with different variations in lines 12, 20 and 28.
```ruby
before(symbol){block} -> nil - symbol is one of :suite, :context, :example; :all and :each are synonyms for :context and :example respectively.
```
In RSpec, `before` blocks are used to further set up any resources that would have been initialized in a `let` block. You cannot define `let` blocks inside `before` blocks.
You can also define `after` blocks, which are typically used to clean up any setup activity performed by a `before` block.
```ruby
it(string){block} -> nil
```
In RSpec, `it` blocks set the expectations that verify the behavior of the tested code. The string should not start with *it* or *should*, but needs to express the outcome of the expectation. When put together, the texts from the enclosing `describe`, `context` and `it` blocks should form a fairly readable sentence, as in lines 5, 6, 11 and 14:
```ruby
outputs/riemann
#register when no protocol is specified the method completes without error
```
Readable code like this makes the goals of tests easy to understand.
```ruby
expect(object){block} -> nil
```
In RSpec, the expect method verifies a statement that compares an actual result to an expected result. The `expect` method is usually paired with a call to the `to` or `not_to` methods. Use the block form when expecting errors or observing for changes. The `to` or `not_to` methods require a `matcher` object that encapsulates the expected value. The argument form of the `expect` method encapsulates the actual value. When put together, the whole line tests the actual value against the expected one.
```ruby
raise_error(error class|nil) -> matcher instance
be(object) -> matcher instance
eq(object) -> matcher instance
eql(object) -> matcher instance
for more see http://www.relishapp.com/rspec/rspec-expectations/docs/built-in-matchers
```
In RSpec, a matcher is an object generated by the equivalent method call (be, eq) that will be used to evaluate the expected against the actual values.
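For example, the two forms of `expect` read like this (the values and object names are illustrative):
```ruby
# Argument form: compare an actual value against a matcher
expect(event.get("message")).to eq("hello")

# Block form: observe behavior, such as whether an error is raised
expect { output.register }.not_to raise_error
expect { output.receive(event) }.to raise_error(ArgumentError)
```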
## Putting it all together [all-together]
This example fixes an [issue](https://github.com/logstash-plugins/logstash-output-zeromq/issues/9) in the ZeroMQ output plugin. The issue does not require knowledge of ZeroMQ.
The activities in this example have the following prerequisites:
* A minimal knowledge of Git and Github. See the [Github boot camp](https://help.github.com/categories/bootcamp/).
* A text editor.
* A JRuby [runtime](https://www.ruby-lang.org/en/documentation/installation/#managers) [environment](https://howistart.org/posts/ruby/1). The `chruby` tool manages Ruby versions.
* JRuby 1.7.22 or later.
* The `bundler` and `rake` gems installed.
* ZeroMQ [installed](http://zeromq.org/intro:get-the-software).
1. In Github, fork the ZeroMQ [output plugin repository](https://github.com/logstash-plugins/logstash-output-zeromq).
2. On your local machine, [clone](https://help.github.com/articles/fork-a-repo/) the fork to a known folder such as `logstash/`.
3. Open the following files in a text editor:
* `logstash-output-zeromq/lib/logstash/outputs/zeromq.rb`
* `logstash-output-zeromq/lib/logstash/util/zeromq.rb`
* `logstash-output-zeromq/spec/outputs/zeromq_spec.rb`
4. According to the issue, log output in server mode must indicate `bound`. Furthermore, the test file contains no tests.
::::{note}
Line 21 of `util/zeromq.rb` reads `@logger.info("0mq: #{server? ? 'connected' : 'bound'}", :address => address)`
::::
5. In the text editor, require `zeromq.rb` for the file `zeromq_spec.rb` by adding the following lines:
```ruby
require "logstash/outputs/zeromq"
require "logstash/devutils/rspec/spec_helper"
```
6. The desired error message should read:
```ruby
LogStash::Outputs::ZeroMQ when in server mode a 'bound' info line is logged
```
To properly generate this message, add a `describe` block with the fully qualified class name as the argument, a context block, and an `it` block.
```ruby
describe LogStash::Outputs::ZeroMQ do
context "when in server mode" do
it "a 'bound' info line is logged" do
end
end
end
```
7. To add the missing test, use an instance of the ZeroMQ output and a substitute logger. This example uses an RSpec feature called *test doubles* as the substitute logger.
Add the following lines to `zeromq_spec.rb`, after `describe LogStash::Outputs::ZeroMQ do` and before `context "when in server mode" do`:
```ruby
let(:output) { described_class.new("mode" => "server", "topology" => "pushpull") }
let(:tracer) { double("logger") }
```
8. Add the body to the `it` block. Add the following five lines after the line `context "when in server mode" do`:
```ruby
allow(tracer).to receive(:debug)<1>
output.logger = tracer<2>
expect(tracer).to receive(:info).with("0mq: bound", {:address=>"tcp://127.0.0.1:2120"})<3>
output.register<4>
output.do_close<5>
```
1. Allow the double to receive `debug` method calls.
2. Make the output use the test double.
3. Set an expectation on the test double to receive an `info` method call.
4. Call `register` on the output.
5. Call `do_close` on the output so the test does not hang.
At the end of the modifications, the relevant code section reads:
```ruby
require "logstash/outputs/zeromq"
require "logstash/devutils/rspec/spec_helper"
describe LogStash::Outputs::ZeroMQ do
let(:output) { described_class.new("mode" => "server", "topology" => "pushpull") }
let(:tracer) { double("logger") }
context "when in server mode" do
it "a bound info line is logged" do
allow(tracer).to receive(:debug)
output.logger = tracer
expect(tracer).to receive(:info).with("0mq: bound", {:address=>"tcp://127.0.0.1:2120"})
output.register
output.do_close
end
end
end
```
To run this test:
1. Open a terminal window
2. Navigate to the cloned plugin folder
3. The first time you run the test, run the command `bundle install`
4. Run the command `bundle exec rspec`
Assuming all prerequisites were installed correctly, the test fails with output similar to:
```shell
Using Accessor#strict_set for specs
Run options: exclude {:redis=>true, :socket=>true, :performance=>true, :couchdb=>true, :elasticsearch=>true,
:elasticsearch_secure=>true, :export_cypher=>true, :integration=>true, :windows=>true}
LogStash::Outputs::ZeroMQ
when in server mode
a bound info line is logged (FAILED - 1)
Failures:
1) LogStash::Outputs::ZeroMQ when in server mode a bound info line is logged
Failure/Error: output.register
Double "logger" received :info with unexpected arguments
expected: ("0mq: bound", {:address=>"tcp://127.0.0.1:2120"})
got: ("0mq: connected", {:address=>"tcp://127.0.0.1:2120"})
# ./lib/logstash/util/zeromq.rb:21:in `setup'
# ./lib/logstash/outputs/zeromq.rb:92:in `register'
# ./lib/logstash/outputs/zeromq.rb:91:in `register'
# ./spec/outputs/zeromq_spec.rb:13:in `(root)'
# /Users/guy/.gem/jruby/1.9.3/gems/rspec-wait-0.0.7/lib/rspec/wait.rb:46:in `(root)'
Finished in 0.133 seconds (files took 1.28 seconds to load)
1 example, 1 failure
Failed examples:
rspec ./spec/outputs/zeromq_spec.rb:10 # LogStash::Outputs::ZeroMQ when in server mode a bound info line is logged
Randomized with seed 2568
```
To correct the error, open the `util/zeromq.rb` file in your text editor and swap the positions of the words `connected` and `bound` on line 21. Line 21 now reads:
```ruby
@logger.info("0mq: #{server? ? 'bound' : 'connected'}", :address => address)
```
Run the test again with the `bundle exec rspec` command.
The test passes with output similar to:
```shell
Using Accessor#strict_set for specs
Run options: exclude {:redis=>true, :socket=>true, :performance=>true, :couchdb=>true, :elasticsearch=>true, :elasticsearch_secure=>true, :export_cypher=>true, :integration=>true, :windows=>true}
LogStash::Outputs::ZeroMQ
when in server mode
a bound info line is logged
Finished in 0.114 seconds (files took 1.22 seconds to load)
1 example, 0 failures
Randomized with seed 45887
```
[Commit](https://help.github.com/articles/fork-a-repo/#next-steps) the changes to git and Github.
Your pull request is visible from the [Pull Requests](https://github.com/logstash-plugins/logstash-output-zeromq/pulls) section of the original Github repository. The plugin maintainers review your work, suggest changes if necessary, and merge and publish a new version of the plugin.

---
mapped_pages:
- https://www.elastic.co/guide/en/logstash/current/contributing-java-plugin.html
---
# Create Logstash plugins [contributing-java-plugin]
Now you can write your own Java plugin for use with {{ls}}. We have provided instructions and GitHub examples to give you a head start.
Native support for Java plugins in {{ls}} consists of several components:
* Extensions to the Java execution engine to support running Java plugins in Logstash pipelines
* APIs for developing Java plugins. The APIs are in the `co.elastic.logstash.api` package. A Java plugin might break if it references classes or specific concrete implementations of API interfaces outside that package. The implementation of classes outside of the API package may change at any time.
* Tooling to automate the packaging and deployment of Java plugins in Logstash.
## Process overview [_process_overview]
Here are the steps:
1. Choose the type of plugin you want to create: input, codec, filter, or output.
2. Set up your environment.
3. Code the plugin.
4. Package and deploy the plugin.
5. Run Logstash with your new plugin.
### Let's get started [_lets_get_started]
Here are the example repos:
* [Input plugin example](https://github.com/logstash-plugins/logstash-input-java_input_example)
* [Codec plugin example](https://github.com/logstash-plugins/logstash-codec-java_codec_example)
* [Filter plugin example](https://github.com/logstash-plugins/logstash-filter-java_filter_example)
* [Output plugin example](https://github.com/logstash-plugins/logstash-output-java_output_example)
Here are the instructions:
* [How to write a Java input plugin](/extend/java-input-plugin.md)
* [How to write a Java codec plugin](/extend/java-codec-plugin.md)
* [How to write a Java filter plugin](/extend/java-filter-plugin.md)
* [How to write a Java output plugin](/extend/java-output-plugin.md)
