Compare commits


32 commits

Author SHA1 Message Date
Ry Biesemeyer
6861ea613a
bump to 8.14.3 (#16288) 2024-07-04 09:07:22 +01:00
github-actions[bot]
7d75395e77
Release notes for 8.14.2 (#16266)
* Update release notes for 8.14.2

* human touch for 8.14.2 release notes

* Update docs/static/releasenotes.asciidoc

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>

* Update docs/static/releasenotes.asciidoc

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-07-03 09:35:32 -07:00
github-actions[bot]
9f71e03312
Add retries to aarch64 CI pipeline (#16271) (#16272)
Add retries in the aarch64 CI pipeline to reduce noise from transient
network failures.

Closes https://github.com/elastic/ingest-dev/issues/3510

(cherry picked from commit 7080ec5427)

Co-authored-by: Dimitrios Liappis <dimitrios.liappis@gmail.com>
2024-07-01 14:31:11 +03:00
github-actions[bot]
6b8968476d
Doc: Add ecs and datastream requirement for intg filter (#16268) (#16270)
(cherry picked from commit 095733c409)

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-06-28 19:35:12 -04:00
github-actions[bot]
3467751f90
Doc: Remove include statements for screenshots not rendering properly (#15981) (#16269)
(cherry picked from commit cb45cd28cc)

Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-06-28 19:24:41 -04:00
Ry Biesemeyer
bd7e40885c
Backport 16250 8.14 (#16259)
* sync notices

* backport 16250

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>

---------

Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
2024-06-25 15:11:22 -07:00
github-actions[bot]
04d193d613
Update patch plugin versions in gemfile lock (#16258)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
2024-06-25 14:13:53 -07:00
github-actions[bot]
ff37802a77
Unicode pipeline and plugin ids (#15971) (#16257)
* fix: restore support for unicode pipeline- and plugin-id's

JRuby's `Ruby#newSymbol(String)` throws an exception when provided a `String`
that contains characters outside of lower-ASCII because JRuby internals expect
"the incoming String to be one of our mangled ISO-8859-1 strings" as noted in
a comment on jruby/jruby#6217.

Instead, we use `Ruby#newString(String)` to create a new `RubyString` (which
works properly), and then rely on `RubyString#intern` to get our `RubySymbol`.

This fixes a regression introduced in the 8.7 series in which pipeline ids
are consistently represented as ruby symbols in the metrics store, and ensures
a similar issue does not exist when specifying a plugin id that contains
characters above the lower-ASCII plane.

* fix: use properly-encoded RubySymbol in PipelineConfig

We cannot rely on `RubySymbol#toString` to produce a properly-encoded `String`
when the string contains characters above the lower-ASCII plane because the
result is effectively a binary ruby-internal marshal of the bytes that only
holds when the symbol contains lower-ASCII.

Instead, we can use the internally-memoizing `RubySymbol#name` to get a
properly-encoded `RubyString`, and `RubyString#asJavaString()` to get a
properly-encoded java-`String`.

* fix: properly serialize unicode pipeline names in API output

Jackson's JSON serializer leaks the JRuby-internal byte structure of Symbols,
which only aligns with the byte-structure of the symbol's actual string when
that string is wholly comprised of lower-ASCII characters.

By pre-converting Symbols to Strings, we ensure that the result is readable
and useful.

* spec: bypass monitoring specs for unicode pipeline ids when PQ enabled

(cherry picked from commit 0ec16ca398)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-06-25 09:46:55 -07:00
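To make the JRuby mechanics above concrete, here is a minimal, hypothetical embedded-JRuby sketch of the `newString` + `intern` approach the commit describes (a sketch only, not Logstash's actual code):

```java
import org.jruby.Ruby;
import org.jruby.RubyString;
import org.jruby.RubySymbol;

public class UnicodeSymbolSketch {
    public static void main(String[] args) {
        Ruby runtime = Ruby.newInstance();
        String pipelineId = "ünïcödé-pipeline"; // contains characters above lower-ASCII

        // Ruby#newSymbol(String) expects a "mangled ISO-8859-1" string and can
        // throw for input like this, so build a properly-encoded RubyString
        // first and intern it, as the commit describes.
        RubyString str = runtime.newString(pipelineId);
        RubySymbol symbol = str.intern();

        // For the reverse direction the commit relies on RubySymbol#name (a
        // properly-encoded RubyString) rather than RubySymbol#toString.
        System.out.println(symbol);
    }
}
```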
github-actions[bot]
a5b7d2bfad
json: remove unnecessary dup/freeze in serialization (#16213) (#16253)
(cherry picked from commit 92909cb1c4)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-06-24 12:21:13 -07:00
github-actions[bot]
a42c71a904
Avoid logging file-not-found errors when DLQ segments are removed concurrently by the writer and reader. (#16204) (#16249)
* Rework the logic that deletes the eldest DLQ segments to be more resilient to file-not-found errors, and stop logging warnings the user cannot act on.

* Fixed test case: when the path points to a file that doesn't exist, always rely on the path-name comparator. Reworked the code to simplify it, removing the need for the tri-state variable

(cherry picked from commit 321e407e53)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-06-20 11:46:32 -07:00
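A minimal Java sketch of the resilience pattern this commit describes, with hypothetical names and structure rather than the actual DLQ implementation:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class DlqSegmentDeleteSketch {
    // Deleting the eldest segment can race with a reader that has already
    // removed it, so a missing file is an expected outcome: swallow it
    // instead of logging a warning the user cannot act on.
    static void deleteEldestSegment(Path segment) {
        try {
            Files.delete(segment);
        } catch (NoSuchFileException e) {
            // removed concurrently by the reader: nothing to do
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        deleteEldestSegment(Path.of("dlq", "segment.0001.log")); // hypothetical path
    }
}
```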
github-actions[bot]
0ba5330b5f
Geoip database management cache invalidation (#16222) (#16223)
* geoip: failing specs demonstrating elastic/logstash#16221

* geoip: invalidate cached db state when receiving updates/expiries

(cherry picked from commit 801f0f441e)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-06-18 16:47:52 -07:00
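A rough sketch of the invalidation rule named in the second bullet, assuming a simple memoized map (class and method names are hypothetical and may not match the plugin's internals):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DbStateCacheSketch {
    private final Map<String, String> cachedState = new ConcurrentHashMap<>();

    String get(String dbType) {
        // memoize per database type until explicitly invalidated
        return cachedState.computeIfAbsent(dbType, this::loadState);
    }

    void onUpdateOrExpiry(String dbType) {
        cachedState.remove(dbType); // invalidate; the next get() re-reads fresh state
    }

    private String loadState(String dbType) {
        return "state-for-" + dbType; // stand-in for reading the managed database
    }

    public static void main(String[] args) {
        DbStateCacheSketch cache = new DbStateCacheSketch();
        System.out.println(cache.get("City"));  // loads and memoizes
        cache.onUpdateOrExpiry("City");         // an update or expiry arrived
        System.out.println(cache.get("City"));  // reloads instead of serving stale state
    }
}
```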
Edmo Vamerlatti Costa
8949dc77b6
Bump 8.14.2 (#16216)
Bump 8.14.2
2024-06-12 16:45:47 +02:00
github-actions[bot]
f9d6b42a7e
Release notes for 8.14.1 (#16212)
* Update release notes for 8.14.1

* Snip generated context

* Manually fill release notes for Elastic Integration filter

* Reword release notes from core to be user-centric

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-06-11 10:13:59 -07:00
github-actions[bot]
8fa13707fa
Update patch plugin versions in gemfile lock (#16211)
Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
2024-06-10 09:52:24 -07:00
github-actions[bot]
224421f3e9
Pin rexml gem version to 3.2.6 (#16209) (#16210)
This commit pinned the `rexml` gem version to `3.2.6`

(cherry picked from commit 23221caddb)

Co-authored-by: Edmo Vamerlatti Costa <11836452+edmocosta@users.noreply.github.com>
2024-06-10 18:01:22 +02:00
github-actions[bot]
550e935835
Revert PR #16050 (#16203)
The PR was created to skip resolving environment variable references inside comments in the `config.string` pipelines defined in the pipelines.yml file.
However, it introduced a bug in which env var references in the values of settings like `pipeline.batch.size` or `queue.max_bytes` were no longer resolved.
For now we'll revert this PR and create a fix that handles both problems.

(cherry picked from commit efa83787a5)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-06-06 20:29:56 +01:00
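For context, a minimal sketch of `${VAR}` reference resolution (hypothetical, not Logstash's actual resolver), illustrating the substitution that the reverted change accidentally stopped applying to setting values:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EnvVarResolveSketch {
    private static final Pattern REF = Pattern.compile("\\$\\{(\\w+)\\}");

    // Replace each ${NAME} with the value from the supplied environment map.
    static String resolve(String value, Map<String, String> env) {
        Matcher m = REF.matcher(value);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            m.appendReplacement(out, Matcher.quoteReplacement(env.getOrDefault(m.group(1), "")));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // setting values such as pipeline.batch.size must keep resolving references
        System.out.println(resolve("${BATCH_SIZE}", Map.of("BATCH_SIZE", "125"))); // prints 125
    }
}
```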
Rob Bavey
7afb42c13a
Bump version to 8.14.1 (#16196) 2024-06-05 09:15:37 -04:00
github-actions[bot]
78fb379282
Release notes for 8.14.0 (#16155)
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-06-04 19:11:05 -04:00
João Duarte
9878e2737a
Upgrade nokogiri to 1.16.5 (#16188) 2024-05-31 11:28:03 +01:00
github-actions[bot]
9cdb8b839f
PQ: avoid blocking writer when precisely full (#16176) (#16178)
* pq: avoid blocking writer when queue is precisely full

A PQ is considered full (and therefore needs to block before releasing the
writer) when its persisted size on disk _exceeds_ its `queue.max_bytes`
capacity.

This removes an edge-case preemptive block when the persisted size after
writing an event _meets_ its `queue.max_bytes` precisely AND its current
head page has insufficient room to also accept a hypothetical future event.

Fixes: elastic/logstash#16172

* docs: PQ `queue.max_bytes` cannot be less than `queue.page_capacity`

(cherry picked from commit ea930861ef)

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
2024-05-22 10:55:53 -07:00
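The boundary condition described above reduces to a strict comparison. A hypothetical sketch, not the actual queue code:

```java
public class PqFullCheckSketch {
    // Block the writer only when the persisted size EXCEEDS queue.max_bytes.
    // The removed edge case blocked preemptively when the size met max_bytes
    // exactly and the head page lacked room for a hypothetical next event.
    static boolean isFull(long persistedSizeInBytes, long maxBytes) {
        return persistedSizeInBytes > maxBytes;
    }

    public static void main(String[] args) {
        System.out.println(isFull(1024, 1024)); // false: meeting capacity precisely no longer blocks
        System.out.println(isFull(1025, 1024)); // true: capacity exceeded
    }
}
```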
Mashhur
7277a369a1
Handle non-unicode payload in Logstash. (#16072) (#16168)
* Add logic to handle non-unicode payloads in Logstash.

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>

* Upgrade jrjackson to 0.4.20

* Code review: simplify the logic with the standard String#encode interface and its replace option.

Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>

---------

Co-authored-by: Ry Biesemeyer <yaauie@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <ry.biesemeyer@elastic.co>
(cherry picked from commit 979d30d701)
2024-05-21 06:50:35 -07:00
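The fix goes through Ruby's `String#encode` with the replace option; an analogous byte-replacement step written in plain Java (illustrative only, since the commit itself works via jrjackson and Ruby's encoder) would look like:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class ReplaceInvalidUtf8Sketch {
    public static void main(String[] args) throws Exception {
        byte[] payload = {(byte) 0xFF, 'h', 'i'}; // 0xFF is not valid UTF-8
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPLACE)       // like Ruby's invalid: :replace
                .onUnmappableCharacter(CodingErrorAction.REPLACE); // like Ruby's undef: :replace
        String decoded = decoder.decode(ByteBuffer.wrap(payload)).toString();
        System.out.println(decoded); // replacement character followed by "hi"
    }
}
```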
github-actions[bot]
304b8b25e6
[DOC] Remove reference to puppet LS module (#12356) (#16165)
As the module has not been maintained since 2018 and was community supported, I would like to remove it from the documentation.

(cherry picked from commit 53d9480176)

Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>
2024-05-16 14:24:08 -04:00
github-actions[bot]
1401565c48
bump lock file for 8.14 (#16154)
* Update minor plugin versions in gemfile lock

* Update Gemfile.jruby-3.1.lock.release

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-05-09 14:18:55 +01:00
github-actions[bot]
581ac90ec6
Release notes for 8.13.4 (#16144) (#16151)
Refined release notes for 8.13.4

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: andsel <selva.andre@gmail.com>
(cherry picked from commit 49a65324a8)
2024-05-07 18:06:55 -04:00
github-actions[bot]
e673e0d36f
bump lock file for 8.14 (#16132)
* Update patch plugin versions in gemfile lock

* Update Gemfile.jruby-3.1.lock.release

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-05-02 22:23:22 +01:00
github-actions[bot]
b1773f8c02
force ruby-maven-libs constraint to >= 3.9.6.1 (#16130) (#16131)
(cherry picked from commit 973c2ba3aa)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-05-02 21:50:23 +01:00
github-actions[bot]
1f12208528
bump lock file for 8.14 (#16129)
---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-05-02 15:34:38 +01:00
github-actions[bot]
ed535a822a
Release notes for 8.13.3 (#16108)
* Update release notes for 8.13.3

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
Co-authored-by: Karen Metts <35154725+karenzone@users.noreply.github.com>
2024-05-02 15:09:45 +01:00
github-actions[bot]
c2195bc1ee
upgrade jruby to 9.4.7.0 (#16125) (#16126)
(cherry picked from commit 4350855e7b)

Co-authored-by: João Duarte <jsvd@users.noreply.github.com>
2024-05-02 10:00:03 +01:00
github-actions[bot]
ceddff3ab3
Provide opt-in flag to avoid fields name clash when log format is json (#15969) (#16094)
Adds the `log.format.json.fix_duplicate_message_fields` feature flag to rename clashing fields when the JSON logging format (`log.format`) is selected.
When two message fields clash in a structured log message, the second is renamed by appending a `_1` suffix to the field name.
The feature is disabled by default; users must explicitly enable the behaviour.

Co-authored-by: Rob Bavey <rob.bavey@elastic.co>
(cherry picked from commit 830733d758)

Co-authored-by: Andrea Selva <selva.andre@gmail.com>
2024-04-17 18:01:16 +02:00
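The renaming rule is mechanical. A hypothetical Java sketch, not the logger's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RenameClashingFieldsSketch {
    // When a field name repeats within one structured log line, rename the
    // second occurrence with a "_1" suffix instead of emitting duplicate keys.
    static Map<String, String> fixDuplicates(List<Map.Entry<String, String>> fields) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> field : fields) {
            String key = out.containsKey(field.getKey()) ? field.getKey() + "_1" : field.getKey();
            out.put(key, field.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(fixDuplicates(List.of(
                Map.entry("message", "pipeline event"),
                Map.entry("message", "logger message"))));
        // -> {message=pipeline event, message_1=logger message}
    }
}
```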
github-actions[bot]
18583787b3
bump lock file for 8.14 (#16087)
* Update minor plugin versions in gemfile lock

* Update Gemfile.jruby-3.1.lock.release

---------

Co-authored-by: logstashmachine <43502315+logstashmachine@users.noreply.github.com>
Co-authored-by: kaisecheng <69120390+kaisecheng@users.noreply.github.com>
2024-04-17 09:16:02 +01:00
Kaise Cheng
ba5af17681
update Gemfile lock 2024-04-16 17:24:39 +01:00
2023 changed files with 60662 additions and 30334 deletions

View file

@@ -35,71 +35,48 @@ steps:
automatic:
- limit: 3
- label: ":lab_coat: Integration Tests / part 1-of-3"
key: "integration-tests-part-1-of-3"
- label: ":lab_coat: Integration Tests / part 1"
key: "integration-tests-part-1"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 0 3
ci/integration_tests.sh split 0
retry:
automatic:
- limit: 3
- label: ":lab_coat: Integration Tests / part 2-of-3"
key: "integration-tests-part-2-of-3"
- label: ":lab_coat: Integration Tests / part 2"
key: "integration-tests-part-2"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 1 3
ci/integration_tests.sh split 1
retry:
automatic:
- limit: 3
- label: ":lab_coat: Integration Tests / part 3-of-3"
key: "integration-tests-part-3-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
ci/integration_tests.sh split 2 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 1-of-3"
key: "integration-tests-qa-part-1-of-3"
- label: ":lab_coat: IT Persistent Queues / part 1"
key: "integration-tests-qa-part-1"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 0 3
ci/integration_tests.sh split 0
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 2-of-3"
key: "integration-tests-qa-part-2-of-3"
- label: ":lab_coat: IT Persistent Queues / part 2"
key: "integration-tests-qa-part-2"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 1 3
retry:
automatic:
- limit: 3
- label: ":lab_coat: IT Persistent Queues / part 3-of-3"
key: "integration-tests-qa-part-3-of-3"
command: |
set -euo pipefail
source .buildkite/scripts/common/vm-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 2 3
ci/integration_tests.sh split 1
retry:
automatic:
- limit: 3

View file

@@ -1,11 +0,0 @@
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-16"
diskSizeGb: 100
diskType: pd-ssd
steps:
- label: "Benchmark Marathon"
command: .buildkite/scripts/benchmark/marathon.sh

View file

@@ -1,14 +0,0 @@
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-16"
diskSizeGb: 100
diskType: pd-ssd
steps:
- label: "Benchmark Snapshot"
retry:
automatic:
- limit: 3
command: .buildkite/scripts/benchmark/main.sh

View file

@@ -4,12 +4,8 @@ steps:
- label: ":pipeline: Generate steps"
command: |
set -euo pipefail
echo "--- Building [$${WORKFLOW_TYPE}] artifacts"
echo "--- Building [${WORKFLOW_TYPE}] artifacts"
python3 -m pip install pyyaml
echo "--- Building dynamic pipeline steps"
python3 .buildkite/scripts/dra/generatesteps.py > steps.yml
echo "--- Printing dynamic pipeline steps"
cat steps.yml
echo "--- Uploading dynamic pipeline steps"
cat steps.yml | buildkite-agent pipeline upload
python3 .buildkite/scripts/dra/generatesteps.py | buildkite-agent pipeline upload

View file

@@ -1,20 +0,0 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json
agents:
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-4"
diskSizeGb: 64
steps:
- group: ":logstash: Health API integration tests"
key: "testing-phase"
steps:
- label: "main branch"
key: "integ-tests-on-main-branch"
command:
- .buildkite/scripts/health-report-tests/main.sh
retry:
automatic:
- limit: 3

View file

@@ -1,14 +0,0 @@
steps:
- label: "JDK Availability check"
key: "jdk-availability-check"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "4"
memory: "6Gi"
ephemeralStorage: "100Gi"
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
export GRADLE_OPTS="-Xmx2g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info"
ci/check_jdk_version_availability.sh

View file

@@ -5,7 +5,7 @@
env:
DEFAULT_MATRIX_OS: "ubuntu-2204"
DEFAULT_MATRIX_JDK: "adoptiumjdk_21"
DEFAULT_MATRIX_JDK: "adoptiumjdk_17"
steps:
- input: "Test Parameters"
@@ -18,8 +18,6 @@ steps:
multiple: true
default: "${DEFAULT_MATRIX_OS}"
options:
- label: "Ubuntu 24.04"
value: "ubuntu-2404"
- label: "Ubuntu 22.04"
value: "ubuntu-2204"
- label: "Ubuntu 20.04"
@@ -28,10 +26,14 @@ steps:
value: "debian-12"
- label: "Debian 11"
value: "debian-11"
- label: "Debian 10"
value: "debian-10"
- label: "RHEL 9"
value: "rhel-9"
- label: "RHEL 8"
value: "rhel-8"
- label: "CentOS 7"
value: "centos-7"
- label: "Oracle Linux 8"
value: "oraclelinux-8"
- label: "Oracle Linux 7"
@@ -60,12 +62,20 @@ steps:
value: "adoptiumjdk_21"
- label: "Adoptium JDK 17 (Eclipse Temurin)"
value: "adoptiumjdk_17"
- label: "Adoptium JDK 11 (Eclipse Temurin)"
value: "adoptiumjdk_11"
- label: "OpenJDK 21"
value: "openjdk_21"
- label: "OpenJDK 17"
value: "openjdk_17"
- label: "OpenJDK 11"
value: "openjdk_11"
- label: "Zulu 21"
value: "zulu_21"
- label: "Zulu 17"
value: "zulu_17"
- label: "Zulu 11"
value: "zulu_11"
- wait: ~
if: build.source != "schedule" && build.source != "trigger_job"

View file

@@ -5,7 +5,7 @@
"pipeline_slug": "logstash-pull-request-pipeline",
"allow_org_users": true,
"allowed_repo_permissions": ["admin", "write"],
"allowed_list": ["dependabot[bot]", "mergify[bot]", "github-actions[bot]", "elastic-vault-github-plugin-prod[bot]"],
"allowed_list": ["github-actions[bot]"],
"set_commit_status": true,
"build_on_commit": true,
"build_on_comment": true,
@@ -14,11 +14,7 @@
"skip_ci_labels": [ ],
"skip_target_branches": [ ],
"skip_ci_on_only_changed": [
"^.github/",
"^docs/",
"^.mergify.yml$",
"^.pre-commit-config.yaml",
"\\.md$"
"^docs/"
],
"always_require_ci_on_changed": [ ]
}

View file

@@ -22,12 +22,10 @@ steps:
- label: ":rspec: Ruby unit tests"
key: "ruby-unit-tests"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "4"
memory: "8Gi"
ephemeralStorage: "100Gi"
# Run as a non-root user
imageUID: "1002"
retry:
automatic:
- limit: 3
@@ -81,8 +79,8 @@ steps:
manual:
allowed: true
- label: ":lab_coat: Integration Tests / part 1-of-3"
key: "integration-tests-part-1-of-3"
- label: ":lab_coat: Integration Tests / part 1"
key: "integration-tests-part-1"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@@ -97,10 +95,10 @@ steps:
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 0 3
ci/integration_tests.sh split 0
- label: ":lab_coat: Integration Tests / part 2-of-3"
key: "integration-tests-part-2-of-3"
- label: ":lab_coat: Integration Tests / part 2"
key: "integration-tests-part-2"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@@ -115,28 +113,10 @@ steps:
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 1 3
ci/integration_tests.sh split 1
- label: ":lab_coat: Integration Tests / part 3-of-3"
key: "integration-tests-part-3-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as a non-root user
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
ci/integration_tests.sh split 2 3
- label: ":lab_coat: IT Persistent Queues / part 1-of-3"
key: "integration-tests-qa-part-1-of-3"
- label: ":lab_coat: IT Persistent Queues / part 1"
key: "integration-tests-qa-part-1"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@@ -152,10 +132,10 @@ steps:
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 0 3
ci/integration_tests.sh split 0
- label: ":lab_coat: IT Persistent Queues / part 2-of-3"
key: "integration-tests-qa-part-2-of-3"
- label: ":lab_coat: IT Persistent Queues / part 2"
key: "integration-tests-qa-part-2"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
@@ -171,26 +151,7 @@ steps:
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 1 3
- label: ":lab_coat: IT Persistent Queues / part 3-of-3"
key: "integration-tests-qa-part-3-of-3"
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci-no-root"
cpu: "8"
memory: "16Gi"
ephemeralStorage: "100Gi"
# Run as non root (logstash) user. UID is hardcoded in image.
imageUID: "1002"
retry:
automatic:
- limit: 3
command: |
set -euo pipefail
source .buildkite/scripts/common/container-agent.sh
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split 2 3
ci/integration_tests.sh split 1
- label: ":lab_coat: x-pack unit tests"
key: "x-pack-unit-tests"

View file

@@ -1,22 +0,0 @@
## Steps to set up GCP instance to run benchmark script
- Create an instance "n2-standard-16" with Ubuntu image
- Install docker
- `sudo snap install docker`
- `sudo usermod -a -G docker $USER`
- Install jq
- Install vault
- `sudo snap install vault`
- `vault login --method github`
- `vault kv get -format json secret/ci/elastic-logstash/benchmark`
- Setup Elasticsearch index mapping and alias with `setup/*`
- Import Kibana dashboard with `save-objects/*`
- Run the benchmark script
- Send data to your own Elasticsearch. Customise `VAULT_PATH="secret/ci/elastic-logstash/your/path"`
- Run the script `main.sh`
- or run in background `nohup bash -x main.sh > log.log 2>&1 &`
## Notes
- Benchmarks should only be compared using the same hardware setup.
- Please do not send the test metrics to the benchmark cluster. You can set `VAULT_PATH` to send data and metrics to your own server.
- Run `all.sh` as calibration which gives you a baseline of performance in different versions.
- [#16586](https://github.com/elastic/logstash/pull/16586) allows legacy monitoring using the configuration `xpack.monitoring.allow_legacy_collection: true`, which is not recognized in version 8. To run benchmarks in version 8, use the script of the corresponding branch (e.g. `8.16`) instead of `main` in buildkite.

View file

@@ -1,15 +0,0 @@
http.enabled: false
filebeat.inputs:
- type: log
symlinks: true
paths:
- "/usr/share/filebeat/flog/*.log"
logging.level: info
output.logstash:
hosts:
- "localhost:5044"
ttl: 10ms
bulk_max_size: 2048
# queue.mem:
# events: 4096
# flush.min_events: 2048

View file

@@ -1,10 +0,0 @@
api.http.host: 0.0.0.0
pipeline.workers: ${WORKER}
pipeline.batch.size: ${BATCH_SIZE}
queue.type: ${QTYPE}
xpack.monitoring.allow_legacy_collection: true
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: ${MONITOR_ES_USER}
xpack.monitoring.elasticsearch.password: ${MONITOR_ES_PW}
xpack.monitoring.elasticsearch.hosts: ["${MONITOR_ES_HOST}"]

View file

@@ -1,44 +0,0 @@
- pipeline.id: main
config.string: |
input {
beats {
port => 5044
}
}
output {
elasticsearch {
hosts => [ "${BENCHMARK_ES_HOST}" ]
user => "${BENCHMARK_ES_USER}"
password => "${BENCHMARK_ES_PW}"
}
}
- pipeline.id: node_stats
config.string: |
input {
http_poller {
urls => {
NodeStats => {
method => get
url => "http://localhost:9600/_node/stats"
}
}
schedule => { every => "30s"}
codec => "json"
}
}
filter {
mutate {
remove_field => [ "host", "[pipelines][.monitoring-logstash]", "event" ]
add_field => { "[benchmark][label]" => "${QTYPE}_w${WORKER}b${BATCH_SIZE}" }
}
}
output {
elasticsearch {
hosts => [ "${BENCHMARK_ES_HOST}" ]
user => "${BENCHMARK_ES_USER}"
password => "${BENCHMARK_ES_PW}"
data_stream_type => "metrics"
data_stream_dataset => "nodestats"
data_stream_namespace => "logstash"
}
}

View file

@@ -1 +0,0 @@
f74f1a28-25e9-494f-ba41-ca9f13d4446d

View file

@@ -1,315 +0,0 @@
#!/usr/bin/env bash
set -eo pipefail
SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
CONFIG_PATH="$SCRIPT_PATH/config"
source "$SCRIPT_PATH/util.sh"
usage() {
echo "Usage: $0 [FB_CNT] [QTYPE] [CPU] [MEM]"
echo "Example: $0 4 {persisted|memory|all} 2 2"
exit 1
}
parse_args() {
while [[ "$#" -gt 0 ]]; do
if [ -z "$FB_CNT" ]; then
FB_CNT=$1
elif [ -z "$QTYPE" ]; then
case $1 in
all | persisted | memory)
QTYPE=$1
;;
*)
echo "Error: wrong queue type $1"
usage
;;
esac
elif [ -z "$CPU" ]; then
CPU=$1
elif [ -z "$MEM" ]; then
MEM=$1
else
echo "Error: Too many arguments"
usage
fi
shift
done
# set default value
# number of filebeat
FB_CNT=${FB_CNT:-4}
# all | persisted | memory
QTYPE=${QTYPE:-all}
CPU=${CPU:-4}
MEM=${MEM:-4}
XMX=$((MEM / 2))
IFS=','
# worker multiplier: 1,2,4
MULTIPLIERS="${MULTIPLIERS:-1,2,4}"
read -ra MULTIPLIERS <<< "$MULTIPLIERS"
BATCH_SIZES="${BATCH_SIZES:-500}"
read -ra BATCH_SIZES <<< "$BATCH_SIZES"
# tags to json array
read -ra TAG_ARRAY <<< "$TAGS"
JSON_TAGS=$(printf '"%s",' "${TAG_ARRAY[@]}" | sed 's/,$//')
JSON_TAGS="[$JSON_TAGS]"
IFS=' '
echo "filebeats: $FB_CNT, cpu: $CPU, mem: $MEM, Queue: $QTYPE, worker multiplier: ${MULTIPLIERS[@]}, batch size: ${BATCH_SIZES[@]}"
}
get_secret() {
VAULT_PATH=${VAULT_PATH:-secret/ci/elastic-logstash/benchmark}
VAULT_DATA=$(vault kv get -format json $VAULT_PATH)
BENCHMARK_ES_HOST=$(echo $VAULT_DATA | jq -r '.data.es_host')
BENCHMARK_ES_USER=$(echo $VAULT_DATA | jq -r '.data.es_user')
BENCHMARK_ES_PW=$(echo $VAULT_DATA | jq -r '.data.es_pw')
MONITOR_ES_HOST=$(echo $VAULT_DATA | jq -r '.data.monitor_es_host')
MONITOR_ES_USER=$(echo $VAULT_DATA | jq -r '.data.monitor_es_user')
MONITOR_ES_PW=$(echo $VAULT_DATA | jq -r '.data.monitor_es_pw')
}
pull_images() {
echo "--- Pull docker images"
if [[ -n "$LS_VERSION" ]]; then
# pull image if it doesn't exist in local
[[ -z $(docker images -q docker.elastic.co/logstash/logstash:$LS_VERSION) ]] && docker pull "docker.elastic.co/logstash/logstash:$LS_VERSION"
else
# pull the latest snapshot logstash image
# select the SNAPSHOT artifact with the highest semantic version number
LS_VERSION=$( curl --retry-all-errors --retry 5 --retry-delay 1 -s "https://storage.googleapis.com/artifacts-api/snapshots/main.json" | jq -r '.version' )
BUILD_ID=$(curl --retry-all-errors --retry 5 --retry-delay 1 -s "https://storage.googleapis.com/artifacts-api/snapshots/main.json" | jq -r '.build_id')
ARCH=$(arch)
IMAGE_URL="https://snapshots.elastic.co/${BUILD_ID}/downloads/logstash/logstash-$LS_VERSION-docker-image-$ARCH.tar.gz"
IMAGE_FILENAME="$LS_VERSION.tar.gz"
echo "Download $LS_VERSION from $IMAGE_URL"
[[ ! -e $IMAGE_FILENAME ]] && curl -fsSL --retry-max-time 60 --retry 3 --retry-delay 5 -o "$IMAGE_FILENAME" "$IMAGE_URL"
[[ -z $(docker images -q docker.elastic.co/logstash/logstash:$LS_VERSION) ]] && docker load -i "$IMAGE_FILENAME"
fi
# pull filebeat image
FB_DEFAULT_VERSION="8.13.4"
FB_VERSION=${FB_VERSION:-$FB_DEFAULT_VERSION}
docker pull "docker.elastic.co/beats/filebeat:$FB_VERSION"
}
generate_logs() {
FLOG_FILE_CNT=${FLOG_FILE_CNT:-4}
SINGLE_SIZE=524288000
TOTAL_SIZE="$((FLOG_FILE_CNT * SINGLE_SIZE))"
FLOG_PATH="$SCRIPT_PATH/flog"
mkdir -p $FLOG_PATH
if [[ ! -e "$FLOG_PATH/log${FLOG_FILE_CNT}.log" ]]; then
echo "--- Generate logs in background. log: ${FLOG_FILE_CNT}, each size: 500mb"
docker run -d --name=flog --rm -v $FLOG_PATH:/go/src/data mingrammer/flog -t log -w -o "/go/src/data/log.log" -b $TOTAL_SIZE -p $SINGLE_SIZE
fi
}
check_logs() {
echo "--- Check log generation"
local cnt=0
until [[ -e "$FLOG_PATH/log${FLOG_FILE_CNT}.log" || $cnt -gt 600 ]]; do
echo "wait 30s" && sleep 30
cnt=$((cnt + 30))
done
ls -lah $FLOG_PATH
}
start_logstash() {
LS_CONFIG_PATH=$SCRIPT_PATH/ls/config
mkdir -p $LS_CONFIG_PATH
cp $CONFIG_PATH/pipelines.yml $LS_CONFIG_PATH/pipelines.yml
cp $CONFIG_PATH/logstash.yml $LS_CONFIG_PATH/logstash.yml
cp $CONFIG_PATH/uuid $LS_CONFIG_PATH/uuid
LS_JAVA_OPTS=${LS_JAVA_OPTS:--Xmx${XMX}g}
docker run -d --name=ls --net=host --cpus=$CPU --memory=${MEM}g -e LS_JAVA_OPTS="$LS_JAVA_OPTS" \
-e QTYPE="$QTYPE" -e WORKER="$WORKER" -e BATCH_SIZE="$BATCH_SIZE" \
-e BENCHMARK_ES_HOST="$BENCHMARK_ES_HOST" -e BENCHMARK_ES_USER="$BENCHMARK_ES_USER" -e BENCHMARK_ES_PW="$BENCHMARK_ES_PW" \
-e MONITOR_ES_HOST="$MONITOR_ES_HOST" -e MONITOR_ES_USER="$MONITOR_ES_USER" -e MONITOR_ES_PW="$MONITOR_ES_PW" \
-v $LS_CONFIG_PATH/logstash.yml:/usr/share/logstash/config/logstash.yml:ro \
-v $LS_CONFIG_PATH/pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro \
-v $LS_CONFIG_PATH/uuid:/usr/share/logstash/data/uuid:ro \
docker.elastic.co/logstash/logstash:$LS_VERSION
}
start_filebeat() {
for ((i = 0; i < FB_CNT; i++)); do
FB_PATH="$SCRIPT_PATH/fb${i}"
mkdir -p $FB_PATH
cp $CONFIG_PATH/filebeat.yml $FB_PATH/filebeat.yml
docker run -d --name=fb$i --net=host --user=root \
-v $FB_PATH/filebeat.yml:/usr/share/filebeat/filebeat.yml \
-v $SCRIPT_PATH/flog:/usr/share/filebeat/flog \
docker.elastic.co/beats/filebeat:$FB_VERSION filebeat -e --strict.perms=false
done
}
capture_stats() {
CURRENT=$(jq -r '.flow.output_throughput.current' $NS_JSON)
local eps_1m=$(jq -r '.flow.output_throughput.last_1_minute' $NS_JSON)
local eps_5m=$(jq -r '.flow.output_throughput.last_5_minutes' $NS_JSON)
local worker_util=$(jq -r '.pipelines.main.flow.worker_utilization.last_1_minute' $NS_JSON)
local worker_concurr=$(jq -r '.pipelines.main.flow.worker_concurrency.last_1_minute' $NS_JSON)
local cpu_percent=$(jq -r '.process.cpu.percent' $NS_JSON)
local heap=$(jq -r '.jvm.mem.heap_used_in_bytes' $NS_JSON)
local non_heap=$(jq -r '.jvm.mem.non_heap_used_in_bytes' $NS_JSON)
local q_event_cnt=$(jq -r '.pipelines.main.queue.events_count' $NS_JSON)
local q_size=$(jq -r '.pipelines.main.queue.queue_size_in_bytes' $NS_JSON)
TOTAL_EVENTS_OUT=$(jq -r '.pipelines.main.events.out' $NS_JSON)
printf "current: %s, 1m: %s, 5m: %s, worker_utilization: %s, worker_concurrency: %s, cpu: %s, heap: %s, non-heap: %s, q_events: %s, q_size: %s, total_events_out: %s\n" \
$CURRENT $eps_1m $eps_5m $worker_util $worker_concurr $cpu_percent $heap $non_heap $q_event_cnt $q_size $TOTAL_EVENTS_OUT
}
aggregate_stats() {
local file_glob="$SCRIPT_PATH/$NS_DIR/${QTYPE:0:1}_w${WORKER}b${BATCH_SIZE}_*.json"
MAX_EPS_1M=$( jqmax '.flow.output_throughput.last_1_minute' "$file_glob" )
MAX_EPS_5M=$( jqmax '.flow.output_throughput.last_5_minutes' "$file_glob" )
MAX_WORKER_UTIL=$( jqmax '.pipelines.main.flow.worker_utilization.last_1_minute' "$file_glob" )
MAX_WORKER_CONCURR=$( jqmax '.pipelines.main.flow.worker_concurrency.last_1_minute' "$file_glob" )
MAX_Q_EVENT_CNT=$( jqmax '.pipelines.main.queue.events_count' "$file_glob" )
MAX_Q_SIZE=$( jqmax '.pipelines.main.queue.queue_size_in_bytes' "$file_glob" )
AVG_CPU_PERCENT=$( jqavg '.process.cpu.percent' "$file_glob" )
AVG_VIRTUAL_MEM=$( jqavg '.process.mem.total_virtual_in_bytes' "$file_glob" )
AVG_HEAP=$( jqavg '.jvm.mem.heap_used_in_bytes' "$file_glob" )
AVG_NON_HEAP=$( jqavg '.jvm.mem.non_heap_used_in_bytes' "$file_glob" )
}
send_summary() {
echo "--- Send summary to Elasticsearch"
# build json
local timestamp
timestamp=$(date -u +"%Y-%m-%dT%H:%M:%S")
SUMMARY="{\"timestamp\": \"$timestamp\", \"version\": \"$LS_VERSION\", \"cpu\": \"$CPU\", \"mem\": \"$MEM\", \"workers\": \"$WORKER\", \"batch_size\": \"$BATCH_SIZE\", \"queue_type\": \"$QTYPE\""
not_empty "$TOTAL_EVENTS_OUT" && SUMMARY="$SUMMARY, \"total_events_out\": \"$TOTAL_EVENTS_OUT\""
not_empty "$MAX_EPS_1M" && SUMMARY="$SUMMARY, \"max_eps_1m\": \"$MAX_EPS_1M\""
not_empty "$MAX_EPS_5M" && SUMMARY="$SUMMARY, \"max_eps_5m\": \"$MAX_EPS_5M\""
not_empty "$MAX_WORKER_UTIL" && SUMMARY="$SUMMARY, \"max_worker_utilization\": \"$MAX_WORKER_UTIL\""
not_empty "$MAX_WORKER_CONCURR" && SUMMARY="$SUMMARY, \"max_worker_concurrency\": \"$MAX_WORKER_CONCURR\""
not_empty "$AVG_CPU_PERCENT" && SUMMARY="$SUMMARY, \"avg_cpu_percentage\": \"$AVG_CPU_PERCENT\""
not_empty "$AVG_HEAP" && SUMMARY="$SUMMARY, \"avg_heap\": \"$AVG_HEAP\""
not_empty "$AVG_NON_HEAP" && SUMMARY="$SUMMARY, \"avg_non_heap\": \"$AVG_NON_HEAP\""
not_empty "$AVG_VIRTUAL_MEM" && SUMMARY="$SUMMARY, \"avg_virtual_memory\": \"$AVG_VIRTUAL_MEM\""
not_empty "$MAX_Q_EVENT_CNT" && SUMMARY="$SUMMARY, \"max_queue_events\": \"$MAX_Q_EVENT_CNT\""
not_empty "$MAX_Q_SIZE" && SUMMARY="$SUMMARY, \"max_queue_bytes_size\": \"$MAX_Q_SIZE\""
not_empty "$TAGS" && SUMMARY="$SUMMARY, \"tags\": $JSON_TAGS"
SUMMARY="$SUMMARY}"
tee summary.json << EOF
{"index": {}}
$SUMMARY
EOF
# send to ES
local resp
local err_status
resp=$(curl -s -X POST -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" "$BENCHMARK_ES_HOST/benchmark_summary/_bulk" -H 'Content-Type: application/json' --data-binary @"summary.json")
echo "$resp"
err_status=$(echo "$resp" | jq -r ".errors")
if [[ "$err_status" == "true" ]]; then
echo "Failed to send summary"
exit 1
fi
}
# $1: snapshot index
node_stats() {
NS_JSON="$SCRIPT_PATH/$NS_DIR/${QTYPE:0:1}_w${WORKER}b${BATCH_SIZE}_$1.json" # m_w8b1000_0.json
# curl inside container because docker on mac cannot resolve localhost to host network interface
docker exec -i ls curl localhost:9600/_node/stats > "$NS_JSON" 2> /dev/null
}
# $1: index
snapshot() {
node_stats $1
capture_stats
}
create_directory() {
NS_DIR="fb${FB_CNT}c${CPU}m${MEM}" # fb4c4m4
mkdir -p "$SCRIPT_PATH/$NS_DIR"
}
queue() {
for QTYPE in "persisted" "memory"; do
worker
done
}
worker() {
for m in "${MULTIPLIERS[@]}"; do
WORKER=$((CPU * m))
batch
done
}
batch() {
for BATCH_SIZE in "${BATCH_SIZES[@]}"; do
run_pipeline
stop_pipeline
done
}
run_pipeline() {
echo "--- Run pipeline. queue type: $QTYPE, worker: $WORKER, batch size: $BATCH_SIZE"
start_logstash
start_filebeat
docker ps
echo "(0) sleep 3m" && sleep 180
snapshot "0"
for i in {1..8}; do
echo "($i) sleep 30s" && sleep 30
snapshot "$i"
# print docker log when ingestion rate is zero
# remove '.' in number and return max val
[[ $(max -g "${CURRENT/./}" "0") -eq 0 ]] &&
docker logs fb0 &&
docker logs ls
done
aggregate_stats
send_summary
}
stop_pipeline() {
echo "--- Stop Pipeline"
for ((i = 0; i < FB_CNT; i++)); do
docker stop fb$i
docker rm fb$i
done
docker stop ls
docker rm ls
curl -u "$BENCHMARK_ES_USER:$BENCHMARK_ES_PW" -X DELETE $BENCHMARK_ES_HOST/_data_stream/logs-generic-default
echo " data stream deleted "
# TODO: clean page caches, reduce memory fragmentation
# https://github.com/elastic/logstash/pull/16191#discussion_r1647050216
}
clean_up() {
# stop log generation if it has not done yet
[[ -n $(docker ps | grep flog) ]] && docker stop flog || true
# remove image
docker image rm docker.elastic.co/logstash/logstash:$LS_VERSION
}

View file

@@ -1,59 +0,0 @@
#!/usr/bin/env bash
set -eo pipefail
# *******************************************************
# This script does benchmark by running Filebeats (docker) -> Logstash (docker) -> ES Cloud.
# Logstash metrics and benchmark results are sent to the same ES Cloud.
# Highlights:
# - Use flog (docker) to generate ~2GB log
# - Pull the snapshot docker image of the main branch every day
# - Logstash runs two pipelines, main and node_stats
# - The main pipeline handles beats ingestion, sending data to the data stream `logs-generic-default`
# - It runs for all combinations. (pq + mq) x worker x batch size
# - Each test runs for ~7 minutes
# - The node_stats pipeline retrieves Logstash /_node/stats every 30s and sends it to the data stream `metrics-nodestats-logstash`
# - The script sends a summary of EPS and resource usage to index `benchmark_summary`
# *******************************************************
SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
source "$SCRIPT_PATH/core.sh"
## usage:
## main.sh FB_CNT QTYPE CPU MEM
## main.sh 4 all 4 4 # default launch 4 filebeats to benchmark pq and mq
## main.sh 4 memory
## main.sh 4 persisted
## main.sh 4
## main.sh
## accept env vars:
## FB_VERSION=8.13.4 # docker tag
## LS_VERSION=8.15.0-SNAPSHOT # docker tag
## LS_JAVA_OPTS=-Xmx2g # by default, Xmx is set to half of memory
## MULTIPLIERS=1,2,4 # determine the number of workers (cpu * multiplier)
## BATCH_SIZES=125,500
## CPU=4 # number of cpu for Logstash container
## MEM=4 # number of GB for Logstash container
## QTYPE=memory # queue type to test {persisted|memory|all}
## FB_CNT=4 # number of filebeats to use in benchmark
## FLOG_FILE_CNT=4 # number of files to generate for ingestion
## VAULT_PATH=secret/path # vault path point to Elasticsearch credentials. The default value points to benchmark cluster.
## TAGS=test,other # tags with "," separator.
main() {
parse_args "$@"
get_secret
generate_logs
pull_images
check_logs
create_directory
if [[ $QTYPE == "all" ]]; then
queue
else
worker
fi
clean_up
}
main "$@"

View file

@@ -1,44 +0,0 @@
#!/usr/bin/env bash
set -eo pipefail
# *******************************************************
# Run benchmark for versions that have flow metrics
# When the hardware changes, run the marathon task to establish a new baseline.
# Usage:
# nohup bash -x all.sh > log.log 2>&1 &
# Accept env vars:
# STACK_VERSIONS=8.15.0,8.15.1,8.16.0-SNAPSHOT # versions to test. It is comma separator string
# *******************************************************
SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
source "$SCRIPT_PATH/core.sh"
parse_stack_versions() {
IFS=','
STACK_VERSIONS="${STACK_VERSIONS:-8.6.0,8.7.0,8.8.0,8.9.0,8.10.0,8.11.0,8.12.0,8.13.0,8.14.0,8.15.0}"
read -ra STACK_VERSIONS <<< "$STACK_VERSIONS"
}
main() {
parse_stack_versions
parse_args "$@"
get_secret
generate_logs
check_logs
USER_QTYPE="$QTYPE"
for V in "${STACK_VERSIONS[@]}" ; do
LS_VERSION="$V"
QTYPE="$USER_QTYPE"
pull_images
create_directory
if [[ $QTYPE == "all" ]]; then
queue
else
worker
fi
done
}
main "$@"

View file

@@ -1,8 +0,0 @@
## 20241210
Remove scripted field `5m_num` from dashboards
## 20240912
Updated runtime field `release` to return `true` when `version` contains "SNAPSHOT"
## 20240912
Initial dashboards

View file

@@ -1,14 +0,0 @@
benchmark_objects.ndjson contains the following resources
- Dashboards
- daily snapshot
- released versions
- Data Views
- benchmark
- runtime fields
- | Fields Name | Type | Comment |
|--------------|---------------------------------------------------------------------------------------|--------------------------------------------------|
| versions_num | long | convert semantic versioning to number for graph sorting |
| release | boolean | `true` for released version. `false` for snapshot version. It is for graph filtering. |
To import objects to Kibana, navigate to Stack Management > Save Objects and click Import

File diff suppressed because one or more lines are too long

View file

@@ -1,6 +0,0 @@
POST /_aliases
{
"actions": [
{ "add": { "index": "benchmark_summary_v2", "alias": "benchmark_summary" } }
]
}

View file

@@ -1,179 +0,0 @@
PUT /benchmark_summary_v2/_mapping
{
"properties": {
"avg_cpu_percentage": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"avg_heap": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"avg_non_heap": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"avg_virtual_memory": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"batch_size": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"cpu": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_eps_1m": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_eps_5m": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_queue_bytes_size": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_queue_events": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_worker_concurrency": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"max_worker_utilization": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"mem": {
"type": "float",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"queue_type": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"tag": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"timestamp": {
"type": "date"
},
"total_events_out": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"version": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"workers": {
"type": "integer",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"tags" : {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}

View file

@@ -1,41 +0,0 @@
#!/usr/bin/env bash
arch() { uname -m | sed -e "s|amd|x86_|" -e "s|arm|aarch|"; }
# return the min value
# usage:
# g: float; h: human; d: dictionary; M: month
# min -g 3 2 5 1
# max -g 1.5 5.2 2.5 1.2 5.7
# max -g "null" "0"
# min -h 25M 13G 99K 1098M
min() { printf "%s\n" "${@:2}" | sort "$1" | head -n1 ; }
max() { min ${1}r ${@:2} ; }
# return the average of values
# usage:
# jqavg '.process.cpu.percent' m_w8b1000_*.json
# $1: jq field
# $2: file path in glob pattern
jqavg() {
jq -r "$1 | select(. != null)" $2 | jq -s . | jq 'add / length'
}
# return the maximum of values
# usage:
# jqmax '.process.cpu.percent' m_w8b1000_*.json
# $1: jq field
# $2: file path in glob pattern
jqmax() {
jq -r "$1 | select(. != null)" $2 | jq -s . | jq 'max'
}
# return true if $1 is non empty and not "null"
not_empty() {
if [[ -n "$1" && "$1" != "null" ]]; then
return 0
else
return 1
fi
}

View file

@@ -19,7 +19,7 @@ changed_files=$(git diff --name-only $previous_commit)
if [[ -n "$changed_files" ]] && [[ -z "$(echo "$changed_files" | grep -vE "$1")" ]]; then
echo "All files compared to the previous commit [$previous_commit] match the specified regex: [$1]"
echo "Files changed:"
git --no-pager diff --name-only HEAD^
git diff --name-only HEAD^
exit 0
else
exit 1

View file

@@ -1,29 +0,0 @@
#!/usr/bin/env bash
# ********************************************************
# Source this script to get the QUALIFIED_VERSION env var
# or execute it to receive the qualified version on stdout
# ********************************************************
set -euo pipefail
export QUALIFIED_VERSION="$(
# Extract the version number from the version.yml file
# e.g.: 8.6.0
printf '%s' "$(awk -F':' '{ if ("logstash" == $1) { gsub(/^ | $/,"",$2); printf $2; exit } }' versions.yml)"
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases for staging builds only:
# e.g: 8.0.0-alpha1
printf '%s' "${VERSION_QUALIFIER:+-${VERSION_QUALIFIER}}"
# add the SNAPSHOT tag unless WORKFLOW_TYPE=="staging" or RELEASE=="1"
if [[ ! ( "${WORKFLOW_TYPE:-}" == "staging" || "${RELEASE:+$RELEASE}" == "1" ) ]]; then
printf '%s' "-SNAPSHOT"
fi
)"
# if invoked directly, output the QUALIFIED_VERSION to stdout
if [[ "$0" == "${BASH_SOURCE:-${ZSH_SCRIPT:-}}" ]]; then
printf '%s' "${QUALIFIED_VERSION}"
fi

View file

@@ -12,7 +12,7 @@ set -eo pipefail
# https://github.com/elastic/ingest-dev/issues/2664
# *******************************************************
ACTIVE_BRANCHES_URL="https://storage.googleapis.com/artifacts-api/snapshots/branches.json"
ACTIVE_BRANCHES_URL="https://raw.githubusercontent.com/elastic/logstash/main/ci/branches.json"
EXCLUDE_BRANCHES_ARRAY=()
BRANCHES=()
@@ -63,7 +63,7 @@ exclude_branches_to_array
set -u
set +e
# pull releaseable branches from $ACTIVE_BRANCHES_URL
readarray -t ELIGIBLE_BRANCHES < <(curl --retry-all-errors --retry 5 --retry-delay 5 -fsSL $ACTIVE_BRANCHES_URL | jq -r '.branches[]')
readarray -t ELIGIBLE_BRANCHES < <(curl --retry-all-errors --retry 5 --retry-delay 5 -fsSL $ACTIVE_BRANCHES_URL | jq -r '.branches[].branch')
if [[ $? -ne 0 ]]; then
echo "There was an error downloading or parsing the json output from [$ACTIVE_BRANCHES_URL]. Exiting."
exit 1

View file

@@ -1,13 +1,13 @@
{
"#comment": "This file lists all custom vm images. We use it to make decisions about randomized CI jobs.",
"linux": {
"ubuntu": ["ubuntu-2404", "ubuntu-2204", "ubuntu-2004"],
"debian": ["debian-12", "debian-11"],
"ubuntu": ["ubuntu-2204", "ubuntu-2004"],
"debian": ["debian-12", "debian-11", "debian-10"],
"rhel": ["rhel-9", "rhel-8"],
"oraclelinux": ["oraclelinux-8", "oraclelinux-7"],
"rocky": ["rocky-linux-8"],
"amazonlinux": ["amazonlinux-2023"],
"opensuse": ["opensuse-leap-15"]
},
"windows": ["windows-2025", "windows-2022", "windows-2019", "windows-2016"]
"windows": ["windows-2022", "windows-2019", "windows-2016"]
}

View file

@@ -7,28 +7,59 @@ echo "####################################################################"
source ./$(dirname "$0")/common.sh
# WORKFLOW_TYPE is a CI externally configured environment variable that could assume "snapshot" or "staging" values
info "Building artifacts for the $WORKFLOW_TYPE workflow ..."
case "$WORKFLOW_TYPE" in
snapshot)
: # no-op
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
rake artifact:docker || error "artifact:docker build failed."
rake artifact:docker_oss || error "artifact:docker_oss build failed."
rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
else
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker || error "artifact:docker build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker_oss || error "artifact:docker_oss build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
STACK_VERSION=${STACK_VERSION}-SNAPSHOT
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
;;
staging)
export RELEASE=1
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
RELEASE=1 rake artifact:docker || error "artifact:docker build failed."
RELEASE=1 rake artifact:docker_oss || error "artifact:docker_oss build failed."
RELEASE=1 rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
RELEASE=1 rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
else
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker || error "artifact:docker build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker_oss || error "artifact:docker_oss build failed."
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
if [ "$ARCH" != "aarch64" ]; then
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 rake artifact:docker_ubi8 || error "artifact:docker_ubi8 build failed."
fi
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
;;
*)
error "Workflow (WORKFLOW_TYPE variable) is not set, exiting..."
;;
esac
rake artifact:docker || error "artifact:docker build failed."
rake artifact:docker_oss || error "artifact:docker_oss build failed."
rake artifact:docker_wolfi || error "artifact:docker_wolfi build failed."
rake artifact:dockerfiles || error "artifact:dockerfiles build failed."
STACK_VERSION="$(./$(dirname "$0")/../common/qualified-version.sh)"
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
info "Saving tar.gz for docker images"
save_docker_tarballs "${ARCH}" "${STACK_VERSION}"
@@ -37,7 +68,11 @@ for file in build/logstash-*; do shasum $file;done
info "Uploading DRA artifacts in buildkite's artifact store ..."
# Note the deb, rpm tar.gz AARCH64 files generated has already been loaded by the build_packages.sh
images="logstash logstash-oss logstash-wolfi"
images="logstash logstash-oss"
if [ "$ARCH" != "aarch64" ]; then
# No logstash-ubi8 for AARCH64
images="logstash logstash-oss logstash-ubi8"
fi
for image in ${images}; do
buildkite-agent artifact upload "build/$image-${STACK_VERSION}-docker-image-${ARCH}.tar.gz"
done
@@ -45,7 +80,7 @@ done
# Upload 'docker-build-context.tar.gz' files only when build x86_64, otherwise they will be
# overwritten when building aarch64 (or viceversa).
if [ "$ARCH" != "aarch64" ]; then
for image in logstash logstash-oss logstash-wolfi logstash-ironbank; do
for image in logstash logstash-oss logstash-ubi8 logstash-ironbank; do
buildkite-agent artifact upload "build/${image}-${STACK_VERSION}-docker-build-context.tar.gz"
done
fi

View file

@@ -7,25 +7,39 @@ echo "####################################################################"
source ./$(dirname "$0")/common.sh
# WORKFLOW_TYPE is a CI externally configured environment variable that could assume "snapshot" or "staging" values
info "Building artifacts for the $WORKFLOW_TYPE workflow ..."
case "$WORKFLOW_TYPE" in
snapshot)
: # no-op
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
else
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
STACK_VERSION=${STACK_VERSION}-SNAPSHOT
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
;;
staging)
export RELEASE=1
info "Building artifacts for the $WORKFLOW_TYPE workflow..."
if [ -z "$VERSION_QUALIFIER_OPT" ]; then
RELEASE=1 SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
else
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
VERSION_QUALIFIER="$VERSION_QUALIFIER_OPT" RELEASE=1 SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
;;
*)
error "Workflow (WORKFLOW_TYPE variable) is not set, exiting..."
;;
esac
SKIP_DOCKER=1 rake artifact:all || error "rake artifact:all build failed."
STACK_VERSION="$(./$(dirname "$0")/../common/qualified-version.sh)"
info "Build complete, setting STACK_VERSION to $STACK_VERSION."
info "Generated Artifacts"
for file in build/logstash-*; do shasum $file;done

View file

@@ -10,7 +10,7 @@ function error {
function save_docker_tarballs {
local arch="${1:?architecture required}"
local version="${2:?stack-version required}"
local images="logstash logstash-oss logstash-wolfi"
local images="logstash logstash-oss"
if [ "${arch}" != "aarch64" ]; then
# No logstash-ubi8 for AARCH64
images="logstash logstash-oss logstash-ubi8"
fi
for image in ${images}; do
tar_file="${image}-${version}-docker-image-${arch}.tar"
@@ -25,16 +29,16 @@ function save_docker_tarballs {
# Since we are using the system jruby, we need to make sure our jvm process
# uses at least 1g of memory, If we don't do this we can get OOM issues when
# installing gems. See https://github.com/elastic/logstash/issues/5179
export JRUBY_OPTS="-J-Xmx4g"
export JRUBY_OPTS="-J-Xmx1g"
# Extract the version number from the version.yml file
# e.g.: 8.6.0
# The suffix part like alpha1 etc is managed by the optional VERSION_QUALIFIER environment variable
# The suffix part like alpha1 etc is managed by the optional VERSION_QUALIFIER_OPT environment variable
STACK_VERSION=`cat versions.yml | sed -n 's/^logstash\:[[:space:]]\([[:digit:]]*\.[[:digit:]]*\.[[:digit:]]*\)$/\1/p'`
info "Agent is running on architecture [$(uname -i)]"
export VERSION_QUALIFIER=${VERSION_QUALIFIER:-""}
export VERSION_QUALIFIER_OPT=${VERSION_QUALIFIER_OPT:-""}
export DRA_DRY_RUN=${DRA_DRY_RUN:-""}
if [[ ! -z $DRA_DRY_RUN && $BUILDKITE_STEP_KEY == "logstash_publish_dra" ]]; then

View file

@@ -3,8 +3,6 @@ import sys
import yaml
YAML_HEADER = '# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json\n'
def to_bk_key_friendly_string(key):
"""
Convert and return key to an acceptable format for Buildkite's key: field
@@ -30,8 +28,6 @@ def package_x86_step(branch, workflow_type):
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$PATH"
eval "$(rbenv init -)"
.buildkite/scripts/dra/build_packages.sh
artifact_paths:
- "**/*.hprof"
'''
return step
@@ -46,8 +42,6 @@ def package_x86_docker_step(branch, workflow_type):
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-16"
diskSizeGb: 200
artifact_paths:
- "**/*.hprof"
command: |
export WORKFLOW_TYPE="{workflow_type}"
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$PATH"
@@ -67,8 +61,6 @@ def package_aarch64_docker_step(branch, workflow_type):
imagePrefix: platform-ingest-logstash-ubuntu-2204-aarch64
instanceType: "m6g.4xlarge"
diskSizeGb: 200
artifact_paths:
- "**/*.hprof"
command: |
export WORKFLOW_TYPE="{workflow_type}"
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:$PATH"
@@ -114,7 +106,6 @@ def build_steps_to_yaml(branch, workflow_type):
if __name__ == "__main__":
try:
workflow_type = os.environ["WORKFLOW_TYPE"]
version_qualifier = os.environ.get("VERSION_QUALIFIER", "")
except ImportError:
print(f"Missing env variable WORKFLOW_TYPE. Use export WORKFLOW_TYPE=<staging|snapshot>\n.Exiting.")
exit(1)
@@ -123,25 +114,18 @@ if __name__ == "__main__":
structure = {"steps": []}
if workflow_type.upper() == "SNAPSHOT" and len(version_qualifier)>0:
structure["steps"].append({
"label": f"no-op pipeline because prerelease builds (VERSION_QUALIFIER is set to [{version_qualifier}]) don't support the [{workflow_type}] workflow",
"command": ":",
"skip": "VERSION_QUALIFIER (prerelease builds) not supported with SNAPSHOT DRA",
})
else:
# Group defining parallel steps that build and save artifacts
group_key = to_bk_key_friendly_string(f"logstash_dra_{workflow_type}")
# Group defining parallel steps that build and save artifacts
group_key = to_bk_key_friendly_string(f"logstash_dra_{workflow_type}")
structure["steps"].append({
"group": f":Build Artifacts - {workflow_type.upper()}",
"key": group_key,
"steps": build_steps_to_yaml(branch, workflow_type),
})
structure["steps"].append({
"group": f":Build Artifacts - {workflow_type.upper()}",
"key": group_key,
"steps": build_steps_to_yaml(branch, workflow_type),
})
# Final step: pull artifacts built above and publish them via the release-manager
structure["steps"].extend(
yaml.safe_load(publish_dra_step(branch, workflow_type, depends_on=group_key)),
)
# Final step: pull artifacts built above and publish them via the release-manager
structure["steps"].extend(
yaml.safe_load(publish_dra_step(branch, workflow_type, depends_on=group_key)),
)
print(YAML_HEADER + yaml.dump(structure, Dumper=yaml.Dumper, sort_keys=False))
print('# yaml-language-server: $schema=https://raw.githubusercontent.com/buildkite/pipeline-schema/main/schema.json\n' + yaml.dump(structure, Dumper=yaml.Dumper, sort_keys=False))

View file

@@ -7,9 +7,7 @@ echo "####################################################################"
source ./$(dirname "$0")/common.sh
# DRA_BRANCH can be used for manually testing packaging with PRs
# e.g. define `DRA_BRANCH="main"` and `RUN_SNAPSHOT="true"` under Options/Environment Variables in the Buildkite UI after clicking new Build
BRANCH="${DRA_BRANCH:="${BUILDKITE_BRANCH:=""}"}"
PLAIN_STACK_VERSION=$STACK_VERSION
# This is the branch selector that needs to be passed to the release-manager
# It has to be the name of the branch which originates the artifacts.
@@ -17,24 +15,29 @@ RELEASE_VER=`cat versions.yml | sed -n 's/^logstash\:[[:space:]]\([[:digit:]]*\.
if [ -n "$(git ls-remote --heads origin $RELEASE_VER)" ] ; then
RELEASE_BRANCH=$RELEASE_VER
else
RELEASE_BRANCH="${BRANCH:="main"}"
RELEASE_BRANCH=main
fi
echo "RELEASE BRANCH: $RELEASE_BRANCH"
VERSION_QUALIFIER="${VERSION_QUALIFIER:=""}"
if [ -n "$VERSION_QUALIFIER_OPT" ]; then
# Qualifier is passed from CI as optional field and specify the version postfix
# in case of alpha or beta releases:
# e.g: 8.0.0-alpha1
STACK_VERSION="${STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
PLAIN_STACK_VERSION="${PLAIN_STACK_VERSION}-${VERSION_QUALIFIER_OPT}"
fi
case "$WORKFLOW_TYPE" in
snapshot)
:
STACK_VERSION=${STACK_VERSION}-SNAPSHOT
;;
staging)
;;
*)
error "Workflow (WORKFLOW_TYPE variable) is not set, exiting..."
error "Worklflow (WORKFLOW_TYPE variable) is not set, exiting..."
;;
esac
info "Uploading artifacts for ${WORKFLOW_TYPE} workflow on branch: ${RELEASE_BRANCH} for version: ${STACK_VERSION} with version_qualifier: ${VERSION_QUALIFIER}"
info "Uploading artifacts for ${WORKFLOW_TYPE} workflow on branch: ${RELEASE_BRANCH}"
if [ "$RELEASE_VER" != "7.17" ]; then
# Version 7.17.x doesn't generate ARM artifacts for Darwin
@ -42,12 +45,17 @@ if [ "$RELEASE_VER" != "7.17" ]; then
:
fi
# Deleting ubi8 for aarch64 for the time being. This image itself is not being built, and it is not expected
# by the release manager.
# See https://github.com/elastic/infra/blob/master/cd/release/release-manager/project-configs/8.5/logstash.gradle
# for more details.
# TODO filter it out when uploading artifacts instead
rm -f build/logstash-ubi8-${STACK_VERSION}-docker-image-aarch64.tar.gz
info "Downloaded ARTIFACTS sha report"
for file in build/logstash-*; do shasum $file;done
FINAL_VERSION="$(./$(dirname "$0")/../common/qualified-version.sh)"
mv build/distributions/dependencies-reports/logstash-${FINAL_VERSION}.csv build/distributions/dependencies-${FINAL_VERSION}.csv
mv build/distributions/dependencies-reports/logstash-${STACK_VERSION}.csv build/distributions/dependencies-${STACK_VERSION}.csv
# set required permissions on artifacts and directory
chmod -R a+r build/*
@ -65,22 +73,6 @@ release_manager_login
# ensure the latest image has been pulled
docker pull docker.elastic.co/infra/release-manager:latest
echo "+++ :clipboard: Listing DRA artifacts for version [$STACK_VERSION], branch [$RELEASE_BRANCH], workflow [$WORKFLOW_TYPE], QUALIFIER [$VERSION_QUALIFIER]"
docker run --rm \
--name release-manager \
-e VAULT_ROLE_ID \
-e VAULT_SECRET_ID \
--mount type=bind,readonly=false,src="$PWD",target=/artifacts \
docker.elastic.co/infra/release-manager:latest \
cli list \
--project logstash \
--branch "${RELEASE_BRANCH}" \
--commit "$(git rev-parse HEAD)" \
--workflow "${WORKFLOW_TYPE}" \
--version "${STACK_VERSION}" \
--artifact-set main \
--qualifier "${VERSION_QUALIFIER}"
info "Running the release manager ..."
# collect the artifacts for use with the unified build
@ -96,9 +88,8 @@ docker run --rm \
--branch ${RELEASE_BRANCH} \
--commit "$(git rev-parse HEAD)" \
--workflow "${WORKFLOW_TYPE}" \
--version "${STACK_VERSION}" \
--version "${PLAIN_STACK_VERSION}" \
--artifact-set main \
--qualifier "${VERSION_QUALIFIER}" \
${DRA_DRY_RUN} | tee rm-output.txt
# extract the summary URL from a release manager output line like:


@ -10,7 +10,7 @@ from ruamel.yaml.scalarstring import LiteralScalarString
VM_IMAGES_FILE = ".buildkite/scripts/common/vm-images.json"
VM_IMAGE_PREFIX = "platform-ingest-logstash-multi-jdk-"
ACCEPTANCE_LINUX_OSES = ["ubuntu-2404", "ubuntu-2204", "ubuntu-2004", "debian-11", "rhel-8", "oraclelinux-7", "rocky-linux-8", "opensuse-leap-15", "amazonlinux-2023"]
ACCEPTANCE_LINUX_OSES = ["ubuntu-2204", "ubuntu-2004", "debian-11", "debian-10", "rhel-8", "oraclelinux-7", "rocky-linux-8", "opensuse-leap-15", "amazonlinux-2023"]
CUR_PATH = os.path.dirname(os.path.abspath(__file__))
@ -147,6 +147,11 @@ rake artifact:deb artifact:rpm
set -eo pipefail
source .buildkite/scripts/common/vm-agent-multi-jdk.sh
source /etc/os-release
if [[ "$$(echo $$ID_LIKE | tr '[:upper:]' '[:lower:]')" =~ (rhel|fedora) && "$${VERSION_ID%.*}" -le 7 ]]; then
# jruby-9.3.10.0 unavailable on centos-7 / oel-7, see https://github.com/jruby/jruby/issues/7579#issuecomment-1425885324 / https://github.com/jruby/jruby/issues/7695
# we only need a working jruby to run the acceptance test framework -- the packages have been prebuilt in a previous stage
rbenv local jruby-9.4.5.0
fi
ci/acceptance_tests.sh"""),
}
steps.append(step)
@ -155,7 +160,7 @@ ci/acceptance_tests.sh"""),
def acceptance_docker_steps()-> list[typing.Any]:
steps = []
for flavor in ["full", "oss", "ubi", "wolfi"]:
for flavor in ["full", "oss", "ubi8"]:
steps.append({
"label": f":docker: {flavor} flavor acceptance",
"agents": gcp_agent(vm_name="ubuntu-2204", image_prefix="family/platform-ingest-logstash"),


@ -1,18 +0,0 @@
## Description
This package contains integration tests for the Health Report API.
Export `LS_BRANCH` to run on a specific branch. By default, it uses the main branch.
## How to run the Health Report Integration test?
### Prerequisites
Make sure you have python installed. Install the integration test dependencies with the following command:
```shell
python3 -mpip install -r .buildkite/scripts/health-report-tests/requirements.txt
```
### Run the integration tests
```shell
python3 .buildkite/scripts/health-report-tests/main.py
```
### Troubleshooting
- If you get `WARNING: pip is configured with locations that require TLS/SSL,...` warning message, make sure you have python >=3.12.4 installed.


@ -1,111 +0,0 @@
"""
Health Report Integration test bootstrapper with Python script
- A script to resolve Logstash version if not provided
- Download LS docker image and spin up
- When tests finished, teardown the Logstash
"""
import os
import subprocess
import time
import util
import yaml
class Bootstrap:
ELASTIC_STACK_RELEASED_VERSION_URL = "https://storage.googleapis.com/artifacts-api/releases/current/"
def __init__(self) -> None:
f"""
A constructor of the {Bootstrap}.
Returns:
Resolves Logstash branch considering provided LS_BRANCH
Checks out git branch
"""
logstash_branch = os.environ.get("LS_BRANCH")
if logstash_branch is None:
# version is not specified, use the main branch, no need to git checkout
print(f"LS_BRANCH is not specified, using main branch.")
else:
# LS_BRANCH accepts major latest as a major.x or specific branch as X.Y
if logstash_branch.find(".x") == -1:
print(f"Using specified branch: {logstash_branch}")
util.git_check_out_branch(logstash_branch)
else:
major_version = logstash_branch.split(".")[0]
if major_version and major_version.isnumeric():
resolved_version = self.__resolve_latest_stack_version_for(major_version)
minor_version = resolved_version.split(".")[1]
branch = major_version + "." + minor_version
print(f"Using resolved branch: {branch}")
util.git_check_out_branch(branch)
else:
raise ValueError(f"Invalid value set to LS_BRANCH. Please set it properly (ex: 8.x or 9.0) and "
f"rerun again")
def __resolve_latest_stack_version_for(self, major_version: str) -> str:
resp = util.call_url_with_retry(self.ELASTIC_STACK_RELEASED_VERSION_URL + major_version)
release_version = resp.text.strip()
print(f"Resolved latest version for {major_version} is {release_version}.")
if release_version == "":
raise ValueError(f"Cannot resolve latest version for {major_version} major")
return release_version
def install_plugin(self, plugin_path: str) -> None:
util.run_or_raise_error(
["bin/logstash-plugin", "install", plugin_path],
f"Failed to install {plugin_path}")
def build_logstash(self):
print(f"Building Logstash...")
util.run_or_raise_error(
["./gradlew", "clean", "bootstrap", "assemble", "installDefaultGems"],
"Failed to build Logstash")
print(f"Logstash has successfully built.")
def apply_config(self, config: dict) -> None:
with open(os.getcwd() + "/.buildkite/scripts/health-report-tests/config/pipelines.yml", 'w') as pipelines_file:
yaml.dump(config, pipelines_file)
def run_logstash(self, full_start_required: bool) -> subprocess.Popen:
# --config.reload.automatic keeps the instance active
# it is helpful when testing crash pipeline cases
config_path = os.getcwd() + "/.buildkite/scripts/health-report-tests/config"
process = subprocess.Popen(["bin/logstash", "--config.reload.automatic", "--path.settings", config_path,
"-w", "1"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, shell=False)
if process.poll() is not None:
print(f"Logstash failed to run, check the the config and logs, then rerun.")
return None
# Read stdout and stderr in real-time
logs = []
for stdout_line in iter(process.stdout.readline, ""):
logs.append(stdout_line.strip())
# we don't wait for Logstash to fully start, as we also test slow pipeline start scenarios
if full_start_required is False and "Starting pipeline" in stdout_line:
break
if full_start_required is True and "Pipeline started" in stdout_line:
break
if "Logstash shut down" in stdout_line or "Logstash stopped" in stdout_line:
print(f"Logstash couldn't spin up.")
print(logs)
return None
print(f"Logstash is running with PID: {process.pid}.")
return process
def stop_logstash(self, process: subprocess.Popen):
start_time = time.time() # in seconds
process.terminate()
for stdout_line in iter(process.stdout.readline, ""):
# print(f"STDOUT: {stdout_line.strip()}")
if "Logstash shut down" in stdout_line or "Logstash stopped" in stdout_line:
print(f"Logstash stopped.")
return None
# the shutdown watcher keeps running, so we track the elapsed time instead
if time.time() - start_time > 60:
print(f"Logstash didn't stop in 1min, sending SIGTERM signal.")
process.kill()
if time.time() - start_time > 70:
print(f"Logstash didn't stop over 1min, exiting.")
return None
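For orientation, a hypothetical driver for the class above (assumes the module is importable as `bootstrap` and the LS_BRANCH value is an example):

```python
# Hypothetical usage sketch of Bootstrap; names mirror the methods shown above.
import os
from bootstrap import Bootstrap

os.environ.setdefault("LS_BRANCH", "8.x")   # resolve the latest 8.x branch

bootstrap = Bootstrap()                      # resolves and checks out the branch
bootstrap.build_logstash()                   # gradle clean bootstrap assemble ...
process = bootstrap.run_logstash(full_start_required=True)
if process is not None:
    bootstrap.stop_logstash(process)
```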


@ -1 +0,0 @@
# Intentionally left blank


@ -1,70 +0,0 @@
import yaml
from typing import Any, List, Dict, Optional
class ConfigValidator:
REQUIRED_KEYS = {
"root": ["name", "config", "conditions", "expectation"],
"config": ["pipeline.id", "config.string"],
"conditions": ["full_start_required", "wait_seconds"],
"expectation": ["status", "symptom", "indicators"],
"indicators": ["pipelines"],
"pipelines": ["status", "symptom", "indicators"],
"DYNAMIC": ["status", "symptom", "diagnosis", "impacts", "details"], # pipeline-id is a DYNAMIC
"details": ["status"],
"status": ["state"]
}
def __init__(self):
self.yaml_content = None
def __has_valid_keys(self, data: Any, key_path: str, repeated: bool) -> bool:
# we reached the value
if isinstance(data, str) or isinstance(data, bool) or isinstance(data, int) or isinstance(data, float):
return True
# we have two indicators sections; for the subsequent repeated ones, we go deeper
first_key = next(iter(data))
data = data[first_key] if repeated and key_path == "indicators" else data
if isinstance(data, dict):
# pipeline-id is a DYNAMIC
required = self.REQUIRED_KEYS.get("DYNAMIC" if repeated and key_path == "indicators" else key_path, [])
repeated = not repeated if key_path == "indicators" else repeated
for key in required:
if key not in data:
print(f"Missing key '{key}' in '{key_path}'")
return False
else:
dic_keys_result = self.__has_valid_keys(data[key], key, repeated)
if dic_keys_result is False:
return False
elif isinstance(data, list):
for item in data:
list_keys_result = self.__has_valid_keys(item, key_path, repeated)
if list_keys_result is False:
return False
return True
def load(self, file_path: str) -> None:
"""Load the YAML file content into self.yaml_content."""
self.yaml_content: Optional[Dict[str, Any]] = None
try:
with open(file_path, 'r') as file:
self.yaml_content = yaml.safe_load(file)
except yaml.YAMLError as exc:
print(f"Error in YAML file: {exc}")
self.yaml_content = None
def is_valid(self) -> bool:
"""Validate the entire YAML structure."""
if self.yaml_content is None:
print(f"YAML content is empty.")
return False
if not isinstance(self.yaml_content, dict):
print(f"YAML structure is not as expected, it should start with a Dict.")
return False
result = self.__has_valid_keys(self.yaml_content, "root", False)
return result
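A usage sketch for the validator above (the scenario path is an assumption):

```python
# Hypothetical usage of ConfigValidator against one scenario file.
from config_validator import ConfigValidator

validator = ConfigValidator()
validator.load(".buildkite/scripts/health-report-tests/tests/slow-start.yaml")
if validator.is_valid():
    print("Scenario file structure is valid.")
```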


@ -1,16 +0,0 @@
"""
A class to provide information about Logstash node stats.
"""
import util
class LogstashHealthReport:
LOGSTASH_HEALTH_REPORT_URL = "http://localhost:9600/_health_report"
def __init__(self):
pass
def get(self):
response = util.call_url_with_retry(self.LOGSTASH_HEALTH_REPORT_URL)
return response.json()
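A sketch of calling this helper (assumes a Logstash instance is listening on localhost:9600):

```python
# Hypothetical usage: fetch the parsed health report and inspect its status.
from logstash_health_report import LogstashHealthReport

report = LogstashHealthReport().get()
print(report.get("status"), list(report.get("indicators", {})))
```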


@ -1,89 +0,0 @@
"""
Main entry point of the LS health report API integration test suites
"""
import glob
import os
import time
import traceback
import yaml
from bootstrap import Bootstrap
from scenario_executor import ScenarioExecutor
from config_validator import ConfigValidator
class BootstrapContextManager:
def __init__(self):
pass
def __enter__(self):
print(f"Starting Logstash Health Report Integration test.")
self.bootstrap = Bootstrap()
self.bootstrap.build_logstash()
plugin_path = os.getcwd() + "/qa/support/logstash-integration-failure_injector/logstash-integration" \
"-failure_injector-*.gem"
matching_files = glob.glob(plugin_path)
if len(matching_files) == 0:
raise ValueError(f"Could not find logstash-integration-failure_injector plugin.")
self.bootstrap.install_plugin(matching_files[0])
print(f"logstash-integration-failure_injector successfully installed.")
return self.bootstrap
def __exit__(self, exc_type, exc_value, exc_traceback):
if exc_type is not None:
print(traceback.format_exception(exc_type, exc_value, exc_traceback))
def main():
with BootstrapContextManager() as bootstrap:
scenario_executor = ScenarioExecutor()
config_validator = ConfigValidator()
working_dir = os.getcwd()
scenario_files_path = working_dir + "/.buildkite/scripts/health-report-tests/tests/*.yaml"
scenario_files = glob.glob(scenario_files_path)
for scenario_file in scenario_files:
print(f"Validating {scenario_file} scenario file.")
config_validator.load(scenario_file)
if config_validator.is_valid() is False:
print(f"{scenario_file} scenario file is not valid.")
return
else:
print(f"Validation succeeded.")
has_failed_scenario = False
for scenario_file in scenario_files:
with open(scenario_file, 'r') as file:
# scenario_content: Dict[str, Any] = None
scenario_content = yaml.safe_load(file)
print(f"Testing `{scenario_content.get('name')}` scenario.")
scenario_name = scenario_content['name']
is_full_start_required = scenario_content.get('conditions').get('full_start_required')
wait_seconds = scenario_content.get('conditions').get('wait_seconds')
config = scenario_content['config']
if config is not None:
bootstrap.apply_config(config)
expectations = scenario_content.get("expectation")
process = bootstrap.run_logstash(is_full_start_required)
if process is not None:
if wait_seconds is not None:
print(f"Test requires to wait for `{wait_seconds}` seconds.")
time.sleep(wait_seconds) # wait for Logstash to start
try:
scenario_executor.on(scenario_name, expectations)
except Exception as e:
print(e)
has_failed_scenario = True
bootstrap.stop_logstash(process)
if has_failed_scenario:
# intentionally fail for visibility
raise Exception("Some scenarios failed, check the logs for details.")
if __name__ == "__main__":
main()


@ -1,16 +0,0 @@
#!/bin/bash
set -euo pipefail
export PATH="/opt/buildkite-agent/.rbenv/bin:/opt/buildkite-agent/.pyenv/bin:/opt/buildkite-agent/.java/bin:$PATH"
export JAVA_HOME="/opt/buildkite-agent/.java"
export PYENV_VERSION="3.11.5"
eval "$(rbenv init -)"
eval "$(pyenv init -)"
echo "--- Installing dependencies"
python3 -m pip install -r .buildkite/scripts/health-report-tests/requirements.txt
echo "--- Running tests"
python3 .buildkite/scripts/health-report-tests/main.py


@ -1,2 +0,0 @@
requests==2.32.3
pyyaml==6.0.2


@ -1,67 +0,0 @@
"""
A class to execute the given scenario for Logstash Health Report integration test
"""
import time
from logstash_health_report import LogstashHealthReport
class ScenarioExecutor:
logstash_health_report_api = LogstashHealthReport()
def __init__(self):
pass
def __has_intersection(self, expects, results):
# TODO: this logic is aligned with the current Health API response
# there is no guarantee that this method behaves correctly when given multiple expects and results
# we expect each expected entry to be present in the results
for expect in expects:
for result in results:
if result.get('help_url') and "health-report-pipeline-" not in result.get('help_url'):
return False
if not all(key in result and result[key] == value for key, value in expect.items()):
return False
return True
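To make the subset check concrete, a data-shaped sketch mirroring the line above (values taken from the scenario YAMLs later in this diff):

```python
# Each expected mapping must be a key/value subset of the reported mapping.
expect = {"severity": 1, "description": "the pipeline is blocked"}
result = {"severity": 1, "description": "the pipeline is blocked",
          "impact_areas": ["pipeline_execution"]}
assert all(key in result and result[key] == value for key, value in expect.items())
```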
def __get_difference(self, differences: list, expectations: dict, reports: dict) -> dict:
for key in expectations.keys():
if type(expectations.get(key)) != type(reports.get(key)):
differences.append(f"Scenario expectation and Health API report structure differs for {key}.")
return differences
if isinstance(expectations.get(key), str):
if expectations.get(key) != reports.get(key):
differences.append({key: {"expected": expectations.get(key), "got": reports.get(key)}})
continue
elif isinstance(expectations.get(key), dict):
self.__get_difference(differences, expectations.get(key), reports.get(key))
elif isinstance(expectations.get(key), list):
if not self.__has_intersection(expectations.get(key), reports.get(key)):
differences.append({key: {"expected": expectations.get(key), "got": reports.get(key)}})
return differences
def __is_expected(self, expectations: dict) -> bool:
reports = self.logstash_health_report_api.get()
differences = self.__get_difference([], expectations, reports)
if differences:
print("Differences found in 'expectation' section between YAML content and stats:")
for diff in differences:
print(f"Difference: {diff}")
return False
else:
return True
def on(self, scenario_name: str, expectations: dict) -> None:
# retry the expectation check a few times before failing
attempts = 5
while self.__is_expected(expectations) is False:
attempts = attempts - 1
if attempts == 0:
break
time.sleep(1)
if attempts == 0:
raise Exception(f"{scenario_name} failed.")
else:
print(f"Scenario `{scenario_name}` expectaion meets the health report stats.")


@ -1,32 +0,0 @@
name: "Abnormally terminated pipeline"
config:
- pipeline.id: abnormally-terminated-pp
config.string: |
input { heartbeat { interval => 1 } }
filter { failure_injector { crash_at => filter } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 5
expectation:
status: "red"
symptom: "1 indicator is unhealthy (`pipelines`)"
indicators:
pipelines:
status: "red"
symptom: "1 indicator is unhealthy (`abnormally-terminated-pp`)"
indicators:
abnormally-terminated-pp:
status: "red"
symptom: "The pipeline is unhealthy; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is not running, likely because it has encountered an error"
action: "view logs to determine the cause of abnormal pipeline shutdown"
impacts:
- description: "the pipeline is not currently processing"
impact_areas: ["pipeline_execution"]
details:
status:
state: "TERMINATED"


@ -1,38 +0,0 @@
name: "Backpressured in 1min pipeline"
config:
- pipeline.id: backpressure-1m-pp
config.string: |
input { heartbeat { interval => 0.1 } }
filter { failure_injector { degrade_at => [filter] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 70 # give more seconds to make sure time is over the threshold, 1m in this case
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
indicators:
pipelines:
status: "yellow"
symptom: "1 indicator is concerning (`backpressure-1m-pp`)"
indicators:
backpressure-1m-pp:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- id: "logstash:health:pipeline:flow:worker_utilization:diagnosis:1m-blocked"
cause: "pipeline workers have been completely blocked for at least one minute"
action: "address bottleneck or add resources"
impacts:
- id: "logstash:health:pipeline:flow:impact:blocked_processing"
severity: 2
description: "the pipeline is blocked"
impact_areas: ["pipeline_execution"]
details:
status:
state: "RUNNING"
flow:
worker_utilization:
last_1_minute: 100.0


@ -1,39 +0,0 @@
name: "Backpressured in 5min pipeline"
config:
- pipeline.id: backpressure-5m-pp
config.string: |
input { heartbeat { interval => 0.1 } }
filter { failure_injector { degrade_at => [filter] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 310 # give more seconds to make sure time is over the threshold, 5m in this case
expectation:
status: "red"
symptom: "1 indicator is unhealthy (`pipelines`)"
indicators:
pipelines:
status: "red"
symptom: "1 indicator is unhealthy (`backpressure-5m-pp`)"
indicators:
backpressure-5m-pp:
status: "red"
symptom: "The pipeline is unhealthy; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- id: "logstash:health:pipeline:flow:worker_utilization:diagnosis:5m-blocked"
cause: "pipeline workers have been completely blocked for at least five minutes"
action: "address bottleneck or add resources"
impacts:
- id: "logstash:health:pipeline:flow:impact:blocked_processing"
severity: 1
description: "the pipeline is blocked"
impact_areas: ["pipeline_execution"]
details:
status:
state: "RUNNING"
flow:
worker_utilization:
last_1_minute: 100.0
last_5_minutes: 100.0


@ -1,67 +0,0 @@
name: "Multi pipeline"
config:
- pipeline.id: slow-start-pp-multipipeline
config.string: |
input { heartbeat {} }
filter { failure_injector { degrade_at => [register] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
- pipeline.id: normally-terminated-pp-multipipeline
config.string: |
input { generator { count => 1 } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
- pipeline.id: abnormally-terminated-pp-multipipeline
config.string: |
input { heartbeat { interval => 1 } }
filter { failure_injector { crash_at => filter } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: false
wait_seconds: 10
expectation:
status: "red"
symptom: "1 indicator is unhealthy (`pipelines`)"
indicators:
pipelines:
status: "red"
symptom: "1 indicator is unhealthy (`abnormally-terminated-pp-multipipeline`) and 2 indicators are concerning (`slow-start-pp-multipipeline`, `normally-terminated-pp-multipipeline`)"
indicators:
slow-start-pp-multipipeline:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is loading"
action: "if pipeline does not come up quickly, you may need to check the logs to see if it is stalled"
impacts:
- impact_areas: ["pipeline_execution"]
details:
status:
state: "LOADING"
normally-terminated-pp-multipipeline:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline has finished running because its inputs have been closed and events have been processed"
action: "if you expect this pipeline to run indefinitely, you will need to configure its inputs to continue receiving or fetching events"
impacts:
- impact_areas: [ "pipeline_execution" ]
details:
status:
state: "FINISHED"
abnormally-terminated-pp-multipipeline:
status: "red"
symptom: "The pipeline is unhealthy; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is not running, likely because it has encountered an error"
action: "view logs to determine the cause of abnormal pipeline shutdown"
impacts:
- description: "the pipeline is not currently processing"
impact_areas: [ "pipeline_execution" ]
details:
status:
state: "TERMINATED"


@ -1,30 +0,0 @@
name: "Successfully terminated pipeline"
config:
- pipeline.id: normally-terminated-pp
config.string: |
input { generator { count => 1 } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: true
wait_seconds: 5
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
indicators:
pipelines:
status: "yellow"
symptom: "1 indicator is concerning (`normally-terminated-pp`)"
indicators:
normally-terminated-pp:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline has finished running because its inputs have been closed and events have been processed"
action: "if you expect this pipeline to run indefinitely, you will need to configure its inputs to continue receiving or fetching events"
impacts:
- impact_areas: ["pipeline_execution"]
details:
status:
state: "FINISHED"


@ -1,31 +0,0 @@
name: "Slow start pipeline"
config:
- pipeline.id: slow-start-pp
config.string: |
input { heartbeat {} }
filter { failure_injector { degrade_at => [register] } }
output { stdout {} }
pipeline.workers: 1
pipeline.batch.size: 1
conditions:
full_start_required: false
wait_seconds: 0
expectation:
status: "yellow"
symptom: "1 indicator is concerning (`pipelines`)"
indicators:
pipelines:
status: "yellow"
symptom: "1 indicator is concerning (`slow-start-pp`)"
indicators:
slow-start-pp:
status: "yellow"
symptom: "The pipeline is concerning; 1 area is impacted and 1 diagnosis is available"
diagnosis:
- cause: "pipeline is loading"
action: "if pipeline does not come up quickly, you may need to check the logs to see if it is stalled"
impacts:
- impact_areas: ["pipeline_execution"]
details:
status:
state: "LOADING"


@ -1,36 +0,0 @@
import os
import requests
import subprocess
from requests.adapters import HTTPAdapter, Retry
def call_url_with_retry(url: str, max_retries: int = 5, delay: int = 1) -> requests.Response:
f"""
Calls the given {url} with maximum of {max_retries} retries with {delay} delay.
"""
schema = "https://" if "https://" in url else "http://"
session = requests.Session()
# retry on most common failures such as connection timeout(408), etc...
retries = Retry(total=max_retries, backoff_factor=delay, status_forcelist=[408, 502, 503, 504])
session.mount(schema, HTTPAdapter(max_retries=retries))
return session.get(url)
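A usage sketch (the endpoint is an assumption; it matches the health report URL used elsewhere in this diff):

```python
# The retry-mounted session transparently retries 408/502/503/504 responses
# with exponential backoff before this call returns.
response = call_url_with_retry("http://localhost:9600/_health_report")
print(response.status_code)
```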
def git_check_out_branch(branch_name: str) -> None:
f"""
Checks out specified branch or fails with error if checkout operation fails.
"""
run_or_raise_error(["git", "checkout", branch_name],
"Error occurred while checking out the " + branch_name + " branch")
def run_or_raise_error(commands: list, error_message):
f"""
Executes the {list} commands and raises an {Exception} if opration fails.
"""
result = subprocess.run(commands, env=os.environ.copy(), universal_newlines=True, stdout=subprocess.PIPE)
if result.returncode != 0:
full_error_message = (error_message + ", output: " + result.stdout) \
if result.stdout else error_message  # stdout is already text (universal_newlines=True)
raise Exception(f"{full_error_message}")


@ -4,7 +4,6 @@ from dataclasses import dataclass, field
import os
import sys
import typing
from functools import partial
from ruamel.yaml import YAML
from ruamel.yaml.scalarstring import LiteralScalarString
@ -178,15 +177,17 @@ class LinuxJobs(Jobs):
super().__init__(os=os, jdk=jdk, group_key=group_key, agent=agent)
def all_jobs(self) -> list[typing.Callable[[], JobRetValues]]:
jobs=list()
jobs.append(self.init_annotation)
jobs.append(self.java_unit_test)
jobs.append(self.ruby_unit_test)
jobs.extend(self.integration_test_parts(3))
jobs.extend(self.pq_integration_test_parts(3))
jobs.append(self.x_pack_unit_tests)
jobs.append(self.x_pack_integration)
return jobs
return [
self.init_annotation,
self.java_unit_test,
self.ruby_unit_test,
self.integration_tests_part_1,
self.integration_tests_part_2,
self.pq_integration_tests_part_1,
self.pq_integration_tests_part_2,
self.x_pack_unit_tests,
self.x_pack_integration,
]
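For context, a standalone sketch of the `functools.partial` pattern this refactor relies on: one parameterized test method expands into N zero-argument callables (the function body is simplified for illustration):

```python
from functools import partial

def integration_tests(part: int, parts: int) -> str:
    # simplified stand-in for the JobRetValues-producing method
    return f"ci/integration_tests.sh split {part - 1} {parts}"

jobs = [partial(integration_tests, part=idx + 1, parts=3) for idx in range(3)]
print([job() for job in jobs])
# ['ci/integration_tests.sh split 0 3', 'ci/integration_tests.sh split 1 3',
#  'ci/integration_tests.sh split 2 3']
```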
def prepare_shell(self) -> str:
jdk_dir = f"/opt/buildkite-agent/.java/{self.jdk}"
@ -258,14 +259,17 @@ ci/unit_tests.sh ruby
retry=copy.deepcopy(ENABLED_RETRIES),
)
def integration_test_parts(self, parts) -> list[partial[JobRetValues]]:
return [partial(self.integration_tests, part=idx+1, parts=parts) for idx in range(parts)]
def integration_tests_part_1(self) -> JobRetValues:
return self.integration_tests(part=1)
def integration_tests(self, part: int, parts: int) -> JobRetValues:
step_name_human = f"Integration Tests - {part}/{parts}"
step_key = f"{self.group_key}-integration-tests-{part}-of-{parts}"
def integration_tests_part_2(self) -> JobRetValues:
return self.integration_tests(part=2)
def integration_tests(self, part: int) -> JobRetValues:
step_name_human = f"Integration Tests - {part}"
step_key = f"{self.group_key}-integration-tests-{part}"
test_command = f"""
ci/integration_tests.sh split {part-1} {parts}
ci/integration_tests.sh split {part-1}
"""
return JobRetValues(
@ -277,15 +281,18 @@ ci/integration_tests.sh split {part-1} {parts}
retry=copy.deepcopy(ENABLED_RETRIES),
)
def pq_integration_test_parts(self, parts) -> list[partial[JobRetValues]]:
return [partial(self.pq_integration_tests, part=idx+1, parts=parts) for idx in range(parts)]
def pq_integration_tests_part_1(self) -> JobRetValues:
return self.pq_integration_tests(part=1)
def pq_integration_tests(self, part: int, parts: int) -> JobRetValues:
step_name_human = f"IT Persistent Queues - {part}/{parts}"
step_key = f"{self.group_key}-it-persistent-queues-{part}-of-{parts}"
def pq_integration_tests_part_2(self) -> JobRetValues:
return self.pq_integration_tests(part=2)
def pq_integration_tests(self, part: int) -> JobRetValues:
step_name_human = f"IT Persistent Queues - {part}"
step_key = f"{self.group_key}-it-persistent-queues-{part}"
test_command = f"""
export FEATURE_FLAG=persistent_queues
ci/integration_tests.sh split {part-1} {parts}
ci/integration_tests.sh split {part-1}
"""
return JobRetValues(


@ -4,7 +4,7 @@ set -e
install_java() {
# TODO: let's think about regularly creating a custom image for Logstash which may align on version.yml definitions
sudo apt update && sudo apt install -y openjdk-21-jdk && sudo apt install -y openjdk-21-jre
sudo apt update && sudo apt install -y openjdk-17-jdk && sudo apt install -y openjdk-17-jre
}
install_java


@ -4,13 +4,22 @@ set -e
TARGET_BRANCHES=("main")
install_java_11() {
curl -L -s "https://github.com/adoptium/temurin11-binaries/releases/download/jdk-11.0.24%2B8/OpenJDK11U-jdk_x64_linux_hotspot_11.0.24_8.tar.gz" | tar -zxf -
install_java() {
# TODO: let's think about using BK agent which has Java installed
# The current caveat is that the Logstash BK agent doesn't support docker operations
sudo apt update && sudo apt install -y openjdk-17-jdk && sudo apt install -y openjdk-17-jre
}
# Resolves the branches we are going to track
resolve_latest_branches() {
source .buildkite/scripts/snyk/resolve_stack_version.sh
for SNAPSHOT_VERSION in "${SNAPSHOT_VERSIONS[@]}"
do
IFS='.'
read -a versions <<< "$SNAPSHOT_VERSION"
version=${versions[0]}.${versions[1]}
TARGET_BRANCHES+=("$version")
done
}
# Build Logstash specific branch to generate Gemlock file where Snyk scans
@ -33,7 +42,7 @@ download_auth_snyk() {
report() {
REMOTE_REPO_URL=$1
echo "Reporting $REMOTE_REPO_URL branch."
if [ "$REMOTE_REPO_URL" != "main" ] && [ "$REMOTE_REPO_URL" != "8.x" ]; then
if [ "$REMOTE_REPO_URL" != "main" ]; then
MAJOR_VERSION=$(echo "$REMOTE_REPO_URL"| cut -d'.' -f 1)
REMOTE_REPO_URL="$MAJOR_VERSION".latest
echo "Using '$REMOTE_REPO_URL' remote repo url."
@ -46,18 +55,13 @@ report() {
./snyk monitor --prune-repeated-subdependencies --all-projects --org=logstash --remote-repo-url="$REMOTE_REPO_URL" --target-reference="$REMOTE_REPO_URL" --detection-depth=6 --exclude=qa,tools,devtools,requirements.txt --project-tags=branch="$TARGET_BRANCH",git_head="$GIT_HEAD" || :
}
install_java
resolve_latest_branches
download_auth_snyk
# clone Logstash repo, build and report
for TARGET_BRANCH in "${TARGET_BRANCHES[@]}"
do
if [ "$TARGET_BRANCH" == "7.17" ]; then
echo "Installing and configuring JDK11."
export OLD_PATH=$PATH
install_java_11
export PATH=$PWD/jdk-11.0.24+8/bin:$PATH
fi
git reset --hard HEAD # reset if any generated files appeared
# check if target branch exists
echo "Checking out $TARGET_BRANCH branch."
@ -67,10 +71,70 @@ do
else
echo "$TARGET_BRANCH branch doesn't exist."
fi
if [ "$TARGET_BRANCH" == "7.17" ]; then
# reset state
echo "Removing JDK11 installation."
rm -rf jdk-11.0.24+8
export PATH=$OLD_PATH
fi
done
# Scan Logstash docker images and report
REPOSITORY_BASE_URL="docker.elastic.co/logstash/"
report_docker_image() {
image=$1
project_name=$2
platform=$3
echo "Reporting $image to Snyk started..."
docker pull "$image"
if [[ $platform != null ]]; then
./snyk container monitor "$image" --org=logstash --platform="$platform" --project-name="$project_name" --project-tags=version="$version" || :
else
./snyk container monitor "$image" --org=logstash --project-name="$project_name" --project-tags=version="$version" || :
fi
}
report_docker_images() {
version=$1
echo "Version value: $version"
image=$REPOSITORY_BASE_URL"logstash:$version-SNAPSHOT"
snyk_project_name="logstash-$version-SNAPSHOT"
report_docker_image "$image" "$snyk_project_name"
image=$REPOSITORY_BASE_URL"logstash-oss:$version-SNAPSHOT"
snyk_project_name="logstash-oss-$version-SNAPSHOT"
report_docker_image "$image" "$snyk_project_name"
image=$REPOSITORY_BASE_URL"logstash:$version-SNAPSHOT-arm64"
snyk_project_name="logstash-$version-SNAPSHOT-arm64"
report_docker_image "$image" "$snyk_project_name" "linux/arm64"
image=$REPOSITORY_BASE_URL"logstash:$version-SNAPSHOT-amd64"
snyk_project_name="logstash-$version-SNAPSHOT-amd64"
report_docker_image "$image" "$snyk_project_name" "linux/amd64"
image=$REPOSITORY_BASE_URL"logstash-oss:$version-SNAPSHOT-arm64"
snyk_project_name="logstash-oss-$version-SNAPSHOT-arm64"
report_docker_image "$image" "$snyk_project_name" "linux/arm64"
image=$REPOSITORY_BASE_URL"logstash-oss:$version-SNAPSHOT-amd64"
snyk_project_name="logstash-oss-$version-SNAPSHOT-amd64"
report_docker_image "$image" "$snyk_project_name" "linux/amd64"
}
resolve_version_and_report_docker_images() {
git reset --hard HEAD # reset if any generated files appeared
git checkout "$1"
# parse version (ex: 8.8.2 from 8.8 branch, or 8.9.0 from main branch)
versions_file="$PWD/versions.yml"
version=$(awk '/logstash:/ { print $2 }' "$versions_file")
report_docker_images "$version"
}
# resolve docker artifact and report
#for TARGET_BRANCH in "${TARGET_BRANCHES[@]}"
#do
# if git show-ref --quiet refs/heads/"$TARGET_BRANCH"; then
# echo "Using $TARGET_BRANCH branch for docker images."
# resolve_version_and_report_docker_images "$TARGET_BRANCH"
# else
# echo "$TARGET_BRANCH branch doesn't exist."
# fi
#done


@ -6,9 +6,14 @@
set -e
VERSION_URL="https://storage.googleapis.com/artifacts-api/snapshots/branches.json"
VERSION_URL="https://raw.githubusercontent.com/elastic/logstash/main/ci/logstash_releases.json"
echo "Fetching versions from $VERSION_URL"
readarray -t TARGET_BRANCHES < <(curl --retry-all-errors --retry 5 --retry-delay 5 -fsSL $VERSION_URL | jq -r '.branches[]')
echo "${TARGET_BRANCHES[@]}"
VERSIONS=$(curl --silent $VERSION_URL)
SNAPSHOT_KEYS=$(echo "$VERSIONS" | jq -r '.snapshots | .[]')
SNAPSHOT_VERSIONS=()
while IFS= read -r line; do
SNAPSHOT_VERSIONS+=("$line")
echo "Resolved snapshot version: $line"
done <<< "$SNAPSHOT_KEYS"


@ -1,14 +1,12 @@
agents:
image: "docker.elastic.co/ci-agent-images/platform-ingest/buildkite-agent-logstash-ci"
cpu: "2"
memory: "4Gi"
ephemeralStorage: "64Gi"
provider: gcp
imageProject: elastic-images-prod
image: family/platform-ingest-logstash-ubuntu-2204
machineType: "n2-standard-4"
diskSizeGb: 120
steps:
# reports main, previous (ex: 7.latest) and current (ex: 8.latest) release branches to Snyk
- label: ":hammer: Report to Snyk"
command:
- .buildkite/scripts/snyk/report.sh
retry:
automatic:
- limit: 3
- .buildkite/scripts/snyk/report.sh


@ -2,7 +2,7 @@
env:
DEFAULT_MATRIX_OS: "windows-2022"
DEFAULT_MATRIX_JDK: "adoptiumjdk_21"
DEFAULT_MATRIX_JDK: "adoptiumjdk_17"
steps:
- input: "Test Parameters"
@ -15,8 +15,6 @@ steps:
multiple: true
default: "${DEFAULT_MATRIX_OS}"
options:
- label: "Windows 2025"
value: "windows-2025"
- label: "Windows 2022"
value: "windows-2022"
- label: "Windows 2019"
@ -35,14 +33,20 @@ steps:
value: "adoptiumjdk_21"
- label: "Adoptium JDK 17 (Eclipse Temurin)"
value: "adoptiumjdk_17"
- label: "Adoptium JDK 11 (Eclipse Temurin)"
value: "adoptiumjdk_11"
- label: "OpenJDK 21"
value: "openjdk_21"
- label: "OpenJDK 17"
value: "openjdk_17"
- label: "OpenJDK 11"
value: "openjdk_11"
- label: "Zulu 21"
value: "zulu_21"
- label: "Zulu 17"
value: "zulu_17"
- label: "Zulu 11"
value: "zulu_11"
- wait: ~
if: build.source != "schedule" && build.source != "trigger_job"


@ -1,42 +0,0 @@
.SILENT:
MAKEFLAGS += --no-print-directory
.SHELLFLAGS = -euc
SHELL = /bin/bash
#######################
## Templates
#######################
## Mergify template
define MERGIFY_TMPL
- name: backport patches to $(BRANCH) branch
conditions:
- merged
- base=main
- label=$(BACKPORT_LABEL)
actions:
backport:
branches:
- "$(BRANCH)"
endef
# Add mergify entry for the new backport label
.PHONY: mergify
export MERGIFY_TMPL
mergify: BACKPORT_LABEL=$${BACKPORT_LABEL} BRANCH=$${BRANCH} PUSH_BRANCH=$${PUSH_BRANCH}
mergify:
@echo ">> mergify"
echo "$$MERGIFY_TMPL" >> ../.mergify.yml
git add ../.mergify.yml
git status
if [ ! -z "$$(git status --porcelain)" ]; then \
git commit -m "mergify: add $(BACKPORT_LABEL) rule"; \
git push origin $(PUSH_BRANCH) ; \
fi
# Create GitHub backport label
.PHONY: backport-label
backport-label: BACKPORT_LABEL=$${BACKPORT_LABEL}
backport-label:
@echo ">> backport-label"
gh label create $(BACKPORT_LABEL) --description "Automated backport with mergify" --color 0052cc --force


@ -1,2 +1,2 @@
LS_BUILD_JAVA=adoptiumjdk_21
LS_RUNTIME_JAVA=adoptiumjdk_21
LS_BUILD_JAVA=adoptiumjdk_17
LS_RUNTIME_JAVA=adoptiumjdk_17


@ -21,6 +21,10 @@ analyze:
type: gradle
target: 'dependencies-report:'
path: .
- name: ingest-converter
type: gradle
target: 'ingest-converter:'
path: .
- name: logstash-core
type: gradle
target: 'logstash-core:'


@ -21,5 +21,6 @@ to reproduce locally
**Failure history**:
<!--
Link to build stats and possible indication of when this started failing and how often it fails
<https://build-stats.elastic.co/app/kibana>
-->
**Failure excerpt**:


@ -1,18 +0,0 @@
---
version: 2
updates:
- package-ecosystem: "github-actions"
directories:
- '/'
- '/.github/actions/*'
schedule:
interval: "weekly"
day: "sunday"
time: "22:00"
reviewers:
- "elastic/observablt-ci"
- "elastic/observablt-ci-contractors"
groups:
github-actions:
patterns:
- "*"


@ -0,0 +1,33 @@
name: Docs Preview Link
on:
pull_request_target:
types: [opened, synchronize]
paths:
- docs/**
- docsk8s/**
jobs:
docs-preview-link:
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- id: wait-for-status
uses: autotelic/action-wait-for-status-check@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
owner: elastic
# when running with on: pull_request_target we get the PR base ref by default
ref: ${{ github.event.pull_request.head.sha }}
statusName: "buildkite/docs-build-pr"
# https://elasticsearch-ci.elastic.co/job/elastic+logstash+pull-request+build-docs
# usually finishes in ~ 20 minutes
timeoutSeconds: 900
intervalSeconds: 30
- name: Add Docs Preview link in PR Comment
if: steps.wait-for-status.outputs.state == 'success'
uses: thollander/actions-comment-pull-request@v1
with:
message: |
:page_with_curl: **DOCS PREVIEW** :sparkles: https://logstash_bk_${{ github.event.number }}.docs-preview.app.elstc.co/diff
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@ -1,22 +0,0 @@
name: Backport to active branches
on:
pull_request_target:
types: [closed]
branches:
- main
permissions:
pull-requests: write
contents: read
jobs:
backport:
# Only run if the PR was merged (not just closed) and has one of the backport labels
if: |
github.event.pull_request.merged == true &&
contains(toJSON(github.event.pull_request.labels.*.name), 'backport-active-')
runs-on: ubuntu-latest
steps:
- uses: elastic/oblt-actions/github/backport-active@v1


@ -1,19 +0,0 @@
name: docs-build
on:
push:
branches:
- main
pull_request_target: ~
merge_group: ~
jobs:
docs-preview:
uses: elastic/docs-builder/.github/workflows/preview-build.yml@main
with:
path-pattern: docs/**
permissions:
deployments: write
id-token: write
contents: read
pull-requests: read


@ -1,14 +0,0 @@
name: docs-cleanup
on:
pull_request_target:
types:
- closed
jobs:
docs-preview:
uses: elastic/docs-builder/.github/workflows/preview-cleanup.yml@main
permissions:
contents: none
id-token: write
deployments: write


@ -1,23 +0,0 @@
name: mergify backport labels copier
on:
pull_request:
types:
- opened
permissions:
contents: read
jobs:
mergify-backport-labels-copier:
runs-on: ubuntu-latest
if: startsWith(github.head_ref, 'mergify/bp/')
permissions:
# Add GH labels
pull-requests: write
# See https://github.com/cli/cli/issues/6274
repository-projects: read
steps:
- uses: elastic/oblt-actions/mergify/labels-copier@v1
with:
excluded-labels-regex: "^backport-*"

.github/workflows/pr_backporter.yml

@ -0,0 +1,49 @@
name: Backport PR to another branch
on:
issue_comment:
types: [created]
permissions:
pull-requests: write
contents: write
jobs:
pr_commented:
name: PR comment
if: github.event.issue.pull_request
runs-on: ubuntu-latest
steps:
- uses: actions-ecosystem/action-regex-match@v2
id: regex-match
with:
text: ${{ github.event.comment.body }}
regex: '^@logstashmachine backport (main|[x0-9\.]+)$'
- if: ${{ steps.regex-match.outputs.group1 == '' }}
run: exit 1
- name: Fetch logstash-core team member list
uses: tspascoal/get-user-teams-membership@v1
id: checkUserMember
with:
username: ${{ github.actor }}
organization: elastic
team: logstash
GITHUB_TOKEN: ${{ secrets.READ_ORG_SECRET_JSVD }}
- name: Is user not a core team member?
if: ${{ steps.checkUserMember.outputs.isTeamMember == 'false' }}
run: exit 1
- name: checkout repo content
uses: actions/checkout@v2
with:
fetch-depth: 0
ref: 'main'
- run: git config --global user.email "43502315+logstashmachine@users.noreply.github.com"
- run: git config --global user.name "logstashmachine"
- name: setup python
uses: actions/setup-python@v2
with:
python-version: 3.8
- run: |
mkdir ~/.elastic && echo ${{ github.token }} >> ~/.elastic/github.token
- run: pip install requests
- name: run backport
run: python devtools/backport ${{ steps.regex-match.outputs.group1 }} ${{ github.event.issue.number }} --remote=origin --yes


@ -1,18 +0,0 @@
name: pre-commit
on:
pull_request:
push:
branches:
- main
- 8.*
- 9.*
permissions:
contents: read
jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: elastic/oblt-actions/pre-commit@v1


@ -25,13 +25,9 @@ jobs:
version_bumper:
name: Bump versions
runs-on: ubuntu-latest
env:
INPUTS_BRANCH: "${{ inputs.branch }}"
INPUTS_BUMP: "${{ inputs.bump }}"
BACKPORT_LABEL: "backport-${{ inputs.branch }}"
steps:
- name: Fetch logstash-core team member list
uses: tspascoal/get-user-teams-membership@57e9f42acd78f4d0f496b3be4368fc5f62696662 #v3.0.0
uses: tspascoal/get-user-teams-membership@v1
with:
username: ${{ github.actor }}
organization: elastic
@ -41,14 +37,14 @@ jobs:
if: ${{ steps.checkUserMember.outputs.isTeamMember == 'false' }}
run: exit 1
- name: checkout repo content
uses: actions/checkout@v4
uses: actions/checkout@v2
with:
fetch-depth: 0
ref: ${{ env.INPUTS_BRANCH }}
ref: ${{ github.event.inputs.branch }}
- run: git config --global user.email "43502315+logstashmachine@users.noreply.github.com"
- run: git config --global user.name "logstashmachine"
- run: ./gradlew clean installDefaultGems
- run: ./vendor/jruby/bin/jruby -S bundle update --all --${{ env.INPUTS_BUMP }} --strict
- run: ./vendor/jruby/bin/jruby -S bundle update --all --${{ github.event.inputs.bump }} --strict
- run: mv Gemfile.lock Gemfile.jruby-*.lock.release
- run: echo "T=$(date +%s)" >> $GITHUB_ENV
- run: echo "BRANCH=update_lock_${T}" >> $GITHUB_ENV
@ -57,21 +53,8 @@ jobs:
git add .
git status
if [[ -z $(git status --porcelain) ]]; then echo "No changes. We're done."; exit 0; fi
git commit -m "Update ${{ env.INPUTS_BUMP }} plugin versions in gemfile lock" -a
git commit -m "Update ${{ github.event.inputs.bump }} plugin versions in gemfile lock" -a
git push origin $BRANCH
- name: Update mergify (minor only)
if: ${{ inputs.bump == 'minor' }}
continue-on-error: true
run: make -C .ci mergify BACKPORT_LABEL=$BACKPORT_LABEL BRANCH=$INPUTS_BRANCH PUSH_BRANCH=$BRANCH
- name: Create Pull Request
run: |
curl -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -X POST -d "{\"title\": \"bump lock file for ${{ env.INPUTS_BRANCH }}\",\"head\": \"${BRANCH}\",\"base\": \"${{ env.INPUTS_BRANCH }}\"}" https://api.github.com/repos/elastic/logstash/pulls
- name: Create GitHub backport label (Mergify) (minor only)
if: ${{ inputs.bump == 'minor' }}
continue-on-error: true
run: make -C .ci backport-label BACKPORT_LABEL=$BACKPORT_LABEL
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
curl -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" -X POST -d "{\"title\": \"bump lock file for ${{ github.event.inputs.branch }}\",\"head\": \"${BRANCH}\",\"base\": \"${{ github.event.inputs.branch }}\"}" https://api.github.com/repos/elastic/logstash/pulls


@ -1,132 +0,0 @@
commands_restrictions:
backport:
conditions:
- or:
- sender-permission>=write
- sender=github-actions[bot]
defaults:
actions:
backport:
title: "[{{ destination_branch }}] (backport #{{ number }}) {{ title }}"
assignees:
- "{{ author }}"
labels:
- "backport"
pull_request_rules:
# - name: ask to resolve conflict
# conditions:
# - conflict
# actions:
# comment:
# message: |
# This pull request is now in conflicts. Could you fix it @{{author}}? 🙏
# To fixup this pull request, you can check out it locally. See documentation: https://help.github.com/articles/checking-out-pull-requests-locally/
# ```
# git fetch upstream
# git checkout -b {{head}} upstream/{{head}}
# git merge upstream/{{base}}
# git push upstream {{head}}
# ```
- name: notify the backport policy
conditions:
- -label~=^backport
- base=main
actions:
comment:
message: |
This pull request does not have a backport label. Could you fix it @{{author}}? 🙏
To fixup this pull request, you need to add the backport labels for the needed
branches, such as:
* `backport-8./d` is the label to automatically backport to the `8./d` branch. `/d` is the digit.
* If no backport is necessary, please add the `backport-skip` label
- name: remove backport-skip label
conditions:
- label~=^backport-\d
actions:
label:
remove:
- backport-skip
- name: notify the backport has not been merged yet
conditions:
- -merged
- -closed
- author=mergify[bot]
- "#check-success>0"
- schedule=Mon-Mon 06:00-10:00[Europe/Paris]
actions:
comment:
message: |
This pull request has not been merged yet. Could you please review and merge it @{{ assignee | join(', @') }}? 🙏
- name: backport patches to 8.16 branch
conditions:
- merged
- base=main
- label=backport-8.16
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "8.16"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
- name: backport patches to 8.17 branch
conditions:
- merged
- base=main
- label=backport-8.17
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "8.17"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
- name: backport patches to 8.18 branch
conditions:
- merged
- base=main
- label=backport-8.18
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "8.18"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
- name: backport patches to 8.19 branch
conditions:
- merged
- base=main
- label=backport-8.19
actions:
backport:
branches:
- "8.19"
- name: backport patches to 9.0 branch
conditions:
- merged
- base=main
- label=backport-9.0
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "9.0"
labels:
- "backport"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"


@ -1,6 +0,0 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
hooks:
- id: check-merge-conflict
args: ['--assume-in-merge']


@ -1 +1 @@
jruby-9.4.9.0
jruby-9.3.10.0

Dockerfile

@ -0,0 +1,64 @@
FROM ubuntu:bionic
RUN apt-get update && \
apt-get install -y zlib1g-dev build-essential vim rake git curl libssl-dev libreadline-dev libyaml-dev \
libxml2-dev libxslt-dev openjdk-11-jdk-headless curl iputils-ping netcat && \
apt-get clean
WORKDIR /root
RUN adduser --disabled-password --gecos "" --home /home/logstash logstash && \
mkdir -p /usr/local/share/ruby-build && \
mkdir -p /opt/logstash && \
mkdir -p /opt/logstash/data && \
mkdir -p /mnt/host && \
chown logstash:logstash /opt/logstash
USER logstash
WORKDIR /home/logstash
# used by the purge policy
LABEL retention="keep"
# Set up the gradle wrapper. When running any `gradle` command, a `settings.gradle` is expected (and will soon be required).
# This section adds the gradle wrapper and `settings.gradle`, and sets the permissions (switching the user to root for
# `chown` and changing the working directory to allow this), then reverts to the previous working directory and user.
COPY --chown=logstash:logstash gradlew /opt/logstash/gradlew
COPY --chown=logstash:logstash gradle/wrapper /opt/logstash/gradle/wrapper
COPY --chown=logstash:logstash settings.gradle /opt/logstash/settings.gradle
WORKDIR /opt/logstash
RUN for iter in `seq 1 10`; do ./gradlew wrapper --warning-mode all && exit_code=0 && break || exit_code=$? && echo "gradlew error: retry $iter in 10s" && sleep 10; done; exit $exit_code
WORKDIR /home/logstash
COPY versions.yml /opt/logstash/versions.yml
COPY LICENSE.txt /opt/logstash/LICENSE.txt
COPY NOTICE.TXT /opt/logstash/NOTICE.TXT
COPY licenses /opt/logstash/licenses
COPY CONTRIBUTORS /opt/logstash/CONTRIBUTORS
COPY Gemfile.template Gemfile.jruby-3.1.lock.* /opt/logstash/
COPY Rakefile /opt/logstash/Rakefile
COPY build.gradle /opt/logstash/build.gradle
COPY rubyUtils.gradle /opt/logstash/rubyUtils.gradle
COPY rakelib /opt/logstash/rakelib
COPY config /opt/logstash/config
COPY spec /opt/logstash/spec
COPY qa /opt/logstash/qa
COPY lib /opt/logstash/lib
COPY pkg /opt/logstash/pkg
COPY buildSrc /opt/logstash/buildSrc
COPY tools /opt/logstash/tools
COPY logstash-core /opt/logstash/logstash-core
COPY logstash-core-plugin-api /opt/logstash/logstash-core-plugin-api
COPY bin /opt/logstash/bin
COPY modules /opt/logstash/modules
COPY x-pack /opt/logstash/x-pack
COPY ci /opt/logstash/ci
USER root
RUN rm -rf build && \
mkdir -p build && \
chown -R logstash:logstash /opt/logstash
USER logstash
WORKDIR /opt/logstash
LABEL retention="prune"

Dockerfile.base

@ -0,0 +1,22 @@
#logstash-base image, use ci/docker_update_base_image.sh to push updates
FROM ubuntu:bionic
RUN apt-get update && \
apt-get install -y zlib1g-dev build-essential vim rake git curl libssl-dev libreadline-dev libyaml-dev \
libxml2-dev libxslt-dev openjdk-11-jdk-headless curl iputils-ping netcat && \
apt-get clean
WORKDIR /root
RUN adduser --disabled-password --gecos "" --home /home/logstash logstash && \
mkdir -p /usr/local/share/ruby-build && \
mkdir -p /opt/logstash && \
mkdir -p /mnt/host && \
chown logstash:logstash /opt/logstash
USER logstash
WORKDIR /home/logstash
RUN mkdir -p /opt/logstash/data
# used by the purge policy
LABEL retention="keep"

File diff suppressed because it is too large

@ -13,7 +13,7 @@ gem "ruby-maven-libs", "~> 3", ">= 3.9.6.1"
gem "logstash-output-elasticsearch", ">= 11.14.0"
gem "polyglot", require: false
gem "treetop", require: false
gem "minitar", "~> 1", :group => :build
gem "faraday", "~> 1", :require => false # due elasticsearch-transport (elastic-transport) depending faraday '~> 1'
gem "childprocess", "~> 4", :group => :build
gem "fpm", "~> 1", ">= 1.14.1", :group => :build # compound due to bugfix https://github.com/jordansissel/fpm/pull/1856
gem "gems", "~> 1", :group => :build
@ -26,11 +26,10 @@ gem "stud", "~> 0.0.22", :group => :build
gem "fileutils", "~> 1.7"
gem "rubocop", :group => :development
# rubocop-ast 1.43.0 carries a dep on `prism` which requires native c extensions
gem 'rubocop-ast', '= 1.42.0', :group => :development
gem "belzebuth", :group => :development
gem "benchmark-ips", :group => :development
gem "ci_reporter_rspec", "~> 1", :group => :development
gem "rexml", "3.2.6", :group => :development
gem "flores", "~> 0.0.8", :group => :development
gem "json-schema", "~> 2", :group => :development
gem "logstash-devutils", "~> 2.6.0", :group => :development
@ -41,9 +40,5 @@ gem "simplecov", "~> 0.22.0", :group => :development
gem "simplecov-json", require: false, :group => :development
gem "jar-dependencies", "= 0.4.1" # Gem::LoadError with jar-dependencies 0.4.2
gem "murmurhash3", "= 0.1.6" # Pins until version 0.1.7-java is released
gem "date", "= 3.3.3"
gem "thwait"
gem "bigdecimal", "~> 3.1"
gem "psych", "5.2.2"
gem "cgi", "0.3.7" # Pins until a new jruby version with updated cgi is released
gem "uri", "0.12.3" # Pins until a new jruby version with updated cgi is released

NOTICE.TXT

File diff suppressed because it is too large

@ -20,6 +20,7 @@ supported platforms, from [downloads page](https://www.elastic.co/downloads/logs
- [Logstash Forum](https://discuss.elastic.co/c/logstash)
- [Logstash Documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
- [#logstash on freenode IRC](https://webchat.freenode.net/?channels=logstash)
- [Logstash Product Information](https://www.elastic.co/products/logstash)
- [Elastic Support](https://www.elastic.co/subscriptions)

bin/ingest-convert.bat

@ -0,0 +1,10 @@
@echo off
setlocal enabledelayedexpansion
cd /d "%~dp0\.."
for /f %%i in ('cd') do set RESULT=%%i
"%JAVACMD%" -cp "!RESULT!\tools\ingest-converter\build\libs\ingest-converter.jar;*" ^
org.logstash.ingest.Pipeline %*
endlocal

bin/ingest-convert.sh

@ -0,0 +1,4 @@
#!/usr/bin/env bash
java -cp "$(cd `dirname $0`/..; pwd)"'/tools/ingest-converter/build/libs/ingest-converter.jar:*' \
org.logstash.ingest.Pipeline "$@"


@ -6,7 +6,7 @@ set params='%*'
if "%1" == "-V" goto version
if "%1" == "--version" goto version
1>&2 (call "%~dp0setup.bat") || exit /b 1
call "%~dp0setup.bat" || exit /b 1
if errorlevel 1 (
if not defined nopauseonerror (
pause


@ -186,8 +186,8 @@ setup_vendored_jruby() {
}
setup() {
>&2 setup_java
>&2 setup_vendored_jruby
setup_java
setup_vendored_jruby
}
ruby_exec() {


@ -16,7 +16,7 @@ for %%i in ("%LS_HOME%\logstash-core\lib\jars\*.jar") do (
call :concat "%%i"
)
"%JAVACMD%" %JAVA_OPTS% org.logstash.ackedqueue.PqCheck %*
"%JAVACMD%" %JAVA_OPTS% -cp "%CLASSPATH%" org.logstash.ackedqueue.PqCheck %*
:concat
IF not defined CLASSPATH (


@ -16,7 +16,7 @@ for %%i in ("%LS_HOME%\logstash-core\lib\jars\*.jar") do (
call :concat "%%i"
)
"%JAVACMD%" %JAVA_OPTS% org.logstash.ackedqueue.PqRepair %*
"%JAVACMD%" %JAVA_OPTS% -cp "%CLASSPATH%" org.logstash.ackedqueue.PqRepair %*
:concat
IF not defined CLASSPATH (


@ -42,7 +42,7 @@ if defined LS_JAVA_HOME (
)
if not exist "%JAVACMD%" (
echo could not find java; set LS_JAVA_HOME or ensure java is in PATH 1>&2
echo could not find java; set JAVA_HOME or ensure java is in PATH 1>&2
exit /b 1
)


@ -19,8 +19,8 @@
buildscript {
ext {
snakeYamlVersion = '2.2'
shadowGradlePluginVersion = '8.1.1'
snakeYamlVersion = '2.0'
shadowGradlePluginVersion = '7.0.0'
}
repositories {
@ -37,6 +37,8 @@ buildscript {
plugins {
id "de.undercouch.download" version "4.0.4"
id "com.dorongold.task-tree" version "2.1.0"
// id "jacoco"
// id "org.sonarqube" version "4.3.0.3225"
}
apply plugin: 'de.undercouch.download'
@ -57,11 +59,8 @@ allprojects {
apply plugin: 'java'
apply plugin: 'idea'
apply plugin: 'java-library'
java {
sourceCompatibility = JavaVersion.VERSION_11
targetCompatibility = JavaVersion.VERSION_11
}
project.sourceCompatibility = JavaVersion.VERSION_11
project.targetCompatibility = JavaVersion.VERSION_11
tasks.withType(JavaCompile).configureEach {
options.compilerArgs.add("-Xlint:all")
@ -101,7 +100,6 @@ allprojects {
"--add-opens=java.base/java.lang=ALL-UNNAMED",
"--add-opens=java.base/java.util=ALL-UNNAMED"
]
maxHeapSize = "2g"
//https://stackoverflow.com/questions/3963708/gradle-how-to-display-test-results-in-the-console-in-real-time
testLogging {
// set options for log level LIFECYCLE
@ -146,6 +144,7 @@ subprojects {
}
version = versionMap['logstash-core']
String artifactVersionsApi = "https://artifacts-api.elastic.co/v1/versions"
tasks.register("configureArchitecture") {
String arch = System.properties['os.arch']
@ -171,28 +170,33 @@ tasks.register("configureArtifactInfo") {
description "Set the url to download stack artifacts for select stack version"
doLast {
def splitVersion = version.split('\\.')
int major = splitVersion[0].toInteger()
int minor = splitVersion[1].toInteger()
String branch = "${major}.${minor}"
String fallbackMajorX = "${major}.x"
boolean isFallBackPreviousMajor = minor - 1 < 0
String fallbackBranch = isFallBackPreviousMajor ? "${major-1}.x" : "${major}.${minor-1}"
def qualifiedVersion = ""
for (b in [branch, fallbackMajorX, fallbackBranch]) {
def url = "https://storage.googleapis.com/artifacts-api/snapshots/${b}.json"
try {
def snapshotInfo = new JsonSlurper().parseText(url.toURL().text)
qualifiedVersion = snapshotInfo.version
println "ArtifactInfo version: ${qualifiedVersion}"
break
} catch (Exception e) {
println "Failed to fetch branch ${branch} from ${url}: ${e.message}"
}
def versionQualifier = System.getenv('VERSION_QUALIFIER')
if (versionQualifier) {
version = "$version-$versionQualifier"
}
project.ext.set("artifactApiVersion", qualifiedVersion)
boolean isReleaseBuild = System.getenv('RELEASE') == "1" || versionQualifier
String apiResponse = artifactVersionsApi.toURL().text
def dlVersions = new JsonSlurper().parseText(apiResponse)
String qualifiedVersion = dlVersions['versions'].grep(isReleaseBuild ? ~/^${version}$/ : ~/^${version}-SNAPSHOT/)[0]
if (qualifiedVersion == null) {
if (!isReleaseBuild) {
project.ext.set("useProjectSpecificArtifactSnapshotUrl", true)
project.ext.set("stackArtifactSuffix", "${version}-SNAPSHOT")
return
}
throw new GradleException("could not find the current artifact from the artifact-api ${artifactVersionsApi} for ${version}")
}
// find latest reference to last build
String buildsListApi = "${artifactVersionsApi}/${qualifiedVersion}/builds/"
apiResponse = buildsListApi.toURL().text
def dlBuilds = new JsonSlurper().parseText(apiResponse)
def stackBuildVersion = dlBuilds["builds"][0]
project.ext.set("artifactApiVersionedBuildUrl", "${artifactVersionsApi}/${qualifiedVersion}/builds/${stackBuildVersion}")
project.ext.set("stackArtifactSuffix", qualifiedVersion)
project.ext.set("useProjectSpecificArtifactSnapshotUrl", false)
}
}
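One side of this hunk resolves the qualified version from per-branch snapshot JSONs on storage.googleapis.com; the other queries the artifacts API directly. A hedged curl + jq sketch of the artifacts-API path (the endpoints are from the script; the version and jq filters are illustrative):

curl -s https://artifacts-api.elastic.co/v1/versions | jq -r '.versions[] | select(test("^8\\.14\\.0-SNAPSHOT"))'
curl -s https://artifacts-api.elastic.co/v1/versions/8.14.0-SNAPSHOT/builds/ | jq -r '.builds[0]'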
@ -328,6 +332,7 @@ tasks.register("assembleTarDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -344,6 +349,7 @@ tasks.register("assembleOssTarDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -358,6 +364,7 @@ tasks.register("assembleZipDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -374,6 +381,7 @@ tasks.register("assembleOssZipDistribution") {
inputs.files fileTree("${projectDir}/bin")
inputs.files fileTree("${projectDir}/config")
inputs.files fileTree("${projectDir}/lib")
inputs.files fileTree("${projectDir}/modules")
inputs.files fileTree("${projectDir}/logstash-core-plugin-api")
inputs.files fileTree("${projectDir}/logstash-core/lib")
inputs.files fileTree("${projectDir}/logstash-core/src")
@ -408,7 +416,7 @@ def qaBuildPath = "${buildDir}/qa/integration"
def qaVendorPath = "${qaBuildPath}/vendor"
tasks.register("installIntegrationTestGems") {
dependsOn assembleTarDistribution
dependsOn unpackTarDistribution
def gemfilePath = file("${projectDir}/qa/integration/Gemfile")
inputs.files gemfilePath
inputs.files file("${projectDir}/qa/integration/integration_tests.gemspec")
@ -431,13 +439,23 @@ tasks.register("downloadFilebeat") {
doLast {
download {
String beatsVersion = project.ext.get("artifactApiVersion")
String downloadedFilebeatName = "filebeat-${beatsVersion}-${project.ext.get("beatsArchitecture")}"
String beatVersion = project.ext.get("stackArtifactSuffix")
String downloadedFilebeatName = "filebeat-${beatVersion}-${project.ext.get("beatsArchitecture")}"
project.ext.set("unpackedFilebeatName", downloadedFilebeatName)
def res = SnapshotArtifactURLs.packageUrls("beats", beatsVersion, downloadedFilebeatName)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("filebeatDownloadLocation", "${projectDir}/build/${downloadedFilebeatName}.tar.gz")
if (project.ext.get("useProjectSpecificArtifactSnapshotUrl")) {
def res = SnapshotArtifactURLs.packageUrls("beats", beatVersion, downloadedFilebeatName)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("filebeatDownloadLocation", "${projectDir}/build/${downloadedFilebeatName}.tar.gz")
} else {
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/beats/packages/${downloadedFilebeatName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
project.ext.set("filebeatSnapshotUrl", System.getenv("FILEBEAT_SNAPSHOT_URL") ?: buildUrls["package"]["url"])
project.ext.set("filebeatDownloadLocation", "${projectDir}/build/${downloadedFilebeatName}.tar.gz")
}
src project.ext.filebeatSnapshotUrl
onlyIfNewer true
@ -473,12 +491,20 @@ tasks.register("checkEsSHA") {
description "Download ES version remote's fingerprint file"
doLast {
String esVersion = project.ext.get("artifactApiVersion")
String esVersion = project.ext.get("stackArtifactSuffix")
String downloadedElasticsearchName = "elasticsearch-${esVersion}-${project.ext.get("esArchitecture")}"
String remoteSHA
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
remoteSHA = res.packageShaUrl
if (project.ext.get("useProjectSpecificArtifactSnapshotUrl")) {
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
remoteSHA = res.packageShaUrl
} else {
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/elasticsearch/packages/${downloadedElasticsearchName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
remoteSHA = buildUrls.package.sha_url.toURL().text
}
def localESArchive = new File("${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
if (localESArchive.exists()) {
@ -512,14 +538,25 @@ tasks.register("downloadEs") {
doLast {
download {
String esVersion = project.ext.get("artifactApiVersion")
String esVersion = project.ext.get("stackArtifactSuffix")
String downloadedElasticsearchName = "elasticsearch-${esVersion}-${project.ext.get("esArchitecture")}"
project.ext.set("unpackedElasticsearchName", "elasticsearch-${esVersion}")
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("elasticsearchDownloadLocation", "${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
if (project.ext.get("useProjectSpecificArtifactSnapshotUrl")) {
def res = SnapshotArtifactURLs.packageUrls("elasticsearch", esVersion, downloadedElasticsearchName)
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: res.packageUrl)
project.ext.set("elasticsearchDownloadLocation", "${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
} else {
// find url of build artifact
String artifactApiUrl = "${project.ext.get("artifactApiVersionedBuildUrl")}/projects/elasticsearch/packages/${downloadedElasticsearchName}.tar.gz"
String apiResponse = artifactApiUrl.toURL().text
def buildUrls = new JsonSlurper().parseText(apiResponse)
project.ext.set("elasticsearchSnapshotURL", System.getenv("ELASTICSEARCH_SNAPSHOT_URL") ?: buildUrls["package"]["url"])
project.ext.set("elasticsearchDownloadLocation", "${projectDir}/build/${downloadedElasticsearchName}.tar.gz")
}
src project.ext.elasticsearchSnapshotURL
onlyIfNewer true
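The same fallback pattern repeats for Filebeat and Elasticsearch: when useProjectSpecificArtifactSnapshotUrl is false, the versioned-build endpoint is asked for package metadata and the download and SHA URLs are read from it. Sketched with curl + jq (the build id is a placeholder):

BUILD_URL='https://artifacts-api.elastic.co/v1/versions/8.14.0-SNAPSHOT/builds/<build-id>'
curl -s "$BUILD_URL/projects/elasticsearch/packages/elasticsearch-8.14.0-SNAPSHOT-linux-x86_64.tar.gz" | jq -r '.package.url, .package.sha_url'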
@ -559,8 +596,7 @@ project(":logstash-integration-tests") {
systemProperty 'org.logstash.integration.specs', rubyIntegrationSpecs
environment "FEATURE_FLAG", System.getenv('FEATURE_FLAG')
workingDir integrationTestPwd
dependsOn installIntegrationTestGems
dependsOn copyProductionLog4jConfiguration
dependsOn = [installIntegrationTestGems, copyProductionLog4jConfiguration]
}
}
@ -703,66 +739,46 @@ class JDKDetails {
return createElasticCatalogDownloadUrl()
}
// throws an error iff the local version in versions.yml doesn't match the latest from the JVM catalog.
void checkLocalVersionMatchingLatest() {
// retrieve the metadata from remote
private String createElasticCatalogDownloadUrl() {
// Ask the catalog at https://jvm-catalog.elastic.co/jdk for details and return the URL to download the JDK
// arch x86_64 is never used here; only aarch64, and only on macOS
def url = "https://jvm-catalog.elastic.co/jdk/latest_adoptiumjdk_${major}_${osName}"
// Append the CPU arch only for Mac on aarch64; the other OSes don't carry a CPU suffix
if (arch == "aarch64") {
url += "_${arch}"
}
println "Retrieving JDK from catalog..."
def catalogMetadataUrl = URI.create(url).toURL()
def catalogConnection = catalogMetadataUrl.openConnection()
catalogConnection.requestMethod = 'GET'
assert catalogConnection.responseCode == 200
def metadataRetrieved = catalogConnection.content.text
def catalogMetadata = new JsonSlurper().parseText(metadataRetrieved)
if (catalogMetadata.version != revision || catalogMetadata.revision != build) {
throw new GradleException("Found new jdk version. Please update version.yml to ${catalogMetadata.version} build ${catalogMetadata.revision}")
}
}
private String createElasticCatalogDownloadUrl() {
// Ask the catalog at https://jvm-catalog.elastic.co/jdk for details and return the URL to download the JDK
// arch x86_64 is the default; aarch64 applies on macOS or Linux
def url = "https://jvm-catalog.elastic.co/jdk/adoptiumjdk-${revision}+${build}-${osName}"
// Append the CPU arch only if not x86_64, which is the default
if (arch == "aarch64") {
url += "-${arch}"
}
println "Retrieving JDK from catalog..."
def catalogMetadataUrl = URI.create(url).toURL()
def catalogConnection = catalogMetadataUrl.openConnection()
catalogConnection.requestMethod = 'GET'
if (catalogConnection.responseCode != 200) {
println "Can't find adoptiumjdk ${revision} for ${osName} on Elastic JVM catalog"
throw new GradleException("JVM not present on catalog")
}
def metadataRetrieved = catalogConnection.content.text
println "Retrieved!"
def catalogMetadata = new JsonSlurper().parseText(metadataRetrieved)
validateMetadata(catalogMetadata)
return catalogMetadata.url
}
//Verify that the artifact metadata correspond to the request, if not throws an error
private void validateMetadata(Map metadata) {
if (metadata.version != revision) {
throw new GradleException("Expected to retrieve a JDK for version ${revision} but received: ${metadata.version}")
}
if (!isSameArchitecture(metadata.architecture)) {
throw new GradleException("Expected to retrieve a JDK for architecture ${arch} but received: ${metadata.architecture}")
private String createAdoptDownloadUrl() {
String releaseName = major > 8 ?
"jdk-${revision}+${build}" :
"jdk${revision}u${build}"
String vendorOsName = vendorOsName(osName)
switch (vendor) {
case "adoptium":
return "https://api.adoptium.net/v3/binary/version/${releaseName}/${vendorOsName}/${arch}/jdk/hotspot/normal/adoptium"
default:
throw new RuntimeException("Can't handle vendor: ${vendor}")
}
}
private boolean isSameArchitecture(String metadataArch) {
if (arch == 'x64') {
return metadataArch == 'x86_64'
}
return metadataArch == arch
private String vendorOsName(String osName) {
if (osName == "darwin")
return "mac"
return osName
}
private String parseJdkArchitecture(String jdkArch) {
@ -774,22 +790,16 @@ class JDKDetails {
return "aarch64"
break
default:
throw new GradleException("Can't handle CPU architecture: ${jdkArch}")
throw new RuntimeException("Can't handle CPU architecture: ${jdkArch}")
}
}
}
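Both catalog endpoints can be exercised directly from a shell. A hedged sketch (the JDK major/version/build numbers are illustrative; the URL shapes come from the code above):

curl -s 'https://jvm-catalog.elastic.co/jdk/latest_adoptiumjdk_21_linux' | jq -r '.version, .revision'
curl -s 'https://jvm-catalog.elastic.co/jdk/adoptiumjdk-21.0.2+13-linux' | jq -r '.url'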
tasks.register("lint") {
description = "Lint Ruby source files. Use -PrubySource=file1.rb,file2.rb to specify files"
// Calls rake's 'lint' task
dependsOn installDevelopmentGems
doLast {
if (project.hasProperty("rubySource")) {
// Split the comma-separated files and pass them as separate arguments
def files = project.property("rubySource").split(",")
rake(projectDir, buildDir, "lint:report", *files)
} else {
rake(projectDir, buildDir, "lint:report")
}
rake(projectDir, buildDir, 'lint:report')
}
}
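On the side of this diff that supports it, individual files can be linted exactly as the task description states:

./gradlew lint -PrubySource=file1.rb,file2.rb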
@ -829,15 +839,6 @@ tasks.register("downloadJdk", Download) {
}
}
tasks.register("checkNewJdkVersion") {
def versionYml = new Yaml().load(new File("$projectDir/versions.yml").text)
// use Linux x86_64 as canary platform
def jdkDetails = new JDKDetails(versionYml, "linux", "x86_64")
// throws Gradle exception if local and remote doesn't match
jdkDetails.checkLocalVersionMatchingLatest()
}
tasks.register("deleteLocalJdk", Delete) {
// CLI project properties: -Pjdk_bundle_os=[windows|linux|darwin]
String osName = selectOsType()
@ -924,4 +925,4 @@ if (System.getenv('OSS') != 'true') {
tasks.register("runXPackIntegrationTests") {
dependsOn copyPluginTestAlias
dependsOn ":logstash-xpack:rubyIntegrationTests"
}
}


@ -21,19 +21,15 @@ metadata:
url: https://elastic.co/logstash
spec:
type: tool
owner: group:logstash
owner: group:ingest-fp
system: platform-ingest
lifecycle: production
dependsOn:
- resource:buildkite-logstash-serverless-integration-testing
- resource:logstash-snyk-report
- resource:logstash-dra-snapshot-pipeline
- resource:logstash-dra-staging-pipeline
- resource:logstash-linux-jdk-matrix-pipeline
- resource:logstash-windows-jdk-matrix-pipeline
- resource:logstash-benchmark-pipeline
- resource:logstash-health-report-tests-pipeline
- resource:logstash-jdk-availability-check-pipeline
- logstash-dra-snapshot-pipeline
- logstash-dra-staging-pipeline
- logstash-linux-jdk-matrix-pipeline
- logstash-windows-jdk-matrix-pipeline
# ***********************************
# Declare serverless IT pipeline
@ -47,8 +43,8 @@ metadata:
description: Buildkite pipeline for the Serverless Integration Test
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
owner: group:ingest-fp
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -66,9 +62,7 @@ spec:
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
@ -91,8 +85,8 @@ metadata:
description: 'The logstash-snyk-report pipeline.'
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
owner: group:ingest-fp
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -142,7 +136,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -185,7 +179,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -236,7 +230,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -294,7 +288,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -345,7 +339,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -388,7 +382,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -407,7 +401,6 @@ spec:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
@ -439,7 +432,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -455,8 +448,7 @@ spec:
build_pull_requests: false
build_tags: false
trigger_mode: code
filter_condition: >-
build.branch !~ /^backport.*$/ && build.branch !~ /^mergify\/bp\/.*$/
filter_condition: 'build.branch !~ /^backport.*$/'
filter_enabled: true
cancel_intermediate_builds: false
skip_intermediate_builds: false
@ -495,7 +487,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -537,7 +529,7 @@ metadata:
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
system: buildkite
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
@ -594,212 +586,3 @@ spec:
# *******************************
# SECTION END: Scheduler pipeline
# *******************************
# ***********************************
# Declare Benchmark pipeline
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-benchmark-pipeline
description: Buildkite pipeline for the Logstash benchmark
links:
- title: 'Logstash Benchmark (Daily, Auto) pipeline'
url: https://buildkite.com/elastic/logstash-benchmark-pipeline
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: logstash-benchmark-pipeline
description: ':running: The Benchmark pipeline for snapshot version'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/benchmark_pipeline.yml"
maximum_timeout_in_minutes: 90
provider_settings:
trigger_mode: none # don't trigger jobs from github activity
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'false'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
schedules:
Daily serverless test on core_serverless_test branch:
branch: main
cronline: 30 04 * * *
message: Daily trigger of Benchmark Pipeline
# *******************************
# SECTION END: Benchmark pipeline
# *******************************
# ***********************************
# SECTION START: Benchmark Marathon
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-benchmark-marathon-pipeline
description: Buildkite pipeline for benchmarking multi-version
links:
- title: 'Logstash Benchmark Marathon'
url: https://buildkite.com/elastic/logstash-benchmark-marathon-pipeline
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: logstash-benchmark-marathon-pipeline
description: ':running: The Benchmark Marathon for multi-version'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/benchmark_marathon_pipeline.yml"
maximum_timeout_in_minutes: 480
provider_settings:
trigger_mode: none # don't trigger jobs from github activity
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'false'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
# *******************************
# SECTION END: Benchmark Marathon
# *******************************
# ***********************************
# Declare Health Report Tests pipeline
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-health-report-tests-pipeline
description: Buildkite pipeline for the Logstash Health Report Tests
links:
- title: ':logstash: Logstash Health Report Tests (Daily, Auto) pipeline'
url: https://buildkite.com/elastic/logstash-health-report-tests-pipeline
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: logstash-health-report-tests-pipeline
description: ':logstash: Logstash Health Report tests :pipeline:'
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/health_report_tests_pipeline.yml"
maximum_timeout_in_minutes: 30 # usually tests last max ~17mins
provider_settings:
trigger_mode: none # don't trigger jobs from github activity
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
ingest-fp:
access_level: MANAGE_BUILD_AND_READ
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
schedules:
Daily Health Report tests on main branch:
branch: main
cronline: 30 20 * * *
message: Daily trigger of Health Report Tests Pipeline
# *******************************
# SECTION END: Health Report Tests pipeline
# *******************************
# ***********************************
# Declare JDK check pipeline
# ***********************************
---
# yaml-language-server: $schema=https://gist.githubusercontent.com/elasticmachine/988b80dae436cafea07d9a4a460a011d/raw/rre.schema.json
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: logstash-jdk-availability-check-pipeline
description: ":logstash: check availability of new JDK version"
spec:
type: buildkite-pipeline
owner: group:logstash
system: platform-ingest
implementation:
apiVersion: buildkite.elastic.dev/v1
kind: Pipeline
metadata:
name: logstash-jdk-availability-check-pipeline
spec:
repository: elastic/logstash
pipeline_file: ".buildkite/jdk_availability_check_pipeline.yml"
maximum_timeout_in_minutes: 10
provider_settings:
trigger_mode: none # don't trigger jobs from github activity
env:
ELASTIC_SLACK_NOTIFICATIONS_ENABLED: 'true'
SLACK_NOTIFICATIONS_CHANNEL: '#logstash-build'
SLACK_NOTIFICATIONS_ON_SUCCESS: 'false'
SLACK_NOTIFICATIONS_SKIP_FOR_RETRIES: 'true'
teams:
logstash:
access_level: MANAGE_BUILD_AND_READ
ingest-eng-prod:
access_level: MANAGE_BUILD_AND_READ
everyone:
access_level: READ_ONLY
schedules:
Weekly JDK availability check (main):
branch: main
cronline: 0 2 * * 1 # every Monday@2AM UTC
message: Weekly trigger of JDK update availability pipeline per branch
env:
PIPELINES_TO_TRIGGER: 'logstash-jdk-availability-check-pipeline'
Weekly JDK availability check (8.19):
branch: "8.19"
cronline: 0 2 * * 1 # every Monday@2AM UTC
message: Weekly trigger of JDK update availability pipeline per branch
env:
PIPELINES_TO_TRIGGER: 'logstash-jdk-availability-check-pipeline'
# *******************************
# SECTION END: JDK check pipeline
# *******************************


@ -19,7 +19,7 @@ function get_package_type {
# uses at least 1g of memory. If we don't do this we can get OOM issues when
# installing gems. See https://github.com/elastic/logstash/issues/5179
export JRUBY_OPTS="-J-Xmx1g"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.console=plain -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export OSS=true
if [ -n "$BUILD_JAVA_HOME" ]; then

ci/branches.json (new file)

@ -0,0 +1,14 @@
{
"notice": "This file is not maintained outside of the main branch and should only be used for tooling.",
"branches": [
{
"branch": "main"
},
{
"branch": "8.13"
},
{
"branch": "7.17"
}
]
}
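Tooling can consume this branch list with jq, for example:

jq -r '.branches[].branch' ci/branches.json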


@ -1,7 +0,0 @@
#!/usr/bin/env bash
set -eo pipefail
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
echo "Checking local JDK version against latest remote from JVM catalog"
./gradlew checkNewJdkVersion


@ -6,7 +6,7 @@ set -x
# uses at least 1g of memory. If we don't do this we can get OOM issues when
# installing gems. See https://github.com/elastic/logstash/issues/5179
export JRUBY_OPTS="-J-Xmx1g"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.console=plain -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
export GRADLE_OPTS="-Xmx4g -Dorg.gradle.daemon=false -Dorg.gradle.logging.level=info -Dfile.encoding=UTF-8"
if [ -n "$BUILD_JAVA_HOME" ]; then
GRADLE_OPTS="$GRADLE_OPTS -Dorg.gradle.java.home=$BUILD_JAVA_HOME"
@ -15,7 +15,7 @@ fi
# Can run either a specific flavor, or all flavors -
# eg `ci/acceptance_tests.sh oss` will run tests for open source container
# `ci/acceptance_tests.sh full` will run tests for the default container
# `ci/acceptance_tests.sh wolfi` will run tests for the wolfi based container
# `ci/acceptance_tests.sh ubi8` will run tests for the ubi8 based container
# `ci/acceptance_tests.sh` will run tests for all containers
SELECTED_TEST_SUITE=$1
@ -48,23 +48,23 @@ if [[ $SELECTED_TEST_SUITE == "oss" ]]; then
elif [[ $SELECTED_TEST_SUITE == "full" ]]; then
echo "--- Building $SELECTED_TEST_SUITE docker images"
cd $LS_HOME
rake artifact:build_docker_full
rake artifact:docker
echo "--- Acceptance: Installing dependencies"
cd $QA_DIR
bundle install
echo "--- Acceptance: Running the tests"
bundle exec rspec docker/spec/full/*_spec.rb
elif [[ $SELECTED_TEST_SUITE == "wolfi" ]]; then
elif [[ $SELECTED_TEST_SUITE == "ubi8" ]]; then
echo "--- Building $SELECTED_TEST_SUITE docker images"
cd $LS_HOME
rake artifact:docker_wolfi
rake artifact:docker_ubi8
echo "--- Acceptance: Installing dependencies"
cd $QA_DIR
bundle install
echo "--- Acceptance: Running the tests"
bundle exec rspec docker/spec/wolfi/*_spec.rb
bundle exec rspec docker/spec/ubi8/*_spec.rb
else
echo "--- Building all docker images"
cd $LS_HOME

ci/docker_update_base_image.sh (new executable file)

@ -0,0 +1,12 @@
#!/bin/bash -ie
if docker rmi --force logstash-base ; then
echo "Removed existing logstash-base image, building logstash-base image from scratch."
else
echo "Building logstash-base image from scratch." #Keep the global -e flag but allow the remove command to fail.
fi
docker build -f Dockerfile.base -t logstash-base .
docker login --username=logstashci container-registry-test.elastic.co #will prompt for password
docker tag logstash-base container-registry-test.elastic.co/logstash-test/logstash-base
docker push container-registry-test.elastic.co/logstash-test/logstash-base


@ -19,15 +19,24 @@ if [[ $1 = "setup" ]]; then
exit 0
elif [[ $1 == "split" ]]; then
# Source shared function for splitting integration tests
source "$(dirname "${BASH_SOURCE[0]}")/partition-files.lib.sh"
cd qa/integration
glob1=(specs/*spec.rb)
glob2=(specs/**/*spec.rb)
all_specs=("${glob1[@]}" "${glob2[@]}")
index="${2:?index}"
count="${3:-2}"
specs=($(cd qa/integration; partition_files "${index}" "${count}" < <(find specs -name '*_spec.rb') ))
echo "Running integration tests partition[${index}] of ${count}: ${specs[*]}"
./gradlew runIntegrationTests -PrubyIntegrationSpecs="${specs[*]}" --console=plain
specs0=${all_specs[@]::$((${#all_specs[@]} / 2 ))}
specs1=${all_specs[@]:$((${#all_specs[@]} / 2 ))}
cd ../..
if [[ $2 == 0 ]]; then
echo "Running the first half of integration specs: $specs0"
./gradlew runIntegrationTests -PrubyIntegrationSpecs="$specs0" --console=plain
elif [[ $2 == 1 ]]; then
echo "Running the second half of integration specs: $specs1"
./gradlew runIntegrationTests -PrubyIntegrationSpecs="$specs1" --console=plain
else
echo "Error, must specify 0 or 1 after the split. For example ci/integration_tests.sh split 0"
exit 1
fi
elif [[ ! -z $@ ]]; then
echo "Running integration tests 'rspec $@'"


@ -1,15 +1,13 @@
{
"releases": {
"7.current": "7.17.28",
"8.previous": "8.17.5",
"8.current": "8.18.0"
"5.x": "5.6.16",
"6.x": "6.8.23",
"7.x": "7.17.19",
"8.x": "8.13.1"
},
"snapshots": {
"7.current": "7.17.29-SNAPSHOT",
"8.previous": "8.17.6-SNAPSHOT",
"8.current": "8.18.1-SNAPSHOT",
"8.next": "8.19.0-SNAPSHOT",
"9.next": "9.0.1-SNAPSHOT",
"main": "9.1.0-SNAPSHOT"
"7.x": "7.17.20-SNAPSHOT",
"8.x": "8.13.2-SNAPSHOT",
"main": "8.14.0-SNAPSHOT"
}
}


@ -1,78 +0,0 @@
#!/bin/bash
# partition_files returns a consistent partition of the filenames given on stdin
# Usage: partition_files <partition_index> <partition_count=2> < <(ls files)
# partition_index: the zero-based index of the partition to select `[0,partition_count)`
# partition_count: the number of partitions `[2,#files]`
partition_files() (
set -e
local files
# ensure files is consistently sorted and distinct
IFS=$'\n' read -ra files -d '' <<<"$(cat - | sort | uniq)" || true
local partition_index="${1:?}"
local partition_count="${2:?}"
_error () { >&2 echo "ERROR: ${1:-UNSPECIFIED}"; exit 1; }
# safeguard against nonsense invocations
if (( ${#files[@]} < 2 )); then
_error "#files(${#files[@]}) must be at least 2 in order to partition"
elif ( ! [[ "${partition_count}" =~ ^[0-9]+$ ]] ) || (( partition_count < 2 )) || (( partition_count > ${#files[@]})); then
_error "partition_count(${partition_count}) must be a number that is at least 2 and not greater than #files(${#files[@]})"
elif ( ! [[ "${partition_index}" =~ ^[0-9]+$ ]] ) || (( partition_index < 0 )) || (( partition_index >= $partition_count )) ; then
_error "partition_index(${partition_index}) must be a number that is greater 0 and less than partition_count(${partition_count})"
fi
# round-robin: emit those in our selected partition
for index in "${!files[@]}"; do
partition="$(( index % partition_count ))"
if (( partition == partition_index )); then
echo "${files[$index]}"
fi
done
)
if [[ "$0" == "${BASH_SOURCE[0]}" ]]; then
if [[ "$1" == "test" ]]; then
status=0
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
file_list="$( cd "${SCRIPT_DIR}"; find . -type f )"
# for any legal partitioning into N partitions, we ensure that
# the combined output of `partition_files I N` where `I` is all numbers in
# the range `[0,N)` produces no repeats and no omissions, even if the
# input list is not consistently ordered.
for n in $(seq 2 $(wc -l <<<"${file_list}")); do
result=""
for i in $(seq 0 $(( n - 1 ))); do
for file in $(partition_files $i $n <<<"$( shuf <<<"${file_list}" )"); do
result+="${file}"$'\n'
done
done
repeated="$( uniq --repeated <<<"$( sort <<<"${result}" )" )"
if (( $(printf "${repeated}" | wc -l) > 0 )); then
status=1
echo "[n=${n}]FAIL(repeated):"$'\n'"${repeated}"
fi
missing=$( comm -23 <(sort <<<"${file_list}") <( sort <<<"${result}" ) )
if (( $(printf "${missing}" | wc -l) > 0 )); then
status=1
echo "[n=${n}]FAIL(omitted):"$'\n'"${missing}"
fi
done
if (( status > 0 )); then
echo "There were failures. The input list was:"
echo "${file_list}"
fi
exit "${status}"
else
partition_files $@
fi
fi
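For reference, the partition library above is driven like this (mirroring its own usage comment; the find pattern matches the integration-test caller):

source ci/partition-files.lib.sh
partition_files 0 3 < <(find specs -name '*_spec.rb')   # emit partition 0 of 3
ci/partition-files.lib.sh test                          # run the built-in self-test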


@ -28,9 +28,7 @@ build_logstash() {
}
index_test_data() {
curl -X POST -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/$INDEX_NAME/_bulk" \
-H 'x-elastic-product-origin: logstash' \
-H 'Content-Type: application/json' --data-binary @"$CURRENT_DIR/test_data/book.json"
curl -X POST -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/$INDEX_NAME/_bulk" -H 'Content-Type: application/json' --data-binary @"$CURRENT_DIR/test_data/book.json"
}
# $1: check function
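The serverless-test diffs that follow all hinge on the x-elastic-product-origin: logstash header, which identifies the request as coming from an Elastic product. The pattern, as used throughout these scripts:

curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' "$ES_ENDPOINT/$INDEX_NAME/_count"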


@ -7,8 +7,7 @@ export PIPELINE_NAME='gen_es'
# update pipeline and check response code
index_pipeline() {
RESP_CODE=$(curl -s -w "%{http_code}" -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/_logstash/pipeline/$1" \
-H 'x-elastic-product-origin: logstash' -H 'Content-Type: application/json' -d "$2")
RESP_CODE=$(curl -s -w "%{http_code}" -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/_logstash/pipeline/$1" -H 'Content-Type: application/json' -d "$2")
if [[ $RESP_CODE -ge '400' ]]; then
echo "failed to update pipeline for Central Pipeline Management. Got $RESP_CODE from Elasticsearch"
exit 1
@ -35,7 +34,7 @@ check_plugin() {
}
delete_pipeline() {
curl -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' -X DELETE "$ES_ENDPOINT/_logstash/pipeline/$PIPELINE_NAME" -H 'Content-Type: application/json';
curl -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -X DELETE "$ES_ENDPOINT/_logstash/pipeline/$PIPELINE_NAME" -H 'Content-Type: application/json';
}
cpm_clean_up_and_get_result() {


@ -6,12 +6,10 @@ source ./$(dirname "$0")/common.sh
deploy_ingest_pipeline() {
PIPELINE_RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/_ingest/pipeline/integration-logstash_test.events-default" \
-H 'Content-Type: application/json' \
-H 'x-elastic-product-origin: logstash' \
--data-binary @"$CURRENT_DIR/test_data/ingest_pipeline.json")
TEMPLATE_RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/_index_template/logs-serverless-default-template" \
-H 'Content-Type: application/json' \
-H 'x-elastic-product-origin: logstash' \
--data-binary @"$CURRENT_DIR/test_data/index_template.json")
# ingest pipeline is likely be there from the last run
@ -31,7 +29,7 @@ check_integration_filter() {
}
get_doc_msg_length() {
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/logs-$INDEX_NAME.004-default/_search?size=1" -H 'x-elastic-product-origin: logstash' | jq '.hits.hits[0]._source.message | length'
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/logs-$INDEX_NAME.004-default/_search?size=1" | jq '.hits.hits[0]._source.message | length'
}
# ensure no double run of ingest pipeline


@ -9,7 +9,7 @@ check_named_index() {
}
get_data_stream_count() {
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' "$ES_ENDPOINT/logs-$INDEX_NAME.001-default/_count" | jq '.count // 0'
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/logs-$INDEX_NAME.001-default/_count" | jq '.count // 0'
}
compare_data_stream_count() {


@ -10,7 +10,7 @@ export EXIT_CODE="0"
create_pipeline() {
RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X PUT -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME" \
-H 'Content-Type: application/json' -H 'kbn-xsrf: logstash' -H 'x-elastic-product-origin: logstash' \
-H 'Content-Type: application/json' -H 'kbn-xsrf: logstash' \
--data-binary @"$CURRENT_DIR/test_data/$PIPELINE_NAME.json")
if [[ RESP_CODE -ge '400' ]]; then
@ -20,8 +20,7 @@ create_pipeline() {
}
get_pipeline() {
RESP_BODY=$(curl -s -X GET -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' \
"$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME") \
RESP_BODY=$(curl -s -X GET -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME")
SOURCE_BODY=$(cat "$CURRENT_DIR/test_data/$PIPELINE_NAME.json")
RESP_PIPELINE_NAME=$(echo "$RESP_BODY" | jq -r '.id')
@ -42,8 +41,7 @@ get_pipeline() {
}
list_pipeline() {
RESP_BODY=$(curl -s -X GET -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' \
"$KB_ENDPOINT/api/logstash/pipelines" | jq --arg name "$PIPELINE_NAME" '.pipelines[] | select(.id==$name)' )
RESP_BODY=$(curl -s -X GET -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$KB_ENDPOINT/api/logstash/pipelines" | jq --arg name "$PIPELINE_NAME" '.pipelines[] | select(.id==$name)' )
if [[ -z "$RESP_BODY" ]]; then
EXIT_CODE=$(( EXIT_CODE + 1 ))
echo "Fail to list pipeline."
@ -51,8 +49,7 @@ list_pipeline() {
}
delete_pipeline() {
RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X DELETE -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' \
"$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME" \
RESP_CODE=$(curl -s -w "%{http_code}" -o /dev/null -X DELETE -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$KB_ENDPOINT/api/logstash/pipeline/$PIPELINE_NAME" \
-H 'Content-Type: application/json' -H 'kbn-xsrf: logstash' \
--data-binary @"$CURRENT_DIR/test_data/$PIPELINE_NAME.json")


@ -40,7 +40,7 @@ stop_metricbeat() {
}
get_monitor_count() {
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" -H 'x-elastic-product-origin: logstash' "$ES_ENDPOINT/$INDEX_NAME/_count" | jq '.count // 0'
curl -s -H "Authorization: ApiKey $TESTER_API_KEY_ENCODED" "$ES_ENDPOINT/$INDEX_NAME/_count" | jq '.count // 0'
}
compare_monitor_count() {

Some files were not shown because too many files have changed in this diff Show more