This PR changes the wait strategy from the default HostPortWaitStrategy
to LogMessageWaitStrategy to accommodate the change in the latest Minio
docker image.
The default HostPortWaitStrategy has two issues:
1. It assumes certain Linux commands such as grep and nc are available
inside the container. This is not the case for the latest Minio docker
image, which has neither command.
2. The first item on its own is not fatal, since the check also falls
back on reading the listening port as a file with `/bin/bash -c
'</dev/tcp/localhost/9000'`. However, the command string is built
using the system's current locale, which may not be English and may
format integers such as 9000 with different symbols. This completely
breaks the command and in turn leads to total failure of the wait
check.
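The locale hazard above can be reproduced with plain JDK classes: locale-sensitive number formatting inserts grouping separators into 9000, which would corrupt a shell command built from that string (a minimal illustrative sketch, not the Testcontainers code itself):

```java
import java.text.NumberFormat;
import java.util.Locale;

public class LocalePortFormatting {
    public static void main(String[] args) {
        int port = 9000;
        // Locale-sensitive formatting inserts grouping separators, which would
        // corrupt a shell command like </dev/tcp/localhost/9000
        System.out.println(NumberFormat.getInstance(Locale.US).format(port));      // 9,000
        System.out.println(NumberFormat.getInstance(Locale.GERMANY).format(port)); // 9.000
        // String.valueOf is locale-independent and always yields plain digits
        System.out.println(String.valueOf(port));                                  // 9000
    }
}
```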
There is no easy fix for the above issues nor do I think it is necessary
to fix. So the PR simply switches the wait strategy to be log message
based.
Resolves: #120101
Resolves: #120115
Resolves: #120117
Resolves: #118548
This PR upgrades the minio docker image from
RELEASE.2021-03-01T04-20-55Z which is 3+ years old to the latest
RELEASE.2024-12-18T13-15-44Z.
Relates: #118548
- Require IMDSv1 if using alternative endpoints (i.e. ECS)
- Forbid profile name lookup with alternative endpoints
- Add token TTL header for IMDSv2
- Add support for instance-identity docs
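The IMDSv2 token handshake that the token TTL header belongs to can be summarised with the documented header names (helper class and method names here are hypothetical; only the header names and the 1–21600 second TTL bound come from the EC2 IMDSv2 documentation):

```java
import java.util.Map;

public class ImdsV2Headers {
    // Header names from the EC2 IMDSv2 documentation
    static final String TOKEN_TTL_HEADER = "X-aws-ec2-metadata-token-ttl-seconds";
    static final String TOKEN_HEADER = "X-aws-ec2-metadata-token";

    // Headers for the initial PUT /latest/api/token request
    static Map<String, String> tokenRequestHeaders(int ttlSeconds) {
        if (ttlSeconds < 1 || ttlSeconds > 21600) { // documented TTL bounds
            throw new IllegalArgumentException("TTL must be between 1 and 21600 seconds");
        }
        return Map.of(TOKEN_TTL_HEADER, Integer.toString(ttlSeconds));
    }

    // Headers for subsequent metadata GETs carrying the issued token
    static Map<String, String> metadataRequestHeaders(String token) {
        return Map.of(TOKEN_HEADER, token);
    }
}
```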
This updates the gradle wrapper to 8.12
We addressed deprecation warnings arising from the update, including:
- Fix change in TestOutputEvent api
- Fix deprecation in groovy syntax
- Use latest ospackage plugin containing our fix
- Remove project usages at execution time
- Fix deprecated project references in repository-old-versions
Building the shadowed hdfs2 and hdfs3 fixtures takes quite a long time as the jars are 51 MB and 80 MB in size.
By removing unused dependencies from the shadow jar creation we can speed this up significantly.
We also avoid building hdfs fixture jars for compile-only usage (so precommit checks no longer trigger shadow jar creation).
The version of the AWS Java SDK we use already magically switches to
IMDSv2 if available, but today we cannot claim to support IMDSv2 in
Elasticsearch since we have no tests demonstrating that the magic really
works for us. In particular, this sort of thing often risks falling foul
of some restrictions imposed by the security manager (if not now then
maybe in some future release).
This commit adds proper support for IMDSv2 by enhancing the test suite
to add the missing coverage to avoid any risk of breaking this magical
SDK behaviour in future.
Closes #105135
Closes ES-9984
Today these YAML tests rely on a bunch of rather complex setup organised
by Gradle, and contain lots of duplication and coincident strings,
mostly because that was the only way to achieve what we wanted before we
could orchestrate test clusters and fixtures directly from Java test
suites. We're not actually running the YAML tests in ways that take
advantage of their YAMLness (e.g. in mixed-version clusters, or from
other client libraries).
This commit replaces these tests with Java REST tests which enormously
simplifies this area of code.
Relates ES-9984
Running with FIPS approved mode requires secret keys to be at least 114
bits long.
Relates: #117324
Resolves: #117596
Resolves: #117709
Resolves: #117710
Resolves: #117711
Resolves: #117712
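One way to satisfy such a minimum-entropy requirement is to generate secrets from a cryptographically secure source with at least the required number of random bits (a sketch under the assumption that the 114-bit figure above is the binding constraint; the class and method names are illustrative):

```java
import java.security.SecureRandom;
import java.util.Base64;

public class FipsSafeSecrets {
    // Generate a random secret carrying at least minBits of entropy.
    // For minBits = 114 this yields 15 random bytes (120 bits), comfortably
    // above the minimum required by FIPS approved mode.
    static String randomSecret(int minBits) {
        byte[] buf = new byte[(minBits + 7) / 8];
        new SecureRandom().nextBytes(buf);
        return Base64.getEncoder().withoutPadding().encodeToString(buf);
    }
}
```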
Rephrase the authorization check in `S3HttpFixture` in terms of a
predicate provided by the caller so that there's no need for a separate
subclass that handles session tokens, and so that it can support
auto-generated credentials more naturally.
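The shape of a caller-supplied authorization predicate can be sketched as follows (illustrative only, not the actual `S3HttpFixture` API; the SigV4 `Credential=` prefix check is one plausible predicate a caller might supply):

```java
import java.util.function.Predicate;

public class AuthorizationCheck {
    // The fixture's authorization check expressed as a predicate over the
    // Authorization header, supplied by the caller instead of a subclass.
    private final Predicate<String> authorizationPredicate;

    AuthorizationCheck(Predicate<String> authorizationPredicate) {
        this.authorizationPredicate = authorizationPredicate;
    }

    boolean isAuthorized(String authorizationHeader) {
        return authorizationHeader != null && authorizationPredicate.test(authorizationHeader);
    }

    // Example predicate: accept SigV4 requests signed with a given access key
    static Predicate<String> fixedAccessKey(String accessKey) {
        return header -> header.startsWith("AWS4-HMAC-SHA256")
            && header.contains("Credential=" + accessKey + "/");
    }
}
```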
Also adapts `Ec2ImdsHttpFixture` to dynamically generate credentials
this way.
Also extracts the STS fixture in `S3HttpFixtureWithSTS` into a separate
service, similarly to #117324, and adapts this new fixture to
dynamically generate credentials too.
Relates ES-9984
The S3 and IMDS services are separate things in practice, we shouldn't
be conflating them as we do today. This commit introduces a new
independent test fixture just for the IMDS endpoint and migrates the
relevant tests to use it.
Relates ES-9984
Today the `:test:fixtures` modules' test suites are disabled, but in
fact these fixtures do have nontrivial behaviour that wants testing in
its own right, so we should run their tests.
This commit reinstates the disabled tests and fixes one which should
have been fixed as part of #116212.
We don't seem to have a test that completely verifies that an S3
repository can reload credentials from an updated keystore. This commit
adds such a test.
A `CompleteMultipartUpload` action may fail after sending the `200 OK`
response line. In this case the response body describes the error, and
the SDK translates this situation to an exception with status code 0 but
with the `ErrorCode` string set appropriately. This commit enhances the
exception handling in `S3BlobContainer` to handle this possibility.
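The exception shape described above can be detected with a small check like the following (an illustrative sketch of the condition, not the actual `S3BlobContainer` code; the SDK-facing helper name is hypothetical):

```java
public class MultipartUploadErrors {
    // A CompleteMultipartUpload that fails after "200 OK" has already been
    // sent surfaces in the SDK as an exception with status code 0 but with
    // the error code string populated from the response body. This helper
    // recognises that shape so it can be handled like an ordinary S3 error.
    static boolean isPostResponseLineFailure(int statusCode, String errorCode) {
        return statusCode == 0 && errorCode != null && errorCode.isEmpty() == false;
    }
}
```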
Closes #102294
Co-authored-by: Pat Patterson <metadaddy@gmail.com>
The libs projects are configured to all begin with `elasticsearch-`.
While this is desirable for the artifacts to contain this consistent
prefix, it means the project names don't match up with their
directories. Additionally, it creates complexities for subproject naming
that must be manually adjusted.
This commit adjusts the project names for those under libs to be their
directory names. The resulting artifacts for these libs are kept the
same, all beginning with `elasticsearch-`.
If Elasticsearch fails part-way through a multipart upload to S3 it will
generally try and abort the upload, but it's possible that the abort
attempt also fails. In this case the upload becomes _dangling_. Dangling
uploads consume storage space, and therefore cost money, until they are
eventually aborted.
Earlier versions of Elasticsearch require users to check for dangling
multipart uploads, and to manually abort any that they find. This commit
introduces a cleanup process which aborts all dangling uploads on each
snapshot delete instead.
Closes #44971
Closes #101169
With this commit, if no key or SAS token is supplied for an Azure
repository then Elasticsearch will use the `DefaultAzureCredential`
chain defined in the Azure SDK, which will obtain credentials from the
instance metadata service when running on an Azure VM.
Today the Azure test fixture accepts all requests, but we should be
checking that the `Authorization` header is at least present and
approximately correct. This commit adds support for this check.
Some soon-to-be-added authentication mechanisms are not supported over
plain HTTP. This commit adds HTTPS support to the internal fixture, and
adopts its use in all the real-cluster tests which use it.
If the geoip_use_service system property is set to true, then the
EnterpriseGeoIpHttpFixture is disabled, so EnterpriseGeoIpDownloader
incorrectly calls maxmind.com without credentials, and fails. This
change makes it so that the maxmind server is never called from tests.
Closes https://github.com/elastic/elasticsearch/issues/111002
* Implement RequestedRangeNotSatisfiedException for Azure and GCP
* spotless
* rename test
* Generalize 3rd party tests for 3 cloud blob containers
* Follow comments
* minimize changes with main
* Follow comments 2
Potentially addresses flaky test as described here:
https://github.com/elastic/elasticsearch/issues/106739 We see that the
data folder sometimes is created with wrong permissions. Now we try to
fix that by creating that folder preemptively as part of the fixture
setup.
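Creating the folder preemptively with explicit permissions might look like this (a sketch assuming a POSIX filesystem; the `createDataDir` helper and the `data` folder name are illustrative, not the actual fixture code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class FixtureDataDir {
    // Create the data folder up front with explicit permissions, so the
    // container does not create it subject to whatever umask applies.
    static Path createDataDir(Path root) throws IOException {
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rwxrwxrwx");
        FileAttribute<Set<PosixFilePermission>> attr = PosixFilePermissions.asFileAttribute(perms);
        return Files.createDirectories(root.resolve("data"), attr);
    }
}
```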
This ports our krb5kdc test fixture to Testcontainers and reworks hdfs handling to also be based on Testcontainers.
The YAML REST tests that use hdfs required introducing variable substitution in the yamlresttestparser handling.
To make those docker images available for our Testcontainers-based
fixtures we need to publish them on a regular basis to ensure the source
and the images stay in sync. This adds a convenience plugin to take
care of publishing our docker test fixtures.
This change fixes the S3HttpHandler so that it supports range byte
requests with ending offsets that can be larger than the blob length.
This is something that is supported today by S3 and the current
version of the AWS SDK we use can also set an ending offset to
a very large value (Long.MAX_VALUE).
This change also adds support for the trappy situation where the
starting offset is also larger than the blob length.
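The clamping behaviour described above can be captured as a small pure function (an illustrative sketch of the semantics, not the actual S3HttpHandler code):

```java
public class RangeClamp {
    // Clamp a requested inclusive byte range [start, end] against the blob
    // length, matching S3's behaviour: an end offset past the blob (even
    // Long.MAX_VALUE) is truncated to the last byte, while a start offset at
    // or past the blob length is unsatisfiable.
    // Returns {effectiveStart, effectiveEnd}, or null when unsatisfiable
    // (the real service answers 416 Requested Range Not Satisfiable).
    static long[] clamp(long start, long end, long blobLength) {
        if (start >= blobLength || start > end) {
            return null;
        }
        return new long[] { start, Math.min(end, blobLength - 1) };
    }
}
```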
Fixes the AzureHttpHandler so that it supports range byte requests
with ending offsets beyond the actual blob length, as the real
Azure service does.