* Reapply "[Build] Do not invalidate configuration cache when branch is switched (#118894)" (#119300) (#119325)
* Reapply "[Build] Do not invalidate configuration cache when branch is switched (#118894)" (#119300)
The original PR (#118894) has broken serverless.
* Fix gitinfo plugin for serverless usage
* Update buildscan git revision reference
(cherry picked from commit 5278159987)
# Conflicts:
# build-conventions/src/main/java/org/elasticsearch/gradle/internal/conventions/PublishPlugin.java
* Fix merge conflict
* Use single-char variant of String.indexOf() where possible
indexOf(char) is more efficient than searching for the same one-character String.
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
Closes #97032
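For illustration, a minimal before/after of the pattern this cleanup targets (the class and names are made up):

```java
class IndexOfExample {
    static int firstSlash(String path) {
        // indexOf(char) avoids scanning for a one-character String
        return path.indexOf('/');        // preferred
        // return path.indexOf("/");     // what the cleanup replaces
    }
}
```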
Adding the ability to set `require_data_stream` parameter (boolean) on bulk and indexing APIs.
For document indexing, this flag requires the indexing operation to either be pointed at a data stream, or match a template that will create a data stream.
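A sketch of passing the new flag through the low-level REST client; the index name and document below are hypothetical:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class RequireDataStreamExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Fail the indexing operation unless "logs-app-default" is (or will
            // become, via a matching template) a data stream.
            Request request = new Request("POST", "/logs-app-default/_doc");
            request.addParameter("require_data_stream", "true");
            request.setJsonEntity("{\"@timestamp\":\"2024-01-01T00:00:00Z\",\"message\":\"hello\"}");
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}
```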
By default the REST client uses a thread factory which names its threads
with the generic pattern `I/O dispatcher %d`. This commit adds the
prefix `elasticsearch-rest-client-`, and a client-instance-specific ID,
to the name of these threads to make them easier to identify.
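A minimal sketch of the naming scheme described above; the actual factory inside RestClient may differ in detail:

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

class PrefixedThreadFactory implements ThreadFactory {
    private static final AtomicInteger CLIENT_IDS = new AtomicInteger();

    private final int clientId = CLIENT_IDS.incrementAndGet();
    private final AtomicInteger threadIds = new AtomicInteger();

    @Override
    public Thread newThread(Runnable runnable) {
        // e.g. "elasticsearch-rest-client-1-thread-3"
        return new Thread(runnable,
            "elasticsearch-rest-client-" + clientId + "-thread-" + threadIds.incrementAndGet());
    }
}
```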
Today `RestClient` interprets an `?ignore=` request parameter as an
indication that certain HTTP response codes should be considered
successful and not raise a `ResponseException`. This commit replaces the
magic literal `"ignore"` with a constant and adds a utility to specify
the ignored codes as `RestStatus` values.
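Sketch of the string-based usage being replaced; the new constant and the `RestStatus`-based helper are not named here since the message does not spell them out:

```java
import org.elasticsearch.client.Request;

class IgnoreStatusExample {
    static Request getDocAllowing404() {
        Request request = new Request("GET", "/my-index/_doc/1");
        // A 404 now yields a normal Response instead of a ResponseException.
        request.addParameter("ignore", "404");
        return request;
    }
}
```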
* Make forbidden apis check cacheable and cc compatible
* Port CheckForbiddenApiTask to use worker api
* Simplify runtime classpath for CheckForbiddenApisTask
The underlying issue is closed, so either this test is running
correctly now, is still failing for valid reasons, or can be removed.
Either way we need to enable it to see.
Removing the custom dependency checksum functionality in favor of Gradle's built-in dependency verification support.
- Use sha256 in favor of sha1 as sha1 is not considered safe these days.
Closes https://github.com/elastic/elasticsearch/issues/69736
This PR reworks the testing conventions precommit plugin. This plugin now:
- is compatible with yaml, java rest tests and internalClusterTest (i.e. different sourceSets per test type)
- enforces test base class and simple naming conventions (as it did before)
- adds one check task per test sourceSet
- uses the worker api to improve task execution parallelism and encapsulation
- is gradle configuration cache compatible
This also ports the TestingConventions integration testing to Spock and removes the build-tools-internal/test kit folder that is not required anymore. We also add some common logic for testing java related gradle plugins.
We will apply further cleanup on other tests within our test suite in a dedicated follow-up.
Thanks to https://bugs.eclipse.org/bugs/show_bug.cgi?id=574437,
we've run into a situation where Spotless is incorrectly formatting
a particular piece of syntax (due to the underlying Eclipse bug). We
were able to turn off formatting of this syntax using `// @formatter:off`
and `// @formatter:on`, but there was a further problem. We configure
IntelliJ to use the Eclipse formatter plugin, but this doesn't
respect the `@formatter` tags since these are set at the Spotless
level, not the Eclipse formatter level. Note that these tags aren't
set in the Eclipse formatter config, because there we use `// tag::`
and `// end::` in order to avoid reformatting docs snippets, which
have a much narrower line width.
What a mess.
So, to get around all this, drop the `@formatter` tags and tweak
our custom `SnippetLengthCheck` Checkstyle rule so that
`// tag:noformat` regions are not subject to the narrower line length
check, but are still exempt from formatting.
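A sketch of the resulting convention; the closing marker shown is an assumption, the commit only names `// tag:noformat`:

```java
class NoFormatExample {
    // tag:noformat
    static final String[][] GRID = {
        { "a", "b" },
        { "c", "d" },
    };
    // end:noformat
}
```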
Matchers is deprecated in Mockito, in favor of the newer
ArgumentMatchers class. In fact, internally Matchers just extends
ArgumentMatchers as all the methods there were moved. This commit
changes all imports of org.mockito.Matchers to
org.mockito.ArgumentMatchers.
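The mechanical change, in miniature:

```java
// Before: deprecated in Mockito 2.x
// import static org.mockito.Matchers.anyString;

// After: same methods, new home
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;

class MatchersMigrationExample {
    @SuppressWarnings("unchecked")
    void stubbing() {
        List<String> list = mock(List.class);
        when(list.contains(anyString())).thenReturn(true);
    }
}
```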
Securemock is a wrapper around Mockito that monkey patches internals of
Mockito to work with the SecurityManager. However, the library has not
been updated in several years due to the complicated nature of this
monkey patching. This has left us with an ancient version of Mockito,
missing out on updates to the library in the last half decade.
While Securemock currently works with Mockito 1.x, in 2.x an official
means of plugging into mockito was added, MockMaker. This commit removes
securemock as a dependency of the test framework, replacing it with a
modern version of Mockito, and implementing a MockMaker that integrates
with SecurityManager.
Note that while there is a newer version of Mockito available, 4.0, it
has several deprecations removed that are used throughout Elasticsearch.
Those can be addressed in followups, and then a subsequent upgrade to
4.0 should be possible.
Relates #79567, closes #40334
A recent change to the deprecation logs provided the capability to emit deprecations at critical vs. warning levels (#77482).
However, deprecated settings always log at critical level, without the ability to express that the setting deprecation is only a
warning.
This commit exposes the ability to set the deprecation level when deprecating a setting.
Closes #78781
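A sketch of what declaring the two levels might look like; `Property.DeprecatedWarning` is assumed here as the warning-level marker:

```java
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;

class ExampleSettings {
    // Logs at critical level when used (previous behavior for all deprecations)
    static final Setting<Boolean> OLD_CRITICAL =
        Setting.boolSetting("example.old_setting", false, Property.NodeScope, Property.Deprecated);

    // Logs at warning level only (assumed property name)
    static final Setting<Boolean> OLD_WARNING =
        Setting.boolSetting("example.legacy_setting", false, Property.NodeScope, Property.DeprecatedWarning);
}
```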
This adds support for the headers necessary for REST version compatibility to the High Level Rest
Client (HLRC).
Compatibility mode can be turned on either with `.setAPICompatibilityMode(true)` when creating
the client, or by setting the `ELASTIC_CLIENT_APIVERSIONING` environment variable to `true`, similar to our other
Elasticsearch clients.
Resolves #77859
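A sketch of the programmatic option; `RestHighLevelClientBuilder` and the exact method casing are assumptions based on the description above:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.RestHighLevelClientBuilder;

class CompatibilityModeExample {
    static RestHighLevelClient build() {
        RestClient lowLevelClient = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();
        return new RestHighLevelClientBuilder(lowLevelClient)
            .setApiCompatibilityMode(true) // or: export ELASTIC_CLIENT_APIVERSIONING=true
            .build();
    }
}
```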
- Use file property and conventions to avoid afterEvaluate hook
- Simplify root build script
- One little step closer to configuration cache compliance
This commit switches the security and identity-provider plugins to use
v4.0.1 of the OpenSAML library (upgraded from v3.4).
In order to facilitate this upgrade the following changes are also
made:
- Common Codec is upgraded to 1.14 across all modules
- Guava is upgraded to v28.2 in the 2 affected modules
Relates: #71983
* Reformatting to keep Checkstyle passing after formatting
* Configure spotless everywhere, and disable the tasks if necessary
* Add XContentBuilder helpers, fix test
* Tweaks
* Add a TODO
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
Follow-up to #73434
Ensures that High Level Rest Client is running against a verified
Elasticsearch. When the first request is sent on HLRC, a request to the
info endpoint is made first to verify the product identification and
version.
This moves the public build api and plugins into a separate included build called 'build-tools',
removing the duplication of importing buildSrc twice (the 2nd import as build-tools).
The elasticsearch internal build logic is kept in build-tools-internal as an included build, which allows us better handling of this project than it just being a buildSrc project (e.g. we can reference tasks directly from the root build etc.)
Convention logic applied to both projects will live in a new build-conventions project.
Instead of using the same handler with new latches each time, the test now registers a completely new handler for every request. This prevents a race condition that led to deadlock and timeout.
Local testing shows around 500k iterations without a failure.
Closes #45577
This commit introduces data_content, data_hot, data_warm,
data_cold, and data_frozen to the low level REST client (LLRC).
Since the LLRC only cares about dedicated masters, this change
simply makes the LLRC aware of the new roles and does not have
any functional impact. The tests have been adjusted accordingly.
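For context, a sketch of the role-aware behavior the LLRC exposes; `NodeSelector.SKIP_DEDICATED_MASTERS` is the existing selector that relies on role parsing:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.NodeSelector;
import org.elasticsearch.client.RestClient;

class NodeRolesExample {
    static RestClient build() {
        // Role parsing is what lets this selector tell a dedicated master apart
        // from a node that also holds one of the data_* roles
        // (data_content, data_hot, data_warm, data_cold, data_frozen).
        return RestClient.builder(new HttpHost("localhost", 9200, "http"))
            .setNodeSelector(NodeSelector.SKIP_DEDICATED_MASTERS)
            .build();
    }
}
```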
As per the new licensing change for Elasticsearch and Kibana this commit
moves existing Apache 2.0 licensed source code to the new dual license
SSPL+Elastic license 2.0. In addition, existing x-pack code now uses
the new version 2.0 of the Elastic license. Full changes include:
- Updating LICENSE and NOTICE files throughout the code base, as well
as those packaged in our published artifacts
- Update IDE integration to now use the new license header on newly
created source files
- Remove references to the "OSS" distribution from our documentation
- Update build time verification checks to no longer allow Apache 2.0
license header in Elasticsearch source code
- Replace all existing Apache 2.0 license headers for non-xpack code
with updated header (vendored code with Apache 2.0 headers obviously
remains the same).
- Replace all Elastic license 1.0 headers with new 2.0 header in xpack.
Adds an X-Elastic-Client-Meta header to http requests sent by RestClient. This
header contains information about the runtime environment that is meant to
allow analyzing usage context by collecting this information on the receiving
side of requests, like a proxy server in front of ES.
Using a custom header allows client applications to change the User-Agent
header for their own purpose without losing this information.
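A sketch of the scenario this enables; the agent string below is made up, and the X-Elastic-Client-Meta header is added by the client itself:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;

class ClientMetaExample {
    static Request customUserAgent() {
        // The application overrides User-Agent for its own purposes; the
        // runtime metadata survives in X-Elastic-Client-Meta.
        Request request = new Request("GET", "/");
        RequestOptions.Builder options = RequestOptions.DEFAULT.toBuilder();
        options.addHeader("User-Agent", "my-app/1.0");
        request.setOptions(options);
        return request;
    }
}
```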
We have an in-house rule to compare explicitly against `false` instead
of using the logical not operator (`!`). However, this hasn't
historically been enforced, meaning that there are many violations in
the source at present.
We now have a Checkstyle rule that can detect these cases, but before we
can turn it on, we need to fix the existing violations. This is being
done over a series of PRs, since there are a lot to fix.
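The rule, in miniature:

```java
class BooleanStyleExample {
    void close(boolean closed) {
        if (closed == false) {   // preferred in-house style
            // ...
        }
        // if (!closed) { ... }  // flagged by the new Checkstyle rule
    }
}
```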
We were depending on BouncyCastle FIPS' own mechanics to set
itself in approved only mode since we run with the Security
Manager enabled. The check during startup seems to happen before we
set our restrictive SecurityManager in
org.elasticsearch.bootstrap.Elasticsearch, though, and this means that
BCFIPS would not be in approved only mode unless explicitly
configured so.
This commit sets the appropriate JVM property to explicitly set
BCFIPS in approved only mode in CI and adds tests to ensure that we
will be running with BCFIPS in approved only mode when we expect to.
It also sets xpack.security.fips_mode.enabled to true for all test clusters
used in fips mode and sets the distribution to the default one. It adds a
password to the elasticsearch keystore for all test clusters that run in fips
mode.
Moreover, it changes a few unit tests where we would use bcrypt even in
FIPS 140 mode. These would still pass since we are bundling our own
bcrypt implementation, but are now changed to use FIPS 140 approved
algorithms instead for better coverage.
It also addresses a number of tests that would fail in approved only mode.
Mainly:
Tests that use PBKDF2 with a password less than 112 bits (14 chars). We
elected to change the passwords used everywhere to be at least 14
characters long instead of mandating
the use of pbkdf2_stretch because both pbkdf2 and
pbkdf2_stretch are supported and allowed in fips mode and it makes sense
to test with both. We could possibly figure out the password algorithm used
for each test and adjust password length accordingly only for pbkdf2 but
there is little value in that. It's good practice to use strong passwords so if
our docs and tests use longer passwords, then it's for the best. The approach
is brittle as there is no guarantee that the next test that will be added won't
use a short password, so we add some testing documentation too.
This leaves us with a possible coverage gap since we do support passwords
as short as 6 characters but only test with > 14 chars; the
validation itself was not tested even before. Tests can be added in a follow-up,
outside of a FIPS-related context.
Tests that use a PKCS12 keystore and were not already muted.
Tests that depend on running test clusters with a basic license or
using the OSS distribution, as FIPS 140 support is not available in
either of these.
Finally, it adds some information around FIPS 140 testing in our testing
documentation reference so that developers can hopefully keep in
mind fips 140 related intricacies when writing/changing docs.
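A sketch of the explicit configuration, assuming the standard BCFIPS property and registrar API; in practice the property must be set (e.g. via JVM args in CI) before the provider initializes:

```java
import org.bouncycastle.crypto.CryptoServicesRegistrar;

class FipsModeCheck {
    public static void main(String[] args) {
        // Normally passed as -Dorg.bouncycastle.fips.approved_only=true;
        // BCFIPS reads it when the provider initializes. Once in approved
        // only mode, non-approved algorithms (e.g. bcrypt) are rejected.
        System.setProperty("org.bouncycastle.fips.approved_only", "true");
        System.out.println(CryptoServicesRegistrar.isInApprovedOnlyMode());
    }
}
```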
When a gzip-encoded response is decompressed, the response should no longer
have a content-encoding header and content-length should be set to
"unknown". GzipDecompressingEntity correctly does this for the entity
but the response still reported the original response's content-encoding
and content-length headers.
Adds a `RestClient.setCompressionEnabled()` setting that will gzip-
compress request bodies and add an `Accept-Encoding: gzip` header so
that the ES server can send compressed responses.
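A sketch of opting in; the setter living on the client builder is an assumption based on the description above:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

class CompressionExample {
    static RestClient build() {
        // Request bodies are gzip-compressed and responses may come back
        // gzip-encoded; the client decompresses them transparently.
        return RestClient.builder(new HttpHost("localhost", 9200, "http"))
            .setCompressionEnabled(true)
            .build();
    }
}
```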
This removes the assertion that the header warnings we parse in the
REST client responses conform to RFC 7234, because we are not in full control
of the warnings that could be present in the responses (i.e. proxies might
emit warnings that don't comply).
We still maintain this assertion on the ES side (see `HeaderWarning#addWarning`)
for the warnings we emit.
The domain part of a Cloud-Id can contain an optional custom port, e.g.
cloud.example.org:9443. This feature is used for Elastic Cloud
Enterprise installations that can't use the default port 443.
This change fixes RestClient.build() to correctly handle custom ports.
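A sketch of the port handling, assuming the well-known Cloud-Id layout `label:base64(domain$es-uuid$kibana-uuid)`:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import org.apache.http.HttpHost;

class CloudIdExample {
    static HttpHost esHostFromCloudId(String cloudId) {
        // Decode the part after "label:" -> "domain$es-uuid$kibana-uuid"
        String encoded = cloudId.substring(cloudId.indexOf(':') + 1);
        String[] parts = new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8).split("\\$");
        String domain = parts[0];                 // e.g. "cloud.example.org:9443"
        int sep = domain.indexOf(':');
        String host = sep < 0 ? domain : domain.substring(0, sep);
        int port = sep < 0 ? 443 : Integer.parseInt(domain.substring(sep + 1));
        // The ES endpoint is "<es-uuid>.<domain>", on the custom port if present.
        return new HttpHost(parts[1] + "." + host, port, "https");
    }
}
```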
Java's regex engine implements grouping over patterns that prescribe alternate paths
using recursion. This could lead to a StackOverflowError given enough characters
in the target text.
This replaces the `"((?:\t| |!|[\\x23-\\x5B]|[\\x5D-\\x7E]|[\\x80-\\xFF]|\\\\|\\\\\")*)\"`
group pattern with a lazy "get all characters between quotes" pattern `\"(.*?)\"`.
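The lazy pattern, in miniature (the warning header below is made up):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class LazyQuotedPattern {
    // The lazy group grabs everything between the quotes without the
    // recursive alternation that risked a StackOverflowError.
    private static final Pattern QUOTED = Pattern.compile("\"(.*?)\"");

    static String firstQuoted(String warningHeader) {
        Matcher m = QUOTED.matcher(warningHeader);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(firstQuoted("299 Elasticsearch \"this request is deprecated\""));
    }
}
```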
- Use java-library instead of plugin to allow api configuration usage
- Remove explicit references to runtime configurations in dependency declarations
- Make test runtime classpath input for testing convention
- required as java library will by default not have build jar file
- jar file is now explicit input of the task and gradle will ensure its properly build
Different kinds of requests may need different request options from the client
default. Users can optionally set RequestConfig on a single request's
RequestOptions to override the default. Without this, socketTimeout can only
be set at RestClient initialization.
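A sketch of a per-request override; `setRequestConfig` on `RequestOptions.Builder` is the addition described above, and the endpoint is made up:

```java
import org.apache.http.client.config.RequestConfig;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RequestOptions;

class PerRequestTimeoutExample {
    static Request slowRequest() {
        // Override the client-wide socket timeout for this one request only.
        RequestConfig config = RequestConfig.custom()
            .setSocketTimeout(60_000)  // ms
            .build();
        Request request = new Request("POST", "/my-index/_forcemerge");
        request.setOptions(RequestOptions.DEFAULT.toBuilder().setRequestConfig(config));
        return request;
    }
}
```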