This introduces a new getHistoricalFeatures() method on ESRestTestCase
which returns a map of historical feature version mappings loaded from
the FeatureSpecification implementations of any plugins/modules in use
by the current test suite. The mappings are generated by a new Gradle
task at build time, and then injected into the test runtime as a
System property.
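As a rough sketch of how such a mapping might be consumed on the test side, the snippet below reads a metadata file whose path is passed in via a system property. The property name, file format, and class name are assumptions for illustration, not the actual implementation.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public abstract class HistoricalFeaturesSupport {

    /**
     * Reads "name=version" pairs from a metadata file whose location the
     * Gradle task passes in via a system property (name assumed here).
     */
    protected static Map<String, String> getHistoricalFeatures() {
        String path = System.getProperty("tests.features.metadata.path");
        if (path == null) {
            return Map.of(); // no feature metadata generated for this suite
        }
        Map<String, String> features = new HashMap<>();
        try {
            for (String line : Files.readAllLines(Path.of(path))) {
                String[] parts = line.split("=", 2);
                if (parts.length == 2) {
                    features.put(parts[0].trim(), parts[1].trim());
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return features;
    }
}
```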
This commit upgrades to OpenSAML v4.3.0.
Versions of OpenSAML ≥ 4.1 have a hard dependency on the non-FIPS release of BouncyCastle.
This would prevent ES from being able to run in a JVM where BC-FIPS is configured as the security provider.
Closes: #71983
Co-authored-by: Tim Vernum <tim@adjective.org>
Using Gradle toolchain support requires refactoring how the composite build is composed.
We added three toolchain resolvers:
1. A resolver for the defined bundled version, resolved from Oracle as OpenJDK.
2. A resolver for all available JDKs from Adoptium.
3. A resolver for archived Oracle JDK distributions.
With these in place, we should be able to remove the JdkDownloadPlugin altogether, but we'll do that in a separate effort. A minimal resolver sketch follows below.
Fixes #95094
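For illustration, a minimal resolver under Gradle's `JavaToolchainResolver` API (Gradle 7.6+) could look like the sketch below. The class name, package, and Adoptium URL details are illustrative rather than the actual resolvers added here, and registration through a settings plugin and the `toolchainManagement` block is omitted.

```java
package org.example.toolchains;

import org.gradle.jvm.toolchain.JavaLanguageVersion;
import org.gradle.jvm.toolchain.JavaToolchainDownload;
import org.gradle.jvm.toolchain.JavaToolchainRequest;
import org.gradle.jvm.toolchain.JavaToolchainResolver;

import java.net.URI;
import java.util.Optional;

// Registration (not shown) happens via a settings plugin; this class only
// demonstrates the resolve step itself.
public abstract class AdoptiumToolchainResolver implements JavaToolchainResolver {

    @Override
    public Optional<JavaToolchainDownload> resolve(JavaToolchainRequest request) {
        JavaLanguageVersion version = request.getJavaToolchainSpec().getLanguageVersion().get();
        // Assumes linux/x64 for brevity; a real resolver would also inspect the
        // requested build platform.
        URI uri = URI.create("https://api.adoptium.net/v3/binary/latest/"
            + version.asInt() + "/ga/linux/x64/jdk/hotspot/normal/eclipse");
        return Optional.of(JavaToolchainDownload.fromUri(uri));
    }
}
```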
We need to verify, for each release, that our stable plugin APIs
are not breaking.
This commit adds some Gradle support for basic backwards compatibility
testing. On the Gradle side, we add a new qa project to test the
current commit against downloads of released versions, and against
fresh builds of snapshot versions.
As for the actual comparison, we break up the output of javap (the
class file disassembler) by line and create maps of classes to public
class, field, and method declarations within those class files. We then
check that the signature map from the new jar is not missing any
elements present in the old jar. This method has known limitations,
which are documented in the JarApiComparisonTask class.
Co-authored-by: Mark Vieira <portugee@gmail.com>
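The comparison idea can be sketched roughly as follows; class and method names are illustrative, not the actual JarApiComparisonTask code.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public final class SignatureComparison {

    /** Keep only the lines of javap output that look like public API declarations. */
    static Set<String> publicDeclarations(String javapOutput) {
        return javapOutput.lines()
            .map(String::trim)
            .filter(line -> line.startsWith("public"))
            .collect(Collectors.toSet());
    }

    /** Returns declarations present in the old API but missing from the new one. */
    static Map<String, Set<String>> missingDeclarations(
            Map<String, Set<String>> oldApi, Map<String, Set<String>> newApi) {
        return oldApi.entrySet().stream()
            .map(entry -> {
                Set<String> missing = new HashSet<>(entry.getValue());
                missing.removeAll(newApi.getOrDefault(entry.getKey(), Set.of()));
                return Map.entry(entry.getKey(), missing);
            })
            .filter(entry -> entry.getValue().isEmpty() == false)
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
```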
This commit adds a new test framework for configuring and orchestrating
test clusters for both Java and YAML REST testing. This will eventually
replace the existing "test-clusters" Gradle plugin and the build-time
cluster orchestration.
This project has a problem with availability of Docker images after
release. Disabling individual tasks is tricky because it uses test
fixtures, so instead just skip the project entirely until we can work
out a way forward.
With this change we rely only on the checkstyle version declared in the buildLibs.toml Gradle version catalog.
Also added some hints on Gradle best practices.
This is a follow-up to #88283.
Elasticsearch provides several command line tools, as well as the main script to start Elasticsearch. While most of the logic is abstracted away for CLI tools, the main elasticsearch script has hundreds of lines of platform-specific shell code. That code is difficult to maintain because it uses many special shell features which must then also exist on other platforms (i.e. Windows batch files). Additionally, the logic in these scripts is not easy to test: we must be on the actual platform and test with a full installation of Elasticsearch, which is relatively slow compared to most in-process tests.
This commit replaces the logic of the main server script, as well as the Windows service management script, with Java. The new entrypoints use the CliToolLauncher. The server CLI figures out the necessary JVM options and so on, then launches the real server process. If run in the foreground, the launcher stays alive for the lifetime of Elasticsearch; the streams are effectively inherited, so all output from Elasticsearch still goes to the console. If daemonizing, the launcher waits until Elasticsearch is "ready" (meaning Node startup has completed), then detaches and exits.
Co-authored-by: William Brafford <william.brafford@elastic.co>
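A minimal sketch of the launch-and-wait behaviour described above, with placeholder option handling and readiness signalling (the real ServerCli is considerably more involved):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public final class ServerLauncherSketch {

    public static void main(String[] args) throws IOException, InterruptedException {
        boolean daemonize = List.of(args).contains("-d");

        List<String> command = new ArrayList<>();
        command.add(System.getProperty("java.home") + "/bin/java");
        command.add("-Xms1g"); // placeholder: the real launcher computes JVM options
        command.add("org.example.Server"); // placeholder main class

        ProcessBuilder builder = new ProcessBuilder(command);
        builder.inheritIO(); // child output still reaches the console

        Process server = builder.start();
        if (daemonize) {
            // The real launcher waits for a "ready" signal from the child before exiting.
            return;
        }
        // In the foreground, stay alive for the lifetime of the server process.
        System.exit(server.waitFor());
    }
}
```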
The "launchers" tool jar contains tools used when launching
Elasticsearch, to find the necessary jvm options and temp directory. In
preparation for #85758, this commit renames the "launchers" tool to
server-cli. The classes there will become part of the Java based launch
script, and the new naming better matches the intent of the jar, which
is to serve as the cli entrypoint for the Elasticsearch server.
relates #85758
CLI scripts have a common infrastructure in that they call to the shared
elasticsearch-cli shell script which launches them with the appropriate
java command line. However, each underlying Java class must implement
its own main method.
This commit introduces a single main method to be shared by CLIs. The
new CliToolLauncher takes in system properties to determine which tool
is being run, and a new CliToolProvider SPI allows defining and finding
the named tools.
relates #85758
Co-authored-by: William Brafford <william.brafford@elastic.co>
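A hedged sketch of how an SPI-based lookup like this can work; the `CliToolProvider` shape and the system property name are assumptions, and only the `ServiceLoader` mechanics are standard JDK behaviour.

```java
import java.util.ServiceLoader;

interface CliToolProvider {
    String name();
    Runnable create(); // stand-in: the actual SPI would return a command/tool type
}

public final class CliToolLauncherSketch {
    public static void main(String[] args) {
        // Assumption: the wrapper script selects the tool via a system property.
        String toolName = System.getProperty("cli.name");
        CliToolProvider provider = ServiceLoader.load(CliToolProvider.class).stream()
            .map(ServiceLoader.Provider::get)
            .filter(p -> p.name().equals(toolName))
            .findFirst()
            .orElseThrow(() -> new IllegalArgumentException("unknown tool: " + toolName));
        provider.create().run();
    }
}
```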
The ESClientYamlSuiteTestCase is used to run YAML tests throughout
Elasticsearch. It uses the low-level REST client's sniffer to discover
nodes, but the sniffer is not needed anywhere else in the test
framework.
This commit creates a new project, `:test:rest-runner`, which is meant to
house the REST test running infrastructure. This has two purposes. The first
is to remove the sniffer from the test framework dependencies, because
it transitively depends on Jackson. The second is to set up the runner for
future refactorings where it could be made to not depend on the entire
test framework, though how that could work is left for the future.
This commit adds a jar separate from the test framework to provide
utilities for testing x-content related code. The first thing moved
there is the base schema validation test case, which also pulls along
the com.networknt dependency and jackson. For now these are direct
dependencies, though we could consider shading them in the future so as
not to expose downstream projects to them, which may have version
conflicts.
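For reference, the kind of validation this brings along looks roughly like the following, using the com.networknt json-schema-validator together with Jackson; the resource names are placeholders.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.networknt.schema.JsonSchema;
import com.networknt.schema.JsonSchemaFactory;
import com.networknt.schema.SpecVersion;
import com.networknt.schema.ValidationMessage;

import java.io.InputStream;
import java.util.Set;

public final class SchemaValidationSketch {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonSchemaFactory factory = JsonSchemaFactory.getInstance(SpecVersion.VersionFlag.V7);
        try (InputStream schemaStream = SchemaValidationSketch.class.getResourceAsStream("/schema.json")) {
            JsonSchema schema = factory.getSchema(schemaStream);
            JsonNode document = mapper.readTree(SchemaValidationSketch.class.getResourceAsStream("/document.json"));
            Set<ValidationMessage> errors = schema.validate(document);
            if (errors.isEmpty() == false) {
                throw new AssertionError("schema violations: " + errors);
            }
        }
    }
}
```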
This change isolates the Jackson implementation of x-content parsers and generators to a separate classloader. The code is loaded dynamically upon accessing any x-content functionality.
The x-content implementation is embedded inside the x-content jar, as a hidden set of resource files. These are loaded through a special classloader created to initialize the XContentProvider through the service loader. One caveat to this approach is that IDEs will no longer trigger building the x-content implementation when it changes. However, running any test from the command line, or running a full build in IntelliJ, will trigger the implementation to be built.
Co-authored-by: ChrisHegarty <christopher.hegarty@elastic.co>
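Conceptually, the provider lookup works like the sketch below; a plain URLClassLoader stands in for the real embedded-resource classloader, and the `XContentProvider` interface here is only a stand-in borrowed from the description above.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ServiceLoader;

public final class ProviderLoaderSketch {

    interface XContentProvider { /* factory methods for parsers and generators */ }

    static XContentProvider loadProvider(URL implementationJar) {
        // Parent is the classloader holding the x-content API, so the provider can
        // implement the interface while its Jackson classes live only in the child loader.
        ClassLoader loader = new URLClassLoader(new URL[] { implementationJar },
            ProviderLoaderSketch.class.getClassLoader());
        return ServiceLoader.load(XContentProvider.class, loader)
            .findFirst()
            .orElseThrow(() -> new IllegalStateException("x-content implementation not found"));
    }
}
```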
Adds a new "ConsoleLoader" that uses jANSI in a separate classloader
to determine whether standard output is a real console (that is, not
redirected to a file or /dev/null, etc.).
Also updates security auto-configuration to only print out credentials
when there is a console.
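The real ConsoleLoader relies on jANSI loaded in its own classloader; as a rough JDK-only stand-in for the same check, `System.console()` is non-null only when the process is attached to an interactive terminal:

```java
import java.io.Console;

public final class ConsoleDetectionSketch {

    static boolean hasConsole() {
        Console console = System.console();
        return console != null; // null when output is redirected to a file, /dev/null, etc.
    }

    public static void main(String[] args) {
        if (hasConsole()) {
            System.out.println("generated credentials would be printed here");
        }
    }
}
```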
This upgrades the repository-hdfs plugin to Hadoop 3. Tests are performed against both Hadoop 2 and Hadoop 3 HDFS. The advantages of using the Hadoop 3 client are:
- Over-the-wire encryption works (tests coming in an upcoming PR).
- We don't have to add (or ask customers to add) additional JVM permissions to the Elasticsearch JVM.
- It's compatible with Java versions higher than Java 8.
This should give us a little more decoupling from JCenter, as the
Gradle plugin portal tries resolving third-party plugin dependencies from
JCenter by default.
This should shield us a bit better from JCenter outages that transiently
cause issues when resolving from the Gradle plugin portal.
Closes #74795.
Introduce two Docker image variants for Cloud. The first bundles
(actually installs) the S3, Azure and GCS repository plugins. The
second bundles all official plugins, but only installs the repository
plugins.
Both images also bundle Filebeat and Metricbeat.
The testing utils have been refactored to introduce a `docker`
sub-package. This allows the static `Docker.containerId` to be
shared without needing all the code in one big class. The code for
checking file ownership / permissions has also been refactored to
a more Hamcrest style, using a custom Docker file matcher.
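As an illustration of the "Hamcrest style" assertions (not the project's actual matcher), a custom file-permission matcher might look like this:

```java
import org.hamcrest.Description;
import org.hamcrest.TypeSafeMatcher;

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class HasPosixPermissions extends TypeSafeMatcher<Path> {

    private final Set<PosixFilePermission> expected;

    public HasPosixPermissions(String permissions) {
        this.expected = PosixFilePermissions.fromString(permissions); // e.g. "rw-r--r--"
    }

    @Override
    protected boolean matchesSafely(Path path) {
        try {
            return Files.getPosixFilePermissions(path).equals(expected);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public void describeTo(Description description) {
        description.appendText("a file with permissions ").appendValue(expected);
    }
}
```

A test would then read along the lines of `assertThat(path, new HasPosixPermissions("rw-r--r--"))`, keeping the assertion and its failure message in one place.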
In 8.0, with security on by default, we store the HTTP
layer CA PrivateKeyEntry in the http.ssl keystore (along
with the node certificate) so that it is available in our
Enrollment API transport actions.
When loading a keystore, the current behavior is that the
X509ExtendedKeyManager iterates through the PrivateKeyEntry
objects and returns the first key/certificate that satisfies
the requirements of the client and server configuration, with
no additional logic or filtering.
We need the KeyManager to deterministically pick the node
certificate/key in all cases as this is the intended entry to be
used for TLS on the HTTP layer.
This change introduces filtering when creating the in-memory
keystore the KeyManager is loaded with, so that it will not
include a PrivateKeyEntry when:
- there is more than one PrivateKeyEntry object in the keystore, and
- the leaf certificate associated with that PrivateKeyEntry is a
CA certificate
Related: #75097
Co-authored-by: Ioannis Kakavas <ioannis@elastic.co>
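The filtering can be sketched as follows, mirroring the two conditions above; this is an illustration of the approach, not the exact production code.

```java
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class KeyStoreFilterSketch {

    static KeyStore filterKeyEntries(KeyStore source, char[] keyPassword) throws Exception {
        KeyStore filtered = KeyStore.getInstance("PKCS12");
        filtered.load(null, null); // empty in-memory keystore

        List<String> keyAliases = new ArrayList<>();
        for (String alias : Collections.list(source.aliases())) {
            if (source.isKeyEntry(alias)) {
                keyAliases.add(alias);
            }
        }

        for (String alias : keyAliases) {
            Certificate leaf = source.getCertificate(alias);
            boolean isCa = leaf instanceof X509Certificate
                && ((X509Certificate) leaf).getBasicConstraints() >= 0;
            // Only drop CA entries when another (end-entity) entry can be picked instead.
            if (isCa && keyAliases.size() > 1) {
                continue;
            }
            KeyStore.PasswordProtection protection = new KeyStore.PasswordProtection(keyPassword);
            filtered.setEntry(alias, source.getEntry(alias, protection), protection);
        }
        return filtered;
    }
}
```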
- Fix newly introduced deprecated usages
- Update to a newer ospackage snapshot that includes the PR we provided to fix a deprecated usage
This Gradle release comes with improvements to incremental compilation that we should benefit from.
This moves the public build API and plugins into a separate included build called 'build-tools',
and removes the duplication of including buildSrc twice (the second import as build-tools).
The Elasticsearch internal build logic is kept in build-tools-internal as an included build, which allows us to handle this project better than if it were just a buildSrc project (e.g. we can reference tasks directly from the root build, etc.).
Convention logic applied to both projects will live in a new build-conventions project.
For the Docker distribution, we transform the archive distribution's
log4j2 config so that all messages are logged to the console by default.
However this transformation step happens when the Docker image is built,
which means that the source for the transformation must be included in
the Docker context.
Improve this by making the archive distribution's log4j config available
as an artifact in the build, then simply copying it into the Docker context.
The transform logic has been reimplemented as a simple copy filter.
Consequently, the `transform-log4j-config` project has been removed.
Air-gapped environments can't simply use the GeoIP database service provided by Infra, so they have to either use a proxy or recreate a similar service themselves.
This PR adds a tool to make this process easier. The basic workflow is:
1. Download databases from the MaxMind site to a single directory (either .mmdb files or gzipped tarballs with a .tgz suffix).
2. Run the tool with $ES_PATH/bin/elasticsearch-geoip -s directory/to/use [-t target/directory].
3. Serve static files from that directory (for example with docker run -v directory/to/use:/usr/share/nginx/html:ro nginx).
4. Use the server above as the endpoint for GeoIpDownloader (the geoip.downloader.endpoint setting).
5. To update the databases, simply put new files in the directory and run the tool again.
This change also adds support for relative paths in the overview JSON, because the CLI tool doesn't know the address it will be served under.
Relates to #68920
Closes #69930. Closes #69928.
The ES build currently has 2 types of Docker output - Docker images,
and Docker build contexts. At the moment, only the images are tested,
meaning that bugs in the build contexts can go unnoticed.
This PR changes how we create Docker images so that we first create
the build contexts, and then build the images using them. This does
require some sleight-of-hand - the build contexts expect to download
an Elasticsearch archive directly from the `Dockerfile`, which
will only ever work for non-snapshot version builds. In order to
get around this, the `Dockerfile` is modified to `COPY` in a local
archive file. Any other dependency files must exist in the build
context archive.
This PR also builds and tests the Iron Bank context. We do not
currently build a Docker image for this at all, and to build an
image requires us to set some build arguments to useful values. We
also need to provide all artifacts to the build, as the `Dockerfile`
cannot download anything. As a result, the `:distribution:docker`
project now defines a GitHub repository so that Gradle will download
a `tini` binary.
Note that there will need to be corresponding changes to
`release-manager`.
This change adds a component that will download new GeoIP databases from the infra service.
New databases are downloaded in chunks and stored in the .geoip_databases index.
Downloads are verified against the MD5 checksum provided by the server.
The current state of all stored databases is kept in the cluster state, as persistent task state.
Relates to #68920
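The checksum step can be sketched as below; chunk retrieval and the .geoip_databases index interaction are out of scope, and the class name is illustrative.

```java
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.List;

public final class ChecksumSketch {

    /** Reassembles the downloaded chunks and compares their MD5 digest with the server-provided value. */
    static void verify(List<byte[]> chunks, String expectedMd5) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        for (byte[] chunk : chunks) {
            md5.update(chunk);
        }
        String actual = HexFormat.of().formatHex(md5.digest());
        if (actual.equals(expectedMd5) == false) {
            throw new IllegalStateException("MD5 mismatch: expected " + expectedMd5 + " but was " + actual);
        }
    }
}
```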
This PR is a first attempt to get the build to run on an Apple M1 (ARM 64 / aarch64) machine.
I think the changes are mostly reasonable, apart from some hard-coding to use the Azul JVM,
which at the time of writing seems to be the only available JVM. I'll follow up when our preferred
JVM is available.
Closes #62758.
Include the Stack log4j config in the Docker image, in order to
make it possible to write logs in a container environment in the
same way as for an archive or package deployment. This is useful
in situations where the user is bind-mounting the logs directory
and has their own arrangements for log shipping.
To use stack logging, set the environment variable `ES_LOG_STYLE`
to `file`. It can also be set to `console`, which is the same as
not specifying it at all.
The Docker logging config is now auto-generated at image build time,
by running the default config through a transformer program when
preparing the distribution in an image builder step.
Also, in the docker distribution `build.gradle`, I changed a helper
closure into a class with a static method in order to fix an
issue where the Docker image was always being rebuilt, even when
there were no changes.