* Update Gradle wrapper to 8.13 (#122421)
* Fix Gradle deprecation warning: declaring an is- prefixed property with a Boolean type has been deprecated (see the sketch after this list).
* Make use of the new layout.settingsFolder API to address some cross-project references
* Fix buildParams snapshot check for multi-project projects
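For context, a minimal, hypothetical Java sketch of the deprecated is-property pattern and its usual fix; the class and property names are illustrative and not actual Elasticsearch build code.

```java
// Hypothetical Gradle extension illustrating the deprecation and the fix.
public abstract class ExampleBuildExtension {

    private boolean failOnWarning = true;

    // Deprecated pattern: an is-prefixed getter returning the boxed Boolean type.
    // Gradle 8.x warns about this and will eventually stop treating it as a property:
    // public Boolean isFailOnWarning() { return failOnWarning; }

    // Fix: return the primitive boolean (or rename the getter to getFailOnWarning()).
    public boolean isFailOnWarning() {
        return failOnWarning;
    }

    public void setFailOnWarning(boolean failOnWarning) {
        this.failOnWarning = failOnWarning;
    }
}
```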
(cherry picked from commit e19b2264af)
# Conflicts:
# build-tools-internal/src/main/java/org/elasticsearch/gradle/internal/BaseInternalPluginBuildPlugin.java
# build-tools-internal/src/main/java/org/elasticsearch/gradle/internal/InternalDistributionBwcSetupPlugin.java
# docs/build.gradle
# qa/lucene-index-compatibility/build.gradle
# x-pack/qa/multi-project/core-rest-tests-with-multiple-projects/build.gradle
# x-pack/qa/multi-project/xpack-rest-tests-with-multiple-projects/build.gradle
* More fixes
This updates the Gradle wrapper to 8.12
We addressed deprecation warnings caused by the update, including:
- Fix change in TestOutputEvent API
- Fix deprecation warnings in Groovy syntax
- Use latest ospackage plugin containing our fix
- Remove project usages at execution time (see the sketch after this list)
- Fix deprecated project references in repository-old-versions
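As an illustration of removing project usages at execution time (the usual configuration cache fix), here is a minimal sketch assuming a made-up task: the value is captured as a task input at configuration time so the task action never calls getProject().

```java
// Illustrative task: the project version is captured as an input Property at
// configuration time instead of reading getProject() inside the task action.
import org.gradle.api.DefaultTask;
import org.gradle.api.provider.Property;
import org.gradle.api.tasks.Input;
import org.gradle.api.tasks.TaskAction;

public abstract class PrintVersionTask extends DefaultTask {

    @Input
    public abstract Property<String> getProjectVersion();

    @TaskAction
    public void print() {
        // Configuration-cache friendly: no Project access at execution time.
        System.out.println("version: " + getProjectVersion().get());
    }
}
```

The build script or plugin then wires the value at configuration time, e.g. `tasks.register("printVersion", PrintVersionTask.class, t -> t.getProjectVersion().set(project.getVersion().toString()))`.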
(cherry picked from commit ba61f8c7f7)
# Conflicts:
# build-tools-internal/src/main/java/org/elasticsearch/gradle/internal/distribution/DockerCloudElasticsearchDistributionType.java
# build-tools-internal/src/main/java/org/elasticsearch/gradle/internal/distribution/DockerUbiElasticsearchDistributionType.java
# build-tools-internal/src/main/java/org/elasticsearch/gradle/internal/test/Fixture.java
# plugins/repository-hdfs/hadoop-client-api/build.gradle
# server/src/main/java/org/elasticsearch/inference/ChunkingOptions.java
# x-pack/plugin/kql/build.gradle
# x-pack/plugin/migrate/build.gradle
# x-pack/plugin/security/qa/security-basic/build.gradle
Fix incompatibilities between Gradle 8.8 and our internal API usages
- Update ospackage to a version that contains a fix we provided
- Tweak build logic to avoid deprecation warnings
- Use the newer permissions API (see the sketch after this list)
- Use a custom shadow plugin
- Rework ElasticsearchDistribution dependency resolution
- Update Gradle wrapper to 8.8
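Assuming the "newer permissions API" refers to the filePermissions/dirPermissions methods Gradle added in 8.3 to replace the deprecated fileMode/dirMode integers, here is a small sketch with an illustrative plugin, task, and paths:

```java
// Illustrative plugin registering a Copy task that uses the permissions API.
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.tasks.Copy;

public class PermissionsExamplePlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        project.getTasks().register("copyScripts", Copy.class, copy -> {
            copy.from("src/scripts");
            copy.into(project.getLayout().getBuildDirectory().dir("scripts"));
            // Replaces the deprecated copy.setFileMode(0755)
            copy.filePermissions(permissions -> permissions.unix("0755"));
        });
    }
}
```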
This commit makes zstd compression available to Elasticsearch. The
library is pulled in through maven in jar files for each platform, then
bundled in a new platform directory under lib. Access to the zstd
compression/decompression is through NativeAccess.
The build handles platform-specific code, which may be for ARM or x86.
Yet there are multiple ways to describe 64-bit x86, and the build
converts between the two in several places. This commit consolidates on
the x64 nomenclature in most places, except where necessary (e.g. ML still
uses x86_64).
relates #105715
There is a bug in the nebula ospackage Gradle plugin that breaks copy-spec-specific setgid handling.
We have created a patch for the plugin that we use for now to unblock us. This will be ported upstream
to the nebula main branch and become part of a release, but that requires some more polishing and will
be taken care of in a later PR.
Gradle now fully supports compiling, testing and running on Java 20.
Among other general performance improvements, this release introduces the --test-dry-run command line option, which allows checking whether tests are filtered by Gradle.
This required updating the nebula ospackage plugin, as setuid handling was broken in Gradle 8.3.
- Remove custom checksum build logic in the wrapper task
- Adjust JDK home handling to account for the change in behaviour in Gradle; this requires providing canonical paths for provisioned JDK homes.
- Fix a test by adding a workaround for a bug in the configuration cache
This changes the LoggedExec task to be configuration cache compatible. We changed the implementation
to use `ExecOperations` instead of extending the `Exec` task. As double-checked with the Gradle team, the `Exec` task
is not planned to be made configuration cache compatible out of the box anytime soon.
This is part of the effort on https://github.com/elastic/elasticsearch/issues/57918
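A minimal sketch of the injected-ExecOperations pattern described above, assuming a simplified task; this is not the actual LoggedExec implementation.

```java
// Simplified task showing the ExecOperations injection pattern.
import javax.inject.Inject;

import org.gradle.api.DefaultTask;
import org.gradle.api.tasks.TaskAction;
import org.gradle.process.ExecOperations;

public abstract class SimpleLoggedExec extends DefaultTask {

    private final ExecOperations execOperations;

    @Inject
    public SimpleLoggedExec(ExecOperations execOperations) {
        this.execOperations = execOperations;
    }

    @TaskAction
    public void run() {
        // ExecOperations is safe to use with the configuration cache,
        // unlike extending Exec or reaching for the Project at execution time.
        execOperations.exec(spec -> spec.commandLine("echo", "hello"));
    }
}
```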
We don't build these libraries ourselves and the license forbids
us from modifying them in any way, so we won't be able to make
this rule pass on them. All we can do is override it.
Fixes #87632
Elasticsearch provides several command line tools, as well as the main script to start Elasticsearch. While most of the logic is abstracted away for CLI tools, the main elasticsearch script has hundreds of lines of platform-specific shell code. That code is difficult to maintain because it uses many special shell features which then must also exist on other platforms (i.e. Windows batch files). Additionally, the logic in these scripts is not easy to test: we must be on the actual platform and test with a full installation of Elasticsearch, which is relatively slow (compared to most in-process tests).
This commit replaces the logic of the main server script, as well as the Windows service management script, with Java. The new entrypoints use the CliToolLauncher. The server CLI figures out all the necessary JVM options, then launches the real server process. If run in the foreground, the launcher will stay alive for the lifetime of Elasticsearch; the streams are effectively inherited so all output from Elasticsearch still goes to the console. If daemonizing, the launcher waits until Elasticsearch is "ready" (meaning node startup has completed), then detaches and exits.
Co-authored-by: William Brafford <william.brafford@elastic.co>
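A much simplified sketch of the launch-and-inherit-streams idea in the foreground case, assuming made-up class names, options, and classpath; it is not the actual server CLI code.

```java
// Illustrative launcher: start the real server process and inherit its streams
// so all of its output still reaches the console, then wait for it to exit.
import java.io.IOException;
import java.util.List;

public final class LauncherSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        List<String> command = List.of("java", "-Xmx1g", "-cp", "lib/*", "org.example.Server");
        Process server = new ProcessBuilder(command)
            .inheritIO()
            .start();
        System.exit(server.waitFor());
    }
}
```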
CLI scripts have a common infrastructure in that they call into the shared
elasticsearch-cli shell script, which launches them with the appropriate
java command line. However, each underlying Java class must implement
its own main method.
This commit introduces a single main method to be shared by CLIs. The
new CliToolLauncher takes in system properties to determine which tool
is being run, and a new CliToolProvider SPI allows defining and finding
the named tools.
relates #85758
Co-authored-by: William Brafford <william.brafford@elastic.co>
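To illustrate the SPI idea, here is a hedged sketch of a named-tool lookup via the JDK ServiceLoader; the interface shape shown is hypothetical and may not match the actual CliToolProvider in Elasticsearch.

```java
// Hypothetical SPI for named CLI tools, resolved with the JDK ServiceLoader.
import java.util.ServiceLoader;

interface ToolProvider {           // stand-in for an SPI such as CliToolProvider
    String name();                 // the tool name, e.g. "keystore"
    Runnable create();             // builds the tool's entry point
}

final class ToolLauncher {
    public static void main(String[] args) {
        // The launcher reads the requested tool name (here from a system property)
        // and finds the matching provider among those registered via META-INF/services.
        String toolName = System.getProperty("cli.name", "keystore");
        for (ToolProvider provider : ServiceLoader.load(ToolProvider.class)) {
            if (provider.name().equals(toolName)) {
                provider.create().run();
                return;
            }
        }
        throw new IllegalArgumentException("unknown tool: " + toolName);
    }
}
```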
Closes #82433. If the environment variable `RESTART_ON_UPGRADE` is true,
then ensure that we delay restarting Elasticsearch until after the
keystore is upgraded, or else we can run into permissions problems.
Closes #81652.
Convert the `repository-azure`, `repository-gcs` and `repository-s3`
plugins into modules, so that they are always included in the
Elasticsearch distribution. Also change plugin installation, removal
and syncing so that attempting to add or remove these plugins still
succeeds but is now a no-op.
We (mostly I) were initially advocating for the auto-generated files to
use unique names (the name containing a timestamp particle), in order to
avoid subsequent invocations of the config step conflicting with each
other. Moreover, I was hoping that these files would not have to be
handled directly by admins (that the enrollment process would be used).
However, experience proved us otherwise: admins have to manipulate these
files, and unique configuration names are hard to deal with in scripts
and docs, so this PR is all about using a fixed name for all the
generated files. _Labeling as a bug fix because the feedback is that it
very negatively impacts usability._ Closes
https://github.com/elastic/elasticsearch/issues/81057
Closes #70219.
Introduce a declarative way for the Elasticsearch server to manage plugins,
which reads the `elasticsearch-plugins.yml` file and works out which
plugins need to be added and/or removed to match the configuration. Also
make it possible to configure a proxy in the config file, instead of
through the environment.
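To illustrate the add/remove reconciliation described above, a small, hypothetical sketch of diffing the declared plugin set against the installed one; the class is illustrative and not the actual sync code.

```java
// Hypothetical plan: plugins declared but not installed get installed,
// plugins installed but no longer declared get removed.
import java.util.HashSet;
import java.util.Set;

final class PluginSyncPlan {
    final Set<String> toInstall;
    final Set<String> toRemove;

    PluginSyncPlan(Set<String> declared, Set<String> installed) {
        toInstall = new HashSet<>(declared);
        toInstall.removeAll(installed);
        toRemove = new HashSet<>(installed);
        toRemove.removeAll(declared);
    }
}
```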
Most of the work of adding and removing is still done in the
`InstallPluginAction` and `RemovePluginAction` classes, so the
behaviour should be the same as with the `install` and `remove`
commands. However, these commands will now abort if the above config
file exists. The intent is to make it harder for the configuration
to drift.
This new method only applies to `docker` distribution types at the
moment.
Since this syncing mechanism is declarative, rather than imperative,
the Cloud-specific plugin wrapper script is no longer required.
Instead, an environment variable informs `InstallPluginAction` to
install plugins from an archive directory instead of downloading
them, where possible.
This change introduces a CLI tool that can be run directly after
installation in packaged installations, to allow a node
that was auto-configured to be the initial node of a cluster during
installation (the default installation behavior) to be reconfigured
to join an existing cluster, using an enrollment token.
The use of this tool presumes that the user has the
appropriate permissions to read/write to the installation dirs and
that this node has not yet been started, i.e. this tool is run
directly after installation. It is destructive, as it removes
existing security auto-configuration, and as such it requires an
explicit verification from the user.
This is a follow-up to #7718.
The functionality to enroll a new node to a cluster was
introduced in #77292 as a CLI tool. This change replaces this
CLI tool with the option to trigger the enrollment functionality
on startup of Elasticsearch via a named argument that can be
passed to the elasticsearch startup script (--enrollment-token),
so that users who want to enroll a node to a cluster can do
this with one command instead of two.
In a follow-up PR we are introducing a CLI tool version of this
functionality that can be used to reconfigure packaged
installations.
This change introduces a new CLI tool that can be used to set and
reset the password of all the built-in users and users in the native
realm in Elasticsearch. It depends on the file realm being enabled
(which it is, by default) and can (re)set one built-in user password at a time.
It removes the previously introduced elasticsearch-reset-elastic-password
and elasticsearch-reset-kibana-system-password as their functionality is
covered by this new tool.
This commit ensures that for packaged installations
we will run the auto-configuration code at installation time (but not on upgrade).
This is needed because we expect Elasticsearch to be run as a service.
By the time the service runs, the configuration directory is not writable by the
user that runs Elasticsearch, so we can't persist configuration and key/certificate
material at runtime. Running auto-configuration at installation time
allows us to print information that the user has a better chance of seeing
(barring unattended installations). We don't have the option to show output to the
user when starting the service with systemctl.
During installation we:
- Generate TLS material, enable security and TLS, and persist them on disk
- Generate a password for the elastic user and store a hash of it
in the elasticsearch.keystore. This will be picked up by the starting
node and will be "promoted" to be the cluster-wide elastic
password on first startup. (see #78306)
- Notify the user in the output of the package installation about
whether we succeeded and what the password of the elastic user is.
This change makes it so x-pack-core and x-pack-security are bundled
in the INTEG TEST distribution that we use for testClusters in our
tests. There are two reasons for this:
- In https://github.com/elastic/elasticsearch/pull/77231 where we
are looking into enabling and auto-configuring security by default
for all nodes, we need to call out to ConfigInitialNode to
determine whether we should do the auto-configuration or not.
- Since we are enabling security by default, we should be looking
into enabling security for all of our tests moving forward, or
at least make a conscious decision about which ones run without
security. This change is a step in that direction.
Elasticsearch's keystore initial md5sum was added in #28928 with
the intention to allow us to remove the elasticsearch.keystore
file upon package removal, if this hadn't been altered after
installation. At that time this decision made perfect sense, as
the elasticsearch keystore only contains transient data by default
(keystore.seed) that is meant to be useful for bootstrap-related
actions and doesn't need to survive re-installations.
With Security ON by default, we will be storing additional
settings in the keystore upon installation (namely, the passwords
for the PKCS#12 keystores used for TLS), and these have a more
persistent nature. Since `remove` doesn't delete the configuration
directories and files where said PKCS#12 keystores are stored, it
makes sense to also not delete the elasticsearch.keystore which
stores the passwords.
* Update redline library to 1.2.10
The redline team just released version 1.2.10 of the redline
library, which contains our fix for the RPM signatures/headers.
Also a PR to update that dependency in the ospackage plugin has been
raised at https://github.com/nebula-plugins/gradle-ospackage-plugin/pull/402
* Update comment about enforcing redline 1.2.10
This adds support for SHA-256 header signatures in our RPMs by
updating the dependency on the redline library to a version
we have patched, until the provided PR (https://github.com/craigwblake/redline/pull/157)
is merged and released by the redline folks.
This work is related to #58257
We are using the `-n` bash operator to determine whether a second
argument is passed to the `postinst` script during installation of
our DEB packages. The reasoning is that we want to differentiate
between installations and upgrades, and a second argument to
postinst is only passed when this is an upgrade (the identifier
of the version the package is upgraded to).
The problem is that we use `-n` without quoting the $2, and this is
an unsafe practice as it might yield unexpected results for the `-n` and
`-z` operators used within test brackets. Testing with bash
5.0-6ubuntu1.1, `[ -n $2 ]` evaluates to true both for upgrades and
fresh installations (so fresh installs take the upgrade path), while `[ -n "$2" ]` behaves as expected.
References:
https://tldp.org/LDP/abs/html/comparison-ops.html
https://wiki.debian.org/MaintainerScripts
- Fix newly introduced deprecated usages
- Update to a newer ospackage snapshot that includes our provided PR fixing deprecated usage
This Gradle release comes with improvements to incremental compilation, which we should benefit from.
Extract usage of internal API from TestClustersPlugin and PluginBuildPlugin and related plugins and build logic
This includes a refactoring of ElasticsearchDistribution to handle types
better, in a way that lets us differentiate between Elasticsearch
distribution types supported in TestClustersPlugin and types only supported
in internal plugins.
It also introduces a set of internal versions of public plugins.
As part of this we also generate the plugin descriptors now.
As a follow-up we can actually move these publicly used classes into
an extra project (declared as an included build).
We keep LoggedExec and VersionProperties effectively public, and add a workaround for RestTestBase.
Related to #71593, we move all build logic that is for the Elasticsearch build only into
the org.elasticsearch.gradle.internal* packages.
This makes it clearer whether build logic is considered usable by external projects.
Ultimately we want to only expose TestCluster and PluginBuildPlugin logic
to third party plugin authors.
This is a very first step in that direction.
- Update gradle wrapper to gradle 7.0
- Remove deprecated usages to make build 7.0 compatible
- Fix excludes in docs snippet tasks (See https://github.com/gradle/gradle/issues/16160 for details)
- Fix deprecation warnings in 7.0
- Add explicit dependencies that have been missed
- Make extract native licenses tasks output dir more explicit
- Use a snapshot of the ospackage plugin that includes a fix for 7.0 already
- Fix test runtime classpath setup in repository-hdfs
- Make task dependency explicit to fix further deprecation warnings
- Remove manual check for http repo usages that has been deprecated in gradle 7.0
- Update spock to latest 2.0 milestone required for groovy 3
This commit renames JAVA_HOME to ES_JAVA_HOME in the default environment
file that ships with the Debian and RPM packages. This commented-out
environment variable should use our preferred name here.
As per the new licensing change for Elasticsearch and Kibana this commit
moves existing Apache 2.0 licensed source code to the new dual license
SSPL+Elastic license 2.0. In addition, existing x-pack code now uses
the new version 2.0 of the Elastic license. Full changes include:
- Updating LICENSE and NOTICE files throughout the code base, as well
as those packaged in our published artifacts
- Update IDE integration to now use the new license header on newly
created source files
- Remove references to the "OSS" distribution from our documentation
- Update build time verification checks to no longer allow Apache 2.0
license header in Elasticsearch source code
- Replace all existing Apache 2.0 license headers for non-xpack code
with updated header (vendored code with Apache 2.0 headers obviously
remains the same).
- Replace all Elastic license 1.0 headers with new 2.0 header in xpack.