Memory locking on Windows with the bundled JDK was broken by native
access refactoring. This commit fixes the linking issue and adds
a packaging test to ensure memory locking is invoked on all supported
platforms.
This is an attempt to fix occasional test failures where an assertion on a
request's response fails because the cluster has not finished
initializing and cannot yet serve requests.
Closes #109660
* Add an override of the allow list default setting to the aggs tests. This makes it possible to run the scripted metric aggs tests on Serverless, even though we disallow these aggs by default on Serverless.
* Move the allow list tests next to the scripted metric tests since these belong together.
Wholesale fix of every `TRAPPY_IMPLICIT_DEFAULT_MASTER_NODE_TIMEOUT` in
`o.e.snapshots` and `o.e.repositories`, just pulling them up to the REST
layer (where they become API params), the test suite (where they become
`TEST_REQUEST_TIMEOUT`), or some other place where an explicit value is
available.
Relates #107984
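For illustration only, a minimal sketch of the REST-layer end of this kind of change, assuming a handler that has both the RestRequest and a master-node request in hand (the helper class and method names here are made up):

```java
import org.elasticsearch.action.support.master.MasterNodeRequest;
import org.elasticsearch.rest.RestRequest;

class MasterTimeoutSketch {
    // Illustrative only: in the REST layer the timeout becomes an explicit
    // "master_timeout" API parameter rather than an implicit default.
    static void applyMasterTimeout(RestRequest restRequest, MasterNodeRequest<?> request) {
        request.masterNodeTimeout(
            restRequest.paramAsTime("master_timeout", request.masterNodeTimeout()));
    }
}
```

In tests, the equivalent call sites pass the suite's `TEST_REQUEST_TIMEOUT` constant explicitly instead.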
* Mechanical package change in IntelliJ
* A couple of manual fixups
* Export plugins.loading to deprecation
* Put plugin-cli in a module so we can export PluginsUtils to it.
This adds a new quantization mechanism for HNSW and flat indices. Here
we add `int4` quantization via the `int4_hnsw` and `int4_flat` index
types. This quantization methodology further reduces the memory required
for fast HNSW search to one eighth of what regular float32 values need.
That 8x reduction means that 1M 1024-dimension vectors go from requiring
3.8GB to 477MB.
Recall stays steady; there is some reduction, but it is recoverable by
slightly oversampling and reranking. For example, over 500k CohereV3
vectors, only 5 extra vectors need to be gathered to achieve over 0.98
recall in a brute-force scenario.
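A back-of-the-envelope check of those figures, counting raw vector storage only (the quoted numbers also reflect per-vector correction terms and HNSW graph overhead, so treat this as a sanity check rather than the precise accounting):

```java
// Rough memory math for 1M vectors of 1024 dimensions, raw storage only.
public class Int4MemorySketch {
    public static void main(String[] args) {
        long vectors = 1_000_000L;
        long dims = 1024L;
        long float32Bytes = vectors * dims * Float.BYTES; // 4 bytes per dimension
        long int4Bytes = vectors * dims / 2;              // 4 bits per dimension
        System.out.printf("float32: %.1f GB (%.1f GiB)%n", float32Bytes / 1e9, float32Bytes / Math.pow(1024, 3));
        System.out.printf("int4:    %.0f MB (%.0f MiB)%n", int4Bytes / 1e6, int4Bytes / Math.pow(1024, 2));
    }
}
```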

Introduces new cluster settings that allow only a certain set of scripts in scripted metric aggregations (a hedged registration sketch follows the list):
- search.aggs.only_allowed_metric_scripts, defaults to false
- search.aggs.allowed_inline_metric_scripts, defaults to empty list
- search.aggs.allowed_stored_metric_scripts, defaults to empty list
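A sketch of how settings with these names and defaults might be declared using the standard Setting API; the grouping class is made up and the actual registration in the change may differ:

```java
import java.util.List;
import java.util.function.Function;

import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;

class AllowedMetricScriptsSettingsSketch {
    static final Setting<Boolean> ONLY_ALLOWED_METRIC_SCRIPTS = Setting.boolSetting(
        "search.aggs.only_allowed_metric_scripts", false, Property.Dynamic, Property.NodeScope);

    static final Setting<List<String>> ALLOWED_INLINE_METRIC_SCRIPTS = Setting.listSetting(
        "search.aggs.allowed_inline_metric_scripts", List.of(), Function.identity(),
        Property.Dynamic, Property.NodeScope);

    static final Setting<List<String>> ALLOWED_STORED_METRIC_SCRIPTS = Setting.listSetting(
        "search.aggs.allowed_stored_metric_scripts", List.of(), Function.identity(),
        Property.Dynamic, Property.NodeScope);
}
```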
Currently these tests run against any old cluster older than 8.0.0, but
the fix that allowed `index.mapper.dynamic` to exist is only available
in 7.17.22.
Adjust these tests to run only if the old cluster version is after 7.17.21
and before 8.0.0.
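A hedged sketch of that version gate, assuming the test can see the old cluster's version as an org.elasticsearch.Version (the class, variable, and method names here are illustrative):

```java
import static org.junit.Assume.assumeTrue;

import org.elasticsearch.Version;

class OldClusterVersionGateSketch {
    // Only run when the old cluster is in [7.17.22, 8.0.0); `oldClusterVersion`
    // stands in for however the test obtains that version.
    static void assumeSupportedOldCluster(Version oldClusterVersion) {
        assumeTrue("requires an old cluster on 7.17.22+ but before 8.0.0",
            oldClusterVersion.onOrAfter(Version.fromString("7.17.22"))
                && oldClusterVersion.before(Version.V_8_0_0));
    }
}
```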
The test was disabled due to a problem with the availability of Docker images after the release.
It was reimplemented in :test:external-modules:apm-integration TracesApmIT.
Closes #90308
Many of our system indices that rely on auto_expand_replicas get created with an explicit number of replicas. That number is immediately overridden by the auto expand replicas functionality according to the number of data nodes available. While this causes no harm, it is misleading and unnecessary, a potential misuse that we can avoid for indices that we create ourselves.
Ideally we'd even prevent this from happening by rejecting such index creation requests, but that would be a breaking change that we'd prefer not to make at this time.
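For illustration, the cleanup amounts to dropping the explicit replica count from the settings we build ourselves and relying on auto-expansion alone; a hedged sketch using the standard setting keys, with the surrounding system index descriptors omitted:

```java
import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.common.settings.Settings;

class SystemIndexSettingsSketch {
    // Before: an explicit replica count that auto-expansion immediately overrides.
    static final Settings BEFORE = Settings.builder()
        .put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 1)
        .put(IndexMetadata.SETTING_AUTO_EXPAND_REPLICAS, "0-1")
        .build();

    // After: let auto_expand_replicas alone decide the replica count.
    static final Settings AFTER = Settings.builder()
        .put(IndexMetadata.SETTING_AUTO_EXPAND_REPLICAS, "0-1")
        .build();
}
```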
Currently, when upgrading a 7.x cluster to 8.x with the
`index.mapper.dynamic` index setting defined, the following happens:
- In case of a full cluster restart upgrade, the index setting gets archived and after the upgrade the cluster health is green.
- In case of a rolling cluster restart upgrade, shards of indices with the index setting fail to allocate as nodes start with the 8.x version. The result is that the cluster health is red and the index setting isn't archived. Closing and opening the index should archive the index setting and allocate the shards.
The change is about ensuring the same behavior happens when upgrading a
cluster from 7.x to 8.x with indices that have the
`index.mapper.dynamic` index setting defined. By re-defining the
`index.mapper.dynamic` index setting with the
`IndexSettingDeprecatedInV7AndRemovedInV8` property, the setting is
allowed to exist on 7.x indices but can't be defined on new indices
after the upgrade. This way we don't have to rely on archiving the index
setting, and upgrading via full cluster restart or rolling restart will
yield the same outcome.
Based on the test in #109301. Relates to #109160 and #96075
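A hedged sketch of what the re-defined setting could look like with that property; the default value and the exact registration site are assumptions, only the setting name and the property come from the description above:

```java
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Setting.Property;

class MapperDynamicSettingSketch {
    // Lets the setting exist on indices created in 7.x while rejecting it on
    // indices created after the upgrade, so no archiving is needed.
    static final Setting<Boolean> INDEX_MAPPER_DYNAMIC = Setting.boolSetting(
        "index.mapper.dynamic", true,
        Property.Dynamic, Property.IndexScope, Property.IndexSettingDeprecatedInV7AndRemovedInV8);
}
```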
This adds a SecuredFileAccessPermission that Elasticsearch and plugins can use to limit access to certain files, so that only code that has also been granted that same permission can access the specified files.
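A heavily hedged sketch of how a consumer of such a permission might guard a file read; the permission's name comes from the sentence above, but its package, shape, and the checking idiom are assumptions (a stub class stands in for the real one), not taken from the actual change:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.BasicPermission;

// Stub standing in for the real permission class; the real one lives in
// Elasticsearch and may differ in package and constructor.
class SecuredFileAccessPermission extends BasicPermission {
    SecuredFileAccessPermission(String path) {
        super(path);
    }
}

class SecuredFileReadSketch {
    // Hypothetical guard: demand the matching permission before touching the file.
    static String readSecuredFile(Path path) throws IOException {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            sm.checkPermission(new SecuredFileAccessPermission(path.toString()));
        }
        return Files.readString(path);
    }
}
```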
Bootstrap checks are an important part of ensuring proper Elasticsearch
configuration. They are often system-dependent, so checking that they work on
each supported platform should be part of testing. This commit adjusts
packaging tests to enable bootstrap checks.
Add the semantic query to the Query DSL, which is used to query `semantic_text` fields.
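As a rough illustration of the shape of a search using the new query; the field name and query text are examples, and the exact parameter set should be treated as approximate:

```java
public class SemanticQuerySketch {
    // Illustrative request body for searching a semantic_text field with the
    // new semantic query.
    static final String SEARCH_BODY = """
        {
          "query": {
            "semantic": {
              "field": "my_semantic_field",
              "query": "how do I reset my password"
            }
          }
        }
        """;
}
```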
---------
Co-authored-by: carlosdelest <carlos.delgado@elastic.co>
Co-authored-by: Jim Ferenczi <jim.ferenczi@elastic.co>
Now that mock logging has a single internal appender, the "appender"
suffix for MockLogAppender doesn't make sense. This commit renames the
class to MockLog. It was completely mechanical, done with IntelliJ
renames.
Previously readiness waited only on a master node being elected.
Recently it was also made to wait on file settings being applied. Yet
the node may be fully started before those file settings are applied.
The test expected readiness to be OK once the node finished starting.
This commit retries the readiness check until it succeeds, since the
readiness state is updated asynchronously from the node finishing starting.
Closes #108523
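A hedged sketch of the retry idiom, assuming a test that extends ESTestCase and a made-up readinessProbeIsReady() helper standing in for whatever check the test actually performs:

```java
// Inside a test extending ESTestCase: poll until the readiness probe reports
// ready instead of asserting once right after startup; assertBusy retries the
// assertion until it passes or times out.
public void testNodeEventuallyReportsReady() throws Exception {
    assertBusy(() -> assertTrue("node should eventually report ready", readinessProbeIsReady()));
}
```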
The test started failing because of the recent changes that allow closing (and deleting) shards asynchronously. As a result, the dangling index API now sees a directory in a partially deleted state and fails because it cannot interpret the partial data.
The fix retries the failed request on the client.
Existing uses of MockLogAppender first construct an appender, then call
capturing on the instance in a try-with-resources block. This commit
adds a new method, capture, which creates an appender and sets up the
capture at the same time. The intent is that this will replace the
existing capturing calls, but there are too many to change in one PR.
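A hedged before/after sketch of the usage described above; SomeService is a placeholder for the class whose logging is captured, expectation setup is elided, and the surrounding method names (capturing, assertAllExpectationsMatched) are written from memory of the MockLogAppender API, so treat the details as approximate:

```java
import org.elasticsearch.test.MockLogAppender;

class MockLogAppenderCaptureSketch {
    void existingStyle() {
        // Before: construct the appender, then start capturing separately.
        MockLogAppender appender = new MockLogAppender();
        try (var ignored = appender.capturing(SomeService.class)) {
            // ... add expectations, exercise the code under test ...
            appender.assertAllExpectationsMatched();
        }
    }

    void newStyle() {
        // After: the new capture method constructs and starts capturing in one step.
        try (var appender = MockLogAppender.capture(SomeService.class)) {
            // ... add expectations, exercise the code under test ...
            appender.assertAllExpectationsMatched();
        }
    }
}
```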