This lowers the number of documents used to test lookup because we have
seen a few failures over the last few months. These are all cases that we
expect to *pass*, so using fewer documents should make them even more likely
to pass.
Closes #125913
Closes #125779
We document support for snapshot repositories using `ftp://` URLs but it
seems this functionality has not worked for many years because of
security-manager restrictions, although nobody noticed because it was
not covered by any tests. The migration to the Entitlements framework
means that this functionality now works again, so this commit adds tests
to make sure we do not break it again in future.
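For reference, registering such a repository looks along these lines (the repository name and FTP address are illustrative placeholders; `ftp://` URLs must also be allowed via the `repositories.url.allowed_urls` setting):
```
PUT _snapshot/ftp_repo
{
  "type": "url",
  "settings": {
    "url": "ftp://ftp.example.com/snapshots"
  }
}
```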
In order to support a future TransportRequest variant that accepts the
response type, TransportRequest needs to be an interface. This commit
adds AbstractTransportRequest as a concrete implementation and makes
TransportRequest a simple interface that joins together the parent
interfaces from TransportMessage.
Note that this was done entirely in IntelliJ using structural find and
replace.
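A rough sketch of the resulting shape (the parent interfaces shown are indicative; the real set is whatever TransportMessage already exposes):
```java
// Sketch only: TransportRequest becomes a simple join of parent interfaces,
// while AbstractTransportRequest carries the shared concrete implementation.
public interface TransportRequest extends Writeable, TaskAwareRequest {
}

public abstract class AbstractTransportRequest implements TransportRequest {
    // common state and serialization logic previously defined on TransportRequest
}
```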
Recently we changed the implementation of
`testDataStreamLifecycleDownsampleRollingRestart` to use a temporary
state listener. We missed that the listener also had a timeout that was
considerably shorter than the `safeGet` timeout we were configuring. In this PR
we align these two timeouts.
Fixes: #123769
This adds the interface for search online prewarming with a default NOOP
implementation. It also hooks the interface into the SearchService after
we fork the query phase to the search thread pool.
Patchers transform specific classes in some "broken" dependencies to ensure they behave correctly (fixing a bug, disabling some undesired or dangerous behaviour, updating calls to deprecated or removed method overloads).
If we upgrade one of the dependencies we patch, there is a concern that the patchers may no longer work against the classes in the new version.
This PR addresses that concern by introducing a check on the SHA-256 digest of each patched class, to ensure we are operating on the same bytes the patcher was designed for. If the digest changes, the class has changed (e.g. due to a dependency update); in that case we fail the build with a specific error, so we can double-check that the patchers still work against the new classes.
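A minimal sketch of the digest check (class and method names here are hypothetical, not the actual build-tooling API):
```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

final class PatcherDigestCheck {

    /** Fails the build if the class bytes differ from those the patcher was written against. */
    static void verifyClassBytes(String className, byte[] classBytes, String expectedSha256) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            String actual = HexFormat.of().formatHex(digest.digest(classBytes));
            if (actual.equalsIgnoreCase(expectedSha256) == false) {
                throw new IllegalStateException(
                    "class [" + className + "] has changed (expected SHA-256 [" + expectedSha256
                        + "], got [" + actual + "]): verify that the patcher still applies"
                );
            }
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-256 is always present in the JDK
        }
    }
}
```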
Extracted from #126326
Relates to ES-11279
If updating the `index.time_series.end_time` fails for one data stream,
then UpdateTimeSeriesRangeService should continue updating this setting for other data streams.
The following error was observed in the wild:
```
[2025-04-07T08:50:39,698][WARN ][o.e.d.UpdateTimeSeriesRangeService] [node-01] failed to update tsdb data stream end times
java.lang.IllegalArgumentException: [index.time_series.end_time] requires [index.mode=time_series]
at org.elasticsearch.index.IndexSettings$1.validate(IndexSettings.java:636) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.index.IndexSettings$1.validate(IndexSettings.java:619) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.common.settings.Setting.get(Setting.java:563) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.common.settings.Setting.get(Setting.java:535) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.datastreams.UpdateTimeSeriesRangeService.updateTimeSeriesTemporalRange(UpdateTimeSeriesRangeService.java:111) ~[?:?]
at org.elasticsearch.datastreams.UpdateTimeSeriesRangeService$UpdateTimeSeriesExecutor.execute(UpdateTimeSeriesRangeService.java:210) ~[?:?]
at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.3.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
at java.lang.Thread.run(Thread.java:1575) ~[?:?]
```
This resulted in a situation where the `index.time_series.end_time` index setting was not updated for any data stream, which then caused data loss: metrics couldn't be indexed because no suitable backing index could be resolved:
```
the document timestamp [2025-03-26T15:26:10.000Z] is outside of ranges of currently writable indices [[2025-01-31T07:22:43.000Z,2025-02-15T07:24:06.000Z][2025-02-15T07:24:06.000Z,2025-03-02T07:34:07.000Z][2025-03-02T07:34:07.000Z,2025-03-10T12:45:37.000Z][2025-03-10T12:45:37.000Z,2025-03-10T14:30:37.000Z][2025-03-10T14:30:37.000Z,2025-03-25T12:50:40.000Z][2025-03-25T12:50:40.000Z,2025-03-25T14:35:40.000Z
```
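The fix amounts to isolating failures per data stream, roughly like this (`updateEndTime` is a hypothetical helper; the real loop lives in UpdateTimeSeriesRangeService):
```java
for (DataStream dataStream : dataStreams) {
    try {
        updateEndTime(dataStream); // hypothetical: bumps index.time_series.end_time
    } catch (Exception e) {
        // Log and carry on: one bad data stream must not block updates for the rest.
        logger.warn(() -> "failed to update [index.time_series.end_time] for data stream ["
            + dataStream.getName() + "]", e);
    }
}
```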
- Translate a 404 during a multipart copy into a `FileNotFoundException`
- Use multiple threads in `S3HttpHandler` to avoid `CopyObject`/`PutObject` deadlock
Closes #126576
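Illustratively, the 404 translation looks something like this (a sketch against the AWS SDK v2 exception type, not the exact production code):
```java
try {
    executeMultipartCopy(source, target); // hypothetical helper performing the copy
} catch (S3Exception e) {
    if (e.statusCode() == 404) {
        // The source blob disappeared mid-copy: surface it as a missing file,
        // matching the behaviour of the other read paths.
        throw new FileNotFoundException("copy source [" + source + "] not found: " + e.getMessage());
    }
    throw e;
}
```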
This change re-introduces the engine read/write lock to guard against engine resets.
It differs from #124635 in the following ways:
- uses the `engineMutex` for creating/closing engines
- uses the reentrant r/w lock for retaining engine instances and for resetting the engine
- acquires the reentrant read lock during refreshes to prevent deadlocks during resets
- adds tests to ensure no deadlock when re-acquiring the read lock in refresh listeners
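A rough sketch of the locking scheme, with placeholder types (the real implementation lives in the shard/engine code):
```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Function;

class EngineGuard {
    private final Object engineMutex = new Object();            // guards engine create/close
    private final ReadWriteLock engineLock = new ReentrantReadWriteLock();

    interface Engine {} // placeholder for this sketch

    private volatile Engine engine;

    /** Operations retaining the engine instance (including refreshes) take the read lock. */
    <T> T withEngine(Function<Engine, T> action) {
        engineLock.readLock().lock();
        try {
            return action.apply(engine);
        } finally {
            engineLock.readLock().unlock();
        }
    }

    /** Resetting the engine takes the write lock, excluding all readers for the duration. */
    void resetEngine(Function<Engine, Engine> reset) {
        engineLock.writeLock().lock();
        try {
            engine = reset.apply(engine);
        } finally {
            engineLock.writeLock().unlock();
        }
    }
}
```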
Relates ES-11447
* BlobContainer: add copyBlob method
If a container implements copyBlob, then the copy is
performed by the store, without client-side IO. If the store
does not provide a copy operation then the default implementation
throws UnsupportedOperationException.
This change provides implementations for the FS and S3 blob containers.
More will follow.
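The interface change is essentially a default method, along these lines (signature simplified relative to the real one):
```java
public interface BlobContainer {
    // ... existing read/write/list methods ...

    /**
     * Server-side copy of a blob from another container, with no client-side IO.
     * Stores without a native copy operation keep this default and throw.
     */
    default void copyBlob(BlobContainer sourceContainer, String sourceBlobName, String targetBlobName) {
        throw new UnsupportedOperationException("this blob container does not support copy");
    }
}
```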
Co-authored-by: elasticsearchmachine <infra-root+elasticsearchmachine@elastic.co>
Co-authored-by: David Turner <david.turner@elastic.co>
In the unexpected case that Elasticsearch dies due to a segfault or
other similar native issue, a core dump is useful in diagnosing the
problem. Yet core dumps are written to the working directory, which is
read-only for most installations of Elasticsearch. This commit changes
the working directory to the logs dir which should always be writeable.
This PR adds two new REST endpoints: one for listing running queries and one for getting information about a specific query.
* Resolves#124827
* Related to #124828 (initial work)
Changes from the API specified in the above issues:
* The get API is fairly minimal, as we don't yet have a way of fetching the memory used or the number of rows processed.
List queries response:
```
GET /_query/queries
// returns for each of the running queries
// query_id, start_time, running_time, query
{ "queries" : {
"abc": {
"id": "abc",
"start_time_millis": 14585858875292,
"running_time_nanos": 762794,
"query": "FROM logs* | STATS BY hostname"
},
"4321": {
"id":"4321",
"start_time_millis": 14585858823573,
"running_time_nanos": 90231,
"query": "FROM orders | LOOKUP country_code ON country"
}
}
}
```
Get query response:
```
GET /_query/queries/abc
{
  "id": "abc",
  "start_time_millis": 14585858875292,
  "running_time_nanos": 762794,
  "query": "FROM logs* | STATS BY hostname",
  "coordinating_node": "oTUltX4IQMOUUVeiohTt8A",
  "data_nodes": [ "DwrYwfytxthse49X4", "i5msnbUyWlpe86e7" ]
}
```
The mostly-optional parameters to `createBlobContainer` are getting
rather numerous in this test harness which makes the tests hard to read.
This commit introduces a builder to help name the provided parameters
and skip the omitted ones.
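Illustratively, the builder replaces a long positional argument list with named, skippable parameters (method names here are hypothetical):
```java
// Before: positional arguments, most of them null or defaulted, e.g.
//   createBlobContainer(null, null, null, 3, null);

// After: only the parameters a given test actually cares about are spelled out.
BlobContainer container = blobContainerBuilder()
    .maxRetries(3)
    .readTimeout(TimeValue.timeValueSeconds(5))
    .build();
```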
Transport actions have associated request and response classes. However,
the base type restrictions are not necessary to duplicate when creating
a map of transport actions. Relatedly, the ActionHandler class doesn't
actually need strongly typed action type and classes since they are lost
when shoved into the node client map. This commit removes these type
restrictions and generic parameters.
The backport to `8.x` needed some changes to pass through CI; this
commit forward-ports the relevant bits of those changes back into `main`
to keep the branches aligned.
Some `AbstractBlobContainerRetriesTestCase#createBlobContainer`
implementations choose a path for the container randomly, but we have a
need for a test which re-creates the same container against a different
`S3Service` and `BlobStore` and must therefore specify the same path
each time. This commit exposes a parameter that lets callers specify a
container path.
Today `ActionResponse$Empty` implements `ToXContentObject`, but yields
no bytes of content when serialized, which creates an invalid JSON
response. This commit removes the bogus interface and adjusts the
affected REST APIs to send a `text/plain` response instead.
Update the PerFieldFormatSupplier so that new standard indices use the
Lucene101PostingsFormat instead of the current default ES812PostingsFormat.
Currently, use of the new codec is gated behind a feature flag.
Rather than hard-coding a region name we should always auto-generate it
randomly during test execution. This commit replaces the remaining fixed
`String` arguments with a `Supplier<String>` argument to enable this.
This reverts commit 81555cc9d9 from
https://github.com/elastic/elasticsearch/pull/125060.
Fix https://github.com/elastic/elasticsearch/issues/126275
@idegtiarenko and I investigated and believe this needs reverting:
silently dropping results from the query response in case any index is
missing can lead to real problems if users don't spot their mistake. I'm
also not sure whether all the results would be dropped or only those from
some nodes/shards/clusters, which would make the problem even harder for
users to spot.
The main PR has no transport version bump, no new ESQL capability, and
was merged 15h ago - so it should be safe to just revert it. I noticed
there was a linked Serverless PR on the original PR, but it merely
disabled some obsolete tests on Serverless and doesn't require reverting
itself.
These tests fail in a FIPS JVM only because they use a secret key that
is unacceptably short. This commit replaces the relevant uses of
`randomIdentifier` with `randomSecretKey` so they work whether in FIPS
mode or not.
`ListObjects` and `ListObjectsV2` only really differ in their approach
to pagination, but today `S3HttpHandler` does not simulate pagination
anyway so we can use the same handling code for both APIs. The only
practical difference is that the v2 SDK requires the `<IsTruncated>`
element in a `ListObjectsV2` response, but this element is permitted in
both APIs so we add it here.
With this change, ES|QL will return partial results instead of failing
the entire query when encountering errors. Callers should check the
`partial_results` flag in the response to determine whether the result is
partial or complete. If returning partial results is not desired, this
behaviour can be overridden per request via the `allow_partial_results`
parameter in the query URL or globally via the cluster setting
`esql.allow_partial_results`.
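For example, partial results can be disabled for a single request like this (request shape as in the existing `_query` API; the query text is illustrative):
```
POST /_query?allow_partial_results=false
{
  "query": "FROM logs* | STATS COUNT(*)"
}
```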
Relates #122802
We recently cleared stack traces on data nodes before transport back to the coordinating node when `error_trace=false` to reduce unnecessary data transfer and memory on the coordinating node (#118266). However, all logging of exceptions happens on the coordinating node, so stack traces disappeared from the logs. This change logs stack traces directly on the data node when `error_trace=false`.
**Issue** The data stream lifecycle does not correctly register rollover
errors for the failure store.
**Observed behaviour** When the data stream lifecycle encounters a rollover
error it records it, unless it sees that the current write index of the
data stream doesn't match the source index of the request. However, the
write index check uses the write backing index rather than the failure
store's write index, so the failure gets ignored.
**Desired behaviour** When the data stream lifecycle encounters a rollover
error it will check the relevant write index before determining whether
the error should be recorded.
Today the `ListObjects` implementation in `S3HttpHandler` will put all
the common prefixes in a single `<CommonPrefixes>` container, but in
fact the real S3 gives each one its own container. The v1 SDK is lenient
and accepts either, but the v2 SDK requires us to do this correctly.
This commit fixes the test fixture to match the behaviour of the real
S3.
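Concretely, the difference looks like this (prefix names are illustrative):
```
<!-- before (fixture): all prefixes in a single container -->
<CommonPrefixes>
  <Prefix>logs/</Prefix>
  <Prefix>metrics/</Prefix>
</CommonPrefixes>

<!-- after (matching real S3): one container per prefix -->
<CommonPrefixes><Prefix>logs/</Prefix></CommonPrefixes>
<CommonPrefixes><Prefix>metrics/</Prefix></CommonPrefixes>
```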
- Fixed bug where 416 was being erroneously returned for zero-length blobs even with no Range header
- Fixed bug where partial upload wouldn't be completed if the last PUT included no data
- Return 206 (partial content) status when a Range header is specified
- Return an ETag on object get (BlobReadChannel uses this to ensure we fail when the blob is updated between successive chunks being fetched)
- The 416 on zero-length blobs was one of(?) the causes of #125668
The `METHOD /path/components?and=query` string representation of a
request is becoming increasingly difficult to parse, with slight
variations in parsing between the implementation in `S3HttpHandler` and
the various other implementations. This commit gets rid of the
string-concatenate-and-split behaviour in favour of a proper object that
has predicates for testing all the different kinds of request that might
be made against S3.
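A sketch of the shape of the replacement (record and predicate names are indicative, not the exact API):
```java
import java.util.List;
import java.util.Map;

/** Parsed form of a request to the S3 test fixture. */
record S3HttpRequest(String method, List<String> path, Map<String, String> query) {

    boolean isGetObjectRequest() {
        // GET /{bucket}/{key...} with no multipart-related query parameters
        return method.equals("GET") && path.size() >= 2 && query.containsKey("uploadId") == false;
    }

    boolean isInitiateMultipartUploadRequest() {
        // POST /{bucket}/{key...}?uploads
        return method.equals("POST") && query.containsKey("uploads");
    }
}
```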
This patch builds on the work in #113757, #122999, #124594, #125529, and
#125709 to natively store array offsets for scaled float fields instead of
falling back to ignored source when `synthetic_source_keep: arrays` is set.
Adds prefixes to various randomly-generated values to make it easier to
pin down where they're coming from in debugging sessions. Also forces
the STS expiry time to be rendered in UTC.