Adds a node feature that is conditionally added to the cluster state when the failure store
feature flag is enabled. All nodes in the cluster must have the node feature
present before failed documents are redirected to the failure store, whether from the
ingest node or from shard-level bulk failures.
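A minimal sketch of the gating this implies, assuming illustrative class and method names (`FailureStoreRedirectGate`, `canRedirectToFailureStore`) rather than the actual implementation:
```java
// Hedged sketch: redirecting to the failure store is only allowed once every
// node in the cluster advertises the node feature. Names are illustrative.
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.features.FeatureService;
import org.elasticsearch.features.NodeFeature;

class FailureStoreRedirectGate {
    // published by the plugin only when the failure store feature flag is on
    static final NodeFeature FAILURE_STORE = new NodeFeature("data_stream.failure_store");

    private final FeatureService featureService;

    FailureStoreRedirectGate(FeatureService featureService) {
        this.featureService = featureService;
    }

    boolean canRedirectToFailureStore(ClusterState state) {
        // true only if every node in the cluster has the feature
        return featureService.clusterHasFeature(state, FAILURE_STORE);
    }
}
```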
There is no need to keep these marker interfaces around when we're not using them anywhere; all they really do is hide a lot of code duplication. Removing them sets up the possible removal of hundreds of lines of downstream code.
With this PR we restrict the paths we allow access to, forbidding plugins from specifying or requesting entitlements to read or write specific protected directories.
I added this validation to EntitlementInitialization, as I wanted to fail fast and this is the earliest point where we have everything we need: the PathLookup to resolve relative paths, the policies (for plugins, server, and agents), and the Paths for the specific directories we want to protect.
Relates to ES-10918
* Revert "Release buffers in netty test (#126744)"
This reverts commit f9f3defe92.
* Revert "Add flow-control and remove auto-read in netty4 HTTP pipeline (#126441)"
This reverts commit c8805b85d2.
We document support for snapshot repositories using `ftp://` URLs but it
seems this functionality has not worked for many years because of
security-manager restrictions, although nobody noticed because it was
not covered by any tests. The migration to the Entitlements framework
means that this functionality now works again, so this commit adds tests
to make sure we do not break it again in future.
In order to support a future TransportRequest variant that accepts the
response type, TransportRequest needs to be an interface. This commit
adds AbstractTransportRequest as a concrete implementation and makes
TransportRequest a simple interface that joins together the parent
interfaces from TransportMessage.
Note that this was done entirely in IntelliJ using structural find and
replace.
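A self-contained sketch of the resulting shape; the parent interfaces here are stand-ins, not the real parents from `TransportMessage`:
```java
// Stand-in parent interfaces (the real ones come from TransportMessage).
interface WriteableStandIn {}
interface RefCountedStandIn {}

// TransportRequest is now just an interface joining its parents...
interface TransportRequestSketch extends WriteableStandIn, RefCountedStandIn {}

// ...while the previously concrete behaviour lives in an abstract base class
// that existing request classes extend instead.
abstract class AbstractTransportRequestSketch implements TransportRequestSketch {
    // fields and methods formerly defined directly on TransportRequest
}
```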
Catching Exception instead of AmazonClientException in copyBlob and
executeMultipart led to failures in S3RepositoryAnalysisRestIT due to
the injected exceptions getting wrapped in IOExceptions that prevented
them from being caught and handled in BlobAnalyzeAction.
Closes #126576
This fixes an issue where, if a Painless getter method's return type
didn't match the Java getter method's return type, we added an
extraneous cast.
Closes: #70682
If updating the `index.time_series.end_time` fails for one data stream,
then UpdateTimeSeriesRangeService should continue updating this setting for other data streams.
The following error was observed in the wild:
```
[2025-04-07T08:50:39,698][WARN ][o.e.d.UpdateTimeSeriesRangeService] [node-01] failed to update tsdb data stream end times
java.lang.IllegalArgumentException: [index.time_series.end_time] requires [index.mode=time_series]
at org.elasticsearch.index.IndexSettings$1.validate(IndexSettings.java:636) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.index.IndexSettings$1.validate(IndexSettings.java:619) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.common.settings.Setting.get(Setting.java:563) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.common.settings.Setting.get(Setting.java:535) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.datastreams.UpdateTimeSeriesRangeService.updateTimeSeriesTemporalRange(UpdateTimeSeriesRangeService.java:111) ~[?:?]
at org.elasticsearch.datastreams.UpdateTimeSeriesRangeService$UpdateTimeSeriesExecutor.execute(UpdateTimeSeriesRangeService.java:210) ~[?:?]
at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.3.jar:?]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.3.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
at java.lang.Thread.run(Thread.java:1575) ~[?:?]
```
This resulted in the `index.time_series.end_time` index setting not being updated for any data stream, which caused data loss: metrics couldn't be indexed, because no suitable backing index could be resolved:
```
the document timestamp [2025-03-26T15:26:10.000Z] is outside of ranges of currently writable indices [[2025-01-31T07:22:43.000Z,2025-02-15T07:24:06.000Z][2025-02-15T07:24:06.000Z,2025-03-02T07:34:07.000Z][2025-03-02T07:34:07.000Z,2025-03-10T12:45:37.000Z][2025-03-10T12:45:37.000Z,2025-03-10T14:30:37.000Z][2025-03-10T14:30:37.000Z,2025-03-25T12:50:40.000Z][2025-03-25T12:50:40.000Z,2025-03-25T14:35:40.000Z
```
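A minimal sketch of the intended continue-on-failure behaviour, with illustrative names (the real service operates on cluster state updates):
```java
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

class EndTimeUpdaterSketch {
    private static final Logger logger = Logger.getLogger("UpdateTimeSeriesRangeService");

    interface DataStream { String name(); }

    void updateAll(List<DataStream> timeSeriesDataStreams) {
        for (DataStream dataStream : timeSeriesDataStreams) {
            try {
                updateEndTime(dataStream); // may throw IllegalArgumentException from setting validation
            } catch (Exception e) {
                // log and continue: one bad data stream must not block updates for the others
                logger.log(Level.WARNING,
                    "failed to update [index.time_series.end_time] for [" + dataStream.name() + "]", e);
            }
        }
    }

    void updateEndTime(DataStream dataStream) { /* compute and apply the new end_time */ }
}
```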
I noticed that we tend to create the flag instance and call this method
everywhere. This doesn't compile the same way as a real boolean constant
unless you're running with `-XX:+TrustFinalNonStaticFields`.
For most of the code spots changed here that's irrelevant, but the usage
in the mapper parsing code is somewhat hot and potentially gets a small
speedup from this.
Also, the `FeatureFlag` indirection simply wastes some bytes of the
static footprint of ES compared to a plain boolean.
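A sketch of the pattern this moves towards, assuming `FeatureFlag`'s single-argument constructor; the flag name here is an example:
```java
import org.elasticsearch.common.util.FeatureFlag;

final class FailureStoreFlag {
    // A static final primitive is constant-folded by the JIT; an instance
    // field would need -XX:+TrustFinalNonStaticFields for the same treatment.
    static final boolean FAILURE_STORE_ENABLED = new FeatureFlag("failure_store").isEnabled();

    private FailureStoreFlag() {}
}
```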
- Translate a 404 during a multipart copy into a `FileNotFoundException` (see the sketch after this list)
- Use multiple threads in `S3HttpHandler` to avoid `CopyObject`/`PutObject` deadlock
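A hedged sketch of the first point, assuming the AWS SDK v2 `S3Exception`; the surrounding method is illustrative:
```java
import java.io.FileNotFoundException;
import java.io.IOException;
import software.amazon.awssdk.services.s3.model.S3Exception;

class CopyErrorTranslationSketch {
    void copyBlob() throws IOException {
        try {
            // perform the multipart copy via the S3 client here
        } catch (S3Exception e) {
            if (e.statusCode() == 404) {
                // the source blob disappeared mid-copy: surface it as a missing file
                throw new FileNotFoundException("source blob not found during multipart copy");
            }
            throw new IOException(e);
        }
    }
}
```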
Closes #126576
In this PR we add support for the failure store for system data streams.
Specifically:
- We pass the system descriptor so the failure index can be created based on that.
- We extend the tests to ensure it works
- We remove a guard we had, though I wasn't able to test this because the guard only gets triggered if the data stream is created right after a failure in the ingest pipeline, and I haven't seen how to reproduce that in a test (yet).
- We extend the system data stream migration to ensure this is also working.
In Mustache, this change returns null values, which convert to empty strings,
instead of throwing an exception when users have a template with
something like `a.8` where the index 8 is out of bounds. This matches the
behavior for non-existent keys like `a.d`.
Closes#55200
* BlobContainer: add copyBlob method
If a container implements copyBlob, then the copy is
performed by the store, without client-side IO. If the store
does not provide a copy operation then the default implementation
throws UnsupportedOperationException.
This change provides implementations for the FS and S3 blob containers.
More will follow.
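A minimal sketch of the interface method described above; the exact parameters of the real `copyBlob` may differ:
```java
import java.io.IOException;

// Sketch: server-side copy with an opt-out default. The parameter list is illustrative.
interface BlobContainerSketch {
    /**
     * Copies a blob within the same blob store, without client-side IO.
     */
    default void copyBlob(BlobContainerSketch source, String sourceBlobName, String destBlobName)
            throws IOException {
        // stores that cannot perform a server-side copy simply don't override this
        throw new UnsupportedOperationException("copyBlob is not supported by this container");
    }
}
```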
Co-authored-by: elasticsearchmachine <infra-root+elasticsearchmachine@elastic.co>
Co-authored-by: David Turner <david.turner@elastic.co>
This commit adds support for system data streams reindexing. The system data stream migration extends the existing system indices migration task and uses the data stream reindex API.
The system index migration task starts a reindex data stream task and tracks its status every second. Only one system index or system data stream is migrated at a time. If a data stream migration fails, the entire system index migration task will also fail.
Port of #123926
The mostly-optional parameters to `createBlobContainer` are getting
rather numerous in this test harness which makes the tests hard to read.
This commit introduces a builder to help name the provided parameters
and skip the omitted ones.
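An illustrative sketch of the builder idea; the parameter names are examples, not the actual harness API:
```java
class BlobContainerBuilderSketch {
    private int maxRetries = 3;          // default when not named
    private String containerPath = null; // random path when omitted

    BlobContainerBuilderSketch maxRetries(int maxRetries) {
        this.maxRetries = maxRetries;
        return this;
    }

    BlobContainerBuilderSketch containerPath(String containerPath) {
        this.containerPath = containerPath;
        return this;
    }

    Object build() {
        // construct the container from the named parameters, defaulting the rest;
        // tests then read as: new BlobContainerBuilderSketch().maxRetries(5).build()
        return new Object();
    }
}
```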
When creating an empty lifecycle we used to use the default
constructor. This change is not just for efficiency; it will also allow us to
separate the default data and failures lifecycles in the future.
Transport actions have associated request and response classes. However,
the base type restrictions are not necessary to duplicate when creating
a map of transport actions. Relatedly, the ActionHandler class doesn't
actually need strongly typed action types and classes, since those types
are lost when the handlers are shoved into the node client map. This
commit removes these type restrictions and generic parameters.
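An illustrative sketch of the simplification: handlers are stored without request/response generic bounds, since those types are erased in the map anyway (names are examples):
```java
import java.util.HashMap;
import java.util.Map;

class ActionRegistrySketch {
    // no <Request extends ..., Response extends ...> bounds needed here
    record ActionHandlerSketch(String actionName, Class<?> transportAction) {}

    private final Map<String, ActionHandlerSketch> actions = new HashMap<>();

    void register(ActionHandlerSketch handler) {
        actions.put(handler.actionName(), handler);
    }
}
```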
Some `AbstractBlobContainerRetriesTestCase#createBlobContainer`
implementations choose a path for the container randomly, but we have a
need for a test which re-creates the same container against a different
`S3Service` and `BlobStore` and must therefore specify the same path
each time. This commit exposes a parameter that lets callers specify a
container path.
This test had a copy-paste mistake. When the cluster has only one data
node, the replicas cannot be assigned, so we end up with a force merge
error. In the case of the failure store this was not asserted correctly.
Furthermore, the test only checked for the existence of an error;
it did not ensure that the current error is not the rollover error
that should have been resolved. We make this test a bit more explicit.
Fixes: https://github.com/elastic/elasticsearch/issues/126252
* [main] Move system indices migration to migrate plugin
It seems the best way to fix #122949 is to use the existing data stream reindex API. However, this API is located in the migrate x-pack plugin. This commit moves the system indices migration logic (REST handlers, transport actions, and task) to the migrate plugin.
Port of #123551
* [CI] Auto commit changes from spotless
* Fix compilation
* Fix tests
* Fix test
---------
Co-authored-by: elasticsearchmachine <infra-root+elasticsearchmachine@elastic.co>
Rather than hard-coding a region name we should always auto-generate it
randomly during test execution. This commit replaces the remaining fixed
`String` arguments with a `Supplier<String>` argument to enable this.
Today we have some special-case test classes in `:modules:repository-s3`
within the same source root as the regular tests, with some trickery to
define separate Gradle tasks to run them with their special-case
configs. This commit simplifies the build by just moving each of these
classes into its own Gradle project.
The only reason these tests don't work in a FIPS JVM is that they use a
secret key that is unacceptably short. This commit replaces the relevant uses of
`randomIdentifier` with `randomSecretKey` so they work whether in FIPS
mode or not.
**Issue** The data stream lifecycle does not correctly record rollover
errors for the failure store.
**Observed behaviour** When the data stream lifecycle encounters a rollover
error, it records it unless it sees that the current write index of the
data stream doesn't match the source index of the request. However, the
write index check uses the write backing index rather than the failure
store's write index, so the failure gets ignored.
**Desired behaviour** When the data stream lifecycle encounters a rollover
error, it will check the relevant write index before it determines whether
the error should be recorded.
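A sketch of the corrected check, with illustrative names:
```java
class RolloverErrorCheckSketch {
    interface DataStreamSketch {
        String backingWriteIndex();
        String failureStoreWriteIndex();
    }

    static boolean shouldRecordRolloverError(DataStreamSketch ds, String rolloverSource, boolean failureStore) {
        // pick the failure store's write index when the rollover targeted the
        // failure store; previously the backing write index was always used
        String writeIndex = failureStore ? ds.failureStoreWriteIndex() : ds.backingWriteIndex();
        // only record the error if it still concerns the current write index
        return writeIndex.equals(rolloverSource);
    }
}
```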
`S3BlobContainerRetriesTests` uses `MD5DigestCalculatingInputStream`
from the AWS v1 SDK to compute an MD5 checksum, but this feature is not
available in the v2 SDK. With this commit we remove this dependency and
compute the MD5 checksums directly instead.
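Computing the digest directly with the JDK is straightforward; this standard-library sketch shows one way to do it:
```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class Md5Sketch {
    static byte[] md5(InputStream in) throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("MD5");
        try (DigestInputStream digesting = new DigestInputStream(in, digest)) {
            // drain the stream; the digest updates as bytes pass through
            digesting.transferTo(OutputStream.nullOutputStream());
        }
        return digest.digest();
    }
}
```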
In this PR we introduce the data stream API in the `es-rest-api` behind
a feature flag. This enables us to use `yamlRestTests`
instead of `javaRestTests`.
- Fixed bug where 416 was being erroneously returned for zero-length blobs even with no Range header
- Fixed bug where partial upload wouldn't be completed if the last PUT included no data
- Return 206 (partial content) status when a Range header is specified
- Return an ETag on object get; `BlobReadChannel` uses this to ensure we fail when the blob is updated between successive chunks being fetched
- The 416 on zero-length blobs was one of (?) the causes of #125668
The `METHOD /path/components?and=query` string representation of a
request is becoming increasingly difficult to parse, with slight
variations in parsing between the implementation in `S3HttpHandler` and
the various other implementations. This commit gets rid of the
string-concatenate-and-split behaviour in favour of a proper object that
has predicates for testing all the different kinds of request that might
be made against S3.
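An illustrative sketch of the idea (the predicates and query matching are simplified examples, not the real handler's logic):
```java
// Parse the request once into a value object and expose predicates, instead
// of string-concatenating "METHOD /path?query" and splitting it back apart.
record S3RequestSketch(String method, String path, String query) {
    boolean isListObjectsRequest() {
        return "GET".equals(method) && query != null && query.contains("list-type=2");
    }

    boolean isInitiateMultipartUploadRequest() {
        return "POST".equals(method) && query != null && query.contains("uploads");
    }

    boolean isGetObjectRequest() {
        return "GET".equals(method) && (query == null || query.isEmpty());
    }
}
```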
This removes some leftover code from using StringBuilder
as part of String concatenation. Since we no longer support JDK 8,
this code can be safely removed.