Earlier versions of MinIO had a bug that could cause repository analysis
failures. This commit upgrades the MinIO test container version to pick
up the bug fix, and reverts the workaround implemented in #127166.
Relates https://github.com/minio/minio/issues/21189
Our APMTracer doesn't like nulls - this is sensible, as APM in general does not allow nulls (it only allows a precise set of types).
This PR replaces null attribute values with a sentinel "". It also makes a small change to APMTracer to give a better error message if null values do appear in attributes.
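For illustration, a minimal sketch of the substitution, assuming a hypothetical helper that sanitizes an attribute map before it reaches the tracer (the class and method names are made up, not the actual APMTracer API):

```java
import java.util.HashMap;
import java.util.Map;

final class AttributeSanitizer {

    /**
     * Replaces null attribute values with a sentinel empty string so the
     * resulting map only contains values the APM tracer accepts.
     */
    static Map<String, Object> withNullsReplaced(Map<String, Object> attributes) {
        Map<String, Object> sanitized = new HashMap<>(attributes.size());
        attributes.forEach((key, value) -> sanitized.put(key, value == null ? "" : value));
        return sanitized;
    }
}
```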
The close-listeners are never completed exceptionally today so they do
not need the exception mangling of a `ListenableFuture`. The connect-
and remove-listeners sometimes see an exception if the connection
attempt fails, but they also do not need any exception-mangling.
This commit removes the exception-mangling by replacing these
`ListenableFuture` instances with `SubscribableListener` ones.
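As a rough sketch of the shape of the change, assuming the usual `SubscribableListener` methods (`addListener`, `onResponse`, `onFailure`); the surrounding class and names are illustrative:

```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.SubscribableListener;

class ConnectionAttempt {

    // Completed once: successfully when connected, exceptionally if the attempt fails.
    private final SubscribableListener<Void> connectListener = new SubscribableListener<>();

    void whenConnected(ActionListener<Void> listener) {
        // Listeners added after completion are notified immediately.
        connectListener.addListener(listener);
    }

    void onConnected() {
        connectListener.onResponse(null);
    }

    void onConnectFailed(Exception e) {
        // Subscribers receive the original exception with no extra wrapping.
        connectListener.onFailure(e);
    }
}
```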
Today we have a `FlowControlHandler` near the top of the Netty HTTP
pipeline in order to hold back a request body while validating the
request headers. This is inefficient since once we've validated the
headers we can handle the body chunks as fast as they arrive, needing no
more flow control. Moreover today we always fork the validation
completion back onto the event loop, forcing any available chunks to be
buffered in the `FlowControlHandler`.
This commit moves the flow-control mechanism into
`Netty4HttpHeaderValidator` itself so that we can bypass it on validated
message bodies. Moreover, in the (common) case that validation completes
immediately, e.g. because the credentials are already cached, this commit
skips the flow-control-related buffering entirely.
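As an illustration of the general pattern (not the actual `Netty4HttpHeaderValidator` code), a handler can buffer inbound chunks only while validation is pending and pass everything straight through once it has succeeded:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: hold back body chunks only while header validation is in flight;
// once validation succeeds, flush the buffer and stop buffering entirely.
class BufferingWhileValidatingHandler extends ChannelInboundHandlerAdapter {

    private final Queue<Object> pending = new ArrayDeque<>();
    private boolean validated;

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (validated) {
            ctx.fireChannelRead(msg); // validated: no buffering, no flow control
        } else {
            pending.add(msg); // hold back chunks until the headers are validated
        }
    }

    // Called (on the event loop) when header validation completes successfully.
    void onValidationSuccess(ChannelHandlerContext ctx) {
        validated = true;
        Object msg;
        while ((msg = pending.poll()) != null) {
            ctx.fireChannelRead(msg);
        }
    }
}
```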
Previously, exceptions encountered on a Netty channel were caught and logged at
some level, but not passed to the TcpChannel or Transport.Connection close
listeners, which limited observability. This change reports and propagates these
exceptions: TcpChannel.onException and NodeChannels.closeAndFail now pass the
exception on, and their close listeners receive it. Some test infrastructure
(FakeTcpChannel) and assertions in close-listener onFailure methods have been
updated accordingly.
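For example, a caller can now observe the failure directly on the channel's close listener; a sketch assuming the existing `TcpChannel#addCloseListener` API, with purely illustrative logging:

```java
import org.apache.logging.log4j.Logger;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.transport.TcpChannel;

final class ChannelCloseLogging {

    static void logCloseReason(TcpChannel channel, Logger logger) {
        channel.addCloseListener(ActionListener.wrap(
            ignored -> logger.debug("channel [{}] closed cleanly", channel),
            e -> logger.warn("channel [" + channel + "] closed with exception", e)
        ));
    }
}
```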
* Add multi-project support for more stats APIs
This affects the following APIs:
- `GET _nodes/stats`:
- For `indices`, it now prefixes the index name with the project ID (for non-default projects). Previously, it didn't tell you which project an index was in, and it failed if two projects had the same index name.
- For `ingest`, it now gets the pipeline and processor stats for all projects, and prefixes the pipeline ID with the project ID. Previously, it only got them for the default project.
- `GET /_cluster/stats`:
- For `ingest`, it now aggregates the pipeline and processor stats for all projects. Previously, it only got them for the default project.
- `GET /_info`:
- For `ingest`, same as for `GET /_nodes/stats`.
This is done by making `IndicesService.stats()` and `IngestService.stats()` include project IDs in the `NodeIndicesStats` and `IngestStats` objects they return, and making those stats objects incorporate the project IDs when converting to XContent.
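For example, the project-qualified name written to XContent might be built along these lines (a sketch with hypothetical names and separator, not the exact production code):

```java
final class ProjectQualifiedNames {

    /**
     * Prefixes an index or pipeline name with its project ID so that stats from
     * different projects that reuse the same index name stay distinguishable.
     * The "/" separator is purely illustrative.
     */
    static String qualify(String projectId, String name, boolean multiProject) {
        return multiProject ? projectId + "/" + name : name;
    }
}
```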
The transitive callers of these two methods are rather extensive (including all callers of `NodeService.stats()`, all callers of `TransportNodesStatsAction`, and so on). To ensure the change is safe, all callers were reviewed, and they fall into the following cases:
- The behaviour change is one of the desired enhancements described above.
- There is no behaviour change because it was getting node stats but neither `indices` nor `ingest` stats were requested.
- There is no behaviour change because it was getting `indices` and/or `ingest` stats but only using aggregate values.
- In `MachineLearningUsageTransportAction` and `TransportGetTrainedModelsStatsAction`, the returned `IngestStats` will now include stats from all projects instead of just the default, but these actions have been changed to filter out the non-default project stats, so this change is a no-op there. (These actions are not MP-ready yet.)
- `MonitoringService` will be affected, but this is the legacy monitoring module which is not in use anywhere that MP is going to be enabled. (If anything, the behaviour is probably improved by this change, as it will now include project IDs, rather than producing ambiguous unqualified results and failing in the case of duplicates.)
* Update test/external-modules/multi-project/build.gradle
Change suggested by Niels.
Co-authored-by: Niels Bauman <33722607+nielsbauman@users.noreply.github.com>
* Respond to review comments
* fix merge weirdness
* [CI] Auto commit changes from spotless
* Fix test compilation following upstream change to base class
* Update x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/datatiers/DataTierUsageFixtures.java
Co-authored-by: Niels Bauman <33722607+nielsbauman@users.noreply.github.com>
* Make projects-by-index map nullable and omit in single-project; always include project prefix in XContent in multi-project, even if default; also incorporate one other review comment
* Add a TODO
* update IT to reflect changed behaviour
* Switch to using XContent.Params to indicate whether it is multi-project or not
* Refactor NodesStatsMultiProjectIT to common up repeated assertions
* Defer use of ProjectIdResolver in REST handlers to keep tests happy
* Include index UUID in "unknown project" case
* Make the index-to-project map empty rather than null in the BWC deserialization case.
This works out fine, for the reasons given in the comment. As it happens, I'd already forgotten to do the null check in the one place it's actively used.
* remove a TODO that is done, and add a comment
* fix typo
* Get REST YAML tests working with project ID prefix TODO finish this
* As a drive-by, fix and un-suppress one of the health REST tests
* [CI] Auto commit changes from spotless
* TODO ugh
* Experiment with different stashing behaviour
* [CI] Auto commit changes from spotless
* Try a more sensible stash behaviour for assertions
* clarify comment
* Make checkstyle happy
* Make the way `Assertion` works more consistent, and simplify implementation
* [CI] Auto commit changes from spotless
* In RestNodesStatsAction, set the XContent params to channel.request(), which is the value it would have had before this change
---------
Co-authored-by: Niels Bauman <33722607+nielsbauman@users.noreply.github.com>
Co-authored-by: elasticsearchmachine <infra-root+elasticsearchmachine@elastic.co>
In order to remove ActionType, ActionRequest will become strongly typed,
referring to the ActionResponse type. As a precursor to that, this
commit adds a LegacyActionRequest which all existing ActionRequest
implementations now inherit from. This will allow adding the
ActionResponse type to ActionRequest in a future commit without
modifying every implementation at once.
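A sketch of what an existing implementation looks like after the mechanical change; the request class is illustrative and the import path for `LegacyActionRequest` is assumed:

```java
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.LegacyActionRequest;

// Previously: public class ExampleRequest extends ActionRequest { ... }
public class ExampleRequest extends LegacyActionRequest {

    @Override
    public ActionRequestValidationException validate() {
        return null; // nothing to validate in this example
    }
}
```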
On Serverless, the `repository_azure` thread pool is
shared between snapshots and translogs/segments upload
logic. Because snapshots can be rate-limited when
executing in the repository_azure thread pool, we want
to leave enough room for the other upload tasks to run.
Relates ES-11391
Today Elasticsearch will record the purpose for each request to S3 using
a custom query parameter[^1]. This isn't believed to be necessary
outside of the ECH/ECE/ECK/... managed services, and it adds rather a
lot to the request logs, so with this commit we make the feature
optional and disabled by default.
[^1]:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/LogFormat.html#LogFormatCustom
Painless does not support accessing nested docs (except through
_source). Yet the Painless execute API indexes any nested docs that are
found when parsing the sample document. This commit changes the RAM
indexing to only index the root document, ignoring any nested docs.
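Conceptually the change looks something like the sketch below, which relies on `ParsedDocument#rootDoc()`; the surrounding method is illustrative, not the actual painless execute code:

```java
import java.io.IOException;

import org.apache.lucene.index.IndexWriter;
import org.elasticsearch.index.mapper.ParsedDocument;

final class SampleDocIndexer {

    static void indexRootOnly(IndexWriter writer, ParsedDocument parsed) throws IOException {
        // parsed.docs() would include the nested (child) documents as well;
        // index only the root document so nested docs are ignored.
        writer.addDocument(parsed.rootDoc());
    }
}
```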
Fixes #41004
This entitlement is required, but only if validating the metadata
endpoint against `https://login.microsoft.com/` which isn't something we
can do in a test. This is kind of an SDK bug: we should be using an
existing event loop rather than spawning threads randomly like this.
This changes the default value of both the
`data_streams.auto_sharding.increase_shards.load_metric` and
`data_streams.auto_sharding.decrease_shards.load_metric` cluster
settings from `PEAK` to `ALL_TIME`. This setting has been applied via
config for several weeks now.
The approach taken to updating the tests was to swap the values given for the all-time and peak loads in all the stats objects provided as input to the tests, and to swap the enum values in the couple of places they appear.
It's possible for another component to request an S3 client after the
node has started to shut down, and today the `S3Service` will dutifully
attempt to create a fresh client instance even though it is closed. Such
clients will then leak, resulting in test failures.
With this commit we refuse to create new S3 clients once the service has
started to shut down.
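The guard is conceptually as simple as refusing creation after close; a generic sketch (not the actual `S3Service` code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a client cache that refuses to build new clients once closed,
// so late callers fail fast instead of leaking a freshly created client.
final class GuardedClientCache<T> {

    private final Map<String, T> clients = new ConcurrentHashMap<>();
    private volatile boolean closed;

    T client(String name, Function<String, T> factory) {
        if (closed) {
            throw new IllegalStateException("service is closed, cannot create client [" + name + "]");
        }
        return clients.computeIfAbsent(name, factory);
    }

    void close() {
        closed = true;
        clients.clear(); // real code would also release each cached client here
    }
}
```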
Fix a bug in the `significant_terms` agg where the "subsetSize" array is
too small because we never collect the ordinal for the agg "above" it.
This mostly hits when you do a `range` agg containing a
`significant_terms` AND you only collect the first few ranges. `range`
isn't particularly popular, but `date_histogram` is super popular and it
rewrites into a `range` pretty commonly - so that's likely what's really
hitting this - a `date_histogram` followed by a `significant_text` where
the matches are all early in the date range held by the shard.
Entitlements do a stack walk to find the calling class. When method
references are used in a lambda, the frame ends up hidden in the stack
walk. When a method reference is passed to
AccessController.doPrivileged, the call looks like it comes from the JDK
itself, so it is trivially allowed. This commit adds hidden frames to the
stack walk so that the lambda frame created for the method reference is
included. Several internal packages then need to be filtered out of
the stack.
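The JDK's `StackWalker` supports this directly; a minimal sketch that retains hidden frames and filters internal packages (the package prefixes are illustrative, not the exact list used by entitlements):

```java
import java.util.Optional;
import java.util.Set;

final class CallerFinder {

    private static final StackWalker WALKER = StackWalker.getInstance(
        Set.of(StackWalker.Option.RETAIN_CLASS_REFERENCE, StackWalker.Option.SHOW_HIDDEN_FRAMES)
    );

    /** Finds the first frame that does not belong to a known-internal package. */
    static Optional<Class<?>> findCaller() {
        return WALKER.walk(frames -> frames
            .map(StackWalker.StackFrame::getDeclaringClass)
            .filter(clazz -> isInternal(clazz.getName()) == false)
            .findFirst());
    }

    private static boolean isInternal(String className) {
        // Hidden lambda/method-reference frames live in packages like these (illustrative).
        return className.startsWith("java.lang.invoke.")
            || className.startsWith("jdk.internal.")
            || className.startsWith("java.security.");
    }
}
```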
Re-applying #126441 (cf. #127259) with:
- the extra `FlowControlHandler` needed to ensure one-chunk-per-read
semantics (also present in #127259).
- no extra `read()` after exhausting a `Netty4HttpRequestBodyStream`
(the bug behind #127391 and #127391).
See #127111 for related tests.
During reindexing we retrieve the index mode from the template settings. However, we do not fully resolve the settings as we do when validating a template or when creating a data stream. This results in the error reported in #125607 being thrown.
I do not see a reason not to fix this as suggested in #125607 (comment).
Fixes: #125607
The `s3.client.CLIENT_NAME.protocol` setting became unused in #126843 as
it is inapplicable in the v2 SDK. However, the v2 SDK requires the
`s3.client.CLIENT_NAME.endpoint` setting to be a URL that includes a
scheme, so in #127489 we prepend a `https://` to the endpoint if needed.
This commit generalizes this slightly so that we prepend `http://` if
the endpoint has no scheme and the `.protocol` setting is set to `http`.
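The normalization is roughly the following (a sketch, not the exact settings code):

```java
final class EndpointSchemes {

    /**
     * The v2 SDK requires the endpoint to carry a scheme. If none is present,
     * prepend one derived from the (otherwise unused) protocol setting.
     */
    static String withScheme(String endpoint, String protocolSetting) {
        if (endpoint.contains("://")) {
            return endpoint; // already has a scheme
        }
        return ("http".equals(protocolSetting) ? "http://" : "https://") + endpoint;
    }
}
```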
* Avoid time-based expiry of channel stats or else `testHttpClientStats`
will fail if running multiple iterations for more than 5m.
* Assert all bytes received in `testHttpClientStats`.
* Initial testHealthIndicator that fails
* Refactor: FileSettingsHealthInfo record
* Propagate file settings health indicator to health node
* ensureStableCluster
* Try to induce a failure from returning node-local info
* Remove redundant node from client() call
* Use local node ID in UpdateHealthInfoCacheAction.Request
* Move logger to top
* Test node-local health on master and health nodes
* Fix calculate to use the given info
* mutateFileSettingsHealthInfo
* Test status from local current info
* FileSettingsHealthTracker
* Spruce up HealthInfoTests
* spotless
* randomNonNegativeLong
* Rename variable
Co-authored-by: Niels Bauman <33722607+nielsbauman@users.noreply.github.com>
* Address Niels' comments
* Test one- and two-node clusters
* [CI] Auto commit changes from spotless
* Ensure there's a master node
Co-authored-by: Niels Bauman <33722607+nielsbauman@users.noreply.github.com>
* setBootstrapMasterNodeIndex
---------
Co-authored-by: Niels Bauman <33722607+nielsbauman@users.noreply.github.com>
Co-authored-by: elasticsearchmachine <infra-root+elasticsearchmachine@elastic.co>
This PR adds the API capability needed to ensure that the API tests that
check for the default failures retention are only executed when the
version supports it. This was missed in the original PR
(https://github.com/elastic/elasticsearch/pull/127573).
We introduce a new global retention setting `data_streams.lifecycle.retention.failures_default` which is used by the data stream lifecycle management as the default retention when the failure store lifecycle of the data stream does not specify one.
Elasticsearch comes with the default value of 30 days. The value can be changed via the settings API to any time value higher than 10 seconds or -1 to indicate no default retention should apply.
The failures default retention can be set to a value higher than the max retention, but then the max retention will be effective. The reason for this choice is to ensure that no deployments will break if the user has already set a max retention of less than 30 days.
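The interaction with the max retention amounts to a simple clamp; a sketch with hypothetical names, omitting the `-1` (no retention) case:

```java
import org.elasticsearch.core.TimeValue;

final class FailuresRetentionResolver {

    /**
     * The configured (or default) failures retention may exceed the max retention,
     * in which case the max retention wins.
     */
    static TimeValue effectiveRetention(TimeValue configuredOrDefault, TimeValue maxRetention) {
        if (maxRetention == null) {
            return configuredOrDefault;
        }
        if (configuredOrDefault == null || configuredOrDefault.millis() > maxRetention.millis()) {
            return maxRetention;
        }
        return configuredOrDefault;
    }
}
```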
This PR adds to the indexing write load the time taken to flush write indexing buffers on the indexing threads (the flush is done there to push back on indexing).
This changes the semantics of InternalIndexingStats#recentIndexMetric and InternalIndexingStats#peakIndexMetric to more accurately account for load on the indexing thread. Addresses ES-11356.
This change introduces Settings to ProjectMetadata and adds project scope support for Setting.
For now, project-scoped settings are independent from cluster settings and do not fall back to cluster-level settings.
Also, setting update consumers do not yet work correctly for project-scoped settings. These issues will be addressed separately in future PRs.
The failure store is a set of data stream indices that are used to store certain types of ingestion failures. Until now they shared the configuration of the backing indices. We understand that the two data sets have different lifecycle needs.
We believe that typically the failures will need to be retained for much less time than the data. Considering this, we believe the lifecycle needs of the failures are also more limited and fit better with the simplicity of the data stream lifecycle feature.
This allows the user to only set the desired retention, and we will perform the rollover and other maintenance tasks without the user having to think about them. Furthermore, having only one lifecycle management feature allows us to ensure that this data is managed by default.
This PR introduces the following:
Configuration
We extend the failure store configuration to allow lifecycle configuration too; this reflects only the user's configuration, as shown below:
PUT _data_stream/*/options
{
  "failure_store": {
    "lifecycle": {
      "data_retention": "5d"
    }
  }
}
GET _data_stream/*/options
{
  "data_streams": [
    {
      "name": "my-ds",
      "options": {
        "failure_store": {
          "lifecycle": {
            "data_retention": "5d"
          }
        }
      }
    }
  ]
}
To retrieve the effective configuration you need to use the GET data streams API, see #126668
Functionality
The data stream lifecycle (DLM) will manage the failure indices regardless of whether the failure store is enabled or not. This ensures that if the failure store gets disabled we will not have stagnant data.
The data stream options APIs reflect only the user's configuration.
The GET data stream API should be used to check the current state of the effective failure store configuration.
Telemetry
We extend the data stream failure store telemetry to also include the lifecycle telemetry.
{
  "data_streams": {
    "available": true,
    "enabled": true,
    "data_streams": 10,
    "indices_count": 50,
    "failure_store": {
      "explicitly_enabled_count": 1,
      "effectively_enabled_count": 15,
      "failure_indices_count": 30,
      "lifecycle": {
        "explicitly_enabled_count": 5,
        "effectively_enabled_count": 20,
        "data_retention": {
          "configured_data_streams": 5,
          "minimum_millis": X,
          "maximum_millis": Y,
          "average_millis": Z
        },
        "effective_retention": {
          "retained_data_streams": 20,
          "minimum_millis": X,
          "maximum_millis": Y,
          "average_millis": Z
        },
        "global_retention": {
          "max": {
            "defined": false
          },
          "default": {
            "defined": true, <------ this is the default value applicable for the failure store
            "millis": X
          }
        }
      }
    }
  }
}
Implementation details
We ensure that a partially reset failure store will still result in a valid failure store configuration.
We ensure that when a node communicates with a node on a previous version, it will not send an invalid failure store configuration such as `enabled: null`.
We replace usages of time sensitive
`DataStream#getDefaultBackingIndexName` with the retrieval of the name
via an API call. The problem with using the time sensitive method is
that we can have test failures around midnight.
Relates #123376
Catching `Exception` instead of `SdkException` in `copyBlob` and
`executeMultipart` led to failures in `S3RepositoryAnalysisRestIT` due
to the injected exceptions getting wrapped in `IOExceptions` that
prevented them from being caught and handled in `BlobAnalyzeAction`.
Repeat of #126731, regressed due to #126843.
Closes #127399
This method would default to starting a new node when the cluster was
empty. This is pretty trappy as `getClient()` (or things like
`getMaster()` that depend on `getClient()`) don't look at all like
something that would start a new node.
In any case, the intention of tests is much clearer when they explicitly
define a cluster configuration.
Today these tests assert that the requests received by the handler are
signed for region `us-east-1` when no region is specified, but in fact
when running in EC2 the SDK will pick up the actual region, which may be
different. This commit skips this region validation for now (it is
tested elsewhere).