The class `DataStreamLifecycle` currently captures the lifecycle
configuration that manages all data stream indices, but it will soon be
split into two variants: the data lifecycle and the failures lifecycle.
Some pre-work has already been done, but as we progress with our POC we
can see that it will be very useful for `DataStreamLifecycle` to be
"aware" of the target index component. This will allow us to correctly
apply global retention, or to throw an error when a downsampling
configuration is provided to a failures lifecycle.
In this PR, we perform a small refactoring to reduce the noise in
https://github.com/elastic/elasticsearch/pull/125658. Here we introduce
the following:
- A factory method that creates a data lifecycle; for now it is trivial, but it will become more useful soon (see the sketch after this list).
- We rename the "empty" builder to explicitly mention the index component it refers to.
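A rough sketch of what this could look like; the class shape, method, builder, and enum names below are hypothetical and only illustrate the direction, not the actual API:

```java
// Hypothetical sketch, not the actual Elasticsearch code: the lifecycle records
// which index component it targets, so follow-up work can apply global retention
// per component and reject data-only settings (such as downsampling) on a
// failures lifecycle.
public class DataStreamLifecycle {

    public enum TargetIndexComponent { DATA, FAILURES }

    private final TargetIndexComponent targetComponent;

    private DataStreamLifecycle(TargetIndexComponent targetComponent) {
        this.targetComponent = targetComponent;
    }

    // Trivial factory for the data variant; becomes more useful once the split lands.
    public static DataStreamLifecycle createDataLifecycle() {
        return new DataStreamLifecycle(TargetIndexComponent.DATA);
    }

    // The former "empty" builder, renamed so the targeted index component is explicit.
    public static Builder dataLifecycleBuilder() {
        return new Builder(TargetIndexComponent.DATA);
    }

    public static final class Builder {
        private final TargetIndexComponent targetComponent;

        private Builder(TargetIndexComponent targetComponent) {
            this.targetComponent = targetComponent;
        }

        public DataStreamLifecycle build() {
            return new DataStreamLifecycle(targetComponent);
        }
    }
}
```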
Recently we changed the implementation of
`testDataStreamLifecycleDownsampleRollingRestart` to use a temporary
state listener. We missed that the listener also had a timeout of its
own, which was considerably shorter than the `safeGet` timeout we were
configuring. In this PR we align these two timeouts.
Fixes: #123769
Transport actions have associated request and response classes. However,
there is no need to duplicate the base type restrictions when creating
a map of transport actions. Relatedly, the `ActionHandler` class doesn't
actually need strongly typed action type and action classes, since those
types are lost once the handlers are shoved into the node client map.
This commit removes these type restrictions and generic parameters.
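A simplified, hypothetical sketch of the resulting shape (only `ActionType` and `TransportAction` are existing classes; the rest is illustrative):

```java
import org.elasticsearch.action.ActionType;
import org.elasticsearch.action.support.TransportAction;

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: once handlers are collected into an untyped map, the
// request/response type parameters carry no information, so a handler can simply
// pair an action type with the transport action class, without generic bounds.
public final class ActionRegistrySketch {

    record ActionHandler(ActionType<?> action, Class<? extends TransportAction<?, ?>> transportAction) {}

    private final Map<ActionType<?>, ActionHandler> actions = new HashMap<>();

    void register(ActionHandler handler) {
        actions.put(handler.action(), handler);
    }
}
```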
This PR implements authorization logic for failure store access. It
builds on https://github.com/elastic/elasticsearch/pull/122715.
Access to the failure store is granted by two privileges:
`read_failure_store` and `manage_failure_store`. Either of these
privileges lets a user access a failure store via the `::failures`
selector, as well as access its backing failure indices.
`read_failure_store` grants read access (for example, to search documents
in a failure store), while `manage_failure_store` grants access to write
operations, such as rollover. Users with only `read` or `manage` on a
data stream do not get failure store access. Conversely, users with only
`read_failure_store` and `manage_failure_store` do not get access to
regular data in a data stream.
The PR implements this by making authorization logic selector-aware. It
involves two main changes:
1. Index permission groups now compare the selector under which an index resource is accessed to the selector associated with the group (see the sketch after this list).
2. The `AuthorizedIndices` interface likewise uses selectors to decide which indices to treat as authorized. This part of the change requires a sizable refactor and changes to the interface.
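A minimal, hypothetical sketch of the first change, with simplified pattern matching and names that do not come from the actual implementation:

```java
// Hypothetical sketch: an index permission group only authorizes a resource when
// the selector used to address it (data vs. failures) matches the selector the
// group was granted for. This is why `read` alone does not grant `::failures`
// access, and `read_failure_store` alone does not grant access to regular data.
public final class SelectorAwarePermissionSketch {

    enum Selector { DATA, FAILURES }

    record IndexPermissionGroup(String indexPattern, Selector grantedSelector) {
        boolean checkAccess(String index, Selector requestedSelector) {
            boolean nameMatches = indexPattern.equals("*") || indexPattern.equals(index); // simplified matching
            return nameMatches && grantedSelector == requestedSelector;
        }
    }
}
```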
The high-level behavior for selector-aware search is as follows:
For a user with `read_failure_store` over data stream `logs`:
- `POST /logs::failures/_search` returns the documents in the failure store.
- `POST /logs/_search` returns a 403.
- `POST /logs/_search?ignore_unavailable=true` and `POST /*/_search` return an empty result.
Similarly, for a user with `read` over data stream `logs`:
- `POST /logs::failures/_search` returns a 403.
- `POST /logs/_search` returns documents in the data stream.
- `POST /logs::failures/_search?ignore_unavailable=true` and `POST /*::failures/_search` return an empty result.
A user with both `read` and `read_failure_store` over data stream `logs`
gets access to both `POST /logs::failures/_search` and `POST
/logs/_search`.
The index privilege `all` automatically grants access to both the data
and the failure store, as well as to all hypothetical future selectors.
Resolves: ES-10873
Remove `-da:org.elasticsearch.index.mapper.DocumentMapper` and `-da:org.elasticsearch.index.mapper.MapperService` from mixed cluster/bwc cluster setups, given that #122606 has now been backported to the 8.18 branch.
This makes using `usesDefaultDistribution` in our test setup explicit by requiring a reason why it's needed.
This is helpful as part of revisiting the need for all those usages in our code base.
Index the test data set in multiple bulk requests, which ensures there are multiple segments and improves test coverage for `DownsampleShardIndexer`.
We don't catch more complex `DownsampleShardIndexer` issues if we end up with just one segment, so we need to make the `DownsampleShardIndexer` test suite a little bit more evil by ensuring we have more than one segment.
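A hedged sketch of the idea, assuming an `ESIntegTestCase`-style test where `client()` is available; `indexBatchOfDocs` and the batch count are hypothetical, while the refresh between bulk requests is what produces separate segments:

```java
// Hypothetical test sketch: split the data set into several bulk requests and
// refresh between them, so the source shard holds more than one segment before
// DownsampleShardIndexer runs.
int batches = 4;
for (int batch = 0; batch < batches; batch++) {
    indexBatchOfDocs(sourceIndex, batch);                          // hypothetical helper indexing one slice
    client().admin().indices().prepareRefresh(sourceIndex).get();  // each refresh writes buffered docs to a new segment
}
```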
Read dimension values once per tsid/bucket docid range instead of for each document being processed.
The dimension values within a bucket-interval docid range are always the same, so this avoids unnecessary reads.
The latency of downsampling the tsdb track index into a 1-hour-interval downsample index dropped by ~16% (running on my local machine).
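A hedged, simplified sketch of the caching described above; field and method names are hypothetical and not taken from `DownsampleShardIndexer`:

```java
import java.io.IOException;
import java.util.Map;

// Hypothetical sketch: dimension values are identical for every document inside a
// (tsid, bucket) docid range, so read them once when the range starts and reuse
// them for each document, instead of reading them per document.
final class DimensionCachingSketch {

    private Map<String, Object> cachedDimensions = Map.of();

    // Called once when a new tsid/bucket docid range starts.
    void startRange(int firstDocId) throws IOException {
        cachedDimensions = readDimensions(firstDocId); // single read per range
    }

    // Called for every document in the range; no dimension reads here.
    void collect(int docId) {
        emitDownsampledDoc(docId, cachedDimensions);
    }

    private Map<String, Object> readDimensions(int docId) throws IOException {
        return Map.of(); // placeholder for the actual doc-values lookup
    }

    private void emitDownsampledDoc(int docId, Map<String, Object> dimensions) {
        // placeholder for building the downsampled document
    }
}
```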
There is no point in having `GroupShardsIterator`; it's mostly an
unnecessary layer of indirection, as it has no state and holds only a
single field. Its only value could be seen in hiding the ability to
mutate the list it wraps, but that hardly justifies the overhead on the
search path and the extra code complexity. Moreover, the list it
references is not copied or made immutable in any way, so the value of
that hiding is limited as well.
Some areas of the code call this field type
`AggregateDoubleMetric` and others `AggregateMetricDouble`, but the docs
use `aggregate_metric_double`, so for consistency this commit refactors
the former into the latter.
* Exhaustive testParseFractionalNumber
* Refactor: encapsulate ByteSizeUnit constructor
* Refactor: store size in bytes
* Support up to 2 decimals in parsed ByteSizeValue
* Fix test for rounding up with no warnings
* ByteSizeUnit transport changes
* Update docs/changelog/120142.yaml
* Changelog details and impact
* Fix change log breaking.area
* Address PR comments
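As a hedged illustration of the two-decimal parsing listed above (using the long-standing `ByteSizeValue.parseBytesSizeValue` entry point; the exact limits and warning behavior are whatever the PR defines):

```java
// With up to two decimals supported, a fractional value such as "1.25gb" can be
// parsed and stored exactly in bytes rather than being rounded.
ByteSizeValue parsed = ByteSizeValue.parseBytesSizeValue("1.25gb", "example.setting");
assert parsed.getBytes() == 1_342_177_280L; // 1.25 * 1024^3
```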
This removes a redundant thread creation when triggering a rolling
restart, as the method is already async, and drops the cluster health
check, as that might hit a node that's being shut down (the master node
in particular).
This updates the Gradle wrapper to 8.12.
We addressed deprecation warnings caused by the update; this includes:
- Fix change in TestOutputEvent api
- Fix deprecation in groovy syntax
- Use latest ospackage plugin containing our fix
- Remove project usages at execution time
- Fix deprecated project references in repository-old-versions