Merge main into multi-project

Tim Vernum 2025-02-19 16:40:34 +11:00
commit 838d8389de
144 changed files with 1643 additions and 2426 deletions

View file

@ -37,7 +37,7 @@ for BRANCH in "${BRANCHES[@]}"; do
if [[ "$SHOULD_TRIGGER" == "true" ]]; then
if [[ "$BRANCH" == "9.0" ]]; then
export VERSION_QUALIFIER="beta1"
export VERSION_QUALIFIER="rc1"
fi
echo "Triggering DRA staging workflow for $BRANCH"
cat << EOF | buildkite-agent pipeline upload

View file

@ -8,7 +8,7 @@ source .buildkite/scripts/branches.sh
for BRANCH in "${BRANCHES[@]}"; do
if [[ "$BRANCH" == "9.0" ]]; then
export VERSION_QUALIFIER="beta1"
export VERSION_QUALIFIER="rc1"
fi
INTAKE_PIPELINE_SLUG="elasticsearch-intake"

View file

@ -22,7 +22,7 @@ public enum DockerBase {
// Chainguard based wolfi image with latest jdk
// This is usually updated via renovatebot
// spotless:off
WOLFI("docker.elastic.co/wolfi/chainguard-base:latest@sha256:ecd940be9f342ee6173397c48f3df5bb410e95000f8726fd01759b6c39b0beda",
WOLFI("docker.elastic.co/wolfi/chainguard-base:latest@sha256:d74b1fda6b7fee2c90b410df258e005c049e0672fe16d79d00e58f14fb69f90b",
"-wolfi",
"apk"
),

View file

@ -0,0 +1,5 @@
pr: 120952
summary: Add `_metric_names_hash` field to OTel metric mappings
area: Data streams
type: bug
issues: []

View file

@ -0,0 +1,6 @@
pr: 122409
summary: Allow setting the `type` in the reroute processor
area: Ingest Node
type: enhancement
issues:
- 121553

View file

@ -0,0 +1,5 @@
pr: 122538
summary: Fix `ArrayIndexOutOfBoundsException` in `ShardBulkInferenceActionFilter`
area: Ingest
type: bug
issues: []

View file

@ -0,0 +1,6 @@
pr: 122637
summary: Use `FallbackSyntheticSourceBlockLoader` for `unsigned_long` and `scaled_float`
fields
area: Mapping
type: enhancement
issues: []

View file

@ -0,0 +1,5 @@
pr: 122737
summary: Bump json-smart and oauth2-oidc-sdk
area: Authentication
type: upgrade
issues: []

View file

@ -229,19 +229,45 @@ works in parallel with the storage engine.)
# Allocation
(AllocationService runs on the master node)
### Core Components
(Discuss different deciders that limit allocation. Sketch / list the different deciders that we have.)
The `DesiredBalanceShardsAllocator` is the component that runs shard allocation decisions. It leverages the `DesiredBalanceComputer` to produce
`DesiredBalance` instances for the cluster based on the latest cluster changes (add/remove nodes, create/remove indices, load, etc.). Then
the `DesiredBalanceReconciler` is invoked to choose the next steps to take to move the cluster from the current shard allocation to the
latest computed `DesiredBalance` shard allocation. The `DesiredBalanceReconciler` applies changes to a copy of the `RoutingNodes`, which
is then published in a cluster state update that reaches the data nodes to start the individual shard recovery/deletion/move work.
### APIs for Balancing Operations
The `DesiredBalanceReconciler` is throttled by cluster settings, such as the maximum number of concurrent shard moves and recoveries per
cluster and per node: this is why the `DesiredBalanceReconciler` makes, and publishes via cluster state updates, incremental changes to the
cluster shard allocation. The `DesiredBalanceShardsAllocator` is the endpoint for reroute requests, which trigger immediate calls to the
`DesiredBalanceReconciler` but only asynchronous requests to the `DesiredBalanceComputer` via the `ContinuousComputation` component. Cluster
state changes that affect shard balancing (for example index deletion) all call a reroute method that reaches the
`DesiredBalanceShardsAllocator` to run reconciliation and queue a request for the `DesiredBalanceComputer`, leading to desired balance
computation and reconciliation actions. Asynchronous completion of a new `DesiredBalance` will also invoke a reconciliation action, as will
cluster state updates completing shard moves/recoveries (unthrottling the next shard move/recovery).
(Significant internal APIs for balancing a cluster)
The `ContinuousComputation` saves the latest desired balance computation request, which holds the cluster information at the time of that
request, and runs a thread that feeds requests to the `DesiredBalanceComputer`. The `ContinuousComputation` thread takes the latest request,
with the associated cluster information, feeds it into the `DesiredBalanceComputer`, and publishes a `DesiredBalance` back to the
`DesiredBalanceShardsAllocator` to use for reconciliation actions. The `ContinuousComputation` thread's desired balance computation may be
signalled to exit early and publish the initial `DesiredBalance` improvements it has made, either when newer rebalancing requests (due to
cluster state changes) have arrived, or in order to begin recovery of unassigned shards as quickly as possible.
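To picture the coalescing behaviour, here is a minimal sketch (hypothetical code, not the actual `ContinuousComputation` class): callers
submit the newest cluster information, bursts of submissions collapse into a single computation, and superseded inputs are silently dropped.

import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

// Hypothetical ContinuousComputation-style worker: always computes against the
// newest submitted input; stale inputs are skipped rather than queued.
final class ContinuousComputationSketch<T> {
    private final AtomicReference<T> latest = new AtomicReference<>();
    private final Semaphore pending = new Semaphore(0);

    ContinuousComputationSketch(Consumer<T> compute) {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    pending.acquire();      // block until at least one new request arrives
                    pending.drainPermits(); // coalesce a burst of requests into one run
                    compute.accept(latest.get()); // e.g. run the computer and publish the result
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "desired-balance-computation");
        worker.setDaemon(true);
        worker.start();
    }

    void submit(T input) {
        latest.set(input); // newer cluster information replaces any unprocessed request
        pending.release();
    }
}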
### Heuristics for Allocation
### Rebalancing Process
### Cluster Reroute Command
(How does this command behave with the desired auto balancer.)
There are different priorities in shard allocation, reflected in which moves the `DesiredBalanceReconciler` selects to do first given that
it can only move, recover, or remove a limited number of shards at once. The first priority is assigning unassigned shards, primaries being
more important than replicas. The second is to move shards that violate any rule (such as node resource limits) as defined by an
`AllocationDecider`. The `AllocationDeciders` holds a group of `AllocationDecider` implementations that place hard constraints on shard
allocation. For example, the `DiskThresholdDecider` manages disk usage thresholds, such that further shards may not be allowed assignment to
a node, or shards may be required to move off a node because disk usage has grown to exceed a threshold; the `FilterAllocationDecider`
excludes a configurable list of indices from certain nodes; and the `MaxRetryAllocationDecider` will not attempt to recover a shard on a
certain node after too many failed retries. The third priority is to rebalance shards to even out the relative weight of shards on each
node: the intention is to avoid, or ease, future hot-spotting on data nodes due to too many shards being placed on the same data node. Node
shard weight is based on a sum of factors: disk usage, projected shard write load, total number of shards, and an incentive to distribute
shards within the same index across different nodes. See the `WeightFunction` and `NodeAllocationStatsAndWeightsCalculator` classes for more
details on the weight calculations that support the `DesiredBalanceComputer` decisions.
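As an illustration of the weighting idea, a node's weight can be sketched as a weighted sum of the factors above (hypothetical names and
coefficients; see `WeightFunction` for the real calculation):

// Hypothetical sketch: higher weight makes a node a less attractive target
// for the next shard of the index under consideration.
final class NodeWeightSketch {
    record NodeStats(double diskUsage, double writeLoad, int totalShards, int shardsOfIndex) {}

    static double weight(NodeStats stats, double diskTheta, double writeTheta, double shardTheta, double indexTheta) {
        return diskTheta * stats.diskUsage()
            + writeTheta * stats.writeLoad()
            + shardTheta * stats.totalShards()
            + indexTheta * stats.shardsOfIndex(); // discourages piling one index's shards onto one node
    }
}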
# Autoscaling

View file

@ -45,6 +45,9 @@ Otherwise, the document will be rejected with a security exception which looks l
|======
| Name | Required | Default | Description
| `destination` | no | - | A static value for the target. Can't be set when the `dataset` or `namespace` option is set.
| `type` | no | `{{data_stream.type}}` a| Field references or a static value for the type part of the data stream name. In addition to the criteria for <<indices-create-api-path-params, index names>>, cannot contain `-` and must be no longer than 100 characters. Example values are `logs` and `metrics`.
Supports field references with a mustache-like syntax (denoted as `{{double}}` or `{{{triple}}}` curly braces). When resolving field references, the processor replaces invalid characters with `_`. Uses the `<type>` part of the index name as a fallback if all field references resolve to a `null`, missing, or non-string value.
| `dataset` | no | `{{data_stream.dataset}}` a| Field references or a static value for the dataset part of the data stream name. In addition to the criteria for <<indices-create-api-path-params, index names>>, cannot contain `-` and must be no longer than 100 characters. Example values are `nginx.access` and `nginx.error`.
Supports field references with a mustache-like syntax (denoted as `{{double}}` or `{{{triple}}}` curly braces). When resolving field references, the processor replaces invalid characters with `_`. Uses the `<dataset>` part of the index name as a fallback if all field references resolve to a `null`, missing, or non-string value.
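For illustration, the documented `_` replacement can be sketched as follows (hypothetical code; the character class mirrors the
`DISALLOWED_IN_TYPE` pattern added to `RerouteProcessor` later in this diff):

import java.util.regex.Pattern;

// Invalid characters (including '-') in a resolved field reference are replaced with '_'.
final class RerouteValueSanitizer {
    private static final Pattern DISALLOWED = Pattern.compile("[\\\\/*?\"<>| ,#:-]");

    static String sanitize(String resolvedValue) {
        return DISALLOWED.matcher(resolvedValue).replaceAll("_"); // "foo-bar" -> "foo_bar"
    }
}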

View file

@ -984,36 +984,19 @@
<sha256 value="e8c1c594e2425bdbea2d860de55c69b69fc5d59454452449a0f0913c2a5b8a31" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="com.nimbusds" name="nimbus-jose-jwt" version="10.0.1">
<artifact name="nimbus-jose-jwt-10.0.1.jar">
<sha256 value="f28dbd9ab128324f05050d76b78469d3a9cd83e0319aabc68d1c276e3923e13a" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="com.nimbusds" name="nimbus-jose-jwt" version="4.41.1">
<artifact name="nimbus-jose-jwt-4.41.1.jar">
<sha256 value="fbfd0d5f2b2f86758b821daa5e79b5d7c965edd9dc1b2cc80b515df1c6ddc22d" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="com.nimbusds" name="nimbus-jose-jwt" version="9.37.3">
<artifact name="nimbus-jose-jwt-9.37.3.jar">
<sha256 value="12ae4a3a260095d7aeba2adea7ae396e8b9570db8b7b409e09a824c219cc0444" origin="Generated by Gradle">
<also-trust value="afc63b689d881439b95f343b1dca750391edac63b87392be4d90d19c94ccafbe"/>
</sha256>
</artifact>
</component>
<component group="com.nimbusds" name="nimbus-jose-jwt" version="9.8.1">
<artifact name="nimbus-jose-jwt-9.8.1.jar">
<sha256 value="7664cf8c6f2adadf600287812b32878277beda54912eab9d4c2932cd50cb704a" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="com.nimbusds" name="oauth2-oidc-sdk" version="11.10.1">
<artifact name="oauth2-oidc-sdk-11.10.1.jar">
<sha256 value="9e51b2c17503cdd3eb97f41491c712aff7783bb3c67185d789f44ccf2a603b26" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="com.nimbusds" name="oauth2-oidc-sdk" version="11.9.1">
<artifact name="oauth2-oidc-sdk-11.9.1.jar">
<sha256 value="0820c9690966304d075347b88e81ae490213440fc4d2c84f3d370d41941b2b9c" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="com.nimbusds" name="oauth2-oidc-sdk" version="9.37">
<artifact name="oauth2-oidc-sdk-9.37.jar">
<sha256 value="44a04bbed5ae3f6d198aa73ee6b545c476e528ec1a267ef3e9f7033f886dd6fe" origin="Generated by Gradle"/>
<component group="com.nimbusds" name="oauth2-oidc-sdk" version="11.22.2">
<artifact name="oauth2-oidc-sdk-11.22.2.jar">
<sha256 value="64fab42f17bf8e0efb193dd34da716ef7abb7515234036119df1776b808dc066" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="com.perforce" name="p4java" version="2015.2.1365273">
@ -1779,9 +1762,9 @@
<sha256 value="0972bbc99437c4163acd09b630e6c77eab4cfab8a9594621c95466c0c6645396" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="net.minidev" name="accessors-smart" version="2.5.0">
<artifact name="accessors-smart-2.5.0.jar">
<sha256 value="12314fc6881d66a413fd66370787adba16e504fbf7e138690b0f3952e3fbd321" origin="Generated by Gradle"/>
<component group="net.minidev" name="accessors-smart" version="2.5.2">
<artifact name="accessors-smart-2.5.2.jar">
<sha256 value="9b8a7bc43861d6156c021166d941fb7dddbe4463e2fa5ee88077e4b01452a836" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="net.minidev" name="json-smart" version="2.3">
@ -1789,24 +1772,14 @@
<sha256 value="903f48c8aa4c3f6426440b8d32de89fa1dc23b1169abde25e4e1d068aa67708b" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="net.minidev" name="json-smart" version="2.4.10">
<artifact name="json-smart-2.4.10.jar">
<sha256 value="70cab5e9488630dc631b1fc6e7fa550d95cddd19ba14db39ceca7cabfbd4e5ae" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="net.minidev" name="json-smart" version="2.4.2">
<artifact name="json-smart-2.4.2.jar">
<sha256 value="64072f56d9dff5040b2acec477c5d5e6bcebfc88c508f12acb26072d07942146" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="net.minidev" name="json-smart" version="2.5.0">
<artifact name="json-smart-2.5.0.jar">
<sha256 value="432b9e545848c4141b80717b26e367f83bf33f19250a228ce75da6e967da2bc7" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="net.minidev" name="json-smart" version="2.5.1">
<artifact name="json-smart-2.5.1.jar">
<sha256 value="86c0c189581b79b57b0719f443a724e9f628ffbb9eef645cf79194f5973a1001" origin="Generated by Gradle"/>
<component group="net.minidev" name="json-smart" version="2.5.2">
<artifact name="json-smart-2.5.2.jar">
<sha256 value="4fbdedb0105cedc7f766b95c297d2e88fb6a560da48f3bbaa0cc538ea8b7bf71" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="net.nextencia" name="rrdiagram" version="0.9.4">
@ -4408,31 +4381,6 @@
<sha256 value="ca5b8d11569e53921b0e3486469e7c674361c79845dad3d514f38ab6e0c8c10a" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="org.ow2.asm" name="asm" version="9.2">
<artifact name="asm-9.2.jar">
<sha256 value="b9d4fe4d71938df38839f0eca42aaaa64cf8b313d678da036f0cb3ca199b47f5" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="org.ow2.asm" name="asm" version="9.3">
<artifact name="asm-9.3.jar">
<sha256 value="1263369b59e29c943918de11d6d6152e2ec6085ce63e5710516f8c67d368e4bc" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="org.ow2.asm" name="asm" version="9.4">
<artifact name="asm-9.4.jar">
<sha256 value="39d0e2b3dc45af65a09b097945750a94a126e052e124f93468443a1d0e15f381" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="org.ow2.asm" name="asm" version="9.5">
<artifact name="asm-9.5.jar">
<sha256 value="b62e84b5980729751b0458c534cf1366f727542bb8d158621335682a460f0353" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="org.ow2.asm" name="asm" version="9.6">
<artifact name="asm-9.6.jar">
<sha256 value="3c6fac2424db3d4a853b669f4e3d1d9c3c552235e19a319673f887083c2303a1" origin="Generated by Gradle"/>
</artifact>
</component>
<component group="org.ow2.asm" name="asm" version="9.7.1">
<artifact name="asm-9.7.1.jar">
<sha256 value="8cadd43ac5eb6d09de05faecca38b917a040bb9139c7edeb4cc81c740b713281" origin="Generated by Gradle"/>

View file

@ -10,6 +10,7 @@
package org.elasticsearch.entitlement.bridge;
import java.io.File;
import java.io.FileDescriptor;
import java.io.FileFilter;
import java.io.FilenameFilter;
import java.io.InputStream;
@ -572,14 +573,54 @@ public interface EntitlementChecker {
void check$java_io_File$setWritable(Class<?> callerClass, File file, boolean writable, boolean ownerOnly);
void check$java_io_FileInputStream$(Class<?> callerClass, File file);
void check$java_io_FileInputStream$(Class<?> callerClass, FileDescriptor fd);
void check$java_io_FileInputStream$(Class<?> callerClass, String name);
void check$java_io_FileOutputStream$(Class<?> callerClass, File file);
void check$java_io_FileOutputStream$(Class<?> callerClass, File file, boolean append);
void check$java_io_FileOutputStream$(Class<?> callerClass, FileDescriptor fd);
void check$java_io_FileOutputStream$(Class<?> callerClass, String name);
void check$java_io_FileOutputStream$(Class<?> callerClass, String name, boolean append);
void check$java_io_FileReader$(Class<?> callerClass, File file);
void check$java_io_FileReader$(Class<?> callerClass, File file, Charset charset);
void check$java_io_FileReader$(Class<?> callerClass, FileDescriptor fd);
void check$java_io_FileReader$(Class<?> callerClass, String name);
void check$java_io_FileReader$(Class<?> callerClass, String name, Charset charset);
void check$java_io_FileWriter$(Class<?> callerClass, File file);
void check$java_io_FileWriter$(Class<?> callerClass, File file, boolean append);
void check$java_io_FileWriter$(Class<?> callerClass, File file, Charset charset);
void check$java_io_FileWriter$(Class<?> callerClass, File file, Charset charset, boolean append);
void check$java_io_FileWriter$(Class<?> callerClass, FileDescriptor fd);
void check$java_io_FileWriter$(Class<?> callerClass, String name);
void check$java_io_FileWriter$(Class<?> callerClass, String name, boolean append);
void check$java_io_FileWriter$(Class<?> callerClass, String name, Charset charset);
void check$java_io_FileWriter$(Class<?> callerClass, String name, Charset charset, boolean append);
void check$java_io_RandomAccessFile$(Class<?> callerClass, String name, String mode);
void check$java_io_RandomAccessFile$(Class<?> callerClass, File file, String mode);
void check$java_util_Scanner$(Class<?> callerClass, File source);
void check$java_util_Scanner$(Class<?> callerClass, File source, String charsetName);

View file

@ -13,9 +13,14 @@ import org.elasticsearch.core.SuppressForbidden;
import org.elasticsearch.entitlement.qa.entitled.EntitledActions;
import java.io.File;
import java.io.FileDescriptor;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
@ -23,6 +28,7 @@ import java.nio.file.Paths;
import java.nio.file.attribute.UserPrincipal;
import java.util.Scanner;
import static org.elasticsearch.entitlement.qa.test.EntitlementTest.ExpectedAccess.ALWAYS_DENIED;
import static org.elasticsearch.entitlement.qa.test.EntitlementTest.ExpectedAccess.PLUGINS;
@SuppressForbidden(reason = "Explicitly checking APIs that are forbidden")
@ -216,6 +222,21 @@ class FileCheckActions {
new Scanner(readFile().toFile(), "UTF-8");
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileInputStreamFile() throws IOException {
new FileInputStream(readFile().toFile()).close();
}
@EntitlementTest(expectedAccess = ALWAYS_DENIED)
static void createFileInputStreamFileDescriptor() throws IOException {
new FileInputStream(FileDescriptor.in).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileInputStreamString() throws IOException {
new FileInputStream(readFile().toString()).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileOutputStreamString() throws IOException {
new FileOutputStream(readWriteFile().toString()).close();
@ -236,6 +257,96 @@ class FileCheckActions {
new FileOutputStream(readWriteFile().toFile(), false).close();
}
@EntitlementTest(expectedAccess = ALWAYS_DENIED)
static void createFileOutputStreamFileDescriptor() throws IOException {
new FileOutputStream(FileDescriptor.out).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileReaderFile() throws IOException {
new FileReader(readFile().toFile()).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileReaderFileCharset() throws IOException {
new FileReader(readFile().toFile(), StandardCharsets.UTF_8).close();
}
@EntitlementTest(expectedAccess = ALWAYS_DENIED)
static void createFileReaderFileDescriptor() throws IOException {
new FileReader(FileDescriptor.in).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileReaderString() throws IOException {
new FileReader(readFile().toString()).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileReaderStringCharset() throws IOException {
new FileReader(readFile().toString(), StandardCharsets.UTF_8).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileWriterFile() throws IOException {
new FileWriter(readWriteFile().toFile()).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileWriterFileWithAppend() throws IOException {
new FileWriter(readWriteFile().toFile(), false).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileWriterFileCharsetWithAppend() throws IOException {
new FileWriter(readWriteFile().toFile(), StandardCharsets.UTF_8, false).close();
}
@EntitlementTest(expectedAccess = ALWAYS_DENIED)
static void createFileWriterFileDescriptor() throws IOException {
new FileWriter(FileDescriptor.out).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileWriterString() throws IOException {
new FileWriter(readWriteFile().toString()).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileWriterStringWithAppend() throws IOException {
new FileWriter(readWriteFile().toString(), false).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileWriterStringCharset() throws IOException {
new FileWriter(readWriteFile().toString(), StandardCharsets.UTF_8).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createFileWriterStringCharsetWithAppend() throws IOException {
new FileWriter(readWriteFile().toString(), StandardCharsets.UTF_8, false).close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createRandomAccessFileStringRead() throws IOException {
new RandomAccessFile(readFile().toString(), "r").close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createRandomAccessFileStringReadWrite() throws IOException {
new RandomAccessFile(readWriteFile().toString(), "rw").close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createRandomAccessFileRead() throws IOException {
new RandomAccessFile(readFile().toFile(), "r").close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void createRandomAccessFileReadWrite() throws IOException {
new RandomAccessFile(readWriteFile().toFile(), "rw").close();
}
@EntitlementTest(expectedAccess = PLUGINS)
static void filesGetOwner() throws IOException {
Files.getOwner(readFile());

View file

@ -28,6 +28,7 @@ import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Stream;
import static java.util.Objects.requireNonNull;
@ -36,19 +37,24 @@ public class EntitlementBootstrap {
public record BootstrapArgs(
Map<String, Policy> pluginPolicies,
Function<Class<?>, String> pluginResolver,
Function<String, String> settingResolver,
Function<String, Stream<String>> settingGlobResolver,
Path[] dataDirs,
Path configDir,
Path tempDir,
Path logsDir
Path logsDir,
Path tempDir
) {
public BootstrapArgs {
requireNonNull(pluginPolicies);
requireNonNull(pluginResolver);
requireNonNull(settingResolver);
requireNonNull(settingGlobResolver);
requireNonNull(dataDirs);
if (dataDirs.length == 0) {
throw new IllegalArgumentException("must provide at least one data directory");
}
requireNonNull(configDir);
requireNonNull(logsDir);
requireNonNull(tempDir);
}
}
@ -73,16 +79,27 @@ public class EntitlementBootstrap {
public static void bootstrap(
Map<String, Policy> pluginPolicies,
Function<Class<?>, String> pluginResolver,
Function<String, String> settingResolver,
Function<String, Stream<String>> settingGlobResolver,
Path[] dataDirs,
Path configDir,
Path tempDir,
Path logsDir
Path logsDir,
Path tempDir
) {
logger.debug("Loading entitlement agent");
if (EntitlementBootstrap.bootstrapArgs != null) {
throw new IllegalStateException("plugin data is already set");
}
EntitlementBootstrap.bootstrapArgs = new BootstrapArgs(pluginPolicies, pluginResolver, dataDirs, configDir, tempDir, logsDir);
EntitlementBootstrap.bootstrapArgs = new BootstrapArgs(
pluginPolicies,
pluginResolver,
settingResolver,
settingGlobResolver,
dataDirs,
configDir,
logsDir,
tempDir
);
exportInitializationToAgent();
loadAgent(findAgentJar());
selfTest();

View file

@ -53,6 +53,7 @@ import java.nio.file.attribute.FileAttribute;
import java.nio.file.spi.FileSystemProvider;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
@ -134,13 +135,18 @@ public class EntitlementInitialization {
private static PolicyManager createPolicyManager() {
EntitlementBootstrap.BootstrapArgs bootstrapArgs = EntitlementBootstrap.bootstrapArgs();
Map<String, Policy> pluginPolicies = bootstrapArgs.pluginPolicies();
var pathLookup = new PathLookup(getUserHome(), bootstrapArgs.configDir(), bootstrapArgs.dataDirs(), bootstrapArgs.tempDir());
Path logsDir = EntitlementBootstrap.bootstrapArgs().logsDir();
var pathLookup = new PathLookup(
getUserHome(),
bootstrapArgs.configDir(),
bootstrapArgs.dataDirs(),
bootstrapArgs.tempDir(),
bootstrapArgs.settingResolver(),
bootstrapArgs.settingGlobResolver()
);
// TODO(ES-10031): Decide what goes in the elasticsearch default policy and extend it
var serverPolicy = new Policy(
"server",
List.of(
List<Scope> serverScopes = new ArrayList<>();
Collections.addAll(
serverScopes,
new Scope("org.elasticsearch.base", List.of(new CreateClassLoaderEntitlement())),
new Scope("org.elasticsearch.xcontent", List.of(new CreateClassLoaderEntitlement())),
new Scope(
@ -205,8 +211,17 @@ public class EntitlementInitialization {
new FilesEntitlement(List.of(FileData.ofRelativePath(Path.of(""), FilesEntitlement.BaseDir.DATA, READ_WRITE)))
)
)
)
);
Path trustStorePath = trustStorePath();
if (trustStorePath != null) {
serverScopes.add(
new Scope("org.bouncycastle.fips.tls", List.of(new FilesEntitlement(List.of(FileData.ofPath(trustStorePath, READ)))))
);
}
// TODO(ES-10031): Decide what goes in the elasticsearch default policy and extend it
var serverPolicy = new Policy("server", serverScopes);
// agents run without a module, so this is a special hack for the apm agent
// this should be removed once https://github.com/elastic/elasticsearch/issues/109335 is completed
List<Entitlement> agentEntitlements = List.of(new CreateClassLoaderEntitlement(), new ManageThreadsEntitlement());
@ -230,6 +245,11 @@ public class EntitlementInitialization {
return PathUtils.get(userHome);
}
private static Path trustStorePath() {
String trustStore = System.getProperty("javax.net.ssl.trustStore");
return trustStore != null ? Path.of(trustStore) : null;
}
private static Stream<InstrumentationService.InstrumentationInfo> fileSystemProviderChecks() throws ClassNotFoundException,
NoSuchMethodException {
var fileSystemProviderClass = FileSystems.getDefault().provider().getClass();

View file

@ -14,6 +14,7 @@ import org.elasticsearch.entitlement.bridge.EntitlementChecker;
import org.elasticsearch.entitlement.runtime.policy.PolicyManager;
import java.io.File;
import java.io.FileDescriptor;
import java.io.FileFilter;
import java.io.FilenameFilter;
import java.io.IOException;
@ -1103,6 +1104,21 @@ public class ElasticsearchEntitlementChecker implements EntitlementChecker {
policyManager.checkFileWrite(callerClass, file);
}
@Override
public void check$java_io_FileInputStream$(Class<?> callerClass, File file) {
policyManager.checkFileRead(callerClass, file);
}
@Override
public void check$java_io_FileInputStream$(Class<?> callerClass, FileDescriptor fd) {
policyManager.checkFileDescriptorRead(callerClass);
}
@Override
public void check$java_io_FileInputStream$(Class<?> callerClass, String name) {
policyManager.checkFileRead(callerClass, new File(name));
}
@Override
public void check$java_io_FileOutputStream$(Class<?> callerClass, String name) {
policyManager.checkFileWrite(callerClass, new File(name));
@ -1123,6 +1139,99 @@ public class ElasticsearchEntitlementChecker implements EntitlementChecker {
policyManager.checkFileWrite(callerClass, file);
}
@Override
public void check$java_io_FileOutputStream$(Class<?> callerClass, FileDescriptor fd) {
policyManager.checkFileDescriptorWrite(callerClass);
}
@Override
public void check$java_io_FileReader$(Class<?> callerClass, File file) {
policyManager.checkFileRead(callerClass, file);
}
@Override
public void check$java_io_FileReader$(Class<?> callerClass, File file, Charset charset) {
policyManager.checkFileRead(callerClass, file);
}
@Override
public void check$java_io_FileReader$(Class<?> callerClass, FileDescriptor fd) {
policyManager.checkFileDescriptorRead(callerClass);
}
@Override
public void check$java_io_FileReader$(Class<?> callerClass, String name) {
policyManager.checkFileRead(callerClass, new File(name));
}
@Override
public void check$java_io_FileReader$(Class<?> callerClass, String name, Charset charset) {
policyManager.checkFileRead(callerClass, new File(name));
}
@Override
public void check$java_io_FileWriter$(Class<?> callerClass, File file) {
policyManager.checkFileWrite(callerClass, file);
}
@Override
public void check$java_io_FileWriter$(Class<?> callerClass, File file, boolean append) {
policyManager.checkFileWrite(callerClass, file);
}
@Override
public void check$java_io_FileWriter$(Class<?> callerClass, File file, Charset charset) {
policyManager.checkFileWrite(callerClass, file);
}
@Override
public void check$java_io_FileWriter$(Class<?> callerClass, File file, Charset charset, boolean append) {
policyManager.checkFileWrite(callerClass, file);
}
@Override
public void check$java_io_FileWriter$(Class<?> callerClass, FileDescriptor fd) {
policyManager.checkFileDescriptorWrite(callerClass);
}
@Override
public void check$java_io_FileWriter$(Class<?> callerClass, String name) {
policyManager.checkFileWrite(callerClass, new File(name));
}
@Override
public void check$java_io_FileWriter$(Class<?> callerClass, String name, boolean append) {
policyManager.checkFileWrite(callerClass, new File(name));
}
@Override
public void check$java_io_FileWriter$(Class<?> callerClass, String name, Charset charset) {
policyManager.checkFileWrite(callerClass, new File(name));
}
@Override
public void check$java_io_FileWriter$(Class<?> callerClass, String name, Charset charset, boolean append) {
policyManager.checkFileWrite(callerClass, new File(name));
}
@Override
public void check$java_io_RandomAccessFile$(Class<?> callerClass, String name, String mode) {
if (mode.equals("r")) {
policyManager.checkFileRead(callerClass, new File(name));
} else {
policyManager.checkFileWrite(callerClass, new File(name));
}
}
@Override
public void check$java_io_RandomAccessFile$(Class<?> callerClass, File file, String mode) {
if (mode.equals("r")) {
policyManager.checkFileRead(callerClass, file);
} else {
policyManager.checkFileWrite(callerClass, file);
}
}
@Override
public void check$java_util_Scanner$(Class<?> callerClass, File source) {
policyManager.checkFileRead(callerClass, source);

View file

@ -42,8 +42,9 @@ public final class FileAccessTree {
}
// everything has access to the temp dir
readPaths.add(pathLookup.tempDir().toString());
writePaths.add(pathLookup.tempDir().toString());
String tempDir = normalizePath(pathLookup.tempDir());
readPaths.add(tempDir);
writePaths.add(tempDir);
readPaths.sort(String::compareTo);
writePaths.sort(String::compareTo);

View file

@ -10,5 +10,14 @@
package org.elasticsearch.entitlement.runtime.policy;
import java.nio.file.Path;
import java.util.function.Function;
import java.util.stream.Stream;
public record PathLookup(Path homeDir, Path configDir, Path[] dataDirs, Path tempDir) {}
public record PathLookup(
Path homeDir,
Path configDir,
Path[] dataDirs,
Path tempDir,
Function<String, String> settingResolver,
Function<String, Stream<String>> settingGlobResolver
) {}

View file

@ -304,6 +304,14 @@ public class PolicyManager {
}
}
public void checkFileDescriptorRead(Class<?> callerClass) {
neverEntitled(callerClass, () -> "read file descriptor");
}
public void checkFileDescriptorWrite(Class<?> callerClass) {
neverEntitled(callerClass, () -> "write file descriptor");
}
/**
* Invoked when we try to get an arbitrary {@code FileAttributeView} class. Such a class can modify attributes, like owner etc.;
* we could think about introducing checks for each of the operations, but for now we over-approximate this and simply deny when it is

View file

@ -15,7 +15,6 @@ import org.elasticsearch.entitlement.runtime.policy.PolicyValidationException;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
@ -53,6 +52,43 @@ public record FilesEntitlement(List<FileData> filesData) implements Entitlement
static FileData ofRelativePath(Path relativePath, BaseDir baseDir, Mode mode) {
return new RelativePathFileData(relativePath, baseDir, mode);
}
static FileData ofPathSetting(String setting, Mode mode) {
return new PathSettingFileData(setting, mode);
}
static FileData ofRelativePathSetting(String setting, BaseDir baseDir, Mode mode) {
return new RelativePathSettingFileData(setting, baseDir, mode);
}
}
private sealed interface RelativeFileData extends FileData {
BaseDir baseDir();
Stream<Path> resolveRelativePaths(PathLookup pathLookup);
@Override
default Stream<Path> resolvePaths(PathLookup pathLookup) {
Objects.requireNonNull(pathLookup);
var relativePaths = resolveRelativePaths(pathLookup);
switch (baseDir()) {
case CONFIG:
return relativePaths.map(relativePath -> pathLookup.configDir().resolve(relativePath));
case DATA:
// multiple data dirs are a pain...we need the combination of relative paths and data dirs
List<Path> paths = new ArrayList<>();
for (var relativePath : relativePaths.toList()) {
for (var dataDir : pathLookup.dataDirs()) {
paths.add(dataDir.resolve(relativePath));
}
}
return paths.stream();
case HOME:
return relativePaths.map(relativePath -> pathLookup.homeDir().resolve(relativePath));
default:
throw new IllegalArgumentException();
}
}
}
private record AbsolutePathFileData(Path path, Mode mode) implements FileData {
@ -62,22 +98,33 @@ public record FilesEntitlement(List<FileData> filesData) implements Entitlement
}
}
private record RelativePathFileData(Path relativePath, BaseDir baseDir, Mode mode) implements FileData {
private record RelativePathFileData(Path relativePath, BaseDir baseDir, Mode mode) implements FileData, RelativeFileData {
@Override
public Stream<Path> resolveRelativePaths(PathLookup pathLookup) {
return Stream.of(relativePath);
}
}
private record PathSettingFileData(String setting, Mode mode) implements FileData {
@Override
public Stream<Path> resolvePaths(PathLookup pathLookup) {
Objects.requireNonNull(pathLookup);
switch (baseDir) {
case CONFIG:
return Stream.of(pathLookup.configDir().resolve(relativePath));
case DATA:
return Arrays.stream(pathLookup.dataDirs()).map(d -> d.resolve(relativePath));
case HOME:
return Stream.of(pathLookup.homeDir().resolve(relativePath));
default:
throw new IllegalArgumentException();
return resolvePathSettings(pathLookup, setting);
}
}
private record RelativePathSettingFileData(String setting, BaseDir baseDir, Mode mode) implements FileData, RelativeFileData {
@Override
public Stream<Path> resolveRelativePaths(PathLookup pathLookup) {
return resolvePathSettings(pathLookup, setting);
}
}
private static Stream<Path> resolvePathSettings(PathLookup pathLookup, String setting) {
if (setting.contains("*")) {
return pathLookup.settingGlobResolver().apply(setting).map(Path::of);
}
String path = pathLookup.settingResolver().apply(setting);
return path == null ? Stream.of() : Stream.of(Path.of(path));
}
private static Mode parseMode(String mode) {
@ -113,37 +160,56 @@ public record FilesEntitlement(List<FileData> filesData) implements Entitlement
String pathAsString = file.remove("path");
String relativePathAsString = file.remove("relative_path");
String relativeTo = file.remove("relative_to");
String mode = file.remove("mode");
String pathSetting = file.remove("path_setting");
String relativePathSetting = file.remove("relative_path_setting");
String modeAsString = file.remove("mode");
if (file.isEmpty() == false) {
throw new PolicyValidationException("unknown key(s) [" + file + "] in a listed file for files entitlement");
}
if (mode == null) {
int foundKeys = (pathAsString != null ? 1 : 0) + (relativePathAsString != null ? 1 : 0) + (pathSetting != null ? 1 : 0)
+ (relativePathSetting != null ? 1 : 0);
if (foundKeys != 1) {
throw new PolicyValidationException(
"a files entitlement entry must contain one of " + "[path, relative_path, path_setting, relative_path_setting]"
);
}
if (modeAsString == null) {
throw new PolicyValidationException("files entitlement must contain 'mode' for every listed file");
}
if (pathAsString != null && relativePathAsString != null) {
throw new PolicyValidationException("a files entitlement entry cannot contain both 'path' and 'relative_path'");
Mode mode = parseMode(modeAsString);
BaseDir baseDir = null;
if (relativeTo != null) {
baseDir = parseBaseDir(relativeTo);
}
if (relativePathAsString != null) {
if (relativeTo == null) {
if (baseDir == null) {
throw new PolicyValidationException("files entitlement with a 'relative_path' must specify 'relative_to'");
}
final BaseDir baseDir = parseBaseDir(relativeTo);
Path relativePath = Path.of(relativePathAsString);
if (relativePath.isAbsolute()) {
throw new PolicyValidationException("'relative_path' [" + relativePathAsString + "] must be relative");
}
filesData.add(FileData.ofRelativePath(relativePath, baseDir, parseMode(mode)));
filesData.add(FileData.ofRelativePath(relativePath, baseDir, mode));
} else if (pathAsString != null) {
Path path = Path.of(pathAsString);
if (path.isAbsolute() == false) {
throw new PolicyValidationException("'path' [" + pathAsString + "] must be absolute");
}
filesData.add(FileData.ofPath(path, parseMode(mode)));
filesData.add(FileData.ofPath(path, mode));
} else if (pathSetting != null) {
filesData.add(FileData.ofPathSetting(pathSetting, mode));
} else if (relativePathSetting != null) {
if (baseDir == null) {
throw new PolicyValidationException("files entitlement with a 'relative_path_setting' must specify 'relative_to'");
}
filesData.add(FileData.ofRelativePathSetting(relativePathSetting, baseDir, mode));
} else {
throw new PolicyValidationException("files entitlement must contain either 'path' or 'relative_path' for every entry");
throw new AssertionError("File entry validation error");
}
}
return new FilesEntitlement(filesData);

View file

@ -9,6 +9,7 @@
package org.elasticsearch.entitlement.runtime.policy;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.entitlement.runtime.policy.entitlements.FilesEntitlement;
import org.elasticsearch.test.ESTestCase;
import org.junit.BeforeClass;
@ -25,10 +26,12 @@ import static org.hamcrest.Matchers.is;
public class FileAccessTreeTests extends ESTestCase {
static Path root;
static Settings settings;
@BeforeClass
public static void setupRoot() {
root = createTempDir();
settings = Settings.EMPTY;
}
private static Path path(String s) {
@ -39,7 +42,9 @@ public class FileAccessTreeTests extends ESTestCase {
Path.of("/home"),
Path.of("/config"),
new Path[] { Path.of("/data1"), Path.of("/data2") },
Path.of("/tmp")
Path.of("/tmp"),
setting -> settings.get(setting),
glob -> settings.getGlobValues(glob)
);
public void testEmpty() {
@ -163,13 +168,9 @@ public class FileAccessTreeTests extends ESTestCase {
}
public void testTempDirAccess() {
Path tempDir = createTempDir();
var tree = FileAccessTree.of(
FilesEntitlement.EMPTY,
new PathLookup(Path.of("/home"), Path.of("/config"), new Path[] { Path.of("/data1"), Path.of("/data2") }, tempDir)
);
assertThat(tree.canRead(tempDir), is(true));
assertThat(tree.canWrite(tempDir), is(true));
var tree = FileAccessTree.of(FilesEntitlement.EMPTY, TEST_PATH_LOOKUP);
assertThat(tree.canRead(TEST_PATH_LOOKUP.tempDir()), is(true));
assertThat(tree.canWrite(TEST_PATH_LOOKUP.tempDir()), is(true));
}
FileAccessTree accessTree(FilesEntitlement entitlement) {

View file

@ -9,6 +9,7 @@
package org.elasticsearch.entitlement.runtime.policy;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.entitlement.runtime.policy.PolicyManager.ModuleEntitlements;
import org.elasticsearch.entitlement.runtime.policy.agent.TestAgent;
import org.elasticsearch.entitlement.runtime.policy.agent.inner.TestInnerAgent;
@ -68,7 +69,9 @@ public class PolicyManagerTests extends ESTestCase {
TEST_BASE_DIR.resolve("/user/home"),
TEST_BASE_DIR.resolve("/config"),
new Path[] { TEST_BASE_DIR.resolve("/data1/"), TEST_BASE_DIR.resolve("/data2") },
TEST_BASE_DIR.resolve("/temp")
TEST_BASE_DIR.resolve("/temp"),
Settings.EMPTY::get,
Settings.EMPTY::getGlobValues
);
} catch (Exception e) {
throw new IllegalStateException(e);

View file

@ -74,7 +74,8 @@ public class PolicyParserFailureTests extends ESTestCase {
""".getBytes(StandardCharsets.UTF_8)), "test-failure-policy.yaml", false).parsePolicy());
assertEquals(
"[2:5] policy parsing error for [test-failure-policy.yaml] in scope [entitlement-module-name] "
+ "for entitlement type [files]: a files entitlement entry cannot contain both 'path' and 'relative_path'",
+ "for entitlement type [files]: a files entitlement entry must contain one of "
+ "[path, relative_path, path_setting, relative_path_setting]",
ppe.getMessage()
);
}
@ -87,7 +88,8 @@ public class PolicyParserFailureTests extends ESTestCase {
""".getBytes(StandardCharsets.UTF_8)), "test-failure-policy.yaml", false).parsePolicy());
assertEquals(
"[2:5] policy parsing error for [test-failure-policy.yaml] in scope [entitlement-module-name] "
+ "for entitlement type [files]: files entitlement must contain either 'path' or 'relative_path' for every entry",
+ "for entitlement type [files]: a files entitlement entry must contain one of "
+ "[path, relative_path, path_setting, relative_path_setting]",
ppe.getMessage()
);
}

View file

@ -182,6 +182,11 @@ public class PolicyParserTests extends ESTestCase {
mode: "read"
- path: '%s'
mode: "read_write"
- path_setting: foo.bar
mode: read
- relative_path_setting: foo.bar
relative_to: config
mode: read
""", relativePathToFile, relativePathToDir, TEST_ABSOLUTE_PATH_TO_FILE).getBytes(StandardCharsets.UTF_8)),
"test-policy.yaml",
false
@ -196,7 +201,9 @@ public class PolicyParserTests extends ESTestCase {
List.of(
Map.of("relative_path", relativePathToFile, "mode", "read_write", "relative_to", "data"),
Map.of("relative_path", relativePathToDir, "mode", "read", "relative_to", "config"),
Map.of("path", TEST_ABSOLUTE_PATH_TO_FILE, "mode", "read_write")
Map.of("path", TEST_ABSOLUTE_PATH_TO_FILE, "mode", "read_write"),
Map.of("path_setting", "foo.bar", "mode", "read"),
Map.of("relative_path_setting", "foo.bar", "relative_to", "config", "mode", "read")
)
)
)

View file

@ -9,20 +9,42 @@
package org.elasticsearch.entitlement.runtime.policy.entitlements;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.entitlement.runtime.policy.PathLookup;
import org.elasticsearch.entitlement.runtime.policy.PolicyValidationException;
import org.elasticsearch.entitlement.runtime.policy.entitlements.FilesEntitlement.FileData;
import org.elasticsearch.test.ESTestCase;
import org.junit.BeforeClass;
import java.nio.file.Path;
import java.util.List;
import java.util.Map;
import static org.elasticsearch.entitlement.runtime.policy.entitlements.FilesEntitlement.Mode.READ;
import static org.elasticsearch.entitlement.runtime.policy.entitlements.FilesEntitlement.Mode.READ_WRITE;
import static org.hamcrest.Matchers.contains;
import static org.hamcrest.Matchers.containsInAnyOrder;
import static org.hamcrest.Matchers.empty;
import static org.hamcrest.Matchers.is;
public class FilesEntitlementTests extends ESTestCase {
static Settings settings;
@BeforeClass
public static void setupRoot() {
settings = Settings.EMPTY;
}
private static final PathLookup TEST_PATH_LOOKUP = new PathLookup(
Path.of("home"),
Path.of("/config"),
new Path[] { Path.of("/data1"), Path.of("/data2") },
Path.of("/tmp"),
setting -> settings.get(setting),
glob -> settings.getGlobValues(glob)
);
public void testEmptyBuild() {
PolicyValidationException pve = expectThrows(PolicyValidationException.class, () -> FilesEntitlement.build(List.of()));
assertEquals("must specify at least one path", pve.getMessage());
@ -39,10 +61,30 @@ public class FilesEntitlementTests extends ESTestCase {
}
public void testFileDataRelativeWithEmptyDirectory() {
var fileData = FilesEntitlement.FileData.ofRelativePath(Path.of(""), FilesEntitlement.BaseDir.DATA, READ_WRITE);
var dataDirs = fileData.resolvePaths(
new PathLookup(Path.of("/home"), Path.of("/config"), new Path[] { Path.of("/data1/"), Path.of("/data2") }, Path.of("/temp"))
);
var fileData = FileData.ofRelativePath(Path.of(""), FilesEntitlement.BaseDir.DATA, READ_WRITE);
var dataDirs = fileData.resolvePaths(TEST_PATH_LOOKUP);
assertThat(dataDirs.toList(), contains(Path.of("/data1/"), Path.of("/data2")));
}
public void testPathSettingResolve() {
var entitlement = FilesEntitlement.build(List.of(Map.of("path_setting", "foo.bar", "mode", "read")));
var filesData = entitlement.filesData();
assertThat(filesData, contains(FileData.ofPathSetting("foo.bar", READ)));
var fileData = FileData.ofPathSetting("foo.bar", READ);
// empty settings
assertThat(fileData.resolvePaths(TEST_PATH_LOOKUP).toList(), empty());
fileData = FileData.ofPathSetting("foo.bar", READ);
settings = Settings.builder().put("foo.bar", "/setting/path").build();
assertThat(fileData.resolvePaths(TEST_PATH_LOOKUP).toList(), contains(Path.of("/setting/path")));
fileData = FileData.ofPathSetting("foo.*.bar", READ);
settings = Settings.builder().put("foo.baz.bar", "/setting/path").build();
assertThat(fileData.resolvePaths(TEST_PATH_LOOKUP).toList(), contains(Path.of("/setting/path")));
fileData = FileData.ofPathSetting("foo.*.bar", READ);
settings = Settings.builder().put("foo.baz.bar", "/setting/path").put("foo.baz2.bar", "/other/path").build();
assertThat(fileData.resolvePaths(TEST_PATH_LOOKUP).toList(), containsInAnyOrder(Path.of("/setting/path"), Path.of("/other/path")));
}
}

View file

@ -26,6 +26,7 @@ import static org.elasticsearch.core.Strings.format;
import static org.elasticsearch.ingest.ConfigurationUtils.newConfigurationException;
import static org.elasticsearch.ingest.common.RerouteProcessor.DataStreamValueSource.DATASET_VALUE_SOURCE;
import static org.elasticsearch.ingest.common.RerouteProcessor.DataStreamValueSource.NAMESPACE_VALUE_SOURCE;
import static org.elasticsearch.ingest.common.RerouteProcessor.DataStreamValueSource.TYPE_VALUE_SOURCE;
public final class RerouteProcessor extends AbstractProcessor {
@ -39,6 +40,7 @@ public final class RerouteProcessor extends AbstractProcessor {
private static final String DATA_STREAM_DATASET = DATA_STREAM_PREFIX + "dataset";
private static final String DATA_STREAM_NAMESPACE = DATA_STREAM_PREFIX + "namespace";
private static final String EVENT_DATASET = "event.dataset";
private final List<DataStreamValueSource> type;
private final List<DataStreamValueSource> dataset;
private final List<DataStreamValueSource> namespace;
private final String destination;
@ -46,11 +48,17 @@ public final class RerouteProcessor extends AbstractProcessor {
RerouteProcessor(
String tag,
String description,
List<DataStreamValueSource> type,
List<DataStreamValueSource> dataset,
List<DataStreamValueSource> namespace,
String destination
) {
super(tag, description);
if (type.isEmpty()) {
this.type = List.of(TYPE_VALUE_SOURCE);
} else {
this.type = type;
}
if (dataset.isEmpty()) {
this.dataset = List.of(DATASET_VALUE_SOURCE);
} else {
@ -71,7 +79,7 @@ public final class RerouteProcessor extends AbstractProcessor {
return ingestDocument;
}
final String indexName = ingestDocument.getFieldValue(IngestDocument.Metadata.INDEX.getFieldName(), String.class);
final String type;
final String currentType;
final String currentDataset;
final String currentNamespace;
@ -84,10 +92,11 @@ public final class RerouteProcessor extends AbstractProcessor {
if (indexOfSecondDash < 0) {
throw new IllegalArgumentException(format(NAMING_SCHEME_ERROR_MESSAGE, indexName));
}
type = parseDataStreamType(indexName, indexOfFirstDash);
currentType = parseDataStreamType(indexName, indexOfFirstDash);
currentDataset = parseDataStreamDataset(indexName, indexOfFirstDash, indexOfSecondDash);
currentNamespace = parseDataStreamNamespace(indexName, indexOfSecondDash);
String type = determineDataStreamField(ingestDocument, this.type, currentType);
String dataset = determineDataStreamField(ingestDocument, this.dataset, currentDataset);
String namespace = determineDataStreamField(ingestDocument, this.namespace, currentNamespace);
String newTarget = type + "-" + dataset + "-" + namespace;
@ -168,6 +177,15 @@ public final class RerouteProcessor extends AbstractProcessor {
String description,
Map<String, Object> config
) throws Exception {
List<DataStreamValueSource> type;
try {
type = ConfigurationUtils.readOptionalListOrString(TYPE, tag, config, "type")
.stream()
.map(DataStreamValueSource::type)
.toList();
} catch (IllegalArgumentException e) {
throw newConfigurationException(TYPE, tag, "type", e.getMessage());
}
List<DataStreamValueSource> dataset;
try {
dataset = ConfigurationUtils.readOptionalListOrString(TYPE, tag, config, "dataset")
@ -188,11 +206,11 @@ public final class RerouteProcessor extends AbstractProcessor {
}
String destination = ConfigurationUtils.readOptionalStringProperty(TYPE, tag, config, "destination");
if (destination != null && (dataset.isEmpty() == false || namespace.isEmpty() == false)) {
throw newConfigurationException(TYPE, tag, "destination", "can only be set if dataset and namespace are not set");
if (destination != null && (type.isEmpty() == false || dataset.isEmpty() == false || namespace.isEmpty() == false)) {
throw newConfigurationException(TYPE, tag, "destination", "can only be set if type, dataset, and namespace are not set");
}
return new RerouteProcessor(tag, description, dataset, namespace, destination);
return new RerouteProcessor(tag, description, type, dataset, namespace, destination);
}
}
@ -203,8 +221,10 @@ public final class RerouteProcessor extends AbstractProcessor {
private static final int MAX_LENGTH = 100;
private static final String REPLACEMENT = "_";
private static final Pattern DISALLOWED_IN_TYPE = Pattern.compile("[\\\\/*?\"<>| ,#:-]");
private static final Pattern DISALLOWED_IN_DATASET = Pattern.compile("[\\\\/*?\"<>| ,#:-]");
private static final Pattern DISALLOWED_IN_NAMESPACE = Pattern.compile("[\\\\/*?\"<>| ,#:]");
static final DataStreamValueSource TYPE_VALUE_SOURCE = type("{{" + DATA_STREAM_TYPE + "}}");
static final DataStreamValueSource DATASET_VALUE_SOURCE = dataset("{{" + DATA_STREAM_DATASET + "}}");
static final DataStreamValueSource NAMESPACE_VALUE_SOURCE = namespace("{{" + DATA_STREAM_NAMESPACE + "}}");
@ -212,6 +232,10 @@ public final class RerouteProcessor extends AbstractProcessor {
private final String fieldReference;
private final Function<String, String> sanitizer;
public static DataStreamValueSource type(String type) {
return new DataStreamValueSource(type, ds -> sanitizeDataStreamField(ds, DISALLOWED_IN_TYPE));
}
public static DataStreamValueSource dataset(String dataset) {
return new DataStreamValueSource(dataset, ds -> sanitizeDataStreamField(ds, DISALLOWED_IN_DATASET));
}

View file

@ -47,7 +47,7 @@ public class RerouteProcessorFactoryTests extends ESTestCase {
ElasticsearchParseException.class,
() -> create(Map.of("destination", "foo", "dataset", "bar"))
);
assertThat(e.getMessage(), equalTo("[destination] can only be set if dataset and namespace are not set"));
assertThat(e.getMessage(), equalTo("[destination] can only be set if type, dataset, and namespace are not set"));
}
public void testFieldReference() throws Exception {

View file

@ -27,16 +27,25 @@ public class RerouteProcessorTests extends ESTestCase {
public void testDefaults() throws Exception {
IngestDocument ingestDocument = createIngestDocument("logs-generic-default");
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of());
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of(), List.of());
processor.execute(ingestDocument);
assertDataSetFields(ingestDocument, "logs", "generic", "default");
}
public void testRouteOnType() throws Exception {
IngestDocument ingestDocument = createIngestDocument("logs-generic-default");
ingestDocument.setFieldValue("event.type", "foo");
RerouteProcessor processor = createRerouteProcessor(List.of("{{event.type}}"), List.of(), List.of());
processor.execute(ingestDocument);
assertDataSetFields(ingestDocument, "foo", "generic", "default");
}
public void testEventDataset() throws Exception {
IngestDocument ingestDocument = createIngestDocument("logs-generic-default");
ingestDocument.setFieldValue("event.dataset", "foo");
RerouteProcessor processor = createRerouteProcessor(List.of("{{event.dataset }}"), List.of());
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of("{{event.dataset }}"), List.of());
processor.execute(ingestDocument);
assertDataSetFields(ingestDocument, "logs", "foo", "default");
assertThat(ingestDocument.getFieldValue("event.dataset", String.class), equalTo("foo"));
@ -46,7 +55,7 @@ public class RerouteProcessorTests extends ESTestCase {
IngestDocument ingestDocument = createIngestDocument("logs-generic-default");
ingestDocument.getCtxMap().put("event.dataset", "foo");
RerouteProcessor processor = createRerouteProcessor(List.of("{{ event.dataset}}"), List.of());
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of("{{ event.dataset}}"), List.of());
processor.execute(ingestDocument);
assertDataSetFields(ingestDocument, "logs", "foo", "default");
assertThat(ingestDocument.getCtxMap().get("event.dataset"), equalTo("foo"));
@ -57,7 +66,7 @@ public class RerouteProcessorTests extends ESTestCase {
IngestDocument ingestDocument = createIngestDocument("logs-generic-default");
ingestDocument.setFieldValue("ds", "foo");
RerouteProcessor processor = createRerouteProcessor(List.of("{{ ds }}"), List.of());
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of("{{ ds }}"), List.of());
processor.execute(ingestDocument);
assertDataSetFields(ingestDocument, "logs", "foo", "default");
assertFalse(ingestDocument.hasField("event.dataset"));
@ -66,8 +75,8 @@ public class RerouteProcessorTests extends ESTestCase {
public void testSkipFirstProcessor() throws Exception {
IngestDocument ingestDocument = createIngestDocument("logs-generic-default");
RerouteProcessor skippedProcessor = createRerouteProcessor(List.of("skip"), List.of());
RerouteProcessor executedProcessor = createRerouteProcessor(List.of("executed"), List.of());
RerouteProcessor skippedProcessor = createRerouteProcessor(List.of(), List.of("skip"), List.of());
RerouteProcessor executedProcessor = createRerouteProcessor(List.of(), List.of("executed"), List.of());
CompoundProcessor processor = new CompoundProcessor(new SkipProcessor(skippedProcessor), executedProcessor);
processor.execute(ingestDocument);
assertDataSetFields(ingestDocument, "logs", "executed", "default");
@ -76,8 +85,8 @@ public class RerouteProcessorTests extends ESTestCase {
public void testSkipLastProcessor() throws Exception {
IngestDocument ingestDocument = createIngestDocument("logs-generic-default");
RerouteProcessor executedProcessor = createRerouteProcessor(List.of("executed"), List.of());
RerouteProcessor skippedProcessor = createRerouteProcessor(List.of("skip"), List.of());
RerouteProcessor executedProcessor = createRerouteProcessor(List.of(), List.of("executed"), List.of());
RerouteProcessor skippedProcessor = createRerouteProcessor(List.of(), List.of("skip"), List.of());
CompoundProcessor processor = new CompoundProcessor(executedProcessor, skippedProcessor);
processor.execute(ingestDocument);
assertDataSetFields(ingestDocument, "logs", "executed", "default");
@ -85,23 +94,24 @@ public class RerouteProcessorTests extends ESTestCase {
public void testDataStreamFieldsFromDocument() throws Exception {
IngestDocument ingestDocument = createIngestDocument("logs-generic-default");
ingestDocument.setFieldValue("data_stream.type", "eggplant");
ingestDocument.setFieldValue("data_stream.dataset", "foo");
ingestDocument.setFieldValue("data_stream.namespace", "bar");
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of());
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of(), List.of());
processor.execute(ingestDocument);
assertDataSetFields(ingestDocument, "logs", "foo", "bar");
assertDataSetFields(ingestDocument, "eggplant", "foo", "bar");
}
public void testDataStreamFieldsFromDocumentDottedNotation() throws Exception {
IngestDocument ingestDocument = createIngestDocument("logs-generic-default");
ingestDocument.getCtxMap().put("data_stream.type", "logs");
ingestDocument.getCtxMap().put("data_stream.type", "eggplant");
ingestDocument.getCtxMap().put("data_stream.dataset", "foo");
ingestDocument.getCtxMap().put("data_stream.namespace", "bar");
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of());
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of(), List.of());
processor.execute(ingestDocument);
assertDataSetFields(ingestDocument, "logs", "foo", "bar");
assertDataSetFields(ingestDocument, "eggplant", "foo", "bar");
}
public void testInvalidDataStreamFieldsFromDocument() throws Exception {
@ -109,7 +119,7 @@ public class RerouteProcessorTests extends ESTestCase {
ingestDocument.setFieldValue("data_stream.dataset", "foo-bar");
ingestDocument.setFieldValue("data_stream.namespace", "baz#qux");
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of());
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of(), List.of());
processor.execute(ingestDocument);
assertDataSetFields(ingestDocument, "logs", "foo_bar", "baz_qux");
}
@ -128,7 +138,7 @@ public class RerouteProcessorTests extends ESTestCase {
ingestDocument.setFieldValue("service.name", "opbeans-java");
ingestDocument.setFieldValue("service.environment", "dev");
RerouteProcessor processor = createRerouteProcessor(List.of("{{service.name}}"), List.of("{{service.environment}}"));
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of("{{service.name}}"), List.of("{{service.environment}}"));
processor.execute(ingestDocument);
assertDataSetFields(ingestDocument, "logs", "opbeans_java", "dev");
}
@ -136,7 +146,7 @@ public class RerouteProcessorTests extends ESTestCase {
public void testRerouteToCurrentTarget() throws Exception {
IngestDocument ingestDocument = createIngestDocument("logs-generic-default");
RerouteProcessor reroute = createRerouteProcessor(List.of("generic"), List.of("default"));
RerouteProcessor reroute = createRerouteProcessor(List.of(), List.of("generic"), List.of("default"));
CompoundProcessor processor = new CompoundProcessor(
reroute,
new TestProcessor(doc -> doc.setFieldValue("pipeline_is_continued", true))
@ -149,7 +159,7 @@ public class RerouteProcessorTests extends ESTestCase {
public void testFieldReferenceWithMissingReroutesToCurrentTarget() throws Exception {
IngestDocument ingestDocument = createIngestDocument("logs-generic-default");
RerouteProcessor reroute = createRerouteProcessor(List.of("{{service.name}}"), List.of("{{service.environment}}"));
RerouteProcessor reroute = createRerouteProcessor(List.of(), List.of("{{service.name}}"), List.of("{{service.environment}}"));
CompoundProcessor processor = new CompoundProcessor(
reroute,
new TestProcessor(doc -> doc.setFieldValue("pipeline_is_continued", true))
@ -166,6 +176,7 @@ public class RerouteProcessorTests extends ESTestCase {
ingestDocument.setFieldValue("data_stream.namespace", "namespace_from_doc");
RerouteProcessor processor = createRerouteProcessor(
List.of(),
List.of("{{{data_stream.dataset}}}", "fallback"),
List.of("{{data_stream.namespace}}", "fallback")
);
@ -177,6 +188,7 @@ public class RerouteProcessorTests extends ESTestCase {
IngestDocument ingestDocument = createIngestDocument("logs-generic-default");
RerouteProcessor processor = createRerouteProcessor(
List.of(),
List.of("{{data_stream.dataset}}", "fallback"),
List.of("{{data_stream.namespace}}", "fallback")
);
@ -190,6 +202,7 @@ public class RerouteProcessorTests extends ESTestCase {
ingestDocument.setFieldValue("data_stream.namespace", "default");
RerouteProcessor processor = createRerouteProcessor(
List.of(),
List.of("{{data_stream.dataset}}", "fallback"),
List.of("{{{data_stream.namespace}}}", "fallback")
);
@ -202,7 +215,7 @@ public class RerouteProcessorTests extends ESTestCase {
ingestDocument.setFieldValue("data_stream.dataset", "foo");
ingestDocument.setFieldValue("data_stream.namespace", "bar");
RerouteProcessor processor = createRerouteProcessor(List.of("{{foo}}"), List.of("{{bar}}"));
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of("{{foo}}"), List.of("{{bar}}"));
processor.execute(ingestDocument);
assertDataSetFields(ingestDocument, "logs", "generic", "default");
}
@ -210,7 +223,7 @@ public class RerouteProcessorTests extends ESTestCase {
public void testInvalidDataStreamName() throws Exception {
{
IngestDocument ingestDocument = createIngestDocument("foo");
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of());
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of(), List.of());
IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> processor.execute(ingestDocument));
assertThat(e.getMessage(), equalTo("invalid data stream name: [foo]; must follow naming scheme <type>-<dataset>-<namespace>"));
}
@ -227,11 +240,16 @@ public class RerouteProcessorTests extends ESTestCase {
public void testRouteOnNonStringFieldFails() {
IngestDocument ingestDocument = createIngestDocument("logs-generic-default");
ingestDocument.setFieldValue("numeric_field", 42);
RerouteProcessor processor = createRerouteProcessor(List.of("{{numeric_field}}"), List.of());
RerouteProcessor processor = createRerouteProcessor(List.of(), List.of("{{numeric_field}}"), List.of());
IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> processor.execute(ingestDocument));
assertThat(e.getMessage(), equalTo("field [numeric_field] of type [java.lang.Integer] cannot be cast to [java.lang.String]"));
}
public void testTypeSanitization() {
assertTypeSanitization("\\/*?\"<>| ,#:-", "_____________");
assertTypeSanitization("foo*bar", "foo_bar");
}
public void testDatasetSanitization() {
assertDatasetSanitization("\\/*?\"<>| ,#:-", "_____________");
assertDatasetSanitization("foo*bar", "foo_bar");
@ -242,6 +260,14 @@ public class RerouteProcessorTests extends ESTestCase {
assertNamespaceSanitization("foo*bar", "foo_bar");
}
private static void assertTypeSanitization(String type, String sanitizedType) {
assertThat(
RerouteProcessor.DataStreamValueSource.type("{{foo}}")
.resolve(RandomDocumentPicks.randomIngestDocument(random(), Map.of("foo", type))),
equalTo(sanitizedType)
);
}
private static void assertDatasetSanitization(String dataset, String sanitizedDataset) {
assertThat(
RerouteProcessor.DataStreamValueSource.dataset("{{foo}}")
@ -258,10 +284,11 @@ public class RerouteProcessorTests extends ESTestCase {
);
}
private RerouteProcessor createRerouteProcessor(List<String> dataset, List<String> namespace) {
private RerouteProcessor createRerouteProcessor(List<String> type, List<String> dataset, List<String> namespace) {
return new RerouteProcessor(
null,
null,
type.stream().map(RerouteProcessor.DataStreamValueSource::type).toList(),
dataset.stream().map(RerouteProcessor.DataStreamValueSource::dataset).toList(),
namespace.stream().map(RerouteProcessor.DataStreamValueSource::namespace).toList(),
null
@ -269,7 +296,7 @@ public class RerouteProcessorTests extends ESTestCase {
}
private RerouteProcessor createRerouteProcessor(String destination) {
return new RerouteProcessor(null, null, List.of(), List.of(), destination);
return new RerouteProcessor(null, null, List.of(), List.of(), List.of(), destination);
}
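For reference, a minimal sketch of the new capability these tests exercise: building a processor that routes on a templated type. It uses only the constructor shape visible in this diff; the first two null arguments are assumed to be tag and description.

RerouteProcessor processor = new RerouteProcessor(
    null, // tag (assumed)
    null, // description (assumed)
    List.of(RerouteProcessor.DataStreamValueSource.type("{{data_stream.type}}")),
    List.of(RerouteProcessor.DataStreamValueSource.dataset("{{data_stream.dataset}}")),
    List.of(RerouteProcessor.DataStreamValueSource.namespace("{{data_stream.namespace}}")),
    null // destination
);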
private void assertDataSetFields(IngestDocument ingestDocument, String type, String dataset, String namespace) {

View file

@ -0,0 +1,5 @@
com.maxmind.db:
- files:
- relative_path: "ingest-geoip/"
relative_to: "config"
mode: "read_write"

View file

@ -33,6 +33,7 @@ import org.elasticsearch.index.mapper.BlockDocValuesReader;
import org.elasticsearch.index.mapper.BlockLoader;
import org.elasticsearch.index.mapper.BlockSourceReader;
import org.elasticsearch.index.mapper.DocumentParserContext;
import org.elasticsearch.index.mapper.FallbackSyntheticSourceBlockLoader;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.IgnoreMalformedStoredValues;
import org.elasticsearch.index.mapper.MapperBuilderContext;
@ -195,7 +196,9 @@ public class ScaledFloatFieldMapper extends FieldMapper {
scalingFactor.getValue(),
nullValue.getValue(),
metric.getValue(),
indexMode
indexMode,
coerce.getValue().value(),
context.isSourceSynthetic()
);
return new ScaledFloatFieldMapper(leafName(), type, builderParams(this, context), context.isSourceSynthetic(), this);
}
@ -209,6 +212,8 @@ public class ScaledFloatFieldMapper extends FieldMapper {
private final Double nullValue;
private final TimeSeriesParams.MetricType metricType;
private final IndexMode indexMode;
private final boolean coerce;
private final boolean isSyntheticSource;
public ScaledFloatFieldType(
String name,
@ -219,13 +224,17 @@ public class ScaledFloatFieldMapper extends FieldMapper {
double scalingFactor,
Double nullValue,
TimeSeriesParams.MetricType metricType,
IndexMode indexMode
IndexMode indexMode,
boolean coerce,
boolean isSyntheticSource
) {
super(name, indexed, stored, hasDocValues, TextSearchInfo.SIMPLE_MATCH_WITHOUT_TERMS, meta);
this.scalingFactor = scalingFactor;
this.nullValue = nullValue;
this.metricType = metricType;
this.indexMode = indexMode;
this.coerce = coerce;
this.isSyntheticSource = isSyntheticSource;
}
public ScaledFloatFieldType(String name, double scalingFactor) {
@ -233,7 +242,7 @@ public class ScaledFloatFieldMapper extends FieldMapper {
}
public ScaledFloatFieldType(String name, double scalingFactor, boolean indexed) {
this(name, indexed, false, true, Collections.emptyMap(), scalingFactor, null, null, null);
this(name, indexed, false, true, Collections.emptyMap(), scalingFactor, null, null, null, false, false);
}
public double getScalingFactor() {
@ -315,6 +324,15 @@ public class ScaledFloatFieldMapper extends FieldMapper {
double scalingFactorInverse = 1d / scalingFactor;
return new BlockDocValuesReader.DoublesBlockLoader(name(), l -> l * scalingFactorInverse);
}
if (isSyntheticSource) {
return new FallbackSyntheticSourceBlockLoader(fallbackSyntheticSourceBlockLoaderReader(), name()) {
@Override
public Builder builder(BlockFactory factory, int expectedCount) {
return factory.doubles(expectedCount);
}
};
}
ValueFetcher valueFetcher = sourceValueFetcher(blContext.sourcePaths(name()));
BlockSourceReader.LeafIteratorLookup lookup = isStored() || isIndexed()
? BlockSourceReader.lookupFromFieldNames(blContext.fieldNames(), name())
@ -322,6 +340,57 @@ public class ScaledFloatFieldMapper extends FieldMapper {
return new BlockSourceReader.DoublesBlockLoader(valueFetcher, lookup);
}
private FallbackSyntheticSourceBlockLoader.Reader<?> fallbackSyntheticSourceBlockLoaderReader() {
var nullValueAdjusted = nullValue != null ? adjustSourceValue(nullValue, scalingFactor) : null;
return new FallbackSyntheticSourceBlockLoader.ReaderWithNullValueSupport<>(nullValue) {
@Override
public void convertValue(Object value, List<Double> accumulator) {
if (coerce && value.equals("")) {
if (nullValueAdjusted != null) {
accumulator.add(nullValueAdjusted);
}
}
try {
// Convert to doc_values format
var converted = adjustSourceValue(NumberFieldMapper.NumberType.objectToDouble(value), scalingFactor);
accumulator.add(converted);
} catch (Exception e) {
// Malformed value, skip it
}
}
@Override
protected void parseNonNullValue(XContentParser parser, List<Double> accumulator) throws IOException {
// Aligned with implementation of `parseCreateField(XContentParser)`
if (coerce && parser.currentToken() == XContentParser.Token.VALUE_STRING && parser.textLength() == 0) {
if (nullValueAdjusted != null) {
accumulator.add(nullValueAdjusted);
}
}
try {
double value = parser.doubleValue(coerce);
// Convert to doc_values format
var converted = adjustSourceValue(value, scalingFactor);
accumulator.add(converted);
} catch (Exception e) {
// Malformed value, skip it
}
}
@Override
public void writeToBlock(List<Double> values, BlockLoader.Builder blockBuilder) {
var doubleBuilder = (BlockLoader.DoubleBuilder) blockBuilder;
for (var value : values) {
doubleBuilder.appendDouble(value);
}
}
};
}
@Override
public IndexFieldData.Builder fielddataBuilder(FieldDataContext fieldDataContext) {
FielddataOperation operation = fieldDataContext.fielddataOperation();
@ -386,12 +455,16 @@ public class ScaledFloatFieldMapper extends FieldMapper {
doubleValue = NumberFieldMapper.NumberType.objectToDouble(value);
}
double factor = getScalingFactor();
return Math.round(doubleValue * factor) / factor;
return adjustSourceValue(doubleValue, getScalingFactor());
}
};
}
// Adjusts precision of a double value so that it looks like it came from doc_values.
private static Double adjustSourceValue(double value, double scalingFactor) {
return Math.round(value * scalingFactor) / scalingFactor;
}
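A worked example of the precision adjustment above, with hypothetical values: given a scaling_factor of 100, doc_values store round(0.1234 * 100) = 12 as a long, which decodes back to 12 / 100.0 = 0.12. Applying the same rounding to source values keeps both read paths consistent.

double scalingFactor = 100.0; // hypothetical mapping: "scaling_factor": 100
double sourceValue = 0.1234;
double adjusted = Math.round(sourceValue * scalingFactor) / scalingFactor; // 0.12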
@Override
public Object valueForDisplay(Object value) {
if (value == null) {

View file

@ -0,0 +1,48 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the "Elastic License
* 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
* Public License v 1"; you may not use this file except in compliance with, at
* your election, the "Elastic License 2.0", the "GNU Affero General Public
* License v3.0 only", or the "Server Side Public License, v 1".
*/
package org.elasticsearch.index.mapper.extras;
import org.elasticsearch.index.mapper.NumberFieldBlockLoaderTestCase;
import org.elasticsearch.logsdb.datageneration.FieldType;
import org.elasticsearch.plugins.Plugin;
import java.util.Collection;
import java.util.List;
import java.util.Map;
public class ScaledFloatFieldBlockLoaderTests extends NumberFieldBlockLoaderTestCase<Double> {
public ScaledFloatFieldBlockLoaderTests() {
super(FieldType.SCALED_FLOAT);
}
@Override
protected Double convert(Number value, Map<String, Object> fieldMapping) {
var scalingFactor = ((Number) fieldMapping.get("scaling_factor")).doubleValue();
var docValues = (boolean) fieldMapping.getOrDefault("doc_values", false);
// There is a slight inconsistency between values that are read from doc_values and from source,
// due to how precision reduction is applied to source values so that they are consistent with doc_values.
// See #122547.
if (docValues) {
var reverseScalingFactor = 1d / scalingFactor;
return Math.round(value.doubleValue() * scalingFactor) * reverseScalingFactor;
}
// Adjust values coming from source to the way they are stored in doc_values.
// See mapper implementation.
return Math.round(value.doubleValue() * scalingFactor) / scalingFactor;
}
@Override
protected Collection<? extends Plugin> getPlugins() {
return List.of(new MapperExtrasPlugin());
}
}

View file

@ -95,7 +95,9 @@ public class ScaledFloatFieldTypeTests extends FieldTypeTestCase {
0.1 + randomDouble() * 100,
null,
null,
null
null,
false,
false
);
Directory dir = newDirectory();
IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(null));

View file

@ -63,20 +63,20 @@ dependencies {
api "com.github.stephenc.jcip:jcip-annotations:1.0-1"
api "com.nimbusds:content-type:2.3"
api "com.nimbusds:lang-tag:1.7"
api("com.nimbusds:nimbus-jose-jwt:9.37.3"){
api("com.nimbusds:nimbus-jose-jwt:10.0.1"){
exclude group: 'com.google.crypto.tink', module: 'tink' // it's an optional dependency on which we don't rely
}
api("com.nimbusds:oauth2-oidc-sdk:11.9.1"){
api("com.nimbusds:oauth2-oidc-sdk:11.22.2"){
exclude group: 'com.google.crypto.tink', module: 'tink' // it's an optional dependency on which we don't rely
}
api "jakarta.activation:jakarta.activation-api:1.2.1"
api "jakarta.xml.bind:jakarta.xml.bind-api:2.3.3"
api "net.java.dev.jna:jna-platform:${versions.jna}" // Maven says 5.14.0 but this aligns with the Elasticsearch-wide version
api "net.java.dev.jna:jna:${versions.jna}" // Maven says 5.14.0 but this aligns with the Elasticsearch-wide version
api "net.minidev:accessors-smart:2.5.0"
api "net.minidev:json-smart:2.5.0"
api "net.minidev:accessors-smart:2.5.2"
api "net.minidev:json-smart:2.5.2"
api "org.codehaus.woodstox:stax2-api:4.2.2"
api "org.ow2.asm:asm:9.3"
api "org.ow2.asm:asm:9.7.1"
runtimeOnly "com.google.code.gson:gson:2.11.0"
runtimeOnly "org.cryptomator:siv-mode:1.5.2"
@ -190,11 +190,6 @@ tasks.named("thirdPartyAudit").configure {
'org.bouncycastle.cert.X509CertificateHolder',
'org.bouncycastle.cert.jcajce.JcaX509CertificateHolder',
'org.bouncycastle.cert.jcajce.JcaX509v3CertificateBuilder',
'org.bouncycastle.crypto.InvalidCipherTextException',
'org.bouncycastle.crypto.engines.AESEngine',
'org.bouncycastle.crypto.modes.GCMBlockCipher',
'org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider',
'org.bouncycastle.jce.provider.BouncyCastleProvider',
'org.bouncycastle.openssl.PEMKeyPair',
'org.bouncycastle.openssl.PEMParser',
'org.bouncycastle.openssl.jcajce.JcaPEMKeyConverter',

View file

@ -899,11 +899,10 @@ class S3BlobContainer extends AbstractBlobContainer {
logger.trace(() -> Strings.format("[%s]: compareAndExchangeRegister failed", key), e);
if (e instanceof AmazonS3Exception amazonS3Exception
&& (amazonS3Exception.getStatusCode() == 404
|| amazonS3Exception.getStatusCode() == 0 && "NoSuchUpload".equals(amazonS3Exception.getErrorCode()))) {
|| amazonS3Exception.getStatusCode() == 200 && "NoSuchUpload".equals(amazonS3Exception.getErrorCode()))) {
// An uncaught 404 means that our multipart upload was aborted by a concurrent operation before we could complete it.
// Also (rarely) S3 can start processing the request during a concurrent abort and this can result in a 200 OK with an
// <Error><Code>NoSuchUpload</Code>... in the response, which the SDK translates to status code 0. Either way, this means
// that our write encountered contention:
// <Error><Code>NoSuchUpload</Code>... in the response. Either way, this means that our write encountered contention:
delegate.onResponse(OptionalBytesReference.MISSING);
} else {
delegate.onFailure(e);
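A hedged sketch of the contention check described in the comment above (the helper name is hypothetical, not part of the AWS SDK): both response shapes mean the multipart upload was removed by a concurrent abort.

// Treat a plain 404, or a 200 OK whose body carries <Error><Code>NoSuchUpload</Code>,
// as the upload having lost the race to a concurrent abort.
static boolean uploadLostToConcurrentAbort(int statusCode, String errorCode) {
    return statusCode == 404 || (statusCode == 200 && "NoSuchUpload".equals(errorCode));
}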

View file

@ -1,3 +1,7 @@
ALL-UNNAMED:
- manage_threads
- outbound_network
- files:
- relative_path: "repository-s3/aws-web-identity-token-file"
relative_to: "config"
mode: "read"

View file

@ -227,18 +227,6 @@ tests:
- class: org.elasticsearch.xpack.test.rest.XPackRestIT
method: test {p0=ml/*}
issue: https://github.com/elastic/elasticsearch/issues/120816
- class: org.elasticsearch.upgrades.VectorSearchIT
method: testBBQVectorSearch {upgradedNodes=0}
issue: https://github.com/elastic/elasticsearch/issues/121253
- class: org.elasticsearch.upgrades.VectorSearchIT
method: testBBQVectorSearch {upgradedNodes=1}
issue: https://github.com/elastic/elasticsearch/issues/121271
- class: org.elasticsearch.upgrades.VectorSearchIT
method: testBBQVectorSearch {upgradedNodes=2}
issue: https://github.com/elastic/elasticsearch/issues/121272
- class: org.elasticsearch.upgrades.VectorSearchIT
method: testBBQVectorSearch {upgradedNodes=3}
issue: https://github.com/elastic/elasticsearch/issues/121273
- class: org.elasticsearch.xpack.security.authc.ldap.ActiveDirectorySessionFactoryTests
issue: https://github.com/elastic/elasticsearch/issues/121285
- class: org.elasticsearch.smoketest.DocsClientYamlTestSuiteIT
@ -288,9 +276,6 @@ tests:
- class: org.elasticsearch.xpack.esql.action.CrossClusterAsyncQueryStopIT
method: testStopQueryLocal
issue: https://github.com/elastic/elasticsearch/issues/121672
- class: org.elasticsearch.xpack.security.authz.IndexAliasesTests
method: testRemoveIndex
issue: https://github.com/elastic/elasticsearch/issues/122221
- class: org.elasticsearch.blocks.SimpleBlocksIT
method: testConcurrentAddBlock
issue: https://github.com/elastic/elasticsearch/issues/122324
@ -312,8 +297,6 @@ tests:
issue: https://github.com/elastic/elasticsearch/issues/122377
- class: org.elasticsearch.repositories.blobstore.testkit.analyze.HdfsRepositoryAnalysisRestIT
issue: https://github.com/elastic/elasticsearch/issues/122378
- class: org.elasticsearch.telemetry.apm.ApmAgentSettingsIT
issue: https://github.com/elastic/elasticsearch/issues/122546
- class: org.elasticsearch.xpack.inference.mapper.SemanticInferenceMetadataFieldsRecoveryTests
method: testSnapshotRecovery {p0=false p1=false}
issue: https://github.com/elastic/elasticsearch/issues/122549
@ -332,71 +315,17 @@ tests:
- class: org.elasticsearch.xpack.autoscaling.storage.ReactiveStorageIT
method: testScaleWhileShrinking
issue: https://github.com/elastic/elasticsearch/issues/122119
- class: org.elasticsearch.lucene.RollingUpgradeLuceneIndexCompatibilityTestCase
method: testIndexUpgrade {p0=[9.1.0, 8.19.0, 8.19.0]}
issue: https://github.com/elastic/elasticsearch/issues/122688
- class: org.elasticsearch.lucene.RollingUpgradeLuceneIndexCompatibilityTestCase
method: testRestoreIndex {p0=[9.1.0, 9.1.0, 8.19.0]}
issue: https://github.com/elastic/elasticsearch/issues/122689
- class: org.elasticsearch.lucene.RollingUpgradeLuceneIndexCompatibilityTestCase
method: testClosedIndexUpgrade {p0=[9.1.0, 8.19.0, 8.19.0]}
issue: https://github.com/elastic/elasticsearch/issues/122690
- class: org.elasticsearch.lucene.RollingUpgradeLuceneIndexCompatibilityTestCase
method: testRestoreIndex {p0=[9.1.0, 8.19.0, 8.19.0]}
issue: https://github.com/elastic/elasticsearch/issues/122691
- class: org.elasticsearch.xpack.searchablesnapshots.FrozenSearchableSnapshotsIntegTests
method: testCreateAndRestorePartialSearchableSnapshot
issue: https://github.com/elastic/elasticsearch/issues/122693
- class: org.elasticsearch.lucene.RollingUpgradeLuceneIndexCompatibilityTestCase
method: testClosedIndexUpgrade {p0=[9.1.0, 9.1.0, 8.19.0]}
issue: https://github.com/elastic/elasticsearch/issues/122694
- class: org.elasticsearch.lucene.RollingUpgradeLuceneIndexCompatibilityTestCase
method: testClosedIndexUpgrade {p0=[9.1.0, 9.1.0, 9.1.0]}
issue: https://github.com/elastic/elasticsearch/issues/122695
- class: org.elasticsearch.lucene.RollingUpgradeLuceneIndexCompatibilityTestCase
method: testIndexUpgrade {p0=[9.1.0, 9.1.0, 8.19.0]}
issue: https://github.com/elastic/elasticsearch/issues/122696
- class: org.elasticsearch.lucene.RollingUpgradeLuceneIndexCompatibilityTestCase
method: testIndexUpgrade {p0=[9.1.0, 9.1.0, 9.1.0]}
issue: https://github.com/elastic/elasticsearch/issues/122697
- class: org.elasticsearch.lucene.RollingUpgradeLuceneIndexCompatibilityTestCase
method: testRestoreIndex {p0=[9.1.0, 9.1.0, 9.1.0]}
issue: https://github.com/elastic/elasticsearch/issues/122698
- class: org.elasticsearch.lucene.RollingUpgradeSearchableSnapshotIndexCompatibilityIT
method: testSearchableSnapshotUpgrade {p0=[9.1.0, 8.19.0, 8.19.0]}
issue: https://github.com/elastic/elasticsearch/issues/122700
- class: org.elasticsearch.lucene.RollingUpgradeSearchableSnapshotIndexCompatibilityIT
method: testSearchableSnapshotUpgrade {p0=[9.1.0, 9.1.0, 8.19.0]}
issue: https://github.com/elastic/elasticsearch/issues/122701
- class: org.elasticsearch.lucene.RollingUpgradeSearchableSnapshotIndexCompatibilityIT
method: testMountSearchableSnapshot {p0=[9.1.0, 8.19.0, 8.19.0]}
issue: https://github.com/elastic/elasticsearch/issues/122702
- class: org.elasticsearch.lucene.RollingUpgradeSearchableSnapshotIndexCompatibilityIT
method: testMountSearchableSnapshot {p0=[9.1.0, 9.1.0, 8.19.0]}
issue: https://github.com/elastic/elasticsearch/issues/122703
- class: org.elasticsearch.lucene.RollingUpgradeSearchableSnapshotIndexCompatibilityIT
method: testSearchableSnapshotUpgrade {p0=[9.1.0, 9.1.0, 9.1.0]}
issue: https://github.com/elastic/elasticsearch/issues/122704
- class: org.elasticsearch.lucene.RollingUpgradeSearchableSnapshotIndexCompatibilityIT
method: testMountSearchableSnapshot {p0=[9.1.0, 9.1.0, 9.1.0]}
issue: https://github.com/elastic/elasticsearch/issues/122705
- class: org.elasticsearch.search.basic.SearchWithRandomDisconnectsIT
method: testSearchWithRandomDisconnects
issue: https://github.com/elastic/elasticsearch/issues/122707
- class: org.elasticsearch.indices.recovery.IndexRecoveryIT
method: testSourceThrottling
issue: https://github.com/elastic/elasticsearch/issues/122712
- class: org.elasticsearch.entitlement.qa.EntitlementsDeniedNonModularIT
issue: https://github.com/elastic/elasticsearch/issues/122569
- class: org.elasticsearch.entitlement.qa.EntitlementsAllowedNonModularIT
issue: https://github.com/elastic/elasticsearch/issues/122568
- class: org.elasticsearch.entitlement.qa.EntitlementsAllowedIT
issue: https://github.com/elastic/elasticsearch/issues/122680
- class: org.elasticsearch.entitlement.qa.EntitlementsDeniedIT
issue: https://github.com/elastic/elasticsearch/issues/122566
- class: org.elasticsearch.repositories.blobstore.testkit.analyze.S3RepositoryAnalysisRestIT
method: testRepositoryAnalysis
issue: https://github.com/elastic/elasticsearch/issues/122799
- class: org.elasticsearch.xpack.esql.action.EsqlActionBreakerIT
issue: https://github.com/elastic/elasticsearch/issues/122810
- class: org.elasticsearch.snapshots.DedicatedClusterSnapshotRestoreIT
method: testRestoreShrinkIndex
issue: https://github.com/elastic/elasticsearch/issues/121717
- class: org.elasticsearch.smoketest.DocsClientYamlTestSuiteIT
method: test {yaml=reference/cat/allocation/cat-allocation-example}
issue: https://github.com/elastic/elasticsearch/issues/121976
# Examples:
#

View file

@ -10,6 +10,7 @@
package org.elasticsearch.bootstrap;
import org.elasticsearch.core.SuppressForbidden;
import org.elasticsearch.jdk.RuntimeVersionFeature;
import org.elasticsearch.test.ESTestCase;
import org.junit.BeforeClass;
@ -42,6 +43,7 @@ public class ESPolicyUnitTests extends ESTestCase {
@BeforeClass
public static void setupPolicy() {
assumeTrue("test requires security manager to be supported", RuntimeVersionFeature.isSecurityManagerAvailable());
assumeTrue("test cannot run with security manager", System.getSecurityManager() == null);
DEFAULT_POLICY = PolicyUtil.readPolicy(ESPolicy.class.getResource(POLICY_RESOURCE), TEST_CODEBASES);
}

View file

@ -10,6 +10,7 @@
package org.elasticsearch.bootstrap;
import org.elasticsearch.core.SuppressForbidden;
import org.elasticsearch.jdk.RuntimeVersionFeature;
import org.elasticsearch.plugins.PluginDescriptor;
import org.elasticsearch.test.ESTestCase;
import org.junit.Before;
@ -40,6 +41,7 @@ public class PolicyUtilTests extends ESTestCase {
@Before
public void assumeSecurityManagerDisabled() {
assumeTrue("test requires security manager to be supported", RuntimeVersionFeature.isSecurityManagerAvailable());
assumeTrue("test cannot run with security manager enabled", System.getSecurityManager() == null);
}

View file

@ -11,8 +11,10 @@ package org.elasticsearch.plugins.cli;
import org.elasticsearch.bootstrap.PluginPolicyInfo;
import org.elasticsearch.bootstrap.PolicyUtil;
import org.elasticsearch.jdk.RuntimeVersionFeature;
import org.elasticsearch.plugins.PluginDescriptor;
import org.elasticsearch.test.ESTestCase;
import org.junit.Before;
import java.io.IOException;
import java.nio.file.Files;
@ -26,6 +28,11 @@ import static org.hamcrest.Matchers.containsInAnyOrder;
/** Tests plugin manager security check */
public class PluginSecurityTests extends ESTestCase {
@Before
public void assumeSecurityManagerSupported() {
assumeTrue("test requires security manager to be supported", RuntimeVersionFeature.isSecurityManagerAvailable());
}
PluginPolicyInfo makeDummyPlugin(String policy, String... files) throws IOException {
Path plugin = createTempDir();
Files.copy(this.getDataPath(policy), plugin.resolve(PluginDescriptor.ES_PLUGIN_POLICY));

View file

@ -13,6 +13,7 @@ import com.carrotsearch.randomizedtesting.annotations.Name;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.support.XContentMapValues;
@ -456,7 +457,11 @@ public class VectorSearchIT extends AbstractRollingUpgradeTestCase {
}
""";
// create index and index 10 random floating point vectors
createIndex(BBQ_INDEX_NAME, Settings.EMPTY, mapping);
createIndex(
BBQ_INDEX_NAME,
Settings.builder().put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 0).build(),
mapping
);
index64DimVectors(BBQ_INDEX_NAME);
// force merge the index
client().performRequest(new Request("POST", "/" + BBQ_INDEX_NAME + "/_forcemerge?max_num_segments=1"));
@ -485,8 +490,8 @@ public class VectorSearchIT extends AbstractRollingUpgradeTestCase {
Map<String, Object> response = search(searchRequest);
assertThat(extractValue(response, "hits.total.value"), equalTo(7));
List<Map<String, Object>> hits = extractValue(response, "hits.hits");
assertThat(hits.get(0).get("_id"), equalTo("0"));
assertThat((double) hits.get(0).get("_score"), closeTo(1.9869276, 0.0001));
assertThat("hits: " + response, hits.get(0).get("_id"), equalTo("0"));
assertThat("hits: " + response, (double) hits.get(0).get("_score"), closeTo(1.9869276, 0.0001));
// search with knn
searchRequest = new Request("POST", "/" + BBQ_INDEX_NAME + "/_search");
@ -504,8 +509,12 @@ public class VectorSearchIT extends AbstractRollingUpgradeTestCase {
response = search(searchRequest);
assertThat(extractValue(response, "hits.total.value"), equalTo(2));
hits = extractValue(response, "hits.hits");
assertThat(hits.get(0).get("_id"), equalTo("0"));
assertThat((double) hits.get(0).get("_score"), closeTo(0.9934857, 0.005));
assertThat("expected: 0 received" + hits.get(0).get("_id") + " hits: " + response, hits.get(0).get("_id"), equalTo("0"));
assertThat(
"expected_near: 0.99 received" + hits.get(0).get("_score") + "hits: " + response,
(double) hits.get(0).get("_score"),
closeTo(0.9934857, 0.005)
);
}
public void testFlatBBQVectorSearch() throws Exception {
@ -530,7 +539,11 @@ public class VectorSearchIT extends AbstractRollingUpgradeTestCase {
}
""";
// create index and index 10 random floating point vectors
createIndex(FLAT_BBQ_INDEX_NAME, Settings.EMPTY, mapping);
createIndex(
FLAT_BBQ_INDEX_NAME,
Settings.builder().put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 0).build(),
mapping
);
index64DimVectors(FLAT_BBQ_INDEX_NAME);
// force merge the index
client().performRequest(new Request("POST", "/" + FLAT_BBQ_INDEX_NAME + "/_forcemerge?max_num_segments=1"));
@ -559,8 +572,8 @@ public class VectorSearchIT extends AbstractRollingUpgradeTestCase {
Map<String, Object> response = search(searchRequest);
assertThat(extractValue(response, "hits.total.value"), equalTo(7));
List<Map<String, Object>> hits = extractValue(response, "hits.hits");
assertThat(hits.get(0).get("_id"), equalTo("0"));
assertThat((double) hits.get(0).get("_score"), closeTo(1.9869276, 0.0001));
assertThat("hits: " + response, hits.get(0).get("_id"), equalTo("0"));
assertThat("hits: " + response, (double) hits.get(0).get("_score"), closeTo(1.9869276, 0.0001));
// search with knn
searchRequest = new Request("POST", "/" + FLAT_BBQ_INDEX_NAME + "/_search");
@ -578,8 +591,12 @@ public class VectorSearchIT extends AbstractRollingUpgradeTestCase {
response = search(searchRequest);
assertThat(extractValue(response, "hits.total.value"), equalTo(2));
hits = extractValue(response, "hits.hits");
assertThat(hits.get(0).get("_id"), equalTo("0"));
assertThat((double) hits.get(0).get("_score"), closeTo(0.9934857, 0.005));
assertThat("expected: 0 received" + hits.get(0).get("_id") + " hits: " + response, hits.get(0).get("_id"), equalTo("0"));
assertThat(
"expected_near: 0.99 received" + hits.get(0).get("_score") + "hits: " + response,
(double) hits.get(0).get("_score"),
closeTo(0.9934857, 0.005)
);
}
private void index64DimVectors(String indexName) throws Exception {
@ -605,6 +622,7 @@ public class VectorSearchIT extends AbstractRollingUpgradeTestCase {
assertOK(client().performRequest(indexRequest));
}
// always refresh to ensure the data is visible
flush(indexName, true);
refresh(indexName);
}

View file

@ -342,9 +342,17 @@ public class IndexRecoveryIT extends AbstractIndexRecoveryIntegTestCase {
assertThat(recoveryStats.currentAsSource(), equalTo(0));
assertThat(recoveryStats.currentAsTarget(), equalTo(0));
if (isRecoveryThrottlingNode) {
assertThat("Throttling should be >0 for '" + nodeName + "'", recoveryStats.throttleTime().millis(), greaterThan(0L));
assertThat(
"Throttling should be >0 for '" + nodeName + "'. Node stats: " + nodesStatsResponse,
recoveryStats.throttleTime().millis(),
greaterThan(0L)
);
} else {
assertThat("Throttling should be =0 for '" + nodeName + "'", recoveryStats.throttleTime().millis(), equalTo(0L));
assertThat(
"Throttling should be =0 for '" + nodeName + "'. Node stats: " + nodesStatsResponse,
recoveryStats.throttleTime().millis(),
equalTo(0L)
);
}
}
@ -1967,7 +1975,14 @@ public class IndexRecoveryIT extends AbstractIndexRecoveryIntegTestCase {
internalCluster().startMasterOnlyNode();
final var dataNode = internalCluster().startDataOnlyNode();
final var indexName = randomIdentifier();
createIndex(indexName, indexSettings(1, 0).put(INDEX_MERGE_ENABLED, false).build());
final var indexSettingsBuilder = indexSettings(1, 0).put(INDEX_MERGE_ENABLED, false);
if (randomBoolean()) {
indexSettingsBuilder.put(
IndexMetadata.SETTING_VERSION_CREATED,
IndexVersionUtils.randomVersionBetween(random(), IndexVersions.UPGRADE_TO_LUCENE_10_0_0, IndexVersion.current())
);
}
createIndex(indexName, indexSettingsBuilder.build());
final var initialSegmentCount = 20;
for (int i = 0; i < initialSegmentCount; i++) {
@ -2051,7 +2066,7 @@ public class IndexRecoveryIT extends AbstractIndexRecoveryIntegTestCase {
IndexVersionUtils.randomVersionBetween(
random(),
IndexVersionUtils.getLowestWriteCompatibleVersion(),
IndexVersionUtils.getPreviousVersion(IndexVersions.MERGE_ON_RECOVERY_VERSION)
IndexVersionUtils.getPreviousVersion(IndexVersions.UPGRADE_TO_LUCENE_10_0_0)
)
)
.build()

View file

@ -127,10 +127,7 @@ public class RetrieverRewriteIT extends ESIntegTestCase {
SearchPhaseExecutionException.class,
client().prepareSearch(testIndex).setSource(source)::get
);
assertThat(
ex.getDetailedMessage(),
containsString("[open_point_in_time] action requires all shards to be available. Missing shards")
);
assertThat(ex.getDetailedMessage(), containsString("Search rejected due to missing shards"));
} finally {
internalCluster().restartNode(randomDataNode);
}

View file

@ -201,7 +201,7 @@ final class CanMatchPreFilterSearchPhase {
private void checkNoMissingShards(List<SearchShardIterator> shards) {
assert assertSearchCoordinationThread();
SearchPhase.doCheckNoMissingShards("can_match", request, shards, SearchPhase::makeMissingShardsError);
SearchPhase.doCheckNoMissingShards("can_match", request, shards);
}
private Map<SendingTarget, List<SearchShardIterator>> groupByNode(List<SearchShardIterator> shards) {

View file

@ -14,7 +14,6 @@ import org.elasticsearch.transport.Transport;
import java.util.List;
import java.util.Objects;
import java.util.function.Function;
/**
* Base class for all individual search phases like collecting distributed frequencies, fetching documents, querying shards.
@ -35,26 +34,13 @@ abstract class SearchPhase {
return name;
}
protected String missingShardsErrorMessage(StringBuilder missingShards) {
return makeMissingShardsError(missingShards);
}
protected static String makeMissingShardsError(StringBuilder missingShards) {
private static String makeMissingShardsError(StringBuilder missingShards) {
return "Search rejected due to missing shards ["
+ missingShards
+ "]. Consider using `allow_partial_search_results` setting to bypass this error.";
}
protected void doCheckNoMissingShards(String phaseName, SearchRequest request, List<SearchShardIterator> shardsIts) {
doCheckNoMissingShards(phaseName, request, shardsIts, this::missingShardsErrorMessage);
}
protected static void doCheckNoMissingShards(
String phaseName,
SearchRequest request,
List<SearchShardIterator> shardsIts,
Function<StringBuilder, String> makeErrorMessage
) {
protected static void doCheckNoMissingShards(String phaseName, SearchRequest request, List<SearchShardIterator> shardsIts) {
assert request.allowPartialSearchResults() != null : "SearchRequest missing setting for allowPartialSearchResults";
if (request.allowPartialSearchResults() == false) {
final StringBuilder missingShards = new StringBuilder();
@ -70,7 +56,7 @@ abstract class SearchPhase {
}
if (missingShards.isEmpty() == false) {
// Status red - shard is missing all copies and would produce partial results for an index search
final String msg = makeErrorMessage.apply(missingShards);
final String msg = makeMissingShardsError(missingShards);
throw new SearchPhaseExecutionException(phaseName, msg, null, ShardSearchFailure.EMPTY_ARRAY);
}
}

View file

@ -241,12 +241,6 @@ public class TransportOpenPointInTimeAction extends HandledTransportAction<OpenP
searchRequest.getMaxConcurrentShardRequests(),
clusters
) {
protected String missingShardsErrorMessage(StringBuilder missingShards) {
return "[open_point_in_time] action requires all shards to be available. Missing shards: ["
+ missingShards
+ "]. Consider using `allow_partial_search_results` setting to bypass this error.";
}
@Override
protected void executePhaseOnShard(
SearchShardIterator shardIt,

View file

@ -245,10 +245,12 @@ class Elasticsearch {
EntitlementBootstrap.bootstrap(
pluginPolicies,
pluginsResolver::resolveClassToPluginName,
nodeEnv.settings()::get,
nodeEnv.settings()::getGlobValues,
nodeEnv.dataDirs(),
nodeEnv.configDir(),
nodeEnv.tmpDir(),
nodeEnv.logsDir()
nodeEnv.logsDir(),
nodeEnv.tmpDir()
);
} else {
assert RuntimeVersionFeature.isSecurityManagerAvailable();

View file

@ -34,9 +34,15 @@ public class Iterators {
* Returns a single element iterator over the supplied value.
*/
public static <T> Iterator<T> single(T element) {
return new Iterator<>() {
return new SingleIterator<>(element);
}
private T value = Objects.requireNonNull(element);
private static final class SingleIterator<T> implements Iterator<T> {
private T value;
SingleIterator(T element) {
value = Objects.requireNonNull(element);
}
@Override
public boolean hasNext() {
@ -49,7 +55,6 @@ public class Iterators {
value = null;
return res;
}
};
}
@SafeVarargs
@ -533,5 +538,4 @@ public class Iterators {
}
return result;
}
}
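A usage sketch of the unchanged contract (names as in this file): the iterator yields its element exactly once, since next() nulls the stored value.

Iterator<String> it = Iterators.single("only");
assert it.hasNext();
assert "only".equals(it.next());
assert it.hasNext() == false; // value was nulled by next()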

View file

@ -290,6 +290,24 @@ public final class Settings implements ToXContentFragment, Writeable, Diffable<S
return retVal == null ? defaultValue : retVal;
}
/**
* Returns the values for settings that match the given glob pattern.
* A single glob is supported.
*
* @param settingGlob setting name containing a glob
* @return zero or more values for any settings in this settings object that match the glob pattern
*/
public Stream<String> getGlobValues(String settingGlob) {
int globIndex = settingGlob.indexOf(".*.");
if (globIndex == -1) {
throw new IllegalArgumentException("Pattern [" + settingGlob + "] does not contain a glob [*]");
}
String prefix = settingGlob.substring(0, globIndex + 1);
String suffix = settingGlob.substring(globIndex + 2);
Settings subSettings = getByPrefix(prefix);
return subSettings.names().stream().map(k -> k + suffix).map(subSettings::get).filter(Objects::nonNull);
}
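A usage sketch of the new accessor, mirroring the unit test added later in this commit: the single glob matches exactly one group name, and the pattern must contain ".*.".

Settings settings = Settings.builder()
    .put("foo.x.bar", "1")
    .put("foo.y.bar", "2")
    .put("foo.z.other", "3") // suffix does not match, filtered out
    .build();
List<String> values = settings.getGlobValues("foo.*.bar").toList(); // ["1", "2"], order unspecified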
/**
* Returns the setting value (as float) associated with the setting key. If it does not exist,
* returns the default value provided.

View file

@ -293,6 +293,6 @@ public abstract class FallbackSyntheticSourceBlockLoader implements BlockLoader
parseNonNullValue(parser, accumulator);
}
abstract void parseNonNullValue(XContentParser parser, List<T> accumulator) throws IOException;
protected abstract void parseNonNullValue(XContentParser parser, List<T> accumulator) throws IOException;
}
}

View file

@ -18,6 +18,7 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThrottledTaskRunner;
import org.elasticsearch.core.Releasable;
import org.elasticsearch.core.Strings;
import org.elasticsearch.core.UpdateForV10;
import org.elasticsearch.index.IndexVersions;
import org.elasticsearch.index.shard.IndexShard;
import org.elasticsearch.index.shard.ShardId;
@ -46,6 +47,7 @@ class PostRecoveryMerger {
private static final boolean TRIGGER_MERGE_AFTER_RECOVERY;
static {
@UpdateForV10(owner = UpdateForV10.Owner.DISTRIBUTED_INDEXING) // remove this escape hatch
final var propertyValue = System.getProperty("es.trigger_merge_after_recovery");
if (propertyValue == null) {
TRIGGER_MERGE_AFTER_RECOVERY = true;
@ -95,7 +97,7 @@ class PostRecoveryMerger {
return recoveryListener;
}
if (indexMetadata.getCreationVersion().before(IndexVersions.MERGE_ON_RECOVERY_VERSION)) {
if (indexMetadata.getCreationVersion().before(IndexVersions.UPGRADE_TO_LUCENE_10_0_0)) {
return recoveryListener;
}

View file

@ -703,4 +703,10 @@ public class SettingsTests extends ESTestCase {
{"ant.bee":{"cat.dog":{"ewe":"value3"},"cat":"value2"},"ant":"value1"}""", Strings.toString(builder));
}
public void testGlobValues() throws IOException {
Settings test = Settings.builder().put("foo.x.bar", "1").put("foo.y.bar", "2").build();
var values = test.getGlobValues("foo.*.bar").toList();
assertThat(values, containsInAnyOrder("1", "2"));
}
}

View file

@ -9,15 +9,18 @@
package org.elasticsearch.index.mapper.blockloader;
import org.elasticsearch.index.mapper.NumberFieldBlockLoaderTestCase;
import org.elasticsearch.logsdb.datageneration.FieldType;
import java.util.Map;
public class ByteFieldBlockLoaderTests extends NumberFieldBlockLoaderTestCase<Integer> {
public ByteFieldBlockLoaderTests() {
super(FieldType.BYTE);
}
@Override
protected Integer convert(Number value) {
protected Integer convert(Number value, Map<String, Object> fieldMapping) {
// All values that fit into int are represented as ints
return value.intValue();
}

View file

@ -9,15 +9,18 @@
package org.elasticsearch.index.mapper.blockloader;
import org.elasticsearch.index.mapper.NumberFieldBlockLoaderTestCase;
import org.elasticsearch.logsdb.datageneration.FieldType;
import java.util.Map;
public class DoubleFieldBlockLoaderTests extends NumberFieldBlockLoaderTestCase<Double> {
public DoubleFieldBlockLoaderTests() {
super(FieldType.DOUBLE);
}
@Override
protected Double convert(Number value) {
protected Double convert(Number value, Map<String, Object> fieldMapping) {
return value.doubleValue();
}
}

View file

@ -9,15 +9,18 @@
package org.elasticsearch.index.mapper.blockloader;
import org.elasticsearch.index.mapper.NumberFieldBlockLoaderTestCase;
import org.elasticsearch.logsdb.datageneration.FieldType;
import java.util.Map;
public class FloatFieldBlockLoaderTests extends NumberFieldBlockLoaderTestCase<Double> {
public FloatFieldBlockLoaderTests() {
super(FieldType.FLOAT);
}
@Override
protected Double convert(Number value) {
protected Double convert(Number value, Map<String, Object> fieldMapping) {
// All float values are represented as double
return value.doubleValue();
}

View file

@ -10,15 +10,18 @@
package org.elasticsearch.index.mapper.blockloader;
import org.apache.lucene.sandbox.document.HalfFloatPoint;
import org.elasticsearch.index.mapper.NumberFieldBlockLoaderTestCase;
import org.elasticsearch.logsdb.datageneration.FieldType;
import java.util.Map;
public class HalfFloatFieldBlockLoaderTests extends NumberFieldBlockLoaderTestCase<Double> {
public HalfFloatFieldBlockLoaderTests() {
super(FieldType.HALF_FLOAT);
}
@Override
protected Double convert(Number value) {
protected Double convert(Number value, Map<String, Object> fieldMapping) {
// All float values are represented as double
return (double) HalfFloatPoint.sortableShortToHalfFloat(HalfFloatPoint.halfFloatToSortableShort(value.floatValue()));
}

View file

@ -9,15 +9,18 @@
package org.elasticsearch.index.mapper.blockloader;
import org.elasticsearch.index.mapper.NumberFieldBlockLoaderTestCase;
import org.elasticsearch.logsdb.datageneration.FieldType;
import java.util.Map;
public class IntegerFieldBlockLoaderTests extends NumberFieldBlockLoaderTestCase<Integer> {
public IntegerFieldBlockLoaderTests() {
super(FieldType.INTEGER);
}
@Override
protected Integer convert(Number value) {
protected Integer convert(Number value, Map<String, Object> fieldMapping) {
return value.intValue();
}
}

View file

@ -9,15 +9,18 @@
package org.elasticsearch.index.mapper.blockloader;
import org.elasticsearch.index.mapper.NumberFieldBlockLoaderTestCase;
import org.elasticsearch.logsdb.datageneration.FieldType;
import java.util.Map;
public class LongFieldBlockLoaderTests extends NumberFieldBlockLoaderTestCase<Long> {
public LongFieldBlockLoaderTests() {
super(FieldType.LONG);
}
@Override
protected Long convert(Number value) {
protected Long convert(Number value, Map<String, Object> fieldMapping) {
return value.longValue();
}
}

View file

@ -9,15 +9,18 @@
package org.elasticsearch.index.mapper.blockloader;
import org.elasticsearch.index.mapper.NumberFieldBlockLoaderTestCase;
import org.elasticsearch.logsdb.datageneration.FieldType;
import java.util.Map;
public class ShortFieldBlockLoaderTests extends NumberFieldBlockLoaderTestCase<Integer> {
public ShortFieldBlockLoaderTests() {
super(FieldType.SHORT);
}
@Override
protected Integer convert(Number value) {
protected Integer convert(Number value, Map<String, Object> fieldMapping) {
// All values that fit into int are represented as ints
return value.intValue();
}

View file

@ -7,9 +7,8 @@
* License v3.0 only", or the "Server Side Public License, v 1".
*/
package org.elasticsearch.index.mapper.blockloader;
package org.elasticsearch.index.mapper;
import org.elasticsearch.index.mapper.BlockLoaderTestCase;
import org.elasticsearch.logsdb.datageneration.FieldType;
import java.util.List;
@ -24,25 +23,29 @@ public abstract class NumberFieldBlockLoaderTestCase<T extends Number> extends B
@Override
@SuppressWarnings("unchecked")
protected Object expected(Map<String, Object> fieldMapping, Object value, boolean syntheticSource) {
var nullValue = fieldMapping.get("null_value") != null ? convert((Number) fieldMapping.get("null_value")) : null;
var nullValue = fieldMapping.get("null_value") != null ? convert((Number) fieldMapping.get("null_value"), fieldMapping) : null;
if (value instanceof List<?> == false) {
return convert(value, nullValue);
return convert(value, nullValue, fieldMapping);
}
if ((boolean) fieldMapping.getOrDefault("doc_values", false)) {
// Sorted and no duplicates
var resultList = ((List<Object>) value).stream().map(v -> convert(v, nullValue)).filter(Objects::nonNull).sorted().toList();
var resultList = ((List<Object>) value).stream()
.map(v -> convert(v, nullValue, fieldMapping))
.filter(Objects::nonNull)
.sorted()
.toList();
return maybeFoldList(resultList);
}
// parsing from source
var resultList = ((List<Object>) value).stream().map(v -> convert(v, nullValue)).filter(Objects::nonNull).toList();
var resultList = ((List<Object>) value).stream().map(v -> convert(v, nullValue, fieldMapping)).filter(Objects::nonNull).toList();
return maybeFoldList(resultList);
}
@SuppressWarnings("unchecked")
private T convert(Object value, T nullValue) {
private T convert(Object value, T nullValue, Map<String, Object> fieldMapping) {
if (value == null) {
return nullValue;
}
@ -51,12 +54,12 @@ public abstract class NumberFieldBlockLoaderTestCase<T extends Number> extends B
return nullValue;
}
if (value instanceof Number n) {
return convert(n);
return convert(n, fieldMapping);
}
// Malformed values are excluded
return null;
}
protected abstract T convert(Number value);
protected abstract T convert(Number value, Map<String, Object> fieldMapping);
}

View file

@ -102,7 +102,7 @@ public class DefaultMappingParametersHandler implements DataSourceHandler {
injected.put("scaling_factor", ESTestCase.randomFrom(10, 1000, 100000, 100.5));
if (ESTestCase.randomDouble() <= 0.2) {
injected.put("null_value", ESTestCase.randomFloat());
injected.put("null_value", ESTestCase.randomDouble());
}
if (ESTestCase.randomBoolean()) {

View file

@ -0,0 +1,5 @@
org.elasticsearch.blobcache:
- files:
- relative_path: "shared_snapshot_cache"
relative_to: "data"
mode: "read_write"

View file

@ -1,6 +1,5 @@
apply plugin: 'elasticsearch.internal-es-plugin'
apply plugin: 'elasticsearch.internal-cluster-test'
apply plugin: 'elasticsearch.internal-java-rest-test'
esplugin {
name = 'x-pack-ccr'
description = 'Elasticsearch Expanded Pack Plugin - CCR'
@ -34,16 +33,6 @@ tasks.named('internalClusterTestTestingConventions').configure {
baseClass 'org.elasticsearch.test.ESIntegTestCase'
}
tasks.named("javaRestTest").configure {
usesDefaultDistribution()
}
restResources {
restApi {
include 'bulk', 'search', '_common', 'indices', 'index', 'cluster', 'data_stream'
}
}
addQaCheckDependencies(project)
dependencies {

View file

@ -1,399 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
package org.elasticsearch.xpack.ccr.rest;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.Build;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.test.cluster.ElasticsearchCluster;
import org.elasticsearch.test.cluster.local.distribution.DistributionType;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.elasticsearch.xcontent.json.JsonXContent;
import org.hamcrest.Matchers;
import org.junit.Before;
import org.junit.ClassRule;
import java.io.IOException;
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.List;
import java.util.Locale;
import java.util.Map;
public class ShardChangesRestIT extends ESRestTestCase {
private static final String CCR_SHARD_CHANGES_ENDPOINT = "/%s/ccr/shard_changes";
private static final String BULK_INDEX_ENDPOINT = "/%s/_bulk";
private static final String DATA_STREAM_ENDPOINT = "/_data_stream/%s";
private static final String INDEX_TEMPLATE_ENDPOINT = "/_index_template/%s";
private static final String[] SHARD_RESPONSE_FIELDS = new String[] {
"took_in_millis",
"operations",
"shard_id",
"index_abstraction",
"index",
"settings_version",
"max_seq_no_of_updates_or_deletes",
"number_of_operations",
"mapping_version",
"aliases_version",
"max_seq_no",
"global_checkpoint" };
private static final String BULK_INDEX_TEMPLATE = """
{ "index": { "op_type": "create" } }
{ "@timestamp": "%s", "name": "%s" }
""";;
private static final String[] NAMES = { "skywalker", "leia", "obi-wan", "yoda", "chewbacca", "r2-d2", "c-3po", "darth-vader" };
@ClassRule
public static ElasticsearchCluster cluster = ElasticsearchCluster.local()
.distribution(DistributionType.DEFAULT)
.setting("xpack.security.enabled", "false")
.setting("xpack.license.self_generated.type", "trial")
.build();
@Override
protected String getTestRestCluster() {
return cluster.getHttpAddresses();
}
@Before
public void assumeSnapshotBuild() {
assumeTrue("/{index}/ccr/shard_changes endpoint only available in snapshot builds", Build.current().isSnapshot());
}
public void testShardChangesNoOperation() throws IOException {
final String indexName = randomAlphanumericOfLength(10).toLowerCase(Locale.ROOT);
createIndex(
indexName,
Settings.builder()
.put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1)
.put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 0)
.put(IndexSettings.INDEX_TRANSLOG_SYNC_INTERVAL_SETTING.getKey(), "1s")
.build()
);
assertTrue(indexExists(indexName));
final Request shardChangesRequest = new Request("GET", shardChangesEndpoint(indexName));
assertOK(client().performRequest(shardChangesRequest));
}
public void testShardChangesDefaultParams() throws IOException {
final String indexName = randomAlphanumericOfLength(10).toLowerCase(Locale.ROOT);
final Settings settings = Settings.builder()
.put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1)
.put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 0)
.put(IndexSettings.INDEX_TRANSLOG_SYNC_INTERVAL_SETTING.getKey(), "1s")
.build();
final String mappings = """
{
"properties": {
"name": {
"type": "keyword"
}
}
}
""";
createIndex(indexName, settings, mappings);
assertTrue(indexExists(indexName));
assertOK(bulkIndex(indexName, randomIntBetween(10, 20)));
final Request shardChangesRequest = new Request("GET", shardChangesEndpoint(indexName));
final Response response = client().performRequest(shardChangesRequest);
assertOK(response);
assertShardChangesResponse(
XContentHelper.convertToMap(JsonXContent.jsonXContent, EntityUtils.toString(response.getEntity()), false),
indexName
);
}
public void testDataStreamShardChangesDefaultParams() throws IOException {
final String templateName = randomAlphanumericOfLength(8).toLowerCase(Locale.ROOT);
assertOK(createIndexTemplate(templateName, """
{
"index_patterns": [ "test-*-*" ],
"data_stream": {},
"priority": 100,
"template": {
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
},
"name": {
"type": "keyword"
}
}
}
}
}"""));
final String dataStreamName = "test-"
+ randomAlphanumericOfLength(5).toLowerCase(Locale.ROOT)
+ "-"
+ randomAlphaOfLength(5).toLowerCase(Locale.ROOT);
assertOK(createDataStream(dataStreamName));
assertOK(bulkIndex(dataStreamName, randomIntBetween(10, 20)));
final Request shardChangesRequest = new Request("GET", shardChangesEndpoint(dataStreamName));
final Response response = client().performRequest(shardChangesRequest);
assertOK(response);
assertShardChangesResponse(
XContentHelper.convertToMap(JsonXContent.jsonXContent, EntityUtils.toString(response.getEntity()), false),
dataStreamName
);
}
public void testIndexAliasShardChangesDefaultParams() throws IOException {
final String indexName = randomAlphanumericOfLength(10).toLowerCase(Locale.ROOT);
final String aliasName = randomAlphanumericOfLength(8).toLowerCase(Locale.ROOT);
final Settings settings = Settings.builder()
.put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1)
.put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 0)
.put(IndexSettings.INDEX_TRANSLOG_SYNC_INTERVAL_SETTING.getKey(), "1s")
.build();
final String mappings = """
{
"properties": {
"name": {
"type": "keyword"
}
}
}
""";
createIndex(indexName, settings, mappings);
assertTrue(indexExists(indexName));
final Request putAliasRequest = new Request("PUT", "/" + indexName + "/_alias/" + aliasName);
assertOK(client().performRequest(putAliasRequest));
assertOK(bulkIndex(aliasName, randomIntBetween(10, 20)));
final Request shardChangesRequest = new Request("GET", shardChangesEndpoint(aliasName));
final Response response = client().performRequest(shardChangesRequest);
assertOK(response);
assertShardChangesResponse(
XContentHelper.convertToMap(JsonXContent.jsonXContent, EntityUtils.toString(response.getEntity()), false),
aliasName
);
}
public void testShardChangesWithAllParameters() throws IOException {
final String indexName = randomAlphanumericOfLength(10).toLowerCase(Locale.ROOT);
createIndex(
indexName,
Settings.builder()
.put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1)
.put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 0)
.put(IndexSettings.INDEX_TRANSLOG_SYNC_INTERVAL_SETTING.getKey(), "1s")
.build()
);
assertTrue(indexExists(indexName));
assertOK(bulkIndex(indexName, randomIntBetween(100, 200)));
final Request shardChangesRequest = new Request("GET", shardChangesEndpoint(indexName));
shardChangesRequest.addParameter("from_seq_no", "0");
shardChangesRequest.addParameter("max_operations_count", "1");
shardChangesRequest.addParameter("poll_timeout", "10s");
shardChangesRequest.addParameter("max_batch_size", "1MB");
final Response response = client().performRequest(shardChangesRequest);
assertOK(response);
assertShardChangesResponse(
XContentHelper.convertToMap(JsonXContent.jsonXContent, EntityUtils.toString(response.getEntity()), false),
indexName
);
}
public void testShardChangesMultipleRequests() throws IOException {
final String indexName = randomAlphanumericOfLength(10).toLowerCase(Locale.ROOT);
createIndex(
indexName,
Settings.builder()
.put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1)
.put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 0)
.put(IndexSettings.INDEX_TRANSLOG_SYNC_INTERVAL_SETTING.getKey(), "1s")
.build()
);
assertTrue(indexExists(indexName));
assertOK(bulkIndex(indexName, randomIntBetween(100, 200)));
final Request firstRequest = new Request("GET", shardChangesEndpoint(indexName));
firstRequest.addParameter("from_seq_no", "0");
firstRequest.addParameter("max_operations_count", "10");
firstRequest.addParameter("poll_timeout", "10s");
firstRequest.addParameter("max_batch_size", "1MB");
final Response firstResponse = client().performRequest(firstRequest);
assertOK(firstResponse);
assertShardChangesResponse(
XContentHelper.convertToMap(JsonXContent.jsonXContent, EntityUtils.toString(firstResponse.getEntity()), false),
indexName
);
final Request secondRequest = new Request("GET", shardChangesEndpoint(indexName));
secondRequest.addParameter("from_seq_no", "10");
secondRequest.addParameter("max_operations_count", "10");
secondRequest.addParameter("poll_timeout", "10s");
secondRequest.addParameter("max_batch_size", "1MB");
final Response secondResponse = client().performRequest(secondRequest);
assertOK(secondResponse);
assertShardChangesResponse(
XContentHelper.convertToMap(JsonXContent.jsonXContent, EntityUtils.toString(secondResponse.getEntity()), false),
indexName
);
}
public void testShardChangesInvalidFromSeqNo() throws IOException {
final String indexName = randomAlphanumericOfLength(10).toLowerCase(Locale.ROOT);
createIndex(indexName);
assertTrue(indexExists(indexName));
final Request shardChangesRequest = new Request("GET", shardChangesEndpoint(indexName));
shardChangesRequest.addParameter("from_seq_no", "-1");
final ResponseException ex = assertThrows(ResponseException.class, () -> client().performRequest(shardChangesRequest));
assertResponseException(ex, RestStatus.BAD_REQUEST, "Validation Failed: 1: fromSeqNo [-1] cannot be lower than 0");
}
public void testShardChangesInvalidMaxOperationsCount() throws IOException {
final String indexName = randomAlphanumericOfLength(10).toLowerCase(Locale.ROOT);
createIndex(indexName);
assertTrue(indexExists(indexName));
final Request shardChangesRequest = new Request("GET", shardChangesEndpoint(indexName));
shardChangesRequest.addParameter("max_operations_count", "-1");
final ResponseException ex = assertThrows(ResponseException.class, () -> client().performRequest(shardChangesRequest));
assertResponseException(ex, RestStatus.BAD_REQUEST, "Validation Failed: 1: maxOperationCount [-1] cannot be lower than 0");
}
public void testShardChangesNegativePollTimeout() throws IOException {
final String indexName = randomAlphanumericOfLength(10).toLowerCase(Locale.ROOT);
createIndex(indexName);
assertTrue(indexExists(indexName));
final Request shardChangesRequest = new Request("GET", shardChangesEndpoint(indexName));
shardChangesRequest.addParameter("poll_timeout", "-1s");
assertOK(client().performRequest(shardChangesRequest));
}
public void testShardChangesInvalidMaxBatchSize() throws IOException {
final String indexName = randomAlphanumericOfLength(10).toLowerCase(Locale.ROOT);
createIndex(indexName);
assertTrue(indexExists(indexName));
final Request shardChangesRequest = new Request("GET", shardChangesEndpoint(indexName));
shardChangesRequest.addParameter("max_batch_size", "-1MB");
final ResponseException ex = assertThrows(ResponseException.class, () -> client().performRequest(shardChangesRequest));
assertResponseException(
ex,
RestStatus.BAD_REQUEST,
"failed to parse setting [max_batch_size] with value [-1MB] as a size in bytes"
);
}
public void testShardChangesMissingIndex() throws IOException {
final String indexName = randomAlphanumericOfLength(10).toLowerCase(Locale.ROOT);
assertFalse(indexExists(indexName));
final Request shardChangesRequest = new Request("GET", shardChangesEndpoint(indexName));
final ResponseException ex = assertThrows(ResponseException.class, () -> client().performRequest(shardChangesRequest));
assertResponseException(ex, RestStatus.BAD_REQUEST, "Failed to process shard changes for index [" + indexName + "]");
}
private static Response bulkIndex(final String indexName, int numberOfDocuments) throws IOException {
final StringBuilder sb = new StringBuilder();
long timestamp = System.currentTimeMillis();
for (int i = 0; i < numberOfDocuments; i++) {
sb.append(
String.format(
Locale.ROOT,
BULK_INDEX_TEMPLATE,
Instant.ofEpochMilli(timestamp).atOffset(ZoneOffset.UTC).format(DateTimeFormatter.ISO_OFFSET_DATE_TIME),
randomFrom(NAMES)
)
);
timestamp += 1000; // 1 second
}
final Request request = new Request("POST", bulkEndpoint(indexName));
request.setJsonEntity(sb.toString());
request.addParameter("refresh", "true");
return client().performRequest(request);
}
private Response createDataStream(final String dataStreamName) throws IOException {
return client().performRequest(new Request("PUT", dataStreamEndpoint(dataStreamName)));
}
private static Response createIndexTemplate(final String templateName, final String mappings) throws IOException {
final Request request = new Request("PUT", indexTemplateEndpoint(templateName));
request.setJsonEntity(mappings);
return client().performRequest(request);
}
private static String shardChangesEndpoint(final String indexName) {
return String.format(Locale.ROOT, CCR_SHARD_CHANGES_ENDPOINT, indexName);
}
private static String bulkEndpoint(final String indexName) {
return String.format(Locale.ROOT, BULK_INDEX_ENDPOINT, indexName);
}
private static String dataStreamEndpoint(final String dataStreamName) {
return String.format(Locale.ROOT, DATA_STREAM_ENDPOINT, dataStreamName);
}
private static String indexTemplateEndpoint(final String templateName) {
return String.format(Locale.ROOT, INDEX_TEMPLATE_ENDPOINT, templateName);
}
private void assertResponseException(final ResponseException ex, final RestStatus restStatus, final String error) {
assertEquals(restStatus.getStatus(), ex.getResponse().getStatusLine().getStatusCode());
assertThat(ex.getMessage(), Matchers.containsString(error));
}
private void assertShardChangesResponse(final Map<String, Object> shardChangesResponseBody, final String indexAbstractionName) {
for (final String fieldName : SHARD_RESPONSE_FIELDS) {
final Object fieldValue = shardChangesResponseBody.get(fieldName);
assertNotNull("Field " + fieldName + " is missing or has a null value.", fieldValue);
if ("index_abstraction".equals(fieldName)) {
assertEquals(indexAbstractionName, fieldValue);
}
if ("operations".equals(fieldName)) {
if (fieldValue instanceof List<?> operationsList) {
assertFalse("Field 'operations' is empty.", operationsList.isEmpty());
for (final Object operation : operationsList) {
assertNotNull("Operation is null.", operation);
if (operation instanceof Map<?, ?> operationMap) {
assertNotNull("seq_no is missing in operation.", operationMap.get("seq_no"));
assertNotNull("op_type is missing in operation.", operationMap.get("op_type"));
assertNotNull("primary_term is missing in operation.", operationMap.get("primary_term"));
}
}
}
}
}
}
}
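
As an aside for readers trying the endpoint by hand: below is a minimal standalone sketch of one of the requests exercised above, using the low-level Java REST client. The host, port, and index name are illustrative assumptions, and the parameters simply mirror the ones used in the tests; the endpoint itself is only registered on snapshot builds.

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class ShardChangesRequestSketch {
    public static void main(String[] args) throws Exception {
        // Assumed local node; the shard_changes endpoint is available on snapshot builds only.
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200)).build()) {
            Request request = new Request("GET", "/my-index/ccr/shard_changes"); // hypothetical index name
            request.addParameter("from_seq_no", "0");
            request.addParameter("max_operations_count", "10");
            request.addParameter("poll_timeout", "10s");
            request.addParameter("max_batch_size", "1MB");
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}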

View file

@@ -7,7 +7,6 @@
package org.elasticsearch.xpack.ccr;
import org.apache.lucene.util.SetOnce;
import org.elasticsearch.Build;
import org.elasticsearch.TransportVersion;
import org.elasticsearch.TransportVersions;
import org.elasticsearch.action.ActionRequest;
@@ -92,7 +91,6 @@ import org.elasticsearch.xpack.ccr.rest.RestPutAutoFollowPatternAction;
import org.elasticsearch.xpack.ccr.rest.RestPutFollowAction;
import org.elasticsearch.xpack.ccr.rest.RestResumeAutoFollowPatternAction;
import org.elasticsearch.xpack.ccr.rest.RestResumeFollowAction;
import org.elasticsearch.xpack.ccr.rest.RestShardChangesAction;
import org.elasticsearch.xpack.ccr.rest.RestUnfollowAction;
import org.elasticsearch.xpack.core.XPackFeatureUsage;
import org.elasticsearch.xpack.core.XPackField;
@@ -114,7 +112,6 @@ import org.elasticsearch.xpack.core.ccr.action.ResumeFollowAction;
import org.elasticsearch.xpack.core.ccr.action.ShardFollowTask;
import org.elasticsearch.xpack.core.ccr.action.UnfollowAction;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
@@ -143,34 +140,7 @@ public class Ccr extends Plugin implements ActionPlugin, PersistentTaskPlugin, E
public static final String REQUESTED_OPS_MISSING_METADATA_KEY = "es.requested_operations_missing";
public static final TransportVersion TRANSPORT_VERSION_ACTION_WITH_SHARD_ID = TransportVersions.V_8_9_X;
private static final List<RestHandler> BASE_REST_HANDLERS = Arrays.asList(
// stats API
new RestFollowStatsAction(),
new RestCcrStatsAction(),
new RestFollowInfoAction(),
// follow APIs
new RestPutFollowAction(),
new RestResumeFollowAction(),
new RestPauseFollowAction(),
new RestUnfollowAction(),
// auto-follow APIs
new RestDeleteAutoFollowPatternAction(),
new RestPutAutoFollowPatternAction(),
new RestGetAutoFollowPatternAction(),
new RestPauseAutoFollowPatternAction(),
new RestResumeAutoFollowPatternAction(),
// forget follower API
new RestForgetFollowerAction()
);
private static final List<RestHandler> REST_HANDLERS = Collections.unmodifiableList(BASE_REST_HANDLERS);
private static final List<RestHandler> SNAPSHOT_BUILD_REST_HANDLERS;
static {
List<RestHandler> snapshotBuildHandlers = new ArrayList<>(BASE_REST_HANDLERS);
snapshotBuildHandlers.add(new RestShardChangesAction());
SNAPSHOT_BUILD_REST_HANDLERS = Collections.unmodifiableList(snapshotBuildHandlers);
}
private final boolean enabled;
private final Settings settings;
private final CcrLicenseChecker ccrLicenseChecker;
@@ -302,7 +272,25 @@ public class Ccr extends Plugin implements ActionPlugin, PersistentTaskPlugin, E
return emptyList();
}
return Build.current().isSnapshot() ? SNAPSHOT_BUILD_REST_HANDLERS : REST_HANDLERS;
return Arrays.asList(
// stats API
new RestFollowStatsAction(),
new RestCcrStatsAction(),
new RestFollowInfoAction(),
// follow APIs
new RestPutFollowAction(),
new RestResumeFollowAction(),
new RestPauseFollowAction(),
new RestUnfollowAction(),
// auto-follow APIs
new RestDeleteAutoFollowPatternAction(),
new RestPutAutoFollowPatternAction(),
new RestGetAutoFollowPatternAction(),
new RestPauseAutoFollowPatternAction(),
new RestResumeAutoFollowPatternAction(),
// forget follower API
new RestForgetFollowerAction()
);
}
public List<NamedWriteableRegistry.Entry> getNamedWriteables() {

View file

@@ -1,366 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
package org.elasticsearch.xpack.ccr.rest;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.admin.indices.stats.ShardStats;
import org.elasticsearch.client.internal.node.NodeClient;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexAbstraction;
import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.core.TimeValue;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.engine.Engine;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.translog.Translog;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestResponse;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.rest.RestUtils;
import org.elasticsearch.rest.action.RestActionListener;
import org.elasticsearch.xcontent.XContentBuilder;
import org.elasticsearch.xcontent.XContentFactory;
import org.elasticsearch.xpack.ccr.Ccr;
import org.elasticsearch.xpack.ccr.action.ShardChangesAction;
import java.io.IOException;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Locale;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;
import static org.elasticsearch.rest.RestRequest.Method.GET;
/**
* A REST handler that retrieves shard changes in a specific index, data stream or alias whose name is
 * provided as a parameter. It handles GET requests to the "/{index}/ccr/shard_changes" endpoint, retrieving
* shard-level changes, such as Translog operations, mapping version, settings version, aliases version,
* the global checkpoint, maximum sequence number and maximum sequence number of updates or deletes.
* <p>
* In the case of a data stream, the first backing index is considered the target for retrieving shard changes.
* In the case of an alias, the first index that the alias points to is considered the target for retrieving
* shard changes.
* <p>
* Note: This handler is only available for snapshot builds.
*/
public class RestShardChangesAction extends BaseRestHandler {
private static final long DEFAULT_FROM_SEQ_NO = 0L;
private static final ByteSizeValue DEFAULT_MAX_BATCH_SIZE = ByteSizeValue.of(32, ByteSizeUnit.MB);
private static final TimeValue DEFAULT_POLL_TIMEOUT = new TimeValue(1, TimeUnit.MINUTES);
private static final int DEFAULT_MAX_OPERATIONS_COUNT = 1024;
private static final int DEFAULT_TIMEOUT_SECONDS = 60;
private static final TimeValue GET_INDEX_UUID_TIMEOUT = new TimeValue(DEFAULT_TIMEOUT_SECONDS, TimeUnit.SECONDS);
private static final TimeValue SHARD_STATS_TIMEOUT = new TimeValue(DEFAULT_TIMEOUT_SECONDS, TimeUnit.SECONDS);
private static final String INDEX_PARAM_NAME = "index";
private static final String FROM_SEQ_NO_PARAM_NAME = "from_seq_no";
private static final String MAX_BATCH_SIZE_PARAM_NAME = "max_batch_size";
private static final String POLL_TIMEOUT_PARAM_NAME = "poll_timeout";
private static final String MAX_OPERATIONS_COUNT_PARAM_NAME = "max_operations_count";
@Override
public String getName() {
return "ccr_shard_changes_action";
}
@Override
public List<Route> routes() {
return List.of(new Route(GET, "/{index}/ccr/shard_changes"));
}
/**
* Prepares the request for retrieving shard changes.
*
* @param restRequest The REST request.
* @param client The NodeClient for executing the request.
* @return A RestChannelConsumer for handling the request.
* @throws IOException If an error occurs while preparing the request.
*/
@Override
protected RestChannelConsumer prepareRequest(final RestRequest restRequest, final NodeClient client) throws IOException {
final var indexAbstractionName = restRequest.param(INDEX_PARAM_NAME);
final var fromSeqNo = restRequest.paramAsLong(FROM_SEQ_NO_PARAM_NAME, DEFAULT_FROM_SEQ_NO);
final var maxBatchSize = restRequest.paramAsSize(MAX_BATCH_SIZE_PARAM_NAME, DEFAULT_MAX_BATCH_SIZE);
final var pollTimeout = restRequest.paramAsTime(POLL_TIMEOUT_PARAM_NAME, DEFAULT_POLL_TIMEOUT);
final var maxOperationsCount = restRequest.paramAsInt(MAX_OPERATIONS_COUNT_PARAM_NAME, DEFAULT_MAX_OPERATIONS_COUNT);
// NOTE: we first retrieve the concrete index name in case we are dealing with an alias or data stream.
// Then we use the concrete index name to retrieve the index UUID and shard stats.
final CompletableFuture<String> indexNameCompletableFuture = asyncGetIndexName(
client,
indexAbstractionName,
client.threadPool().executor(Ccr.CCR_THREAD_POOL_NAME)
);
final CompletableFuture<String> indexUUIDCompletableFuture = indexNameCompletableFuture.thenCompose(
concreteIndexName -> asyncGetIndexUUID(
client,
concreteIndexName,
client.threadPool().executor(Ccr.CCR_THREAD_POOL_NAME),
RestUtils.getMasterNodeTimeout(restRequest)
)
);
final CompletableFuture<ShardStats> shardStatsCompletableFuture = indexNameCompletableFuture.thenCompose(
concreteIndexName -> asyncShardStats(client, concreteIndexName, client.threadPool().executor(Ccr.CCR_THREAD_POOL_NAME))
);
return channel -> CompletableFuture.allOf(indexUUIDCompletableFuture, shardStatsCompletableFuture).thenRun(() -> {
try {
final String concreteIndexName = indexNameCompletableFuture.get(DEFAULT_TIMEOUT_SECONDS, TimeUnit.SECONDS);
final String indexUUID = indexUUIDCompletableFuture.get(DEFAULT_TIMEOUT_SECONDS, TimeUnit.SECONDS);
final ShardStats shardStats = shardStatsCompletableFuture.get(DEFAULT_TIMEOUT_SECONDS, TimeUnit.SECONDS);
final ShardId shardId = shardStats.getShardRouting().shardId();
final String expectedHistoryUUID = shardStats.getCommitStats().getUserData().get(Engine.HISTORY_UUID_KEY);
final ShardChangesAction.Request shardChangesRequest = shardChangesRequest(
concreteIndexName,
indexUUID,
shardId,
expectedHistoryUUID,
fromSeqNo,
maxBatchSize,
pollTimeout,
maxOperationsCount
);
client.execute(ShardChangesAction.INSTANCE, shardChangesRequest, new RestActionListener<>(channel) {
@Override
protected void processResponse(final ShardChangesAction.Response response) {
channel.sendResponse(
new RestResponse(
RestStatus.OK,
shardChangesResponseToXContent(response, indexAbstractionName, concreteIndexName, shardId)
)
);
}
});
} catch (InterruptedException | ExecutionException e) {
Thread.currentThread().interrupt();
throw new IllegalStateException("Error while retrieving shard changes", e);
} catch (TimeoutException te) {
throw new IllegalStateException("Timeout while waiting for shard stats or index UUID", te);
}
}).exceptionally(ex -> {
channel.sendResponse(
new RestResponse(
RestStatus.BAD_REQUEST,
"Failed to process shard changes for index [" + indexAbstractionName + "] " + ex.getMessage()
)
);
return null;
});
}
/**
* Creates a ShardChangesAction.Request object with the provided parameters.
*
* @param indexName The name of the index for which to retrieve shard changes.
* @param indexUUID The UUID of the index.
* @param shardId The ShardId for which to retrieve shard changes.
* @param expectedHistoryUUID The expected history UUID of the shard.
* @param fromSeqNo The sequence number from which to start retrieving shard changes.
* @param maxBatchSize The maximum size of a batch of operations to retrieve.
* @param pollTimeout The maximum time to wait for shard changes.
* @param maxOperationsCount The maximum number of operations to retrieve in a single request.
* @return A ShardChangesAction.Request object with the provided parameters.
*/
private static ShardChangesAction.Request shardChangesRequest(
final String indexName,
final String indexUUID,
final ShardId shardId,
final String expectedHistoryUUID,
long fromSeqNo,
final ByteSizeValue maxBatchSize,
final TimeValue pollTimeout,
int maxOperationsCount
) {
final ShardChangesAction.Request shardChangesRequest = new ShardChangesAction.Request(
new ShardId(new Index(indexName, indexUUID), shardId.id()),
expectedHistoryUUID
);
shardChangesRequest.setFromSeqNo(fromSeqNo);
shardChangesRequest.setMaxBatchSize(maxBatchSize);
shardChangesRequest.setPollTimeout(pollTimeout);
shardChangesRequest.setMaxOperationCount(maxOperationsCount);
return shardChangesRequest;
}
/**
 * Converts the response to XContent JSON format.
*
* @param response The ShardChangesAction response.
* @param indexAbstractionName The name of the index abstraction.
* @param concreteIndexName The name of the index.
* @param shardId The ShardId.
*/
private static XContentBuilder shardChangesResponseToXContent(
final ShardChangesAction.Response response,
final String indexAbstractionName,
final String concreteIndexName,
final ShardId shardId
) {
try (XContentBuilder builder = XContentFactory.jsonBuilder()) {
builder.startObject();
builder.field("index_abstraction", indexAbstractionName);
builder.field("index", concreteIndexName);
builder.field("shard_id", shardId);
builder.field("mapping_version", response.getMappingVersion());
builder.field("settings_version", response.getSettingsVersion());
builder.field("aliases_version", response.getAliasesVersion());
builder.field("global_checkpoint", response.getGlobalCheckpoint());
builder.field("max_seq_no", response.getMaxSeqNo());
builder.field("max_seq_no_of_updates_or_deletes", response.getMaxSeqNoOfUpdatesOrDeletes());
builder.field("took_in_millis", response.getTookInMillis());
if (response.getOperations() != null && response.getOperations().length > 0) {
operationsToXContent(response, builder);
}
builder.endObject();
return builder;
} catch (IOException e) {
throw new RuntimeException(e);
}
}
/**
* Converts the operations from a ShardChangesAction response to XContent JSON format.
*
* @param response The ShardChangesAction response containing the operations to be converted.
* @param builder The XContentBuilder to which the converted operations will be added.
* @throws IOException If an error occurs while writing to the XContentBuilder.
*/
private static void operationsToXContent(final ShardChangesAction.Response response, final XContentBuilder builder) throws IOException {
builder.field("number_of_operations", response.getOperations().length);
builder.field("operations");
builder.startArray();
for (final Translog.Operation operation : response.getOperations()) {
builder.startObject();
builder.field("op_type", operation.opType());
builder.field("seq_no", operation.seqNo());
builder.field("primary_term", operation.primaryTerm());
builder.endObject();
}
builder.endArray();
}
/**
* Execute an asynchronous task using a task supplier and an executor service.
*
* @param <T> The type of data to be retrieved.
* @param task The supplier task that provides the data.
 * @param executorService The executor service for executing the asynchronous task.
* @param errorMessage The error message to be thrown if the task execution fails.
* @return A CompletableFuture that completes with the retrieved data.
*/
private static <T> CompletableFuture<T> supplyAsyncTask(
final Supplier<T> task,
final ExecutorService executorService,
final String errorMessage
) {
return CompletableFuture.supplyAsync(() -> {
try {
return task.get();
} catch (Exception e) {
throw new ElasticsearchException(errorMessage, e);
}
}, executorService);
}
/**
* Asynchronously retrieves the index name for a given index, alias or data stream.
* If the name represents a data stream, the name of the first backing index is returned.
* If the name represents an alias, the name of the first index that the alias points to is returned.
*
* @param client The NodeClient for executing the asynchronous request.
* @param indexAbstractionName The name of the index, alias or data stream.
* @return A CompletableFuture that completes with the retrieved index name.
*/
private static CompletableFuture<String> asyncGetIndexName(
final NodeClient client,
final String indexAbstractionName,
final ExecutorService executorService
) {
return supplyAsyncTask(() -> {
final ClusterState clusterState = client.admin()
.cluster()
.prepareState(new TimeValue(DEFAULT_TIMEOUT_SECONDS, TimeUnit.SECONDS))
.get(GET_INDEX_UUID_TIMEOUT)
.getState();
final IndexAbstraction indexAbstraction = clusterState.metadata().getProject().getIndicesLookup().get(indexAbstractionName);
if (indexAbstraction == null) {
throw new IllegalArgumentException(
String.format(Locale.ROOT, "Invalid index or data stream name [%s]", indexAbstractionName)
);
}
if (indexAbstraction.getType() == IndexAbstraction.Type.DATA_STREAM
|| indexAbstraction.getType() == IndexAbstraction.Type.ALIAS) {
return indexAbstraction.getIndices().getFirst().getName();
}
return indexAbstractionName;
}, executorService, "Error while retrieving index name for index or data stream [" + indexAbstractionName + "]");
}
/**
* Asynchronously retrieves the shard stats for a given index using an executor service.
*
* @param client The NodeClient for executing the asynchronous request.
* @param concreteIndexName The name of the index for which to retrieve shard statistics.
 * @param executorService The executor service for executing the asynchronous task.
* @return A CompletableFuture that completes with the retrieved ShardStats.
* @throws ElasticsearchException If an error occurs while retrieving shard statistics.
*/
private static CompletableFuture<ShardStats> asyncShardStats(
final NodeClient client,
final String concreteIndexName,
final ExecutorService executorService
) {
return supplyAsyncTask(
() -> Arrays.stream(client.admin().indices().prepareStats(concreteIndexName).clear().get(SHARD_STATS_TIMEOUT).getShards())
.max(Comparator.comparingLong(shardStats -> shardStats.getCommitStats().getGeneration()))
.orElseThrow(() -> new ElasticsearchException("Unable to retrieve shard stats for index: " + concreteIndexName)),
executorService,
"Error while retrieving shard stats for index [" + concreteIndexName + "]"
);
}
/**
* Asynchronously retrieves the index UUID for a given index using an executor service.
*
* @param client The NodeClient for executing the asynchronous request.
* @param concreteIndexName The name of the index for which to retrieve the index UUID.
 * @param executorService The executor service for executing the asynchronous task.
* @param masterTimeout The timeout for waiting until the cluster is unblocked.
* @return A CompletableFuture that completes with the retrieved index UUID.
* @throws ElasticsearchException If an error occurs while retrieving the index UUID.
*/
private static CompletableFuture<String> asyncGetIndexUUID(
final NodeClient client,
final String concreteIndexName,
final ExecutorService executorService,
TimeValue masterTimeout
) {
return supplyAsyncTask(
() -> client.admin()
.indices()
.prepareGetIndex(masterTimeout)
.setIndices(concreteIndexName)
.get(GET_INDEX_UUID_TIMEOUT)
.getSetting(concreteIndexName, IndexMetadata.SETTING_INDEX_UUID),
executorService,
"Error while retrieving index UUID for index [" + concreteIndexName + "]"
);
}
}
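
The prepareRequest implementation above is a small fan-out/fan-in pipeline: resolve the concrete index name once, derive the index UUID and shard stats from it in parallel, and proceed only when both are done. A self-contained sketch of that shape, with placeholder lookups standing in for the real cluster calls:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncComposeSketch {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        // Resolve the concrete index name first, then fan out to two dependent lookups.
        CompletableFuture<String> indexName = CompletableFuture.supplyAsync(() -> "my-index-000001", executor);
        CompletableFuture<String> indexUuid = indexName.thenCompose(
            name -> CompletableFuture.supplyAsync(() -> "uuid-for-" + name, executor)
        );
        CompletableFuture<Long> shardGeneration = indexName.thenCompose(
            name -> CompletableFuture.supplyAsync(() -> 42L, executor)
        );
        // Fan in: proceed only once both dependent lookups have completed.
        CompletableFuture.allOf(indexUuid, shardGeneration)
            .thenRun(() -> System.out.println(indexUuid.join() + " / " + shardGeneration.join()))
            .join();
        executor.shutdown();
    }
}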

View file

@@ -1,77 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
package org.elasticsearch.xpack.core;
import org.elasticsearch.core.PathUtils;
import org.elasticsearch.core.SuppressForbidden;
import java.io.IOException;
import java.net.URISyntaxException;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.JarInputStream;
import java.util.jar.Manifest;
/**
* Information about the built version of x-pack that is running.
*/
public class XPackBuild {
public static final XPackBuild CURRENT;
static {
final String shortHash;
final String date;
Path path = getElasticsearchCodebase();
if (path.toString().endsWith(".jar")) {
try (JarInputStream jar = new JarInputStream(Files.newInputStream(path))) {
Manifest manifest = jar.getManifest();
shortHash = manifest.getMainAttributes().getValue("Change");
date = manifest.getMainAttributes().getValue("Build-Date");
} catch (IOException e) {
throw new RuntimeException(e);
}
} else {
// not running from a jar (unit tests, IDE)
shortHash = "Unknown";
date = "Unknown";
}
CURRENT = new XPackBuild(shortHash, date);
}
/**
 * Returns the path to the xpack codebase
*/
@SuppressForbidden(reason = "looks up path of xpack.jar directly")
static Path getElasticsearchCodebase() {
URL url = XPackBuild.class.getProtectionDomain().getCodeSource().getLocation();
try {
return PathUtils.get(url.toURI());
} catch (URISyntaxException bogus) {
throw new RuntimeException(bogus);
}
}
private String shortHash;
private String date;
XPackBuild(String shortHash, String date) {
this.shortHash = shortHash;
this.date = date;
}
public String shortHash() {
return shortHash;
}
public String date() {
return date;
}
}
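
The static initializer above is a common trick for embedding build metadata: the attributes are written into the jar manifest at build time and read back at runtime. A minimal sketch of the read side (the jar path comes from the caller; the attribute names match the class above, and the jar is assumed to have a manifest):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.JarInputStream;
import java.util.jar.Manifest;

public class ManifestSketch {
    public static void main(String[] args) throws IOException {
        Path jar = Path.of(args[0]); // path to some jar, supplied by the caller
        try (JarInputStream in = new JarInputStream(Files.newInputStream(jar))) {
            Manifest manifest = in.getManifest();
            // Attribute names are whatever the build wrote; "Change"/"Build-Date" match the class above.
            System.out.println(manifest.getMainAttributes().getValue("Change"));
            System.out.println(manifest.getMainAttributes().getValue("Build-Date"));
        }
    }
}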

View file

@@ -6,6 +6,7 @@
*/
package org.elasticsearch.xpack.core.action;
import org.elasticsearch.Build;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionType;
import org.elasticsearch.action.support.ActionFilters;
@@ -23,7 +24,6 @@ import org.elasticsearch.protocol.xpack.XPackInfoResponse.FeatureSetsInfo.Featur
import org.elasticsearch.protocol.xpack.XPackInfoResponse.LicenseInfo;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.XPackBuild;
import java.util.HashSet;
import java.util.List;
@@ -58,7 +58,7 @@ public class TransportXPackInfoAction extends HandledTransportAction<XPackInfoRe
XPackInfoResponse.BuildInfo buildInfo = null;
if (request.getCategories().contains(XPackInfoRequest.Category.BUILD)) {
buildInfo = new XPackInfoResponse.BuildInfo(XPackBuild.CURRENT.shortHash(), XPackBuild.CURRENT.date());
buildInfo = new XPackInfoResponse.BuildInfo(Build.current().hash(), Build.current().date());
}
LicenseInfo licenseInfo = null;

View file

@@ -35,7 +35,6 @@ import org.elasticsearch.threadpool.FixedExecutorBuilder;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xcontent.NamedXContentRegistry;
import org.elasticsearch.xcontent.ParseField;
import org.elasticsearch.xpack.core.downsample.DownsampleIndexerAction;
import org.elasticsearch.xpack.core.downsample.DownsampleShardPersistentTaskState;
import org.elasticsearch.xpack.core.downsample.DownsampleShardTask;
@@ -66,7 +65,6 @@ public class Downsample extends Plugin implements ActionPlugin, PersistentTaskPl
@Override
public List<ActionHandler<? extends ActionRequest, ? extends ActionResponse>> getActions() {
return List.of(
new ActionHandler<>(DownsampleIndexerAction.INSTANCE, TransportDownsampleIndexerAction.class),
new ActionHandler<>(DownsampleAction.INSTANCE, TransportDownsampleAction.class),
new ActionHandler<>(
DownsampleShardPersistentTaskExecutor.DelegatingAction.INSTANCE,

View file

@@ -25,8 +25,6 @@ import java.util.Map;
* - Add a constant for its name, following the naming conventions for metrics.
* - Register it in method {@link #doStart}.
* - Add a function for recording its value.
* - If needed, inject {@link DownsampleMetrics} to the action containing the logic
* that records the metric value. For reference, see {@link TransportDownsampleIndexerAction}.
*/
public class DownsampleMetrics extends AbstractLifecycleComponent {

View file

@@ -104,7 +104,7 @@ import static org.elasticsearch.xpack.core.ilm.DownsampleAction.DOWNSAMPLED_INDE
/**
* The master downsample action that coordinates
* - creating the downsample index
* - instantiating {@link DownsampleShardIndexer}s to index downsample documents
* - instantiating {@link org.elasticsearch.persistent.PersistentTasksExecutor} to start a persistent downsample task
* - cleaning up state
*/
public class TransportDownsampleAction extends AcknowledgedTransportMasterNodeAction<DownsampleAction.Request> {

View file

@@ -1,206 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
package org.elasticsearch.xpack.downsample;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.NoShardAvailableActionException;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.broadcast.TransportBroadcastAction;
import org.elasticsearch.client.internal.Client;
import org.elasticsearch.client.internal.OriginSettingClient;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ProjectState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.project.ProjectResolver;
import org.elasticsearch.cluster.routing.ShardIterator;
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.index.IndexService;
import org.elasticsearch.indices.IndicesService;
import org.elasticsearch.injection.guice.Inject;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.ClientHelper;
import org.elasticsearch.xpack.core.downsample.DownsampleIndexerAction;
import org.elasticsearch.xpack.core.downsample.DownsampleShardIndexerStatus;
import org.elasticsearch.xpack.core.downsample.DownsampleShardPersistentTaskState;
import org.elasticsearch.xpack.core.downsample.DownsampleShardTask;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;
/**
* A {@link TransportBroadcastAction} that downsamples all the shards of a source index into a new downsample index.
*
* TODO: Enforce that we don't retry on another replica if we throw an error after sending some buckets.
*/
public class TransportDownsampleIndexerAction extends TransportBroadcastAction<
DownsampleIndexerAction.Request,
DownsampleIndexerAction.Response,
DownsampleIndexerAction.ShardDownsampleRequest,
DownsampleIndexerAction.ShardDownsampleResponse> {
private final Client client;
private final IndicesService indicesService;
private final ProjectResolver projectResolver;
private final DownsampleMetrics downsampleMetrics;
@Inject
public TransportDownsampleIndexerAction(
Client client,
ClusterService clusterService,
TransportService transportService,
IndicesService indicesService,
ActionFilters actionFilters,
ProjectResolver projectResolver,
IndexNameExpressionResolver indexNameExpressionResolver,
DownsampleMetrics downsampleMetrics
) {
super(
DownsampleIndexerAction.NAME,
clusterService,
transportService,
actionFilters,
indexNameExpressionResolver,
DownsampleIndexerAction.Request::new,
DownsampleIndexerAction.ShardDownsampleRequest::new,
transportService.getThreadPool().executor(Downsample.DOWNSAMPLE_TASK_THREAD_POOL_NAME)
);
this.client = new OriginSettingClient(client, ClientHelper.ROLLUP_ORIGIN);
this.indicesService = indicesService;
this.projectResolver = projectResolver;
this.downsampleMetrics = downsampleMetrics;
}
@Override
protected List<ShardIterator> shards(ClusterState clusterState, DownsampleIndexerAction.Request request, String[] concreteIndices) {
if (concreteIndices.length > 1) {
throw new IllegalArgumentException("multiple indices: " + Arrays.toString(concreteIndices));
}
ProjectState project = projectResolver.getProjectState(clusterState);
final List<ShardIterator> groups = clusterService.operationRouting().searchShards(project, concreteIndices, null, null);
for (ShardIterator group : groups) {
// fail fast if any group has no active shards
if (group.size() == 0) {
throw new NoShardAvailableActionException(group.shardId());
}
}
return groups;
}
@Override
protected ClusterBlockException checkGlobalBlock(ClusterState state, DownsampleIndexerAction.Request request) {
return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);
}
@Override
protected ClusterBlockException checkRequestBlock(
ClusterState state,
DownsampleIndexerAction.Request request,
String[] concreteIndices
) {
return state.blocks().indicesBlockedException(projectResolver.getProjectId(), ClusterBlockLevel.METADATA_WRITE, concreteIndices);
}
@Override
protected void doExecute(
Task task,
DownsampleIndexerAction.Request request,
ActionListener<DownsampleIndexerAction.Response> listener
) {
new Async(task, request, listener).start();
}
@Override
protected DownsampleIndexerAction.ShardDownsampleRequest newShardRequest(
int numShards,
ShardRouting shard,
DownsampleIndexerAction.Request request
) {
return new DownsampleIndexerAction.ShardDownsampleRequest(shard.shardId(), request);
}
@Override
protected DownsampleIndexerAction.ShardDownsampleResponse shardOperation(
DownsampleIndexerAction.ShardDownsampleRequest request,
Task task
) throws IOException {
IndexService indexService = indicesService.indexService(request.shardId().getIndex());
DownsampleShardIndexer indexer = new DownsampleShardIndexer(
(DownsampleShardTask) task,
client,
indexService,
downsampleMetrics,
request.shardId(),
request.getDownsampleIndex(),
request.getRollupConfig(),
request.getMetricFields(),
request.getLabelFields(),
request.getDimensionFields(),
new DownsampleShardPersistentTaskState(DownsampleShardIndexerStatus.INITIALIZED, null)
);
return indexer.execute();
}
@Override
protected DownsampleIndexerAction.ShardDownsampleResponse readShardResponse(StreamInput in) throws IOException {
return new DownsampleIndexerAction.ShardDownsampleResponse(in);
}
@Override
protected DownsampleIndexerAction.Response newResponse(
DownsampleIndexerAction.Request request,
AtomicReferenceArray<?> shardsResponses,
ClusterState clusterState
) {
long numIndexed = 0;
int successfulShards = 0;
for (int i = 0; i < shardsResponses.length(); i++) {
Object shardResponse = shardsResponses.get(i);
if (shardResponse == null) {
throw new ElasticsearchException("missing shard");
} else if (shardResponse instanceof DownsampleIndexerAction.ShardDownsampleResponse r) {
successfulShards++;
numIndexed += r.getNumIndexed();
} else if (shardResponse instanceof Exception e) {
throw new ElasticsearchException(e);
} else {
assert false : "unknown response [" + shardResponse + "]";
throw new IllegalStateException("unknown response [" + shardResponse + "]");
}
}
return new DownsampleIndexerAction.Response(true, shardsResponses.length(), successfulShards, 0, numIndexed);
}
private class Async extends AsyncBroadcastAction {
private final DownsampleIndexerAction.Request request;
private final ActionListener<DownsampleIndexerAction.Response> listener;
protected Async(Task task, DownsampleIndexerAction.Request request, ActionListener<DownsampleIndexerAction.Response> listener) {
super(task, request, listener);
this.request = request;
this.listener = listener;
}
@Override
protected void finishHim() {
try {
DownsampleIndexerAction.Response resp = newResponse(request, shardsResponses, clusterService.state());
listener.onResponse(resp);
} catch (Exception e) {
listener.onFailure(e);
}
}
}
}
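
The newResponse method above shows the usual broadcast fan-in: walk the fixed-size response array, count successes, accumulate totals, and surface the first failure. Stripped of the Downsample-specific types, the shape looks roughly like this (placeholder types, not the real API):

import java.util.concurrent.atomic.AtomicReferenceArray;

public class FanInSketch {
    record ShardResult(long numIndexed) {} // placeholder for the real per-shard response type

    static long aggregate(AtomicReferenceArray<Object> shardsResponses) {
        long numIndexed = 0;
        for (int i = 0; i < shardsResponses.length(); i++) {
            Object response = shardsResponses.get(i);
            if (response instanceof ShardResult r) {
                numIndexed += r.numIndexed();       // successful shard
            } else if (response instanceof Exception e) {
                throw new IllegalStateException(e); // fail the whole request on any shard error
            } else {
                throw new IllegalStateException("missing or unknown response");
            }
        }
        return numIndexed;
    }

    public static void main(String[] args) {
        AtomicReferenceArray<Object> responses = new AtomicReferenceArray<>(2);
        responses.set(0, new ShardResult(10));
        responses.set(1, new ShardResult(32));
        System.out.println(aggregate(responses)); // 42
    }
}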

View file

@@ -105,6 +105,10 @@ public final class Source implements Writeable {
return text + location;
}
/**
* @deprecated Sources created by this can't be correctly deserialized. For use in tests only.
*/
@Deprecated
public static Source synthetic(String text) {
return new Source(Location.EMPTY, text);
}

View file

@@ -14,15 +14,22 @@ import org.elasticsearch.test.cluster.util.Version;
public class Clusters {
public static ElasticsearchCluster mixedVersionCluster() {
Version oldVersion = Version.fromString(System.getProperty("tests.old_cluster_version"));
return ElasticsearchCluster.local()
var cluster = ElasticsearchCluster.local()
.distribution(DistributionType.DEFAULT)
.withNode(node -> node.version(oldVersion))
.withNode(node -> node.version(Version.CURRENT))
.withNode(node -> node.version(oldVersion))
.withNode(node -> node.version(Version.CURRENT))
.setting("xpack.security.enabled", "false")
.setting("xpack.license.self_generated.type", "trial")
.setting("cluster.routing.rebalance.enable", "none") // disable relocation until we have retry in ESQL
.build();
.setting("xpack.license.self_generated.type", "trial");
if (supportRetryOnShardFailures(oldVersion) == false) {
cluster.setting("cluster.routing.rebalance.enable", "none");
}
return cluster.build();
}
private static boolean supportRetryOnShardFailures(Version version) {
return version.onOrAfter(Version.fromString("9.1.0"))
|| (version.onOrAfter(Version.fromString("8.19.0")) && version.before(Version.fromString("9.0.0")));
}
}
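
The supportRetryOnShardFailures gate encodes two release lines: retry support is assumed from 9.1.0 onwards, and from 8.19.0 within the 8.x line. A quick sanity check of the predicate against sample versions, reusing the same test-cluster Version utility as above (the sample version strings are illustrative):

import org.elasticsearch.test.cluster.util.Version;

public class VersionGateSketch {
    // Same predicate as in Clusters above.
    static boolean supportRetryOnShardFailures(Version version) {
        return version.onOrAfter(Version.fromString("9.1.0"))
            || (version.onOrAfter(Version.fromString("8.19.0")) && version.before(Version.fromString("9.0.0")));
    }

    public static void main(String[] args) {
        System.out.println(supportRetryOnShardFailures(Version.fromString("8.18.2"))); // false
        System.out.println(supportRetryOnShardFailures(Version.fromString("8.19.0"))); // true
        System.out.println(supportRetryOnShardFailures(Version.fromString("9.0.0")));  // false
        System.out.println(supportRetryOnShardFailures(Version.fromString("9.1.0")));  // true
    }
}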

View file

@@ -17,17 +17,20 @@ public class Clusters {
static final String LOCAL_CLUSTER_NAME = "local_cluster";
public static ElasticsearchCluster remoteCluster() {
return ElasticsearchCluster.local()
Version version = distributionVersion("tests.version.remote_cluster");
var cluster = ElasticsearchCluster.local()
.name(REMOTE_CLUSTER_NAME)
.distribution(DistributionType.DEFAULT)
.version(distributionVersion("tests.version.remote_cluster"))
.version(version)
.nodes(2)
.setting("node.roles", "[data,ingest,master]")
.setting("xpack.security.enabled", "false")
.setting("xpack.license.self_generated.type", "trial")
.shared(true)
.setting("cluster.routing.rebalance.enable", "none")
.build();
.shared(true);
if (supportRetryOnShardFailures(version) == false) {
cluster.setting("cluster.routing.rebalance.enable", "none");
}
return cluster.build();
}
public static ElasticsearchCluster localCluster(ElasticsearchCluster remoteCluster) {
@@ -35,10 +38,11 @@ public class Clusters {
}
public static ElasticsearchCluster localCluster(ElasticsearchCluster remoteCluster, Boolean skipUnavailable) {
return ElasticsearchCluster.local()
Version version = distributionVersion("tests.version.local_cluster");
var cluster = ElasticsearchCluster.local()
.name(LOCAL_CLUSTER_NAME)
.distribution(DistributionType.DEFAULT)
.version(distributionVersion("tests.version.local_cluster"))
.version(version)
.nodes(2)
.setting("xpack.security.enabled", "false")
.setting("xpack.license.self_generated.type", "trial")
@@ -46,9 +50,11 @@ public class Clusters {
.setting("cluster.remote.remote_cluster.seeds", () -> "\"" + remoteCluster.getTransportEndpoint(0) + "\"")
.setting("cluster.remote.connections_per_cluster", "1")
.setting("cluster.remote." + REMOTE_CLUSTER_NAME + ".skip_unavailable", skipUnavailable.toString())
.shared(true)
.setting("cluster.routing.rebalance.enable", "none")
.build();
.shared(true);
if (supportRetryOnShardFailures(version) == false) {
cluster.setting("cluster.routing.rebalance.enable", "none");
}
return cluster.build();
}
public static org.elasticsearch.Version localClusterVersion() {
@@ -65,4 +71,9 @@ public class Clusters {
final String val = System.getProperty(key);
return val != null ? Version.fromString(val) : Version.CURRENT;
}
private static boolean supportRetryOnShardFailures(Version version) {
return version.onOrAfter(Version.fromString("9.1.0"))
|| (version.onOrAfter(Version.fromString("8.19.0")) && version.before(Version.fromString("9.0.0")));
}
}

View file

@@ -64,6 +64,7 @@ import org.elasticsearch.xpack.esql.expression.predicate.operator.comparison.In;
import org.elasticsearch.xpack.esql.expression.predicate.operator.comparison.LessThan;
import org.elasticsearch.xpack.esql.expression.predicate.operator.comparison.LessThanOrEqual;
import org.elasticsearch.xpack.esql.expression.predicate.operator.comparison.NotEquals;
import org.elasticsearch.xpack.esql.io.stream.PlanStreamOutput;
import org.elasticsearch.xpack.esql.optimizer.rules.logical.FoldNull;
import org.elasticsearch.xpack.esql.parser.ExpressionBuilder;
import org.elasticsearch.xpack.esql.planner.Layout;
@@ -102,6 +103,7 @@ import static org.elasticsearch.compute.data.BlockUtils.toJavaObject;
import static org.elasticsearch.xpack.esql.EsqlTestUtils.randomLiteral;
import static org.elasticsearch.xpack.esql.EsqlTestUtils.unboundLogicalOptimizerContext;
import static org.elasticsearch.xpack.esql.SerializationTestUtils.assertSerialization;
import static org.elasticsearch.xpack.esql.SerializationTestUtils.serializeDeserialize;
import static org.elasticsearch.xpack.esql.expression.function.EsqlFunctionRegistry.mapParam;
import static org.elasticsearch.xpack.esql.expression.function.EsqlFunctionRegistry.param;
import static org.elasticsearch.xpack.esql.expression.function.EsqlFunctionRegistry.paramWithoutAnnotation;
@@ -331,7 +333,7 @@ public abstract class AbstractFunctionTestCase extends ESTestCase {
String ordinal = includeOrdinal ? TypeResolutions.ParamOrdinal.fromIndex(badArgPosition).name().toLowerCase(Locale.ROOT) + " " : "";
String expectedTypeString = expectedTypeSupplier.apply(validPerPosition.get(badArgPosition), badArgPosition);
String name = types.get(badArgPosition).typeName();
return ordinal + "argument of [] must be [" + expectedTypeString + "], found value [" + name + "] type [" + name + "]";
return ordinal + "argument of [source] must be [" + expectedTypeString + "], found value [" + name + "] type [" + name + "]";
}
@FunctionalInterface
@@ -522,7 +524,7 @@
* <strong>except</strong> those that have been marked with {@link TestCaseSupplier.TypedData#forceLiteral()}.
*/
protected final Expression buildFieldExpression(TestCaseSupplier.TestCase testCase) {
return build(testCase.getSource(), testCase.getDataAsFields());
return randomSerializeDeserialize(build(testCase.getSource(), testCase.getDataAsFields()));
}
/**
@@ -531,12 +533,47 @@
* those that have been marked with {@link TestCaseSupplier.TypedData#forceLiteral()}.
*/
protected final Expression buildDeepCopyOfFieldExpression(TestCaseSupplier.TestCase testCase) {
// We don't use `randomSerializeDeserialize()` here as the deep copied fields aren't deserializable right now
return build(testCase.getSource(), testCase.getDataAsDeepCopiedFields());
}
private Expression randomSerializeDeserialize(Expression expression) {
if (randomBoolean()) {
return expression;
}
return serializeDeserializeExpression(expression);
}
/**
* Returns the expression after being serialized and deserialized.
* <p>
* Tests randomly go through this method to ensure that the function retains the same logic after serialization and deserialization.
* </p>
* <p>
* Can be overridden to provide custom serialization and deserialization logic, or disable it if needed.
* </p>
*/
protected Expression serializeDeserializeExpression(Expression expression) {
Expression newExpression = serializeDeserialize(
expression,
PlanStreamOutput::writeNamedWriteable,
in -> in.readNamedWriteable(Expression.class),
testCase.getConfiguration() // The configuration query should be == to the source text of the function for this to work
);
// Fields use synthetic sources, which can't be serialized. So we replace with the originals instead.
var dummyChildren = newExpression.children()
.stream()
.<Expression>map(c -> new Literal(Source.EMPTY, "anything that won't match any test case", c.dataType()))
.toList();
// We first replace them with other unrelated expressions to force a replace, as some replaceChildren() will check for equality
return newExpression.replaceChildrenSameSize(dummyChildren).replaceChildrenSameSize(expression.children());
}
protected final Expression buildLiteralExpression(TestCaseSupplier.TestCase testCase) {
assumeTrue("Data can't be converted to literals", testCase.canGetDataAsLiterals());
return build(testCase.getSource(), testCase.getDataAsLiterals());
return randomSerializeDeserialize(build(testCase.getSource(), testCase.getDataAsLiterals()));
}
public static EvaluatorMapper.ToEvaluator toEvaluator() {
@@ -711,7 +748,7 @@
}
public void testSerializationOfSimple() {
assertSerialization(buildFieldExpression(testCase));
assertSerialization(buildFieldExpression(testCase), testCase.getConfiguration());
}
/**

View file

@@ -401,8 +401,8 @@ public abstract class AbstractScalarFunctionTestCase extends AbstractFunctionTes
evaluator + "[lhs=Attribute[channel=0], rhs=Attribute[channel=1]]",
dataType,
is(nullValue())
).withWarning("Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.")
.withWarning("Line -1:-1: java.lang.ArithmeticException: " + typeNameOverflow)
).withWarning("Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.")
.withWarning("Line 1:1: java.lang.ArithmeticException: " + typeNameOverflow)
);
}
}

View file

@@ -16,12 +16,15 @@ import org.elasticsearch.geo.ShapeTestUtils;
import org.elasticsearch.logging.LogManager;
import org.elasticsearch.logging.Logger;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.esql.EsqlTestUtils;
import org.elasticsearch.xpack.esql.core.expression.Expression;
import org.elasticsearch.xpack.esql.core.expression.Literal;
import org.elasticsearch.xpack.esql.core.expression.MapExpression;
import org.elasticsearch.xpack.esql.core.tree.Location;
import org.elasticsearch.xpack.esql.core.tree.Source;
import org.elasticsearch.xpack.esql.core.type.DataType;
import org.elasticsearch.xpack.esql.core.util.NumericUtils;
import org.elasticsearch.xpack.esql.session.Configuration;
import org.elasticsearch.xpack.versionfield.Version;
import org.hamcrest.Matcher;
@@ -54,6 +57,9 @@ public record TestCaseSupplier(String name, List<DataType> types, Supplier<TestC
implements
Supplier<TestCaseSupplier.TestCase> {
public static final Source TEST_SOURCE = new Source(new Location(1, 0), "source");
public static final Configuration TEST_CONFIGURATION = EsqlTestUtils.configuration(TEST_SOURCE.text());
private static final Logger logger = LogManager.getLogger(TestCaseSupplier.class);
/**
@@ -1388,6 +1394,10 @@
* The {@link Source} this test case should be run with
*/
private final Source source;
/**
* The {@link Configuration} this test case should use
*/
private final Configuration configuration;
/**
* The parameter values and types to pass into the function for this test run
*/
@@ -1490,7 +1500,8 @@
Object extra,
boolean canBuildEvaluator
) {
this.source = Source.EMPTY;
this.source = TEST_SOURCE;
this.configuration = TEST_CONFIGURATION;
this.data = data;
this.evaluatorToString = evaluatorToString;
this.expectedType = expectedType == null ? null : expectedType.noText();
@@ -1510,6 +1521,10 @@
return source;
}
public Configuration getConfiguration() {
return configuration;
}
public List<TypedData> getData() {
return data;
}

View file

@@ -241,7 +241,7 @@ public class TopTests extends AbstractAggregationTestCase {
new TestCaseSupplier.TypedData(0, DataType.INTEGER, "limit").forceLiteral(),
new TestCaseSupplier.TypedData(new BytesRef("desc"), DataType.KEYWORD, "order").forceLiteral()
),
"Limit must be greater than 0 in [], found [0]"
"Limit must be greater than 0 in [source], found [0]"
)
),
new TestCaseSupplier(
@@ -252,7 +252,7 @@
new TestCaseSupplier.TypedData(2, DataType.INTEGER, "limit").forceLiteral(),
new TestCaseSupplier.TypedData(new BytesRef("wrong-order"), DataType.KEYWORD, "order").forceLiteral()
),
"Invalid order value in [], expected [ASC, DESC] but got [wrong-order]"
"Invalid order value in [source], expected [ASC, DESC] but got [wrong-order]"
)
),
new TestCaseSupplier(
@@ -263,7 +263,7 @@
new TestCaseSupplier.TypedData(null, DataType.INTEGER, "limit").forceLiteral(),
new TestCaseSupplier.TypedData(new BytesRef("desc"), DataType.KEYWORD, "order").forceLiteral()
),
"second argument of [] cannot be null, received [limit]"
"second argument of [source] cannot be null, received [limit]"
)
),
new TestCaseSupplier(
@@ -274,7 +274,7 @@
new TestCaseSupplier.TypedData(1, DataType.INTEGER, "limit").forceLiteral(),
new TestCaseSupplier.TypedData(null, DataType.KEYWORD, "order").forceLiteral()
),
"third argument of [] cannot be null, received [order]"
"third argument of [source] cannot be null, received [order]"
)
)
)
@@ -317,4 +317,10 @@
);
});
}
@Override
protected Expression serializeDeserializeExpression(Expression expression) {
// TODO: This aggregation doesn't serialize the Source, and must be fixed.
return expression;
}
}

View file

@@ -19,11 +19,13 @@ import org.elasticsearch.xpack.esql.core.tree.Source;
import org.elasticsearch.xpack.esql.core.type.DataType;
import org.elasticsearch.xpack.esql.expression.function.FunctionName;
import org.elasticsearch.xpack.esql.expression.function.TestCaseSupplier;
import org.elasticsearch.xpack.esql.io.stream.PlanStreamOutput;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;
import static org.elasticsearch.xpack.esql.SerializationTestUtils.serializeDeserialize;
import static org.elasticsearch.xpack.esql.core.type.DataType.BOOLEAN;
import static org.elasticsearch.xpack.esql.core.type.DataType.KEYWORD;
import static org.elasticsearch.xpack.esql.core.type.DataType.UNSUPPORTED;
@@ -83,4 +85,19 @@ public class MatchTests extends AbstractMatchFullTextFunctionTests {
}
return match;
}
/**
* Copy of the overridden method that doesn't check for children size, as the {@code options} child isn't serialized in Match.
*/
@Override
protected Expression serializeDeserializeExpression(Expression expression) {
Expression newExpression = serializeDeserialize(
expression,
PlanStreamOutput::writeNamedWriteable,
in -> in.readNamedWriteable(Expression.class),
testCase.getConfiguration() // The configuration query should be == to the source text of the function for this to work
);
// Fields use synthetic sources, which can't be serialized. So we use the originals instead.
return newExpression.replaceChildren(expression.children());
}
}
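
The overridden method above, like its parent, relies on a serialize/deserialize round trip preserving the expression's behavior. Elasticsearch uses its own wire format (PlanStreamOutput and named writeables), so the following is only a generic illustration of the round-trip-equality idea, using java.io serialization as a stand-in:

import java.io.*;

public class RoundTripSketch {
    // Generic shape of a serialize/deserialize round-trip check; Elasticsearch uses its own
    // wire format (StreamOutput/StreamInput plus named writeables), java.io is just a stand-in.
    static <T extends Serializable> T roundTrip(T value) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(value);
        }
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            @SuppressWarnings("unchecked")
            T copy = (T) in.readObject();
            return copy;
        }
    }

    public static void main(String[] args) throws Exception {
        String original = "match(title, \"query\")"; // illustrative payload
        String copy = roundTrip(original);
        System.out.println(original.equals(copy)); // a round trip must preserve equality
    }
}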

View file

@@ -8,7 +8,6 @@
package org.elasticsearch.xpack.esql.expression.function.scalar;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.xpack.esql.EsqlTestUtils;
import org.elasticsearch.xpack.esql.core.expression.Expression;
import org.elasticsearch.xpack.esql.core.tree.Source;
import org.elasticsearch.xpack.esql.core.util.StringUtils;
@@ -27,10 +26,22 @@
@Override
protected Expression build(Source source, List<Expression> args) {
return buildWithConfiguration(source, args, EsqlTestUtils.TEST_CFG);
return buildWithConfiguration(source, args, testCase.getConfiguration());
}
static Configuration randomConfiguration() {
public void testSerializationWithConfiguration() {
Configuration config = randomConfiguration();
Expression expr = buildWithConfiguration(testCase.getSource(), testCase.getDataAsFields(), config);
assertSerialization(expr, config);
Configuration differentConfig = randomValueOtherThan(config, AbstractConfigurationFunctionTestCase::randomConfiguration);
Expression differentExpr = buildWithConfiguration(testCase.getSource(), testCase.getDataAsFields(), differentConfig);
assertNotEquals(expr, differentExpr);
}
private static Configuration randomConfiguration() {
// TODO: Randomize the query and maybe the pragmas.
return new Configuration(
randomZone(),
@@ -47,19 +58,4 @@
randomBoolean()
);
}
public void testSerializationWithConfiguration() {
Configuration config = randomConfiguration();
Expression expr = buildWithConfiguration(testCase.getSource(), testCase.getDataAsFields(), config);
assertSerialization(expr, config);
Configuration differentConfig;
do {
differentConfig = randomConfiguration();
} while (config.equals(differentConfig));
Expression differentExpr = buildWithConfiguration(testCase.getSource(), testCase.getDataAsFields(), differentConfig);
assertFalse(expr.equals(differentExpr));
}
}
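
The diff above also swaps a hand-rolled do/while loop for randomValueOtherThan. In spirit that helper just re-samples until the result differs from the value given; a generic sketch of the idea (not the actual ESTestCase implementation):

import java.util.Objects;
import java.util.Random;
import java.util.function.Supplier;

public class RandomOtherThanSketch {
    // Re-sample until the supplier produces something not equal to `other`.
    static <T> T randomValueOtherThan(T other, Supplier<T> supplier) {
        T value;
        do {
            value = supplier.get();
        } while (Objects.equals(value, other));
        return value;
    }

    public static void main(String[] args) {
        Random random = new Random();
        int other = 3;
        int picked = randomValueOtherThan(other, () -> random.nextInt(10));
        System.out.println(picked); // never 3
    }
}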

View file

@@ -75,4 +75,10 @@ public class FromAggregateMetricDoubleTests extends AbstractScalarFunctionTestCa
)
);
}
@Override
protected Expression serializeDeserializeExpression(Expression expression) {
// AggregateMetricDoubleLiteral can't be serialized when it's a literal
return expression;
}
}

View file

@@ -49,8 +49,8 @@ public class ToCartesianPointTests extends AbstractScalarFunctionTestCase {
bytesRef -> {
var exception = expectThrows(Exception.class, () -> CARTESIAN.wktToWkb(bytesRef.utf8ToString()));
return List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: " + exception
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: " + exception
);
}
);

View file

@@ -50,8 +50,8 @@ public class ToCartesianShapeTests extends AbstractScalarFunctionTestCase {
bytesRef -> {
var exception = expectThrows(Exception.class, () -> CARTESIAN.wktToWkb(bytesRef.utf8ToString()));
return List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: " + exception
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: " + exception
);
}
);

View file

@@ -70,8 +70,8 @@ public class ToDateNanosTests extends AbstractScalarFunctionTestCase {
Long.MIN_VALUE,
-1L,
List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: java.lang.IllegalArgumentException: Nanosecond dates before 1970-01-01T00:00:00.000Z are not supported."
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: java.lang.IllegalArgumentException: Nanosecond dates before 1970-01-01T00:00:00.000Z are not supported."
)
);
TestCaseSupplier.forUnaryUnsignedLong(
@@ -91,8 +91,8 @@ public class ToDateNanosTests extends AbstractScalarFunctionTestCase {
BigInteger.valueOf(Long.MAX_VALUE).add(BigInteger.TWO),
UNSIGNED_LONG_MAX,
bi -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + bi + "] out of [long] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + bi + "] out of [long] range"
)
);
TestCaseSupplier.forUnaryDouble(
@@ -103,8 +103,8 @@ public class ToDateNanosTests extends AbstractScalarFunctionTestCase {
Double.NEGATIVE_INFINITY,
-Double.MIN_VALUE,
d -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: java.lang.IllegalArgumentException: Nanosecond dates before 1970-01-01T00:00:00.000Z are not supported."
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: java.lang.IllegalArgumentException: Nanosecond dates before 1970-01-01T00:00:00.000Z are not supported."
)
);
TestCaseSupplier.forUnaryDouble(
@@ -115,8 +115,8 @@ public class ToDateNanosTests extends AbstractScalarFunctionTestCase {
9.223372036854777E18, // a "convenient" value larger than `(double) Long.MAX_VALUE` (== ...776E18)
Double.POSITIVE_INFINITY,
d -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [long] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [long] range"
)
);
TestCaseSupplier.forUnaryStrings(
@@ -125,8 +125,8 @@ public class ToDateNanosTests extends AbstractScalarFunctionTestCase {
DataType.DATE_NANOS,
bytesRef -> null,
bytesRef -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: java.lang.IllegalArgumentException: "
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: java.lang.IllegalArgumentException: "
+ (bytesRef.utf8ToString().isEmpty()
? "cannot parse empty datetime"
: ("failed to parse date field [" + bytesRef.utf8ToString() + "] with format [strict_date_optional_time_nanos]"))

View file

@@ -83,4 +83,10 @@ public class ToDatePeriodTests extends AbstractScalarFunctionTestCase {
public void testSerializationOfSimple() {
assertTrue("Serialization test does not apply", true);
}
+ @Override
+ protected Expression serializeDeserializeExpression(Expression expression) {
+ // Can't be serialized
+ return expression;
+ }
}
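
Note: this serializeDeserializeExpression override (also added to ToTimeDurationTests and DateDiffTests below) opts the test out of the wire round-trip that the base class presumably performs by default, since DATE_PERIOD and TIME_DURATION values are folded during analysis and never cross the wire. A runnable sketch of the opt-out pattern under those assumptions; every name below except the override itself is hypothetical:

import java.util.function.UnaryOperator;

class SerializationOptOutSketch {
    interface Expression {}

    record Literal(String value) implements Expression {}

    // Stands in for the abstract test base class: round-trips by default.
    static class BaseCase {
        // Pretend wire copy; the real base class would use stream serialization.
        protected UnaryOperator<Expression> wireCopy = e -> new Literal("copied");

        protected Expression serializeDeserializeExpression(Expression expression) {
            return wireCopy.apply(expression); // default: exercise the wire format
        }
    }

    // Stands in for ToDatePeriodTests: the type can't be serialized, so skip the copy.
    static class NonSerializableCase extends BaseCase {
        @Override
        protected Expression serializeDeserializeExpression(Expression expression) {
            // Can't be serialized
            return expression;
        }
    }

    public static void main(String[] args) {
        Expression original = new Literal("1 day");
        System.out.println(new BaseCase().serializeDeserializeExpression(original));            // Literal[value=copied]
        System.out.println(new NonSerializableCase().serializeDeserializeExpression(original)); // Literal[value=1 day]
    }
}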

View file

@@ -81,8 +81,8 @@ public class ToDatetimeTests extends AbstractScalarFunctionTestCase {
BigInteger.valueOf(Long.MAX_VALUE).add(BigInteger.TWO),
UNSIGNED_LONG_MAX,
bi -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + bi + "] out of [long] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + bi + "] out of [long] range"
)
);
TestCaseSupplier.forUnaryDouble(
@@ -93,8 +93,8 @@ public class ToDatetimeTests extends AbstractScalarFunctionTestCase {
Double.NEGATIVE_INFINITY,
-9.223372036854777E18, // a "convenient" value smaller than `(double) Long.MIN_VALUE` (== ...776E18)
d -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [long] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [long] range"
)
);
TestCaseSupplier.forUnaryDouble(
@@ -105,8 +105,8 @@ public class ToDatetimeTests extends AbstractScalarFunctionTestCase {
9.223372036854777E18, // a "convenient" value larger than `(double) Long.MAX_VALUE` (== ...776E18)
Double.POSITIVE_INFINITY,
d -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [long] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [long] range"
)
);
TestCaseSupplier.forUnaryStrings(
@@ -115,8 +115,8 @@ public class ToDatetimeTests extends AbstractScalarFunctionTestCase {
DataType.DATETIME,
bytesRef -> null,
bytesRef -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: java.lang.IllegalArgumentException: "
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: java.lang.IllegalArgumentException: "
+ (bytesRef.utf8ToString().isEmpty()
? "cannot parse empty datetime"
: ("failed to parse date field [" + bytesRef.utf8ToString() + "] with format [strict_date_optional_time]"))
@@ -151,8 +151,8 @@ public class ToDatetimeTests extends AbstractScalarFunctionTestCase {
DataType.DATETIME,
bytesRef -> null,
bytesRef -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: java.lang.IllegalArgumentException: failed to parse date field ["
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: java.lang.IllegalArgumentException: failed to parse date field ["
+ ((BytesRef) bytesRef).utf8ToString()
+ "] with format [strict_date_optional_time]"
)
@@ -171,8 +171,8 @@ public class ToDatetimeTests extends AbstractScalarFunctionTestCase {
DataType.DATETIME,
bytesRef -> null,
bytesRef -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: java.lang.IllegalArgumentException: failed to parse date field ["
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: java.lang.IllegalArgumentException: failed to parse date field ["
+ ((BytesRef) bytesRef).utf8ToString()
+ "] with format [strict_date_optional_time]"
)

View file

@@ -67,8 +67,8 @@ public class ToDegreesTests extends AbstractScalarFunctionTestCase {
double deg = Math.toDegrees(d);
ArrayList<String> warnings = new ArrayList<>(2);
if (Double.isNaN(deg) || Double.isInfinite(deg)) {
warnings.add("Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.");
warnings.add("Line -1:-1: java.lang.ArithmeticException: not a finite double number: " + deg);
warnings.add("Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.");
warnings.add("Line 1:1: java.lang.ArithmeticException: not a finite double number: " + deg);
}
return warnings;
});
@@ -84,8 +84,8 @@ public class ToDegreesTests extends AbstractScalarFunctionTestCase {
DataType.DOUBLE,
d -> null,
d -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: java.lang.ArithmeticException: not a finite double number: " + ((double) d > 0 ? "Infinity" : "-Infinity")
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: java.lang.ArithmeticException: not a finite double number: " + ((double) d > 0 ? "Infinity" : "-Infinity")
)
);

View file

@@ -65,8 +65,8 @@ public class ToDoubleTests extends AbstractScalarFunctionTestCase {
() -> EsqlDataTypeConverter.stringToDouble(bytesRef.utf8ToString())
);
return List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: " + exception
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: " + exception
);
});
TestCaseSupplier.forUnaryUnsignedLong(

View file

@@ -44,8 +44,8 @@ public class ToGeoPointTests extends AbstractScalarFunctionTestCase {
TestCaseSupplier.forUnaryStrings(suppliers, evaluatorName.apply("FromString"), DataType.GEO_POINT, bytesRef -> null, bytesRef -> {
var exception = expectThrows(Exception.class, () -> GEO.wktToWkb(bytesRef.utf8ToString()));
return List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: " + exception
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: " + exception
);
});
// strings that are geo point representations

View file

@@ -45,8 +45,8 @@ public class ToGeoShapeTests extends AbstractScalarFunctionTestCase {
TestCaseSupplier.forUnaryStrings(suppliers, evaluatorName.apply("FromString"), DataType.GEO_SHAPE, bytesRef -> null, bytesRef -> {
var exception = expectThrows(Exception.class, () -> GEO.wktToWkb(bytesRef.utf8ToString()));
return List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: " + exception
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: " + exception
);
});
// strings that are geo_shape representations

View file

@@ -47,8 +47,8 @@ public class ToIPTests extends AbstractScalarFunctionTestCase {
DataType.IP,
bytesRef -> null,
bytesRef -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: java.lang.IllegalArgumentException: '" + bytesRef.utf8ToString() + "' is not an IP string literal."
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: java.lang.IllegalArgumentException: '" + bytesRef.utf8ToString() + "' is not an IP string literal."
)
);

View file

@@ -60,8 +60,8 @@ public class ToIntegerTests extends AbstractScalarFunctionTestCase {
DataType.INTEGER,
l -> null,
l -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: ["
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: ["
+ ((Instant) l).toEpochMilli()
+ "] out of [integer] range"
)
@@ -73,8 +73,8 @@ public class ToIntegerTests extends AbstractScalarFunctionTestCase {
DataType.INTEGER,
bytesRef -> null,
bytesRef -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: Cannot parse number ["
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: Cannot parse number ["
+ bytesRef.utf8ToString()
+ "]"
)
@@ -98,8 +98,8 @@ public class ToIntegerTests extends AbstractScalarFunctionTestCase {
Double.NEGATIVE_INFINITY,
Integer.MIN_VALUE - 1d,
d -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [integer] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [integer] range"
)
);
// from doubles outside Integer's range, positive
@@ -111,8 +111,8 @@ public class ToIntegerTests extends AbstractScalarFunctionTestCase {
Integer.MAX_VALUE + 1d,
Double.POSITIVE_INFINITY,
d -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [integer] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [integer] range"
)
);
@@ -135,8 +135,8 @@ public class ToIntegerTests extends AbstractScalarFunctionTestCase {
BigInteger.valueOf(Integer.MAX_VALUE).add(BigInteger.ONE),
UNSIGNED_LONG_MAX,
ul -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + ul + "] out of [integer] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + ul + "] out of [integer] range"
)
);
@@ -160,8 +160,8 @@ public class ToIntegerTests extends AbstractScalarFunctionTestCase {
Long.MIN_VALUE,
Integer.MIN_VALUE - 1L,
l -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + l + "] out of [integer] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + l + "] out of [integer] range"
)
);
@@ -174,8 +174,8 @@ public class ToIntegerTests extends AbstractScalarFunctionTestCase {
Integer.MAX_VALUE + 1L,
Long.MAX_VALUE,
l -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + l + "] out of [integer] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + l + "] out of [integer] range"
)
);
@@ -232,8 +232,8 @@ public class ToIntegerTests extends AbstractScalarFunctionTestCase {
DataType.INTEGER,
bytesRef -> null,
bytesRef -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: Cannot parse number ["
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: Cannot parse number ["
+ ((BytesRef) bytesRef).utf8ToString()
+ "]"
)
@@ -255,8 +255,8 @@ public class ToIntegerTests extends AbstractScalarFunctionTestCase {
DataType.INTEGER,
bytesRef -> null,
bytesRef -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: Cannot parse number ["
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: Cannot parse number ["
+ ((BytesRef) bytesRef).utf8ToString()
+ "]"
)

View file

@@ -59,8 +59,8 @@ public class ToLongTests extends AbstractScalarFunctionTestCase {
DataType.LONG,
bytesRef -> null,
bytesRef -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: Cannot parse number ["
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: Cannot parse number ["
+ bytesRef.utf8ToString()
+ "]"
)
@@ -84,8 +84,8 @@ public class ToLongTests extends AbstractScalarFunctionTestCase {
Double.NEGATIVE_INFINITY,
Long.MIN_VALUE - 1d,
d -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [long] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [long] range"
)
);
// from doubles outside long's range, positive
@@ -97,8 +97,8 @@ public class ToLongTests extends AbstractScalarFunctionTestCase {
Long.MAX_VALUE + 1d,
Double.POSITIVE_INFINITY,
d -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [long] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [long] range"
)
);
@@ -120,8 +120,8 @@ public class ToLongTests extends AbstractScalarFunctionTestCase {
BigInteger.valueOf(Long.MAX_VALUE).add(BigInteger.ONE),
UNSIGNED_LONG_MAX,
ul -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + ul + "] out of [long] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + ul + "] out of [long] range"
)
);
@@ -190,8 +190,8 @@ public class ToLongTests extends AbstractScalarFunctionTestCase {
DataType.LONG,
bytesRef -> null,
bytesRef -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: Cannot parse number ["
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: Cannot parse number ["
+ ((BytesRef) bytesRef).utf8ToString()
+ "]"
)
@@ -213,8 +213,8 @@ public class ToLongTests extends AbstractScalarFunctionTestCase {
DataType.LONG,
bytesRef -> null,
bytesRef -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: Cannot parse number ["
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: Cannot parse number ["
+ ((BytesRef) bytesRef).utf8ToString()
+ "]"
)

View file

@@ -82,4 +82,10 @@ public class ToTimeDurationTests extends AbstractScalarFunctionTestCase {
public void testSerializationOfSimple() {
assertTrue("Serialization test does not apply", true);
}
+ @Override
+ protected Expression serializeDeserializeExpression(Expression expression) {
+ // Can't be serialized
+ return expression;
+ }
}

View file

@@ -74,8 +74,8 @@ public class ToUnsignedLongTests extends AbstractScalarFunctionTestCase {
// this shortcut here.
Exception e = expectThrows(NumberFormatException.class, () -> new BigDecimal(bytesRef.utf8ToString()));
return List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: java.lang.NumberFormatException: " + e.getMessage()
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: java.lang.NumberFormatException: " + e.getMessage()
);
});
// from doubles within unsigned_long's range
@@ -97,8 +97,8 @@ public class ToUnsignedLongTests extends AbstractScalarFunctionTestCase {
Double.NEGATIVE_INFINITY,
-1d,
d -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [unsigned_long] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [unsigned_long] range"
)
);
// from doubles outside Long's range, positive
@@ -110,8 +110,8 @@ public class ToUnsignedLongTests extends AbstractScalarFunctionTestCase {
UNSIGNED_LONG_MAX_AS_DOUBLE + 10e5,
Double.POSITIVE_INFINITY,
d -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [unsigned_long] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + d + "] out of [unsigned_long] range"
)
);
@@ -134,8 +134,8 @@ public class ToUnsignedLongTests extends AbstractScalarFunctionTestCase {
Long.MIN_VALUE,
-1L,
l -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + l + "] out of [unsigned_long] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + l + "] out of [unsigned_long] range"
)
);
@@ -158,8 +158,8 @@ public class ToUnsignedLongTests extends AbstractScalarFunctionTestCase {
Integer.MIN_VALUE,
-1,
l -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + l + "] out of [unsigned_long] range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [" + l + "] out of [unsigned_long] range"
)
);
@@ -216,8 +216,8 @@ public class ToUnsignedLongTests extends AbstractScalarFunctionTestCase {
DataType.UNSIGNED_LONG,
bytesRef -> null,
bytesRef -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: ["
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: ["
+ ((BytesRef) bytesRef).utf8ToString()
+ "] out of [unsigned_long] range"
)
@@ -239,8 +239,8 @@ public class ToUnsignedLongTests extends AbstractScalarFunctionTestCase {
DataType.UNSIGNED_LONG,
bytesRef -> null,
bytesRef -> List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: ["
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: ["
+ ((BytesRef) bytesRef).utf8ToString()
+ "] out of [unsigned_long] range"
)

View file

@@ -91,7 +91,7 @@ public class DateDiffTests extends AbstractScalarFunctionTestCase {
zdtStart2,
zdtEnd2,
"nanoseconds",
"Line -1:-1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [300000000000] out of [integer] range"
"Line 1:1: org.elasticsearch.xpack.esql.core.InvalidArgumentException: [300000000000] out of [integer] range"
)
);
@@ -241,7 +241,7 @@ public class DateDiffTests extends AbstractScalarFunctionTestCase {
+ "endTimestamp=Attribute[channel=2]]",
DataType.INTEGER,
equalTo(null)
).withWarning("Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.")
).withWarning("Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.")
.withWarning(warning)
),
// Units as text case
@@ -258,7 +258,7 @@ public class DateDiffTests extends AbstractScalarFunctionTestCase {
+ "endTimestamp=Attribute[channel=2]]",
DataType.INTEGER,
equalTo(null)
).withWarning("Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.")
).withWarning("Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.")
.withWarning(warning)
)
);
@@ -268,4 +268,10 @@ public class DateDiffTests extends AbstractScalarFunctionTestCase {
protected Expression build(Source source, List<Expression> args) {
return new DateDiff(source, args.get(0), args.get(1), args.get(2));
}
+ @Override
+ protected Expression serializeDeserializeExpression(Expression expression) {
+ // TODO: This function doesn't serialize the Source, and must be fixed.
+ return expression;
+ }
}

View file

@@ -93,12 +93,12 @@ public class DateExtractTests extends AbstractConfigurationFunctionTestCase {
"DateExtractMillisEvaluator[value=Attribute[channel=1], chronoField=Attribute[channel=0], zone=Z]",
DataType.LONG,
is(nullValue())
).withWarning("Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.")
).withWarning("Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.")
.withWarning(
"Line -1:-1: java.lang.IllegalArgumentException: "
"Line 1:1: java.lang.IllegalArgumentException: "
+ "No enum constant java.time.temporal.ChronoField.NOT A UNIT"
)
- .withFoldingException(InvalidArgumentException.class, "invalid date field for []: not a unit")
+ .withFoldingException(InvalidArgumentException.class, "invalid date field for [source]: not a unit")
)
)
);

View file

@@ -117,13 +117,13 @@ public class DateParseTests extends AbstractScalarFunctionTestCase {
"DateParseEvaluator[val=Attribute[channel=1], formatter=Attribute[channel=0]]",
DataType.DATETIME,
is(nullValue())
).withWarning("Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.")
).withWarning("Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.")
.withWarning(
"Line -1:-1: java.lang.IllegalArgumentException: Invalid format: " + "[not a format]: Unknown pattern letter: o"
"Line 1:1: java.lang.IllegalArgumentException: Invalid format: [not a format]: Unknown pattern letter: o"
)
.withFoldingException(
InvalidArgumentException.class,
"invalid date pattern for []: Invalid format: [not a format]: Unknown pattern letter: o"
"invalid date pattern for [source]: Invalid format: [not a format]: Unknown pattern letter: o"
)
),
new TestCaseSupplier(
@@ -137,9 +137,9 @@ public class DateParseTests extends AbstractScalarFunctionTestCase {
"DateParseEvaluator[val=Attribute[channel=1], formatter=Attribute[channel=0]]",
DataType.DATETIME,
is(nullValue())
).withWarning("Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.")
).withWarning("Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.")
.withWarning(
"Line -1:-1: java.lang.IllegalArgumentException: "
"Line 1:1: java.lang.IllegalArgumentException: "
+ "failed to parse date field [not a date] with format [yyyy-MM-dd]"
)
)

View file

@@ -10,7 +10,6 @@ package org.elasticsearch.xpack.esql.expression.function.scalar.date;
import com.carrotsearch.randomizedtesting.annotations.Name;
import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;
- import org.elasticsearch.xpack.esql.EsqlTestUtils;
import org.elasticsearch.xpack.esql.core.expression.Expression;
import org.elasticsearch.xpack.esql.core.tree.Source;
import org.elasticsearch.xpack.esql.core.type.DataType;
@@ -42,7 +41,7 @@ public class NowTests extends AbstractConfigurationFunctionTestCase {
List.of(),
matchesPattern("LiteralsEvaluator\\[lit=.*]"),
DataType.DATETIME,
- equalTo(EsqlTestUtils.TEST_CFG.now().toInstant().toEpochMilli())
+ equalTo(TestCaseSupplier.TEST_CONFIGURATION.now().toInstant().toEpochMilli())
)
)
)
@@ -56,7 +55,7 @@ public class NowTests extends AbstractConfigurationFunctionTestCase {
@Override
protected Matcher<Object> allNullsMatcher() {
- return equalTo(EsqlTestUtils.TEST_CFG.now().toInstant().toEpochMilli());
+ return equalTo(testCase.getConfiguration().now().toInstant().toEpochMilli());
}
}
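
Note: NowTests now reads now() from the configuration attached to the test case (TestCaseSupplier.TEST_CONFIGURATION when building cases, testCase.getConfiguration() when asserting) instead of the shared EsqlTestUtils.TEST_CFG, so the expected epoch millis come from the same configuration that drives evaluation. A runnable sketch of that idea; all names below are hypothetical:

import java.time.Instant;

class ConfigDrivenNowSketch {
    record Configuration(Instant now) {}

    record TestCase(Configuration configuration) {
        long evaluatedNow() {
            return configuration.now().toEpochMilli(); // what the evaluator produces
        }

        long expectedNow() {
            return configuration.now().toEpochMilli(); // what the assertion expects
        }
    }

    public static void main(String[] args) {
        TestCase testCase = new TestCase(new Configuration(Instant.parse("2025-02-19T00:00:00Z")));
        // Expectation and evaluation share one clock, so they agree by construction:
        System.out.println(testCase.evaluatedNow() == testCase.expectedNow()); // true
    }
}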

View file

@@ -38,8 +38,8 @@ public class AcosTests extends AbstractScalarFunctionTestCase {
Double.NEGATIVE_INFINITY,
Math.nextDown(-1d),
List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: java.lang.ArithmeticException: Acos input out of range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: java.lang.ArithmeticException: Acos input out of range"
)
)
);
@@ -51,8 +51,8 @@ public class AcosTests extends AbstractScalarFunctionTestCase {
Math.nextUp(1d),
Double.POSITIVE_INFINITY,
List.of(
"Line -1:-1: evaluation of [] failed, treating result as null. Only first 20 failures recorded.",
"Line -1:-1: java.lang.ArithmeticException: Acos input out of range"
"Line 1:1: evaluation of [source] failed, treating result as null. Only first 20 failures recorded.",
"Line 1:1: java.lang.ArithmeticException: Acos input out of range"
)
)
);

Some files were not shown because too many files have changed in this diff.