Merge main into multi-project

Tim Vernum 2024-11-11 16:20:38 +11:00
commit 17c27bc42b
179 changed files with 6844 additions and 1954 deletions


@ -15,6 +15,7 @@ sles-15.2
sles-15.3
sles-15.4
sles-15.5
sles-15.6
# These OSes are deprecated and filtered starting with 8.0.0, but need to be excluded
# for PR checks


@ -4,7 +4,7 @@ Elasticsearch is a distributed search and analytics engine, scalable data store
Use cases enabled by Elasticsearch include:
* https://www.elastic.co/search-labs/blog/articles/retrieval-augmented-generation-rag[Retrieval Augmented Generation (RAG)]
* https://www.elastic.co/search-labs/blog/categories/vector-search[Vector search]
* Full-text search
* Logs
@ -17,7 +17,7 @@ Use cases enabled by Elasticsearch include:
To learn more about Elasticsearch's features and capabilities, see our
https://www.elastic.co/products/elasticsearch[product page].
Information on https://www.elastic.co/search-labs/blog/categories/ml-research[machine learning innovations] and the latest https://www.elastic.co/search-labs/blog/categories/lucene[Lucene contributions from Elastic] can be found in https://www.elastic.co/search-labs[Search Labs].
[[get-started]]
== Get started
@ -27,20 +27,20 @@ https://www.elastic.co/cloud/as-a-service[Elasticsearch Service on Elastic
Cloud].
If you prefer to install and manage Elasticsearch yourself, you can download
the latest version from
https://www.elastic.co/downloads/elasticsearch[elastic.co/downloads/elasticsearch].
=== Run Elasticsearch locally
////
IMPORTANT: This content is replicated in the Elasticsearch repo. See `run-elasticsearch-locally.asciidoc`.
Ensure both files are in sync.
https://github.com/elastic/start-local is the source of truth.
////
[WARNING]
====
DO NOT USE THESE INSTRUCTIONS FOR PRODUCTION DEPLOYMENTS.
This setup is intended for local development and testing only.
@ -93,7 +93,7 @@ Use this key to connect to Elasticsearch with a https://www.elastic.co/guide/en/
From the `elastic-start-local` folder, check the connection to Elasticsearch using `curl`:
[source,sh]
----
source .env
curl $ES_LOCAL_URL -H "Authorization: ApiKey ${ES_LOCAL_API_KEY}"
----
@ -101,12 +101,12 @@ curl $ES_LOCAL_URL -H "Authorization: ApiKey ${ES_LOCAL_API_KEY}"
=== Send requests to Elasticsearch
You send data and other requests to Elasticsearch through REST APIs.
You can interact with Elasticsearch using any client that sends HTTP requests,
such as the https://www.elastic.co/guide/en/elasticsearch/client/index.html[Elasticsearch
language clients] and https://curl.se[curl].
==== Using curl
Here's an example curl command to create a new Elasticsearch index, using basic auth:
@ -149,19 +149,19 @@ print(client.info())
==== Using the Dev Tools Console
Kibana's developer console provides an easy way to experiment and test requests.
To access the console, open Kibana, then go to **Management** > **Dev Tools**.
**Add data**
You index data into Elasticsearch by sending JSON objects (documents) through the REST APIs.
Whether you have structured or unstructured text, numerical data, or geospatial data,
Elasticsearch efficiently stores and indexes it in a way that supports fast searches.
For timestamped data such as logs and metrics, you typically add documents to a
data stream made up of multiple auto-generated backing indices.
To add a single document to an index, submit an HTTP POST request that targets the index.
----
POST /customer/_doc/1
@ -171,11 +171,11 @@ POST /customer/_doc/1
}
----
This request automatically creates the `customer` index if it doesn't exist,
adds a new document that has an ID of 1, and
stores and indexes the `firstname` and `lastname` fields.
The new document is available immediately from any node in the cluster.
You can retrieve it with a GET request that specifies its document ID:
----
@ -183,7 +183,7 @@ GET /customer/_doc/1
----
To add multiple documents in one request, use the `_bulk` API.
Bulk data must be newline-delimited JSON (NDJSON).
Each line must end in a newline character (`\n`), including the last line.
----
@ -200,15 +200,15 @@ PUT customer/_bulk
**Search**
Indexed documents are available for search in near real-time.
The following search matches all customers with a first name of _Jennifer_
in the `customer` index.
----
GET customer/_search
{
  "query" : {
    "match" : { "firstname": "Jennifer" }
  }
}
----
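For comparison, here is a minimal sketch of the same index-and-search flow using the official Python client; the connection URL, API key, and the `lastname` value are placeholders rather than values taken from this README.

[source,python]
----
from elasticsearch import Elasticsearch

# Placeholder connection details; adjust for your local setup.
client = Elasticsearch("http://localhost:9200", api_key="<your-api-key>")

# Index a document; the `customer` index is created on first use.
client.index(index="customer", id="1",
             document={"firstname": "Jennifer", "lastname": "Doe"})

# Match customers whose first name is Jennifer.
response = client.search(index="customer",
                         query={"match": {"firstname": "Jennifer"}})
print(response["hits"]["hits"])
----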
@ -223,9 +223,9 @@ data streams, or index aliases.
. Go to **Management > Stack Management > Kibana > Data Views**.
. Select **Create data view**.
. Enter a name for the data view and a pattern that matches one or more indices,
such as _customer_.
. Select **Save data view to Kibana**.
To start exploring, go to **Analytics > Discover**.
@ -254,11 +254,6 @@ To build a distribution for another platform, run the related command:
./gradlew :distribution:archives:windows-zip:assemble
----
To build distributions for all supported platforms, run:
----
./gradlew assemble
----
Distributions are output to `distribution/archives`.
To run the test suite, see xref:TESTING.asciidoc[TESTING].
@ -281,7 +276,7 @@ The https://github.com/elastic/elasticsearch-labs[`elasticsearch-labs`] repo con
[[contribute]]
== Contribute
For contribution guidelines, see xref:CONTRIBUTING.md[CONTRIBUTING].
[[questions]]
== Questions? Problems? Suggestions?


@ -0,0 +1,138 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the "Elastic License
* 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
* Public License v 1"; you may not use this file except in compliance with, at
* your election, the "Elastic License 2.0", the "GNU Affero General Public
* License v3.0 only", or the "Server Side Public License, v 1".
*/
package org.elasticsearch.benchmark.indices.resolution;
import org.elasticsearch.action.IndicesRequest;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.DataStream;
import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.Metadata;
import org.elasticsearch.cluster.project.DefaultProjectResolver;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.IndexVersion;
import org.elasticsearch.indices.SystemIndices;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
@State(Scope.Benchmark)
@Fork(3)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@SuppressWarnings("unused") // invoked by benchmarking framework
public class IndexNameExpressionResolverBenchmark {
private static final String DATA_STREAM_PREFIX = "my-ds-";
private static final String INDEX_PREFIX = "my-index-";
@Param(
{
// # data streams | # indices
" 1000| 100",
" 5000| 500",
" 10000| 1000" }
)
public String resourceMix = "100|10";
@Setup
public void setUp() {
final String[] params = resourceMix.split("\\|");
int numDataStreams = toInt(params[0]);
int numIndices = toInt(params[1]);
Metadata.Builder mb = Metadata.builder();
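// Size for all standalone indices plus, for each data stream, its backing indices and the data stream name itself.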
String[] indices = new String[numIndices + numDataStreams * (numIndices + 1)];
int position = 0;
for (int i = 1; i <= numIndices; i++) {
String indexName = INDEX_PREFIX + i;
createIndexMetadata(indexName, mb);
indices[position++] = indexName;
}
for (int i = 1; i <= numDataStreams; i++) {
String dataStreamName = DATA_STREAM_PREFIX + i;
List<Index> backingIndices = new ArrayList<>();
for (int j = 1; j <= numIndices; j++) {
String backingIndexName = DataStream.getDefaultBackingIndexName(dataStreamName, j);
backingIndices.add(createIndexMetadata(backingIndexName, mb).getIndex());
indices[position++] = backingIndexName;
}
indices[position++] = dataStreamName;
mb.put(DataStream.builder(dataStreamName, backingIndices).build());
}
int mid = indices.length / 2;
clusterState = ClusterState.builder(ClusterName.DEFAULT).metadata(mb).build();
resolver = new IndexNameExpressionResolver(
new ThreadContext(Settings.EMPTY),
new SystemIndices(List.of()),
DefaultProjectResolver.INSTANCE
);
indexListRequest = new Request(IndicesOptions.lenientExpandOpenHidden(), indices);
starRequest = new Request(IndicesOptions.lenientExpandOpenHidden(), "*");
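// Replace one concrete name in the middle of the list with a wildcard to benchmark mixed resolution.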
String[] mixed = indices.clone();
mixed[mid] = "my-*";
mixedRequest = new Request(IndicesOptions.lenientExpandOpenHidden(), mixed);
}
private IndexMetadata createIndexMetadata(String indexName, Metadata.Builder mb) {
IndexMetadata indexMetadata = IndexMetadata.builder(indexName)
.settings(Settings.builder().put(IndexMetadata.SETTING_VERSION_CREATED, IndexVersion.current()))
.numberOfShards(1)
.numberOfReplicas(0)
.build();
mb.put(indexMetadata, false);
return indexMetadata;
}
private IndexNameExpressionResolver resolver;
private ClusterState clusterState;
private Request starRequest;
private Request indexListRequest;
private Request mixedRequest;
@Benchmark
public String[] resolveResourcesListToConcreteIndices() {
return resolver.concreteIndexNames(clusterState, indexListRequest);
}
@Benchmark
public String[] resolveAllStarToConcreteIndices() {
return resolver.concreteIndexNames(clusterState, starRequest);
}
@Benchmark
public String[] resolveMixedConcreteIndices() {
return resolver.concreteIndexNames(clusterState, mixedRequest);
}
private int toInt(String v) {
return Integer.parseInt(v.trim());
}
record Request(IndicesOptions indicesOptions, String... indices) implements IndicesRequest {
}
}


@ -13,14 +13,13 @@ import com.avast.gradle.dockercompose.tasks.ComposePull
import com.fasterxml.jackson.databind.JsonNode
import com.fasterxml.jackson.databind.ObjectMapper
import org.elasticsearch.gradle.DistributionDownloadPlugin
import org.elasticsearch.gradle.Version
import org.elasticsearch.gradle.internal.BaseInternalPluginBuildPlugin
import org.elasticsearch.gradle.internal.ResolveAllDependencies
import org.elasticsearch.gradle.internal.info.BuildParams
import org.elasticsearch.gradle.util.GradleUtils
import org.gradle.plugins.ide.eclipse.model.AccessRule
import java.nio.file.Files
@ -89,7 +88,7 @@ class ListExpansion {
// Filters out intermediate patch releases to reduce the load of CI testing
def filterIntermediatePatches = { List<Version> versions ->
  versions.groupBy { "${it.major}.${it.minor}" }.values().collect { it.max() }
}
tasks.register("updateCIBwcVersions") {
@ -101,7 +100,10 @@ tasks.register("updateCIBwcVersions") {
  }
}
def writeBuildkitePipeline = { String outputFilePath,
                               String pipelineTemplatePath,
                               List<ListExpansion> listExpansions,
                               List<StepExpansion> stepExpansions = [] ->
  def outputFile = file(outputFilePath)
  def pipelineTemplate = file(pipelineTemplatePath)
@ -132,7 +134,12 @@ tasks.register("updateCIBwcVersions") {
// Writes a Buildkite pipeline from a template, and replaces $BWC_STEPS with a list of steps, one for each version
// Useful when you need to configure more versions than are allowed in a matrix configuration
def expandBwcSteps = { String outputFilePath, String pipelineTemplatePath, String stepTemplatePath, List<Version> versions ->
  writeBuildkitePipeline(
    outputFilePath,
    pipelineTemplatePath,
    [],
    [new StepExpansion(templatePath: stepTemplatePath, versions: versions, variable: "BWC_STEPS")]
  )
}
doLast {
@ -150,7 +157,11 @@ tasks.register("updateCIBwcVersions") {
  new ListExpansion(versions: filterIntermediatePatches(BuildParams.bwcVersions.unreleasedIndexCompatible), variable: "BWC_LIST"),
],
[
  new StepExpansion(
    templatePath: ".buildkite/pipelines/periodic.bwc.template.yml",
    versions: filterIntermediatePatches(BuildParams.bwcVersions.indexCompatible),
    variable: "BWC_STEPS"
  ),
]
)
@ -302,7 +313,7 @@ allprojects {
if (project.path.startsWith(":x-pack:")) {
  if (project.path.contains("security") || project.path.contains(":ml")) {
    tasks.register('checkPart4') { dependsOn 'check' }
  } else if (project.path == ":x-pack:plugin" || project.path.contains("ql") || project.path.contains("smoke-test")) {
    tasks.register('checkPart3') { dependsOn 'check' }
  } else if (project.path.contains("multi-node")) {
    tasks.register('checkPart5') { dependsOn 'check' }


@ -0,0 +1,6 @@
pr: 114964
summary: Add a `monitor_stats` privilege and allow that privilege for remote cluster privileges
area: Authorization
type: enhancement
issues: []


@ -0,0 +1,6 @@
pr: 115744
summary: Use `SearchStats` instead of field.isAggregatable in data node planning
area: ES|QL
type: bug
issues:
- 115737


@ -0,0 +1,5 @@
pr: 116325
summary: Adjust analyze limit exception to be a `bad_request`
area: Analysis
type: bug
issues: []


@ -0,0 +1,5 @@
pr: 116382
summary: Validate missing shards after the coordinator rewrite
area: Search
type: bug
issues: []


@ -0,0 +1,5 @@
pr: 116478
summary: Semantic text simple partial update
area: Search
type: bug
issues: []


@ -127,10 +127,11 @@ And the following may be the response:
==== Percentiles_bucket implementation
The percentiles are calculated exactly and are not an approximation (unlike the Percentiles Metric). This means
the implementation maintains an in-memory, sorted list of your data to compute the percentiles, before discarding the
data. You may run into memory pressure issues if you attempt to calculate percentiles over many millions of
data-points in a single `percentiles_bucket`.
The Percentile Bucket returns the nearest input data point to the requested percentile, rounding indices toward
positive infinity; it does not interpolate between data points. For example, if there are eight data points and
you request the `50%th` percentile, it will return the `4th` item because `ROUND_UP(.50 * (8-1))` is `4`.
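To make the rounding concrete, here is a small sketch of that selection rule in Python; the sample data and function name are illustrative only.

[source,python]
----
import math

def percentile_bucket(sorted_values, percent):
    # Nearest-rank selection as described above: round the fractional
    # index toward positive infinity, with no interpolation.
    index = math.ceil((percent / 100.0) * (len(sorted_values) - 1))
    return sorted_values[index]

data = [10, 20, 30, 40, 50, 60, 70, 80]   # eight data points
print(percentile_bucket(data, 50))        # index ceil(0.50 * 7) = 4, so this prints 50
----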


@ -9,9 +9,9 @@ You can use {esql} in {kib} to query and aggregate your data, create
visualizations, and set up alerts.
This guide shows you how to use {esql} in Kibana. To follow along with the
queries, load the "Sample web logs" sample data set by selecting **Sample Data**
from the **Integrations** page in {kib}, selecting *Other sample data sets*,
and clicking *Add data* on the *Sample web logs* card.
[discrete]
[[esql-kibana-enable]]
@ -30,9 +30,7 @@ However, users will be able to access existing {esql} artifacts like saved searc
// tag::esql-mode[]
To get started with {esql} in Discover, open the main menu and select
*Discover*. Next, select *Try ES|QL* from the application menu bar.
// end::esql-mode[]
[discrete]
@ -54,8 +52,9 @@ A source command can be followed by one or more <<esql-commands,processing
commands>>. In this query, the processing command is <<esql-limit>>. `LIMIT`
limits the number of rows that are retrieved.
TIP: Click the **ES|QL help** button to open the
in-product reference documentation for all commands and functions or to get
recommended queries that will help you get started.
// tag::autocomplete[]
To make it easier to write queries, auto-complete offers suggestions with
@ -76,7 +75,7 @@ FROM kibana_sample_data_logs | LIMIT 10
====
[discrete]
==== Make your query readable
For readability, you can put each processing command on a new line. The
following query is identical to the previous one:
@ -87,15 +86,12 @@ FROM kibana_sample_data_logs
| LIMIT 10
----
You can do that using the **Add line breaks on pipes** button from the query editor's footer.
image::https://images.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltd5554518309e10f6/672d153cfeb8f9d479ebcc6e/esql-line-breakdown.gif[Automatic line breaks for ES|QL queries]
// tag::compact[]
You can adjust the editor's height by dragging its bottom border to your liking.
// end::compact[]
[discrete]
@ -110,9 +106,7 @@ detailed warning, expand the query bar, and click *warnings*.
==== Query history
You can reuse your recent {esql} queries in the query bar.
In the query bar click *Show recent queries*.
You can then scroll through your recent queries:
@ -220,8 +214,9 @@ FROM kibana_sample_data_logs
=== Analyze and visualize data
Between the query bar and the results table, Discover shows a date histogram
visualization. By default, if the indices you're querying do not contain a `@timestamp`
field, the histogram is not shown. But you can use a custom time field with the `?_tstart`
and `?_tend` parameters to enable it.
The visualization adapts to the query. A query's nature determines the type of
visualization. For example, this query aggregates the total number of bytes per
@ -250,7 +245,7 @@ save button (image:images/esql/esql-icon-save-visualization.svg[]). Once saved
to a dashboard, you'll be taken to the Dashboards page. You can continue to
make changes to the visualization. Click the
options button in the top-right (image:images/esql/esql-icon-options.svg[]) and
select *Edit ES|QL visualization* to open the in-line editor:
image::images/esql/esql-kibana-edit-on-dashboard.png[align="center",width=66%]


@ -72,15 +72,13 @@ least enough RAM to hold the vector data and index structures. To check the
size of the vector data, you can use the <<indices-disk-usage>> API.
Here are estimates for different element types and quantization levels:
+
* `element_type: float`: `num_vectors * num_dimensions * 4`
* `element_type: float` with `quantization: int8`: `num_vectors * (num_dimensions + 4)`
* `element_type: float` with `quantization: int4`: `num_vectors * (num_dimensions/2 + 4)`
* `element_type: float` with `quantization: bbq`: `num_vectors * (num_dimensions/8 + 12)`
* `element_type: byte`: `num_vectors * num_dimensions`
* `element_type: bit`: `num_vectors * (num_dimensions/8)`
If utilizing HNSW, the graph must also be in memory; to estimate the required bytes, use `num_vectors * 4 * HNSW.m`. The
default value for `HNSW.m` is 16, so by default `num_vectors * 4 * 16`.
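As a worked illustration of these formulas (the vector count and dimensions below are made-up figures, not from the original text):

[source,python]
----
# Hypothetical corpus: 1 million vectors of 768 dimensions.
num_vectors = 1_000_000
num_dimensions = 768
hnsw_m = 16  # default value for HNSW.m

estimates = {
    "float": num_vectors * num_dimensions * 4,
    "float + int8 quantization": num_vectors * (num_dimensions + 4),
    "float + bbq quantization": num_vectors * (num_dimensions / 8 + 12),
    "HNSW graph": num_vectors * 4 * hnsw_m,
}

for label, size_bytes in estimates.items():
    print(f"{label}: {size_bytes / 1e9:.2f} GB")
----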

Binary image files changed; contents not shown.

@ -327,7 +327,7 @@ The result would then have the `errors` field set to `true` and hold the error f
"details": {
  "my_admin_role": { <4>
    "type": "action_request_validation_exception",
    "reason": "Validation Failed: 1: unknown cluster privilege [bad_cluster_privilege]. a privilege must be either one of the predefined cluster privilege names [manage_own_api_key,manage_data_stream_global_retention,monitor_data_stream_global_retention,none,cancel_task,cross_cluster_replication,cross_cluster_search,delegate_pki,grant_api_key,manage_autoscaling,manage_index_templates,manage_logstash_pipelines,manage_oidc,manage_saml,manage_search_application,manage_search_query_rules,manage_search_synonyms,manage_service_account,manage_token,manage_user_profile,monitor_connector,monitor_enrich,monitor_inference,monitor_ml,monitor_rollup,monitor_snapshot,monitor_stats,monitor_text_structure,monitor_watcher,post_behavioral_analytics_event,read_ccr,read_connector_secrets,read_fleet_secrets,read_ilm,read_pipeline,read_security,read_slm,transport_client,write_connector_secrets,write_fleet_secrets,create_snapshot,manage_behavioral_analytics,manage_ccr,manage_connector,manage_enrich,manage_ilm,manage_inference,manage_ml,manage_rollup,manage_slm,manage_watcher,monitor_data_frame_transforms,monitor_transform,manage_api_key,manage_ingest_pipelines,manage_pipeline,manage_data_frame_transforms,manage_transform,manage_security,monitor,manage,all] or a pattern over one of the available cluster actions;"
  }
}
}


@ -111,6 +111,7 @@ A successful call returns an object with "cluster", "index", and "remote_cluster
"monitor_ml",
"monitor_rollup",
"monitor_snapshot",
"monitor_stats",
"monitor_text_structure",
"monitor_transform",
"monitor_watcher",
@ -152,7 +153,8 @@ A successful call returns an object with "cluster", "index", and "remote_cluster
"write"
],
"remote_cluster" : [
"monitor_enrich",
"monitor_stats"
]
}
--------------------------------------------------
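As a hedged sketch of how the new privilege might be granted (the role name, cluster alias pattern, endpoint, and credentials below are invented for illustration, assuming the standard create-role API shape for `remote_cluster` privileges):

[source,python]
----
import requests

resp = requests.put(
    "https://localhost:9200/_security/role/remote_stats_role",  # hypothetical endpoint
    auth=("elastic", "<password>"),                              # placeholder credentials
    verify=False,
    json={
        "cluster": ["monitor_stats"],
        "remote_cluster": [
            {"privileges": ["monitor_stats"], "clusters": ["my_remote_*"]}
        ],
    },
)
print(resp.status_code, resp.json())
----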


@ -86,6 +86,15 @@ docker run --name es01 --net elastic -p 9200:9200 -it -m 1GB {docker-image}
TIP: Use the `-m` flag to set a memory limit for the container. This removes the
need to <<docker-set-heap-size,manually set the JVM size>>.
+
{ml-cap} features such as <<semantic-search-elser, semantic search with ELSER>>
require a larger container with more than 1GB of memory.
If you intend to use the {ml} capabilities, then start the container with this command:
+
[source,sh,subs="attributes"]
----
docker run --name es01 --net elastic -p 9200:9200 -it -m 6GB -e "xpack.ml.use_auto_machine_memory_percent=true" {docker-image}
----
The command prints the `elastic` user password and an enrollment token for {kib}.
. Copy the generated `elastic` password and enrollment token. These credentials


@ -90,6 +90,7 @@ services:
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
- xpack.ml.use_auto_machine_memory_percent=true
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
@ -130,6 +131,7 @@ services:
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
- xpack.ml.use_auto_machine_memory_percent=true
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
@ -170,6 +172,7 @@ services:
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
- xpack.ml.use_auto_machine_memory_percent=true
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:


@ -5,3 +5,7 @@ This module implements mechanisms to grant and check permissions under the _enti
The entitlements system provides an alternative to the legacy `SecurityManager` system, which is deprecated for removal.
The `entitlement-agent` instruments sensitive class library methods with calls to this module, in order to enforce the controls.
This feature is currently under development, and it is completely disabled by default (the agent is not loaded). To enable it, run Elasticsearch with
```shell
./gradlew run --entitlements
```


@ -15,8 +15,6 @@ import org.elasticsearch.logging.Logger;
import java.util.Optional;
import static org.elasticsearch.entitlement.runtime.internals.EntitlementInternals.isActive;
/**
 * Implementation of the {@link EntitlementChecker} interface, providing additional
 * API methods for managing the checks.
@ -25,13 +23,6 @@ import static org.elasticsearch.entitlement.runtime.internals.EntitlementInterna
public class ElasticsearchEntitlementChecker implements EntitlementChecker {
    private static final Logger logger = LogManager.getLogger(ElasticsearchEntitlementChecker.class);
    /**
     * Causes entitlements to be enforced.
     */
    public void activate() {
        isActive = true;
    }
    @Override
    public void checkSystemExit(Class<?> callerClass, int status) {
        var requestingModule = requestingModule(callerClass);
@ -66,10 +57,6 @@ public class ElasticsearchEntitlementChecker implements EntitlementChecker {
    }
    private static boolean isTriviallyAllowed(Module requestingModule) {
        if (isActive == false) {
            logger.debug("Trivially allowed: entitlements are inactive");
            return true;
        }
        if (requestingModule == null) {
            logger.debug("Trivially allowed: Entire call stack is in the boot module layer");
            return true;
@ -81,5 +68,4 @@ public class ElasticsearchEntitlementChecker implements EntitlementChecker {
        logger.trace("Not trivially allowed");
        return false;
    }
}


@ -1,24 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the "Elastic License
* 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
* Public License v 1"; you may not use this file except in compliance with, at
* your election, the "Elastic License 2.0", the "GNU Affero General Public
* License v3.0 only", or the "Server Side Public License, v 1".
*/
package org.elasticsearch.entitlement.runtime.internals;
/**
* Don't export this from the module. Just don't.
*/
public class EntitlementInternals {
/**
* When false, entitlement rules are not enforced; all operations are allowed.
*/
public static volatile boolean isActive = false;
public static void reset() {
isActive = false;
}
}


@ -112,9 +112,6 @@ tests:
- class: org.elasticsearch.xpack.remotecluster.RemoteClusterSecurityWithApmTracingRestIT
  method: testTracingCrossCluster
  issue: https://github.com/elastic/elasticsearch/issues/112731
- class: org.elasticsearch.xpack.test.rest.XPackRestIT
  method: test {p0=esql/60_usage/Basic ESQL usage output (telemetry)}
  issue: https://github.com/elastic/elasticsearch/issues/115231
- class: org.elasticsearch.xpack.inference.DefaultEndPointsIT
  method: testInferDeploysDefaultE5
  issue: https://github.com/elastic/elasticsearch/issues/115361
@ -279,9 +276,32 @@ tests:
- class: org.elasticsearch.smoketest.MlWithSecurityIT
  method: test {yaml=ml/inference_crud/Test force delete given model with alias referenced by pipeline}
  issue: https://github.com/elastic/elasticsearch/issues/116443
- class: org.elasticsearch.xpack.downsample.ILMDownsampleDisruptionIT
  method: testILMDownsampleRollingRestart
  issue: https://github.com/elastic/elasticsearch/issues/114233
- class: org.elasticsearch.xpack.test.rest.XPackRestIT
  method: test {p0=ml/data_frame_analytics_crud/Test put config with unknown field in outlier detection analysis}
  issue: https://github.com/elastic/elasticsearch/issues/116458
- class: org.elasticsearch.xpack.test.rest.XPackRestIT
  method: test {p0=ml/evaluate_data_frame/Test outlier_detection with query}
  issue: https://github.com/elastic/elasticsearch/issues/116484
- class: org.elasticsearch.xpack.kql.query.KqlQueryBuilderTests
  issue: https://github.com/elastic/elasticsearch/issues/116487
- class: org.elasticsearch.reservedstate.service.FileSettingsServiceTests
  method: testInvalidJSON
  issue: https://github.com/elastic/elasticsearch/issues/116521
- class: org.elasticsearch.xpack.searchablesnapshots.SearchableSnapshotsCanMatchOnCoordinatorIntegTests
  method: testSearchableSnapshotShardsAreSkippedBySearchRequestWithoutQueryingAnyNodeWhenTheyAreOutsideOfTheQueryRange
  issue: https://github.com/elastic/elasticsearch/issues/116523
- class: org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissionsTests
  method: testCollapseAndRemoveUnsupportedPrivileges
  issue: https://github.com/elastic/elasticsearch/issues/116520
- class: org.elasticsearch.xpack.logsdb.qa.StandardVersusLogsIndexModeRandomDataDynamicMappingChallengeRestIT
  method: testMatchAllQuery
  issue: https://github.com/elastic/elasticsearch/issues/116536
- class: org.elasticsearch.xpack.test.rest.XPackRestIT
  method: test {p0=ml/inference_crud/Test force delete given model referenced by pipeline}
  issue: https://github.com/elastic/elasticsearch/issues/116555
# Examples:
#


@ -0,0 +1,141 @@
setup:
- requires:
capabilities:
- method: POST
path: /_search
capabilities: [ multi_dense_vector_field_mapper ]
test_runner_features: capabilities
reason: "Support for multi dense vector field mapper capability required"
---
"Test create multi-vector field":
- do:
indices.create:
index: test
body:
mappings:
properties:
vector1:
type: multi_dense_vector
dims: 3
- do:
index:
index: test
id: "1"
body:
vector1: [[2, -1, 1]]
- do:
index:
index: test
id: "2"
body:
vector1: [[2, -1, 1], [3, 4, 5]]
- do:
index:
index: test
id: "3"
body:
vector1: [[2, -1, 1], [3, 4, 5], [6, 7, 8]]
- do:
indices.refresh: {}
---
"Test create dynamic dim multi-vector field":
- do:
indices.create:
index: test
body:
mappings:
properties:
name:
type: keyword
vector1:
type: multi_dense_vector
- do:
index:
index: test
id: "1"
body:
vector1: [[2, -1, 1]]
- do:
index:
index: test
id: "2"
body:
vector1: [[2, -1, 1], [3, 4, 5]]
- do:
index:
index: test
id: "3"
body:
vector1: [[2, -1, 1], [3, 4, 5], [6, 7, 8]]
- do:
cluster.health:
wait_for_events: languid
# verify some other dimension will fail
- do:
catch: bad_request
index:
index: test
id: "4"
body:
vector1: [[2, -1, 1], [3, 4, 5], [6, 7, 8, 9]]
---
"Test dynamic dim mismatch fails multi-vector field":
- do:
indices.create:
index: test
body:
mappings:
properties:
vector1:
type: multi_dense_vector
- do:
catch: bad_request
index:
index: test
id: "1"
body:
vector1: [[2, -1, 1], [2]]
---
"Test static dim mismatch fails multi-vector field":
- do:
indices.create:
index: test
body:
mappings:
properties:
vector1:
type: multi_dense_vector
dims: 3
- do:
catch: bad_request
index:
index: test
id: "1"
body:
vector1: [[2, -1, 1], [2]]
---
"Test poorly formatted multi-vector field":
- do:
indices.create:
index: poorly_formatted_vector
body:
mappings:
properties:
vector1:
type: multi_dense_vector
dims: 3
- do:
catch: bad_request
index:
index: poorly_formatted_vector
id: "1"
body:
vector1: [[[2, -1, 1]]]
- do:
catch: bad_request
index:
index: poorly_formatted_vector
id: "1"
body:
vector1: [[2, -1, 1], [[2, -1, 1]]]


@ -10,15 +10,14 @@
package org.elasticsearch.action.support;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.DocWriteResponse;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Priority;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xcontent.XContentType;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.TimeUnit;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.hasItems;
@ -29,65 +28,39 @@ public class AutoCreateIndexIT extends ESIntegTestCase {
final var masterNodeClusterService = internalCluster().getCurrentMasterNodeInstance(ClusterService.class); final var masterNodeClusterService = internalCluster().getCurrentMasterNodeInstance(ClusterService.class);
final var barrier = new CyclicBarrier(2); final var barrier = new CyclicBarrier(2);
masterNodeClusterService.createTaskQueue("block", Priority.NORMAL, batchExecutionContext -> { masterNodeClusterService.createTaskQueue("block", Priority.NORMAL, batchExecutionContext -> {
barrier.await(10, TimeUnit.SECONDS); safeAwait(barrier);
barrier.await(10, TimeUnit.SECONDS); safeAwait(barrier);
batchExecutionContext.taskContexts().forEach(c -> c.success(() -> {})); batchExecutionContext.taskContexts().forEach(c -> c.success(() -> {}));
return batchExecutionContext.initialState(); return batchExecutionContext.initialState();
}).submitTask("block", e -> { assert false : e; }, null); }).submitTask("block", ESTestCase::fail, null);
barrier.await(10, TimeUnit.SECONDS); safeAwait(barrier);
final var countDownLatch = new CountDownLatch(2); final var countDownLatch = new CountDownLatch(2);
final var client = client(); final var client = client();
client.prepareIndex("no-dot").setSource("{}", XContentType.JSON).execute(new ActionListener<>() { client.prepareIndex("no-dot")
@Override .setSource("{}", XContentType.JSON)
public void onResponse(DocWriteResponse indexResponse) { .execute(ActionListener.releaseAfter(ActionTestUtils.assertNoFailureListener(indexResponse -> {
try { final var warningHeaders = client.threadPool().getThreadContext().getResponseHeaders().get("Warning");
final var warningHeaders = client.threadPool().getThreadContext().getResponseHeaders().get("Warning"); if (warningHeaders != null) {
if (warningHeaders != null) {
assertThat(
warningHeaders,
not(
hasItems(
containsString("index names starting with a dot are reserved for hidden indices and system indices")
)
)
);
}
} finally {
countDownLatch.countDown();
}
}
@Override
public void onFailure(Exception e) {
countDownLatch.countDown();
assert false : e;
}
});
client.prepareIndex(".has-dot").setSource("{}", XContentType.JSON).execute(new ActionListener<>() {
@Override
public void onResponse(DocWriteResponse indexResponse) {
try {
final var warningHeaders = client.threadPool().getThreadContext().getResponseHeaders().get("Warning");
assertNotNull(warningHeaders);
assertThat( assertThat(
warningHeaders, warningHeaders,
hasItems(containsString("index names starting with a dot are reserved for hidden indices and system indices")) not(hasItems(containsString("index names starting with a dot are reserved for hidden indices and system indices")))
); );
} finally {
countDownLatch.countDown();
} }
} }), countDownLatch::countDown));
@Override client.prepareIndex(".has-dot")
public void onFailure(Exception e) { .setSource("{}", XContentType.JSON)
countDownLatch.countDown(); .execute(ActionListener.releaseAfter(ActionTestUtils.assertNoFailureListener(indexResponse -> {
assert false : e; final var warningHeaders = client.threadPool().getThreadContext().getResponseHeaders().get("Warning");
} assertNotNull(warningHeaders);
}); assertThat(
warningHeaders,
hasItems(containsString("index names starting with a dot are reserved for hidden indices and system indices"))
);
}), countDownLatch::countDown));
assertBusy( assertBusy(
() -> assertThat( () -> assertThat(
@ -100,7 +73,7 @@ public class AutoCreateIndexIT extends ESIntegTestCase {
) )
); );
barrier.await(10, TimeUnit.SECONDS); safeAwait(barrier);
assertTrue(countDownLatch.await(10, TimeUnit.SECONDS)); safeAwait(countDownLatch);
} }
} }


@ -150,7 +150,7 @@ public class ClusterStateDiffIT extends ESIntegTestCase {
for (Map.Entry<String, DiscoveryNode> node : clusterStateFromDiffs.nodes().getNodes().entrySet()) {
    DiscoveryNode node1 = clusterState.nodes().get(node.getKey());
    DiscoveryNode node2 = clusterStateFromDiffs.nodes().get(node.getKey());
    assertThat(node1.getBuildVersion(), equalTo(node2.getBuildVersion()));
    assertThat(node1.getAddress(), equalTo(node2.getAddress()));
    assertThat(node1.getAttributes(), equalTo(node2.getAttributes()));
}


@ -189,6 +189,8 @@ public class TransportVersions {
public static final TransportVersion ESQL_CCS_EXEC_INFO_WITH_FAILURES = def(8_783_00_0);
public static final TransportVersion LOGSDB_TELEMETRY = def(8_784_00_0);
public static final TransportVersion LOGSDB_TELEMETRY_STATS = def(8_785_00_0);
public static final TransportVersion KQL_QUERY_ADDED = def(8_786_00_0);
public static final TransportVersion ROLE_MONITOR_STATS = def(8_787_00_0);
/*
 * WARNING: DO NOT MERGE INTO MAIN!


@ -18,6 +18,7 @@ import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute;
import org.apache.lucene.analysis.tokenattributes.TypeAttribute;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.single.shard.TransportSingleShardAction;
import org.elasticsearch.cluster.ProjectState;
@ -45,6 +46,7 @@ import org.elasticsearch.index.mapper.StringFieldType;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.indices.IndicesService;
import org.elasticsearch.injection.guice.Inject;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
@ -458,11 +460,12 @@ public class TransportAnalyzeAction extends TransportSingleShardAction<AnalyzeAc
private void increment() {
    tokenCount++;
    if (tokenCount > maxTokenCount) {
        throw new ElasticsearchStatusException(
            "The number of tokens produced by calling _analyze has exceeded the allowed maximum of ["
                + maxTokenCount
                + "]."
                + " This limit can be set by changing the [index.analyze.max_token_count] index level setting.",
            RestStatus.BAD_REQUEST
        );
    }
}
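The practical effect of this change is that exceeding `index.analyze.max_token_count` should surface to clients as HTTP 400 (`bad_request`) rather than a 500-level error. A hedged sketch of how one might observe this from a client (the index name, limit value, and local endpoint are invented for illustration):

```python
import requests

ES = "http://localhost:9200"  # hypothetical local cluster, security disabled

# Create an index with a deliberately low analyze token limit.
requests.put(f"{ES}/analyze-demo", json={"settings": {"index.analyze.max_token_count": 3}})

# Analyzing more tokens than the limit allows should now return 400 rather than 500.
resp = requests.get(f"{ES}/analyze-demo/_analyze",
                    json={"analyzer": "standard", "text": "one two three four five"})
print(resp.status_code)  # expected: 400 after this change
```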


@ -68,7 +68,7 @@ import static org.elasticsearch.core.Strings.format;
* The fan out and collect algorithm is traditionally used as the initial phase which can either be a query execution or collection of * The fan out and collect algorithm is traditionally used as the initial phase which can either be a query execution or collection of
* distributed frequencies * distributed frequencies
*/ */
abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> extends SearchPhase implements SearchPhaseContext { abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> extends SearchPhase {
private static final float DEFAULT_INDEX_BOOST = 1.0f; private static final float DEFAULT_INDEX_BOOST = 1.0f;
private final Logger logger; private final Logger logger;
private final NamedWriteableRegistry namedWriteableRegistry; private final NamedWriteableRegistry namedWriteableRegistry;
@ -106,7 +106,8 @@ abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> exten
private final boolean throttleConcurrentRequests; private final boolean throttleConcurrentRequests;
private final AtomicBoolean requestCancelled = new AtomicBoolean(); private final AtomicBoolean requestCancelled = new AtomicBoolean();
private final List<Releasable> releasables = new ArrayList<>(); // protected for tests
protected final List<Releasable> releasables = new ArrayList<>();
AbstractSearchAsyncAction( AbstractSearchAsyncAction(
String name, String name,
@ -194,7 +195,9 @@ abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> exten
); );
} }
@Override /**
* Registers a {@link Releasable} that will be closed when the search request finishes or fails.
*/
public void addReleasable(Releasable releasable) { public void addReleasable(Releasable releasable) {
releasables.add(releasable); releasables.add(releasable);
} }
@ -333,8 +336,12 @@ abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> exten
SearchActionListener<Result> listener SearchActionListener<Result> listener
); );
@Override /**
public final void executeNextPhase(SearchPhase currentPhase, Supplier<SearchPhase> nextPhaseSupplier) { * Processes the phase transition from on phase to another. This method handles all errors that happen during the initial run execution
* of the next phase. If there are no successful operations in the context when this method is executed the search is aborted and
* a response is returned to the user indicating that all shards have failed.
*/
protected void executeNextPhase(SearchPhase currentPhase, Supplier<SearchPhase> nextPhaseSupplier) {
/* This is the main search phase transition where we move to the next phase. If all shards /* This is the main search phase transition where we move to the next phase. If all shards
* failed or if there was a failure and partial results are not allowed, then we immediately * failed or if there was a failure and partial results are not allowed, then we immediately
* fail. Otherwise we continue to the next phase. * fail. Otherwise we continue to the next phase.
@ -470,8 +477,7 @@ abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> exten
* @param shardTarget the shard target for this failure * @param shardTarget the shard target for this failure
* @param e the failure reason * @param e the failure reason
*/ */
@Override void onShardFailure(final int shardIndex, SearchShardTarget shardTarget, Exception e) {
public final void onShardFailure(final int shardIndex, SearchShardTarget shardTarget, Exception e) {
if (TransportActions.isShardNotAvailableException(e)) { if (TransportActions.isShardNotAvailableException(e)) {
// Groups shard not available exceptions under a generic exception that returns a SERVICE_UNAVAILABLE(503) // Groups shard not available exceptions under a generic exception that returns a SERVICE_UNAVAILABLE(503)
// temporary error. // temporary error.
@ -568,32 +574,45 @@ abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> exten
} }
} }
@Override /**
* Returns the total number of shards for the current search across all indices
*/
public final int getNumShards() { public final int getNumShards() {
return results.getNumShards(); return results.getNumShards();
} }
@Override /**
* Returns a logger for this context to prevent each individual phase from creating its own logger.
*/
public final Logger getLogger() { public final Logger getLogger() {
return logger; return logger;
} }
@Override /**
* Returns the currently executing search task
*/
public final SearchTask getTask() { public final SearchTask getTask() {
return task; return task;
} }
@Override /**
* Returns the currently executing search request
*/
public final SearchRequest getRequest() { public final SearchRequest getRequest() {
return request; return request;
} }
@Override /**
* Returns the targeted {@link OriginalIndices} for the provided {@code shardIndex}.
*/
public OriginalIndices getOriginalIndices(int shardIndex) { public OriginalIndices getOriginalIndices(int shardIndex) {
return shardIterators[shardIndex].getOriginalIndices(); return shardIterators[shardIndex].getOriginalIndices();
} }
@Override /**
* Checks if the given context id is part of the point in time of this search (if exists).
* We should not release search contexts that belong to the point in time during or after searches.
*/
public boolean isPartOfPointInTime(ShardSearchContextId contextId) { public boolean isPartOfPointInTime(ShardSearchContextId contextId) {
final PointInTimeBuilder pointInTimeBuilder = request.pointInTimeBuilder(); final PointInTimeBuilder pointInTimeBuilder = request.pointInTimeBuilder();
if (pointInTimeBuilder != null) { if (pointInTimeBuilder != null) {
@ -630,7 +649,12 @@ abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> exten
return false; return false;
} }
@Override /**
* Builds and sends the final search response back to the user.
*
* @param internalSearchResponse the internal search response
* @param queryResults the results of the query phase
*/
public void sendSearchResponse(SearchResponseSections internalSearchResponse, AtomicArray<SearchPhaseResult> queryResults) { public void sendSearchResponse(SearchResponseSections internalSearchResponse, AtomicArray<SearchPhaseResult> queryResults) {
ShardSearchFailure[] failures = buildShardFailures(); ShardSearchFailure[] failures = buildShardFailures();
Boolean allowPartialResults = request.allowPartialSearchResults(); Boolean allowPartialResults = request.allowPartialSearchResults();
@ -655,8 +679,14 @@ abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> exten
} }
} }
@Override /**
public final void onPhaseFailure(SearchPhase phase, String msg, Throwable cause) { * This method will communicate a fatal phase failure back to the user. In contrast to a shard failure,
* this method will immediately fail the search request and return the failure to the issuer of the request
* @param phase the phase that failed
* @param msg an optional message
* @param cause the cause of the phase failure
*/
public void onPhaseFailure(SearchPhase phase, String msg, Throwable cause) {
raisePhaseFailure(new SearchPhaseExecutionException(phase.getName(), msg, cause, buildShardFailures())); raisePhaseFailure(new SearchPhaseExecutionException(phase.getName(), msg, cause, buildShardFailures()));
} }
@ -683,6 +713,19 @@ abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> exten
listener.onFailure(exception); listener.onFailure(exception);
} }
/**
* Releases a search context with the given context ID on the node the given connection is connected to.
* @see org.elasticsearch.search.query.QuerySearchResult#getContextId()
* @see org.elasticsearch.search.fetch.FetchSearchResult#getContextId()
*
*/
void sendReleaseSearchContext(ShardSearchContextId contextId, Transport.Connection connection, OriginalIndices originalIndices) {
assert isPartOfPointInTime(contextId) == false : "Must not release point in time context [" + contextId + "]";
if (connection != null) {
searchTransportService.sendFreeContext(connection, contextId, originalIndices);
}
}
/** /**
* Executed once all shard results have been received and processed * Executed once all shard results have been received and processed
* @see #onShardFailure(int, SearchShardTarget, Exception) * @see #onShardFailure(int, SearchShardTarget, Exception)
@ -692,23 +735,29 @@ abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> exten
executeNextPhase(this, this::getNextPhase); executeNextPhase(this, this::getNextPhase);
} }
@Override /**
* Returns a connection to the node if connected, otherwise a {@link org.elasticsearch.transport.ConnectTransportException} will be
* thrown.
*/
public final Transport.Connection getConnection(String clusterAlias, String nodeId) { public final Transport.Connection getConnection(String clusterAlias, String nodeId) {
return nodeIdToConnection.apply(clusterAlias, nodeId); return nodeIdToConnection.apply(clusterAlias, nodeId);
} }
@Override /**
public final SearchTransportService getSearchTransport() { * Returns the {@link SearchTransportService} to send shard requests to other nodes
*/
public SearchTransportService getSearchTransport() {
return searchTransportService; return searchTransportService;
} }
@Override
public final void execute(Runnable command) { public final void execute(Runnable command) {
executor.execute(command); executor.execute(command);
} }
@Override /**
public final void onFailure(Exception e) { * Notifies the top-level listener of the provided exception
*/
public void onFailure(Exception e) {
listener.onFailure(e); listener.onFailure(e);
} }
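Several of the javadoc blocks that used to live on SearchPhaseContext now sit directly on AbstractSearchAsyncAction, including the contract for addReleasable: resources registered by any phase are released once when the overall request finishes or fails. The following standalone sketch illustrates that lifecycle only; RequestScopeSketch and its members are invented and are not the Elasticsearch implementation.

import java.io.Closeable;
import java.util.ArrayList;
import java.util.List;

// Sketch: collect per-request resources and release them exactly once when the
// request completes, whether it succeeded or failed.
class RequestScopeSketch {
    private final List<Closeable> releasables = new ArrayList<>();
    private boolean finished;

    synchronized void addReleasable(Closeable releasable) {
        releasables.add(releasable);
    }

    // Called from both the success and the failure path; only the first call releases.
    synchronized void finish() {
        if (finished) {
            return;
        }
        finished = true;
        for (Closeable releasable : releasables) {
            try {
                releasable.close();
            } catch (Exception e) {
                // the sketch ignores close failures; real code would at least log them
            }
        }
    }
}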


@ -131,7 +131,6 @@ final class CanMatchPreFilterSearchPhase extends SearchPhase {
@Override @Override
public void run() { public void run() {
assert assertSearchCoordinationThread(); assert assertSearchCoordinationThread();
checkNoMissingShards();
runCoordinatorRewritePhase(); runCoordinatorRewritePhase();
} }
@ -175,7 +174,10 @@ final class CanMatchPreFilterSearchPhase extends SearchPhase {
if (matchedShardLevelRequests.isEmpty()) { if (matchedShardLevelRequests.isEmpty()) {
finishPhase(); finishPhase();
} else { } else {
new Round(new GroupShardsIterator<>(matchedShardLevelRequests)).run(); GroupShardsIterator<SearchShardIterator> matchingShards = new GroupShardsIterator<>(matchedShardLevelRequests);
// verify missing shards only for the shards that we hit for the query
checkNoMissingShards(matchingShards);
new Round(matchingShards).run();
} }
} }
@ -185,9 +187,9 @@ final class CanMatchPreFilterSearchPhase extends SearchPhase {
results.consumeResult(result, () -> {}); results.consumeResult(result, () -> {});
} }
private void checkNoMissingShards() { private void checkNoMissingShards(GroupShardsIterator<SearchShardIterator> shards) {
assert assertSearchCoordinationThread(); assert assertSearchCoordinationThread();
doCheckNoMissingShards(getName(), request, shardsIts); doCheckNoMissingShards(getName(), request, shards);
} }
private Map<SendingTarget, List<SearchShardIterator>> groupByNode(GroupShardsIterator<SearchShardIterator> shards) { private Map<SendingTarget, List<SearchShardIterator>> groupByNode(GroupShardsIterator<SearchShardIterator> shards) {
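The fix above builds the iterator of matched shards first and runs the missing-shards check against that subset, so shards the query cannot hit no longer cause a rejection. A small standalone sketch of the same ordering, with all names (Shard, planRound, allowPartialResults) invented:

import java.util.List;

// Standalone sketch of the ordering change: filter down to the shards the query can
// match first, then check for missing shards only within that subset.
class CanMatchSketch {
    record Shard(String id, boolean assigned, boolean canMatch) {}

    static List<Shard> planRound(List<Shard> allShards, boolean allowPartialResults) {
        List<Shard> matching = allShards.stream().filter(Shard::canMatch).toList();
        if (allowPartialResults == false) {
            for (Shard shard : matching) {
                if (shard.assigned() == false) {
                    throw new IllegalStateException("search rejected due to missing shard [" + shard.id() + "]");
                }
            }
        }
        return matching; // only these shards take part in the next round
    }
}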


@ -22,9 +22,9 @@ final class CountedCollector<R extends SearchPhaseResult> {
private final SearchPhaseResults<R> resultConsumer; private final SearchPhaseResults<R> resultConsumer;
private final CountDown counter; private final CountDown counter;
private final Runnable onFinish; private final Runnable onFinish;
private final SearchPhaseContext context; private final AbstractSearchAsyncAction<?> context;
CountedCollector(SearchPhaseResults<R> resultConsumer, int expectedOps, Runnable onFinish, SearchPhaseContext context) { CountedCollector(SearchPhaseResults<R> resultConsumer, int expectedOps, Runnable onFinish, AbstractSearchAsyncAction<?> context) {
this.resultConsumer = resultConsumer; this.resultConsumer = resultConsumer;
this.counter = new CountDown(expectedOps); this.counter = new CountDown(expectedOps);
this.onFinish = onFinish; this.onFinish = onFinish;
@ -50,7 +50,7 @@ final class CountedCollector<R extends SearchPhaseResult> {
} }
/** /**
* Escalates the failure via {@link SearchPhaseContext#onShardFailure(int, SearchShardTarget, Exception)} * Escalates the failure via {@link AbstractSearchAsyncAction#onShardFailure(int, SearchShardTarget, Exception)}
* and then runs {@link #countDown()} * and then runs {@link #countDown()}
*/ */
void onFailure(final int shardIndex, @Nullable SearchShardTarget shardTarget, Exception e) { void onFailure(final int shardIndex, @Nullable SearchShardTarget shardTarget, Exception e) {
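CountedCollector's contract, as documented above, is to escalate each failure and then count down, firing a finish callback when every expected operation has reported in. Here is a minimal standalone sketch of that idea using AtomicInteger; it is not the real class, just the counting pattern:

import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

// Standalone sketch of the counted-collector idea: every result or failure counts
// down one expected operation, failures are escalated first, and the finish callback
// runs exactly once when the last expected operation has arrived.
class CountedCollectorSketch<R> {
    private final AtomicInteger remaining;
    private final Consumer<R> resultConsumer;
    private final Consumer<Exception> failureConsumer;
    private final Runnable onFinish;

    CountedCollectorSketch(int expectedOps, Consumer<R> resultConsumer, Consumer<Exception> failureConsumer, Runnable onFinish) {
        this.remaining = new AtomicInteger(expectedOps);
        this.resultConsumer = resultConsumer;
        this.failureConsumer = failureConsumer;
        this.onFinish = onFinish;
    }

    void onResult(R result) {
        resultConsumer.accept(result);
        countDown();
    }

    void onFailure(Exception e) {
        failureConsumer.accept(e); // escalate, then still count the operation as done
        countDown();
    }

    private void countDown() {
        if (remaining.decrementAndGet() == 0) {
            onFinish.run();
        }
    }
}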


@ -44,7 +44,7 @@ final class DfsQueryPhase extends SearchPhase {
private final AggregatedDfs dfs; private final AggregatedDfs dfs;
private final List<DfsKnnResults> knnResults; private final List<DfsKnnResults> knnResults;
private final Function<SearchPhaseResults<SearchPhaseResult>, SearchPhase> nextPhaseFactory; private final Function<SearchPhaseResults<SearchPhaseResult>, SearchPhase> nextPhaseFactory;
private final SearchPhaseContext context; private final AbstractSearchAsyncAction<?> context;
private final SearchTransportService searchTransportService; private final SearchTransportService searchTransportService;
private final SearchProgressListener progressListener; private final SearchProgressListener progressListener;
@ -54,7 +54,7 @@ final class DfsQueryPhase extends SearchPhase {
List<DfsKnnResults> knnResults, List<DfsKnnResults> knnResults,
SearchPhaseResults<SearchPhaseResult> queryResult, SearchPhaseResults<SearchPhaseResult> queryResult,
Function<SearchPhaseResults<SearchPhaseResult>, SearchPhase> nextPhaseFactory, Function<SearchPhaseResults<SearchPhaseResult>, SearchPhase> nextPhaseFactory,
SearchPhaseContext context AbstractSearchAsyncAction<?> context
) { ) {
super("dfs_query"); super("dfs_query");
this.progressListener = context.getTask().getProgressListener(); this.progressListener = context.getTask().getProgressListener();


@ -31,11 +31,11 @@ import java.util.function.Supplier;
* forwards to the next phase immediately. * forwards to the next phase immediately.
*/ */
final class ExpandSearchPhase extends SearchPhase { final class ExpandSearchPhase extends SearchPhase {
private final SearchPhaseContext context; private final AbstractSearchAsyncAction<?> context;
private final SearchHits searchHits; private final SearchHits searchHits;
private final Supplier<SearchPhase> nextPhase; private final Supplier<SearchPhase> nextPhase;
ExpandSearchPhase(SearchPhaseContext context, SearchHits searchHits, Supplier<SearchPhase> nextPhase) { ExpandSearchPhase(AbstractSearchAsyncAction<?> context, SearchHits searchHits, Supplier<SearchPhase> nextPhase) {
super("expand"); super("expand");
this.context = context; this.context = context;
this.searchHits = searchHits; this.searchHits = searchHits;


@ -33,11 +33,15 @@ import java.util.stream.Collectors;
* @see org.elasticsearch.index.mapper.LookupRuntimeFieldType * @see org.elasticsearch.index.mapper.LookupRuntimeFieldType
*/ */
final class FetchLookupFieldsPhase extends SearchPhase { final class FetchLookupFieldsPhase extends SearchPhase {
private final SearchPhaseContext context; private final AbstractSearchAsyncAction<?> context;
private final SearchResponseSections searchResponse; private final SearchResponseSections searchResponse;
private final AtomicArray<SearchPhaseResult> queryResults; private final AtomicArray<SearchPhaseResult> queryResults;
FetchLookupFieldsPhase(SearchPhaseContext context, SearchResponseSections searchResponse, AtomicArray<SearchPhaseResult> queryResults) { FetchLookupFieldsPhase(
AbstractSearchAsyncAction<?> context,
SearchResponseSections searchResponse,
AtomicArray<SearchPhaseResult> queryResults
) {
super("fetch_lookup_fields"); super("fetch_lookup_fields");
this.context = context; this.context = context;
this.searchResponse = searchResponse; this.searchResponse = searchResponse;


@ -36,7 +36,7 @@ import java.util.function.BiFunction;
final class FetchSearchPhase extends SearchPhase { final class FetchSearchPhase extends SearchPhase {
private final AtomicArray<SearchPhaseResult> searchPhaseShardResults; private final AtomicArray<SearchPhaseResult> searchPhaseShardResults;
private final BiFunction<SearchResponseSections, AtomicArray<SearchPhaseResult>, SearchPhase> nextPhaseFactory; private final BiFunction<SearchResponseSections, AtomicArray<SearchPhaseResult>, SearchPhase> nextPhaseFactory;
private final SearchPhaseContext context; private final AbstractSearchAsyncAction<?> context;
private final Logger logger; private final Logger logger;
private final SearchProgressListener progressListener; private final SearchProgressListener progressListener;
private final AggregatedDfs aggregatedDfs; private final AggregatedDfs aggregatedDfs;
@ -47,7 +47,7 @@ final class FetchSearchPhase extends SearchPhase {
FetchSearchPhase( FetchSearchPhase(
SearchPhaseResults<SearchPhaseResult> resultConsumer, SearchPhaseResults<SearchPhaseResult> resultConsumer,
AggregatedDfs aggregatedDfs, AggregatedDfs aggregatedDfs,
SearchPhaseContext context, AbstractSearchAsyncAction<?> context,
@Nullable SearchPhaseController.ReducedQueryPhase reducedQueryPhase @Nullable SearchPhaseController.ReducedQueryPhase reducedQueryPhase
) { ) {
this( this(
@ -66,7 +66,7 @@ final class FetchSearchPhase extends SearchPhase {
FetchSearchPhase( FetchSearchPhase(
SearchPhaseResults<SearchPhaseResult> resultConsumer, SearchPhaseResults<SearchPhaseResult> resultConsumer,
AggregatedDfs aggregatedDfs, AggregatedDfs aggregatedDfs,
SearchPhaseContext context, AbstractSearchAsyncAction<?> context,
@Nullable SearchPhaseController.ReducedQueryPhase reducedQueryPhase, @Nullable SearchPhaseController.ReducedQueryPhase reducedQueryPhase,
BiFunction<SearchResponseSections, AtomicArray<SearchPhaseResult>, SearchPhase> nextPhaseFactory BiFunction<SearchResponseSections, AtomicArray<SearchPhaseResult>, SearchPhase> nextPhaseFactory
) { ) {


@ -38,7 +38,7 @@ import java.util.List;
public class RankFeaturePhase extends SearchPhase { public class RankFeaturePhase extends SearchPhase {
private static final Logger logger = LogManager.getLogger(RankFeaturePhase.class); private static final Logger logger = LogManager.getLogger(RankFeaturePhase.class);
private final SearchPhaseContext context; private final AbstractSearchAsyncAction<?> context;
final SearchPhaseResults<SearchPhaseResult> queryPhaseResults; final SearchPhaseResults<SearchPhaseResult> queryPhaseResults;
final SearchPhaseResults<SearchPhaseResult> rankPhaseResults; final SearchPhaseResults<SearchPhaseResult> rankPhaseResults;
private final AggregatedDfs aggregatedDfs; private final AggregatedDfs aggregatedDfs;
@ -48,7 +48,7 @@ public class RankFeaturePhase extends SearchPhase {
RankFeaturePhase( RankFeaturePhase(
SearchPhaseResults<SearchPhaseResult> queryPhaseResults, SearchPhaseResults<SearchPhaseResult> queryPhaseResults,
AggregatedDfs aggregatedDfs, AggregatedDfs aggregatedDfs,
SearchPhaseContext context, AbstractSearchAsyncAction<?> context,
RankFeaturePhaseRankCoordinatorContext rankFeaturePhaseRankCoordinatorContext RankFeaturePhaseRankCoordinatorContext rankFeaturePhaseRankCoordinatorContext
) { ) {
super("rank-feature"); super("rank-feature");
@ -179,22 +179,25 @@ public class RankFeaturePhase extends SearchPhase {
RankFeaturePhaseRankCoordinatorContext rankFeaturePhaseRankCoordinatorContext, RankFeaturePhaseRankCoordinatorContext rankFeaturePhaseRankCoordinatorContext,
SearchPhaseController.ReducedQueryPhase reducedQueryPhase SearchPhaseController.ReducedQueryPhase reducedQueryPhase
) { ) {
ThreadedActionListener<RankFeatureDoc[]> rankResultListener = new ThreadedActionListener<>(context, new ActionListener<>() { ThreadedActionListener<RankFeatureDoc[]> rankResultListener = new ThreadedActionListener<>(
@Override context::execute,
public void onResponse(RankFeatureDoc[] docsWithUpdatedScores) { new ActionListener<>() {
RankFeatureDoc[] topResults = rankFeaturePhaseRankCoordinatorContext.rankAndPaginate(docsWithUpdatedScores); @Override
SearchPhaseController.ReducedQueryPhase reducedRankFeaturePhase = newReducedQueryPhaseResults( public void onResponse(RankFeatureDoc[] docsWithUpdatedScores) {
reducedQueryPhase, RankFeatureDoc[] topResults = rankFeaturePhaseRankCoordinatorContext.rankAndPaginate(docsWithUpdatedScores);
topResults SearchPhaseController.ReducedQueryPhase reducedRankFeaturePhase = newReducedQueryPhaseResults(
); reducedQueryPhase,
moveToNextPhase(rankPhaseResults, reducedRankFeaturePhase); topResults
} );
moveToNextPhase(rankPhaseResults, reducedRankFeaturePhase);
}
@Override @Override
public void onFailure(Exception e) { public void onFailure(Exception e) {
context.onPhaseFailure(RankFeaturePhase.this, "Computing updated ranks for results failed", e); context.onPhaseFailure(RankFeaturePhase.this, "Computing updated ranks for results failed", e);
}
} }
}); );
rankFeaturePhaseRankCoordinatorContext.computeRankScoresForGlobalResults( rankFeaturePhaseRankCoordinatorContext.computeRankScoresForGlobalResults(
rankPhaseResults.getAtomicArray().asList().stream().map(SearchPhaseResult::rankFeatureResult).toList(), rankPhaseResults.getAtomicArray().asList().stream().map(SearchPhaseResult::rankFeatureResult).toList(),
rankResultListener rankResultListener
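Because SearchPhaseContext (which extended Executor) is gone, the ThreadedActionListener above now receives the method reference context::execute, which still satisfies Executor as a single-method interface. The sketch below shows the general shape of such a listener, dispatching callbacks onto whatever Executor it is given; ThreadedListenerSketch is invented and only approximates the real ThreadedActionListener:

import java.util.concurrent.Executor;
import java.util.function.Consumer;

// Sketch of a "threaded" listener: callbacks are handed to an Executor rather than
// run on the calling thread. A method reference such as context::execute is enough
// to supply the Executor.
class ThreadedListenerSketch<T> {
    private final Executor executor;
    private final Consumer<T> onResponse;
    private final Consumer<Exception> onFailure;

    ThreadedListenerSketch(Executor executor, Consumer<T> onResponse, Consumer<Exception> onFailure) {
        this.executor = executor;
        this.onResponse = onResponse;
        this.onFailure = onFailure;
    }

    void respond(T value) {
        executor.execute(() -> onResponse.accept(value));
    }

    void fail(Exception e) {
        executor.execute(() -> onFailure.accept(e));
    }
}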


@ -74,7 +74,7 @@ abstract class SearchPhase implements CheckedRunnable<IOException> {
/** /**
* Releases shard targets that are not used in the docsIdsToLoad. * Releases shard targets that are not used in the docsIdsToLoad.
*/ */
protected void releaseIrrelevantSearchContext(SearchPhaseResult searchPhaseResult, SearchPhaseContext context) { protected void releaseIrrelevantSearchContext(SearchPhaseResult searchPhaseResult, AbstractSearchAsyncAction<?> context) {
// we only release search context that we did not fetch from, if we are not scrolling // we only release search context that we did not fetch from, if we are not scrolling
// or using a PIT and if it has at least one hit that didn't make it to the global topDocs // or using a PIT and if it has at least one hit that didn't make it to the global topDocs
if (searchPhaseResult == null) { if (searchPhaseResult == null) {


@ -1,130 +0,0 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the "Elastic License
* 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
* Public License v 1"; you may not use this file except in compliance with, at
* your election, the "Elastic License 2.0", the "GNU Affero General Public
* License v3.0 only", or the "Server Side Public License, v 1".
*/
package org.elasticsearch.action.search;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.action.OriginalIndices;
import org.elasticsearch.common.util.concurrent.AtomicArray;
import org.elasticsearch.core.Nullable;
import org.elasticsearch.core.Releasable;
import org.elasticsearch.search.SearchPhaseResult;
import org.elasticsearch.search.SearchShardTarget;
import org.elasticsearch.search.internal.ShardSearchContextId;
import org.elasticsearch.transport.Transport;
import java.util.concurrent.Executor;
import java.util.function.Supplier;
/**
* This class provides contextual state and access to resources across multiple search phases.
*/
interface SearchPhaseContext extends Executor {
// TODO maybe we can make this concrete later - for now we just implement this in the base class for all initial phases
/**
* Returns the total number of shards for the current search across all indices
*/
int getNumShards();
/**
* Returns a logger for this context to prevent each individual phase to create their own logger.
*/
Logger getLogger();
/**
* Returns the currently executing search task
*/
SearchTask getTask();
/**
* Returns the currently executing search request
*/
SearchRequest getRequest();
/**
* Returns the targeted {@link OriginalIndices} for the provided {@code shardIndex}.
*/
OriginalIndices getOriginalIndices(int shardIndex);
/**
* Checks if the given context id is part of the point in time of this search (if exists).
* We should not release search contexts that belong to the point in time during or after searches.
*/
boolean isPartOfPointInTime(ShardSearchContextId contextId);
/**
* Builds and sends the final search response back to the user.
*
* @param internalSearchResponse the internal search response
* @param queryResults the results of the query phase
*/
void sendSearchResponse(SearchResponseSections internalSearchResponse, AtomicArray<SearchPhaseResult> queryResults);
/**
* Notifies the top-level listener of the provided exception
*/
void onFailure(Exception e);
/**
* This method will communicate a fatal phase failure back to the user. In contrast to a shard failure,
* this method will immediately fail the search request and return the failure to the issuer of the request
* @param phase the phase that failed
* @param msg an optional message
* @param cause the cause of the phase failure
*/
void onPhaseFailure(SearchPhase phase, String msg, Throwable cause);
/**
* This method will record a shard failure for the given shard index. In contrast to a phase failure
* ({@link #onPhaseFailure(SearchPhase, String, Throwable)}) this method will immediately return to the user but will record
* a shard failure for the given shard index. This should be called if a shard failure happens after we successfully retrieved
* a result from that shard in a previous phase.
*/
void onShardFailure(int shardIndex, @Nullable SearchShardTarget shardTarget, Exception e);
/**
* Returns a connection to the node if connected, otherwise a {@link org.elasticsearch.transport.ConnectTransportException} will be
* thrown.
*/
Transport.Connection getConnection(String clusterAlias, String nodeId);
/**
* Returns the {@link SearchTransportService} to send shard requests to other nodes
*/
SearchTransportService getSearchTransport();
/**
* Releases a search context with the given context ID on the node the given connection is connected to.
* @see org.elasticsearch.search.query.QuerySearchResult#getContextId()
* @see org.elasticsearch.search.fetch.FetchSearchResult#getContextId()
*
*/
default void sendReleaseSearchContext(
ShardSearchContextId contextId,
Transport.Connection connection,
OriginalIndices originalIndices
) {
assert isPartOfPointInTime(contextId) == false : "Must not release point in time context [" + contextId + "]";
if (connection != null) {
getSearchTransport().sendFreeContext(connection, contextId, originalIndices);
}
}
/**
* Processes the phase transition from one phase to another. This method handles all errors that happen during the initial run execution
* of the next phase. If there are no successful operations in the context when this method is executed the search is aborted and
* a response is returned to the user indicating that all shards have failed.
*/
void executeNextPhase(SearchPhase currentPhase, Supplier<SearchPhase> nextPhaseSupplier);
/**
* Registers a {@link Releasable} that will be closed when the search request finishes or fails.
*/
void addReleasable(Releasable releasable);
}
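The interface deleted above documented the coordinator's phase-transition contract: a phase asks the context to run the next phase via a supplier, and the search is aborted when nothing has succeeded so far. A toy standalone sketch of that handoff, with all names invented and error handling reduced to exceptions:

import java.util.function.Supplier;

// Sketch of the phase-transition idea: each phase hands over a supplier for the next
// phase; the coordinator decides whether to run it or to abort when no shard-level
// operation has succeeded so far.
class PhaseRunnerSketch {
    interface Phase {
        String name();

        void run() throws Exception;
    }

    private int successfulOps;

    void onShardSuccess() {
        successfulOps++;
    }

    void executeNextPhase(Phase currentPhase, Supplier<Phase> nextPhaseSupplier) {
        if (successfulOps == 0) {
            throw new IllegalStateException("all shards failed during [" + currentPhase.name() + "]");
        }
        Phase nextPhase = nextPhaseSupplier.get();
        try {
            nextPhase.run();
        } catch (Exception e) {
            // errors during the initial run of the next phase fail the whole request
            throw new IllegalStateException("failure while running [" + nextPhase.name() + "]", e);
        }
    }
}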


@ -135,7 +135,7 @@ class SearchQueryThenFetchAsyncAction extends AbstractSearchAsyncAction<SearchPh
static SearchPhase nextPhase( static SearchPhase nextPhase(
Client client, Client client,
SearchPhaseContext context, AbstractSearchAsyncAction<?> context,
SearchPhaseResults<SearchPhaseResult> queryResults, SearchPhaseResults<SearchPhaseResult> queryResults,
AggregatedDfs aggregatedDfs AggregatedDfs aggregatedDfs
) { ) {


@ -21,8 +21,8 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress; import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.util.StringLiteralDeduplicator; import org.elasticsearch.common.util.StringLiteralDeduplicator;
import org.elasticsearch.core.Nullable; import org.elasticsearch.core.Nullable;
import org.elasticsearch.env.BuildVersion;
import org.elasticsearch.index.IndexVersion; import org.elasticsearch.index.IndexVersion;
import org.elasticsearch.index.IndexVersions;
import org.elasticsearch.node.Node; import org.elasticsearch.node.Node;
import org.elasticsearch.xcontent.ToXContentFragment; import org.elasticsearch.xcontent.ToXContentFragment;
import org.elasticsearch.xcontent.XContentBuilder; import org.elasticsearch.xcontent.XContentBuilder;
@ -33,7 +33,6 @@ import java.util.Comparator;
import java.util.Map; import java.util.Map;
import java.util.Objects; import java.util.Objects;
import java.util.Optional; import java.util.Optional;
import java.util.OptionalInt;
import java.util.Set; import java.util.Set;
import java.util.SortedSet; import java.util.SortedSet;
import java.util.TreeSet; import java.util.TreeSet;
@ -290,18 +289,6 @@ public class DiscoveryNode implements Writeable, ToXContentFragment {
return Set.copyOf(NODE_ROLES_SETTING.get(settings)); return Set.copyOf(NODE_ROLES_SETTING.get(settings));
} }
private static VersionInformation inferVersionInformation(Version version) {
if (version.before(Version.V_8_10_0)) {
return new VersionInformation(
version,
IndexVersion.getMinimumCompatibleIndexVersion(version.id),
IndexVersion.fromId(version.id)
);
} else {
return new VersionInformation(version, IndexVersions.MINIMUM_COMPATIBLE, IndexVersion.current());
}
}
private static final Writeable.Reader<String> readStringLiteral = s -> nodeStringDeduplicator.deduplicate(s.readString()); private static final Writeable.Reader<String> readStringLiteral = s -> nodeStringDeduplicator.deduplicate(s.readString());
/** /**
@ -338,11 +325,7 @@ public class DiscoveryNode implements Writeable, ToXContentFragment {
} }
} }
this.roles = Collections.unmodifiableSortedSet(roles); this.roles = Collections.unmodifiableSortedSet(roles);
if (in.getTransportVersion().onOrAfter(TransportVersions.V_8_10_X)) { versionInfo = new VersionInformation(Version.readVersion(in), IndexVersion.readVersion(in), IndexVersion.readVersion(in));
versionInfo = new VersionInformation(Version.readVersion(in), IndexVersion.readVersion(in), IndexVersion.readVersion(in));
} else {
versionInfo = inferVersionInformation(Version.readVersion(in));
}
if (in.getTransportVersion().onOrAfter(EXTERNAL_ID_VERSION)) { if (in.getTransportVersion().onOrAfter(EXTERNAL_ID_VERSION)) {
this.externalId = readStringLiteral.read(in); this.externalId = readStringLiteral.read(in);
} else { } else {
@ -375,13 +358,9 @@ public class DiscoveryNode implements Writeable, ToXContentFragment {
o.writeString(role.roleNameAbbreviation()); o.writeString(role.roleNameAbbreviation());
o.writeBoolean(role.canContainData()); o.writeBoolean(role.canContainData());
}); });
if (out.getTransportVersion().onOrAfter(TransportVersions.V_8_10_X)) { Version.writeVersion(versionInfo.nodeVersion(), out);
Version.writeVersion(versionInfo.nodeVersion(), out); IndexVersion.writeVersion(versionInfo.minIndexVersion(), out);
IndexVersion.writeVersion(versionInfo.minIndexVersion(), out); IndexVersion.writeVersion(versionInfo.maxIndexVersion(), out);
IndexVersion.writeVersion(versionInfo.maxIndexVersion(), out);
} else {
Version.writeVersion(versionInfo.nodeVersion(), out);
}
if (out.getTransportVersion().onOrAfter(EXTERNAL_ID_VERSION)) { if (out.getTransportVersion().onOrAfter(EXTERNAL_ID_VERSION)) {
out.writeString(externalId); out.writeString(externalId);
} }
@ -486,18 +465,13 @@ public class DiscoveryNode implements Writeable, ToXContentFragment {
return this.versionInfo; return this.versionInfo;
} }
public Version getVersion() { public BuildVersion getBuildVersion() {
return this.versionInfo.nodeVersion(); return versionInfo.buildVersion();
} }
public OptionalInt getPre811VersionId() { @Deprecated
// Even if Version is removed from this class completely it will need to read the version ID public Version getVersion() {
// off the wire for old node versions, so the value of this variable can be obtained from that return this.versionInfo.nodeVersion();
int versionId = versionInfo.nodeVersion().id;
if (versionId >= Version.V_8_11_0.id) {
return OptionalInt.empty();
}
return OptionalInt.of(versionId);
} }
public IndexVersion getMinIndexVersion() { public IndexVersion getMinIndexVersion() {
@ -564,7 +538,7 @@ public class DiscoveryNode implements Writeable, ToXContentFragment {
appendRoleAbbreviations(stringBuilder, ""); appendRoleAbbreviations(stringBuilder, "");
stringBuilder.append('}'); stringBuilder.append('}');
} }
stringBuilder.append('{').append(versionInfo.nodeVersion()).append('}'); stringBuilder.append('{').append(versionInfo.buildVersion()).append('}');
stringBuilder.append('{').append(versionInfo.minIndexVersion()).append('-').append(versionInfo.maxIndexVersion()).append('}'); stringBuilder.append('{').append(versionInfo.minIndexVersion()).append('-').append(versionInfo.maxIndexVersion()).append('}');
} }
@ -601,7 +575,7 @@ public class DiscoveryNode implements Writeable, ToXContentFragment {
builder.value(role.roleName()); builder.value(role.roleName());
} }
builder.endArray(); builder.endArray();
builder.field("version", versionInfo.nodeVersion()); builder.field("version", versionInfo.buildVersion().toString());
builder.field("min_index_version", versionInfo.minIndexVersion()); builder.field("min_index_version", versionInfo.minIndexVersion());
builder.field("max_index_version", versionInfo.maxIndexVersion()); builder.field("max_index_version", versionInfo.maxIndexVersion());
builder.endObject(); builder.endObject();
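With the pre-8.10 branches removed, the node's version information is always written in full, while newer fields such as the external id stay gated on the transport version of the connection. The standalone sketch below shows that gating pattern in isolation; the stream type, the version id constant and writeNode are all invented for illustration:

import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of version-gated wire serialization, mirroring the onOrAfter(...) checks
// above: newer fields are only written when the negotiated transport version
// supports them, so older peers simply never see them.
class VersionGatedWriterSketch {
    static final int EXTERNAL_ID_VERSION = 8_500_000; // made-up version id for the sketch

    static void writeNode(DataOutputStream out, int transportVersion, String nodeVersion, String externalId)
        throws IOException {
        out.writeUTF(nodeVersion); // always written in the current format
        if (transportVersion >= EXTERNAL_ID_VERSION) {
            out.writeUTF(externalId); // gated: readers on older versions fall back to a default
        }
    }
}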


@ -339,6 +339,13 @@ public class DiscoveryNodes implements Iterable<DiscoveryNode>, SimpleDiffable<D
return false; return false;
} }
/**
* {@code true} if this cluster consists of nodes with several release versions
*/
public boolean isMixedVersionCluster() {
return minNodeVersion.equals(maxNodeVersion) == false;
}
/** /**
* Returns the version of the node with the oldest version in the cluster that is not a client node * Returns the version of the node with the oldest version in the cluster that is not a client node
* *
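A typical use of a check like isMixedVersionCluster is to hold back behaviour that old and new nodes would disagree on until a rolling upgrade has finished. A hypothetical standalone sketch (every name here is invented):

import java.util.List;

// Sketch: detect a mixed-version cluster and gate behaviour on it.
class MixedVersionSketch {
    record Node(String id, String version) {}

    static boolean isMixedVersionCluster(List<Node> nodes) {
        // more than one distinct release version means the rolling upgrade is still in progress
        return nodes.stream().map(Node::version).distinct().count() > 1;
    }

    static boolean canEnableNewWireFormat(List<Node> nodes) {
        // only flip the switch once every node runs the same release
        return isMixedVersionCluster(nodes) == false;
    }
}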


@ -10,6 +10,7 @@
package org.elasticsearch.cluster.node; package org.elasticsearch.cluster.node;
import org.elasticsearch.Version; import org.elasticsearch.Version;
import org.elasticsearch.env.BuildVersion;
import org.elasticsearch.index.IndexVersion; import org.elasticsearch.index.IndexVersion;
import org.elasticsearch.index.IndexVersions; import org.elasticsearch.index.IndexVersions;
@ -17,18 +18,49 @@ import java.util.Objects;
/** /**
* Represents the versions of various aspects of an Elasticsearch node. * Represents the versions of various aspects of an Elasticsearch node.
* @param nodeVersion The node {@link Version} * @param buildVersion The node {@link BuildVersion}
* @param minIndexVersion The minimum {@link IndexVersion} supported by this node * @param minIndexVersion The minimum {@link IndexVersion} supported by this node
* @param maxIndexVersion The maximum {@link IndexVersion} supported by this node * @param maxIndexVersion The maximum {@link IndexVersion} supported by this node
*/ */
public record VersionInformation(Version nodeVersion, IndexVersion minIndexVersion, IndexVersion maxIndexVersion) { public record VersionInformation(
BuildVersion buildVersion,
Version nodeVersion,
IndexVersion minIndexVersion,
IndexVersion maxIndexVersion
) {
public static final VersionInformation CURRENT = new VersionInformation( public static final VersionInformation CURRENT = new VersionInformation(
Version.CURRENT, BuildVersion.current(),
IndexVersions.MINIMUM_COMPATIBLE, IndexVersions.MINIMUM_COMPATIBLE,
IndexVersion.current() IndexVersion.current()
); );
public VersionInformation {
Objects.requireNonNull(buildVersion);
Objects.requireNonNull(nodeVersion);
Objects.requireNonNull(minIndexVersion);
Objects.requireNonNull(maxIndexVersion);
}
public VersionInformation(BuildVersion version, IndexVersion minIndexVersion, IndexVersion maxIndexVersion) {
this(version, Version.CURRENT, minIndexVersion, maxIndexVersion);
/*
* Whilst DiscoveryNode.getVersion exists, we need to be able to get a Version from VersionInfo
* This needs to be consistent - on serverless, BuildVersion has an id of -1, which translates
* to a nonsensical Version. So all consumers of Version need to be moved to BuildVersion
* before we can remove Version from here.
*/
// for the moment, check this is only called with current() so the implied Version is correct
// TODO: work out what needs to happen for other versions. Maybe we can only remove this once the nodeVersion field is gone
assert version.equals(BuildVersion.current()) : version + " is not " + BuildVersion.current();
}
@Deprecated
public VersionInformation(Version version, IndexVersion minIndexVersion, IndexVersion maxIndexVersion) {
this(BuildVersion.fromVersionId(version.id()), version, minIndexVersion, maxIndexVersion);
}
@Deprecated
public static VersionInformation inferVersions(Version nodeVersion) { public static VersionInformation inferVersions(Version nodeVersion) {
if (nodeVersion == null) { if (nodeVersion == null) {
return null; return null;
@ -44,10 +76,4 @@ public record VersionInformation(Version nodeVersion, IndexVersion minIndexVersi
throw new IllegalArgumentException("Node versions can only be inferred before release version 8.10.0"); throw new IllegalArgumentException("Node versions can only be inferred before release version 8.10.0");
} }
} }
public VersionInformation {
Objects.requireNonNull(nodeVersion);
Objects.requireNonNull(minIndexVersion);
Objects.requireNonNull(maxIndexVersion);
}
} }
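The record above moves validation into the compact canonical constructor and layers convenience constructors on top for callers that only have part of the information. The following standalone sketch shows the same Java record pattern, with plain strings and ints standing in for BuildVersion, Version and IndexVersion:

import java.util.Objects;

// Sketch of the record pattern: the compact canonical constructor validates every
// component, and a convenience constructor derives missing components.
record VersionInfoSketch(String buildVersion, String nodeVersion, int minIndexVersion, int maxIndexVersion) {

    VersionInfoSketch {
        Objects.requireNonNull(buildVersion);
        Objects.requireNonNull(nodeVersion);
        if (minIndexVersion > maxIndexVersion) {
            throw new IllegalArgumentException("min index version must not exceed max index version");
        }
    }

    // Convenience constructor: derive the build version from the node version instead
    // of requiring callers to supply both (loosely mirroring the deprecated constructor above).
    VersionInfoSketch(String nodeVersion, int minIndexVersion, int maxIndexVersion) {
        this(nodeVersion, nodeVersion, minIndexVersion, maxIndexVersion);
    }
}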


@ -302,11 +302,12 @@ public abstract class AbstractFileWatchingService extends AbstractLifecycleCompo
void processSettingsOnServiceStartAndNotifyListeners() throws InterruptedException { void processSettingsOnServiceStartAndNotifyListeners() throws InterruptedException {
try { try {
processFileOnServiceStart(); processFileOnServiceStart();
for (var listener : eventListeners) {
listener.watchedFileChanged();
}
} catch (IOException | ExecutionException e) { } catch (IOException | ExecutionException e) {
logger.error(() -> "Error processing watched file: " + watchedFile(), e); onProcessFileChangesException(e);
return;
}
for (var listener : eventListeners) {
listener.watchedFileChanged();
} }
} }
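The reshuffled control flow above notifies listeners only after the watched file was processed successfully and routes failures to onProcessFileChangesException instead of logging and falling through. A standalone sketch of that flow; FileWatchSketch, processFile and the listener interface are invented placeholders:

import java.io.IOException;
import java.util.List;

// Sketch of the control flow: listeners are only told about a change that was
// actually applied; processing failures go to a dedicated hook and stop the flow.
class FileWatchSketch {
    interface ChangeListener {
        void watchedFileChanged();
    }

    private final List<ChangeListener> listeners;

    FileWatchSketch(List<ChangeListener> listeners) {
        this.listeners = listeners;
    }

    void processAndNotify() {
        try {
            processFile();
        } catch (IOException e) {
            onProcessFileChangesException(e);
            return; // do not notify listeners about a change that was never applied
        }
        for (ChangeListener listener : listeners) {
            listener.watchedFileChanged();
        }
    }

    void processFile() throws IOException {
        // placeholder for reading and applying the watched file
    }

    void onProcessFileChangesException(IOException e) {
        // placeholder: a subclass could log or escalate here
    }
}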


@ -968,15 +968,27 @@ public final class TextFieldMapper extends FieldMapper {
return fielddata; return fielddata;
} }
public boolean canUseSyntheticSourceDelegateForQuerying() { /**
* Returns true if the delegate sub-field can be used for loading and querying (i.e. either isIndexed or isStored is true)
*/
public boolean canUseSyntheticSourceDelegateForLoading() {
return syntheticSourceDelegate != null return syntheticSourceDelegate != null
&& syntheticSourceDelegate.ignoreAbove() == Integer.MAX_VALUE && syntheticSourceDelegate.ignoreAbove() == Integer.MAX_VALUE
&& (syntheticSourceDelegate.isIndexed() || syntheticSourceDelegate.isStored()); && (syntheticSourceDelegate.isIndexed() || syntheticSourceDelegate.isStored());
} }
/**
* Returns true if the delegate sub-field can be used for querying only (i.e. isIndexed must be true)
*/
public boolean canUseSyntheticSourceDelegateForQuerying() {
return syntheticSourceDelegate != null
&& syntheticSourceDelegate.ignoreAbove() == Integer.MAX_VALUE
&& syntheticSourceDelegate.isIndexed();
}
@Override @Override
public BlockLoader blockLoader(BlockLoaderContext blContext) { public BlockLoader blockLoader(BlockLoaderContext blContext) {
if (canUseSyntheticSourceDelegateForQuerying()) { if (canUseSyntheticSourceDelegateForLoading()) {
return new BlockLoader.Delegating(syntheticSourceDelegate.blockLoader(blContext)) { return new BlockLoader.Delegating(syntheticSourceDelegate.blockLoader(blContext)) {
@Override @Override
protected String delegatingTo() { protected String delegatingTo() {


@ -416,13 +416,18 @@ public class DenseVectorFieldMapper extends FieldMapper {
return VectorUtil.dotProduct(vectorData.asByteVector(), vectorData.asByteVector()); return VectorUtil.dotProduct(vectorData.asByteVector(), vectorData.asByteVector());
} }
private VectorData parseVectorArray(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException { private VectorData parseVectorArray(
DocumentParserContext context,
int dims,
IntBooleanConsumer dimChecker,
VectorSimilarity similarity
) throws IOException {
int index = 0; int index = 0;
byte[] vector = new byte[fieldMapper.fieldType().dims]; byte[] vector = new byte[dims];
float squaredMagnitude = 0; float squaredMagnitude = 0;
for (XContentParser.Token token = context.parser().nextToken(); token != Token.END_ARRAY; token = context.parser() for (XContentParser.Token token = context.parser().nextToken(); token != Token.END_ARRAY; token = context.parser()
.nextToken()) { .nextToken()) {
fieldMapper.checkDimensionExceeded(index, context); dimChecker.accept(index, false);
ensureExpectedToken(Token.VALUE_NUMBER, token, context.parser()); ensureExpectedToken(Token.VALUE_NUMBER, token, context.parser());
final int value; final int value;
if (context.parser().numberType() != XContentParser.NumberType.INT) { if (context.parser().numberType() != XContentParser.NumberType.INT) {
@ -460,30 +465,31 @@ public class DenseVectorFieldMapper extends FieldMapper {
vector[index++] = (byte) value; vector[index++] = (byte) value;
squaredMagnitude += value * value; squaredMagnitude += value * value;
} }
fieldMapper.checkDimensionMatches(index, context); dimChecker.accept(index, true);
checkVectorMagnitude(fieldMapper.fieldType().similarity, errorByteElementsAppender(vector), squaredMagnitude); checkVectorMagnitude(similarity, errorByteElementsAppender(vector), squaredMagnitude);
return VectorData.fromBytes(vector); return VectorData.fromBytes(vector);
} }
private VectorData parseHexEncodedVector(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException { private VectorData parseHexEncodedVector(
DocumentParserContext context,
IntBooleanConsumer dimChecker,
VectorSimilarity similarity
) throws IOException {
byte[] decodedVector = HexFormat.of().parseHex(context.parser().text()); byte[] decodedVector = HexFormat.of().parseHex(context.parser().text());
fieldMapper.checkDimensionMatches(decodedVector.length, context); dimChecker.accept(decodedVector.length, true);
VectorData vectorData = VectorData.fromBytes(decodedVector); VectorData vectorData = VectorData.fromBytes(decodedVector);
double squaredMagnitude = computeSquaredMagnitude(vectorData); double squaredMagnitude = computeSquaredMagnitude(vectorData);
checkVectorMagnitude( checkVectorMagnitude(similarity, errorByteElementsAppender(decodedVector), (float) squaredMagnitude);
fieldMapper.fieldType().similarity,
errorByteElementsAppender(decodedVector),
(float) squaredMagnitude
);
return vectorData; return vectorData;
} }
@Override @Override
VectorData parseKnnVector(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException { VectorData parseKnnVector(DocumentParserContext context, int dims, IntBooleanConsumer dimChecker, VectorSimilarity similarity)
throws IOException {
XContentParser.Token token = context.parser().currentToken(); XContentParser.Token token = context.parser().currentToken();
return switch (token) { return switch (token) {
case START_ARRAY -> parseVectorArray(context, fieldMapper); case START_ARRAY -> parseVectorArray(context, dims, dimChecker, similarity);
case VALUE_STRING -> parseHexEncodedVector(context, fieldMapper); case VALUE_STRING -> parseHexEncodedVector(context, dimChecker, similarity);
default -> throw new ParsingException( default -> throw new ParsingException(
context.parser().getTokenLocation(), context.parser().getTokenLocation(),
format("Unsupported type [%s] for provided value [%s]", token, context.parser().text()) format("Unsupported type [%s] for provided value [%s]", token, context.parser().text())
@ -493,7 +499,13 @@ public class DenseVectorFieldMapper extends FieldMapper {
@Override @Override
public void parseKnnVectorAndIndex(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException { public void parseKnnVectorAndIndex(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException {
VectorData vectorData = parseKnnVector(context, fieldMapper); VectorData vectorData = parseKnnVector(context, fieldMapper.fieldType().dims, (i, end) -> {
if (end) {
fieldMapper.checkDimensionMatches(i, context);
} else {
fieldMapper.checkDimensionExceeded(i, context);
}
}, fieldMapper.fieldType().similarity);
Field field = createKnnVectorField( Field field = createKnnVectorField(
fieldMapper.fieldType().name(), fieldMapper.fieldType().name(),
vectorData.asByteVector(), vectorData.asByteVector(),
@ -677,21 +689,22 @@ public class DenseVectorFieldMapper extends FieldMapper {
} }
@Override @Override
VectorData parseKnnVector(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException { VectorData parseKnnVector(DocumentParserContext context, int dims, IntBooleanConsumer dimChecker, VectorSimilarity similarity)
throws IOException {
int index = 0; int index = 0;
float squaredMagnitude = 0; float squaredMagnitude = 0;
float[] vector = new float[fieldMapper.fieldType().dims]; float[] vector = new float[dims];
for (Token token = context.parser().nextToken(); token != Token.END_ARRAY; token = context.parser().nextToken()) { for (Token token = context.parser().nextToken(); token != Token.END_ARRAY; token = context.parser().nextToken()) {
fieldMapper.checkDimensionExceeded(index, context); dimChecker.accept(index, false);
ensureExpectedToken(Token.VALUE_NUMBER, token, context.parser()); ensureExpectedToken(Token.VALUE_NUMBER, token, context.parser());
float value = context.parser().floatValue(true); float value = context.parser().floatValue(true);
vector[index] = value; vector[index] = value;
squaredMagnitude += value * value; squaredMagnitude += value * value;
index++; index++;
} }
fieldMapper.checkDimensionMatches(index, context); dimChecker.accept(index, true);
checkVectorBounds(vector); checkVectorBounds(vector);
checkVectorMagnitude(fieldMapper.fieldType().similarity, errorFloatElementsAppender(vector), squaredMagnitude); checkVectorMagnitude(similarity, errorFloatElementsAppender(vector), squaredMagnitude);
return VectorData.fromFloats(vector); return VectorData.fromFloats(vector);
} }
@ -816,12 +829,17 @@ public class DenseVectorFieldMapper extends FieldMapper {
return count; return count;
} }
private VectorData parseVectorArray(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException { private VectorData parseVectorArray(
DocumentParserContext context,
int dims,
IntBooleanConsumer dimChecker,
VectorSimilarity similarity
) throws IOException {
int index = 0; int index = 0;
byte[] vector = new byte[fieldMapper.fieldType().dims / Byte.SIZE]; byte[] vector = new byte[dims / Byte.SIZE];
for (XContentParser.Token token = context.parser().nextToken(); token != Token.END_ARRAY; token = context.parser() for (XContentParser.Token token = context.parser().nextToken(); token != Token.END_ARRAY; token = context.parser()
.nextToken()) { .nextToken()) {
fieldMapper.checkDimensionExceeded(index, context); dimChecker.accept(index * Byte.SIZE, false);
ensureExpectedToken(Token.VALUE_NUMBER, token, context.parser()); ensureExpectedToken(Token.VALUE_NUMBER, token, context.parser());
final int value; final int value;
if (context.parser().numberType() != XContentParser.NumberType.INT) { if (context.parser().numberType() != XContentParser.NumberType.INT) {
@ -856,35 +874,25 @@ public class DenseVectorFieldMapper extends FieldMapper {
+ "];" + "];"
); );
} }
if (index >= vector.length) {
throw new IllegalArgumentException(
"The number of dimensions for field ["
+ fieldMapper.fieldType().name()
+ "] should be ["
+ fieldMapper.fieldType().dims
+ "] but found ["
+ (index + 1) * Byte.SIZE
+ "]"
);
}
vector[index++] = (byte) value; vector[index++] = (byte) value;
} }
fieldMapper.checkDimensionMatches(index * Byte.SIZE, context); dimChecker.accept(index * Byte.SIZE, true);
return VectorData.fromBytes(vector); return VectorData.fromBytes(vector);
} }
private VectorData parseHexEncodedVector(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException { private VectorData parseHexEncodedVector(DocumentParserContext context, IntBooleanConsumer dimChecker) throws IOException {
byte[] decodedVector = HexFormat.of().parseHex(context.parser().text()); byte[] decodedVector = HexFormat.of().parseHex(context.parser().text());
fieldMapper.checkDimensionMatches(decodedVector.length * Byte.SIZE, context); dimChecker.accept(decodedVector.length * Byte.SIZE, true);
return VectorData.fromBytes(decodedVector); return VectorData.fromBytes(decodedVector);
} }
@Override @Override
VectorData parseKnnVector(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException { VectorData parseKnnVector(DocumentParserContext context, int dims, IntBooleanConsumer dimChecker, VectorSimilarity similarity)
throws IOException {
XContentParser.Token token = context.parser().currentToken(); XContentParser.Token token = context.parser().currentToken();
return switch (token) { return switch (token) {
case START_ARRAY -> parseVectorArray(context, fieldMapper); case START_ARRAY -> parseVectorArray(context, dims, dimChecker, similarity);
case VALUE_STRING -> parseHexEncodedVector(context, fieldMapper); case VALUE_STRING -> parseHexEncodedVector(context, dimChecker);
default -> throw new ParsingException( default -> throw new ParsingException(
context.parser().getTokenLocation(), context.parser().getTokenLocation(),
format("Unsupported type [%s] for provided value [%s]", token, context.parser().text()) format("Unsupported type [%s] for provided value [%s]", token, context.parser().text())
@ -894,7 +902,13 @@ public class DenseVectorFieldMapper extends FieldMapper {
@Override @Override
public void parseKnnVectorAndIndex(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException { public void parseKnnVectorAndIndex(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException {
VectorData vectorData = parseKnnVector(context, fieldMapper); VectorData vectorData = parseKnnVector(context, fieldMapper.fieldType().dims, (i, end) -> {
if (end) {
fieldMapper.checkDimensionMatches(i, context);
} else {
fieldMapper.checkDimensionExceeded(i, context);
}
}, fieldMapper.fieldType().similarity);
Field field = createKnnVectorField( Field field = createKnnVectorField(
fieldMapper.fieldType().name(), fieldMapper.fieldType().name(),
vectorData.asByteVector(), vectorData.asByteVector(),
@ -958,7 +972,12 @@ public class DenseVectorFieldMapper extends FieldMapper {
abstract void parseKnnVectorAndIndex(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException; abstract void parseKnnVectorAndIndex(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException;
abstract VectorData parseKnnVector(DocumentParserContext context, DenseVectorFieldMapper fieldMapper) throws IOException; abstract VectorData parseKnnVector(
DocumentParserContext context,
int dims,
IntBooleanConsumer dimChecker,
VectorSimilarity similarity
) throws IOException;
abstract int getNumBytes(int dimensions); abstract int getNumBytes(int dimensions);
@ -2180,7 +2199,13 @@ public class DenseVectorFieldMapper extends FieldMapper {
: elementType.getNumBytes(dims); : elementType.getNumBytes(dims);
ByteBuffer byteBuffer = elementType.createByteBuffer(indexCreatedVersion, numBytes); ByteBuffer byteBuffer = elementType.createByteBuffer(indexCreatedVersion, numBytes);
VectorData vectorData = elementType.parseKnnVector(context, this); VectorData vectorData = elementType.parseKnnVector(context, dims, (i, b) -> {
if (b) {
checkDimensionMatches(i, context);
} else {
checkDimensionExceeded(i, context);
}
}, fieldType().similarity);
vectorData.addToBuffer(byteBuffer); vectorData.addToBuffer(byteBuffer);
if (indexCreatedVersion.onOrAfter(MAGNITUDE_STORED_INDEX_VERSION)) { if (indexCreatedVersion.onOrAfter(MAGNITUDE_STORED_INDEX_VERSION)) {
// encode vector magnitude at the end // encode vector magnitude at the end
@ -2433,4 +2458,11 @@ public class DenseVectorFieldMapper extends FieldMapper {
return fullPath(); return fullPath();
} }
} }
/**
* A functional interface for a consumer that takes an int and a boolean
*/
interface IntBooleanConsumer {
void accept(int value, boolean isComplete);
}
} }
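The refactor above threads an IntBooleanConsumer ("dimChecker") through the vector parsers so the caller decides how to validate dimension counts, both mid-parse and at the end of a value, instead of the parser reaching back into the field mapper. A self-contained sketch of that callback shape (all names invented, float parsing simplified):

// Sketch of the callback-based dimension check: the parser reports progress through
// a single (count, isComplete) callback, so the same parsing code can serve different
// validators.
class DimCheckSketch {
    interface IntBooleanConsumer {
        void accept(int value, boolean isComplete);
    }

    static float[] parseFloats(float[] input, int dims, IntBooleanConsumer dimChecker) {
        float[] vector = new float[dims];
        int index = 0;
        for (float value : input) {
            dimChecker.accept(index, false); // mid-parse: fail fast once the limit is exceeded
            vector[index++] = value;
        }
        dimChecker.accept(index, true); // end of the value: the count must match exactly
        return vector;
    }

    public static void main(String[] args) {
        int dims = 3;
        parseFloats(new float[] { 1f, 2f, 3f }, dims, (i, end) -> {
            if (end ? i != dims : i >= dims) {
                throw new IllegalArgumentException("expected [" + dims + "] dimensions but saw [" + i + "]");
            }
        });
    }
}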


@ -0,0 +1,431 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the "Elastic License
* 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
* Public License v 1"; you may not use this file except in compliance with, at
* your election, the "Elastic License 2.0", the "GNU Affero General Public
* License v3.0 only", or the "Server Side Public License, v 1".
*/
package org.elasticsearch.index.mapper.vectors;
import org.apache.lucene.document.BinaryDocValuesField;
import org.apache.lucene.index.BinaryDocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.search.FieldExistsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.util.FeatureFlag;
import org.elasticsearch.common.xcontent.support.XContentMapValues;
import org.elasticsearch.index.IndexVersion;
import org.elasticsearch.index.fielddata.FieldDataContext;
import org.elasticsearch.index.fielddata.IndexFieldData;
import org.elasticsearch.index.mapper.ArraySourceValueFetcher;
import org.elasticsearch.index.mapper.DocumentParserContext;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.Mapper;
import org.elasticsearch.index.mapper.MapperBuilderContext;
import org.elasticsearch.index.mapper.MapperParsingException;
import org.elasticsearch.index.mapper.SimpleMappedFieldType;
import org.elasticsearch.index.mapper.SourceLoader;
import org.elasticsearch.index.mapper.TextSearchInfo;
import org.elasticsearch.index.mapper.ValueFetcher;
import org.elasticsearch.index.query.SearchExecutionContext;
import org.elasticsearch.search.DocValueFormat;
import org.elasticsearch.search.aggregations.support.CoreValuesSourceType;
import org.elasticsearch.search.vectors.VectorData;
import org.elasticsearch.xcontent.XContentBuilder;
import org.elasticsearch.xcontent.XContentParser;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.time.ZoneId;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import static org.elasticsearch.index.mapper.vectors.DenseVectorFieldMapper.MAX_DIMS_COUNT;
import static org.elasticsearch.index.mapper.vectors.DenseVectorFieldMapper.MAX_DIMS_COUNT_BIT;
import static org.elasticsearch.index.mapper.vectors.DenseVectorFieldMapper.namesToElementType;
public class MultiDenseVectorFieldMapper extends FieldMapper {
public static final String VECTOR_MAGNITUDES_SUFFIX = "._magnitude";
public static final FeatureFlag FEATURE_FLAG = new FeatureFlag("multi_dense_vector");
public static final String CONTENT_TYPE = "multi_dense_vector";
private static MultiDenseVectorFieldMapper toType(FieldMapper in) {
return (MultiDenseVectorFieldMapper) in;
}
public static class Builder extends FieldMapper.Builder {
private final Parameter<DenseVectorFieldMapper.ElementType> elementType = new Parameter<>(
"element_type",
false,
() -> DenseVectorFieldMapper.ElementType.FLOAT,
(n, c, o) -> {
DenseVectorFieldMapper.ElementType elementType = namesToElementType.get((String) o);
if (elementType == null) {
throw new MapperParsingException(
"invalid element_type [" + o + "]; available types are " + namesToElementType.keySet()
);
}
return elementType;
},
m -> toType(m).fieldType().elementType,
XContentBuilder::field,
Objects::toString
);
// This is defined as updatable because it can be updated once, from [null] to a valid dim size,
// by a dynamic mapping update. Once it has been set, however, the value cannot be changed.
private final Parameter<Integer> dims = new Parameter<>("dims", true, () -> null, (n, c, o) -> {
if (o instanceof Integer == false) {
throw new MapperParsingException("Property [dims] on field [" + n + "] must be an integer but got [" + o + "]");
}
return XContentMapValues.nodeIntegerValue(o);
}, m -> toType(m).fieldType().dims, XContentBuilder::field, Object::toString).setSerializerCheck((id, ic, v) -> v != null)
.setMergeValidator((previous, current, c) -> previous == null || Objects.equals(previous, current))
.addValidator(dims -> {
if (dims == null) {
return;
}
int maxDims = elementType.getValue() == DenseVectorFieldMapper.ElementType.BIT ? MAX_DIMS_COUNT_BIT : MAX_DIMS_COUNT;
int minDims = elementType.getValue() == DenseVectorFieldMapper.ElementType.BIT ? Byte.SIZE : 1;
if (dims < minDims || dims > maxDims) {
throw new MapperParsingException(
"The number of dimensions should be in the range [" + minDims + ", " + maxDims + "] but was [" + dims + "]"
);
}
if (elementType.getValue() == DenseVectorFieldMapper.ElementType.BIT) {
if (dims % Byte.SIZE != 0) {
throw new MapperParsingException("The number of dimensions for should be a multiple of 8 but was [" + dims + "]");
}
}
});
private final Parameter<Map<String, String>> meta = Parameter.metaParam();
private final IndexVersion indexCreatedVersion;
public Builder(String name, IndexVersion indexCreatedVersion) {
super(name);
this.indexCreatedVersion = indexCreatedVersion;
}
@Override
protected Parameter<?>[] getParameters() {
return new Parameter<?>[] { elementType, dims, meta };
}
public MultiDenseVectorFieldMapper.Builder dimensions(int dimensions) {
this.dims.setValue(dimensions);
return this;
}
public MultiDenseVectorFieldMapper.Builder elementType(DenseVectorFieldMapper.ElementType elementType) {
this.elementType.setValue(elementType);
return this;
}
@Override
public MultiDenseVectorFieldMapper build(MapperBuilderContext context) {
// Validate again here because the dimensions or element type could have been set programmatically,
// which affects index option validity
validate();
return new MultiDenseVectorFieldMapper(
leafName(),
new MultiDenseVectorFieldType(
context.buildFullName(leafName()),
elementType.getValue(),
dims.getValue(),
indexCreatedVersion,
meta.getValue()
),
builderParams(this, context),
indexCreatedVersion
);
}
}
public static final TypeParser PARSER = new TypeParser(
(n, c) -> new MultiDenseVectorFieldMapper.Builder(n, c.indexVersionCreated()),
notInMultiFields(CONTENT_TYPE)
);
public static final class MultiDenseVectorFieldType extends SimpleMappedFieldType {
private final DenseVectorFieldMapper.ElementType elementType;
private final Integer dims;
private final IndexVersion indexCreatedVersion;
public MultiDenseVectorFieldType(
String name,
DenseVectorFieldMapper.ElementType elementType,
Integer dims,
IndexVersion indexCreatedVersion,
Map<String, String> meta
) {
super(name, false, false, true, TextSearchInfo.NONE, meta);
this.elementType = elementType;
this.dims = dims;
this.indexCreatedVersion = indexCreatedVersion;
}
@Override
public String typeName() {
return CONTENT_TYPE;
}
@Override
public ValueFetcher valueFetcher(SearchExecutionContext context, String format) {
if (format != null) {
throw new IllegalArgumentException("Field [" + name() + "] of type [" + typeName() + "] doesn't support formats.");
}
return new ArraySourceValueFetcher(name(), context) {
@Override
protected Object parseSourceValue(Object value) {
return value;
}
};
}
@Override
public DocValueFormat docValueFormat(String format, ZoneId timeZone) {
throw new IllegalArgumentException(
"Field [" + name() + "] of type [" + typeName() + "] doesn't support docvalue_fields or aggregations"
);
}
@Override
public boolean isAggregatable() {
return false;
}
@Override
public IndexFieldData.Builder fielddataBuilder(FieldDataContext fieldDataContext) {
return new MultiVectorIndexFieldData.Builder(name(), CoreValuesSourceType.KEYWORD, indexCreatedVersion, dims, elementType);
}
@Override
public Query existsQuery(SearchExecutionContext context) {
return new FieldExistsQuery(name());
}
@Override
public Query termQuery(Object value, SearchExecutionContext context) {
throw new IllegalArgumentException("Field [" + name() + "] of type [" + typeName() + "] doesn't support term queries");
}
int getVectorDimensions() {
return dims;
}
DenseVectorFieldMapper.ElementType getElementType() {
return elementType;
}
}
private final IndexVersion indexCreatedVersion;
private MultiDenseVectorFieldMapper(
String simpleName,
MappedFieldType fieldType,
BuilderParams params,
IndexVersion indexCreatedVersion
) {
super(simpleName, fieldType, params);
this.indexCreatedVersion = indexCreatedVersion;
}
@Override
public MultiDenseVectorFieldType fieldType() {
return (MultiDenseVectorFieldType) super.fieldType();
}
@Override
public boolean parsesArrayValue() {
return true;
}
@Override
public void parse(DocumentParserContext context) throws IOException {
if (context.doc().getByKey(fieldType().name()) != null) {
throw new IllegalArgumentException(
"Field ["
+ fullPath()
+ "] of type ["
+ typeName()
+ "] doesn't support indexing multiple values for the same field in the same document"
);
}
if (XContentParser.Token.VALUE_NULL == context.parser().currentToken()) {
return;
}
if (XContentParser.Token.START_ARRAY != context.parser().currentToken()) {
throw new IllegalArgumentException(
"Field [" + fullPath() + "] of type [" + typeName() + "] cannot be indexed with a single value"
);
}
if (fieldType().dims == null) {
int currentDims = -1;
while (XContentParser.Token.END_ARRAY != context.parser().nextToken()) {
int dims = fieldType().elementType.parseDimensionCount(context);
if (currentDims == -1) {
currentDims = dims;
} else if (currentDims != dims) {
throw new IllegalArgumentException(
"Field [" + fullPath() + "] of type [" + typeName() + "] cannot be indexed with vectors of different dimensions"
);
}
}
MultiDenseVectorFieldType updatedFieldType = new MultiDenseVectorFieldType(
fieldType().name(),
fieldType().elementType,
currentDims,
indexCreatedVersion,
fieldType().meta()
);
Mapper update = new MultiDenseVectorFieldMapper(leafName(), updatedFieldType, builderParams, indexCreatedVersion);
context.addDynamicMapper(update);
return;
}
int dims = fieldType().dims;
DenseVectorFieldMapper.ElementType elementType = fieldType().elementType;
List<VectorData> vectors = new ArrayList<>();
while (XContentParser.Token.END_ARRAY != context.parser().nextToken()) {
VectorData vector = elementType.parseKnnVector(context, dims, (i, b) -> {
if (b) {
checkDimensionMatches(i, context);
} else {
checkDimensionExceeded(i, context);
}
}, null);
vectors.add(vector);
}
int bufferSize = elementType.getNumBytes(dims) * vectors.size();
ByteBuffer buffer = ByteBuffer.allocate(bufferSize).order(ByteOrder.LITTLE_ENDIAN);
ByteBuffer magnitudeBuffer = ByteBuffer.allocate(vectors.size() * Float.BYTES).order(ByteOrder.LITTLE_ENDIAN);
for (VectorData vector : vectors) {
vector.addToBuffer(buffer);
magnitudeBuffer.putFloat((float) Math.sqrt(elementType.computeSquaredMagnitude(vector)));
}
String vectorFieldName = fieldType().name();
String vectorMagnitudeFieldName = vectorFieldName + VECTOR_MAGNITUDES_SUFFIX;
context.doc().addWithKey(vectorFieldName, new BinaryDocValuesField(vectorFieldName, new BytesRef(buffer.array())));
context.doc()
.addWithKey(
vectorMagnitudeFieldName,
new BinaryDocValuesField(vectorMagnitudeFieldName, new BytesRef(magnitudeBuffer.array()))
);
}
private void checkDimensionExceeded(int index, DocumentParserContext context) {
if (index >= fieldType().dims) {
throw new IllegalArgumentException(
"The ["
+ typeName()
+ "] field ["
+ fullPath()
+ "] in doc ["
+ context.documentDescription()
+ "] has more dimensions "
+ "than defined in the mapping ["
+ fieldType().dims
+ "]"
);
}
}
private void checkDimensionMatches(int index, DocumentParserContext context) {
if (index != fieldType().dims) {
throw new IllegalArgumentException(
"The ["
+ typeName()
+ "] field ["
+ fullPath()
+ "] in doc ["
+ context.documentDescription()
+ "] has a different number of dimensions "
+ "["
+ index
+ "] than defined in the mapping ["
+ fieldType().dims
+ "]"
);
}
}
@Override
protected void parseCreateField(DocumentParserContext context) {
throw new AssertionError("parse is implemented directly");
}
@Override
protected String contentType() {
return CONTENT_TYPE;
}
@Override
public FieldMapper.Builder getMergeBuilder() {
return new MultiDenseVectorFieldMapper.Builder(leafName(), indexCreatedVersion).init(this);
}
@Override
protected SyntheticSourceSupport syntheticSourceSupport() {
return new SyntheticSourceSupport.Native(new MultiDenseVectorFieldMapper.DocValuesSyntheticFieldLoader());
}
private class DocValuesSyntheticFieldLoader extends SourceLoader.DocValuesBasedSyntheticFieldLoader {
private BinaryDocValues values;
private boolean hasValue;
@Override
public DocValuesLoader docValuesLoader(LeafReader leafReader, int[] docIdsInLeaf) throws IOException {
values = leafReader.getBinaryDocValues(fullPath());
if (values == null) {
return null;
}
return docId -> {
hasValue = docId == values.advance(docId);
return hasValue;
};
}
@Override
public boolean hasValue() {
return hasValue;
}
@Override
public void write(XContentBuilder b) throws IOException {
if (false == hasValue) {
return;
}
b.startArray(leafName());
BytesRef ref = values.binaryValue();
ByteBuffer byteBuffer = ByteBuffer.wrap(ref.bytes, ref.offset, ref.length).order(ByteOrder.LITTLE_ENDIAN);
assert ref.length % fieldType().elementType.getNumBytes(fieldType().dims) == 0;
int numVecs = ref.length / fieldType().elementType.getNumBytes(fieldType().dims);
for (int i = 0; i < numVecs; i++) {
b.startArray();
int dims = fieldType().elementType == DenseVectorFieldMapper.ElementType.BIT
? fieldType().dims / Byte.SIZE
: fieldType().dims;
for (int dim = 0; dim < dims; dim++) {
fieldType().elementType.readAndWriteValue(byteBuffer, b);
}
b.endArray();
}
b.endArray();
}
@Override
public String fieldName() {
return fullPath();
}
}
}
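For readers skimming the mapper above, the following is a minimal standalone sketch (illustrative class and method names only, not part of this commit) of the doc-values layout that parse() writes: every vector of a document is appended little-endian into one binary field, and a parallel binary field (the VECTOR_MAGNITUDES_SUFFIX field) stores one float magnitude per vector.
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.List;

class MultiVectorLayoutSketch {
    // Pack float vectors the way the mapper does: little-endian, vectors back to back.
    static ByteBuffer packVectors(List<float[]> vectors, int dims) {
        ByteBuffer buffer = ByteBuffer.allocate(Float.BYTES * dims * vectors.size()).order(ByteOrder.LITTLE_ENDIAN);
        for (float[] vector : vectors) {
            for (float value : vector) {
                buffer.putFloat(value);
            }
        }
        return buffer;
    }

    // One float magnitude per vector, stored in a parallel buffer.
    static ByteBuffer packMagnitudes(List<float[]> vectors) {
        ByteBuffer magnitudes = ByteBuffer.allocate(Float.BYTES * vectors.size()).order(ByteOrder.LITTLE_ENDIAN);
        for (float[] vector : vectors) {
            double squared = 0;
            for (float value : vector) {
                squared += value * value;
            }
            magnitudes.putFloat((float) Math.sqrt(squared));
        }
        return magnitudes;
    }

    public static void main(String[] args) {
        List<float[]> vectors = List.of(new float[] { 1f, 2f, 3f, 4f }, new float[] { 0.5f, 0.5f, 0.5f, 0.5f });
        System.out.println(packVectors(vectors, 4).capacity());   // 32 bytes: 2 vectors x 4 dims x 4 bytes
        System.out.println(packMagnitudes(vectors).capacity());   // 8 bytes: 2 magnitudes x 4 bytes
    }
}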

View file

@ -0,0 +1,54 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the "Elastic License
* 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
* Public License v 1"; you may not use this file except in compliance with, at
* your election, the "Elastic License 2.0", the "GNU Affero General Public
* License v3.0 only", or the "Server Side Public License, v 1".
*/
package org.elasticsearch.index.mapper.vectors;
import org.apache.lucene.index.LeafReader;
import org.elasticsearch.index.IndexVersion;
import org.elasticsearch.index.fielddata.LeafFieldData;
import org.elasticsearch.index.fielddata.SortedBinaryDocValues;
import org.elasticsearch.script.field.DocValuesScriptFieldFactory;
final class MultiVectorDVLeafFieldData implements LeafFieldData {
private final LeafReader reader;
private final String field;
private final IndexVersion indexVersion;
private final DenseVectorFieldMapper.ElementType elementType;
private final int dims;
MultiVectorDVLeafFieldData(
LeafReader reader,
String field,
IndexVersion indexVersion,
DenseVectorFieldMapper.ElementType elementType,
int dims
) {
this.reader = reader;
this.field = field;
this.indexVersion = indexVersion;
this.elementType = elementType;
this.dims = dims;
}
@Override
public DocValuesScriptFieldFactory getScriptFieldFactory(String name) {
// TODO
return null;
}
@Override
public SortedBinaryDocValues getBytesValues() {
throw new UnsupportedOperationException("String representation of doc values for multi-vector fields is not supported");
}
@Override
public long ramBytesUsed() {
return 0;
}
}

View file

@ -0,0 +1,114 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the "Elastic License
* 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
* Public License v 1"; you may not use this file except in compliance with, at
* your election, the "Elastic License 2.0", the "GNU Affero General Public
* License v3.0 only", or the "Server Side Public License, v 1".
*/
package org.elasticsearch.index.mapper.vectors;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.SortField;
import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.index.IndexVersion;
import org.elasticsearch.index.fielddata.IndexFieldData;
import org.elasticsearch.index.fielddata.IndexFieldDataCache;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
import org.elasticsearch.search.DocValueFormat;
import org.elasticsearch.search.MultiValueMode;
import org.elasticsearch.search.aggregations.support.ValuesSourceType;
import org.elasticsearch.search.sort.BucketedSort;
import org.elasticsearch.search.sort.SortOrder;
public class MultiVectorIndexFieldData implements IndexFieldData<MultiVectorDVLeafFieldData> {
protected final String fieldName;
protected final ValuesSourceType valuesSourceType;
private final int dims;
private final IndexVersion indexVersion;
private final DenseVectorFieldMapper.ElementType elementType;
public MultiVectorIndexFieldData(
String fieldName,
int dims,
ValuesSourceType valuesSourceType,
IndexVersion indexVersion,
DenseVectorFieldMapper.ElementType elementType
) {
this.fieldName = fieldName;
this.valuesSourceType = valuesSourceType;
this.indexVersion = indexVersion;
this.elementType = elementType;
this.dims = dims;
}
@Override
public String getFieldName() {
return fieldName;
}
@Override
public ValuesSourceType getValuesSourceType() {
return valuesSourceType;
}
@Override
public MultiVectorDVLeafFieldData load(LeafReaderContext context) {
return new MultiVectorDVLeafFieldData(context.reader(), fieldName, indexVersion, elementType, dims);
}
@Override
public MultiVectorDVLeafFieldData loadDirect(LeafReaderContext context) throws Exception {
return load(context);
}
@Override
public SortField sortField(Object missingValue, MultiValueMode sortMode, XFieldComparatorSource.Nested nested, boolean reverse) {
throw new IllegalArgumentException(
"Field [" + fieldName + "] of type [" + MultiDenseVectorFieldMapper.CONTENT_TYPE + "] doesn't support sort"
);
}
@Override
public BucketedSort newBucketedSort(
BigArrays bigArrays,
Object missingValue,
MultiValueMode sortMode,
XFieldComparatorSource.Nested nested,
SortOrder sortOrder,
DocValueFormat format,
int bucketSize,
BucketedSort.ExtraData extra
) {
throw new IllegalArgumentException("only supported on numeric fields");
}
public static class Builder implements IndexFieldData.Builder {
private final String name;
private final ValuesSourceType valuesSourceType;
private final IndexVersion indexVersion;
private final int dims;
private final DenseVectorFieldMapper.ElementType elementType;
public Builder(
String name,
ValuesSourceType valuesSourceType,
IndexVersion indexVersion,
int dims,
DenseVectorFieldMapper.ElementType elementType
) {
this.name = name;
this.valuesSourceType = valuesSourceType;
this.indexVersion = indexVersion;
this.dims = dims;
this.elementType = elementType;
}
@Override
public IndexFieldData<?> build(IndexFieldDataCache cache, CircuitBreakerService breakerService) {
return new MultiVectorIndexFieldData(name, dims, valuesSourceType, indexVersion, elementType);
}
}
}

View file

@ -67,6 +67,7 @@ import org.elasticsearch.index.mapper.TimeSeriesRoutingHashFieldMapper;
import org.elasticsearch.index.mapper.VersionFieldMapper;
import org.elasticsearch.index.mapper.flattened.FlattenedFieldMapper;
import org.elasticsearch.index.mapper.vectors.DenseVectorFieldMapper;
import org.elasticsearch.index.mapper.vectors.MultiDenseVectorFieldMapper;
import org.elasticsearch.index.mapper.vectors.SparseVectorFieldMapper;
import org.elasticsearch.index.seqno.RetentionLeaseBackgroundSyncAction;
import org.elasticsearch.index.seqno.RetentionLeaseSyncAction;
@ -210,6 +211,9 @@ public class IndicesModule extends AbstractModule {
mappers.put(DenseVectorFieldMapper.CONTENT_TYPE, DenseVectorFieldMapper.PARSER);
mappers.put(SparseVectorFieldMapper.CONTENT_TYPE, SparseVectorFieldMapper.PARSER);
if (MultiDenseVectorFieldMapper.FEATURE_FLAG.isEnabled()) {
mappers.put(MultiDenseVectorFieldMapper.CONTENT_TYPE, MultiDenseVectorFieldMapper.PARSER);
}
for (MapperPlugin mapperPlugin : mapperPlugins) {
for (Map.Entry<String, Mapper.TypeParser> entry : mapperPlugin.getMappers().entrySet()) {

View file

@ -15,12 +15,14 @@ import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateListener;
import org.elasticsearch.cluster.NotMasterException;
import org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException;
import org.elasticsearch.cluster.metadata.Metadata;
import org.elasticsearch.cluster.metadata.ReservedStateMetadata;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.file.MasterNodeFileWatchingService;
import org.elasticsearch.env.Environment;
import org.elasticsearch.xcontent.XContentParseException;
import org.elasticsearch.xcontent.XContentParserConfiguration;
import java.io.BufferedInputStream;
@ -146,11 +148,20 @@ public class FileSettingsService extends MasterNodeFileWatchingService implement
@Override
protected void onProcessFileChangesException(Exception e) {
-if (e instanceof ExecutionException && e.getCause() instanceof FailedToCommitClusterStateException f) {
-logger.error("Unable to commit cluster state", e);
-} else {
-super.onProcessFileChangesException(e);
+if (e instanceof ExecutionException) {
+var cause = e.getCause();
+if (cause instanceof FailedToCommitClusterStateException) {
+logger.error("Unable to commit cluster state", e);
+return;
+} else if (cause instanceof XContentParseException) {
+logger.error("Unable to parse settings", e);
+return;
+} else if (cause instanceof NotMasterException) {
+logger.error("Node is no longer master", e);
+return;
+}
}
+super.onProcessFileChangesException(e);
}
@Override

View file

@ -367,7 +367,7 @@ public class RestNodesAction extends AbstractCatAction {
table.addCell("-"); table.addCell("-");
} }
table.addCell(node.getVersion().toString()); table.addCell(node.getBuildVersion().toString());
table.addCell(info == null ? null : info.getBuild().type().displayName()); table.addCell(info == null ? null : info.getBuild().type().displayName());
table.addCell(info == null ? null : info.getBuild().hash()); table.addCell(info == null ? null : info.getBuild().hash());
table.addCell(jvmInfo == null ? null : jvmInfo.version()); table.addCell(jvmInfo == null ? null : jvmInfo.version());

View file

@ -140,7 +140,7 @@ public class RestTasksAction extends AbstractCatAction {
table.addCell(node == null ? "-" : node.getHostAddress()); table.addCell(node == null ? "-" : node.getHostAddress());
table.addCell(node.getAddress().address().getPort()); table.addCell(node.getAddress().address().getPort());
table.addCell(node == null ? "-" : node.getName()); table.addCell(node == null ? "-" : node.getName());
table.addCell(node == null ? "-" : node.getVersion().toString()); table.addCell(node == null ? "-" : node.getBuildVersion().toString());
table.addCell(taskInfo.headers().getOrDefault(Task.X_OPAQUE_ID_HTTP_HEADER, "-")); table.addCell(taskInfo.headers().getOrDefault(Task.X_OPAQUE_ID_HTTP_HEADER, "-"));
if (detailed) { if (detailed) {

View file

@ -9,6 +9,10 @@
package org.elasticsearch.rest.action.search;
import org.elasticsearch.Build;
import org.elasticsearch.index.mapper.vectors.MultiDenseVectorFieldMapper;
import java.util.HashSet;
import java.util.Set;
/**
@ -28,12 +32,25 @@ public final class SearchCapabilities {
private static final String DENSE_VECTOR_DOCVALUE_FIELDS = "dense_vector_docvalue_fields";
/** Support transforming rank rrf queries to the corresponding rrf retriever. */
private static final String TRANSFORM_RANK_RRF_TO_RETRIEVER = "transform_rank_rrf_to_retriever";
+/** Support kql query. */
+private static final String KQL_QUERY_SUPPORTED = "kql_query";
+/** Support multi-dense-vector field mapper. */
+private static final String MULTI_DENSE_VECTOR_FIELD_MAPPER = "multi_dense_vector_field_mapper";
-public static final Set<String> CAPABILITIES = Set.of(
-RANGE_REGEX_INTERVAL_QUERY_CAPABILITY,
-BIT_DENSE_VECTOR_SYNTHETIC_SOURCE_CAPABILITY,
-BYTE_FLOAT_BIT_DOT_PRODUCT_CAPABILITY,
-DENSE_VECTOR_DOCVALUE_FIELDS,
-TRANSFORM_RANK_RRF_TO_RETRIEVER
-);
+public static final Set<String> CAPABILITIES;
+static {
+HashSet<String> capabilities = new HashSet<>();
+capabilities.add(RANGE_REGEX_INTERVAL_QUERY_CAPABILITY);
+capabilities.add(BIT_DENSE_VECTOR_SYNTHETIC_SOURCE_CAPABILITY);
+capabilities.add(BYTE_FLOAT_BIT_DOT_PRODUCT_CAPABILITY);
+capabilities.add(DENSE_VECTOR_DOCVALUE_FIELDS);
+capabilities.add(TRANSFORM_RANK_RRF_TO_RETRIEVER);
+if (MultiDenseVectorFieldMapper.FEATURE_FLAG.isEnabled()) {
+capabilities.add(MULTI_DENSE_VECTOR_FIELD_MAPPER);
+}
+if (Build.current().isSnapshot()) {
+capabilities.add(KQL_QUERY_SUPPORTED);
+}
+CAPABILITIES = Set.copyOf(capabilities);
+}
}

View file

@ -9,6 +9,7 @@
package org.elasticsearch.search.aggregations;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.DelayableWriteable;
import org.elasticsearch.common.io.stream.NamedWriteable;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
@ -51,7 +52,12 @@ public abstract class InternalAggregation implements Aggregation, NamedWriteable
* Read from a stream.
*/
protected InternalAggregation(StreamInput in) throws IOException {
-name = in.readString();
+final String name = in.readString();
+if (in instanceof DelayableWriteable.Deduplicator d) {
+this.name = d.deduplicate(name);
+} else {
+this.name = name;
+}
metadata = in.readGenericMap();
}
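The constructor change above deduplicates aggregation names during deserialization: when the stream is a DelayableWriteable.Deduplicator, repeated equal names resolve to one shared String instance. Below is a minimal map-backed sketch of that idea; the class is illustrative only, not the interface used in the commit.
import java.util.HashMap;
import java.util.Map;

class NameDeduplicatorSketch {
    private final Map<String, String> seen = new HashMap<>();

    // Returns a canonical instance for equal strings, so many deserialized copies share memory.
    String deduplicate(String name) {
        return seen.computeIfAbsent(name, n -> n);
    }

    public static void main(String[] args) {
        NameDeduplicatorSketch dedup = new NameDeduplicatorSketch();
        String first = dedup.deduplicate(new String("terms_agg"));
        String second = dedup.deduplicate(new String("terms_agg"));
        System.out.println(first == second);   // true: both are the canonical instance
    }
}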

View file

@ -13,6 +13,7 @@ import org.apache.lucene.tests.analysis.MockTokenFilter;
import org.apache.lucene.tests.analysis.MockTokenizer;
import org.apache.lucene.util.automaton.Automata;
import org.apache.lucene.util.automaton.CharacterRunAutomaton;
import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.action.admin.indices.analyze.AnalyzeAction;
import org.elasticsearch.action.admin.indices.analyze.TransportAnalyzeAction;
import org.elasticsearch.cluster.metadata.IndexMetadata;
@ -460,8 +461,8 @@ public class TransportAnalyzeActionTests extends ESTestCase {
AnalyzeAction.Request request = new AnalyzeAction.Request();
request.text(text);
request.analyzer("standard");
-IllegalStateException e = expectThrows(
-IllegalStateException.class,
+ElasticsearchStatusException e = expectThrows(
+ElasticsearchStatusException.class,
() -> TransportAnalyzeAction.analyze(request, registry, null, maxTokenCount)
);
assertEquals(
@ -477,8 +478,8 @@ public class TransportAnalyzeActionTests extends ESTestCase {
request2.text(text);
request2.analyzer("standard");
request2.explain(true);
-IllegalStateException e2 = expectThrows(
-IllegalStateException.class,
+ElasticsearchStatusException e2 = expectThrows(
+ElasticsearchStatusException.class,
() -> TransportAnalyzeAction.analyze(request2, registry, null, maxTokenCount)
);
assertEquals(
@ -506,8 +507,8 @@ public class TransportAnalyzeActionTests extends ESTestCase {
AnalyzeAction.Request request = new AnalyzeAction.Request();
request.text(text);
request.analyzer("standard");
-IllegalStateException e = expectThrows(
-IllegalStateException.class,
+ElasticsearchStatusException e = expectThrows(
+ElasticsearchStatusException.class,
() -> TransportAnalyzeAction.analyze(request, registry, null, idxMaxTokenCount)
);
assertEquals(

View file

@ -28,6 +28,7 @@ import org.elasticsearch.cluster.routing.allocation.DataTier;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.UUIDs;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.core.Tuple;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.IndexVersion;
import org.elasticsearch.index.mapper.DateFieldMapper;
@ -68,6 +69,7 @@ import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.BiConsumer;
@ -77,6 +79,7 @@ import static org.elasticsearch.action.search.SearchAsyncActionTests.getShardsIt
import static org.elasticsearch.core.Types.forciblyCast;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.greaterThan;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.lessThanOrEqualTo;
import static org.mockito.Mockito.mock;
@ -1087,6 +1090,137 @@ public class CanMatchPreFilterSearchPhaseTests extends ESTestCase {
);
}
public void testCanMatchFilteringOnCoordinatorWithMissingShards() throws Exception {
// we'll test that we're executing _tier coordinator rewrite for indices (data stream backing or regular) without any @timestamp
// or event.ingested fields
// for both data stream backing and regular indices we'll have one index in hot and one UNASSIGNED (targeting warm though).
// the warm indices will be skipped as our queries will filter based on _tier: hot, and the can match phase will not report an error for the
// missing index even if allow_partial_search_results is false (because the warm index would not have been part of the search anyway)
Map<Index, Settings.Builder> indexNameToSettings = new HashMap<>();
ClusterState state = ClusterState.EMPTY_STATE;
String dataStreamName = randomAlphaOfLengthBetween(10, 20);
Index warmDataStreamIndex = new Index(DataStream.getDefaultBackingIndexName(dataStreamName, 1), UUIDs.base64UUID());
indexNameToSettings.put(
warmDataStreamIndex,
settings(IndexVersion.current()).put(IndexMetadata.SETTING_INDEX_UUID, warmDataStreamIndex.getUUID())
.put(DataTier.TIER_PREFERENCE, "data_warm,data_hot")
);
Index hotDataStreamIndex = new Index(DataStream.getDefaultBackingIndexName(dataStreamName, 2), UUIDs.base64UUID());
indexNameToSettings.put(
hotDataStreamIndex,
settings(IndexVersion.current()).put(IndexMetadata.SETTING_INDEX_UUID, hotDataStreamIndex.getUUID())
.put(DataTier.TIER_PREFERENCE, "data_hot")
);
DataStream dataStream = DataStreamTestHelper.newInstance(dataStreamName, List.of(warmDataStreamIndex, hotDataStreamIndex));
Index warmRegularIndex = new Index("warm-index", UUIDs.base64UUID());
indexNameToSettings.put(
warmRegularIndex,
settings(IndexVersion.current()).put(IndexMetadata.SETTING_INDEX_UUID, warmRegularIndex.getUUID())
.put(DataTier.TIER_PREFERENCE, "data_warm,data_hot")
);
Index hotRegularIndex = new Index("hot-index", UUIDs.base64UUID());
indexNameToSettings.put(
hotRegularIndex,
settings(IndexVersion.current()).put(IndexMetadata.SETTING_INDEX_UUID, hotRegularIndex.getUUID())
.put(DataTier.TIER_PREFERENCE, "data_hot")
);
List<Index> allIndices = new ArrayList<>(4);
allIndices.addAll(dataStream.getIndices());
allIndices.add(warmRegularIndex);
allIndices.add(hotRegularIndex);
List<Index> hotIndices = List.of(hotRegularIndex, hotDataStreamIndex);
List<Index> warmIndices = List.of(warmRegularIndex, warmDataStreamIndex);
for (Index index : allIndices) {
IndexMetadata.Builder indexMetadataBuilder = IndexMetadata.builder(index.getName())
.settings(indexNameToSettings.get(index))
.numberOfShards(1)
.numberOfReplicas(0);
Metadata.Builder metadataBuilder = Metadata.builder(state.metadata()).put(indexMetadataBuilder);
state = ClusterState.builder(state).metadata(metadataBuilder).build();
}
ClusterState finalState = state;
CoordinatorRewriteContextProvider coordinatorRewriteContextProvider = new CoordinatorRewriteContextProvider(
parserConfig(),
mock(Client.class),
System::currentTimeMillis,
() -> finalState,
(index) -> null
);
BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery()
.filter(QueryBuilders.termQuery(CoordinatorRewriteContext.TIER_FIELD_NAME, "data_hot"));
{
// test that a search doesn't fail if the query filters out the unassigned shards
// via _tier (coordinator rewrite will eliminate the shards that don't match)
assignShardsAndExecuteCanMatchPhase(
List.of(dataStream),
List.of(hotRegularIndex, warmRegularIndex),
coordinatorRewriteContextProvider,
boolQueryBuilder,
List.of(),
null,
warmIndices,
false,
(updatedSearchShardIterators, requests) -> {
var skippedShards = updatedSearchShardIterators.stream().filter(SearchShardIterator::skip).toList();
var nonSkippedShards = updatedSearchShardIterators.stream()
.filter(searchShardIterator -> searchShardIterator.skip() == false)
.toList();
boolean allSkippedShardAreFromWarmIndices = skippedShards.stream()
.allMatch(shardIterator -> warmIndices.contains(shardIterator.shardId().getIndex()));
assertThat(allSkippedShardAreFromWarmIndices, equalTo(true));
boolean allNonSkippedShardAreHotIndices = nonSkippedShards.stream()
.allMatch(shardIterator -> hotIndices.contains(shardIterator.shardId().getIndex()));
assertThat(allNonSkippedShardAreHotIndices, equalTo(true));
boolean allRequestMadeToHotIndices = requests.stream()
.allMatch(request -> hotIndices.contains(request.shardId().getIndex()));
assertThat(allRequestMadeToHotIndices, equalTo(true));
}
);
}
{
// test that a search does fail if the query does NOT filter ALL the
// unassigned shards
CountDownLatch latch = new CountDownLatch(1);
Tuple<CanMatchPreFilterSearchPhase, List<ShardSearchRequest>> canMatchPhaseAndRequests = getCanMatchPhaseAndRequests(
List.of(dataStream),
List.of(hotRegularIndex, warmRegularIndex),
coordinatorRewriteContextProvider,
boolQueryBuilder,
List.of(),
null,
List.of(hotRegularIndex, warmRegularIndex, warmDataStreamIndex),
false,
new ActionListener<>() {
@Override
public void onResponse(GroupShardsIterator<SearchShardIterator> searchShardIterators) {
fail(null, "unexpected success with result [%s] while expecting to handle failure with [%s]", searchShardIterators);
latch.countDown();
}
@Override
public void onFailure(Exception e) {
assertThat(e, instanceOf(SearchPhaseExecutionException.class));
latch.countDown();
}
}
);
canMatchPhaseAndRequests.v1().start();
latch.await(10, TimeUnit.SECONDS);
}
}
private void assertAllShardsAreQueried(List<SearchShardIterator> updatedSearchShardIterators, List<ShardSearchRequest> requests) {
int skippedShards = (int) updatedSearchShardIterators.stream().filter(SearchShardIterator::skip).count();
@ -1111,6 +1245,69 @@ public class CanMatchPreFilterSearchPhaseTests extends ESTestCase {
SuggestBuilder suggest,
BiConsumer<List<SearchShardIterator>, List<ShardSearchRequest>> canMatchResultsConsumer
) throws Exception {
assignShardsAndExecuteCanMatchPhase(
dataStreams,
regularIndices,
contextProvider,
query,
aggregations,
suggest,
List.of(),
true,
canMatchResultsConsumer
);
}
private void assignShardsAndExecuteCanMatchPhase(
List<DataStream> dataStreams,
List<Index> regularIndices,
CoordinatorRewriteContextProvider contextProvider,
QueryBuilder query,
List<AggregationBuilder> aggregations,
SuggestBuilder suggest,
List<Index> unassignedIndices,
boolean allowPartialResults,
BiConsumer<List<SearchShardIterator>, List<ShardSearchRequest>> canMatchResultsConsumer
) throws Exception {
AtomicReference<GroupShardsIterator<SearchShardIterator>> result = new AtomicReference<>();
CountDownLatch latch = new CountDownLatch(1);
Tuple<CanMatchPreFilterSearchPhase, List<ShardSearchRequest>> canMatchAndShardRequests = getCanMatchPhaseAndRequests(
dataStreams,
regularIndices,
contextProvider,
query,
aggregations,
suggest,
unassignedIndices,
allowPartialResults,
ActionTestUtils.assertNoFailureListener(iter -> {
result.set(iter);
latch.countDown();
})
);
canMatchAndShardRequests.v1().start();
latch.await();
List<SearchShardIterator> updatedSearchShardIterators = new ArrayList<>();
for (SearchShardIterator updatedSearchShardIterator : result.get()) {
updatedSearchShardIterators.add(updatedSearchShardIterator);
}
canMatchResultsConsumer.accept(updatedSearchShardIterators, canMatchAndShardRequests.v2());
}
private Tuple<CanMatchPreFilterSearchPhase, List<ShardSearchRequest>> getCanMatchPhaseAndRequests(
List<DataStream> dataStreams,
List<Index> regularIndices,
CoordinatorRewriteContextProvider contextProvider,
QueryBuilder query,
List<AggregationBuilder> aggregations,
SuggestBuilder suggest,
List<Index> unassignedIndices,
boolean allowPartialResults,
ActionListener<GroupShardsIterator<SearchShardIterator>> canMatchActionListener
) {
Map<String, Transport.Connection> lookup = new ConcurrentHashMap<>();
DiscoveryNode primaryNode = DiscoveryNodeUtils.create("node_1");
DiscoveryNode replicaNode = DiscoveryNodeUtils.create("node_2");
@ -1136,23 +1333,31 @@ public class CanMatchPreFilterSearchPhaseTests extends ESTestCase {
// and none is assigned, the phase is considered as failed meaning that the next phase won't be executed
boolean withAssignedPrimaries = randomBoolean() || atLeastOnePrimaryAssigned == false;
int numShards = randomIntBetween(1, 6);
-originalShardIters.addAll(
-getShardsIter(dataStreamIndex, originalIndices, numShards, false, withAssignedPrimaries ? primaryNode : null, null)
-);
-atLeastOnePrimaryAssigned |= withAssignedPrimaries;
+if (unassignedIndices.contains(dataStreamIndex)) {
+originalShardIters.addAll(getShardsIter(dataStreamIndex, originalIndices, numShards, false, null, null));
+} else {
+originalShardIters.addAll(
+getShardsIter(dataStreamIndex, originalIndices, numShards, false, withAssignedPrimaries ? primaryNode : null, null)
+);
+atLeastOnePrimaryAssigned |= withAssignedPrimaries;
+}
}
}
for (Index regularIndex : regularIndices) {
-originalShardIters.addAll(
-getShardsIter(regularIndex, originalIndices, randomIntBetween(1, 6), randomBoolean(), primaryNode, replicaNode)
-);
+if (unassignedIndices.contains(regularIndex)) {
+originalShardIters.addAll(getShardsIter(regularIndex, originalIndices, randomIntBetween(1, 6), false, null, null));
+} else {
+originalShardIters.addAll(
+getShardsIter(regularIndex, originalIndices, randomIntBetween(1, 6), randomBoolean(), primaryNode, replicaNode)
+);
+}
}
GroupShardsIterator<SearchShardIterator> shardsIter = GroupShardsIterator.sortAndCreate(originalShardIters);
final SearchRequest searchRequest = new SearchRequest();
searchRequest.indices(indices);
-searchRequest.allowPartialSearchResults(true);
+searchRequest.allowPartialSearchResults(allowPartialResults);
final AliasFilter aliasFilter;
if (aggregations.isEmpty() == false || randomBoolean()) {
@ -1212,35 +1417,24 @@ public class CanMatchPreFilterSearchPhaseTests extends ESTestCase {
);
AtomicReference<GroupShardsIterator<SearchShardIterator>> result = new AtomicReference<>();
-CountDownLatch latch = new CountDownLatch(1);
-CanMatchPreFilterSearchPhase canMatchPhase = new CanMatchPreFilterSearchPhase(
+return new Tuple<>(
+new CanMatchPreFilterSearchPhase(
logger,
searchTransportService,
(clusterAlias, node) -> lookup.get(node),
aliasFilters,
Collections.emptyMap(),
threadPool.executor(ThreadPool.Names.SEARCH_COORDINATION),
searchRequest,
shardsIter,
timeProvider,
null,
true,
contextProvider,
-ActionTestUtils.assertNoFailureListener(iter -> {
-result.set(iter);
-latch.countDown();
-})
+canMatchActionListener
+),
+requests
);
-canMatchPhase.start();
-latch.await();
-List<SearchShardIterator> updatedSearchShardIterators = new ArrayList<>();
-for (SearchShardIterator updatedSearchShardIterator : result.get()) {
-updatedSearchShardIterators.add(updatedSearchShardIterator);
-}
-canMatchResultsConsumer.accept(updatedSearchShardIterators, requests);
}
static class StaticCoordinatorRewriteContextProviderBuilder {

View file

@ -93,6 +93,7 @@ public class CountedCollectorTests extends ESTestCase {
for (int i = numResultsExpected; i < results.length(); i++) {
assertNull("index: " + i, results.get(i));
}
context.results.close();
}
}
}

View file

@ -134,7 +134,7 @@ public class DfsQueryPhaseTests extends ESTestCase {
new NoopCircuitBreaker(CircuitBreaker.REQUEST),
() -> false,
SearchProgressListener.NOOP,
-mockSearchPhaseContext.searchRequest,
+mockSearchPhaseContext.getRequest(),
results.length(),
exc -> {}
)
@ -159,6 +159,7 @@ public class DfsQueryPhaseTests extends ESTestCase {
assertEquals(84, responseRef.get().get(1).queryResult().topDocs().topDocs.scoreDocs[0].doc);
assertTrue(mockSearchPhaseContext.releasedSearchContexts.isEmpty());
assertEquals(2, mockSearchPhaseContext.numSuccess.get());
mockSearchPhaseContext.results.close();
}
}
@ -219,7 +220,7 @@ public class DfsQueryPhaseTests extends ESTestCase {
new NoopCircuitBreaker(CircuitBreaker.REQUEST),
() -> false,
SearchProgressListener.NOOP,
-mockSearchPhaseContext.searchRequest,
+mockSearchPhaseContext.getRequest(),
results.length(),
exc -> {}
)
@ -246,6 +247,7 @@ public class DfsQueryPhaseTests extends ESTestCase {
assertEquals(1, mockSearchPhaseContext.releasedSearchContexts.size());
assertTrue(mockSearchPhaseContext.releasedSearchContexts.contains(new ShardSearchContextId("", 2L)));
assertNull(responseRef.get().get(1));
mockSearchPhaseContext.results.close();
}
}
@ -306,7 +308,7 @@ public class DfsQueryPhaseTests extends ESTestCase {
new NoopCircuitBreaker(CircuitBreaker.REQUEST),
() -> false,
SearchProgressListener.NOOP,
-mockSearchPhaseContext.searchRequest,
+mockSearchPhaseContext.getRequest(),
results.length(),
exc -> {}
)
@ -322,6 +324,7 @@ public class DfsQueryPhaseTests extends ESTestCase {
assertThat(mockSearchPhaseContext.failures, hasSize(1));
assertThat(mockSearchPhaseContext.failures.get(0).getCause(), instanceOf(UncheckedIOException.class));
assertThat(mockSearchPhaseContext.releasedSearchContexts, hasSize(1)); // phase execution will clean up on the contexts
mockSearchPhaseContext.results.close();
}
}
@ -371,6 +374,7 @@ public class DfsQueryPhaseTests extends ESTestCase {
ssr.source().subSearches().get(2).getQueryBuilder()
)
);
mspc.results.close();
}
private SearchPhaseController searchPhaseController() {

View file

@ -229,7 +229,7 @@ public class ExpandSearchPhaseTests extends ESTestCase {
assertNotNull(mockSearchPhaseContext.phaseFailure.get());
assertNull(mockSearchPhaseContext.searchResponse.get());
} finally {
-mockSearchPhaseContext.execute(() -> {});
+mockSearchPhaseContext.results.close();
hits.decRef();
collapsedHits.decRef();
}
@ -269,7 +269,7 @@ public class ExpandSearchPhaseTests extends ESTestCase {
hits.decRef();
}
} finally {
-mockSearchPhaseContext.execute(() -> {});
+mockSearchPhaseContext.results.close();
var resp = mockSearchPhaseContext.searchResponse.get();
if (resp != null) {
resp.decRef();
@ -356,6 +356,7 @@ public class ExpandSearchPhaseTests extends ESTestCase {
hits.decRef();
}
} finally {
mockSearchPhaseContext.results.close();
var resp = mockSearchPhaseContext.searchResponse.get();
if (resp != null) {
resp.decRef();
@ -407,6 +408,7 @@ public class ExpandSearchPhaseTests extends ESTestCase {
hits.decRef();
}
} finally {
mockSearchPhaseContext.results.close();
var resp = mockSearchPhaseContext.searchResponse.get();
if (resp != null) {
resp.decRef();

View file

@ -57,7 +57,6 @@ public class FetchLookupFieldsPhaseTests extends ESTestCase {
}
searchPhaseContext.assertNoFailure();
assertNotNull(searchPhaseContext.searchResponse.get());
searchPhaseContext.execute(() -> {});
} finally {
var resp = searchPhaseContext.searchResponse.get();
if (resp != null) {
@ -225,8 +224,8 @@ public class FetchLookupFieldsPhaseTests extends ESTestCase {
leftHit1.field("lookup_field_3").getValues(), leftHit1.field("lookup_field_3").getValues(),
contains(Map.of("field_a", List.of("a2"), "field_b", List.of("b1", "b2"))) contains(Map.of("field_a", List.of("a2"), "field_b", List.of("b1", "b2")))
); );
searchPhaseContext.execute(() -> {});
} finally { } finally {
searchPhaseContext.results.close();
var resp = searchPhaseContext.searchResponse.get(); var resp = searchPhaseContext.searchResponse.get();
if (resp != null) { if (resp != null) {
resp.decRef(); resp.decRef();

View file

@ -123,6 +123,7 @@ public class FetchSearchPhaseTests extends ESTestCase {
assertProfiles(profiled, 1, searchResponse);
assertTrue(mockSearchPhaseContext.releasedSearchContexts.isEmpty());
} finally {
mockSearchPhaseContext.results.close();
var resp = mockSearchPhaseContext.searchResponse.get();
if (resp != null) {
resp.decRef();
@ -252,6 +253,7 @@ public class FetchSearchPhaseTests extends ESTestCase {
assertProfiles(profiled, 2, searchResponse);
assertTrue(mockSearchPhaseContext.releasedSearchContexts.isEmpty());
} finally {
mockSearchPhaseContext.results.close();
var resp = mockSearchPhaseContext.searchResponse.get();
if (resp != null) {
resp.decRef();

View file

@ -10,12 +10,15 @@ package org.elasticsearch.action.search;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.OriginalIndices;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.routing.GroupShardsIterator;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
import org.elasticsearch.common.util.concurrent.AtomicArray;
import org.elasticsearch.core.Nullable;
import org.elasticsearch.core.Releasable;
import org.elasticsearch.core.Releasables;
import org.elasticsearch.search.SearchPhaseResult;
import org.elasticsearch.search.SearchShardTarget;
@ -32,23 +35,41 @@ import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;
import static org.mockito.Mockito.mock;
/**
* SearchPhaseContext for tests
*/
-public final class MockSearchPhaseContext implements SearchPhaseContext {
+public final class MockSearchPhaseContext extends AbstractSearchAsyncAction<SearchPhaseResult> {
private static final Logger logger = LogManager.getLogger(MockSearchPhaseContext.class);
-final AtomicReference<Throwable> phaseFailure = new AtomicReference<>();
+public final AtomicReference<Throwable> phaseFailure = new AtomicReference<>();
final int numShards;
final AtomicInteger numSuccess;
-final List<ShardSearchFailure> failures = Collections.synchronizedList(new ArrayList<>());
+public final List<ShardSearchFailure> failures = Collections.synchronizedList(new ArrayList<>());
SearchTransportService searchTransport;
final Set<ShardSearchContextId> releasedSearchContexts = new HashSet<>();
-final SearchRequest searchRequest = new SearchRequest();
-final AtomicReference<SearchResponse> searchResponse = new AtomicReference<>();
-private final List<Releasable> releasables = new ArrayList<>();
+public final AtomicReference<SearchResponse> searchResponse = new AtomicReference<>();
public MockSearchPhaseContext(int numShards) {
super(
"mock",
logger,
new NamedWriteableRegistry(List.of()),
mock(SearchTransportService.class),
(clusterAlias, nodeId) -> null,
null,
null,
Runnable::run,
new SearchRequest(),
ActionListener.noop(),
new GroupShardsIterator<SearchShardIterator>(List.of()),
null,
ClusterState.EMPTY_STATE,
new SearchTask(0, "n/a", "n/a", () -> "test", null, Collections.emptyMap()),
new ArraySearchPhaseResults<>(numShards),
5,
null
);
this.numShards = numShards;
numSuccess = new AtomicInteger(numShards);
}
@ -59,28 +80,9 @@ public final class MockSearchPhaseContext implements SearchPhaseContext {
}
}
@Override
public int getNumShards() {
return numShards;
}
@Override
public Logger getLogger() {
return logger;
}
@Override
public SearchTask getTask() {
return new SearchTask(0, "n/a", "n/a", () -> "test", null, Collections.emptyMap());
}
@Override
public SearchRequest getRequest() {
return searchRequest;
}
@Override
public OriginalIndices getOriginalIndices(int shardIndex) {
var searchRequest = getRequest();
return new OriginalIndices(searchRequest.indices(), searchRequest.indicesOptions());
}
@ -122,8 +124,8 @@ public final class MockSearchPhaseContext implements SearchPhaseContext {
}
@Override
-public Transport.Connection getConnection(String clusterAlias, String nodeId) {
-return null; // null is ok here for this test
+protected SearchPhase getNextPhase() {
+return null;
}
@Override
@ -143,13 +145,13 @@ public final class MockSearchPhaseContext implements SearchPhaseContext {
}
@Override
-public void addReleasable(Releasable releasable) {
-releasables.add(releasable);
-}
-@Override
-public void execute(Runnable command) {
-command.run();
+protected void executePhaseOnShard(
+SearchShardIterator shardIt,
+SearchShardTarget shard,
+SearchActionListener<SearchPhaseResult> listener
+) {
+onShardResult(new SearchPhaseResult() {
+}, shardIt);
}
@Override

View file

@ -155,6 +155,7 @@ public class RankFeaturePhaseTests extends ESTestCase {
rankFeaturePhase.rankPhaseResults.close();
}
} finally {
mockSearchPhaseContext.results.close();
if (mockSearchPhaseContext.searchResponse.get() != null) {
mockSearchPhaseContext.searchResponse.get().decRef();
}
@ -281,6 +282,7 @@ public class RankFeaturePhaseTests extends ESTestCase {
rankFeaturePhase.rankPhaseResults.close();
}
} finally {
mockSearchPhaseContext.results.close();
if (mockSearchPhaseContext.searchResponse.get() != null) {
mockSearchPhaseContext.searchResponse.get().decRef();
}
@ -385,6 +387,7 @@ public class RankFeaturePhaseTests extends ESTestCase {
rankFeaturePhase.rankPhaseResults.close();
}
} finally {
mockSearchPhaseContext.results.close();
if (mockSearchPhaseContext.searchResponse.get() != null) {
mockSearchPhaseContext.searchResponse.get().decRef();
}
@ -480,6 +483,7 @@ public class RankFeaturePhaseTests extends ESTestCase {
rankFeaturePhase.rankPhaseResults.close();
}
} finally {
mockSearchPhaseContext.results.close();
if (mockSearchPhaseContext.searchResponse.get() != null) {
mockSearchPhaseContext.searchResponse.get().decRef();
}
@ -626,6 +630,7 @@ public class RankFeaturePhaseTests extends ESTestCase {
rankFeaturePhase.rankPhaseResults.close();
}
} finally {
mockSearchPhaseContext.results.close();
if (mockSearchPhaseContext.searchResponse.get() != null) {
mockSearchPhaseContext.searchResponse.get().decRef();
}
@ -762,6 +767,7 @@ public class RankFeaturePhaseTests extends ESTestCase {
rankFeaturePhase.rankPhaseResults.close();
}
} finally {
mockSearchPhaseContext.results.close();
if (mockSearchPhaseContext.searchResponse.get() != null) {
mockSearchPhaseContext.searchResponse.get().decRef();
}

View file

@ -576,7 +576,7 @@ public class TransportMasterNodeActionTests extends ESTestCase {
// simulate master restart followed by a state recovery - this will reset the cluster state version
final DiscoveryNodes.Builder nodesBuilder = DiscoveryNodes.builder(clusterService.state().nodes());
nodesBuilder.remove(masterNode);
-masterNode = DiscoveryNodeUtils.create(masterNode.getId(), masterNode.getAddress(), masterNode.getVersion());
+masterNode = DiscoveryNodeUtils.create(masterNode.getId(), masterNode.getAddress(), masterNode.getVersionInformation());
nodesBuilder.add(masterNode);
nodesBuilder.masterNodeId(masterNode.getId());
final ClusterState.Builder builder = ClusterState.builder(clusterService.state()).nodes(nodesBuilder);

View file

@ -9,7 +9,6 @@
package org.elasticsearch.cluster.metadata;
import org.elasticsearch.TransportVersion;
import org.elasticsearch.Version;
import org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteRequest;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.support.ActiveShardCount;
@ -98,15 +97,11 @@ public class AutoExpandReplicasTests extends ESTestCase {
private static final AtomicInteger nodeIdGenerator = new AtomicInteger();
-protected DiscoveryNode createNode(Version version, DiscoveryNodeRole... mustHaveRoles) {
+protected DiscoveryNode createNode(DiscoveryNodeRole... mustHaveRoles) {
Set<DiscoveryNodeRole> roles = new HashSet<>(randomSubsetOf(DiscoveryNodeRole.roles()));
Collections.addAll(roles, mustHaveRoles);
final String id = Strings.format("node_%03d", nodeIdGenerator.incrementAndGet());
-return DiscoveryNodeUtils.builder(id).name(id).roles(roles).version(version).build();
+return DiscoveryNodeUtils.builder(id).name(id).roles(roles).build();
-}
-protected DiscoveryNode createNode(DiscoveryNodeRole... mustHaveRoles) {
-return createNode(Version.CURRENT, mustHaveRoles);
}
/**

View file

@ -247,7 +247,7 @@ public class DiscoveryNodeTests extends ESTestCase {
assertThat(toString, containsString("{" + node.getEphemeralId() + "}")); assertThat(toString, containsString("{" + node.getEphemeralId() + "}"));
assertThat(toString, containsString("{" + node.getAddress() + "}")); assertThat(toString, containsString("{" + node.getAddress() + "}"));
assertThat(toString, containsString("{IScdfhilmrstvw}"));// roles assertThat(toString, containsString("{IScdfhilmrstvw}"));// roles
assertThat(toString, containsString("{" + node.getVersion() + "}")); assertThat(toString, containsString("{" + node.getBuildVersion() + "}"));
assertThat(toString, containsString("{test-attr=val}"));// attributes assertThat(toString, containsString("{test-attr=val}"));// attributes
} }
} }

View file

@ -130,8 +130,7 @@ public class FailedNodeRoutingTests extends ESAllocationTestCase {
// Log the node versions (for debugging if necessary)
for (DiscoveryNode discoveryNode : state.nodes().getDataNodes().values()) {
-Version nodeVer = discoveryNode.getVersion();
-logger.info("--> node [{}] has version [{}]", discoveryNode.getId(), nodeVer);
+logger.info("--> node [{}] has version [{}]", discoveryNode.getId(), discoveryNode.getBuildVersion());
}
// randomly create some indices

View file

@@ -0,0 +1,506 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the "Elastic License
* 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
* Public License v 1"; you may not use this file except in compliance with, at
* your election, the "Elastic License 2.0", the "GNU Affero General Public
* License v3.0 only", or the "Server Side Public License, v 1".
*/
package org.elasticsearch.index.mapper.vectors;
import org.apache.lucene.document.BinaryDocValuesField;
import org.apache.lucene.index.IndexableField;
import org.apache.lucene.search.FieldExistsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.index.IndexVersion;
import org.elasticsearch.index.mapper.DocumentMapper;
import org.elasticsearch.index.mapper.DocumentParsingException;
import org.elasticsearch.index.mapper.LuceneDocument;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MapperParsingException;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.MapperTestCase;
import org.elasticsearch.index.mapper.ParsedDocument;
import org.elasticsearch.index.mapper.SourceToParse;
import org.elasticsearch.index.mapper.ValueFetcher;
import org.elasticsearch.index.mapper.vectors.DenseVectorFieldMapper.ElementType;
import org.elasticsearch.index.query.SearchExecutionContext;
import org.elasticsearch.search.lookup.Source;
import org.elasticsearch.search.lookup.SourceProvider;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xcontent.XContentBuilder;
import org.junit.AssumptionViolatedException;
import org.junit.BeforeClass;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.stream.Stream;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.instanceOf;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
public class MultiDenseVectorFieldMapperTests extends MapperTestCase {
@BeforeClass
public static void setup() {
assumeTrue("Requires multi-dense vector support", MultiDenseVectorFieldMapper.FEATURE_FLAG.isEnabled());
}
private final ElementType elementType;
private final int dims;
public MultiDenseVectorFieldMapperTests() {
this.elementType = randomFrom(ElementType.BYTE, ElementType.FLOAT, ElementType.BIT);
this.dims = ElementType.BIT == elementType ? 4 * Byte.SIZE : 4;
}
@Override
protected void minimalMapping(XContentBuilder b) throws IOException {
indexMapping(b, IndexVersion.current());
}
@Override
protected void minimalMapping(XContentBuilder b, IndexVersion indexVersion) throws IOException {
indexMapping(b, indexVersion);
}
private void indexMapping(XContentBuilder b, IndexVersion indexVersion) throws IOException {
b.field("type", "multi_dense_vector").field("dims", dims);
if (elementType != ElementType.FLOAT) {
b.field("element_type", elementType.toString());
}
}
@Override
protected Object getSampleValueForDocument() {
int numVectors = randomIntBetween(1, 16);
return Stream.generate(
() -> elementType == ElementType.FLOAT ? List.of(0.5, 0.5, 0.5, 0.5) : List.of((byte) 1, (byte) 1, (byte) 1, (byte) 1)
).limit(numVectors).toList();
}
@Override
protected void registerParameters(ParameterChecker checker) throws IOException {
checker.registerConflictCheck(
"dims",
fieldMapping(b -> b.field("type", "multi_dense_vector").field("dims", dims)),
fieldMapping(b -> b.field("type", "multi_dense_vector").field("dims", dims + 8))
);
checker.registerConflictCheck(
"element_type",
fieldMapping(b -> b.field("type", "multi_dense_vector").field("dims", dims).field("element_type", "byte")),
fieldMapping(b -> b.field("type", "multi_dense_vector").field("dims", dims).field("element_type", "float"))
);
checker.registerConflictCheck(
"element_type",
fieldMapping(b -> b.field("type", "multi_dense_vector").field("dims", dims).field("element_type", "float")),
fieldMapping(b -> b.field("type", "multi_dense_vector").field("dims", dims * 8).field("element_type", "bit"))
);
checker.registerConflictCheck(
"element_type",
fieldMapping(b -> b.field("type", "multi_dense_vector").field("dims", dims).field("element_type", "byte")),
fieldMapping(b -> b.field("type", "multi_dense_vector").field("dims", dims * 8).field("element_type", "bit"))
);
}
@Override
protected boolean supportsStoredFields() {
return false;
}
@Override
protected boolean supportsIgnoreMalformed() {
return false;
}
@Override
protected void assertSearchable(MappedFieldType fieldType) {
assertThat(fieldType, instanceOf(MultiDenseVectorFieldMapper.MultiDenseVectorFieldType.class));
assertFalse(fieldType.isIndexed());
assertFalse(fieldType.isSearchable());
}
protected void assertExistsQuery(MappedFieldType fieldType, Query query, LuceneDocument fields) {
assertThat(query, instanceOf(FieldExistsQuery.class));
FieldExistsQuery existsQuery = (FieldExistsQuery) query;
assertEquals("field", existsQuery.getField());
assertNoFieldNamesField(fields);
}
// We override this because dense vectors are the only field type that are not aggregatable but
// that do provide fielddata. TODO: resolve this inconsistency!
@Override
public void testAggregatableConsistency() {}
public void testDims() {
{
Exception e = expectThrows(MapperParsingException.class, () -> createMapperService(fieldMapping(b -> {
b.field("type", "multi_dense_vector");
b.field("dims", 0);
})));
assertThat(
e.getMessage(),
equalTo("Failed to parse mapping: " + "The number of dimensions should be in the range [1, 4096] but was [0]")
);
}
// test max limit for non-indexed vectors
{
Exception e = expectThrows(MapperParsingException.class, () -> createMapperService(fieldMapping(b -> {
b.field("type", "multi_dense_vector");
b.field("dims", 5000);
})));
assertThat(
e.getMessage(),
equalTo("Failed to parse mapping: " + "The number of dimensions should be in the range [1, 4096] but was [5000]")
);
}
}
public void testMergeDims() throws IOException {
XContentBuilder mapping = mapping(b -> {
b.startObject("field");
b.field("type", "multi_dense_vector");
b.endObject();
});
MapperService mapperService = createMapperService(mapping);
mapping = mapping(b -> {
b.startObject("field");
b.field("type", "multi_dense_vector").field("dims", dims);
b.endObject();
});
merge(mapperService, mapping);
assertEquals(
XContentHelper.convertToMap(BytesReference.bytes(mapping), false, mapping.contentType()).v2(),
XContentHelper.convertToMap(mapperService.documentMapper().mappingSource().uncompressed(), false, mapping.contentType()).v2()
);
}
public void testLargeDimsBit() throws IOException {
createMapperService(fieldMapping(b -> {
b.field("type", "multi_dense_vector");
b.field("dims", 1024 * Byte.SIZE);
b.field("element_type", ElementType.BIT.toString());
}));
}
public void testNonIndexedVector() throws Exception {
DocumentMapper mapper = createDocumentMapper(fieldMapping(b -> b.field("type", "multi_dense_vector").field("dims", 3)));
float[][] validVectors = { { -12.1f, 100.7f, -4 }, { 42f, .05f, -1f } };
double[] dotProduct = new double[2];
int vecId = 0;
for (float[] vector : validVectors) {
for (float value : vector) {
dotProduct[vecId] += value * value;
}
vecId++;
}
ParsedDocument doc1 = mapper.parse(source(b -> {
b.startArray("field");
for (float[] vector : validVectors) {
b.startArray();
for (float value : vector) {
b.value(value);
}
b.endArray();
}
b.endArray();
}));
List<IndexableField> fields = doc1.rootDoc().getFields("field");
assertEquals(1, fields.size());
assertThat(fields.get(0), instanceOf(BinaryDocValuesField.class));
// assert that after decoding the indexed value is equal to expected
BytesRef vectorBR = fields.get(0).binaryValue();
assertEquals(ElementType.FLOAT.getNumBytes(validVectors[0].length) * validVectors.length, vectorBR.length);
float[][] decodedValues = new float[validVectors.length][];
for (int i = 0; i < validVectors.length; i++) {
decodedValues[i] = new float[validVectors[i].length];
FloatBuffer fb = ByteBuffer.wrap(vectorBR.bytes, i * Float.BYTES * validVectors[i].length, Float.BYTES * validVectors[i].length)
.order(ByteOrder.LITTLE_ENDIAN)
.asFloatBuffer();
fb.get(decodedValues[i]);
}
List<IndexableField> magFields = doc1.rootDoc().getFields("field" + MultiDenseVectorFieldMapper.VECTOR_MAGNITUDES_SUFFIX);
assertEquals(1, magFields.size());
assertThat(magFields.get(0), instanceOf(BinaryDocValuesField.class));
BytesRef magBR = magFields.get(0).binaryValue();
assertEquals(Float.BYTES * validVectors.length, magBR.length);
FloatBuffer fb = ByteBuffer.wrap(magBR.bytes, magBR.offset, magBR.length).order(ByteOrder.LITTLE_ENDIAN).asFloatBuffer();
for (int i = 0; i < validVectors.length; i++) {
assertEquals((float) Math.sqrt(dotProduct[i]), fb.get(), 0.001f);
}
for (int i = 0; i < validVectors.length; i++) {
assertArrayEquals("Decoded dense vector values is not equal to the indexed one.", validVectors[i], decodedValues[i], 0.001f);
}
}
public void testPoorlyIndexedVector() throws Exception {
DocumentMapper mapper = createDocumentMapper(fieldMapping(b -> b.field("type", "multi_dense_vector").field("dims", 3)));
float[][] validVectors = { { -12.1f, 100.7f, -4 }, { 42f, .05f, -1f } };
double[] dotProduct = new double[2];
int vecId = 0;
for (float[] vector : validVectors) {
for (float value : vector) {
dotProduct[vecId] += value * value;
}
vecId++;
}
expectThrows(DocumentParsingException.class, () -> mapper.parse(source(b -> {
b.startArray("field");
b.startArray(); // double nested array should fail
for (float[] vector : validVectors) {
b.startArray();
for (float value : vector) {
b.value(value);
}
b.endArray();
}
b.endArray();
b.endArray();
})));
}
public void testInvalidParameters() {
MapperParsingException e = expectThrows(
MapperParsingException.class,
() -> createDocumentMapper(
fieldMapping(b -> b.field("type", "multi_dense_vector").field("dims", 3).field("element_type", "foo"))
)
);
assertThat(e.getMessage(), containsString("invalid element_type [foo]; available types are "));
e = expectThrows(
MapperParsingException.class,
() -> createDocumentMapper(
fieldMapping(b -> b.field("type", "multi_dense_vector").field("dims", 3).startObject("foo").endObject())
)
);
assertThat(
e.getMessage(),
containsString("Failed to parse mapping: unknown parameter [foo] on mapper [field] of type [multi_dense_vector]")
);
}
public void testDocumentsWithIncorrectDims() throws Exception {
int dims = 3;
XContentBuilder fieldMapping = fieldMapping(b -> {
b.field("type", "multi_dense_vector");
b.field("dims", dims);
});
DocumentMapper mapper = createDocumentMapper(fieldMapping);
// test that error is thrown when a document has number of dims more than defined in the mapping
float[][] invalidVector = new float[4][dims + 1];
DocumentParsingException e = expectThrows(DocumentParsingException.class, () -> mapper.parse(source(b -> {
b.startArray("field");
for (float[] vector : invalidVector) {
b.startArray();
for (float value : vector) {
b.value(value);
}
b.endArray();
}
b.endArray();
})));
assertThat(e.getCause().getMessage(), containsString("has more dimensions than defined in the mapping [3]"));
// test that error is thrown when a document has number of dims less than defined in the mapping
float[][] invalidVector2 = new float[4][dims - 1];
DocumentParsingException e2 = expectThrows(DocumentParsingException.class, () -> mapper.parse(source(b -> {
b.startArray("field");
for (float[] vector : invalidVector2) {
b.startArray();
for (float value : vector) {
b.value(value);
}
b.endArray();
}
b.endArray();
})));
assertThat(e2.getCause().getMessage(), containsString("has a different number of dimensions [2] than defined in the mapping [3]"));
// test that error is thrown when some of the vectors have correct number of dims, but others do not
DocumentParsingException e3 = expectThrows(DocumentParsingException.class, () -> mapper.parse(source(b -> {
b.startArray("field");
for (float[] vector : new float[4][dims]) {
b.startArray();
for (float value : vector) {
b.value(value);
}
b.endArray();
}
for (float[] vector : invalidVector2) {
b.startArray();
for (float value : vector) {
b.value(value);
}
b.endArray();
}
b.endArray();
})));
assertThat(e3.getCause().getMessage(), containsString("has a different number of dimensions [2] than defined in the mapping [3]"));
}
@Override
protected void assertFetchMany(MapperService mapperService, String field, Object value, String format, int count) throws IOException {
assumeFalse("Dense vectors currently don't support multiple values in the same field", false);
}
/**
* Dense vectors don't support doc values or string representation (for doc value parser/fetching).
* We may eventually support that, but until then, we only verify that the parsing and fields fetching matches the provided value object
*/
@Override
protected void assertFetch(MapperService mapperService, String field, Object value, String format) throws IOException {
MappedFieldType ft = mapperService.fieldType(field);
MappedFieldType.FielddataOperation fdt = MappedFieldType.FielddataOperation.SEARCH;
SourceToParse source = source(b -> b.field(ft.name(), value));
SearchExecutionContext searchExecutionContext = mock(SearchExecutionContext.class);
when(searchExecutionContext.isSourceEnabled()).thenReturn(true);
when(searchExecutionContext.sourcePath(field)).thenReturn(Set.of(field));
when(searchExecutionContext.getForField(ft, fdt)).thenAnswer(inv -> fieldDataLookup(mapperService).apply(ft, () -> {
throw new UnsupportedOperationException();
}, fdt));
ValueFetcher nativeFetcher = ft.valueFetcher(searchExecutionContext, format);
ParsedDocument doc = mapperService.documentMapper().parse(source);
withLuceneIndex(mapperService, iw -> iw.addDocuments(doc.docs()), ir -> {
Source s = SourceProvider.fromStoredFields().getSource(ir.leaves().get(0), 0);
nativeFetcher.setNextReader(ir.leaves().get(0));
List<Object> fromNative = nativeFetcher.fetchValues(s, 0, new ArrayList<>());
MultiDenseVectorFieldMapper.MultiDenseVectorFieldType denseVectorFieldType =
(MultiDenseVectorFieldMapper.MultiDenseVectorFieldType) ft;
switch (denseVectorFieldType.getElementType()) {
case BYTE -> assumeFalse("byte element type testing not currently added", false);
case FLOAT -> {
List<float[]> fetchedFloatsList = new ArrayList<>();
for (var f : fromNative) {
float[] fetchedFloats = new float[denseVectorFieldType.getVectorDimensions()];
assert f instanceof List;
List<?> vector = (List<?>) f;
int i = 0;
for (Object v : vector) {
assert v instanceof Number;
fetchedFloats[i++] = ((Number) v).floatValue();
}
fetchedFloatsList.add(fetchedFloats);
}
float[][] fetchedFloats = fetchedFloatsList.toArray(new float[0][]);
assertThat("fetching " + value, fetchedFloats, equalTo(value));
}
}
});
}
@Override
protected void randomFetchTestFieldConfig(XContentBuilder b) throws IOException {
b.field("type", "multi_dense_vector").field("dims", randomIntBetween(2, 4096)).field("element_type", "float");
}
@Override
protected Object generateRandomInputValue(MappedFieldType ft) {
MultiDenseVectorFieldMapper.MultiDenseVectorFieldType vectorFieldType = (MultiDenseVectorFieldMapper.MultiDenseVectorFieldType) ft;
int numVectors = randomIntBetween(1, 16);
return switch (vectorFieldType.getElementType()) {
case BYTE -> {
byte[][] vectors = new byte[numVectors][vectorFieldType.getVectorDimensions()];
for (int i = 0; i < numVectors; i++) {
vectors[i] = randomByteArrayOfLength(vectorFieldType.getVectorDimensions());
}
yield vectors;
}
case FLOAT -> {
float[][] vectors = new float[numVectors][vectorFieldType.getVectorDimensions()];
for (int i = 0; i < numVectors; i++) {
for (int j = 0; j < vectorFieldType.getVectorDimensions(); j++) {
vectors[i][j] = randomFloat();
}
}
yield vectors;
}
case BIT -> {
byte[][] vectors = new byte[numVectors][vectorFieldType.getVectorDimensions() / 8];
for (int i = 0; i < numVectors; i++) {
vectors[i] = randomByteArrayOfLength(vectorFieldType.getVectorDimensions() / 8);
}
yield vectors;
}
};
}
public void testCannotBeUsedInMultifields() {
Exception e = expectThrows(MapperParsingException.class, () -> createMapperService(fieldMapping(b -> {
b.field("type", "keyword");
b.startObject("fields");
b.startObject("vectors");
minimalMapping(b);
b.endObject();
b.endObject();
})));
assertThat(e.getMessage(), containsString("Field [vectors] of type [multi_dense_vector] can't be used in multifields"));
}
@Override
protected IngestScriptSupport ingestScriptSupport() {
throw new AssumptionViolatedException("not supported");
}
@Override
protected SyntheticSourceSupport syntheticSourceSupport(boolean ignoreMalformed) {
return new DenseVectorSyntheticSourceSupport();
}
@Override
protected boolean supportsEmptyInputArray() {
return false;
}
private static class DenseVectorSyntheticSourceSupport implements SyntheticSourceSupport {
private final int dims = between(5, 1000);
private final int numVecs = between(1, 16);
private final ElementType elementType = randomFrom(ElementType.BYTE, ElementType.FLOAT, ElementType.BIT);
@Override
public SyntheticSourceExample example(int maxValues) {
Object value = switch (elementType) {
case BYTE, BIT:
yield randomList(numVecs, numVecs, () -> randomList(dims, dims, ESTestCase::randomByte));
case FLOAT:
yield randomList(numVecs, numVecs, () -> randomList(dims, dims, ESTestCase::randomFloat));
};
return new SyntheticSourceExample(value, value, this::mapping);
}
private void mapping(XContentBuilder b) throws IOException {
b.field("type", "multi_dense_vector");
if (elementType == ElementType.BYTE || elementType == ElementType.BIT || randomBoolean()) {
b.field("element_type", elementType.toString());
}
b.field("dims", elementType == ElementType.BIT ? dims * Byte.SIZE : dims);
}
@Override
public List<SyntheticSourceInvalidExample> invalidExample() {
return List.of();
}
}
@Override
public void testSyntheticSourceKeepArrays() {
// The mapper expects to parse an array of values by default, it's not compatible with array of arrays.
}
}
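
The testNonIndexedVector case above pins down the doc-values layout this mapper is expected to produce: all vectors of a document packed into one binary field as consecutive little-endian floats, plus a parallel field (suffixed with VECTOR_MAGNITUDES_SUFFIX) holding one float per vector, the L2 norm. Below is a minimal JDK-only sketch of that encoding, written to mirror what the test decodes rather than the mapper's actual implementation, so the framing and names are assumptions.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class MultiVectorEncodingSketch {
    // Pack each vector's components as consecutive little-endian floats,
    // matching what the test reads back via ByteBuffer/FloatBuffer.
    static byte[] encodeVectors(float[][] vectors) {
        int dims = vectors[0].length;
        ByteBuffer buf = ByteBuffer.allocate(vectors.length * dims * Float.BYTES).order(ByteOrder.LITTLE_ENDIAN);
        for (float[] vector : vectors) {
            for (float v : vector) {
                buf.putFloat(v);
            }
        }
        return buf.array();
    }

    // One float per vector: the L2 norm, i.e. the square root of the self dot-product
    // that the test accumulates before parsing the document.
    static byte[] encodeMagnitudes(float[][] vectors) {
        ByteBuffer buf = ByteBuffer.allocate(vectors.length * Float.BYTES).order(ByteOrder.LITTLE_ENDIAN);
        for (float[] vector : vectors) {
            double dot = 0;
            for (float v : vector) {
                dot += v * v;
            }
            buf.putFloat((float) Math.sqrt(dot));
        }
        return buf.array();
    }

    public static void main(String[] args) {
        float[][] vectors = { { -12.1f, 100.7f, -4f }, { 42f, .05f, -1f } };
        System.out.println("vector bytes: " + encodeVectors(vectors).length);       // 2 vectors * 3 dims * 4 bytes = 24
        System.out.println("magnitude bytes: " + encodeMagnitudes(vectors).length); // 2 vectors * 4 bytes = 8
    }
}

Decoding is the reverse: wrap the binary payload in a little-endian FloatBuffer and read dims floats per vector, exactly as the assertions above do.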

View file

@@ -0,0 +1,105 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the "Elastic License
* 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
* Public License v 1"; you may not use this file except in compliance with, at
* your election, the "Elastic License 2.0", the "GNU Affero General Public
* License v3.0 only", or the "Server Side Public License, v 1".
*/
package org.elasticsearch.index.mapper.vectors;
import org.elasticsearch.index.IndexVersion;
import org.elasticsearch.index.fielddata.FieldDataContext;
import org.elasticsearch.index.mapper.FieldTypeTestCase;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.vectors.MultiDenseVectorFieldMapper.MultiDenseVectorFieldType;
import org.junit.BeforeClass;
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.Set;
import static org.elasticsearch.index.mapper.vectors.DenseVectorFieldMapper.BBQ_MIN_DIMS;
public class MultiDenseVectorFieldTypeTests extends FieldTypeTestCase {
@BeforeClass
public static void setup() {
assumeTrue("Requires multi-dense vector support", MultiDenseVectorFieldMapper.FEATURE_FLAG.isEnabled());
}
private MultiDenseVectorFieldType createFloatFieldType() {
return new MultiDenseVectorFieldType(
"f",
DenseVectorFieldMapper.ElementType.FLOAT,
BBQ_MIN_DIMS,
IndexVersion.current(),
Collections.emptyMap()
);
}
private MultiDenseVectorFieldType createByteFieldType() {
return new MultiDenseVectorFieldType(
"f",
DenseVectorFieldMapper.ElementType.BYTE,
5,
IndexVersion.current(),
Collections.emptyMap()
);
}
public void testHasDocValues() {
MultiDenseVectorFieldType fft = createFloatFieldType();
assertTrue(fft.hasDocValues());
MultiDenseVectorFieldType bft = createByteFieldType();
assertTrue(bft.hasDocValues());
}
public void testIsIndexed() {
MultiDenseVectorFieldType fft = createFloatFieldType();
assertFalse(fft.isIndexed());
MultiDenseVectorFieldType bft = createByteFieldType();
assertFalse(bft.isIndexed());
}
public void testIsSearchable() {
MultiDenseVectorFieldType fft = createFloatFieldType();
assertFalse(fft.isSearchable());
MultiDenseVectorFieldType bft = createByteFieldType();
assertFalse(bft.isSearchable());
}
public void testIsAggregatable() {
MultiDenseVectorFieldType fft = createFloatFieldType();
assertFalse(fft.isAggregatable());
MultiDenseVectorFieldType bft = createByteFieldType();
assertFalse(bft.isAggregatable());
}
public void testFielddataBuilder() {
MultiDenseVectorFieldType fft = createFloatFieldType();
FieldDataContext fdc = new FieldDataContext("test", null, () -> null, Set::of, MappedFieldType.FielddataOperation.SCRIPT);
assertNotNull(fft.fielddataBuilder(fdc));
MultiDenseVectorFieldType bft = createByteFieldType();
FieldDataContext bdc = new FieldDataContext("test", null, () -> null, Set::of, MappedFieldType.FielddataOperation.SCRIPT);
assertNotNull(bft.fielddataBuilder(bdc));
}
public void testDocValueFormat() {
MultiDenseVectorFieldType fft = createFloatFieldType();
expectThrows(IllegalArgumentException.class, () -> fft.docValueFormat(null, null));
MultiDenseVectorFieldType bft = createByteFieldType();
expectThrows(IllegalArgumentException.class, () -> bft.docValueFormat(null, null));
}
public void testFetchSourceValue() throws IOException {
MultiDenseVectorFieldType fft = createFloatFieldType();
List<List<Double>> vector = List.of(List.of(0.0, 1.0, 2.0, 3.0, 4.0, 6.0));
assertEquals(vector, fetchSourceValue(fft, vector));
MultiDenseVectorFieldType bft = createByteFieldType();
assertEquals(vector, fetchSourceValue(bft, vector));
}
}

View file

@@ -38,6 +38,7 @@ import org.elasticsearch.test.ClusterServiceUtils; import org.elasticsearch.test.ClusterServiceUtils;
import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.threadpool.TestThreadPool; import org.elasticsearch.threadpool.TestThreadPool;
import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xcontent.XContentParseException;
import org.elasticsearch.xcontent.XContentParser; import org.elasticsearch.xcontent.XContentParser;
import org.junit.After; import org.junit.After;
import org.junit.Before; import org.junit.Before;
@@ -55,16 +56,22 @@ import java.time.ZoneOffset; import java.time.ZoneOffset;
import java.util.List; import java.util.List;
import java.util.Map; import java.util.Map;
import java.util.Set; import java.util.Set;
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CountDownLatch; import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer; import java.util.function.Consumer;
import static java.nio.file.StandardCopyOption.ATOMIC_MOVE; import static java.nio.file.StandardCopyOption.ATOMIC_MOVE;
import static java.nio.file.StandardCopyOption.REPLACE_EXISTING;
import static org.elasticsearch.node.Node.NODE_NAME_SETTING; import static org.elasticsearch.node.Node.NODE_NAME_SETTING;
import static org.hamcrest.Matchers.anEmptyMap; import static org.hamcrest.Matchers.anEmptyMap;
import static org.hamcrest.Matchers.hasEntry; import static org.hamcrest.Matchers.hasEntry;
import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.argThat;
import static org.mockito.ArgumentMatchers.eq; import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.doAnswer; import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock; import static org.mockito.Mockito.mock;
@@ -262,6 +269,68 @@ public class FileSettingsServiceTests extends ESTestCase { public class FileSettingsServiceTests extends ESTestCase {
verify(controller, times(1)).process(any(), any(XContentParser.class), eq(ReservedStateVersionCheck.HIGHER_VERSION_ONLY), any()); verify(controller, times(1)).process(any(), any(XContentParser.class), eq(ReservedStateVersionCheck.HIGHER_VERSION_ONLY), any());
} }
@SuppressWarnings("unchecked")
public void testInvalidJSON() throws Exception {
doAnswer((Answer<Void>) invocation -> {
invocation.getArgument(1, XContentParser.class).map(); // Throw if JSON is invalid
((Consumer<Exception>) invocation.getArgument(3)).accept(null);
return null;
}).when(controller).process(any(), any(XContentParser.class), any(), any());
CyclicBarrier fileChangeBarrier = new CyclicBarrier(2);
fileSettingsService.addFileChangedListener(() -> awaitOrBust(fileChangeBarrier));
Files.createDirectories(fileSettingsService.watchedFileDir());
// contents of the JSON don't matter, we just need a file to exist
writeTestFile(fileSettingsService.watchedFile(), "{}");
doAnswer((Answer<?>) invocation -> {
boolean returnedNormally = false;
try {
var result = invocation.callRealMethod();
returnedNormally = true;
return result;
} catch (XContentParseException e) {
// We're expecting a parse error. processFileChanges specifies that this is supposed to throw ExecutionException.
throw new ExecutionException(e);
} catch (Throwable e) {
throw new AssertionError("Unexpected exception", e);
} finally {
if (returnedNormally == false) {
// Because of the exception, listeners aren't notified, so we need to activate the barrier ourselves
awaitOrBust(fileChangeBarrier);
}
}
}).when(fileSettingsService).processFileChanges();
// Establish the initial valid JSON
fileSettingsService.start();
fileSettingsService.clusterChanged(new ClusterChangedEvent("test", clusterService.state(), ClusterState.EMPTY_STATE));
awaitOrBust(fileChangeBarrier);
// Now break the JSON
writeTestFile(fileSettingsService.watchedFile(), "test_invalid_JSON");
awaitOrBust(fileChangeBarrier);
verify(fileSettingsService, times(1)).processFileOnServiceStart(); // The initial state
verify(fileSettingsService, times(1)).processFileChanges(); // The changed state
verify(fileSettingsService, times(1)).onProcessFileChangesException(
argThat(e -> e instanceof ExecutionException && e.getCause() instanceof XContentParseException)
);
// Note: the name "processFileOnServiceStart" is a bit misleading because it is not
// referring to fileSettingsService.start(). Rather, it is referring to the initialization
// of the watcher thread itself, which occurs asynchronously when clusterChanged is first called.
}
private static void awaitOrBust(CyclicBarrier barrier) {
try {
barrier.await(20, TimeUnit.SECONDS);
} catch (InterruptedException | BrokenBarrierException | TimeoutException e) {
throw new AssertionError("Unexpected exception waiting for barrier", e);
}
}
@SuppressWarnings("unchecked") @SuppressWarnings("unchecked")
public void testStopWorksInMiddleOfProcessing() throws Exception { public void testStopWorksInMiddleOfProcessing() throws Exception {
CountDownLatch processFileLatch = new CountDownLatch(1); CountDownLatch processFileLatch = new CountDownLatch(1);
@@ -356,10 +425,10 @@ public class FileSettingsServiceTests extends ESTestCase { public class FileSettingsServiceTests extends ESTestCase {
Path tempFilePath = createTempFile(); Path tempFilePath = createTempFile();
Files.writeString(tempFilePath, contents); Files.writeString(tempFilePath, contents);
try { try {
Files.move(tempFilePath, path, ATOMIC_MOVE); Files.move(tempFilePath, path, REPLACE_EXISTING, ATOMIC_MOVE);
} catch (AtomicMoveNotSupportedException e) { } catch (AtomicMoveNotSupportedException e) {
logger.info("Atomic move not available. Falling back on non-atomic move to write [{}]", path.toAbsolutePath()); logger.info("Atomic move not available. Falling back on non-atomic move to write [{}]", path.toAbsolutePath());
Files.move(tempFilePath, path); Files.move(tempFilePath, path, REPLACE_EXISTING);
} }
} }
@@ -374,4 +443,5 @@ public class FileSettingsServiceTests extends ESTestCase { public class FileSettingsServiceTests extends ESTestCase {
fail(e, "longAwait: interrupted waiting for CountDownLatch to reach zero"); fail(e, "longAwait: interrupted waiting for CountDownLatch to reach zero");
} }
} }
} }
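
The writeTestFile change above adds REPLACE_EXISTING so repeated writes to the same watched file (the valid JSON first, then the broken JSON in testInvalidJSON) replace it reliably across platforms. The pattern itself is plain java.nio.file; here is a self-contained sketch of the same write-to-temp-then-move approach, using only JDK APIs (the class and method names are illustrative, not the test's).

import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;

import static java.nio.file.StandardCopyOption.ATOMIC_MOVE;
import static java.nio.file.StandardCopyOption.REPLACE_EXISTING;

public class AtomicWriteSketch {
    // Write the contents to a temp file first, then move it over the target so a reader
    // (such as a file watcher) never observes a half-written file. Assumes the target
    // has a parent directory that already exists.
    static void writeFile(Path target, String contents) throws IOException {
        Path tmp = Files.createTempFile(target.getParent(), "settings", ".tmp");
        Files.writeString(tmp, contents);
        try {
            Files.move(tmp, target, REPLACE_EXISTING, ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            // Some filesystems cannot move atomically; a plain replacing move is the best we can do.
            Files.move(tmp, target, REPLACE_EXISTING);
        }
    }
}

The atomic move keeps the watcher from ever seeing a partially written settings file; the non-atomic fallback only matters on filesystems that cannot guarantee it.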

View file

@@ -10,7 +10,6 @@
package org.elasticsearch.rest.action.document; package org.elasticsearch.rest.action.document;
import org.apache.lucene.util.SetOnce; import org.apache.lucene.util.SetOnce;
import org.elasticsearch.Version;
import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse; import org.elasticsearch.action.index.IndexResponse;
@@ -55,10 +54,10 @@ public final class RestIndexActionTests extends RestActionTestCase { public final class RestIndexActionTests extends RestActionTestCase {
} }
public void testAutoIdDefaultsToOptypeCreate() { public void testAutoIdDefaultsToOptypeCreate() {
checkAutoIdOpType(Version.CURRENT, DocWriteRequest.OpType.CREATE); checkAutoIdOpType(DocWriteRequest.OpType.CREATE);
} }
private void checkAutoIdOpType(Version minClusterVersion, DocWriteRequest.OpType expectedOpType) { private void checkAutoIdOpType(DocWriteRequest.OpType expectedOpType) {
SetOnce<Boolean> executeCalled = new SetOnce<>(); SetOnce<Boolean> executeCalled = new SetOnce<>();
verifyingClient.setExecuteVerifier((actionType, request) -> { verifyingClient.setExecuteVerifier((actionType, request) -> {
assertThat(request, instanceOf(IndexRequest.class)); assertThat(request, instanceOf(IndexRequest.class));
@@ -71,9 +70,7 @@ public final class RestIndexActionTests extends RestActionTestCase { public final class RestIndexActionTests extends RestActionTestCase {
.withContent(new BytesArray("{}"), XContentType.JSON) .withContent(new BytesArray("{}"), XContentType.JSON)
.build(); .build();
clusterStateSupplier.set( clusterStateSupplier.set(
ClusterState.builder(ClusterName.DEFAULT) ClusterState.builder(ClusterName.DEFAULT).nodes(DiscoveryNodes.builder().add(DiscoveryNodeUtils.create("test")).build()).build()
.nodes(DiscoveryNodes.builder().add(DiscoveryNodeUtils.builder("test").version(minClusterVersion).build()).build())
.build()
); );
dispatchRequest(autoIdRequest); dispatchRequest(autoIdRequest);
assertThat(executeCalled.get(), equalTo(true)); assertThat(executeCalled.get(), equalTo(true));

View file

@@ -194,7 +194,7 @@ public abstract class ESAllocationTestCase extends ESTestCase { public abstract class ESAllocationTestCase extends ESTestCase {
protected static Set<DiscoveryNodeRole> MASTER_DATA_ROLES = Set.of(DiscoveryNodeRole.MASTER_ROLE, DiscoveryNodeRole.DATA_ROLE); protected static Set<DiscoveryNodeRole> MASTER_DATA_ROLES = Set.of(DiscoveryNodeRole.MASTER_ROLE, DiscoveryNodeRole.DATA_ROLE);
protected static DiscoveryNode newNode(String nodeId) { protected static DiscoveryNode newNode(String nodeId) {
return newNode(nodeId, (Version) null); return DiscoveryNodeUtils.builder(nodeId).roles(MASTER_DATA_ROLES).build();
} }
protected static DiscoveryNode newNode(String nodeName, String nodeId, Map<String, String> attributes) { protected static DiscoveryNode newNode(String nodeName, String nodeId, Map<String, String> attributes) {

View file

@@ -13,6 +13,7 @@ import org.elasticsearch.Version; import org.elasticsearch.Version;
import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.UUIDs;
import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress; import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.env.BuildVersion;
import org.elasticsearch.index.IndexVersion; import org.elasticsearch.index.IndexVersion;
import org.elasticsearch.node.Node; import org.elasticsearch.node.Node;
@@ -36,10 +37,15 @@ public class DiscoveryNodeUtils { public class DiscoveryNodeUtils {
return builder(id).address(address).build(); return builder(id).address(address).build();
} }
@Deprecated
public static DiscoveryNode create(String id, TransportAddress address, Version version) { public static DiscoveryNode create(String id, TransportAddress address, Version version) {
return builder(id).address(address).version(version).build(); return builder(id).address(address).version(version).build();
} }
public static DiscoveryNode create(String id, TransportAddress address, VersionInformation version) {
return builder(id).address(address).version(version).build();
}
public static DiscoveryNode create(String id, TransportAddress address, Map<String, String> attributes, Set<DiscoveryNodeRole> roles) { public static DiscoveryNode create(String id, TransportAddress address, Map<String, String> attributes, Set<DiscoveryNodeRole> roles) {
return builder(id).address(address).attributes(attributes).roles(roles).build(); return builder(id).address(address).attributes(attributes).roles(roles).build();
} }
@@ -67,6 +73,7 @@ public class DiscoveryNodeUtils { public class DiscoveryNodeUtils {
private TransportAddress address; private TransportAddress address;
private Map<String, String> attributes = Map.of(); private Map<String, String> attributes = Map.of();
private Set<DiscoveryNodeRole> roles = DiscoveryNodeRole.roles(); private Set<DiscoveryNodeRole> roles = DiscoveryNodeRole.roles();
private BuildVersion buildVersion;
private Version version; private Version version;
private IndexVersion minIndexVersion; private IndexVersion minIndexVersion;
private IndexVersion maxIndexVersion; private IndexVersion maxIndexVersion;
@@ -107,19 +114,33 @@ public class DiscoveryNodeUtils { public class DiscoveryNodeUtils {
return this; return this;
} }
@Deprecated
public Builder version(Version version) { public Builder version(Version version) {
this.version = version; this.version = version;
return this; return this;
} }
@Deprecated
public Builder version(Version version, IndexVersion minIndexVersion, IndexVersion maxIndexVersion) { public Builder version(Version version, IndexVersion minIndexVersion, IndexVersion maxIndexVersion) {
this.buildVersion = BuildVersion.fromVersionId(version.id());
this.version = version; this.version = version;
this.minIndexVersion = minIndexVersion; this.minIndexVersion = minIndexVersion;
this.maxIndexVersion = maxIndexVersion; this.maxIndexVersion = maxIndexVersion;
return this; return this;
} }
public Builder version(BuildVersion version, IndexVersion minIndexVersion, IndexVersion maxIndexVersion) {
// see comment in VersionInformation
assert version.equals(BuildVersion.current());
this.buildVersion = version;
this.version = Version.CURRENT;
this.minIndexVersion = minIndexVersion;
this.maxIndexVersion = maxIndexVersion;
return this;
}
public Builder version(VersionInformation versions) { public Builder version(VersionInformation versions) {
this.buildVersion = versions.buildVersion();
this.version = versions.nodeVersion(); this.version = versions.nodeVersion();
this.minIndexVersion = versions.minIndexVersion(); this.minIndexVersion = versions.minIndexVersion();
this.maxIndexVersion = versions.maxIndexVersion(); this.maxIndexVersion = versions.maxIndexVersion();
@@ -152,7 +173,7 @@ public class DiscoveryNodeUtils { public class DiscoveryNodeUtils {
if (minIndexVersion == null || maxIndexVersion == null) { if (minIndexVersion == null || maxIndexVersion == null) {
versionInfo = VersionInformation.inferVersions(version); versionInfo = VersionInformation.inferVersions(version);
} else { } else {
versionInfo = new VersionInformation(version, minIndexVersion, maxIndexVersion); versionInfo = new VersionInformation(buildVersion, version, minIndexVersion, maxIndexVersion);
} }
return new DiscoveryNode(name, id, ephemeralId, hostName, hostAddress, address, attributes, roles, versionInfo, externalId); return new DiscoveryNode(name, id, ephemeralId, hostName, hostAddress, address, attributes, roles, versionInfo, externalId);
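
The builder above is mid-migration: the Version-based setters stay behind @Deprecated while new overloads accept BuildVersion/VersionInformation, and build() still infers the index-version range when only the legacy value was supplied. A generic sketch of that deprecate-and-delegate builder pattern follows, using hypothetical types (ReleaseVersion, NodeVersionInfo) rather than the Elasticsearch classes, purely to illustrate the shape of the migration.

public class DeprecatedBuilderSketch {
    record ReleaseVersion(int id) {}

    record NodeVersionInfo(ReleaseVersion release, int minIndexVersion, int maxIndexVersion) {
        // Derive the richer object from the legacy value when nothing better was supplied.
        static NodeVersionInfo infer(ReleaseVersion release) {
            return new NodeVersionInfo(release, release.id(), release.id());
        }
    }

    static class Builder {
        private ReleaseVersion legacy = new ReleaseVersion(0);
        private NodeVersionInfo versions;

        @Deprecated
        Builder version(ReleaseVersion release) {
            // Keep old callers compiling while the codebase moves to the richer setter.
            this.legacy = release;
            return this;
        }

        Builder version(NodeVersionInfo versions) {
            this.versions = versions;
            return this;
        }

        NodeVersionInfo build() {
            // Prefer the richer information; otherwise infer it from the legacy value.
            return versions != null ? versions : NodeVersionInfo.infer(legacy);
        }
    }

    public static void main(String[] args) {
        NodeVersionInfo inferred = new Builder().version(new ReleaseVersion(8_16_00_99)).build();
        NodeVersionInfo explicit = new Builder().version(new NodeVersionInfo(new ReleaseVersion(8_16_00_99), 7, 9)).build();
        System.out.println(inferred + " / " + explicit);
    }
}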

View file

@@ -39,7 +39,7 @@ public final class TextFieldFamilySyntheticSourceTestSetup { public final class TextFieldFamilySyntheticSourceTestSetup {
TextFieldMapper.TextFieldType text = (TextFieldMapper.TextFieldType) ft; TextFieldMapper.TextFieldType text = (TextFieldMapper.TextFieldType) ft;
boolean supportsColumnAtATimeReader = text.syntheticSourceDelegate() != null boolean supportsColumnAtATimeReader = text.syntheticSourceDelegate() != null
&& text.syntheticSourceDelegate().hasDocValues() && text.syntheticSourceDelegate().hasDocValues()
&& text.canUseSyntheticSourceDelegateForQuerying(); && text.canUseSyntheticSourceDelegateForLoading();
return new MapperTestCase.BlockReaderSupport(supportsColumnAtATimeReader, mapper, loaderFieldName); return new MapperTestCase.BlockReaderSupport(supportsColumnAtATimeReader, mapper, loaderFieldName);
} }
MappedFieldType parent = mapper.fieldType(parentName); MappedFieldType parent = mapper.fieldType(parentName);

View file

@@ -111,6 +111,7 @@ import org.elasticsearch.index.mapper.RangeType; import org.elasticsearch.index.mapper.RangeType;
import org.elasticsearch.index.mapper.TextFieldMapper; import org.elasticsearch.index.mapper.TextFieldMapper;
import org.elasticsearch.index.mapper.TimeSeriesIdFieldMapper; import org.elasticsearch.index.mapper.TimeSeriesIdFieldMapper;
import org.elasticsearch.index.mapper.vectors.DenseVectorFieldMapper; import org.elasticsearch.index.mapper.vectors.DenseVectorFieldMapper;
import org.elasticsearch.index.mapper.vectors.MultiDenseVectorFieldMapper;
import org.elasticsearch.index.mapper.vectors.SparseVectorFieldMapper; import org.elasticsearch.index.mapper.vectors.SparseVectorFieldMapper;
import org.elasticsearch.index.query.SearchExecutionContext; import org.elasticsearch.index.query.SearchExecutionContext;
import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShard;
@@ -202,6 +203,7 @@ public abstract class AggregatorTestCase extends ESTestCase { public abstract class AggregatorTestCase extends ESTestCase {
private static final List<String> TYPE_TEST_BLACKLIST = List.of( private static final List<String> TYPE_TEST_BLACKLIST = List.of(
ObjectMapper.CONTENT_TYPE, // Cannot aggregate objects ObjectMapper.CONTENT_TYPE, // Cannot aggregate objects
DenseVectorFieldMapper.CONTENT_TYPE, // Cannot aggregate dense vectors DenseVectorFieldMapper.CONTENT_TYPE, // Cannot aggregate dense vectors
MultiDenseVectorFieldMapper.CONTENT_TYPE, // Cannot aggregate dense vectors
SparseVectorFieldMapper.CONTENT_TYPE, // Sparse vectors are no longer supported SparseVectorFieldMapper.CONTENT_TYPE, // Sparse vectors are no longer supported
NestedObjectMapper.CONTENT_TYPE, // TODO support for nested NestedObjectMapper.CONTENT_TYPE, // TODO support for nested

View file

@@ -88,5 +88,7 @@ tasks.named("yamlRestCompatTestTransform").configure({ task -> tasks.named("yamlRestCompatTestTransform").configure({ task ->
task.skipTest("esql/60_usage/Basic ESQL usage output (telemetry) non-snapshot version", "The number of functions is constantly increasing") task.skipTest("esql/60_usage/Basic ESQL usage output (telemetry) non-snapshot version", "The number of functions is constantly increasing")
task.skipTest("esql/80_text/reverse text", "The output type changed from TEXT to KEYWORD.") task.skipTest("esql/80_text/reverse text", "The output type changed from TEXT to KEYWORD.")
task.skipTest("esql/80_text/values function", "The output type changed from TEXT to KEYWORD.") task.skipTest("esql/80_text/values function", "The output type changed from TEXT to KEYWORD.")
task.skipTest("privileges/11_builtin/Test get builtin privileges" ,"unnecessary to test compatibility")
task.skipTest("enrich/10_basic/Test using the deprecated elasticsearch_version field results in a warning", "The deprecation message was changed")
}) })

View file

@@ -216,8 +216,7 @@ public class CcrRepository extends AbstractLifecycleComponent implements Reposit
if (IndexVersion.current().equals(maxIndexVersion)) { if (IndexVersion.current().equals(maxIndexVersion)) {
for (var node : response.nodes()) { for (var node : response.nodes()) {
if (node.canContainData() && node.getMaxIndexVersion().equals(maxIndexVersion)) { if (node.canContainData() && node.getMaxIndexVersion().equals(maxIndexVersion)) {
// TODO: Revisit when looking into removing release version from DiscoveryNode BuildVersion remoteVersion = node.getBuildVersion();
BuildVersion remoteVersion = BuildVersion.fromVersionId(node.getVersion().id);
if (remoteVersion.isFutureVersion()) { if (remoteVersion.isFutureVersion()) {
throw new SnapshotException( throw new SnapshotException(
snapshot, snapshot,

View file

@@ -36,7 +36,7 @@ import java.util.Objects; import java.util.Objects;
public final class EnrichPolicy implements Writeable, ToXContentFragment { public final class EnrichPolicy implements Writeable, ToXContentFragment {
private static final String ELASTICEARCH_VERSION_DEPRECATION_MESSAGE = private static final String ELASTICEARCH_VERSION_DEPRECATION_MESSAGE =
"the [elasticsearch_version] field of an enrich policy has no effect and will be removed in Elasticsearch 9.0"; "the [elasticsearch_version] field of an enrich policy has no effect and will be removed in a future version of Elasticsearch";
private static final DeprecationLogger deprecationLogger = DeprecationLogger.getLogger(EnrichPolicy.class); private static final DeprecationLogger deprecationLogger = DeprecationLogger.getLogger(EnrichPolicy.class);

View file

@@ -115,7 +115,7 @@ public final class GetUserPrivilegesResponse extends ActionResponse { public final class GetUserPrivilegesResponse extends ActionResponse {
} }
public boolean hasRemoteClusterPrivileges() { public boolean hasRemoteClusterPrivileges() {
return remoteClusterPermissions.hasPrivileges(); return remoteClusterPermissions.hasAnyPrivileges();
} }
@Override @Override

View file

@@ -36,6 +36,7 @@ import org.elasticsearch.xpack.core.security.authc.file.FileRealmSettings; import org.elasticsearch.xpack.core.security.authc.file.FileRealmSettings;
import org.elasticsearch.xpack.core.security.authc.service.ServiceAccountSettings; import org.elasticsearch.xpack.core.security.authc.service.ServiceAccountSettings;
import org.elasticsearch.xpack.core.security.authc.support.AuthenticationContextSerializer; import org.elasticsearch.xpack.core.security.authc.support.AuthenticationContextSerializer;
import org.elasticsearch.xpack.core.security.authz.RoleDescriptor; import org.elasticsearch.xpack.core.security.authz.RoleDescriptor;
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions;
import org.elasticsearch.xpack.core.security.user.AnonymousUser; import org.elasticsearch.xpack.core.security.user.AnonymousUser;
import org.elasticsearch.xpack.core.security.user.InternalUser; import org.elasticsearch.xpack.core.security.user.InternalUser;
import org.elasticsearch.xpack.core.security.user.InternalUsers; import org.elasticsearch.xpack.core.security.user.InternalUsers;
@@ -76,6 +77,7 @@ import static org.elasticsearch.xpack.core.security.authc.AuthenticationField.CR
import static org.elasticsearch.xpack.core.security.authc.AuthenticationField.FALLBACK_REALM_NAME; import static org.elasticsearch.xpack.core.security.authc.AuthenticationField.FALLBACK_REALM_NAME;
import static org.elasticsearch.xpack.core.security.authc.AuthenticationField.FALLBACK_REALM_TYPE; import static org.elasticsearch.xpack.core.security.authc.AuthenticationField.FALLBACK_REALM_TYPE;
import static org.elasticsearch.xpack.core.security.authc.RealmDomain.REALM_DOMAIN_PARSER; import static org.elasticsearch.xpack.core.security.authc.RealmDomain.REALM_DOMAIN_PARSER;
import static org.elasticsearch.xpack.core.security.authz.RoleDescriptor.Fields.REMOTE_CLUSTER;
import static org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions.ROLE_REMOTE_CLUSTER_PRIVS; import static org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions.ROLE_REMOTE_CLUSTER_PRIVS;
/** /**
@@ -233,8 +235,8 @@ public final class Authentication implements ToXContentObject { public final class Authentication implements ToXContentObject {
+ "]" + "]"
); );
} }
final Map<String, Object> newMetadata = maybeRewriteMetadata(olderVersion, this); final Map<String, Object> newMetadata = maybeRewriteMetadata(olderVersion, this);
final Authentication newAuthentication; final Authentication newAuthentication;
if (isRunAs()) { if (isRunAs()) {
// The lookup user for run-as currently doesn't have authentication metadata associated with them because // The lookup user for run-as currently doesn't have authentication metadata associated with them because
@@ -272,12 +274,23 @@ public final class Authentication implements ToXContentObject { public final class Authentication implements ToXContentObject {
} }
private static Map<String, Object> maybeRewriteMetadata(TransportVersion olderVersion, Authentication authentication) { private static Map<String, Object> maybeRewriteMetadata(TransportVersion olderVersion, Authentication authentication) {
if (authentication.isAuthenticatedAsApiKey()) { try {
return maybeRewriteMetadataForApiKeyRoleDescriptors(olderVersion, authentication); if (authentication.isAuthenticatedAsApiKey()) {
} else if (authentication.isCrossClusterAccess()) { return maybeRewriteMetadataForApiKeyRoleDescriptors(olderVersion, authentication);
return maybeRewriteMetadataForCrossClusterAccessAuthentication(olderVersion, authentication); } else if (authentication.isCrossClusterAccess()) {
} else { return maybeRewriteMetadataForCrossClusterAccessAuthentication(olderVersion, authentication);
return authentication.getAuthenticatingSubject().getMetadata(); } else {
return authentication.getAuthenticatingSubject().getMetadata();
}
} catch (Exception e) {
// CCS workflows may swallow the exception message making this difficult to troubleshoot, so we explicitly log and re-throw
// here. It may result in duplicate logs, so we only log the message at warn level.
if (logger.isDebugEnabled()) {
logger.debug("Un-expected exception thrown while rewriting metadata. This is likely a bug.", e);
} else {
logger.warn("Un-expected exception thrown while rewriting metadata. This is likely a bug [" + e.getMessage() + "]");
}
throw e;
} }
} }
@@ -1323,6 +1336,7 @@ public final class Authentication implements ToXContentObject { public final class Authentication implements ToXContentObject {
if (authentication.getEffectiveSubject().getTransportVersion().onOrAfter(ROLE_REMOTE_CLUSTER_PRIVS) if (authentication.getEffectiveSubject().getTransportVersion().onOrAfter(ROLE_REMOTE_CLUSTER_PRIVS)
&& streamVersion.before(ROLE_REMOTE_CLUSTER_PRIVS)) { && streamVersion.before(ROLE_REMOTE_CLUSTER_PRIVS)) {
// the authentication understands the remote_cluster field but the stream does not
metadata = new HashMap<>(metadata); metadata = new HashMap<>(metadata);
metadata.put( metadata.put(
AuthenticationField.API_KEY_ROLE_DESCRIPTORS_KEY, AuthenticationField.API_KEY_ROLE_DESCRIPTORS_KEY,
@@ -1336,7 +1350,26 @@ public final class Authentication implements ToXContentObject { public final class Authentication implements ToXContentObject {
(BytesReference) metadata.get(AuthenticationField.API_KEY_LIMITED_ROLE_DESCRIPTORS_KEY) (BytesReference) metadata.get(AuthenticationField.API_KEY_LIMITED_ROLE_DESCRIPTORS_KEY)
) )
); );
} } else if (authentication.getEffectiveSubject().getTransportVersion().onOrAfter(ROLE_REMOTE_CLUSTER_PRIVS)
&& streamVersion.onOrAfter(ROLE_REMOTE_CLUSTER_PRIVS)) {
// both the authentication object and the stream understand the remote_cluster field
// check each individual permission and remove as needed
metadata = new HashMap<>(metadata);
metadata.put(
AuthenticationField.API_KEY_ROLE_DESCRIPTORS_KEY,
maybeRemoveRemoteClusterPrivilegesFromRoleDescriptors(
(BytesReference) metadata.get(AuthenticationField.API_KEY_ROLE_DESCRIPTORS_KEY),
streamVersion
)
);
metadata.put(
AuthenticationField.API_KEY_LIMITED_ROLE_DESCRIPTORS_KEY,
maybeRemoveRemoteClusterPrivilegesFromRoleDescriptors(
(BytesReference) metadata.get(AuthenticationField.API_KEY_LIMITED_ROLE_DESCRIPTORS_KEY),
streamVersion
)
);
}
if (authentication.getEffectiveSubject().getTransportVersion().onOrAfter(VERSION_API_KEY_ROLES_AS_BYTES) if (authentication.getEffectiveSubject().getTransportVersion().onOrAfter(VERSION_API_KEY_ROLES_AS_BYTES)
&& streamVersion.before(VERSION_API_KEY_ROLES_AS_BYTES)) { && streamVersion.before(VERSION_API_KEY_ROLES_AS_BYTES)) {
@@ -1417,7 +1450,7 @@ public final class Authentication implements ToXContentObject { public final class Authentication implements ToXContentObject {
} }
static BytesReference maybeRemoveRemoteClusterFromRoleDescriptors(BytesReference roleDescriptorsBytes) { static BytesReference maybeRemoveRemoteClusterFromRoleDescriptors(BytesReference roleDescriptorsBytes) {
return maybeRemoveTopLevelFromRoleDescriptors(roleDescriptorsBytes, RoleDescriptor.Fields.REMOTE_CLUSTER.getPreferredName()); return maybeRemoveTopLevelFromRoleDescriptors(roleDescriptorsBytes, REMOTE_CLUSTER.getPreferredName());
} }
static BytesReference maybeRemoveRemoteIndicesFromRoleDescriptors(BytesReference roleDescriptorsBytes) { static BytesReference maybeRemoveRemoteIndicesFromRoleDescriptors(BytesReference roleDescriptorsBytes) {
@@ -1450,6 +1483,66 @@ public final class Authentication implements ToXContentObject { public final class Authentication implements ToXContentObject {
} }
} }
/**
* Before we send the role descriptors to the remote cluster, we need to remove the remote cluster privileges that the other cluster
* will not understand. If all privileges are removed, then the entire "remote_cluster" is removed to avoid sending empty privileges.
* @param roleDescriptorsBytes The role descriptors to be sent to the remote cluster, represented as bytes.
* @return The role descriptors with the privileges that unsupported by version removed, represented as bytes.
*/
@SuppressWarnings("unchecked")
static BytesReference maybeRemoveRemoteClusterPrivilegesFromRoleDescriptors(
BytesReference roleDescriptorsBytes,
TransportVersion outboundVersion
) {
if (roleDescriptorsBytes == null || roleDescriptorsBytes.length() == 0) {
return roleDescriptorsBytes;
}
final Map<String, Object> roleDescriptorsMap = convertRoleDescriptorsBytesToMap(roleDescriptorsBytes);
final Map<String, Object> roleDescriptorsMapMutated = new HashMap<>(roleDescriptorsMap);
final AtomicBoolean modified = new AtomicBoolean(false);
roleDescriptorsMap.forEach((key, value) -> {
if (value instanceof Map) {
Map<String, Object> roleDescriptor = (Map<String, Object>) value;
roleDescriptor.forEach((innerKey, innerValue) -> {
// example: remote_cluster=[{privileges=[monitor_enrich, monitor_stats]
if (REMOTE_CLUSTER.getPreferredName().equals(innerKey)) {
assert innerValue instanceof List;
RemoteClusterPermissions discoveredRemoteClusterPermission = new RemoteClusterPermissions(
(List<Map<String, List<String>>>) innerValue
);
RemoteClusterPermissions mutated = discoveredRemoteClusterPermission.removeUnsupportedPrivileges(outboundVersion);
if (mutated.equals(discoveredRemoteClusterPermission) == false) {
// swap out the old value with the new value
modified.set(true);
Map<String, Object> remoteClusterMap = new HashMap<>((Map<String, Object>) roleDescriptorsMapMutated.get(key));
if (mutated.hasAnyPrivileges()) {
// has at least one group with privileges
remoteClusterMap.put(innerKey, mutated.toMap());
} else {
// has no groups with privileges
remoteClusterMap.remove(innerKey);
}
roleDescriptorsMapMutated.put(key, remoteClusterMap);
}
}
});
}
});
if (modified.get()) {
logger.debug(
"mutated role descriptors. Changed from {} to {} for outbound version {}",
roleDescriptorsMap,
roleDescriptorsMapMutated,
outboundVersion
);
return convertRoleDescriptorsMapToBytes(roleDescriptorsMapMutated);
} else {
// No need to serialize if we did not change anything.
logger.trace("no change to role descriptors {} for outbound version {}", roleDescriptorsMap, outboundVersion);
return roleDescriptorsBytes;
}
}
static boolean equivalentRealms(String name1, String type1, String name2, String type2) { static boolean equivalentRealms(String name1, String type1, String name2, String type2) {
if (false == type1.equals(type2)) { if (false == type1.equals(type2)) {
return false; return false;
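
The maybeRemoveRemoteClusterPrivilegesFromRoleDescriptors method above works directly on the role-descriptor maps decoded from bytes rather than on fully parsed RoleDescriptor objects: per role, it keeps only the remote_cluster privileges the outbound transport version supports, and removes the remote_cluster entry entirely when no group keeps any privilege. The following is a simplified, JDK-only sketch of that filtering over plain maps; the supported-privilege set and the map shape are stand-ins, not the RemoteClusterPermissions API.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class RemoteClusterTrimSketch {
    // Keep only privileges the receiving cluster understands; drop "remote_cluster"
    // altogether if no group keeps any privilege, so we never send empty groups.
    @SuppressWarnings("unchecked")
    static Map<String, Object> trim(Map<String, Object> roleDescriptor, Set<String> supportedOnRemote) {
        Map<String, Object> result = new HashMap<>(roleDescriptor);
        List<Map<String, List<String>>> groups = (List<Map<String, List<String>>>) result.get("remote_cluster");
        if (groups == null) {
            return result;
        }
        List<Map<String, List<String>>> trimmed = new ArrayList<>();
        for (Map<String, List<String>> group : groups) {
            List<String> kept = group.get("privileges").stream().filter(supportedOnRemote::contains).toList();
            if (kept.isEmpty() == false) {
                trimmed.add(Map.of("privileges", kept, "clusters", group.get("clusters")));
            }
        }
        if (trimmed.isEmpty()) {
            result.remove("remote_cluster");   // no groups left with privileges
        } else {
            result.put("remote_cluster", trimmed);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> role = Map.of(
            "cluster", List.of("monitor"),
            "remote_cluster", List.of(Map.of("privileges", List.of("monitor_enrich", "monitor_stats"), "clusters", List.of("*")))
        );
        // A destination that only knows monitor_enrich keeps a reduced group;
        // one that knows neither privilege loses the remote_cluster entry entirely.
        System.out.println(trim(role, Set.of("monitor_enrich")));
        System.out.println(trim(role, Set.of()));
    }
}

In the real method the supported set is derived from the outbound TransportVersion via removeUnsupportedPrivileges, and the mutated map is re-serialized to bytes only when something actually changed.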

View file

@@ -6,6 +6,8 @@
*/ */
package org.elasticsearch.xpack.core.security.authz; package org.elasticsearch.xpack.core.security.authz;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.ElasticsearchSecurityException; import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.TransportVersion; import org.elasticsearch.TransportVersion;
@@ -62,6 +64,7 @@ public class RoleDescriptor implements ToXContentObject, Writeable { public class RoleDescriptor implements ToXContentObject, Writeable {
public static final TransportVersion SECURITY_ROLE_DESCRIPTION = TransportVersions.V_8_15_0; public static final TransportVersion SECURITY_ROLE_DESCRIPTION = TransportVersions.V_8_15_0;
public static final String ROLE_TYPE = "role"; public static final String ROLE_TYPE = "role";
private static final Logger logger = LogManager.getLogger(RoleDescriptor.class);
private final String name; private final String name;
private final String[] clusterPrivileges; private final String[] clusterPrivileges;
@@ -191,7 +194,7 @@ public class RoleDescriptor implements ToXContentObject, Writeable { public class RoleDescriptor implements ToXContentObject, Writeable {
? Collections.unmodifiableMap(transientMetadata) ? Collections.unmodifiableMap(transientMetadata)
: Collections.singletonMap("enabled", true); : Collections.singletonMap("enabled", true);
this.remoteIndicesPrivileges = remoteIndicesPrivileges != null ? remoteIndicesPrivileges : RemoteIndicesPrivileges.NONE; this.remoteIndicesPrivileges = remoteIndicesPrivileges != null ? remoteIndicesPrivileges : RemoteIndicesPrivileges.NONE;
this.remoteClusterPermissions = remoteClusterPermissions != null && remoteClusterPermissions.hasPrivileges() this.remoteClusterPermissions = remoteClusterPermissions != null && remoteClusterPermissions.hasAnyPrivileges()
? remoteClusterPermissions ? remoteClusterPermissions
: RemoteClusterPermissions.NONE; : RemoteClusterPermissions.NONE;
this.restriction = restriction != null ? restriction : Restriction.NONE; this.restriction = restriction != null ? restriction : Restriction.NONE;
@@ -263,7 +266,7 @@ public class RoleDescriptor implements ToXContentObject, Writeable { public class RoleDescriptor implements ToXContentObject, Writeable {
} }
public boolean hasRemoteClusterPermissions() { public boolean hasRemoteClusterPermissions() {
return remoteClusterPermissions.hasPrivileges(); return remoteClusterPermissions.hasAnyPrivileges();
} }
public RemoteClusterPermissions getRemoteClusterPermissions() { public RemoteClusterPermissions getRemoteClusterPermissions() {
@@ -830,25 +833,32 @@ public class RoleDescriptor implements ToXContentObject, Writeable { public class RoleDescriptor implements ToXContentObject, Writeable {
currentFieldName = parser.currentName(); currentFieldName = parser.currentName();
} else if (Fields.PRIVILEGES.match(currentFieldName, parser.getDeprecationHandler())) { } else if (Fields.PRIVILEGES.match(currentFieldName, parser.getDeprecationHandler())) {
privileges = readStringArray(roleName, parser, false); privileges = readStringArray(roleName, parser, false);
if (privileges.length != 1 if (Arrays.stream(privileges)
|| RemoteClusterPermissions.getSupportedRemoteClusterPermissions() .map(s -> s.toLowerCase(Locale.ROOT).trim())
.contains(privileges[0].trim().toLowerCase(Locale.ROOT)) == false) { .allMatch(RemoteClusterPermissions.getSupportedRemoteClusterPermissions()::contains) == false) {
throw new ElasticsearchParseException( final String message = String.format(
"failed to parse remote_cluster for role [{}]. " Locale.ROOT,
+ RemoteClusterPermissions.getSupportedRemoteClusterPermissions() "failed to parse remote_cluster for role [%s]. "
+ " is the only value allowed for [{}] within [remote_cluster]", + "%s are the only values allowed for [%s] within [remote_cluster]. Found %s",
roleName, roleName,
currentFieldName RemoteClusterPermissions.getSupportedRemoteClusterPermissions(),
currentFieldName,
Arrays.toString(privileges)
); );
logger.info(message);
throw new ElasticsearchParseException(message);
} }
} else if (Fields.CLUSTERS.match(currentFieldName, parser.getDeprecationHandler())) { } else if (Fields.CLUSTERS.match(currentFieldName, parser.getDeprecationHandler())) {
clusters = readStringArray(roleName, parser, false); clusters = readStringArray(roleName, parser, false);
} else { } else {
throw new ElasticsearchParseException( final String message = String.format(
"failed to parse remote_cluster for role [{}]. unexpected field [{}]", Locale.ROOT,
"failed to parse remote_cluster for role [%s]. unexpected field [%s]",
roleName, roleName,
currentFieldName currentFieldName
); );
logger.info(message);
throw new ElasticsearchParseException(message);
} }
} }
if (privileges != null && clusters == null) { if (privileges != null && clusters == null) {
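
The parsing change above replaces the old single-privilege check with an allMatch over the lower-cased, trimmed values, and the new error message names both the supported privileges and the values that were found. Here is a small standalone sketch of that validation, with a placeholder supported set and a plain IllegalArgumentException standing in for ElasticsearchParseException.

import java.util.Arrays;
import java.util.Locale;
import java.util.Set;

public class RemoteClusterPrivilegeCheckSketch {
    // Placeholder for RemoteClusterPermissions.getSupportedRemoteClusterPermissions().
    static final Set<String> SUPPORTED = Set.of("monitor_enrich", "monitor_stats");

    // Every entry must be a supported remote_cluster privilege, compared case-insensitively.
    static void validate(String roleName, String[] privileges) {
        boolean allSupported = Arrays.stream(privileges)
            .map(s -> s.toLowerCase(Locale.ROOT).trim())
            .allMatch(SUPPORTED::contains);
        if (allSupported == false) {
            throw new IllegalArgumentException(
                String.format(
                    Locale.ROOT,
                    "failed to parse remote_cluster for role [%s]. %s are the only values allowed for [privileges] within [remote_cluster]. Found %s",
                    roleName,
                    SUPPORTED,
                    Arrays.toString(privileges)
                )
            );
        }
    }

    public static void main(String[] args) {
        validate("my_role", new String[] { "monitor_enrich" });   // passes
        try {
            validate("my_role", new String[] { "monitor_enrich", "all" });
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());   // "all" is not a supported remote_cluster privilege
        }
    }
}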

View file

@@ -13,11 +13,15 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.xcontent.ToXContentObject; import org.elasticsearch.xcontent.ToXContentObject;
import org.elasticsearch.xcontent.XContentBuilder; import org.elasticsearch.xcontent.XContentBuilder;
import org.elasticsearch.xpack.core.security.authz.RoleDescriptor;
import org.elasticsearch.xpack.core.security.support.StringMatcher; import org.elasticsearch.xpack.core.security.support.StringMatcher;
import java.io.IOException; import java.io.IOException;
import java.util.Arrays; import java.util.Arrays;
import java.util.List;
import java.util.Map;
import static org.elasticsearch.xpack.core.security.authz.RoleDescriptor.Fields.CLUSTERS;
import static org.elasticsearch.xpack.core.security.authz.RoleDescriptor.Fields.PRIVILEGES;
/** /**
* Represents a group of permissions for a remote cluster. For example: * Represents a group of permissions for a remote cluster. For example:
@ -41,6 +45,14 @@ public class RemoteClusterPermissionGroup implements NamedWriteable, ToXContentO
remoteClusterAliasMatcher = StringMatcher.of(remoteClusterAliases); remoteClusterAliasMatcher = StringMatcher.of(remoteClusterAliases);
} }
public RemoteClusterPermissionGroup(Map<String, List<String>> remoteClusterGroup) {
assert remoteClusterGroup.get(PRIVILEGES.getPreferredName()) != null : "privileges must be non-null";
assert remoteClusterGroup.get(CLUSTERS.getPreferredName()) != null : "clusters must be non-null";
clusterPrivileges = remoteClusterGroup.get(PRIVILEGES.getPreferredName()).toArray(new String[0]);
remoteClusterAliases = remoteClusterGroup.get(CLUSTERS.getPreferredName()).toArray(new String[0]);
remoteClusterAliasMatcher = StringMatcher.of(remoteClusterAliases);
}
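A short, hedged sketch of the new map-based constructor above: the keys are the parser field names "privileges" and "clusters" (RoleDescriptor.Fields.PRIVILEGES / CLUSTERS), matching what toMap() below produces; the cluster alias pattern is an illustrative placeholder.

// Illustrative sketch only; "my-remote-*" is a placeholder alias pattern.
import java.util.List;
import java.util.Map;
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissionGroup;

class GroupFromMapSketch {
    static RemoteClusterPermissionGroup example() {
        Map<String, List<String>> asMap = Map.of(
            "privileges", List.of("monitor_enrich", "monitor_stats"),
            "clusters", List.of("my-remote-*")
        );
        return new RemoteClusterPermissionGroup(asMap);
    }
}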
/** /**
* @param clusterPrivileges The list of cluster privileges that are allowed for the remote cluster. must not be null or empty. * @param clusterPrivileges The list of cluster privileges that are allowed for the remote cluster. must not be null or empty.
* @param remoteClusterAliases The list of remote clusters that the privileges apply to. must not be null or empty. * @param remoteClusterAliases The list of remote clusters that the privileges apply to. must not be null or empty.
@ -53,10 +65,14 @@ public class RemoteClusterPermissionGroup implements NamedWriteable, ToXContentO
throw new IllegalArgumentException("remote cluster groups must not be null or empty"); throw new IllegalArgumentException("remote cluster groups must not be null or empty");
} }
if (Arrays.stream(clusterPrivileges).anyMatch(s -> Strings.hasText(s) == false)) { if (Arrays.stream(clusterPrivileges).anyMatch(s -> Strings.hasText(s) == false)) {
throw new IllegalArgumentException("remote_cluster privileges must contain valid non-empty, non-null values"); throw new IllegalArgumentException(
"remote_cluster privileges must contain valid non-empty, non-null values " + Arrays.toString(clusterPrivileges)
);
} }
if (Arrays.stream(remoteClusterAliases).anyMatch(s -> Strings.hasText(s) == false)) { if (Arrays.stream(remoteClusterAliases).anyMatch(s -> Strings.hasText(s) == false)) {
throw new IllegalArgumentException("remote_cluster clusters aliases must contain valid non-empty, non-null values"); throw new IllegalArgumentException(
"remote_cluster clusters aliases must contain valid non-empty, non-null values " + Arrays.toString(remoteClusterAliases)
);
} }
this.clusterPrivileges = clusterPrivileges; this.clusterPrivileges = clusterPrivileges;
@ -86,11 +102,24 @@ public class RemoteClusterPermissionGroup implements NamedWriteable, ToXContentO
return Arrays.copyOf(remoteClusterAliases, remoteClusterAliases.length); return Arrays.copyOf(remoteClusterAliases, remoteClusterAliases.length);
} }
/**
* Converts the group to a map representation.
* @return A map representation of the group.
*/
public Map<String, List<String>> toMap() {
return Map.of(
PRIVILEGES.getPreferredName(),
Arrays.asList(clusterPrivileges),
CLUSTERS.getPreferredName(),
Arrays.asList(remoteClusterAliases)
);
}
@Override @Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject(); builder.startObject();
builder.array(RoleDescriptor.Fields.PRIVILEGES.getPreferredName(), clusterPrivileges); builder.array(PRIVILEGES.getPreferredName(), clusterPrivileges);
builder.array(RoleDescriptor.Fields.CLUSTERS.getPreferredName(), remoteClusterAliases); builder.array(CLUSTERS.getPreferredName(), remoteClusterAliases);
builder.endObject(); builder.endObject();
return builder; return builder;
} }

View file

@ -29,13 +29,19 @@ import java.util.Locale;
import java.util.Map; import java.util.Map;
import java.util.Objects; import java.util.Objects;
import java.util.Set; import java.util.Set;
import java.util.TreeSet;
import java.util.stream.Collectors; import java.util.stream.Collectors;
import static org.elasticsearch.TransportVersions.ROLE_MONITOR_STATS;
/** /**
* Represents the set of permissions for remote clusters. This is intended to be the model for both the {@link RoleDescriptor} * Represents the set of permissions for remote clusters. This is intended to be the model for both the {@link RoleDescriptor}
* and {@link Role}. This model is not intended to be sent to a remote cluster, but can be (wire) serialized within a single cluster * and {@link Role}. This model is intended to be converted to local cluster permissions
* as well as the Xcontent serialization for the REST API and persistence of the role in the security index. The privileges modeled here * {@link #collapseAndRemoveUnsupportedPrivileges(String, TransportVersion)} before being sent to the remote cluster. This model can also be included
* will be converted to the appropriate cluster privileges when sent to a remote cluster. * in the role descriptors for (normal) API keys sent between nodes/clusters. In both cases the outbound transport version can be used to
* remove permissions that are not available to older nodes or clusters. The methods {@link #removeUnsupportedPrivileges(TransportVersion)}
* and {@link #collapseAndRemoveUnsupportedPrivileges(String, TransportVersion)} are used to aid in ensuring correct privileges per
* transport version.
* For example, on the local/querying cluster this model represents the following: * For example, on the local/querying cluster this model represents the following:
* <code> * <code>
* "remote_cluster" : [ * "remote_cluster" : [
@ -49,15 +55,18 @@ import java.util.stream.Collectors;
* } * }
* ] * ]
* </code> * </code>
* when sent to the remote cluster "clusterA", the privileges will be converted to the appropriate cluster privileges. For example: * (RCS 2.0) when sent to the remote cluster "clusterA", the privileges will be converted to the appropriate cluster privileges.
* For example:
* <code> * <code>
* "cluster": ["foo"] * "cluster": ["foo"]
* </code> * </code>
* and when sent to the remote cluster "clusterB", the privileges will be converted to the appropriate cluster privileges. For example: * and (RCS 2.0) when sent to the remote cluster "clusterB", the privileges will be converted to the appropriate cluster privileges.
* For example:
* <code> * <code>
* "cluster": ["bar"] * "cluster": ["bar"]
* </code> * </code>
* If the remote cluster does not support the privilege, as determined by the remote cluster version, the privilege will be not be sent. * For normal API keys and their role descriptors: if the remote cluster does not support the privilege, the privilege will not be sent.
* Upstream code performs the removal, but this class owns the business logic for which privileges to remove per outbound version.
*/ */
public class RemoteClusterPermissions implements NamedWriteable, ToXContentObject { public class RemoteClusterPermissions implements NamedWriteable, ToXContentObject {
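To make the reworked class javadoc concrete, here is a hedged sketch of how the model is built and then collapsed per remote cluster alias for RCS 2.0; the aliases "clusterA" and "clusterB" are illustrative, and the methods used are the ones introduced in this file.

// Illustrative sketch only; aliases are placeholders.
import org.elasticsearch.TransportVersion;
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissionGroup;
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions;

class CollapsePerClusterSketch {
    static void example() {
        RemoteClusterPermissions permissions = new RemoteClusterPermissions()
            .addGroup(new RemoteClusterPermissionGroup(new String[] { "monitor_enrich" }, new String[] { "clusterA" }))
            .addGroup(new RemoteClusterPermissionGroup(new String[] { "monitor_stats" }, new String[] { "clusterB" }));

        // RCS 2.0: collapse the matching groups into plain cluster privileges for the target alias,
        // dropping anything the outbound transport version does not support.
        String[] forClusterA = permissions.collapseAndRemoveUnsupportedPrivileges("clusterA", TransportVersion.current());
        // forClusterA -> ["monitor_enrich"]
        String[] forClusterB = permissions.collapseAndRemoveUnsupportedPrivileges("clusterB", TransportVersion.current());
        // forClusterB -> ["monitor_stats"]
    }
}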
@ -70,19 +79,33 @@ public class RemoteClusterPermissions implements NamedWriteable, ToXContentObjec
// package private non-final for testing // package private non-final for testing
static Map<TransportVersion, Set<String>> allowedRemoteClusterPermissions = Map.of( static Map<TransportVersion, Set<String>> allowedRemoteClusterPermissions = Map.of(
ROLE_REMOTE_CLUSTER_PRIVS, ROLE_REMOTE_CLUSTER_PRIVS,
Set.of(ClusterPrivilegeResolver.MONITOR_ENRICH.name()) Set.of(ClusterPrivilegeResolver.MONITOR_ENRICH.name()),
ROLE_MONITOR_STATS,
Set.of(ClusterPrivilegeResolver.MONITOR_STATS.name())
); );
static final TransportVersion lastTransportVersionPermission = allowedRemoteClusterPermissions.keySet()
.stream()
.max(TransportVersion::compareTo)
.orElseThrow();
public static final RemoteClusterPermissions NONE = new RemoteClusterPermissions(); public static final RemoteClusterPermissions NONE = new RemoteClusterPermissions();
public static Set<String> getSupportedRemoteClusterPermissions() { public static Set<String> getSupportedRemoteClusterPermissions() {
return allowedRemoteClusterPermissions.values().stream().flatMap(Set::stream).collect(Collectors.toSet()); return allowedRemoteClusterPermissions.values().stream().flatMap(Set::stream).collect(Collectors.toCollection(TreeSet::new));
} }
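With the switch to a TreeSet above, the supported set now iterates in sorted order, which is what the validation and error messages elsewhere in this commit rely on. A trivial, hedged sketch:

// Illustrative sketch only.
import java.util.Set;
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions;

class SupportedPermissionsSketch {
    static void example() {
        Set<String> supported = RemoteClusterPermissions.getSupportedRemoteClusterPermissions();
        System.out.println(supported); // sorted iteration order: [monitor_enrich, monitor_stats]
    }
}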
public RemoteClusterPermissions(StreamInput in) throws IOException { public RemoteClusterPermissions(StreamInput in) throws IOException {
remoteClusterPermissionGroups = in.readNamedWriteableCollectionAsList(RemoteClusterPermissionGroup.class); remoteClusterPermissionGroups = in.readNamedWriteableCollectionAsList(RemoteClusterPermissionGroup.class);
} }
public RemoteClusterPermissions(List<Map<String, List<String>>> remoteClusters) {
remoteClusterPermissionGroups = new ArrayList<>();
for (Map<String, List<String>> remoteCluster : remoteClusters) {
RemoteClusterPermissionGroup remoteClusterPermissionGroup = new RemoteClusterPermissionGroup(remoteCluster);
remoteClusterPermissionGroups.add(remoteClusterPermissionGroup);
}
}
public RemoteClusterPermissions() { public RemoteClusterPermissions() {
remoteClusterPermissionGroups = new ArrayList<>(); remoteClusterPermissionGroups = new ArrayList<>();
} }
@ -97,10 +120,64 @@ public class RemoteClusterPermissions implements NamedWriteable, ToXContentObjec
} }
/** /**
* Gets the privilege names for the remote cluster. This method will collapse all groups to single String[] all lowercase * Will remove any unsupported privileges for the provided outbound version. This method will not modify the current instance.
* and will only return the appropriate privileges for the provided remote cluster version. * This is useful for (normal) API key role descriptors to help ensure that we don't send unsupported privileges. Calling
* this method may leave no groups if all privileges are removed. {@link #hasAnyPrivileges()} can be used to check if there are
* any privileges left.
* @param outboundVersion The version by which to remove unsupported privileges, this is typically the version of the remote cluster
* @return a new instance of RemoteClusterPermissions with the unsupported privileges removed
*/ */
public String[] privilegeNames(final String remoteClusterAlias, TransportVersion remoteClusterVersion) { public RemoteClusterPermissions removeUnsupportedPrivileges(TransportVersion outboundVersion) {
Objects.requireNonNull(outboundVersion, "outboundVersion must not be null");
if (outboundVersion.onOrAfter(lastTransportVersionPermission)) {
return this;
}
RemoteClusterPermissions copyForOutboundVersion = new RemoteClusterPermissions();
Set<String> allowedPermissionsPerVersion = getAllowedPermissionsPerVersion(outboundVersion);
for (RemoteClusterPermissionGroup group : remoteClusterPermissionGroups) {
String[] privileges = group.clusterPrivileges();
List<String> outboundPrivileges = new ArrayList<>(privileges.length);
for (String privilege : privileges) {
if (allowedPermissionsPerVersion.contains(privilege.toLowerCase(Locale.ROOT))) {
outboundPrivileges.add(privilege);
}
}
if (outboundPrivileges.isEmpty() == false) {
RemoteClusterPermissionGroup outboundGroup = new RemoteClusterPermissionGroup(
outboundPrivileges.toArray(new String[0]),
group.remoteClusterAliases()
);
copyForOutboundVersion.addGroup(outboundGroup);
if (logger.isDebugEnabled()) {
if (group.equals(outboundGroup) == false) {
logger.debug(
"Removed unsupported remote cluster permissions. Remaining {} for remote cluster [{}] for version [{}]."
+ "Due to the remote cluster version, only the following permissions are allowed: {}",
outboundPrivileges,
group.remoteClusterAliases(),
outboundVersion,
allowedPermissionsPerVersion
);
}
}
} else {
logger.debug(
"Removed all remote cluster permissions for remote cluster [{}]. "
+ "Due to the remote cluster version, only the following permissions are allowed: {}",
group.remoteClusterAliases(),
allowedPermissionsPerVersion
);
}
}
return copyForOutboundVersion;
}
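A hedged usage sketch of removeUnsupportedPrivileges, mirroring the RemoteClusterPermissionsTests added later in this commit: for an outbound version that predates ROLE_MONITOR_STATS, monitor_stats is stripped while monitor_enrich survives; if nothing survived, hasAnyPrivileges() on the result would be false.

// Illustrative sketch only; ROLE_REMOTE_CLUSTER_PRIVS predates the monitor_stats privilege.
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissionGroup;
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions;

class RemoveUnsupportedSketch {
    static void example() {
        RemoteClusterPermissions permissions = new RemoteClusterPermissions().addGroup(
            new RemoteClusterPermissionGroup(new String[] { "monitor_enrich", "monitor_stats" }, new String[] { "*" })
        );

        RemoteClusterPermissions outbound = permissions.removeUnsupportedPrivileges(RemoteClusterPermissions.ROLE_REMOTE_CLUSTER_PRIVS);
        // outbound keeps a single group with only "monitor_enrich"; the original instance is not modified.
        boolean stillHasPrivileges = outbound.hasAnyPrivileges(); // true here
    }
}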
/**
* Gets all the privilege names for the remote cluster. This method will collapse all groups into a single lowercase String[]
* and will only return the appropriate privileges for the provided remote cluster version. This is useful for RCS 2.0 to ensure
* that we properly convert all the remote_cluster -> cluster privileges per remote cluster.
*/
public String[] collapseAndRemoveUnsupportedPrivileges(final String remoteClusterAlias, TransportVersion outboundVersion) {
// get all privileges for the remote cluster // get all privileges for the remote cluster
Set<String> groupPrivileges = remoteClusterPermissionGroups.stream() Set<String> groupPrivileges = remoteClusterPermissionGroups.stream()
@ -111,13 +188,7 @@ public class RemoteClusterPermissions implements NamedWriteable, ToXContentObjec
.collect(Collectors.toSet()); .collect(Collectors.toSet());
// find all the privileges that are allowed for the remote cluster version // find all the privileges that are allowed for the remote cluster version
Set<String> allowedPermissionsPerVersion = allowedRemoteClusterPermissions.entrySet() Set<String> allowedPermissionsPerVersion = getAllowedPermissionsPerVersion(outboundVersion);
.stream()
.filter((entry) -> entry.getKey().onOrBefore(remoteClusterVersion))
.map(Map.Entry::getValue)
.flatMap(Set::stream)
.map(s -> s.toLowerCase(Locale.ROOT))
.collect(Collectors.toSet());
// intersect the two sets to get the allowed privileges for the remote cluster version // intersect the two sets to get the allowed privileges for the remote cluster version
Set<String> allowedPrivileges = new HashSet<>(groupPrivileges); Set<String> allowedPrivileges = new HashSet<>(groupPrivileges);
@ -137,13 +208,21 @@ public class RemoteClusterPermissions implements NamedWriteable, ToXContentObjec
return allowedPrivileges.stream().sorted().toArray(String[]::new); return allowedPrivileges.stream().sorted().toArray(String[]::new);
} }
/**
* Converts this object to its {@link Map} representation.
* @return a list of maps representing the remote cluster permissions
*/
public List<Map<String, List<String>>> toMap() {
return remoteClusterPermissionGroups.stream().map(RemoteClusterPermissionGroup::toMap).toList();
}
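toMap() above is intended as the inverse of the new List&lt;Map&lt;String, List&lt;String&gt;&gt;&gt; constructor earlier in this file, as the new testToMap below asserts; a minimal hedged round-trip sketch:

// Illustrative sketch only.
import java.util.List;
import java.util.Map;
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissionGroup;
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions;

class ToMapRoundTripSketch {
    static void example() {
        RemoteClusterPermissions original = new RemoteClusterPermissions().addGroup(
            new RemoteClusterPermissionGroup(new String[] { "monitor_enrich" }, new String[] { "*" })
        );

        List<Map<String, List<String>>> asMaps = original.toMap();
        RemoteClusterPermissions roundTripped = new RemoteClusterPermissions(asMaps);
        // original.equals(roundTripped) is expected to hold, matching testToMap below.
    }
}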
/** /**
* Validates the remote cluster permissions (regardless of remote cluster version). * Validates the remote cluster permissions (regardless of remote cluster version).
* This method will throw an {@link IllegalArgumentException} if the permissions are invalid. * This method will throw an {@link IllegalArgumentException} if the permissions are invalid.
* Generally, this method is just a safety check and validity should be checked before adding the permissions to this class. * Generally, this method is just a safety check and validity should be checked before adding the permissions to this class.
*/ */
public void validate() { public void validate() {
assert hasPrivileges(); assert hasAnyPrivileges();
Set<String> invalid = getUnsupportedPrivileges(); Set<String> invalid = getUnsupportedPrivileges();
if (invalid.isEmpty() == false) { if (invalid.isEmpty() == false) {
throw new IllegalArgumentException( throw new IllegalArgumentException(
@ -173,11 +252,11 @@ public class RemoteClusterPermissions implements NamedWriteable, ToXContentObjec
return invalid; return invalid;
} }
public boolean hasPrivileges(final String remoteClusterAlias) { public boolean hasAnyPrivileges(final String remoteClusterAlias) {
return remoteClusterPermissionGroups.stream().anyMatch(remoteIndicesGroup -> remoteIndicesGroup.hasPrivileges(remoteClusterAlias)); return remoteClusterPermissionGroups.stream().anyMatch(remoteIndicesGroup -> remoteIndicesGroup.hasPrivileges(remoteClusterAlias));
} }
public boolean hasPrivileges() { public boolean hasAnyPrivileges() {
return remoteClusterPermissionGroups.isEmpty() == false; return remoteClusterPermissionGroups.isEmpty() == false;
} }
@ -185,6 +264,16 @@ public class RemoteClusterPermissions implements NamedWriteable, ToXContentObjec
return Collections.unmodifiableList(remoteClusterPermissionGroups); return Collections.unmodifiableList(remoteClusterPermissionGroups);
} }
private Set<String> getAllowedPermissionsPerVersion(TransportVersion outboundVersion) {
return allowedRemoteClusterPermissions.entrySet()
.stream()
.filter((entry) -> entry.getKey().onOrBefore(outboundVersion))
.map(Map.Entry::getValue)
.flatMap(Set::stream)
.map(s -> s.toLowerCase(Locale.ROOT))
.collect(Collectors.toSet());
}
@Override @Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
for (RemoteClusterPermissionGroup remoteClusterPermissionGroup : remoteClusterPermissionGroups) { for (RemoteClusterPermissionGroup remoteClusterPermissionGroup : remoteClusterPermissionGroups) {
@ -220,4 +309,5 @@ public class RemoteClusterPermissions implements NamedWriteable, ToXContentObjec
public String getWriteableName() { public String getWriteableName() {
return NAME; return NAME;
} }
} }

View file

@ -283,7 +283,7 @@ public interface Role {
public Builder addRemoteClusterPermissions(RemoteClusterPermissions remoteClusterPermissions) { public Builder addRemoteClusterPermissions(RemoteClusterPermissions remoteClusterPermissions) {
Objects.requireNonNull(remoteClusterPermissions, "remoteClusterPermissions must not be null"); Objects.requireNonNull(remoteClusterPermissions, "remoteClusterPermissions must not be null");
assert this.remoteClusterPermissions == null : "addRemoteClusterPermissions should only be called once"; assert this.remoteClusterPermissions == null : "addRemoteClusterPermissions should only be called once";
if (remoteClusterPermissions.hasPrivileges()) { if (remoteClusterPermissions.hasAnyPrivileges()) {
remoteClusterPermissions.validate(); remoteClusterPermissions.validate();
} }
this.remoteClusterPermissions = remoteClusterPermissions; this.remoteClusterPermissions = remoteClusterPermissions;

View file

@ -210,7 +210,7 @@ public class SimpleRole implements Role {
final RemoteIndicesPermission remoteIndicesPermission = this.remoteIndicesPermission.forCluster(remoteClusterAlias); final RemoteIndicesPermission remoteIndicesPermission = this.remoteIndicesPermission.forCluster(remoteClusterAlias);
if (remoteIndicesPermission.remoteIndicesGroups().isEmpty() if (remoteIndicesPermission.remoteIndicesGroups().isEmpty()
&& remoteClusterPermissions.hasPrivileges(remoteClusterAlias) == false) { && remoteClusterPermissions.hasAnyPrivileges(remoteClusterAlias) == false) {
return RoleDescriptorsIntersection.EMPTY; return RoleDescriptorsIntersection.EMPTY;
} }
@ -224,7 +224,7 @@ public class SimpleRole implements Role {
return new RoleDescriptorsIntersection( return new RoleDescriptorsIntersection(
new RoleDescriptor( new RoleDescriptor(
REMOTE_USER_ROLE_NAME, REMOTE_USER_ROLE_NAME,
remoteClusterPermissions.privilegeNames(remoteClusterAlias, remoteClusterVersion), remoteClusterPermissions.collapseAndRemoveUnsupportedPrivileges(remoteClusterAlias, remoteClusterVersion),
// The role descriptors constructed here may be cached in raw byte form, using a hash of their content as a // The role descriptors constructed here may be cached in raw byte form, using a hash of their content as a
// cache key; we therefore need deterministic order when constructing them here, to ensure cache hits for // cache key; we therefore need deterministic order when constructing them here, to ensure cache hits for
// equivalent role descriptors // equivalent role descriptors

View file

@ -110,6 +110,8 @@ public class ClusterPrivilegeResolver {
private static final Set<String> MONITOR_WATCHER_PATTERN = Set.of("cluster:monitor/xpack/watcher/*"); private static final Set<String> MONITOR_WATCHER_PATTERN = Set.of("cluster:monitor/xpack/watcher/*");
private static final Set<String> MONITOR_ROLLUP_PATTERN = Set.of("cluster:monitor/xpack/rollup/*"); private static final Set<String> MONITOR_ROLLUP_PATTERN = Set.of("cluster:monitor/xpack/rollup/*");
private static final Set<String> MONITOR_ENRICH_PATTERN = Set.of("cluster:monitor/xpack/enrich/*", "cluster:admin/xpack/enrich/get"); private static final Set<String> MONITOR_ENRICH_PATTERN = Set.of("cluster:monitor/xpack/enrich/*", "cluster:admin/xpack/enrich/get");
// intentionally cluster:monitor/stats* to match cluster:monitor/stats, cluster:monitor/stats[n] and cluster:monitor/stats/remote
private static final Set<String> MONITOR_STATS_PATTERN = Set.of("cluster:monitor/stats*");
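The comment above explains the wildcard; a hedged, test-style sketch of how the new monitor_stats privilege resolves against one of those actions, following the same check pattern used in CrossClusterApiKeyRoleDescriptorBuilderTests later in this commit (the mocked TransportRequest and the test authentication helper are test conveniences, not production usage):

// Test-style sketch only; mirrors the privilege check pattern in CrossClusterApiKeyRoleDescriptorBuilderTests.
import static org.mockito.Mockito.mock;

import org.elasticsearch.transport.TransportRequest;
import org.elasticsearch.xpack.core.security.authc.AuthenticationTestHelper;
import org.elasticsearch.xpack.core.security.authz.permission.ClusterPermission;
import org.elasticsearch.xpack.core.security.authz.privilege.ClusterPrivilege;
import org.elasticsearch.xpack.core.security.authz.privilege.ClusterPrivilegeResolver;

class MonitorStatsPatternSketch {
    static void example() {
        ClusterPrivilege monitorStats = ClusterPrivilegeResolver.resolve("monitor_stats");
        boolean covered = monitorStats.buildPermission(ClusterPermission.builder())
            .build()
            .check("cluster:monitor/stats/remote", mock(TransportRequest.class), AuthenticationTestHelper.builder().build());
        // covered == true: the cluster:monitor/stats* pattern matches the remote stats action.
    }
}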
private static final Set<String> ALL_CLUSTER_PATTERN = Set.of( private static final Set<String> ALL_CLUSTER_PATTERN = Set.of(
"cluster:*", "cluster:*",
@ -208,7 +210,11 @@ public class ClusterPrivilegeResolver {
// esql enrich // esql enrich
"cluster:monitor/xpack/enrich/esql/resolve_policy", "cluster:monitor/xpack/enrich/esql/resolve_policy",
"cluster:internal:data/read/esql/open_exchange", "cluster:internal:data/read/esql/open_exchange",
"cluster:internal:data/read/esql/exchange" "cluster:internal:data/read/esql/exchange",
// cluster stats for remote clusters
"cluster:monitor/stats/remote",
"cluster:monitor/stats",
"cluster:monitor/stats[n]"
); );
private static final Set<String> CROSS_CLUSTER_REPLICATION_PATTERN = Set.of( private static final Set<String> CROSS_CLUSTER_REPLICATION_PATTERN = Set.of(
RemoteClusterService.REMOTE_CLUSTER_HANDSHAKE_ACTION_NAME, RemoteClusterService.REMOTE_CLUSTER_HANDSHAKE_ACTION_NAME,
@ -243,6 +249,7 @@ public class ClusterPrivilegeResolver {
public static final NamedClusterPrivilege MONITOR_WATCHER = new ActionClusterPrivilege("monitor_watcher", MONITOR_WATCHER_PATTERN); public static final NamedClusterPrivilege MONITOR_WATCHER = new ActionClusterPrivilege("monitor_watcher", MONITOR_WATCHER_PATTERN);
public static final NamedClusterPrivilege MONITOR_ROLLUP = new ActionClusterPrivilege("monitor_rollup", MONITOR_ROLLUP_PATTERN); public static final NamedClusterPrivilege MONITOR_ROLLUP = new ActionClusterPrivilege("monitor_rollup", MONITOR_ROLLUP_PATTERN);
public static final NamedClusterPrivilege MONITOR_ENRICH = new ActionClusterPrivilege("monitor_enrich", MONITOR_ENRICH_PATTERN); public static final NamedClusterPrivilege MONITOR_ENRICH = new ActionClusterPrivilege("monitor_enrich", MONITOR_ENRICH_PATTERN);
public static final NamedClusterPrivilege MONITOR_STATS = new ActionClusterPrivilege("monitor_stats", MONITOR_STATS_PATTERN);
public static final NamedClusterPrivilege MANAGE = new ActionClusterPrivilege("manage", ALL_CLUSTER_PATTERN, ALL_SECURITY_PATTERN); public static final NamedClusterPrivilege MANAGE = new ActionClusterPrivilege("manage", ALL_CLUSTER_PATTERN, ALL_SECURITY_PATTERN);
public static final NamedClusterPrivilege MANAGE_INFERENCE = new ActionClusterPrivilege("manage_inference", MANAGE_INFERENCE_PATTERN); public static final NamedClusterPrivilege MANAGE_INFERENCE = new ActionClusterPrivilege("manage_inference", MANAGE_INFERENCE_PATTERN);
public static final NamedClusterPrivilege MANAGE_ML = new ActionClusterPrivilege("manage_ml", MANAGE_ML_PATTERN); public static final NamedClusterPrivilege MANAGE_ML = new ActionClusterPrivilege("manage_ml", MANAGE_ML_PATTERN);
@ -424,6 +431,7 @@ public class ClusterPrivilegeResolver {
MONITOR_WATCHER, MONITOR_WATCHER,
MONITOR_ROLLUP, MONITOR_ROLLUP,
MONITOR_ENRICH, MONITOR_ENRICH,
MONITOR_STATS,
MANAGE, MANAGE,
MANAGE_CONNECTOR, MANAGE_CONNECTOR,
MANAGE_INFERENCE, MANAGE_INFERENCE,
@ -499,7 +507,7 @@ public class ClusterPrivilegeResolver {
+ Strings.collectionToCommaDelimitedString(VALUES.keySet()) + Strings.collectionToCommaDelimitedString(VALUES.keySet())
+ "] or a pattern over one of the available " + "] or a pattern over one of the available "
+ "cluster actions"; + "cluster actions";
logger.debug(errorMessage); logger.warn(errorMessage);
throw new IllegalArgumentException(errorMessage); throw new IllegalArgumentException(errorMessage);
} }

View file

@ -20,6 +20,9 @@ import org.elasticsearch.xpack.core.security.action.profile.GetProfilesAction;
import org.elasticsearch.xpack.core.security.action.profile.SuggestProfilesAction; import org.elasticsearch.xpack.core.security.action.profile.SuggestProfilesAction;
import org.elasticsearch.xpack.core.security.action.user.ProfileHasPrivilegesAction; import org.elasticsearch.xpack.core.security.action.user.ProfileHasPrivilegesAction;
import org.elasticsearch.xpack.core.security.authz.RoleDescriptor; import org.elasticsearch.xpack.core.security.authz.RoleDescriptor;
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissionGroup;
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions;
import org.elasticsearch.xpack.core.security.authz.privilege.ClusterPrivilegeResolver;
import org.elasticsearch.xpack.core.security.authz.privilege.ConfigurableClusterPrivilege; import org.elasticsearch.xpack.core.security.authz.privilege.ConfigurableClusterPrivilege;
import org.elasticsearch.xpack.core.security.authz.privilege.ConfigurableClusterPrivileges; import org.elasticsearch.xpack.core.security.authz.privilege.ConfigurableClusterPrivileges;
import org.elasticsearch.xpack.core.security.support.MetadataUtils; import org.elasticsearch.xpack.core.security.support.MetadataUtils;
@ -497,7 +500,15 @@ class KibanaOwnedReservedRoleDescriptors {
getRemoteIndicesReadPrivileges("metrics-apm.*"), getRemoteIndicesReadPrivileges("metrics-apm.*"),
getRemoteIndicesReadPrivileges("traces-apm.*"), getRemoteIndicesReadPrivileges("traces-apm.*"),
getRemoteIndicesReadPrivileges("traces-apm-*") }, getRemoteIndicesReadPrivileges("traces-apm-*") },
null, new RemoteClusterPermissions().addGroup(
new RemoteClusterPermissionGroup(
RemoteClusterPermissions.getSupportedRemoteClusterPermissions()
.stream()
.filter(s -> s.equals(ClusterPrivilegeResolver.MONITOR_STATS.name()))
.toArray(String[]::new),
new String[] { "*" }
)
),
null, null,
"Grants access necessary for the Kibana system user to read from and write to the Kibana indices, " "Grants access necessary for the Kibana system user to read from and write to the Kibana indices, "
+ "manage index templates and tokens, and check the availability of the Elasticsearch cluster. " + "manage index templates and tokens, and check the availability of the Elasticsearch cluster. "

View file

@ -85,7 +85,6 @@ public abstract class AbstractClusterStateLicenseServiceTestCase extends ESTestC
when(discoveryNodes.stream()).thenAnswer(i -> Stream.of(mockNode)); when(discoveryNodes.stream()).thenAnswer(i -> Stream.of(mockNode));
when(discoveryNodes.iterator()).thenAnswer(i -> Iterators.single(mockNode)); when(discoveryNodes.iterator()).thenAnswer(i -> Iterators.single(mockNode));
when(discoveryNodes.isLocalNodeElectedMaster()).thenReturn(false); when(discoveryNodes.isLocalNodeElectedMaster()).thenReturn(false);
when(discoveryNodes.getMinNodeVersion()).thenReturn(mockNode.getVersion());
when(state.nodes()).thenReturn(discoveryNodes); when(state.nodes()).thenReturn(discoveryNodes);
when(state.getNodes()).thenReturn(discoveryNodes); // it is really ridiculous we have nodes() and getNodes()... when(state.getNodes()).thenReturn(discoveryNodes); // it is really ridiculous we have nodes() and getNodes()...
when(clusterService.state()).thenReturn(state); when(clusterService.state()).thenReturn(state);

View file

@ -10,11 +10,16 @@ package org.elasticsearch.xpack.core.security.action.apikey;
import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.core.Strings; import org.elasticsearch.core.Strings;
import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.transport.TransportRequest;
import org.elasticsearch.xcontent.XContentParseException; import org.elasticsearch.xcontent.XContentParseException;
import org.elasticsearch.xcontent.XContentParserConfiguration; import org.elasticsearch.xcontent.XContentParserConfiguration;
import org.elasticsearch.xcontent.json.JsonXContent; import org.elasticsearch.xcontent.json.JsonXContent;
import org.elasticsearch.xpack.core.security.authc.AuthenticationTestHelper;
import org.elasticsearch.xpack.core.security.authz.RoleDescriptor; import org.elasticsearch.xpack.core.security.authz.RoleDescriptor;
import org.elasticsearch.xpack.core.security.authz.permission.ClusterPermission;
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions; import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions;
import org.elasticsearch.xpack.core.security.authz.privilege.ClusterPrivilege;
import org.elasticsearch.xpack.core.security.authz.privilege.ClusterPrivilegeResolver;
import java.io.IOException; import java.io.IOException;
import java.util.List; import java.util.List;
@ -27,6 +32,7 @@ import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.instanceOf; import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.is; import static org.hamcrest.Matchers.is;
import static org.mockito.Mockito.mock;
public class CrossClusterApiKeyRoleDescriptorBuilderTests extends ESTestCase { public class CrossClusterApiKeyRoleDescriptorBuilderTests extends ESTestCase {
@ -356,9 +362,42 @@ public class CrossClusterApiKeyRoleDescriptorBuilderTests extends ESTestCase {
} }
public void testAPIKeyAllowsAllRemoteClusterPrivilegesForCCS() { public void testAPIKeyAllowsAllRemoteClusterPrivilegesForCCS() {
// if users can add remote cluster permissions to a role, then the APIKey should also allow that for that permission // test to help ensure that at least 1 action that is allowed by the remote cluster permissions are supported by CCS
// the inverse however, is not guaranteed. cross_cluster_search exists largely for internal use and is not exposed to the users role List<String> actionsToTest = List.of("cluster:monitor/xpack/enrich/esql/resolve_policy", "cluster:monitor/stats/remote");
assertTrue(Set.of(CCS_CLUSTER_PRIVILEGE_NAMES).containsAll(RemoteClusterPermissions.getSupportedRemoteClusterPermissions())); // if you add new remote cluster permissions, please define an action we can test to help ensure it is supported by RCS 2.0
assertThat(actionsToTest.size(), equalTo(RemoteClusterPermissions.getSupportedRemoteClusterPermissions().size()));
for (String privilege : RemoteClusterPermissions.getSupportedRemoteClusterPermissions()) {
boolean actionPassesRemoteClusterPermissionCheck = false;
ClusterPrivilege clusterPrivilege = ClusterPrivilegeResolver.resolve(privilege);
// each remote cluster privilege has an action to test
for (String action : actionsToTest) {
if (clusterPrivilege.buildPermission(ClusterPermission.builder())
.build()
.check(action, mock(TransportRequest.class), AuthenticationTestHelper.builder().build())) {
actionPassesRemoteClusterPermissionCheck = true;
break;
}
}
assertTrue(
"privilege [" + privilege + "] does not cover any actions among [" + actionsToTest + "]",
actionPassesRemoteClusterPermissionCheck
);
}
// test that the actions pass the privilege check for CCS
for (String privilege : Set.of(CCS_CLUSTER_PRIVILEGE_NAMES)) {
boolean actionPassesRemoteCCSCheck = false;
ClusterPrivilege clusterPrivilege = ClusterPrivilegeResolver.resolve(privilege);
for (String action : actionsToTest) {
if (clusterPrivilege.buildPermission(ClusterPermission.builder())
.build()
.check(action, mock(TransportRequest.class), AuthenticationTestHelper.builder().build())) {
actionPassesRemoteCCSCheck = true;
break;
}
}
assertTrue(actionPassesRemoteCCSCheck);
}
} }
private static void assertRoleDescriptor( private static void assertRoleDescriptor(

View file

@ -104,7 +104,7 @@ public class PutRoleRequestTests extends ESTestCase {
} }
request.putRemoteCluster(remoteClusterPermissions); request.putRemoteCluster(remoteClusterPermissions);
assertValidationError("Invalid remote_cluster permissions found. Please remove the following: [", request); assertValidationError("Invalid remote_cluster permissions found. Please remove the following: [", request);
assertValidationError("Only [monitor_enrich] are allowed", request); assertValidationError("Only [monitor_enrich, monitor_stats] are allowed", request);
} }
public void testValidationErrorWithEmptyClustersInRemoteIndices() { public void testValidationErrorWithEmptyClustersInRemoteIndices() {

View file

@ -21,6 +21,7 @@ import org.elasticsearch.core.Tuple;
import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.TransportVersionUtils; import org.elasticsearch.test.TransportVersionUtils;
import org.elasticsearch.transport.RemoteClusterPortSettings; import org.elasticsearch.transport.RemoteClusterPortSettings;
import org.elasticsearch.xcontent.ObjectPath;
import org.elasticsearch.xcontent.ToXContent; import org.elasticsearch.xcontent.ToXContent;
import org.elasticsearch.xcontent.XContentBuilder; import org.elasticsearch.xcontent.XContentBuilder;
import org.elasticsearch.xcontent.XContentType; import org.elasticsearch.xcontent.XContentType;
@ -32,6 +33,7 @@ import org.elasticsearch.xpack.core.security.authc.support.AuthenticationContext
import org.elasticsearch.xpack.core.security.authz.RoleDescriptorsIntersection; import org.elasticsearch.xpack.core.security.authz.RoleDescriptorsIntersection;
import org.elasticsearch.xpack.core.security.user.AnonymousUser; import org.elasticsearch.xpack.core.security.user.AnonymousUser;
import org.elasticsearch.xpack.core.security.user.User; import org.elasticsearch.xpack.core.security.user.User;
import org.hamcrest.Matchers;
import java.io.IOException; import java.io.IOException;
import java.util.Arrays; import java.util.Arrays;
@ -42,6 +44,8 @@ import java.util.function.Consumer;
import java.util.stream.Collectors; import java.util.stream.Collectors;
import static java.util.Map.entry; import static java.util.Map.entry;
import static org.elasticsearch.TransportVersions.ROLE_MONITOR_STATS;
import static org.elasticsearch.xpack.core.security.authc.Authentication.VERSION_API_KEY_ROLES_AS_BYTES;
import static org.elasticsearch.xpack.core.security.authc.AuthenticationTestHelper.randomCrossClusterAccessSubjectInfo; import static org.elasticsearch.xpack.core.security.authc.AuthenticationTestHelper.randomCrossClusterAccessSubjectInfo;
import static org.elasticsearch.xpack.core.security.authc.CrossClusterAccessSubjectInfoTests.randomRoleDescriptorsIntersection; import static org.elasticsearch.xpack.core.security.authc.CrossClusterAccessSubjectInfoTests.randomRoleDescriptorsIntersection;
import static org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions.ROLE_REMOTE_CLUSTER_PRIVS; import static org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions.ROLE_REMOTE_CLUSTER_PRIVS;
@ -1070,7 +1074,7 @@ public class AuthenticationTests extends ESTestCase {
// pick a version before that of the authentication instance to force a rewrite // pick a version before that of the authentication instance to force a rewrite
final TransportVersion olderVersion = TransportVersionUtils.randomVersionBetween( final TransportVersion olderVersion = TransportVersionUtils.randomVersionBetween(
random(), random(),
Authentication.VERSION_API_KEY_ROLES_AS_BYTES, VERSION_API_KEY_ROLES_AS_BYTES,
TransportVersionUtils.getPreviousVersion(original.getEffectiveSubject().getTransportVersion()) TransportVersionUtils.getPreviousVersion(original.getEffectiveSubject().getTransportVersion())
); );
@ -1115,7 +1119,7 @@ public class AuthenticationTests extends ESTestCase {
// pick a version before that of the authentication instance to force a rewrite // pick a version before that of the authentication instance to force a rewrite
final TransportVersion olderVersion = TransportVersionUtils.randomVersionBetween( final TransportVersion olderVersion = TransportVersionUtils.randomVersionBetween(
random(), random(),
Authentication.VERSION_API_KEY_ROLES_AS_BYTES, VERSION_API_KEY_ROLES_AS_BYTES,
TransportVersionUtils.getPreviousVersion(original.getEffectiveSubject().getTransportVersion()) TransportVersionUtils.getPreviousVersion(original.getEffectiveSubject().getTransportVersion())
); );
@ -1135,6 +1139,84 @@ public class AuthenticationTests extends ESTestCase {
); );
} }
public void testMaybeRewriteMetadataForApiKeyRoleDescriptorsWithRemoteClusterRemovePrivs() throws IOException {
final String apiKeyId = randomAlphaOfLengthBetween(1, 10);
final String apiKeyName = randomAlphaOfLengthBetween(1, 10);
Map<String, Object> metadata = Map.ofEntries(
entry(AuthenticationField.API_KEY_ID_KEY, apiKeyId),
entry(AuthenticationField.API_KEY_NAME_KEY, apiKeyName),
entry(AuthenticationField.API_KEY_ROLE_DESCRIPTORS_KEY, new BytesArray("""
{"base_role":{"cluster":["all"],
"remote_cluster":[{"privileges":["monitor_enrich", "monitor_stats"],"clusters":["*"]}]
}}""")),
entry(AuthenticationField.API_KEY_LIMITED_ROLE_DESCRIPTORS_KEY, new BytesArray("""
{"limited_by_role":{"cluster":["*"],
"remote_cluster":[{"privileges":["monitor_enrich", "monitor_stats"],"clusters":["*"]}]
}}"""))
);
final Authentication with2privs = AuthenticationTestHelper.builder()
.apiKey()
.metadata(metadata)
.transportVersion(TransportVersion.current())
.build();
// pick a version that will only remove one of the two privileges
final TransportVersion olderVersion = TransportVersionUtils.randomVersionBetween(
random(),
ROLE_REMOTE_CLUSTER_PRIVS,
TransportVersionUtils.getPreviousVersion(ROLE_MONITOR_STATS)
);
Map<String, Object> rewrittenMetadata = with2privs.maybeRewriteForOlderVersion(olderVersion).getEffectiveSubject().getMetadata();
assertThat(rewrittenMetadata.keySet(), equalTo(with2privs.getAuthenticatingSubject().getMetadata().keySet()));
// only one of the two privileges is left after the rewrite
BytesReference baseRoleBytes = (BytesReference) rewrittenMetadata.get(AuthenticationField.API_KEY_ROLE_DESCRIPTORS_KEY);
Map<String, Object> baseRoleAsMap = XContentHelper.convertToMap(baseRoleBytes, false, XContentType.JSON).v2();
assertThat(ObjectPath.eval("base_role.remote_cluster.0.privileges", baseRoleAsMap), Matchers.contains("monitor_enrich"));
assertThat(ObjectPath.eval("base_role.remote_cluster.0.clusters", baseRoleAsMap), notNullValue());
BytesReference limitedByRoleBytes = (BytesReference) rewrittenMetadata.get(
AuthenticationField.API_KEY_LIMITED_ROLE_DESCRIPTORS_KEY
);
Map<String, Object> limitedByRoleAsMap = XContentHelper.convertToMap(limitedByRoleBytes, false, XContentType.JSON).v2();
assertThat(ObjectPath.eval("limited_by_role.remote_cluster.0.privileges", limitedByRoleAsMap), Matchers.contains("monitor_enrich"));
assertThat(ObjectPath.eval("limited_by_role.remote_cluster.0.clusters", limitedByRoleAsMap), notNullValue());
// same version, but it removes the only defined privilege
metadata = Map.ofEntries(
entry(AuthenticationField.API_KEY_ID_KEY, apiKeyId),
entry(AuthenticationField.API_KEY_NAME_KEY, apiKeyName),
entry(AuthenticationField.API_KEY_ROLE_DESCRIPTORS_KEY, new BytesArray("""
{"base_role":{"cluster":["all"],
"remote_cluster":[{"privileges":["monitor_stats"],"clusters":["*"]}]
}}""")),
entry(AuthenticationField.API_KEY_LIMITED_ROLE_DESCRIPTORS_KEY, new BytesArray("""
{"limited_by_role":{"cluster":["*"],
"remote_cluster":[{"privileges":["monitor_stats"],"clusters":["*"]}]
}}"""))
);
final Authentication with1priv = AuthenticationTestHelper.builder()
.apiKey()
.metadata(metadata)
.transportVersion(TransportVersion.current())
.build();
rewrittenMetadata = with1priv.maybeRewriteForOlderVersion(olderVersion).getEffectiveSubject().getMetadata();
assertThat(rewrittenMetadata.keySet(), equalTo(with1priv.getAuthenticatingSubject().getMetadata().keySet()));
// the one privilege is removed after the rewrite, which removes the full "remote_cluster" object
baseRoleBytes = (BytesReference) rewrittenMetadata.get(AuthenticationField.API_KEY_ROLE_DESCRIPTORS_KEY);
baseRoleAsMap = XContentHelper.convertToMap(baseRoleBytes, false, XContentType.JSON).v2();
assertThat(ObjectPath.eval("base_role.remote_cluster", baseRoleAsMap), nullValue());
assertThat(ObjectPath.eval("base_role.cluster", baseRoleAsMap), notNullValue());
limitedByRoleBytes = (BytesReference) rewrittenMetadata.get(AuthenticationField.API_KEY_LIMITED_ROLE_DESCRIPTORS_KEY);
limitedByRoleAsMap = XContentHelper.convertToMap(limitedByRoleBytes, false, XContentType.JSON).v2();
assertThat(ObjectPath.eval("limited_by_role.remote_cluster", limitedByRoleAsMap), nullValue());
assertThat(ObjectPath.eval("limited_by_role.cluster", limitedByRoleAsMap), notNullValue());
}
public void testMaybeRemoveRemoteIndicesFromRoleDescriptors() { public void testMaybeRemoveRemoteIndicesFromRoleDescriptors() {
final boolean includeClusterPrivileges = randomBoolean(); final boolean includeClusterPrivileges = randomBoolean();
final BytesReference roleWithoutRemoteIndices = new BytesArray(Strings.format(""" final BytesReference roleWithoutRemoteIndices = new BytesArray(Strings.format("""

View file

@ -542,6 +542,34 @@ public class RoleDescriptorTests extends ESTestCase {
() -> RoleDescriptor.parserBuilder().build().parse("test", new BytesArray(q4), XContentType.JSON) () -> RoleDescriptor.parserBuilder().build().parse("test", new BytesArray(q4), XContentType.JSON)
); );
assertThat(illegalArgumentException.getMessage(), containsString("remote cluster groups must not be null or empty")); assertThat(illegalArgumentException.getMessage(), containsString("remote cluster groups must not be null or empty"));
// one invalid privilege
String q5 = """
{
"remote_cluster": [
{
"privileges": [
"monitor_stats", "read_pipeline"
],
"clusters": [
"*"
]
}
]
}""";
ElasticsearchParseException parseException = expectThrows(
ElasticsearchParseException.class,
() -> RoleDescriptor.parserBuilder().build().parse("test", new BytesArray(q5), XContentType.JSON)
);
assertThat(
parseException.getMessage(),
containsString(
"failed to parse remote_cluster for role [test]. "
+ "[monitor_enrich, monitor_stats] are the only values allowed for [privileges] within [remote_cluster]. "
+ "Found [monitor_stats, read_pipeline]"
)
);
} }
public void testParsingFieldPermissionsUsesCache() throws IOException { public void testParsingFieldPermissionsUsesCache() throws IOException {

View file

@ -16,6 +16,7 @@ import org.elasticsearch.xcontent.XContentParser;
import java.io.IOException; import java.io.IOException;
import java.util.Arrays; import java.util.Arrays;
import java.util.Locale; import java.util.Locale;
import java.util.Map;
import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.containsString;
@ -90,7 +91,7 @@ public class RemoteClusterPermissionGroupTests extends AbstractXContentSerializi
); );
IllegalArgumentException e = expectThrows(IllegalArgumentException.class, invalidClusterAlias); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, invalidClusterAlias);
assertEquals("remote_cluster clusters aliases must contain valid non-empty, non-null values", e.getMessage()); assertThat(e.getMessage(), containsString("remote_cluster clusters aliases must contain valid non-empty, non-null values"));
final ThrowingRunnable invalidPermission = randomFrom( final ThrowingRunnable invalidPermission = randomFrom(
() -> new RemoteClusterPermissionGroup(new String[] { null }, new String[] { "bar" }), () -> new RemoteClusterPermissionGroup(new String[] { null }, new String[] { "bar" }),
@ -100,7 +101,17 @@ public class RemoteClusterPermissionGroupTests extends AbstractXContentSerializi
); );
IllegalArgumentException e2 = expectThrows(IllegalArgumentException.class, invalidPermission); IllegalArgumentException e2 = expectThrows(IllegalArgumentException.class, invalidPermission);
assertEquals("remote_cluster privileges must contain valid non-empty, non-null values", e2.getMessage()); assertThat(e2.getMessage(), containsString("remote_cluster privileges must contain valid non-empty, non-null values"));
}
public void testToMap() {
String[] privileges = generateRandomStringArray(5, 5, false, false);
String[] clusters = generateRandomStringArray(5, 5, false, false);
RemoteClusterPermissionGroup remoteClusterPermissionGroup = new RemoteClusterPermissionGroup(privileges, clusters);
assertEquals(
Map.of("privileges", Arrays.asList(privileges), "clusters", Arrays.asList(clusters)),
remoteClusterPermissionGroup.toMap()
);
} }
@Override @Override

View file

@ -15,6 +15,8 @@ import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.test.AbstractXContentSerializingTestCase; import org.elasticsearch.test.AbstractXContentSerializingTestCase;
import org.elasticsearch.test.TransportVersionUtils; import org.elasticsearch.test.TransportVersionUtils;
import org.elasticsearch.xcontent.XContentParser; import org.elasticsearch.xcontent.XContentParser;
import org.elasticsearch.xpack.core.security.authz.RoleDescriptor;
import org.elasticsearch.xpack.core.security.xcontent.XContentUtils;
import org.junit.Before; import org.junit.Before;
import java.io.IOException; import java.io.IOException;
@ -27,8 +29,11 @@ import java.util.List;
import java.util.Locale; import java.util.Locale;
import java.util.Map; import java.util.Map;
import java.util.Set; import java.util.Set;
import java.util.stream.Collectors;
import static org.elasticsearch.TransportVersions.ROLE_MONITOR_STATS;
import static org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions.ROLE_REMOTE_CLUSTER_PRIVS; import static org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions.ROLE_REMOTE_CLUSTER_PRIVS;
import static org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions.lastTransportVersionPermission;
import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.equalTo;
@ -85,13 +90,13 @@ public class RemoteClusterPermissionsTests extends AbstractXContentSerializingTe
for (int i = 0; i < generateRandomGroups(true).size(); i++) { for (int i = 0; i < generateRandomGroups(true).size(); i++) {
String[] clusters = groupClusters.get(i); String[] clusters = groupClusters.get(i);
for (String cluster : clusters) { for (String cluster : clusters) {
assertTrue(remoteClusterPermission.hasPrivileges(cluster)); assertTrue(remoteClusterPermission.hasAnyPrivileges(cluster));
assertFalse(remoteClusterPermission.hasPrivileges(randomAlphaOfLength(20))); assertFalse(remoteClusterPermission.hasAnyPrivileges(randomAlphaOfLength(20)));
} }
} }
} }
public void testPrivilegeNames() { public void testCollapseAndRemoveUnsupportedPrivileges() {
Map<TransportVersion, Set<String>> original = RemoteClusterPermissions.allowedRemoteClusterPermissions; Map<TransportVersion, Set<String>> original = RemoteClusterPermissions.allowedRemoteClusterPermissions;
try { try {
// create random groups with random privileges for random clusters // create random groups with random privileges for random clusters
@ -108,7 +113,7 @@ public class RemoteClusterPermissionsTests extends AbstractXContentSerializingTe
String[] privileges = groupPrivileges.get(i); String[] privileges = groupPrivileges.get(i);
String[] clusters = groupClusters.get(i); String[] clusters = groupClusters.get(i);
for (String cluster : clusters) { for (String cluster : clusters) {
String[] found = remoteClusterPermission.privilegeNames(cluster, TransportVersion.current()); String[] found = remoteClusterPermission.collapseAndRemoveUnsupportedPrivileges(cluster, TransportVersion.current());
Arrays.sort(found); Arrays.sort(found);
// ensure all lowercase since the privilege names are case insensitive and the method will result in lowercase // ensure all lowercase since the privilege names are case insensitive and the method will result in lowercase
for (int j = 0; j < privileges.length; j++) { for (int j = 0; j < privileges.length; j++) {
@ -126,13 +131,14 @@ public class RemoteClusterPermissionsTests extends AbstractXContentSerializingTe
// create random groups with random privileges for random clusters // create random groups with random privileges for random clusters
List<RemoteClusterPermissionGroup> randomGroups = generateRandomGroups(true); List<RemoteClusterPermissionGroup> randomGroups = generateRandomGroups(true);
// replace a random value with one that is allowed // replace a random value with one that is allowed
groupPrivileges.get(0)[0] = "monitor_enrich"; String singleValidPrivilege = randomFrom(RemoteClusterPermissions.allowedRemoteClusterPermissions.get(TransportVersion.current()));
groupPrivileges.get(0)[0] = singleValidPrivilege;
for (int i = 0; i < randomGroups.size(); i++) { for (int i = 0; i < randomGroups.size(); i++) {
String[] privileges = groupPrivileges.get(i); String[] privileges = groupPrivileges.get(i);
String[] clusters = groupClusters.get(i); String[] clusters = groupClusters.get(i);
for (String cluster : clusters) { for (String cluster : clusters) {
String[] found = remoteClusterPermission.privilegeNames(cluster, TransportVersion.current()); String[] found = remoteClusterPermission.collapseAndRemoveUnsupportedPrivileges(cluster, TransportVersion.current());
Arrays.sort(found); Arrays.sort(found);
// ensure all lowercase since the privilege names are case insensitive and the method will result in lowercase // ensure all lowercase since the privilege names are case insensitive and the method will result in lowercase
for (int j = 0; j < privileges.length; j++) { for (int j = 0; j < privileges.length; j++) {
@ -149,7 +155,7 @@ public class RemoteClusterPermissionsTests extends AbstractXContentSerializingTe
assertFalse(Arrays.equals(privileges, found)); assertFalse(Arrays.equals(privileges, found));
if (i == 0) { if (i == 0) {
// ensure that for the current version we only find the valid "monitor_enrich" // ensure that for the current version we only find the valid "monitor_enrich"
assertThat(Set.of(found), equalTo(Set.of("monitor_enrich"))); assertThat(Set.of(found), equalTo(Set.of(singleValidPrivilege)));
} else { } else {
// all other groups should be found to not have any privileges // all other groups should be found to not have any privileges
assertTrue(found.length == 0); assertTrue(found.length == 0);
@ -159,21 +165,26 @@ public class RemoteClusterPermissionsTests extends AbstractXContentSerializingTe
} }
} }
public void testMonitorEnrichPerVersion() { public void testPermissionsPerVersion() {
// test monitor_enrich before, after and on monitor enrich version testPermissionPerVersion("monitor_enrich", ROLE_REMOTE_CLUSTER_PRIVS);
String[] privileges = randomBoolean() ? new String[] { "monitor_enrich" } : new String[] { "monitor_enrich", "foo", "bar" }; testPermissionPerVersion("monitor_stats", ROLE_MONITOR_STATS);
}
private void testPermissionPerVersion(String permission, TransportVersion version) {
// test permission before, after and on the version
String[] privileges = randomBoolean() ? new String[] { permission } : new String[] { permission, "foo", "bar" };
String[] before = new RemoteClusterPermissions().addGroup(new RemoteClusterPermissionGroup(privileges, new String[] { "*" })) String[] before = new RemoteClusterPermissions().addGroup(new RemoteClusterPermissionGroup(privileges, new String[] { "*" }))
.privilegeNames("*", TransportVersionUtils.getPreviousVersion(ROLE_REMOTE_CLUSTER_PRIVS)); .collapseAndRemoveUnsupportedPrivileges("*", TransportVersionUtils.getPreviousVersion(version));
// empty set since monitor_enrich is not allowed in the before version // empty set since the permission is not allowed in the before version
assertThat(Set.of(before), equalTo(Collections.emptySet())); assertThat(Set.of(before), equalTo(Collections.emptySet()));
String[] on = new RemoteClusterPermissions().addGroup(new RemoteClusterPermissionGroup(privileges, new String[] { "*" })) String[] on = new RemoteClusterPermissions().addGroup(new RemoteClusterPermissionGroup(privileges, new String[] { "*" }))
.privilegeNames("*", ROLE_REMOTE_CLUSTER_PRIVS); .collapseAndRemoveUnsupportedPrivileges("*", version);
// only monitor_enrich since the other values are not allowed // the permission is found at the provided version
assertThat(Set.of(on), equalTo(Set.of("monitor_enrich"))); assertThat(Set.of(on), equalTo(Set.of(permission)));
String[] after = new RemoteClusterPermissions().addGroup(new RemoteClusterPermissionGroup(privileges, new String[] { "*" })) String[] after = new RemoteClusterPermissions().addGroup(new RemoteClusterPermissionGroup(privileges, new String[] { "*" }))
.privilegeNames("*", TransportVersion.current()); .collapseAndRemoveUnsupportedPrivileges("*", TransportVersion.current());
// only monitor_enrich since the other values are not allowed // the current version (on or after the introducing version) still has the permission
assertThat(Set.of(after), equalTo(Set.of("monitor_enrich"))); assertThat(Set.of(after), equalTo(Set.of(permission)));
} }
public void testValidate() { public void testValidate() {
@ -181,12 +192,70 @@ public class RemoteClusterPermissionsTests extends AbstractXContentSerializingTe
// random values not allowed // random values not allowed
IllegalArgumentException error = expectThrows(IllegalArgumentException.class, () -> remoteClusterPermission.validate()); IllegalArgumentException error = expectThrows(IllegalArgumentException.class, () -> remoteClusterPermission.validate());
assertTrue(error.getMessage().contains("Invalid remote_cluster permissions found. Please remove the following:")); assertTrue(error.getMessage().contains("Invalid remote_cluster permissions found. Please remove the following:"));
assertTrue(error.getMessage().contains("Only [monitor_enrich] are allowed")); assertTrue(error.getMessage().contains("Only [monitor_enrich, monitor_stats] are allowed"));
new RemoteClusterPermissions().addGroup(new RemoteClusterPermissionGroup(new String[] { "monitor_enrich" }, new String[] { "*" })) new RemoteClusterPermissions().addGroup(new RemoteClusterPermissionGroup(new String[] { "monitor_enrich" }, new String[] { "*" }))
.validate(); // no error .validate(); // no error
} }
public void testToMap() {
RemoteClusterPermissions remoteClusterPermissions = new RemoteClusterPermissions();
List<RemoteClusterPermissionGroup> groups = generateRandomGroups(randomBoolean());
for (int i = 0; i < groups.size(); i++) {
remoteClusterPermissions.addGroup(groups.get(i));
}
List<Map<String, List<String>>> asAsMap = remoteClusterPermissions.toMap();
RemoteClusterPermissions remoteClusterPermissionsAsMap = new RemoteClusterPermissions(asAsMap);
assertEquals(remoteClusterPermissions, remoteClusterPermissionsAsMap);
}
public void testRemoveUnsupportedPrivileges() {
RemoteClusterPermissions remoteClusterPermissions = new RemoteClusterPermissions();
RemoteClusterPermissionGroup group = new RemoteClusterPermissionGroup(new String[] { "monitor_enrich" }, new String[] { "*" });
remoteClusterPermissions.addGroup(group);
// this privilege is allowed by versions, so nothing should be removed
assertEquals(remoteClusterPermissions, remoteClusterPermissions.removeUnsupportedPrivileges(ROLE_REMOTE_CLUSTER_PRIVS));
assertEquals(remoteClusterPermissions, remoteClusterPermissions.removeUnsupportedPrivileges(ROLE_MONITOR_STATS));
remoteClusterPermissions = new RemoteClusterPermissions();
if (randomBoolean()) {
group = new RemoteClusterPermissionGroup(new String[] { "monitor_stats" }, new String[] { "*" });
} else {
// if somehow duplicates end up here, they should not influence removal
group = new RemoteClusterPermissionGroup(new String[] { "monitor_stats", "monitor_stats" }, new String[] { "*" });
}
remoteClusterPermissions.addGroup(group);
// this single newer privilege is not allowed in the older version, so it should result in an object with no groups
assertNotEquals(remoteClusterPermissions, remoteClusterPermissions.removeUnsupportedPrivileges(ROLE_REMOTE_CLUSTER_PRIVS));
assertFalse(remoteClusterPermissions.removeUnsupportedPrivileges(ROLE_REMOTE_CLUSTER_PRIVS).hasAnyPrivileges());
assertEquals(remoteClusterPermissions, remoteClusterPermissions.removeUnsupportedPrivileges(ROLE_MONITOR_STATS));
int groupCount = randomIntBetween(1, 5);
remoteClusterPermissions = new RemoteClusterPermissions();
group = new RemoteClusterPermissionGroup(new String[] { "monitor_enrich", "monitor_stats" }, new String[] { "*" });
for (int i = 0; i < groupCount; i++) {
remoteClusterPermissions.addGroup(group);
}
// one of the newer privilege is not allowed in the older version, so it should result in a group with only the allowed privilege
RemoteClusterPermissions expected = new RemoteClusterPermissions();
for (int i = 0; i < groupCount; i++) {
expected.addGroup(new RemoteClusterPermissionGroup(new String[] { "monitor_enrich" }, new String[] { "*" }));
}
assertEquals(expected, remoteClusterPermissions.removeUnsupportedPrivileges(ROLE_REMOTE_CLUSTER_PRIVS));
// both privileges allowed in the newer version, so it should not change the permission
assertEquals(remoteClusterPermissions, remoteClusterPermissions.removeUnsupportedPrivileges(ROLE_MONITOR_STATS));
}
public void testShortCircuitRemoveUnsupportedPrivileges() {
RemoteClusterPermissions remoteClusterPermissions = new RemoteClusterPermissions();
assertSame(remoteClusterPermissions, remoteClusterPermissions.removeUnsupportedPrivileges(TransportVersion.current()));
assertSame(remoteClusterPermissions, remoteClusterPermissions.removeUnsupportedPrivileges(lastTransportVersionPermission));
assertNotSame(
remoteClusterPermissions,
remoteClusterPermissions.removeUnsupportedPrivileges(TransportVersionUtils.getPreviousVersion(lastTransportVersionPermission))
);
}
private List<RemoteClusterPermissionGroup> generateRandomGroups(boolean fuzzyCluster) {
clean();
List<RemoteClusterPermissionGroup> groups = new ArrayList<>();
@@ -216,22 +285,48 @@ public class RemoteClusterPermissionsTests extends AbstractXContentSerializingTe
@Override
protected RemoteClusterPermissions createTestInstance() {
Set<String> all = RemoteClusterPermissions.allowedRemoteClusterPermissions.values()
.stream()
.flatMap(Set::stream)
.collect(Collectors.toSet());
List<String> randomPermission = randomList(1, all.size(), () -> randomFrom(all));
return new RemoteClusterPermissions().addGroup(
-new RemoteClusterPermissionGroup(new String[] { "monitor_enrich" }, new String[] { "*" })
+new RemoteClusterPermissionGroup(randomPermission.toArray(new String[0]), new String[] { "*" })
);
}
@Override
protected RemoteClusterPermissions mutateInstance(RemoteClusterPermissions instance) throws IOException {
return new RemoteClusterPermissions().addGroup(
-new RemoteClusterPermissionGroup(new String[] { "monitor_enrich" }, new String[] { "*" })
+new RemoteClusterPermissionGroup(new String[] { "monitor_enrich", "monitor_stats" }, new String[] { "*" })
).addGroup(new RemoteClusterPermissionGroup(new String[] { "foobar" }, new String[] { "*" }));
}
@Override
protected RemoteClusterPermissions doParseInstance(XContentParser parser) throws IOException {
-// fromXContent/parsing isn't supported since we still do old school manual parsing of the role descriptor
+// fromXContent/object parsing isn't supported since we still do old school manual parsing of the role descriptor
-return createTestInstance();
+// so this test is silly because it only tests we know how to manually parse the test instance in this test
// this is needed since we want the other parts from the AbstractXContentSerializingTestCase suite
RemoteClusterPermissions remoteClusterPermissions = new RemoteClusterPermissions();
String[] privileges = null;
String[] clusters = null;
XContentParser.Token token;
String currentFieldName = null;
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.START_OBJECT) {
continue;
}
if (token == XContentParser.Token.FIELD_NAME) {
currentFieldName = parser.currentName();
} else if (RoleDescriptor.Fields.PRIVILEGES.match(currentFieldName, parser.getDeprecationHandler())) {
privileges = XContentUtils.readStringArray(parser, false);
} else if (RoleDescriptor.Fields.CLUSTERS.match(currentFieldName, parser.getDeprecationHandler())) {
clusters = XContentUtils.readStringArray(parser, false);
}
}
remoteClusterPermissions.addGroup(new RemoteClusterPermissionGroup(privileges, clusters));
return remoteClusterPermissions;
}
@Override

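For orientation, the new tests above pin down a contract for version-gated privilege filtering: privileges that the remote cluster's transport version does not support are dropped, groups that end up empty disappear, and the very same instance comes back when nothing needs filtering (that is what testShortCircuitRemoveUnsupportedPrivileges checks with assertSame). The sketch below illustrates only that contract under simplified assumptions; the Group record, the plain long version numbers, and the version-to-privileges map are stand-ins invented for this example and are not the RemoteClusterPermissions implementation shipped in this commit.

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch only: names and signatures here are assumptions, not the production API.
final class VersionGatedPrivilegeFilter {

    // Simplified stand-in for RemoteClusterPermissionGroup: privilege names plus cluster patterns.
    record Group(Set<String> privileges, Set<String> clusterPatterns) {}

    // Drops privileges not supported at or below remoteVersion, removes groups that become empty,
    // and hands back the original list untouched when nothing was filtered.
    static List<Group> removeUnsupported(List<Group> groups, Map<Long, Set<String>> allowedByVersion, long remoteVersion) {
        // Collect every privilege name that a cluster at remoteVersion understands.
        Set<String> supported = new LinkedHashSet<>();
        for (Map.Entry<Long, Set<String>> entry : allowedByVersion.entrySet()) {
            if (entry.getKey() <= remoteVersion) {
                supported.addAll(entry.getValue());
            }
        }
        boolean changed = false;
        List<Group> filtered = new ArrayList<>();
        for (Group group : groups) {
            Set<String> kept = new LinkedHashSet<>(group.privileges());
            kept.retainAll(supported);
            if (kept.size() != group.privileges().size()) {
                changed = true;
            }
            if (kept.isEmpty() == false) {
                filtered.add(new Group(kept, group.clusterPatterns()));
            }
        }
        return changed ? filtered : groups; // same instance back when every privilege was already supported
    }
}

Returning the original instance on the no-op path is what lets callers (and the assertSame assertions above) cheaply detect that no filtering took place.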