TSDB: Support GET and DELETE and doc versioning (#82633)

This adds support for GET and DELETE, the `ids` query, and
Elasticsearch's standard document versioning to TSDB. So you can do
things like:
```
POST /tsdb_idx/_doc?filter_path=_id
{
  "@timestamp": "2021-12-29T19:25:05Z", "uid": "adsfadf", "v": 1.2
}
```

That'll return `{"_id" : "BsYQJjqS3TnsUlF3aDKnB34BAAA"}`, which you can
turn around and fetch with
```
GET /tsdb_idx/_doc/BsYQJjqS3TnsUlF3aDKnB34BAAA
```
just like any other document in any other index. You can delete it too!
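
For example, using the id returned above:
```
DELETE /tsdb_idx/_doc/BsYQJjqS3TnsUlF3aDKnB34BAAA
```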

The ID comes from the dimensions and the `@timestamp`. So you can
overwrite the document:
```
POST /tsdb_idx/_bulk
{"index": {}}
{"@timestamp": "2021-12-29T19:25:05Z", "uid": "adsfadf", "v": 1.2}
```

Or you can write only if it doesn't already exist:
```
POST /tsdb_idx/_bulk
{"create": {}}
{"@timestamp": "2021-12-29T19:25:05Z", "uid": "adsfadf", "v": 1.2}
```

This works by generating an id from the dimensions and the `@timestamp`
when parsing the document. The id looks like:
* 4 bytes of the routing hash, calculated from the `routing_path` fields
* 8 bytes of hash from the dimensions
* 8 bytes of timestamp
All of that is base 64 encoded so that `Uid` can chew on it fairly
efficiently.
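
For illustration, here's a sketch of that layout in plain Java. It is
not the actual implementation - the hash arguments stand in for values
Elasticsearch computes from the `routing_path` fields and the
dimensions - but 20 bytes encodes to the 27 character ids you see above:
```
import java.nio.ByteBuffer;
import java.util.Base64;

class TsdbIdSketch {
    // 4 bytes of routing hash + 8 bytes of dimensions hash + 8 bytes of
    // timestamp. The hash arguments are placeholders for values computed
    // elsewhere; this is an illustration, not Elasticsearch code.
    static String encode(int routingHash, long dimensionsHash, long timestampMillis) {
        ByteBuffer buf = ByteBuffer.allocate(4 + 8 + 8);
        buf.putInt(routingHash);
        buf.putLong(dimensionsHash);
        buf.putLong(Long.reverseBytes(timestampMillis)); // little endian, see point 9 below
        return Base64.getUrlEncoder().withoutPadding().encodeToString(buf.array());
    }
}
```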

When it comes time to fetch or delete documents we base 64 decode the id
and grab the routing from the first four bytes. We use that hash to pick
the shard. Then we use the entire ID to perform the fetch or delete.
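
Continuing the sketch (again illustrative, not the real code - in
particular the real shard math is more involved than a modulo):
```
import java.nio.ByteBuffer;
import java.util.Base64;

class TsdbIdDecodeSketch {
    // Base 64 decode the _id and read the routing hash back out of the
    // first four bytes. Fetch and delete use that to pick the shard, then
    // look the document up by the entire _id.
    static int shardFor(String id, int shardCount) {
        byte[] bytes = Base64.getUrlDecoder().decode(id);
        int routingHash = ByteBuffer.wrap(bytes, 0, 4).getInt();
        return Math.floorMod(routingHash, shardCount); // simplified stand-in
    }
}
```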

We don't implement update actions because we haven't written the
infrastructure to make sure the dimensions don't change. It's possible
to do, but it feels like more than we need right now.

There are a *ton* of compromises with this. The long-term sad thing is
that it locks us into *indexing* the id of the sample. It'll index
fairly efficiently because each time series will have the same first
eight bytes. It's also possible we'd share many of the first few bytes
of the timestamp as well. In our tsdb rally track this costs 8.75 bytes
per document. It's substantial, but not overwhelming.

In the short term there are lots of problems that I'd like to save for
a follow-up change:
1. ~~We still generate the automatic `_id` for the document but we don't use
   it. We should stop generating it.~~ Included in this PR based on review comments.
2. We generate the time series `_id` on each shard and when replaying
   the translog. It'd be the good kind of paranoid to generate it once
   on the primary and then keep it forever.
3. We have to encode the `_id` as a string to pass it around
   Elasticsearch internally. And Elasticsearch assumes that when an id
   is loaded it is always stored as bytes encoded by `Uid` - which *does*
   have a nice encoding for base 64 bytes. But this whole thing requires
   us to make the bytes, base 64 encode them, and then hand them back to
   `Uid` to base 64 decode them into bytes. It's a bit hacky. And, it's
   a small thing, but if the first byte of the routing hash encodes to
   254 or 255 then `Uid` spends an extra byte to encode it. One that'll
   always be a common prefix for tsdb indices, but still, it hurts my
   heart. It's just hard to fix.
4. We store the `_id` in Lucene stored fields for tsdb indices. Now
   that we're building it from the dimensions and the `@timestamp` we
   really don't *need* to store it. We could recalculate it when fetching
   documents. In the tsdb rally track this'd save us 6 bytes per document
   at the cost of marginally slower fetches. Which is *fine*.
5. There are several error messages that try to use `_id` right now
   during parsing but the `_id` isn't available until after the parsing
   is complete. And, if parsing fails, it may not be possible to know
   the id at all. All of these error messages will have to change,
   at least in tsdb mode.
6. ~~If you specify an `_id` on the request right now we just overwrite
   it. We should send you an error.~~ Included in this PR after review comments.
7. We have to entirely disable the append-only optimization that allows
   Elasticsearch to skip looking up the ids in Lucene. This *halves*
   indexing speed. It's substantial. We have to claw that optimization
   back *somehow*, with something like sliding bloom filters or relying
   on the increasing timestamps.
8. We parse the source from json to build the routing hash while
   parsing fields. We should just build it from the parsed field values.
   It looks like that'd improve indexing speed by about 20%.
9. Right now we write the `@timestamp` little endian. This is likely bad
   for the prefix-encoded inverted index, which would prefer big endian.
   Big endian might shrink the index. See the sketch after this list.
10. Improve error message on version conflict to include tsid and timestamp.
11. Improve error message when modifying dimensions or timestamp in update_by_query.
12. Make it possible to modify dimension or timestamp in reindex.
13. Test TSDB's `_id` in `RecoverySourceHandlerTests.java` and `EngineTests.java`.
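
On point 9, here's a quick standalone demonstration - plain Java, not
Elasticsearch code - of why big endian would suit a prefix-encoded
terms dictionary better:
```
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.HexFormat;

class TimestampEndianness {
    static String bytes(long v, ByteOrder order) {
        return HexFormat.of().formatHex(ByteBuffer.allocate(8).order(order).putLong(v).array());
    }

    public static void main(String[] args) {
        long t1 = 1640805905000L; // a millisecond timestamp
        long t2 = t1 + 30_000;    // thirty seconds later
        // Big endian: the two encodings share their first six bytes,
        // which prefix compression stores cheaply.
        System.out.println(bytes(t1, ByteOrder.BIG_ENDIAN));    // 0000017e07a73268
        System.out.println(bytes(t2, ByteOrder.BIG_ENDIAN));    // 0000017e07a7a798
        // Little endian: the fastest-changing byte comes first, so
        // consecutive timestamps share no prefix at all.
        System.out.println(bytes(t1, ByteOrder.LITTLE_ENDIAN)); // 6832a7077e010000
        System.out.println(bytes(t2, ByteOrder.LITTLE_ENDIAN)); // 98a7a7077e010000
    }
}
```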

I've had to make some changes as part of this that don't feel super
expected. The biggest one is changing `Engine.Result` to include the
`id`. When the `id` comes from the dimensions it is calculated by the
document parsing infrastructure, which happens in
`IndexShard#prepareIndex` and returns an `Engine.IndexResult`. To make
everything clean I made it so `id` is available on all `Engine.Result`s
and I made all of the "outer results classes" read from
`Engine.Result#id`. I'm not excited by it. But it works and it's what
we're going with.
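
A heavily condensed sketch of the shape of that change (the real
`Engine.Result` carries much more state and has `IndexResult` and
`DeleteResult` subclasses):
```
abstract class Result {
    private final String id;
    private final Exception failure;

    protected Result(String id, Exception failure) {
        this.id = id;
        this.failure = failure;
    }

    // The "outer results classes" (IndexResponse, BulkItemResponse.Failure,
    // ...) read the id from here rather than from the request, because in
    // tsdb mode the id is only known after document parsing.
    String getId() {
        return id;
    }
}
```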

I've opted to create two subclasses of `IdFieldMapper`, one for standard
indices and one for tsdb indices. This feels like the right way to
introduce the distinction, especially if we don't want tsdb to carry
around its old fielddata support. Honestly, if we *need* to aggregate on
`_id` in tsdb mode we have doc values for the `_tsid` and the
`@timestamp` - we could build doc values for `_id` on the fly. But I'm
not expecting folks will need to do this. Also! I'd like to stop storing
tsdb's `_id` field (see number 4 above) and the new subclass feels like
a good place to put that too.
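
A condensed sketch of that split. Only `ProvidedIdFieldMapper` is a real
name (it shows up in the diff below); the tsdb-side class name and the
method are illustrative:
```
abstract class IdFieldMapper {
    abstract boolean fielddataSupported();
}

class ProvidedIdFieldMapper extends IdFieldMapper {
    @Override
    boolean fielddataSupported() {
        return true; // standard indices keep the old _id fielddata support
    }
}

class TsdbIdFieldMapper extends IdFieldMapper {
    @Override
    boolean fielddataSupported() {
        // "Fielddata is not supported on [_id] field in [time_series] indices"
        return false;
    }
}
```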
Commit 37ea6a8255 (parent 2a5dccb7d3) by Nik Everett, 2022-03-10 10:05:27 -05:00, committed by GitHub.
95 changed files with 2557 additions and 997 deletions


@ -31,10 +31,10 @@ import org.elasticsearch.index.analysis.AnalyzerScope;
import org.elasticsearch.index.analysis.IndexAnalyzers;
import org.elasticsearch.index.analysis.NamedAnalyzer;
import org.elasticsearch.index.fielddata.IndexFieldDataCache;
import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.MapperRegistry;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.ParsedDocument;
import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.mapper.SourceToParse;
import org.elasticsearch.index.query.SearchExecutionContext;
import org.elasticsearch.index.search.QueryParserHelper;
@ -184,7 +184,7 @@ public class QueryParserHelperBenchmark {
similarityService,
mapperRegistry,
() -> { throw new UnsupportedOperationException(); },
new IdFieldMapper(() -> true),
new ProvidedIdFieldMapper(() -> true),
new ScriptCompiler() {
@Override
public <T> T compile(Script script, ScriptContext<T> scriptContext) {


@ -0,0 +1,5 @@
pr: 82633
summary: "TSDB: Support GET and DELETE and doc versioning"
area: TSDB
type: feature
issues: []


@ -17,7 +17,9 @@ import org.elasticsearch.test.rest.yaml.ObjectPath;
import java.io.IOException;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import static org.elasticsearch.cluster.metadata.DataStreamTestHelper.backingIndexEqualTo;
import static org.hamcrest.Matchers.aMapWithSize;
@ -301,10 +303,15 @@ public class TsdbDataStreamRestIT extends ESRestTestCase {
int numDocs = 32;
var currentTime = Instant.now();
var currentMinus30Days = currentTime.minus(30, ChronoUnit.DAYS);
Set<Instant> times = new HashSet<>();
for (int i = 0; i < numRollovers; i++) {
for (int j = 0; j < numDocs; j++) {
var indexRequest = new Request("POST", "/k8s/_doc");
var time = Instant.ofEpochMilli(randomLongBetween(currentMinus30Days.toEpochMilli(), currentTime.toEpochMilli()));
var time = randomValueOtherThanMany(
times::contains,
() -> Instant.ofEpochMilli(randomLongBetween(currentMinus30Days.toEpochMilli(), currentTime.toEpochMilli()))
);
times.add(time);
indexRequest.setJsonEntity(DOC.replace("$time", formatInstant(time)));
var response = client().performRequest(indexRequest);
assertOK(response);
@ -350,13 +357,15 @@ public class TsdbDataStreamRestIT extends ESRestTestCase {
assertThat(newIndex, backingIndexEqualTo("k8s", 6));
// Ingest documents that will land in the new tsdb backing index:
var t = currentTime;
for (int i = 0; i < numDocs; i++) {
var indexRequest = new Request("POST", "/k8s/_doc");
indexRequest.setJsonEntity(DOC.replace("$time", formatInstant(currentTime)));
indexRequest.setJsonEntity(DOC.replace("$time", formatInstant(t)));
var response = client().performRequest(indexRequest);
assertOK(response);
var responseBody = entityAsMap(response);
assertThat((String) responseBody.get("_index"), backingIndexEqualTo("k8s", 6));
t = t.plusMillis(1000);
}
// Fail if documents target older non tsdb backing index:


@ -48,8 +48,7 @@ import org.elasticsearch.geometry.Point;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.IndexService;
import org.elasticsearch.index.mapper.DateFieldMapper;
import org.elasticsearch.index.mapper.DocumentParser;
import org.elasticsearch.index.mapper.MappingLookup;
import org.elasticsearch.index.mapper.DocumentMapper;
import org.elasticsearch.index.mapper.ParsedDocument;
import org.elasticsearch.index.mapper.SourceToParse;
import org.elasticsearch.index.query.AbstractQueryBuilder;
@ -701,22 +700,22 @@ public class PainlessExecuteAction extends ActionType<PainlessExecuteAction.Resp
CheckedBiFunction<SearchExecutionContext, LeafReaderContext, Response, IOException> handler,
IndexService indexService
) throws IOException {
Analyzer defaultAnalyzer = indexService.getIndexAnalyzers().getDefaultIndexAnalyzer();
try (Directory directory = new ByteBuffersDirectory()) {
try (IndexWriter indexWriter = new IndexWriter(directory, new IndexWriterConfig(defaultAnalyzer))) {
String index = indexService.index().getName();
BytesReference document = request.contextSetup.document;
XContentType xContentType = request.contextSetup.xContentType;
SourceToParse sourceToParse = new SourceToParse("_id", document, xContentType);
MappingLookup mappingLookup = indexService.mapperService().mappingLookup();
DocumentParser documentParser = indexService.mapperService().documentParser();
DocumentMapper documentMapper = indexService.mapperService().documentMapper();
if (documentMapper == null) {
documentMapper = DocumentMapper.createEmpty(indexService.mapperService());
}
// Note that we are not doing anything with dynamic mapping updates, hence fields that are not mapped but are present
// in the sample doc are not accessible from the script through doc['field'].
// This is a problem especially for indices that have no mappings, as no fields will be accessible, neither through doc
// nor _source (if there are no mappings there are no metadata fields).
ParsedDocument parsedDocument = documentParser.parseDocument(sourceToParse, mappingLookup);
ParsedDocument parsedDocument = documentMapper.parse(sourceToParse);
indexWriter.addDocuments(parsedDocument.docs());
try (IndexReader indexReader = DirectoryReader.open(indexWriter)) {
final IndexSearcher searcher = new IndexSearcher(indexReader);


@ -70,10 +70,12 @@ public final class ParentJoinFieldMapper extends FieldMapper {
}
private static void checkIndexCompatibility(IndexSettings settings, String name) {
String indexName = settings.getIndex().getName();
if (settings.getIndexMetadata().isRoutingPartitionedIndex()) {
throw new IllegalStateException(
"cannot create join field [" + name + "] " + "for the partitioned index " + "[" + settings.getIndex().getName() + "]"
);
throw new IllegalStateException("cannot create join field [" + name + "] for the partitioned index [" + indexName + "]");
}
if (settings.getIndexMetadata().getRoutingPaths().isEmpty() == false) {
throw new IllegalStateException("cannot create join field [" + name + "] for the index [" + indexName + "] with routing_path");
}
}
@ -141,7 +143,7 @@ public final class ParentJoinFieldMapper extends FieldMapper {
}
}
public static TypeParser PARSER = new TypeParser((n, c) -> {
public static final TypeParser PARSER = new TypeParser((n, c) -> {
checkIndexCompatibility(c.getIndexSettings(), n);
return new Builder(n);
});
@ -293,7 +295,7 @@ public final class ParentJoinFieldMapper extends FieldMapper {
if (fieldType().joiner.parentTypeExists(name)) {
// Index the document as a parent
String fieldName = fieldType().joiner.childJoinField(name);
parentIdFields.get(fieldName).indexValue(context, context.sourceToParse().id());
parentIdFields.get(fieldName).indexValue(context, context.id());
}
BytesRef binaryValue = new BytesRef(name);


@ -499,12 +499,9 @@ tsdb:
- '{"@timestamp": "2021-04-28T18:50:03.142Z", "metricset": "pod", "k8s": {"pod": {"name": "cow", "uid":"1c4fc7b8-93b7-4ba8-b609-2a48af2f8e39", "ip": "10.10.55.4", "network": {"tx": 1434521831, "rx": 530575198}}}}'
- do:
catch: bad_request
delete_by_query:
index: tsdb
body:
query:
match_all: {}
- match: {failures.0.status: 400}
- match: {failures.0.cause.reason: "delete is not supported because the destination index [tsdb] is in time series mode"}
- match: {deleted: 1}


@ -1,8 +1,8 @@
---
setup:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: introduced in 8.2.0
- do:
indices.create:
@ -12,9 +12,6 @@ setup:
index:
mode: time_series
routing_path: [metricset, k8s.pod.uid]
time_series:
start_time: 2021-04-28T00:00:00Z
end_time: 2021-04-29T00:00:00Z
number_of_replicas: 0
number_of_shards: 2
mappings:
@ -56,55 +53,6 @@ setup:
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:51:03.142Z", "metricset": "pod", "k8s": {"pod": {"name": "cow", "uid":"1c4fc7b8-93b7-4ba8-b609-2a48af2f8e39", "ip": "10.10.55.4", "network": {"tx": 1434595272, "rx": 530605511}}}}'
---
teardown:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
- do:
indices.delete:
index: tsdb, standard, tsdb_new
ignore_unavailable: true
---
reindex tsdb index:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
- do:
reindex:
body:
source:
index: tsdb
dest:
index: standard
- match: {created: 4}
- match: {updated: 0}
- match: {version_conflicts: 0}
- match: {batches: 1}
- match: {failures: []}
- match: {throttled_millis: 0}
- gte: { took: 0 }
- is_false: task
- is_false: deleted
- do:
indices.refresh: {}
- do:
search:
index: standard
body:
sort: '@timestamp'
- match: {hits.total.value: 4}
- match: {hits.hits.0._source.@timestamp: 2021-04-28T18:50:03.142Z}
- match: {hits.hits.1._source.@timestamp: 2021-04-28T18:50:23.142Z}
- match: {hits.hits.2._source.@timestamp: 2021-04-28T18:50:53.142Z}
- match: {hits.hits.3._source.@timestamp: 2021-04-28T18:51:03.142Z}
- do:
indices.create:
index: tsdb_new
@ -113,9 +61,6 @@ reindex tsdb index:
index:
mode: time_series
routing_path: [metricset, k8s.pod.uid]
time_series:
start_time: 2021-04-28T00:00:00Z
end_time: 2021-04-29T00:00:00Z
number_of_replicas: 0
number_of_shards: 2
mappings:
@ -143,11 +88,117 @@ reindex tsdb index:
rx:
type: long
---
from tsdb to standard:
- skip:
version: " - 8.1.99"
reason: introduced in 8.2.0
- do:
reindex:
refresh: true
body:
source:
index: tsdb
dest:
index: standard
- match: {created: 4}
- match: {updated: 0}
- match: {version_conflicts: 0}
- match: {batches: 1}
- match: {failures: []}
- match: {throttled_millis: 0}
- gte: { took: 0 }
- is_false: task
- is_false: deleted
- do:
search:
index: standard
body:
sort: '@timestamp'
- match: {hits.total.value: 4}
- match: {hits.hits.0._source.@timestamp: 2021-04-28T18:50:03.142Z}
- match: {hits.hits.1._source.@timestamp: 2021-04-28T18:50:23.142Z}
- match: {hits.hits.2._source.@timestamp: 2021-04-28T18:50:53.142Z}
- match: {hits.hits.3._source.@timestamp: 2021-04-28T18:51:03.142Z}
---
from tsdb to tsdb:
- skip:
version: " - 8.1.99"
reason: introduced in 8.2.0
- do:
reindex:
refresh: true
body:
source:
index: tsdb
dest:
index: tsdb_new
- match: {created: 4}
- match: {updated: 0}
- match: {version_conflicts: 0}
- match: {batches: 1}
- match: {failures: []}
- match: {throttled_millis: 0}
- gte: { took: 0 }
- is_false: task
- is_false: deleted
- do:
search:
index: tsdb_new
body:
sort: '@timestamp'
aggs:
tsids:
terms:
field: _tsid
order:
_key: asc
- match: {hits.total.value: 4}
- match: {aggregations.tsids.buckets.0.key: {k8s.pod.uid: 1c4fc7b8-93b7-4ba8-b609-2a48af2f8e39, metricset: pod}}
- match: {aggregations.tsids.buckets.0.doc_count: 4}
- match: {hits.hits.0._source.@timestamp: 2021-04-28T18:50:03.142Z}
- match: {hits.hits.1._source.@timestamp: 2021-04-28T18:50:23.142Z}
- match: {hits.hits.2._source.@timestamp: 2021-04-28T18:50:53.142Z}
- match: {hits.hits.3._source.@timestamp: 2021-04-28T18:51:03.142Z}
---
from standard to tsdb:
- skip:
version: " - 8.1.99"
reason: introduced in 8.2.0
# Populate the standard index
- do:
reindex:
refresh: true
body:
source:
index: tsdb
dest:
index: standard
- match: {created: 4}
- match: {updated: 0}
- match: {version_conflicts: 0}
- match: {batches: 1}
- match: {failures: []}
- match: {throttled_millis: 0}
- gte: { took: 0 }
- is_false: task
- is_false: deleted
# Now test reindexing from it to tsdb
- do:
reindex:
refresh: true
body:
source:
index: tsdb
dest:
index: tsdb_new
- match: {created: 4}
@ -167,7 +218,7 @@ reindex tsdb index:
search:
index: tsdb_new
body:
size: 0
sort: '@timestamp'
aggs:
tsids:
terms:
@ -178,3 +229,43 @@ reindex tsdb index:
- match: {hits.total.value: 4}
- match: {aggregations.tsids.buckets.0.key: {k8s.pod.uid: 1c4fc7b8-93b7-4ba8-b609-2a48af2f8e39, metricset: pod}}
- match: {aggregations.tsids.buckets.0.doc_count: 4}
- match: {hits.hits.0._source.@timestamp: 2021-04-28T18:50:03.142Z}
- match: {hits.hits.1._source.@timestamp: 2021-04-28T18:50:23.142Z}
- match: {hits.hits.2._source.@timestamp: 2021-04-28T18:50:53.142Z}
- match: {hits.hits.3._source.@timestamp: 2021-04-28T18:51:03.142Z}
---
from tsdb to tsdb modifying timestamp:
- skip:
version: " - 8.1.99"
reason: introduced in 8.2.0
- do:
catch: bad_request # TODO make this work
reindex:
refresh: true
body:
source:
index: tsdb
dest:
index: tsdb_new
script:
source: ctx._source["@timestamp"] = ctx._source["@timestamp"].replace("-04-", "-05-")
---
from tsdb to tsdb modifying dimension:
- skip:
version: " - 8.1.99"
reason: introduced in 8.2.0
- do:
catch: bad_request # TODO make this work
reindex:
refresh: true
body:
source:
index: tsdb
dest:
index: tsdb_new
script:
source: ctx._source["metricset"] = "bubbles"


@ -357,79 +357,3 @@
id: "1"
- match: { _source: {} }
- match: { _version: 2 }
---
tsdb:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
- do:
indices.create:
index: tsdb
body:
settings:
index:
mode: time_series
routing_path: [metricset, k8s.pod.uid]
time_series:
start_time: 2021-04-28T00:00:00Z
end_time: 2021-04-29T00:00:00Z
number_of_replicas: 0
number_of_shards: 2
mappings:
properties:
"@timestamp":
type: date
metricset:
type: keyword
time_series_dimension: true
k8s:
properties:
pod:
properties:
uid:
type: keyword
time_series_dimension: true
name:
type: keyword
ip:
type: ip
network:
properties:
tx:
type: long
rx:
type: long
- do:
bulk:
refresh: true
index: tsdb
body:
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:03.142Z", "metricset": "pod", "k8s": {"pod": {"name": "cow", "uid":"1c4fc7b8-93b7-4ba8-b609-2a48af2f8e39", "ip": "10.10.55.4", "network": {"tx": 1434521831, "rx": 530575198}}}}'
- do:
update_by_query:
index: tsdb
body:
script:
lang: painless
source: ctx._source.k8s.pod.ip = "10.10.55.5"
- match: {updated: 1}
- match: {version_conflicts: 0}
- match: {batches: 1}
- match: {failures: []}
- match: {throttled_millis: 0}
- gte: { took: 0 }
- do:
indices.refresh: {}
- do:
search:
index: tsdb
- match: {hits.total.value: 1}
- match: {hits.hits.0._source.k8s.pod.ip: 10.10.55.5}


@ -0,0 +1,112 @@
setup:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
- do:
indices.create:
index: tsdb
body:
settings:
index:
mode: time_series
routing_path: [metricset, k8s.pod.uid]
time_series:
start_time: 2021-04-28T00:00:00Z
end_time: 2021-04-29T00:00:00Z
number_of_replicas: 0
number_of_shards: 2
mappings:
properties:
"@timestamp":
type: date
metricset:
type: keyword
time_series_dimension: true
k8s:
properties:
pod:
properties:
uid:
type: keyword
time_series_dimension: true
name:
type: keyword
ip:
type: ip
network:
properties:
tx:
type: long
rx:
type: long
- do:
bulk:
refresh: true
index: tsdb
body:
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:03.142Z", "metricset": "pod", "k8s": {"pod": {"name": "cow", "uid":"1c4fc7b8-93b7-4ba8-b609-2a48af2f8e39", "ip": "10.10.55.4", "network": {"tx": 1434521831, "rx": 530575198}}}}'
---
update tag field:
- do:
update_by_query:
index: tsdb
refresh: true
body:
script:
lang: painless
source: ctx._source.k8s.pod.ip = "10.10.55.5"
- match: {updated: 1}
- match: {version_conflicts: 0}
- match: {batches: 1}
- match: {failures: []}
- match: {throttled_millis: 0}
- gte: { took: 0 }
- do:
search:
index: tsdb
- match: {hits.total.value: 1}
- match: {hits.hits.0._source.k8s.pod.ip: 10.10.55.5}
---
update dimension field:
# TODO better error message
- do:
catch: bad_request
update_by_query:
index: tsdb
body:
script:
lang: painless
source: ctx._source.k8s.pod.uid = "12342134"
- match: {updated: 0}
- match: {version_conflicts: 0}
- match: {batches: 1}
- match: {throttled_millis: 0}
- gte: { took: 0 }
- match: {failures.0.cause.caused_by.reason: /_id\ must\ be\ unset\ or\ set\ to\ .+/}
---
update timestamp:
# TODO better error message
- do:
catch: bad_request
update_by_query:
index: tsdb
body:
script:
lang: painless
source: ctx._source["@timestamp"] = "2021-04-28T18:50:33.142Z"
- match: {updated: 0}
- match: {version_conflicts: 0}
- match: {batches: 1}
- match: {throttled_millis: 0}
- gte: { took: 0 }
- match: {failures.0.cause.caused_by.reason: /_id\ must\ be\ unset\ or\ set\ to\ .+/}


@ -226,7 +226,7 @@ public class FullClusterRestartIT extends AbstractFullClusterRestartTestCase {
}
public void testSearchTimeSeriesMode() throws Exception {
assumeTrue("time series index sort by _tsid introduced in 8.1.0", getOldClusterVersion().onOrAfter(Version.V_8_1_0));
assumeTrue("indexing time series indices changed in 8.2.0", getOldClusterVersion().onOrAfter(Version.V_8_2_0));
int numDocs;
if (isRunningAgainstOldCluster()) {
numDocs = createTimeSeriesModeIndex(1);
@ -268,7 +268,7 @@ public class FullClusterRestartIT extends AbstractFullClusterRestartTestCase {
}
public void testNewReplicasTimeSeriesMode() throws Exception {
assumeTrue("time series index sort by _tsid introduced in 8.1.0", getOldClusterVersion().onOrAfter(Version.V_8_1_0));
assumeTrue("indexing time series indices changed in 8.2.0", getOldClusterVersion().onOrAfter(Version.V_8_2_0));
if (isRunningAgainstOldCluster()) {
createTimeSeriesModeIndex(0);
} else {


@ -247,7 +247,7 @@ public class IndexingIT extends AbstractRollingTestCase {
}
public void testTsdb() throws IOException {
assumeTrue("sort by _tsid added in 8.1.0", UPGRADE_FROM_VERSION.onOrAfter(Version.V_8_1_0));
assumeTrue("indexing time series indices changed in 8.2.0", UPGRADE_FROM_VERSION.onOrAfter(Version.V_8_2_0));
StringBuilder bulk = new StringBuilder();
switch (CLUSTER_TYPE) {


@ -1,7 +1,7 @@
setup:
- skip:
version: " - 8.1.0"
reason: Suipport for time_series aggs was added in 8.1.0
version: " - 8.2.0"
reason: Time series indexing changed in 8.2.0
- do:
indices.create:


@ -144,3 +144,20 @@ fetch nested source:
- gt: { profile.shards.0.fetch.children.1.breakdown.next_reader: 0 }
- gt: { profile.shards.0.fetch.children.1.breakdown.next_reader_count: 0 }
- gt: { profile.shards.0.fetch.children.1.breakdown.next_reader: 0 }
---
disabling stored fields removes fetch sub phases:
- skip:
version: ' - 7.15.99'
reason: fetch profiling implemented in 7.16.0
- do:
search:
index: test
body:
stored_fields: _none_
profile: true
- match: { hits.hits.0._index: test }
- match: { profile.shards.0.fetch.debug.stored_fields: [] }
- is_false: profile.shards.0.fetch.children


@ -1,7 +1,7 @@
setup:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -63,8 +63,8 @@ setup:
---
composite aggregation on tsid:
- skip:
version: " - 8.0.99"
reason: _tsid introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:
@ -113,8 +113,8 @@ composite aggregation on tsid:
---
composite aggregation on tsid with after:
- skip:
version: " - 8.0.99"
reason: _tsid introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:


@ -241,8 +241,9 @@ empty start end times:
---
set start_time and end_time:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
index: test_index
@ -303,8 +304,9 @@ set start_time and end_time:
---
set start_time and end_time without timeseries mode:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
catch: /\[index.time_series.start_time\] requires \[index.mode=time_series\]/
indices.create:
@ -328,8 +330,9 @@ set start_time and end_time without timeseries mode:
---
set bad start_time and end_time:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
index: test_index
@ -368,8 +371,9 @@ set bad start_time and end_time:
---
check start_time and end_time with data_nano:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
index: test_index
@ -427,8 +431,9 @@ check start_time and end_time with data_nano:
---
check start_time boundary with data_nano:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
index: test_index
@ -477,8 +482,9 @@ check start_time boundary with data_nano:
---
check end_time boundary with data_nano:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
index: test_index
@ -529,7 +535,8 @@ check end_time boundary with data_nano:
check time_series default time bound value:
- skip:
version: " - 8.1.99"
reason: behavior was changed in 8.2.0
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
index: test_index
@ -561,7 +568,8 @@ check time_series default time bound value:
check time_series empty time bound value:
- skip:
version: " - 8.1.99"
reason: introduced in 8.2.0
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
index: test_index


@ -2,8 +2,8 @@
---
date:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -51,8 +51,8 @@ date:
---
date_nanos:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -100,8 +100,8 @@ date_nanos:
---
automatically add with date:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -248,8 +248,8 @@ reject bad timestamp meta field:
---
write without timestamp:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -288,8 +288,8 @@ write without timestamp:
---
explicitly enable timestamp meta field:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:


@ -192,8 +192,8 @@ non keyword matches routing_path:
---
runtime field matching routing path:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -234,8 +234,8 @@ runtime field matching routing path:
---
"dynamic: runtime matches routing_path":
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -271,8 +271,8 @@ runtime field matching routing path:
---
"dynamic: false matches routing_path":
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:


@ -0,0 +1,422 @@
setup:
- skip:
version: " - 8.1.99"
reason: id generation changed in 8.2
- do:
indices.create:
index: test
body:
settings:
index:
mode: time_series
routing_path: [metricset, k8s.pod.uid]
time_series:
start_time: 2021-04-28T00:00:00Z
end_time: 2021-04-29T00:00:00Z
number_of_replicas: 0
number_of_shards: 2
mappings:
properties:
"@timestamp":
type: date
metricset:
type: keyword
time_series_dimension: true
k8s:
properties:
pod:
properties:
uid:
type: keyword
time_series_dimension: true
name:
type: keyword
ip:
type: ip
network:
properties:
tx:
type: long
rx:
type: long
- do:
bulk:
refresh: true
index: test
body:
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:04.467Z", "metricset": "pod", "k8s": {"pod": {"name": "cat", "uid":"947e4ced-1786-4e53-9e0c-5c447e959507", "ip": "10.10.55.1", "network": {"tx": 2001818691, "rx": 802133794}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:24.467Z", "metricset": "pod", "k8s": {"pod": {"name": "cat", "uid":"947e4ced-1786-4e53-9e0c-5c447e959507", "ip": "10.10.55.1", "network": {"tx": 2005177954, "rx": 801479970}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:44.467Z", "metricset": "pod", "k8s": {"pod": {"name": "cat", "uid":"947e4ced-1786-4e53-9e0c-5c447e959507", "ip": "10.10.55.1", "network": {"tx": 2006223737, "rx": 802337279}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:51:04.467Z", "metricset": "pod", "k8s": {"pod": {"name": "cat", "uid":"947e4ced-1786-4e53-9e0c-5c447e959507", "ip": "10.10.55.2", "network": {"tx": 2012916202, "rx": 803685721}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:03.142Z", "metricset": "pod", "k8s": {"pod": {"name": "dog", "uid":"df3145b3-0563-4d3b-a0f7-897eb2876ea9", "ip": "10.10.55.3", "network": {"tx": 1434521831, "rx": 530575198}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:23.142Z", "metricset": "pod", "k8s": {"pod": {"name": "dog", "uid":"df3145b3-0563-4d3b-a0f7-897eb2876ea9", "ip": "10.10.55.3", "network": {"tx": 1434577921, "rx": 530600088}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:53.142Z", "metricset": "pod", "k8s": {"pod": {"name": "dog", "uid":"df3145b3-0563-4d3b-a0f7-897eb2876ea9", "ip": "10.10.55.3", "network": {"tx": 1434587694, "rx": 530604797}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:51:03.142Z", "metricset": "pod", "k8s": {"pod": {"name": "dog", "uid":"df3145b3-0563-4d3b-a0f7-897eb2876ea9", "ip": "10.10.55.3", "network": {"tx": 1434595272, "rx": 530605511}}}}'
---
generates a consistent id:
- skip:
version: " - 8.1.99"
reason: ID generation added in 8.2
- do:
bulk:
refresh: true
index: test
body:
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:52:04.467Z", "metricset": "pod", "k8s": {"pod": {"name": "cat", "uid":"947e4ced-1786-4e53-9e0c-5c447e959507", "ip": "10.10.55.1", "network": {"tx": 2001818691, "rx": 802133794}}}}'
- match: {items.0.index._id: cZZNs4NdV58ePSPI8-3SGXkBAAA}
- do:
bulk:
refresh: true
index: test
body:
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:52:04.467Z", "metricset": "pod", "k8s": {"pod": {"name": "cat", "uid":"947e4ced-1786-4e53-9e0c-5c447e959507", "ip": "10.10.55.1", "network": {"tx": 2001818691, "rx": 802133794}}}}'
- match: {items.0.index._id: cZZNs4NdV58ePSPI8-3SGXkBAAA}
---
index a new document on top of an old one:
- skip:
version: " - 8.1.99"
reason: indexing on top of another document support added in 8.2
- do:
search:
index: test
body:
size: 0
aggs:
max_tx:
max:
field: k8s.pod.network.tx
max_rx:
min:
field: k8s.pod.network.rx
- match: {aggregations.max_tx.value: 2.012916202E9}
- match: {aggregations.max_rx.value: 5.30575198E8}
- do:
index:
refresh: true
index: test
op_type: index
body:
"@timestamp": "2021-04-28T18:51:03.142Z"
metricset: pod
k8s:
pod:
name: dog
uid: df3145b3-0563-4d3b-a0f7-897eb2876ea9
ip: 10.10.55.3
network:
tx: 111434595272
rx: 430605511
- match: {_id: cn4exTOUtxytuLkQZv7RGXkBAAA}
- do:
search:
index: test
body:
size: 0
aggs:
max_tx:
max:
field: k8s.pod.network.tx
max_rx:
min:
field: k8s.pod.network.rx
- match: {aggregations.max_tx.value: 1.11434595272E11}
- match: {aggregations.max_rx.value: 4.30605511E8}
---
index a new document on top of an old one over bulk:
- skip:
version: " - 8.1.99"
reason: indexing on top of another document support added in 8.2
- do:
search:
index: test
body:
size: 0
aggs:
max_tx:
max:
field: k8s.pod.network.tx
max_rx:
min:
field: k8s.pod.network.rx
- match: {aggregations.max_tx.value: 2.012916202E9}
- match: {aggregations.max_rx.value: 5.30575198E8}
- do:
bulk:
refresh: true
index: test
body:
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:51:03.142Z", "metricset": "pod", "k8s": {"pod": {"name": "dog", "uid":"df3145b3-0563-4d3b-a0f7-897eb2876ea9", "ip": "10.10.55.3", "network": {"tx": 111434595272, "rx": 430605511}}}}'
- match: {items.0.index._id: cn4exTOUtxytuLkQZv7RGXkBAAA}
- do:
search:
index: test
body:
size: 0
aggs:
max_tx:
max:
field: k8s.pod.network.tx
max_rx:
min:
field: k8s.pod.network.rx
- match: {aggregations.max_tx.value: 1.11434595272E11}
- match: {aggregations.max_rx.value: 4.30605511E8}
---
ids query:
- skip:
version: " - 8.1.99"
reason: ids generation changed in 8.2
- do:
search:
index: test
body:
fields:
- field: k8s.pod.network.tx
query:
ids:
values: ["cn4exTOUtxytuLkQZv7RGXkBAAA", "cZZNs4NdV58ePSPIkwPSGXkBAAA"]
sort: ["@timestamp"]
- match: {hits.total.value: 2}
- match: {hits.hits.0._id: "cn4exTOUtxytuLkQZv7RGXkBAAA"}
- match: {hits.hits.0.fields.k8s\.pod\.network\.tx: [1434595272]}
- match: {hits.hits.1._id: "cZZNs4NdV58ePSPIkwPSGXkBAAA"}
- match: {hits.hits.1.fields.k8s\.pod\.network\.tx: [2012916202]}
---
get:
- skip:
version: " - 8.1.99"
reason: ids generation changed in 8.2
- do:
get:
index: test
id: cZZNs4NdV58ePSPIkwPSGXkBAAA
- match: {_index: test}
- match: {_id: cZZNs4NdV58ePSPIkwPSGXkBAAA}
- match:
_source:
"@timestamp": "2021-04-28T18:51:04.467Z"
metricset: pod
k8s:
pod:
name: cat
uid: 947e4ced-1786-4e53-9e0c-5c447e959507
ip: 10.10.55.2
network:
tx: 2012916202
rx: 803685721
---
get not found:
- skip:
version: " - 8.1.99"
reason: ids generation changed in 8.2
- do:
catch: missing
get:
index: test
id: not found
---
get with routing:
- skip:
version: " - 8.1.99"
reason: ids generation changed in 8.2
- do:
catch: bad_request
get:
index: test
id: cZZNs4NdV58ePSPIkwPSGXkBAAA
routing: routing
---
delete:
- skip:
version: " - 8.1.99"
reason: ids generation changed in 8.2
- do:
delete:
index: test
id: cZZNs4NdV58ePSPIkwPSGXkBAAA
- match: {result: deleted}
---
delete not found:
- skip:
version: " - 8.1.99"
reason: ids generation changed in 8.2
- do:
catch: missing
delete:
index: test
id: not found
---
delete with routing:
- skip:
version: " - 8.1.99"
reason: ids generation changed in 8.2
- do:
catch: bad_request
delete:
index: test
id: not found
routing: routing
---
delete over _bulk:
- skip:
version: " - 8.1.99"
reason: ids generation changed in 8.2
- do:
bulk:
refresh: true
index: test
body:
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:04.467Z", "metricset": "pod", "k8s": {"pod": {"name": "cat", "uid":"947e4ced-1786-4e53-9e0c-5c447e959507", "ip": "10.10.55.1", "network": {"tx": 2001818691, "rx": 802133794}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:24.467Z", "metricset": "pod", "k8s": {"pod": {"name": "cat", "uid":"947e4ced-1786-4e53-9e0c-5c447e959507", "ip": "10.10.55.1", "network": {"tx": 2005177954, "rx": 801479970}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:44.467Z", "metricset": "pod", "k8s": {"pod": {"name": "cat", "uid":"947e4ced-1786-4e53-9e0c-5c447e959507", "ip": "10.10.55.1", "network": {"tx": 2006223737, "rx": 802337279}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:51:04.467Z", "metricset": "pod", "k8s": {"pod": {"name": "cat", "uid":"947e4ced-1786-4e53-9e0c-5c447e959507", "ip": "10.10.55.2", "network": {"tx": 2012916202, "rx": 803685721}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:03.142Z", "metricset": "pod", "k8s": {"pod": {"name": "dog", "uid":"df3145b3-0563-4d3b-a0f7-897eb2876ea9", "ip": "10.10.55.3", "network": {"tx": 1434521831, "rx": 530575198}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:23.142Z", "metricset": "pod", "k8s": {"pod": {"name": "dog", "uid":"df3145b3-0563-4d3b-a0f7-897eb2876ea9", "ip": "10.10.55.3", "network": {"tx": 1434577921, "rx": 530600088}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:53.142Z", "metricset": "pod", "k8s": {"pod": {"name": "dog", "uid":"df3145b3-0563-4d3b-a0f7-897eb2876ea9", "ip": "10.10.55.3", "network": {"tx": 1434587694, "rx": 530604797}}}}'
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:51:03.142Z", "metricset": "pod", "k8s": {"pod": {"name": "dog", "uid":"df3145b3-0563-4d3b-a0f7-897eb2876ea9", "ip": "10.10.55.3", "network": {"tx": 1434595272, "rx": 530605511}}}}'
- do:
bulk:
index: test
body:
- '{"delete": {"_id": "cn4exTOUtxytuLkQBhTRGXkBAAA"}}'
- '{"delete": {"_id": "cZZNs4NdV58ePSPIkwPSGXkBAAA"}}'
- '{"delete": {"_id": "not found ++ not found"}}'
- match: {items.0.delete.result: deleted}
- match: {items.1.delete.result: deleted}
- match: {items.2.delete.status: 404}
- match: {items.2.delete.error.reason: "invalid id [not found ++ not found] for index [test] in time series mode"}
---
routing_path matches deep object:
- skip:
version: " - 8.1.99"
reason: id generation changed in 8.2
- do:
indices.create:
index: test2
body:
settings:
index:
mode: time_series
routing_path: [dim.**.uid]
time_series:
start_time: 2021-04-28T00:00:00Z
end_time: 2021-04-29T00:00:00Z
number_of_replicas: 0
number_of_shards: 2
mappings:
properties:
"@timestamp":
type: date
dim:
properties:
foo:
properties:
bar:
properties:
baz:
properties:
uid:
type: keyword
time_series_dimension: true
- do:
bulk:
refresh: true
index: test2
body:
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:04.467Z", "dim": {"foo": {"bar": {"baz": {"uid": "uid1"}}}}}'
- match: {items.0.index.result: created}
- match: {items.0.index._id: OcEOGaxBa0saxogMMxnRGXkBAAA}
---
routing_path matches object:
- skip:
version: " - 8.1.99"
reason: id generation changed in 8.2
- do:
indices.create:
index: test2
body:
settings:
index:
mode: time_series
routing_path: [dim.*.uid]
time_series:
start_time: 2021-04-28T00:00:00Z
end_time: 2021-04-29T00:00:00Z
number_of_replicas: 0
number_of_shards: 2
mappings:
properties:
"@timestamp":
type: date
dim:
properties:
foo:
properties:
uid:
type: keyword
time_series_dimension: true
- do:
bulk:
refresh: true
index: test2
body:
- '{"index": {}}'
- '{"@timestamp": "2021-04-28T18:50:04.467Z", "dim": {"foo": {"uid": "uid1"}}}'
- match: {items.0.index.result: created}
- match: {items.0.index._id: 8bgiqUyQKH6n8noAMxnRGXkBAAA}


@ -17,8 +17,8 @@ teardown:
---
"Create a snapshot and then restore it":
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
features: ["allowed_warnings"]
# Create index


@ -1,7 +1,7 @@
setup:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -63,8 +63,8 @@ setup:
---
query a dimension:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:
@ -79,8 +79,8 @@ query a dimension:
---
query a metric:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:
@ -96,8 +96,8 @@ query a metric:
---
"query tsid fails":
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
catch: /\[_tsid\] is not searchable/
@ -111,8 +111,8 @@ query a metric:
---
fetch a dimension:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:
@ -130,8 +130,8 @@ fetch a dimension:
---
fetch a metric:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:
@ -150,8 +150,8 @@ fetch a metric:
---
fetch a tag:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:
@ -170,8 +170,8 @@ fetch a tag:
---
"fetch the tsid":
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:
@ -189,8 +189,8 @@ fetch a tag:
---
aggregate a dimension:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:
@ -211,8 +211,8 @@ aggregate a dimension:
---
aggregate a metric:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:
@ -239,8 +239,8 @@ aggregate a metric:
---
aggregate a tag:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:
@ -265,8 +265,8 @@ aggregate a tag:
---
"aggregate the tsid":
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:
@ -289,8 +289,8 @@ aggregate a tag:
---
"aggregate filter the tsid fails":
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
catch: /\[_tsid\] is not searchable/
@ -307,8 +307,8 @@ aggregate a tag:
---
field capabilities:
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
field_caps:
@ -344,46 +344,11 @@ field capabilities:
- is_false: fields._tsid._tsid.non_searchable_indices
- is_false: fields._tsid._tsid.non_aggregatable_indices
---
ids query:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
# Ingest documents assigning custom ids so we can query them
- do:
bulk:
refresh: true
body:
- '{"index": {"_index": "test", "_id": "u1"}}'
- '{"@timestamp": "2021-04-28T18:45:04.467Z", "metricset": "foo", "k8s": {"pod": {"name": "cat", "uid":"6483d28c-24ee-44f2-926b-63b89d6d8b1b", "ip": "10.10.55.1", "network": {"tx": 2001828691, "rx": 802133794}}}}'
- '{"index": {"_index": "test", "_id": "u2"}}'
- '{"@timestamp": "2021-04-28T18:50:24.467Z", "metricset": "foo", "k8s": {"pod": {"name": "cat", "uid":"6483d28c-24ee-44f2-926b-63b89d6d8b1b", "ip": "10.10.55.1", "network": {"tx": 2001838691, "rx": 801479970}}}}'
- '{"index": {"_index": "test", "_id": "u3"}}'
- '{"@timestamp": "2021-04-28T18:55:24.467Z", "metricset": "foo", "k8s": {"pod": {"name": "cat", "uid":"6483d28c-24ee-44f2-926b-63b89d6d8b1b", "ip": "10.10.55.1", "network": {"tx": 2001848691, "rx": 801479970}}}}'
- do:
search:
index: test
body:
fields:
- field: k8s.pod.network.tx
query:
ids:
values: ["u1", "u3"]
sort: ["@timestamp"]
- match: {hits.total.value: 2}
- match: {hits.hits.0._id: "u1"}
- match: {hits.hits.0.fields.k8s\.pod\.network\.tx: [2001828691]}
- match: {hits.hits.1._id: "u3"}
- match: {hits.hits.1.fields.k8s\.pod\.network\.tx: [2001848691]}
---
sort by tsid:
- skip:
version: " - 8.0.99"
reason: _tsid introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:


@ -1,7 +1,7 @@
setup:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -63,8 +63,8 @@ setup:
---
search an alias:
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.put_alias:
@ -92,8 +92,8 @@ search an alias:
---
index into alias:
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.put_alias:


@ -1,8 +1,8 @@
---
add dimensions with put_mapping:
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -54,8 +54,8 @@ add dimensions with put_mapping:
---
add dimensions to no dims with dynamic_template over index:
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -102,8 +102,8 @@ add dimensions to no dims with dynamic_template over index:
---
add dimensions to no dims with dynamic_template over bulk:
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -150,8 +150,8 @@ add dimensions to no dims with dynamic_template over bulk:
---
add dimensions to some dims with dynamic_template over index:
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -202,8 +202,8 @@ add dimensions to some dims with dynamic_template over index:
---
add dimensions to some dims with dynamic_template over bulk:
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:


@ -1,8 +1,9 @@
keyword dimension:
- skip:
features: close_to
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -74,8 +75,8 @@ keyword dimension:
long dimension:
- skip:
features: close_to
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -149,8 +150,8 @@ long dimension:
ip dimension:
- skip:
features: close_to
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:


@ -1,7 +1,7 @@
setup:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
features: "arbitrary_key"
# Force allocating all shards to a single node so that we can shrink later.
@ -85,8 +85,8 @@ setup:
---
split:
- skip:
version: " - 8.0.99"
reason: index-split check introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
catch: /index-split is not supported because the destination index \[test\] is in time series mode/
@ -101,8 +101,8 @@ split:
---
shrink:
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.shrink:
@ -128,8 +128,8 @@ shrink:
---
clone:
- skip:
version: " - 8.0.99"
reason: _tsid support introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.clone:
@ -152,8 +152,8 @@ clone:
---
clone no source index:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:


@ -1,7 +1,7 @@
setup:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
indices.create:
@ -64,18 +64,17 @@ setup:
- '{"@timestamp": "2021-04-28T18:51:03.142Z", "metricset": "pod", "k8s": {"pod": {"name": "dog", "uid":"df3145b3-0563-4d3b-a0f7-897eb2876ea9", "ip": "10.10.55.3", "network": {"tx": 1434595272, "rx": 530605511}}}}'
---
index with specified routing:
index with routing:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
catch: /indexing with a specified routing is not supported because the destination index \[test\] is in time series mode/
catch: /specifying routing is not supported because the destination index \[test\] is in time series mode/
index:
index: test
routing: foo
body:
doc:
"@timestamp": "2021-04-28T18:35:24.467Z"
metricset: "pod"
k8s:
@ -88,10 +87,11 @@ index with specified routing:
rx: 802133794
---
index with specified routing over _bulk:
index with routing over _bulk:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
bulk:
refresh: true
@ -99,39 +99,13 @@ index with specified routing over _bulk:
body:
- '{"index": {"routing": "foo"}}'
- '{"@timestamp": "2021-04-28T18:50:04.467Z", "metricset": "pod", "k8s": {"pod": {"name": "cat", "uid":"947e4ced-1786-4e53-9e0c-5c447e959507", "ip": "10.10.55.1", "network": {"tx": 2001818691, "rx": 802133794}}}}'
- match: {items.0.index.error.reason: "indexing with a specified routing is not supported because the destination index [test] is in time series mode"}
---
delete:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
- do:
catch: /delete is not supported because the destination index \[test\] is in time series mode/
delete:
index: test
id: "1"
---
delete over _bulk:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
- do:
bulk:
index: test
body:
- '{"delete": {"_id": 1}}'
- '{"delete": {"_id": 2}}'
- match: {items.0.delete.error.reason: "delete is not supported because the destination index [test] is in time series mode"}
- match: {items.0.index.error.reason: "specifying routing is not supported because the destination index [test] is in time series mode"}
---
noop update:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
search:
@ -152,8 +126,8 @@ noop update:
---
update:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
# We fail even though the document isn't found.
- do:
@ -177,8 +151,8 @@ update:
---
update over _bulk:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
bulk:
@ -191,8 +165,8 @@ update over _bulk:
---
search with routing:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
# We fail even though the document isn't found.
- do:
@ -204,8 +178,8 @@ search with routing:
---
alias with routing:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
catch: /routing is forbidden on CRUD operations that target indices in \[index.mode=time_series\]/
@ -218,8 +192,8 @@ alias with routing:
---
alias with search_routing:
- skip:
version: " - 8.0.99"
reason: introduced in 8.1.0
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
catch: /routing is forbidden on CRUD operations that target indices in \[index.mode=time_series\]/
@ -229,3 +203,33 @@ alias with search_routing:
body:
search_routing: foo
---
sort by _id:
- skip:
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
catch: /Fielddata is not supported on \[_id\] field in \[time_series\] indices/
search:
index: test
body:
size: 1
sort: _id
---
aggregate on _id:
- skip:
version: " - 8.1.99"
reason: tsdb indexing changed in 8.2.0
- do:
catch: /Fielddata is not supported on \[_id\] field in \[time_series\] indices/
search:
index: test
body:
size: 1
aggs:
id:
terms:
field: _id


@ -113,7 +113,7 @@ public class BulkIntegrationIT extends ESIntegTestCase {
// allowing the auto-generated timestamp to externally be set would allow making the index inconsistent with duplicate docs
public void testExternallySetAutoGeneratedTimestamp() {
IndexRequest indexRequest = new IndexRequest("index1").source(Collections.singletonMap("foo", "baz"));
indexRequest.process(); // sets the timestamp
indexRequest.autoGenerateId();
if (randomBoolean()) {
indexRequest.id("test");
}


@ -147,7 +147,7 @@ public interface DocWriteRequest<T> extends IndicesRequest, Accountable {
/**
* Finalize the request before executing or routing it.
*/
void process();
void process(IndexRouting indexRouting);
/**
* Pick the appropriate shard id to receive this request.


@ -245,7 +245,7 @@ class BulkPrimaryExecutionContext {
Engine.IndexResult indexResult = (Engine.IndexResult) result;
response = new IndexResponse(
primary.shardId(),
requestToExecute.id(),
indexResult.getId(),
result.getSeqNo(),
result.getTerm(),
indexResult.getVersion(),
@ -270,20 +270,19 @@ class BulkPrimaryExecutionContext {
executionResult.getResponse().setShardInfo(new ReplicationResponse.ShardInfo());
locationToSync = TransportWriteAction.locationToSync(locationToSync, result.getTranslogLocation());
}
case FAILURE -> executionResult = BulkItemResponse.failure(
case FAILURE -> {
/*
* Make sure to use request.index() here, if you
* use docWriteRequest.index() it will use the
* concrete index instead of an alias if used!
*/
String index = request.index();
executionResult = BulkItemResponse.failure(
current.id(),
docWriteRequest.opType(),
// Make sure to use request.index() here, if you
// use docWriteRequest.index() it will use the
// concrete index instead of an alias if used!
new BulkItemResponse.Failure(
request.index(),
docWriteRequest.id(),
result.getFailure(),
result.getSeqNo(),
result.getTerm()
)
new BulkItemResponse.Failure(index, result.getId(), result.getFailure(), result.getSeqNo(), result.getTerm())
);
}
default -> throw new AssertionError("unknown result type for " + getCurrentItem() + ": " + result.getResultType());
}
currentItemState = ItemProcessingState.EXECUTED;


@ -15,6 +15,7 @@ import org.elasticsearch.Assertions;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.ExceptionsHelper;
import org.elasticsearch.ResourceAlreadyExistsException;
import org.elasticsearch.ResourceNotFoundException;
import org.elasticsearch.Version;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRunnable;
@ -532,20 +533,20 @@ public class TransportBulkAction extends HandledTransportAction<BulkRequest, Bul
prohibitCustomRoutingOnDataStream(docWriteRequest, metadata);
prohibitAppendWritesInBackingIndices(docWriteRequest, metadata);
docWriteRequest.routing(metadata.resolveWriteIndexRouting(docWriteRequest.routing(), docWriteRequest.index()));
docWriteRequest.process();
final Index concreteIndex = docWriteRequest.getConcreteWriteIndex(ia, metadata);
if (addFailureIfIndexIsClosed(docWriteRequest, concreteIndex, i, metadata)) {
continue;
}
IndexRouting indexRouting = concreteIndices.routing(concreteIndex);
docWriteRequest.process(indexRouting);
int shardId = docWriteRequest.route(indexRouting);
List<BulkItemRequest> shardRequests = requestsByShard.computeIfAbsent(
new ShardId(concreteIndex, shardId),
shard -> new ArrayList<>()
);
shardRequests.add(new BulkItemRequest(i, docWriteRequest));
} catch (ElasticsearchParseException | IllegalArgumentException | IndexNotFoundException | RoutingMissingException e) {
} catch (ElasticsearchParseException | IllegalArgumentException | RoutingMissingException | ResourceNotFoundException e) {
String name = ia != null ? ia.getName() : docWriteRequest.index();
BulkItemResponse.Failure failure = new BulkItemResponse.Failure(name, docWriteRequest.id(), e);
BulkItemResponse bulkItemResponse = BulkItemResponse.failure(i, docWriteRequest.opType(), failure);


@ -218,7 +218,8 @@ public class TransportShardBulkAction extends TransportWriteAction<BulkShardRequ
e,
primary,
docWriteRequest.opType() == DocWriteRequest.OpType.DELETE,
docWriteRequest.version()
docWriteRequest.version(),
docWriteRequest.id()
),
context,
null
@ -274,7 +275,7 @@ public class TransportShardBulkAction extends TransportWriteAction<BulkShardRequ
} catch (Exception failure) {
// we may fail translating a update to index or delete operation
// we use index result to communicate failure while translating update request
final Engine.Result result = new Engine.IndexResult(failure, updateRequest.version());
final Engine.Result result = new Engine.IndexResult(failure, updateRequest.version(), updateRequest.id());
context.setRequestToExecute(updateRequest);
context.markOperationAsExecuted(result);
context.markAsCompleted(context.getExecutionResult());
@ -285,9 +286,7 @@ public class TransportShardBulkAction extends TransportWriteAction<BulkShardRequ
context.markAsCompleted(context.getExecutionResult());
return true;
}
context.setRequestToExecute(updateResult.action());
} else {
context.setRequestToExecute(context.getCurrent());
updateResult = null;
@ -338,7 +337,8 @@ public class TransportShardBulkAction extends TransportWriteAction<BulkShardRequ
);
} catch (Exception e) {
logger.info(() -> new ParameterizedMessage("{} mapping update rejected by primary", primary.shardId()), e);
assert result.getId() != null;
onComplete(exceptionToResult(e, primary, isDelete, version, result.getId()), context, updateResult);
return true;
}
@ -362,7 +362,7 @@ public class TransportShardBulkAction extends TransportWriteAction<BulkShardRequ
@Override
public void onFailure(Exception e) {
onComplete(exceptionToResult(e, primary, isDelete, version, result.getId()), context, updateResult);
// Requesting mapping update failed, so we don't have to wait for a cluster state update
assert context.isInitial();
itemDoneListener.onResponse(null);
@ -375,8 +375,9 @@ public class TransportShardBulkAction extends TransportWriteAction<BulkShardRequ
return true;
}
private static Engine.Result exceptionToResult(Exception e, IndexShard primary, boolean isDelete, long version, String id) {
assert id != null;
return isDelete ? primary.getFailedDeleteResult(e, version, id) : primary.getFailedIndexResult(e, version, id);
}
private static void onComplete(Engine.Result r, BulkPrimaryExecutionContext context, UpdateHelper.Result updateResult) {


@ -234,7 +234,7 @@ public class DeleteRequest extends ReplicatedWriteRequest<DeleteRequest>
}
@Override
public void process(IndexRouting indexRouting) {
// Nothing to do
}


@ -591,21 +591,30 @@ public class IndexRequest extends ReplicatedWriteRequest<IndexRequest> implement
}
@Override
public void process(IndexRouting indexRouting) {
indexRouting.process(this);
}

/**
 * Set the {@code #id()} to an automatically generated one and make this
 * request compatible with the append-only optimization.
 */
public void autoGenerateId() {
assert id == null;
assert autoGeneratedTimestamp == UNSET_AUTO_GENERATED_TIMESTAMP : "timestamp has already been generated!";
assert ifSeqNo == UNASSIGNED_SEQ_NO;
assert ifPrimaryTerm == UNASSIGNED_PRIMARY_TERM;
/*
 * Set the auto generated timestamp so the append only optimization
 * can quickly test if this request *must* be unique without reaching
 * into the Lucene index. We lock it >0 because UNSET_AUTO_GENERATED_TIMESTAMP
 * has a special meaning and is a negative value. This optimization will
 * never work before 1970, but that's ok. It's after 1970.
 */
autoGeneratedTimestamp = Math.max(0, System.currentTimeMillis());
String uid = UUIDs.base64UUID();
id(uid);
}
public void checkAutoIdWithOpTypeCreateSupportedByVersion(Version version) {
if (id == null && opType == OpType.CREATE && version.before(Version.V_7_5_0)) {
@ -728,7 +737,6 @@ public class IndexRequest extends ReplicatedWriteRequest<IndexRequest> implement
@Override
public int route(IndexRouting indexRouting) {
assert id != null : "route must be called after process";
return indexRouting.indexShard(id, routing, contentType, source);
}
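A JDK-only sketch of the bookkeeping `autoGenerateId` does, assuming the `-1` sentinel Elasticsearch uses for `UNSET_AUTO_GENERATED_TIMESTAMP`; `UUID.randomUUID` stands in for `UUIDs.base64UUID`:

```
import java.util.UUID;

class AutoIdSketch {
    static final long UNSET_AUTO_GENERATED_TIMESTAMP = -1L; // assumed sentinel

    String id;
    long autoGeneratedTimestamp = UNSET_AUTO_GENERATED_TIMESTAMP;

    void autoGenerateId() {
        assert id == null;
        // Clamp to >= 0 so the negative sentinel can never collide with a real
        // timestamp; requests dated before 1970 simply lose the optimization.
        autoGeneratedTimestamp = Math.max(0, System.currentTimeMillis());
        id = UUID.randomUUID().toString(); // stand-in for UUIDs.base64UUID()
    }

    boolean eligibleForAppendOnlyOptimization() {
        // The engine can skip the version-map lookup only for operations it
        // knows carry a freshly generated id.
        return autoGeneratedTimestamp != UNSET_AUTO_GENERATED_TIMESTAMP;
    }
}
```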


@ -832,7 +832,7 @@ public class UpdateRequest extends InstanceShardOperationRequest<UpdateRequest>
}
@Override
public void process(IndexRouting indexRouting) {
// Nothing to do
}


@ -22,7 +22,6 @@ import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.analysis.AnalyzerScope;
import org.elasticsearch.index.analysis.IndexAnalyzers;
import org.elasticsearch.index.analysis.NamedAnalyzer;
import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.MapperRegistry;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.similarity.SimilarityService;
@ -191,7 +190,7 @@ public class IndexMetadataVerifier {
similarityService,
mapperRegistry,
() -> null,
indexSettings.getMode().buildNoFieldDataIdFieldMapper(),
scriptService
);
mapperService.merge(indexMetadata, MapperService.MergeReason.MAPPING_RECOVERY);


@ -10,11 +10,15 @@ package org.elasticsearch.cluster.routing;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.StringHelper;
import org.elasticsearch.ResourceNotFoundException;
import org.elasticsearch.action.RoutingMissingException;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.cluster.metadata.MappingMetadata;
import org.elasticsearch.common.ParsingException;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.regex.Regex;
import org.elasticsearch.common.util.ByteUtils;
import org.elasticsearch.core.Nullable;
import org.elasticsearch.transport.Transports;
import org.elasticsearch.xcontent.XContentParser;
@ -24,9 +28,11 @@ import org.elasticsearch.xcontent.XContentType;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Base64;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.IntConsumer;
@ -59,6 +65,8 @@ public abstract class IndexRouting {
this.routingFactor = metadata.getRoutingFactor();
}
public abstract void process(IndexRequest indexRequest);
/**
* Called when indexing a document to generate the shard id that should contain
* a document with the provided parameters.
@ -129,8 +137,23 @@ public abstract class IndexRouting {
protected abstract int shardId(String id, @Nullable String routing);
@Override
public void process(IndexRequest indexRequest) {
if ("".equals(indexRequest.id())) {
throw new IllegalArgumentException("if _id is specified it must not be empty");
}
// generate id if not already provided
if (indexRequest.id() == null) {
indexRequest.autoGenerateId();
}
}
@Override
public int indexShard(String id, @Nullable String routing, XContentType sourceType, BytesReference source) {
if (id == null) {
throw new IllegalStateException("id is required and should have been set by process");
}
checkRoutingRequired(id, routing);
return shardId(id, routing);
}
@ -208,7 +231,8 @@ public abstract class IndexRouting {
}
}
public static class ExtractFromSource extends IndexRouting {
private final List<String> routingPaths;
private final XContentParserConfiguration parserConfig;
ExtractFromSource(IndexMetadata metadata) {
@ -216,16 +240,36 @@ public abstract class IndexRouting {
if (metadata.isRoutingPartitionedIndex()) {
throw new IllegalArgumentException("routing_partition_size is incompatible with routing_path");
}
this.routingPaths = metadata.getRoutingPaths();
this.parserConfig = XContentParserConfiguration.EMPTY.withFiltering(Set.copyOf(routingPaths), null, true);
}
@Override
public void process(IndexRequest indexRequest) {}

@Override
public int indexShard(String id, @Nullable String routing, XContentType sourceType, BytesReference source) {
assert Transports.assertNotTransportThread("parsing the _source can get slow");
checkNoRouting(routing);
return hashToShardId(hashSource(sourceType, source));
}
public String createId(XContentType sourceType, BytesReference source, byte[] suffix) {
return createId(hashSource(sourceType, source), suffix);
}
public String createId(Map<String, Object> flat, byte[] suffix) {
return createId(hashSource(flat), suffix);
}
private String createId(int routingHash, byte[] suffix) {
byte[] idBytes = new byte[4 + suffix.length];
ByteUtils.writeIntLE(routingHash, idBytes, 0);
System.arraycopy(suffix, 0, idBytes, 4, suffix.length);
return Base64.getUrlEncoder().withoutPadding().encodeToString(idBytes);
}
private int hashSource(XContentType sourceType, BytesReference source) {
List<NameAndHash> hashes = new ArrayList<>();
try {
try (XContentParser parser = sourceType.xContent().createParser(parserConfig, source.streamInput())) {
@ -240,7 +284,7 @@ public abstract class IndexRouting {
} catch (IOException | ParsingException e) {
throw new IllegalArgumentException("Error extracting routing: " + e.getMessage(), e);
}
return hashesToHash(hashes);
}
private static void extractObject(List<NameAndHash> hashes, @Nullable String path, XContentParser source) throws IOException {
@ -276,6 +320,16 @@ public abstract class IndexRouting {
}
}
private int hashSource(Map<String, Object> flat) {
List<NameAndHash> hashes = new ArrayList<>();
for (Map.Entry<String, Object> e : flat.entrySet()) {
if (Regex.simpleMatch(routingPaths, e.getKey())) {
hashes.add(new NameAndHash(new BytesRef(e.getKey()), hash(new BytesRef(e.getValue().toString()))));
}
}
return hashesToHash(hashes);
}
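`hashSource` boils the matching fields down to (name, hash) pairs and folds them into a single int. A JDK-only sketch of that shape — `String.hashCode` and the polynomial fold are stand-ins for the murmur3 hashing and the real combiner, and the mini-glob is a stand-in for `Regex.simpleMatch`:

```
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

class RoutingHashSketch {
    record NameAndHash(String name, int hash) {}

    static int hashFlatSource(Map<String, Object> flat, List<String> routingPaths) {
        List<NameAndHash> hashes = new ArrayList<>();
        for (Map.Entry<String, Object> e : flat.entrySet()) {
            if (matchesAnyPath(routingPaths, e.getKey())) {
                hashes.add(new NameAndHash(e.getKey(), e.getValue().toString().hashCode()));
            }
        }
        // Sort by field name so the result doesn't depend on source key order.
        hashes.sort(Comparator.comparing(NameAndHash::name));
        int combined = 0;
        for (NameAndHash nh : hashes) {
            combined = 31 * (31 * combined + nh.name().hashCode()) + nh.hash();
        }
        return combined;
    }

    static boolean matchesAnyPath(List<String> patterns, String field) {
        // Mini-glob: a trailing '*' matches any suffix (e.g. "dim.*").
        // The real code uses Regex.simpleMatch, which is more general.
        for (String p : patterns) {
            if (p.endsWith("*") ? field.startsWith(p.substring(0, p.length() - 1)) : p.equals(field)) {
                return true;
            }
        }
        return false;
    }
}
```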
private static int hash(BytesRef ref) {
return StringHelper.murmurhash3_x86_32(ref, 0);
}
@ -307,12 +361,33 @@ public abstract class IndexRouting {
@Override
public int deleteShard(String id, @Nullable String routing) {
checkNoRouting(routing);
return idToHash(id);
}
@Override
public int getShard(String id, @Nullable String routing) {
checkNoRouting(routing);
return idToHash(id);
}
private void checkNoRouting(@Nullable String routing) {
if (routing != null) {
throw new IllegalArgumentException(error("specifying routing"));
}
}
private int idToHash(String id) {
byte[] idBytes;
try {
idBytes = Base64.getUrlDecoder().decode(id);
} catch (IllegalArgumentException e) {
throw new ResourceNotFoundException("invalid id [{}] for index [{}] in time series mode", id, indexName);
}
if (idBytes.length < 4) {
throw new ResourceNotFoundException("invalid id [{}] for index [{}] in time series mode", id, indexName);
}
return hashToShardId(ByteUtils.readIntLE(idBytes, 0));
}
@Override

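`createId` and `idToHash` together pin down the id layout: four bytes of little-endian routing hash, then the caller's suffix, base64-url encoded without padding. A self-contained, JDK-only round-trip sketch (hash values are stand-ins for the murmur3 hashes):

```
import java.util.Base64;

public class TsdbIdSketch {
    static String createId(int routingHash, byte[] suffix) {
        byte[] idBytes = new byte[4 + suffix.length];
        writeIntLE(routingHash, idBytes, 0);
        System.arraycopy(suffix, 0, idBytes, 4, suffix.length);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(idBytes);
    }

    static int routingHashFromId(String id) {
        byte[] idBytes = Base64.getUrlDecoder().decode(id); // throws IllegalArgumentException on junk
        if (idBytes.length < 4) {
            throw new IllegalArgumentException("invalid tsdb id [" + id + "]");
        }
        return readIntLE(idBytes, 0);
    }

    static void writeIntLE(int v, byte[] b, int o) {
        for (int i = 0; i < 4; i++) {
            b[o + i] = (byte) (v >>> (8 * i));
        }
    }

    static int readIntLE(byte[] b, int o) {
        int v = 0;
        for (int i = 0; i < 4; i++) {
            v |= (b[o + i] & 0xFF) << (8 * i);
        }
        return v;
    }

    public static void main(String[] args) {
        byte[] suffix = new byte[16]; // for tsdb: 8 bytes of tsid hash + 8 bytes of timestamp
        String id = createId(0xCAFEBABE, suffix); // 0xCAFEBABE stands in for the routing hash
        // A node holding only the id can recover the routing hash, and so the shard.
        System.out.println(id + " -> routing hash " + Integer.toHexString(routingHashFromId(id)));
    }
}
```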

@ -18,18 +18,22 @@ import org.elasticsearch.core.Nullable;
import org.elasticsearch.index.mapper.DataStreamTimestampFieldMapper;
import org.elasticsearch.index.mapper.DateFieldMapper;
import org.elasticsearch.index.mapper.DocumentDimensions;
import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.MappingLookup;
import org.elasticsearch.index.mapper.MetadataFieldMapper;
import org.elasticsearch.index.mapper.NestedLookup;
import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.mapper.RoutingFieldMapper;
import org.elasticsearch.index.mapper.TimeSeriesIdFieldMapper;
import org.elasticsearch.index.mapper.TsidExtractingIdFieldMapper;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.function.BooleanSupplier;
import java.util.stream.Collectors;
import java.util.stream.Stream;
@ -86,6 +90,16 @@ public enum IndexMode {
return null;
}
@Override
public IdFieldMapper buildNoFieldDataIdFieldMapper() {
return ProvidedIdFieldMapper.NO_FIELD_DATA;
}
@Override
public IdFieldMapper buildIdFieldMapper(BooleanSupplier fieldDataEnabled) {
return new ProvidedIdFieldMapper(fieldDataEnabled);
}
@Override
public DocumentDimensions buildDocumentDimensions() {
return new DocumentDimensions.OnlySingleValueAllowed();
@ -156,6 +170,17 @@ public enum IndexMode {
return TimeSeriesIdFieldMapper.INSTANCE;
}
@Override
public IdFieldMapper buildNoFieldDataIdFieldMapper() {
return TsidExtractingIdFieldMapper.INSTANCE;
}
@Override
public IdFieldMapper buildIdFieldMapper(BooleanSupplier fieldDataEnabled) {
// We don't support field data on TSDB's _id
return TsidExtractingIdFieldMapper.INSTANCE;
}
@Override
public DocumentDimensions buildDocumentDimensions() {
return new TimeSeriesIdFieldMapper.TimeSeriesIdBuilder();
@ -239,6 +264,10 @@ public enum IndexMode {
@Nullable
public abstract CompressedXContent getDefaultMapping();
public abstract IdFieldMapper buildIdFieldMapper(BooleanSupplier fieldDataEnabled);
public abstract IdFieldMapper buildNoFieldDataIdFieldMapper();
/**
* Get timebounds
*/
@ -252,6 +281,9 @@ public enum IndexMode {
*/
public abstract MetadataFieldMapper buildTimeSeriesIdFieldMapper();
/**
* How {@code time_series_dimension} fields are handled by indices in this mode.
*/
public abstract DocumentDimensions buildDocumentDimensions();
public static IndexMode fromString(String value) {


@ -581,7 +581,7 @@ public final class IndexModule {
new SimilarityService(indexSettings, scriptService, similarities),
mapperRegistry,
() -> { throw new UnsupportedOperationException("no index query shard context available"); },
indexSettings.getMode().buildNoFieldDataIdFieldMapper(),
scriptService
);
}


@ -13,6 +13,7 @@ import org.apache.lucene.index.MergePolicy;
import org.elasticsearch.Build;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.cluster.routing.IndexRouting;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.common.settings.IndexScopedSettings;
import org.elasticsearch.common.settings.Setting;
@ -634,6 +635,8 @@ public final class IndexSettings {
*/
private volatile int maxRegexLength;
private final IndexRouting indexRouting;
/**
* Returns the default search fields for this index.
*/
@ -745,6 +748,7 @@ public final class IndexSettings {
mappingDepthLimit = scopedSettings.get(INDEX_MAPPING_DEPTH_LIMIT_SETTING);
mappingFieldNameLengthLimit = scopedSettings.get(INDEX_MAPPING_FIELD_NAME_LENGTH_LIMIT_SETTING);
mappingDimensionFieldsLimit = scopedSettings.get(INDEX_MAPPING_DIMENSION_FIELDS_LIMIT_SETTING);
indexRouting = IndexRouting.fromIndexMetadata(indexMetadata);
scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_COMPOUND_FORMAT_SETTING, mergePolicyConfig::setNoCFSRatio);
scopedSettings.addSettingsUpdateConsumer(
@ -1344,4 +1348,12 @@ public final class IndexSettings {
public TimestampBounds getTimestampBounds() {
return timestampBounds;
}
/**
* The way that documents are routed on the coordinating
* node when being sent to shards of this index.
*/
public IndexRouting getIndexRouting() {
return indexRouting;
}
}


@ -358,10 +358,11 @@ public abstract class Engine implements Closeable {
private final Exception failure;
private final SetOnce<Boolean> freeze = new SetOnce<>();
private final Mapping requiredMappingUpdate;
private final String id;
private Translog.Location translogLocation;
private long took;
protected Result(Operation.TYPE operationType, Exception failure, long version, long term, long seqNo, String id) {
this.operationType = operationType;
this.failure = Objects.requireNonNull(failure);
this.version = version;
@ -369,9 +370,10 @@ public abstract class Engine implements Closeable {
this.seqNo = seqNo;
this.requiredMappingUpdate = null;
this.resultType = Type.FAILURE;
this.id = id;
}
protected Result(Operation.TYPE operationType, long version, long term, long seqNo, String id) {
this.operationType = operationType;
this.version = version;
this.seqNo = seqNo;
@ -379,9 +381,10 @@ public abstract class Engine implements Closeable {
this.failure = null;
this.requiredMappingUpdate = null;
this.resultType = Type.SUCCESS;
this.id = id;
}
protected Result(Operation.TYPE operationType, Mapping requiredMappingUpdate, String id) {
this.operationType = operationType;
this.version = Versions.NOT_FOUND;
this.seqNo = UNASSIGNED_SEQ_NO;
@ -389,6 +392,7 @@ public abstract class Engine implements Closeable {
this.failure = null;
this.requiredMappingUpdate = requiredMappingUpdate;
this.resultType = Type.MAPPING_UPDATE_REQUIRED;
this.id = id;
}
/** whether the operation was successful, has failed or was aborted due to a mapping update */
@ -441,6 +445,10 @@ public abstract class Engine implements Closeable {
return operationType;
}
public String getId() {
return id;
}
void setTranslogLocation(Translog.Location translogLocation) {
if (freeze.get() == null) {
this.translogLocation = translogLocation;
@ -472,57 +480,56 @@ public abstract class Engine implements Closeable {
private final boolean created;
public IndexResult(long version, long term, long seqNo, boolean created, String id) {
super(Operation.TYPE.INDEX, version, term, seqNo, id);
this.created = created;
}
/**
* use in case of the index operation failed before getting to internal engine
**/
public IndexResult(Exception failure, long version, String id) {
this(failure, version, UNASSIGNED_PRIMARY_TERM, UNASSIGNED_SEQ_NO, id);
}
public IndexResult(Exception failure, long version, long term, long seqNo, String id) {
super(Operation.TYPE.INDEX, failure, version, term, seqNo, id);
this.created = false;
}
public IndexResult(Mapping requiredMappingUpdate, String id) {
super(Operation.TYPE.INDEX, requiredMappingUpdate, id);
this.created = false;
}
public boolean isCreated() {
return created;
}
}
public static class DeleteResult extends Result {
private final boolean found;
public DeleteResult(long version, long term, long seqNo, boolean found, String id) {
super(Operation.TYPE.DELETE, version, term, seqNo, id);
this.found = found;
}
/**
* use in case of the delete operation failed before getting to internal engine
**/
public DeleteResult(Exception failure, long version, long term, String id) {
this(failure, version, term, UNASSIGNED_SEQ_NO, false, id);
}
public DeleteResult(Exception failure, long version, long term, long seqNo, boolean found, String id) {
super(Operation.TYPE.DELETE, failure, version, term, seqNo, id);
this.found = found;
}
public DeleteResult(Mapping requiredMappingUpdate, String id) {
super(Operation.TYPE.DELETE, requiredMappingUpdate, id);
this.found = false;
}
@ -535,11 +542,11 @@ public abstract class Engine implements Closeable {
public static class NoOpResult extends Result {
NoOpResult(long term, long seqNo) {
super(Operation.TYPE.NO_OP, 0, term, seqNo, null);
}
NoOpResult(long term, long seqNo, Exception failure) {
super(Operation.TYPE.NO_OP, failure, 0, term, seqNo, null);
}
}


@ -998,7 +998,8 @@ public class InternalEngine extends Engine {
plan.versionForIndexing,
index.primaryTerm(),
index.seqNo(),
plan.currentNotFoundOrDeleted,
index.id()
);
}
}
@ -1108,7 +1109,7 @@ public class InternalEngine extends Engine {
if (canOptimizeAddDocument && mayHaveBeenIndexedBefore(index) == false) {
final Exception reserveError = tryAcquireInFlightDocs(index, reservingDocs);
if (reserveError != null) {
plan = IndexingStrategy.failAsTooManyDocs(reserveError, index.id());
} else {
plan = IndexingStrategy.optimizedAppendOnly(1L, reservingDocs);
}
@ -1134,7 +1135,7 @@ public class InternalEngine extends Engine {
SequenceNumbers.UNASSIGNED_SEQ_NO,
SequenceNumbers.UNASSIGNED_PRIMARY_TERM
);
plan = IndexingStrategy.skipDueToVersionConflict(e, true, currentVersion, index.id());
} else if (index.getIfSeqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO
&& (versionValue.seqNo != index.getIfSeqNo() || versionValue.term != index.getIfPrimaryTerm())) {
final VersionConflictEngineException e = new VersionConflictEngineException(
@ -1145,7 +1146,7 @@ public class InternalEngine extends Engine {
versionValue.seqNo,
versionValue.term
);
plan = IndexingStrategy.skipDueToVersionConflict(e, currentNotFoundOrDeleted, currentVersion, index.id());
} else if (index.versionType().isVersionConflictForWrites(currentVersion, index.version(), currentNotFoundOrDeleted)) {
final VersionConflictEngineException e = new VersionConflictEngineException(
shardId,
@ -1153,11 +1154,11 @@ public class InternalEngine extends Engine {
currentVersion,
currentNotFoundOrDeleted
);
plan = IndexingStrategy.skipDueToVersionConflict(e, currentNotFoundOrDeleted, currentVersion, index.id());
} else {
final Exception reserveError = tryAcquireInFlightDocs(index, reservingDocs);
if (reserveError != null) {
plan = IndexingStrategy.failAsTooManyDocs(reserveError, index.id());
} else {
plan = IndexingStrategy.processNormally(
currentNotFoundOrDeleted,
@ -1191,7 +1192,7 @@ public class InternalEngine extends Engine {
assert assertDocDoesNotExist(index, canOptimizeAddDocument(index) == false);
addDocs(index.docs(), indexWriter);
}
return new IndexResult(plan.versionForIndexing, index.primaryTerm(), index.seqNo(), plan.currentNotFoundOrDeleted, index.id());
} catch (Exception ex) {
if (ex instanceof AlreadyClosedException == false
&& indexWriter.getTragicException() == null
@ -1209,7 +1210,7 @@ public class InternalEngine extends Engine {
* we return a `MATCH_ANY` version to indicate no document was indexed. The value is
* not used anyway
*/
return new IndexResult(ex, Versions.MATCH_ANY, index.primaryTerm(), index.seqNo(), index.id());
} else {
throw ex;
}
@ -1314,9 +1315,10 @@ public class InternalEngine extends Engine {
public static IndexingStrategy skipDueToVersionConflict(
VersionConflictEngineException e,
boolean currentNotFoundOrDeleted,
long currentVersion,
String id
) {
final IndexResult result = new IndexResult(e, currentVersion, id);
return new IndexingStrategy(currentNotFoundOrDeleted, false, false, false, Versions.NOT_FOUND, 0, result);
}
@ -1340,8 +1342,8 @@ public class InternalEngine extends Engine {
return new IndexingStrategy(false, false, false, true, versionForIndexing, reservedDocs, null);
}
static IndexingStrategy failAsTooManyDocs(Exception e, String id) {
final IndexResult result = new IndexResult(e, Versions.NOT_FOUND, id);
return new IndexingStrategy(false, false, false, false, Versions.NOT_FOUND, 0, result);
}
}
@ -1424,7 +1426,8 @@ public class InternalEngine extends Engine {
plan.versionOfDeletion,
delete.primaryTerm(),
delete.seqNo(),
plan.currentlyDeleted == false,
delete.id()
);
}
if (plan.deleteFromLucene) {
@ -1550,7 +1553,7 @@ public class InternalEngine extends Engine {
SequenceNumbers.UNASSIGNED_SEQ_NO,
SequenceNumbers.UNASSIGNED_PRIMARY_TERM
);
plan = DeletionStrategy.skipDueToVersionConflict(e, currentVersion, true, delete.id());
} else if (delete.getIfSeqNo() != SequenceNumbers.UNASSIGNED_SEQ_NO
&& (versionValue.seqNo != delete.getIfSeqNo() || versionValue.term != delete.getIfPrimaryTerm())) {
final VersionConflictEngineException e = new VersionConflictEngineException(
@ -1561,7 +1564,7 @@ public class InternalEngine extends Engine {
versionValue.seqNo,
versionValue.term
);
plan = DeletionStrategy.skipDueToVersionConflict(e, currentVersion, currentlyDeleted, delete.id());
} else if (delete.versionType().isVersionConflictForWrites(currentVersion, delete.version(), currentlyDeleted)) {
final VersionConflictEngineException e = new VersionConflictEngineException(
shardId,
@ -1569,11 +1572,11 @@ public class InternalEngine extends Engine {
currentVersion,
currentlyDeleted
);
plan = DeletionStrategy.skipDueToVersionConflict(e, currentVersion, currentlyDeleted, delete.id());
} else {
final Exception reserveError = tryAcquireInFlightDocs(delete, 1);
if (reserveError != null) {
plan = DeletionStrategy.failAsTooManyDocs(reserveError, delete.id());
} else {
final long versionOfDeletion = delete.versionType().updateVersion(currentVersion, delete.version());
plan = DeletionStrategy.processNormally(currentlyDeleted, versionOfDeletion, 1);
@ -1598,7 +1601,13 @@ public class InternalEngine extends Engine {
} else {
indexWriter.softUpdateDocument(delete.uid(), doc, softDeletesField);
}
return new DeleteResult(
plan.versionOfDeletion,
delete.primaryTerm(),
delete.seqNo(),
plan.currentlyDeleted == false,
delete.id()
);
} catch (final Exception ex) {
/*
* Document level failures when deleting are unexpected, we likely hit something fatal such as the Lucene index being corrupt,
@ -1655,14 +1664,16 @@ public class InternalEngine extends Engine {
public static DeletionStrategy skipDueToVersionConflict(
VersionConflictEngineException e,
long currentVersion,
boolean currentlyDeleted,
String id
) {
final DeleteResult deleteResult = new DeleteResult(
e,
currentVersion,
SequenceNumbers.UNASSIGNED_PRIMARY_TERM,
SequenceNumbers.UNASSIGNED_SEQ_NO,
currentlyDeleted == false,
id
);
return new DeletionStrategy(false, false, currentlyDeleted, Versions.NOT_FOUND, 0, deleteResult);
}
@ -1680,13 +1691,14 @@ public class InternalEngine extends Engine {
return new DeletionStrategy(false, true, false, versionOfDeletion, 0, null);
}
static DeletionStrategy failAsTooManyDocs(Exception e, String id) {
final DeleteResult deleteResult = new DeleteResult(
e,
Versions.NOT_FOUND,
SequenceNumbers.UNASSIGNED_PRIMARY_TERM,
SequenceNumbers.UNASSIGNED_SEQ_NO,
false,
id
);
return new DeletionStrategy(false, false, false, Versions.NOT_FOUND, 0, deleteResult);
}


@ -60,10 +60,6 @@ public class DocumentMapper {
return metadataMapper(SourceFieldMapper.class);
}
public IdFieldMapper idFieldMapper() {
return metadataMapper(IdFieldMapper.class);
}
public RoutingFieldMapper routingFieldMapper() {
return metadataMapper(RoutingFieldMapper.class);
}


@ -90,7 +90,7 @@ public final class DocumentParser {
return new ParsedDocument(
context.version(),
context.seqID(),
context.id(),
source.routing(),
context.reorderParentAndGetDocs(),
context.sourceToParse().source(),
@ -339,7 +339,8 @@ public final class DocumentParser {
if (idField != null) {
// We just need to store the id as an indexed field, so that IndexWriter#deleteDocuments(term) can then
// delete it when the root document is deleted too.
// NOTE: we don't support nested fields in tsdb so it's safe to assume the standard id mapper.
nestedDoc.add(new Field(IdFieldMapper.NAME, idField.binaryValue(), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
} else {
throw new IllegalStateException("The root document of a nested document should have an _id field");
}


@ -90,6 +90,7 @@ public abstract class DocumentParserContext {
private final Map<String, ObjectMapper> dynamicObjectMappers;
private final List<RuntimeField> dynamicRuntimeFields;
private final DocumentDimensions dimensions;
private String id;
private Field version;
private SeqNoFieldMapper.SequenceIDFields seqID;
@ -104,6 +105,7 @@ public abstract class DocumentParserContext {
this.newFieldsSeen = in.newFieldsSeen;
this.dynamicObjectMappers = in.dynamicObjectMappers;
this.dynamicRuntimeFields = in.dynamicRuntimeFields;
this.id = in.id;
this.version = in.version;
this.seqID = in.seqID;
this.dimensions = in.dimensions;
@ -192,6 +194,18 @@ public abstract class DocumentParserContext {
this.version = version;
}
public final String id() {
if (id == null) {
assert false : "id field mapper has not set the id";
throw new IllegalStateException("id field mapper has not set the id");
}
return id;
}
public final void id(String id) {
this.id = id;
}
public final SeqNoFieldMapper.SequenceIDFields seqID() {
return this.seqID;
}
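The contract here is small but worth spelling out: whichever id field mapper runs during parsing must publish the id before anything reads it. A tiny JDK-only sketch of the same pattern (slightly stricter than the real setter, which doesn't assert on double-set):

```
// Set-once contract: the id field mapper publishes the id during parsing,
// and everything downstream (e.g. ParsedDocument assembly) requires it.
class ParseIdContract {
    private String id;

    void setOnce(String id) {
        assert this.id == null : "id was already set";
        this.id = id;
    }

    String require() {
        if (id == null) {
            throw new IllegalStateException("id field mapper has not set the id");
        }
        return id;
    }
}
```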


@ -9,277 +9,33 @@
package org.elasticsearch.index.mapper;
import org.apache.lucene.document.Field;
import org.elasticsearch.index.analysis.NamedAnalyzer;

/**
 * A mapper for the _id field.
 */
public abstract class IdFieldMapper extends MetadataFieldMapper {
public static final String NAME = "_id";

public static final String CONTENT_TYPE = "_id";

public static final TypeParser PARSER = new FixedTypeParser(MappingParserContext::idFieldMapper);

protected IdFieldMapper(MappedFieldType mappedFieldType, NamedAnalyzer indexAnalyzer) {
super(mappedFieldType, indexAnalyzer);
assert mappedFieldType.isSearchable();
}

@Override
protected final String contentType() {
return CONTENT_TYPE;
}

/**
 * Create a standard {@link Field} for the provided {@code _id} that stores
 * it so it can be fetched easily from the index.
 */
public static Field standardIdField(String id) {
return new Field(NAME, Uid.encodeId(id), ProvidedIdFieldMapper.Defaults.FIELD_TYPE);
}
}


@ -144,5 +144,4 @@ public class LuceneDocument implements Iterable<IndexableField> {
}
return null;
}
}


@ -74,7 +74,7 @@ public class ParsedDocument {
seqIdFields.addFields(document);
Field versionField = VersionFieldMapper.versionField();
document.add(versionField);
document.add(IdFieldMapper.standardIdField(id));
return new ParsedDocument(
versionField,
seqIdFields,


@ -0,0 +1,272 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/
package org.elasticsearch.index.mapper;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TermInSetQuery;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.logging.DeprecationCategory;
import org.elasticsearch.common.logging.DeprecationLogger;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.index.fielddata.FieldData;
import org.elasticsearch.index.fielddata.IndexFieldData;
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
import org.elasticsearch.index.fielddata.IndexFieldDataCache;
import org.elasticsearch.index.fielddata.LeafFieldData;
import org.elasticsearch.index.fielddata.ScriptDocValues;
import org.elasticsearch.index.fielddata.SortedBinaryDocValues;
import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource;
import org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData;
import org.elasticsearch.index.query.SearchExecutionContext;
import org.elasticsearch.indices.IndicesService;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
import org.elasticsearch.script.field.DelegateDocValuesField;
import org.elasticsearch.script.field.DocValuesField;
import org.elasticsearch.search.DocValueFormat;
import org.elasticsearch.search.MultiValueMode;
import org.elasticsearch.search.aggregations.support.CoreValuesSourceType;
import org.elasticsearch.search.aggregations.support.ValuesSourceType;
import org.elasticsearch.search.lookup.SearchLookup;
import org.elasticsearch.search.sort.BucketedSort;
import org.elasticsearch.search.sort.SortOrder;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.function.BooleanSupplier;
import java.util.function.Supplier;
/**
 * A mapper for the {@code _id} field that indexes and stores the id provided
 * in the request, generating one on the coordinating node when none is given.
 */
public class ProvidedIdFieldMapper extends IdFieldMapper {
private static final DeprecationLogger deprecationLogger = DeprecationLogger.getLogger(ProvidedIdFieldMapper.class);
static final String ID_FIELD_DATA_DEPRECATION_MESSAGE =
"Loading the fielddata on the _id field is deprecated and will be removed in future versions. "
+ "If you require sorting or aggregating on this field you should also include the id in the "
+ "body of your documents, and map this field as a keyword field that has [doc_values] enabled";
public static class Defaults {
public static final FieldType FIELD_TYPE = new FieldType();
public static final FieldType NESTED_FIELD_TYPE;
static {
FIELD_TYPE.setTokenized(false);
FIELD_TYPE.setIndexOptions(IndexOptions.DOCS);
FIELD_TYPE.setStored(true);
FIELD_TYPE.setOmitNorms(true);
FIELD_TYPE.freeze();
NESTED_FIELD_TYPE = new FieldType();
NESTED_FIELD_TYPE.setTokenized(false);
NESTED_FIELD_TYPE.setIndexOptions(IndexOptions.DOCS);
NESTED_FIELD_TYPE.setStored(false);
NESTED_FIELD_TYPE.setOmitNorms(true);
NESTED_FIELD_TYPE.freeze();
}
}
public static final ProvidedIdFieldMapper NO_FIELD_DATA = new ProvidedIdFieldMapper(() -> false);
static final class IdFieldType extends TermBasedFieldType {
private final BooleanSupplier fieldDataEnabled;
IdFieldType(BooleanSupplier fieldDataEnabled) {
super(NAME, true, true, false, TextSearchInfo.SIMPLE_MATCH_ONLY, Collections.emptyMap());
this.fieldDataEnabled = fieldDataEnabled;
}
@Override
public String typeName() {
return CONTENT_TYPE;
}
@Override
public boolean isSearchable() {
// The _id field is always searchable.
return true;
}
@Override
public boolean mayExistInIndex(SearchExecutionContext context) {
return true;
}
@Override
public ValueFetcher valueFetcher(SearchExecutionContext context, String format) {
return new StoredValueFetcher(context.lookup(), NAME);
}
@Override
public Query termQuery(Object value, SearchExecutionContext context) {
return termsQuery(Arrays.asList(value), context);
}
@Override
public Query existsQuery(SearchExecutionContext context) {
return new MatchAllDocsQuery();
}
@Override
public Query termsQuery(Collection<?> values, SearchExecutionContext context) {
failIfNotIndexed();
BytesRef[] bytesRefs = values.stream().map(v -> {
Object idObject = v;
if (idObject instanceof BytesRef) {
idObject = ((BytesRef) idObject).utf8ToString();
}
return Uid.encodeId(idObject.toString());
}).toArray(BytesRef[]::new);
return new TermInSetQuery(name(), bytesRefs);
}
@Override
public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName, Supplier<SearchLookup> searchLookup) {
if (fieldDataEnabled.getAsBoolean() == false) {
throw new IllegalArgumentException(
"Fielddata access on the _id field is disallowed, "
+ "you can re-enable it by updating the dynamic cluster setting: "
+ IndicesService.INDICES_ID_FIELD_DATA_ENABLED_SETTING.getKey()
);
}
final IndexFieldData.Builder fieldDataBuilder = new PagedBytesIndexFieldData.Builder(
name(),
TextFieldMapper.Defaults.FIELDDATA_MIN_FREQUENCY,
TextFieldMapper.Defaults.FIELDDATA_MAX_FREQUENCY,
TextFieldMapper.Defaults.FIELDDATA_MIN_SEGMENT_SIZE,
CoreValuesSourceType.KEYWORD,
(dv, n) -> new DelegateDocValuesField(
new ScriptDocValues.Strings(new ScriptDocValues.StringsSupplier(FieldData.toString(dv))),
n
)
);
return new IndexFieldData.Builder() {
@Override
public IndexFieldData<?> build(IndexFieldDataCache cache, CircuitBreakerService breakerService) {
deprecationLogger.warn(DeprecationCategory.AGGREGATIONS, "id_field_data", ID_FIELD_DATA_DEPRECATION_MESSAGE);
final IndexFieldData<?> fieldData = fieldDataBuilder.build(cache, breakerService);
return new IndexFieldData<>() {
@Override
public String getFieldName() {
return fieldData.getFieldName();
}
@Override
public ValuesSourceType getValuesSourceType() {
return fieldData.getValuesSourceType();
}
@Override
public LeafFieldData load(LeafReaderContext context) {
return wrap(fieldData.load(context));
}
@Override
public LeafFieldData loadDirect(LeafReaderContext context) throws Exception {
return wrap(fieldData.loadDirect(context));
}
@Override
public SortField sortField(Object missingValue, MultiValueMode sortMode, Nested nested, boolean reverse) {
XFieldComparatorSource source = new BytesRefFieldComparatorSource(this, missingValue, sortMode, nested);
return new SortField(getFieldName(), source, reverse);
}
@Override
public BucketedSort newBucketedSort(
BigArrays bigArrays,
Object missingValue,
MultiValueMode sortMode,
Nested nested,
SortOrder sortOrder,
DocValueFormat format,
int bucketSize,
BucketedSort.ExtraData extra
) {
throw new UnsupportedOperationException("can't sort on the [" + CONTENT_TYPE + "] field");
}
};
}
};
}
}
private static LeafFieldData wrap(LeafFieldData in) {
return new LeafFieldData() {
@Override
public void close() {
in.close();
}
@Override
public long ramBytesUsed() {
return in.ramBytesUsed();
}
@Override
public DocValuesField<?> getScriptField(String name) {
return new DelegateDocValuesField(new ScriptDocValues.Strings(new ScriptDocValues.StringsSupplier(getBytesValues())), name);
}
@Override
public SortedBinaryDocValues getBytesValues() {
SortedBinaryDocValues inValues = in.getBytesValues();
return new SortedBinaryDocValues() {
@Override
public BytesRef nextValue() throws IOException {
BytesRef encoded = inValues.nextValue();
return new BytesRef(
Uid.decodeId(Arrays.copyOfRange(encoded.bytes, encoded.offset, encoded.offset + encoded.length))
);
}
@Override
public int docValueCount() {
final int count = inValues.docValueCount();
// If the count is not 1 then the impl is not correct as the binary representation
// does not preserve order. But id fields only have one value per doc so we are good.
assert count == 1;
return inValues.docValueCount();
}
@Override
public boolean advanceExact(int doc) throws IOException {
return inValues.advanceExact(doc);
}
};
}
};
}
public ProvidedIdFieldMapper(BooleanSupplier fieldDataEnabled) {
super(new IdFieldType(fieldDataEnabled), Lucene.KEYWORD_ANALYZER);
}
@Override
public void preParse(DocumentParserContext context) {
if (context.sourceToParse().id() == null) {
throw new IllegalStateException("_id should have been set on the coordinating node");
}
context.id(context.sourceToParse().id());
context.doc().add(standardIdField(context.id()));
}
}


@ -29,13 +29,13 @@ public class SourceToParse {
private final Map<String, String> dynamicTemplates;
public SourceToParse(
@Nullable String id,
BytesReference source,
XContentType xContentType,
@Nullable String routing,
Map<String, String> dynamicTemplates
) {
this.id = id;
// we always convert back to byte array, since we store it and Field only supports bytes..
// so, we might as well do it here, and improve the performance of working with direct byte arrays
this.source = new BytesArray(Objects.requireNonNull(source).toBytesRef());
@ -52,7 +52,7 @@ public class SourceToParse {
return this.source;
}
public String id() { // TODO migrate callers that use this to describe the document to a new method
return this.id;
}


@ -144,7 +144,9 @@ public class TimeSeriesIdFieldMapper extends MetadataFieldMapper {
assert fieldType().isIndexed() == false;
TimeSeriesIdBuilder timeSeriesIdBuilder = (TimeSeriesIdBuilder) context.getDimensions();
BytesRef timeSeriesId = timeSeriesIdBuilder.build().toBytesRef();
context.doc().add(new SortedDocValuesField(fieldType().name(), timeSeriesId));
TsidExtractingIdFieldMapper.INSTANCE.createField(context, timeSeriesId);
}
@Override


@ -0,0 +1,158 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/
package org.elasticsearch.index.mapper;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.IndexableField;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermInSetQuery;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.cluster.routing.IndexRouting;
import org.elasticsearch.common.hash.MurmurHash3;
import org.elasticsearch.common.hash.MurmurHash3.Hash128;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.util.ByteUtils;
import org.elasticsearch.index.fielddata.IndexFieldData;
import org.elasticsearch.index.query.SearchExecutionContext;
import org.elasticsearch.search.lookup.SearchLookup;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Locale;
import java.util.function.Supplier;
/**
 * A mapper for the {@code _id} field of time series indices, which derives
 * the {@code _id} from the {@code _tsid} and {@code @timestamp} while parsing.
 */
public class TsidExtractingIdFieldMapper extends IdFieldMapper {
private static final FieldType FIELD_TYPE = new FieldType();
static {
FIELD_TYPE.setTokenized(false);
FIELD_TYPE.setIndexOptions(IndexOptions.DOCS);
FIELD_TYPE.setStored(true); // TODO reconstruct the id on fetch from tsid and timestamp
FIELD_TYPE.setOmitNorms(true);
FIELD_TYPE.freeze();
}
public static final TsidExtractingIdFieldMapper INSTANCE = new TsidExtractingIdFieldMapper();
public static final TypeParser PARSER = new FixedTypeParser(MappingParserContext::idFieldMapper);
static final class IdFieldType extends TermBasedFieldType {
IdFieldType() {
super(NAME, true, true, false, TextSearchInfo.SIMPLE_MATCH_ONLY, Collections.emptyMap());
}
@Override
public String typeName() {
return CONTENT_TYPE;
}
@Override
public boolean isSearchable() {
// The _id field is always searchable.
return true;
}
@Override
public ValueFetcher valueFetcher(SearchExecutionContext context, String format) {
return new StoredValueFetcher(context.lookup(), NAME);
}
@Override
public Query termQuery(Object value, SearchExecutionContext context) {
return termsQuery(Arrays.asList(value), context);
}
@Override
public Query existsQuery(SearchExecutionContext context) {
return new MatchAllDocsQuery();
}
@Override
public Query termsQuery(Collection<?> values, SearchExecutionContext context) {
failIfNotIndexed();
BytesRef[] bytesRefs = values.stream().map(v -> {
Object idObject = v;
if (idObject instanceof BytesRef) {
idObject = ((BytesRef) idObject).utf8ToString();
}
return Uid.encodeId(idObject.toString());
}).toArray(BytesRef[]::new);
return new TermInSetQuery(name(), bytesRefs);
}
@Override
public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName, Supplier<SearchLookup> searchLookup) {
throw new IllegalArgumentException("Fielddata is not supported on [_id] field in [time_series] indices");
}
}
private TsidExtractingIdFieldMapper() {
super(new IdFieldType(), Lucene.KEYWORD_ANALYZER);
}
private static final long SEED = 0;
public void createField(DocumentParserContext context, BytesRef tsid) {
IndexableField[] timestampFields = context.rootDoc().getFields(DataStreamTimestampFieldMapper.DEFAULT_PATH);
if (timestampFields.length == 0) {
throw new IllegalArgumentException(
"data stream timestamp field [" + DataStreamTimestampFieldMapper.DEFAULT_PATH + "] is missing"
);
}
long timestamp = timestampFields[0].numericValue().longValue();
Hash128 hash = new Hash128();
MurmurHash3.hash128(tsid.bytes, tsid.offset, tsid.length, SEED, hash);
byte[] suffix = new byte[16];
ByteUtils.writeLongLE(hash.h1, suffix, 0);
ByteUtils.writeLongLE(timestamp, suffix, 8); // TODO compare disk usage for LE and BE on timestamp
IndexRouting.ExtractFromSource indexRouting = (IndexRouting.ExtractFromSource) context.indexSettings().getIndexRouting();
// TODO it'd be way faster to use the fields that we've extracted here rather than re-parsing the source or the tsid
String id = indexRouting.createId(context.sourceToParse().getXContentType(), context.sourceToParse().source(), suffix);
assert Uid.isURLBase64WithoutPadding(id); // Make sure we get to use Uid's nice optimizations
/*
* Make sure that the _id from extracting the tsid matches the _id
* from extracting the _source. This should be true for all valid
* documents with valid mappings. *But* some invalid mappings
* will not parse the field but be rejected later by the dynamic
* mappings machinery. So if there are any dynamic mappings
* at all we just skip the assertion because we can't be sure
* it always must pass.
*/
assert context.getDynamicMappers().isEmpty() == false
|| context.getDynamicRuntimeFields().isEmpty() == false
|| id.equals(indexRouting.createId(TimeSeriesIdFieldMapper.decodeTsid(tsid), suffix));
if (context.sourceToParse().id() != null && false == context.sourceToParse().id().equals(id)) {
throw new IllegalArgumentException(
String.format(
Locale.ROOT,
"_id must be unset or set to [%s] but was [%s] because [%s] is in time_series mode",
id,
context.sourceToParse().id(),
context.indexSettings().getIndexMetadata().getIndex().getName()
)
);
}
context.id(id);
BytesRef uidEncoded = Uid.encodeId(context.id());
context.doc().add(new Field(NAME, uidEncoded, FIELD_TYPE));
}
}
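For illustration only, here is a hedged sketch of reading that layout back out of a generated id; `TsdbIdLayout` and its methods are hypothetical names, not part of this change:
```
import java.util.Arrays;
import java.util.Base64;

/** Hypothetical decoder for the generated time series _id. Not part of this change. */
class TsdbIdLayout {
    /** The first four bytes hold the routing hash used to pick the shard. */
    static byte[] routingHash(String id) {
        return Arrays.copyOfRange(Base64.getUrlDecoder().decode(id), 0, 4);
    }

    /**
     * The final eight bytes hold the timestamp, little-endian as written above.
     * Assumes a millisecond-resolution @timestamp, so this returns epoch millis.
     */
    static long timestamp(String id) {
        byte[] raw = Base64.getUrlDecoder().decode(id); // 4 + 8 + 8 = 20 bytes
        long ts = 0;
        for (int i = 7; i >= 0; i--) {
            ts = (ts << 8) | (raw[12 + i] & 0xFFL);
        }
        return ts;
    }
}
```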


@ -958,7 +958,7 @@ public class IndexShard extends AbstractIndexShardComponent implements IndicesCl
);
Mapping update = operation.parsedDoc().dynamicMappingsUpdate();
if (update != null) {
return new Engine.IndexResult(update);
return new Engine.IndexResult(update, operation.parsedDoc().id());
}
} catch (Exception e) {
// We treat any exception during parsing and or mapping update as a document level failure
@ -966,7 +966,7 @@ public class IndexShard extends AbstractIndexShardComponent implements IndicesCl
// can not raise an exception that may block any replication of previous operations to the
// replicas
verifyNotClosed(e);
return new Engine.IndexResult(e, version, opPrimaryTerm, seqNo);
return new Engine.IndexResult(e, version, opPrimaryTerm, seqNo, sourceToParse.id());
}
return index(engine, operation);
@ -1096,12 +1096,12 @@ public class IndexShard extends AbstractIndexShardComponent implements IndicesCl
return engine.noOp(noOp);
}
public Engine.IndexResult getFailedIndexResult(Exception e, long version) {
return new Engine.IndexResult(e, version);
public Engine.IndexResult getFailedIndexResult(Exception e, long version, String id) {
return new Engine.IndexResult(e, version, id);
}
public Engine.DeleteResult getFailedDeleteResult(Exception e, long version) {
return new Engine.DeleteResult(e, version, getOperationPrimaryTerm());
public Engine.DeleteResult getFailedDeleteResult(Exception e, long version, String id) {
return new Engine.DeleteResult(e, version, getOperationPrimaryTerm(), id);
}
public Engine.DeleteResult applyDeleteOperationOnPrimary(


@ -73,6 +73,7 @@ import org.elasticsearch.env.ShardLockObtainFailedException;
import org.elasticsearch.gateway.MetaStateService;
import org.elasticsearch.gateway.MetadataStateFormat;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.IndexMode;
import org.elasticsearch.index.IndexModule;
import org.elasticsearch.index.IndexNotFoundException;
import org.elasticsearch.index.IndexService;
@ -142,6 +143,7 @@ import java.nio.file.Files;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.EnumMap;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
@ -237,7 +239,7 @@ public class IndicesService extends AbstractLifecycleComponent
private volatile boolean idFieldDataEnabled;
private volatile boolean allowExpensiveQueries;
private final IdFieldMapper idFieldMapper = new IdFieldMapper(() -> idFieldDataEnabled);
private final Function<IndexMode, IdFieldMapper> idFieldMappers;
@Nullable
private final EsThreadPoolExecutor danglingIndicesThreadPoolExecutor;
@ -359,6 +361,12 @@ public class IndicesService extends AbstractLifecycleComponent
}
});
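// Build one IdFieldMapper per IndexMode up front so each index created later
// uses the implementation matching its mode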
Map<IndexMode, IdFieldMapper> idFieldMappers = new EnumMap<>(IndexMode.class);
for (IndexMode mode : IndexMode.values()) {
idFieldMappers.put(mode, mode.buildIdFieldMapper(() -> idFieldDataEnabled));
}
this.idFieldMappers = idFieldMappers::get;
final String nodeName = Objects.requireNonNull(Node.NODE_NAME_SETTING.get(settings));
nodeWriteDanglingIndicesInfo = WRITE_DANGLING_INDICES_INFO_SETTING.get(settings);
danglingIndicesThreadPoolExecutor = nodeWriteDanglingIndicesInfo
@ -717,7 +725,7 @@ public class IndicesService extends AbstractLifecycleComponent
mapperRegistry,
indicesFieldDataCache,
namedWriteableRegistry,
idFieldMapper,
idFieldMappers.apply(idxSettings.getMode()),
valuesSourceRegistry,
indexFoldersDeletionListeners,
snapshotCommitSuppliers


@ -48,7 +48,9 @@ public class BulkPrimaryExecutionContextTests extends ESTestCase {
visitedRequests.add(context.getCurrent());
context.setRequestToExecute(context.getCurrent());
// using failures means we don't have to care about the concrete result types
context.markOperationAsExecuted(new Engine.IndexResult(new ElasticsearchException("bla"), 1));
context.markOperationAsExecuted(
new Engine.IndexResult(new ElasticsearchException("bla"), 1, context.getRequestToExecute().id())
);
context.markAsCompleted(context.getExecutionResult());
}
@ -97,25 +99,25 @@ public class BulkPrimaryExecutionContextTests extends ESTestCase {
case INDEX, CREATE -> {
context.setRequestToExecute(current);
if (failure) {
result = new Engine.IndexResult(new ElasticsearchException("bla"), 1);
result = new Engine.IndexResult(new ElasticsearchException("bla"), 1, current.id());
} else {
result = new FakeIndexResult(1, 1, randomLongBetween(0, 200), randomBoolean(), location);
result = new FakeIndexResult(1, 1, randomLongBetween(0, 200), randomBoolean(), location, "id");
}
}
case UPDATE -> {
context.setRequestToExecute(new IndexRequest(current.index()).id(current.id()));
if (failure) {
result = new Engine.IndexResult(new ElasticsearchException("bla"), 1, 1, 1);
result = new Engine.IndexResult(new ElasticsearchException("bla"), 1, 1, 1, current.id());
} else {
result = new FakeIndexResult(1, 1, randomLongBetween(0, 200), randomBoolean(), location);
result = new FakeIndexResult(1, 1, randomLongBetween(0, 200), randomBoolean(), location, "id");
}
}
case DELETE -> {
context.setRequestToExecute(current);
if (failure) {
result = new Engine.DeleteResult(new ElasticsearchException("bla"), 1, 1);
result = new Engine.DeleteResult(new ElasticsearchException("bla"), 1, 1, current.id());
} else {
result = new FakeDeleteResult(1, 1, randomLongBetween(0, 200), randomBoolean(), location);
result = new FakeDeleteResult(1, 1, randomLongBetween(0, 200), randomBoolean(), location, current.id());
}
}
default -> throw new AssertionError("unknown type:" + current.opType());


@ -769,7 +769,7 @@ public class TransportBulkActionIngestTests extends ESTestCase {
any(),
eq(Names.WRITE)
);
indexRequest1.process();
indexRequest1.autoGenerateId();
completionHandler.getValue().accept(Thread.currentThread(), null);
// check failure passed through to the listener


@ -67,6 +67,7 @@ import static org.hamcrest.Matchers.nullValue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyBoolean;
import static org.mockito.ArgumentMatchers.anyLong;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;
@ -254,10 +255,11 @@ public class TransportShardBulkActionTests extends IndexShardTestCase {
BulkShardRequest bulkShardRequest = new BulkShardRequest(shardId, RefreshPolicy.NONE, items);
Engine.IndexResult mappingUpdate = new Engine.IndexResult(
new Mapping(mock(RootObjectMapper.class), new MetadataFieldMapper[0], Collections.emptyMap())
new Mapping(mock(RootObjectMapper.class), new MetadataFieldMapper[0], Collections.emptyMap()),
"id"
);
Translog.Location resultLocation = new Translog.Location(42, 42, 42);
Engine.IndexResult success = new FakeIndexResult(1, 1, 13, true, resultLocation);
Engine.IndexResult success = new FakeIndexResult(1, 1, 13, true, resultLocation, "id");
IndexShard shard = mock(IndexShard.class);
when(shard.shardId()).thenReturn(shardId);
@ -523,7 +525,7 @@ public class TransportShardBulkActionTests extends IndexShardTestCase {
IndexRequest updateResponse = new IndexRequest("index").id("id").source(Requests.INDEX_CONTENT_TYPE, "field", "value");
Exception err = new ElasticsearchException("I'm dead <(x.x)>");
Engine.IndexResult indexResult = new Engine.IndexResult(err, 0, 0, 0);
Engine.IndexResult indexResult = new Engine.IndexResult(err, 0, 0, 0, "id");
IndexShard shard = mock(IndexShard.class);
when(shard.applyIndexOperationOnPrimary(anyLong(), any(), any(), anyLong(), anyLong(), anyLong(), anyBoolean())).thenReturn(
indexResult
@ -580,7 +582,7 @@ public class TransportShardBulkActionTests extends IndexShardTestCase {
IndexRequest updateResponse = new IndexRequest("index").id("id").source(Requests.INDEX_CONTENT_TYPE, "field", "value");
Exception err = new VersionConflictEngineException(shardId, "id", "I'm conflicted <(;_;)>");
Engine.IndexResult indexResult = new Engine.IndexResult(err, 0, 0, 0);
Engine.IndexResult indexResult = new Engine.IndexResult(err, 0, 0, 0, "id");
IndexShard shard = mock(IndexShard.class);
when(shard.applyIndexOperationOnPrimary(anyLong(), any(), any(), anyLong(), anyLong(), anyLong(), anyBoolean())).thenReturn(
indexResult
@ -636,7 +638,7 @@ public class TransportShardBulkActionTests extends IndexShardTestCase {
boolean created = randomBoolean();
Translog.Location resultLocation = new Translog.Location(42, 42, 42);
Engine.IndexResult indexResult = new FakeIndexResult(1, 1, 13, created, resultLocation);
Engine.IndexResult indexResult = new FakeIndexResult(1, 1, 13, created, resultLocation, "id");
IndexShard shard = mock(IndexShard.class);
when(shard.applyIndexOperationOnPrimary(anyLong(), any(), any(), anyLong(), anyLong(), anyLong(), anyBoolean())).thenReturn(
indexResult
@ -694,7 +696,7 @@ public class TransportShardBulkActionTests extends IndexShardTestCase {
boolean found = randomBoolean();
Translog.Location resultLocation = new Translog.Location(42, 42, 42);
final long resultSeqNo = 13;
Engine.DeleteResult deleteResult = new FakeDeleteResult(1, 1, resultSeqNo, found, resultLocation);
Engine.DeleteResult deleteResult = new FakeDeleteResult(1, 1, resultSeqNo, found, resultLocation, "id");
IndexShard shard = mock(IndexShard.class);
when(shard.applyDeleteOperationOnPrimary(anyLong(), any(), any(), anyLong(), anyLong())).thenReturn(deleteResult);
when(shard.indexSettings()).thenReturn(indexSettings);
@ -848,12 +850,13 @@ public class TransportShardBulkActionTests extends IndexShardTestCase {
IndexRequest updateResponse = new IndexRequest("index").id("id").source(Requests.INDEX_CONTENT_TYPE, "field", "value");
Exception err = new VersionConflictEngineException(shardId, "id", "I'm conflicted <(;_;)>");
Engine.IndexResult conflictedResult = new Engine.IndexResult(err, 0);
Engine.IndexResult conflictedResult = new Engine.IndexResult(err, 0, "id");
Engine.IndexResult mappingUpdate = new Engine.IndexResult(
new Mapping(mock(RootObjectMapper.class), new MetadataFieldMapper[0], Collections.emptyMap())
new Mapping(mock(RootObjectMapper.class), new MetadataFieldMapper[0], Collections.emptyMap()),
"id"
);
Translog.Location resultLocation = new Translog.Location(42, 42, 42);
Engine.IndexResult success = new FakeIndexResult(1, 1, 13, true, resultLocation);
Engine.IndexResult success = new FakeIndexResult(1, 1, 13, true, resultLocation, "id");
IndexShard shard = mock(IndexShard.class);
when(shard.applyIndexOperationOnPrimary(anyLong(), any(), any(), anyLong(), anyLong(), anyLong(), anyBoolean())).thenAnswer(ir -> {
@ -941,12 +944,13 @@ public class TransportShardBulkActionTests extends IndexShardTestCase {
BulkShardRequest bulkShardRequest = new BulkShardRequest(shardId, RefreshPolicy.NONE, items);
Engine.IndexResult mappingUpdate = new Engine.IndexResult(
new Mapping(mock(RootObjectMapper.class), new MetadataFieldMapper[0], Collections.emptyMap())
new Mapping(mock(RootObjectMapper.class), new MetadataFieldMapper[0], Collections.emptyMap()),
"id"
);
Translog.Location resultLocation1 = new Translog.Location(42, 36, 36);
Translog.Location resultLocation2 = new Translog.Location(42, 42, 42);
Engine.IndexResult success1 = new FakeIndexResult(1, 1, 10, true, resultLocation1);
Engine.IndexResult success2 = new FakeIndexResult(1, 1, 13, true, resultLocation2);
Engine.IndexResult success1 = new FakeIndexResult(1, 1, 10, true, resultLocation1, "id");
Engine.IndexResult success2 = new FakeIndexResult(1, 1, 13, true, resultLocation2, "id");
IndexShard shard = mock(IndexShard.class);
when(shard.shardId()).thenReturn(shardId);
@ -955,7 +959,7 @@ public class TransportShardBulkActionTests extends IndexShardTestCase {
mappingUpdate,
success2
);
when(shard.getFailedIndexResult(any(EsRejectedExecutionException.class), anyLong())).thenCallRealMethod();
when(shard.getFailedIndexResult(any(EsRejectedExecutionException.class), anyLong(), anyString())).thenCallRealMethod();
when(shard.mapperService()).thenReturn(mock(MapperService.class));
randomlySetIgnoredPrimaryResponse(items[0]);
@ -1084,8 +1088,8 @@ public class TransportShardBulkActionTests extends IndexShardTestCase {
private final Translog.Location location;
protected FakeIndexResult(long version, long term, long seqNo, boolean created, Translog.Location location) {
super(version, term, seqNo, created);
protected FakeIndexResult(long version, long term, long seqNo, boolean created, Translog.Location location, String id) {
super(version, term, seqNo, created, id);
this.location = location;
}
@ -1102,8 +1106,8 @@ public class TransportShardBulkActionTests extends IndexShardTestCase {
private final Translog.Location location;
protected FakeDeleteResult(long version, long term, long seqNo, boolean found, Translog.Location location) {
super(version, term, seqNo, found);
protected FakeDeleteResult(long version, long term, long seqNo, boolean found, Translog.Location location, String id) {
super(version, term, seqNo, found, id);
this.location = location;
}


@ -118,13 +118,10 @@ public class IndexRequestTests extends ESTestCase {
expectThrows(IllegalArgumentException.class, () -> request.waitForActiveShards(ActiveShardCount.from(randomIntBetween(-10, -1))));
}
public void testAutoGenIdTimestampIsSet() {
public void testAutoGenerateId() {
IndexRequest request = new IndexRequest("index");
request.process();
request.autoGenerateId();
assertTrue("expected > 0 but got: " + request.getAutoGeneratedTimestamp(), request.getAutoGeneratedTimestamp() > 0);
request = new IndexRequest("index").id("1");
request.process();
assertEquals(IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, request.getAutoGeneratedTimestamp());
}
public void testIndexResponse() {


@ -9,11 +9,14 @@ package org.elasticsearch.cluster.routing;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.StringHelper;
import org.elasticsearch.ResourceNotFoundException;
import org.elasticsearch.Version;
import org.elasticsearch.action.RoutingMissingException;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.core.Nullable;
import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.VersionUtils;
@ -35,8 +38,41 @@ import java.util.concurrent.atomic.AtomicInteger;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.not;
import static org.hamcrest.Matchers.nullValue;
public class IndexRoutingTests extends ESTestCase {
public void testSimpleRoutingRejectsEmptyId() {
IndexRouting indexRouting = IndexRouting.fromIndexMetadata(
IndexMetadata.builder("test").settings(settings(Version.CURRENT)).numberOfShards(2).numberOfReplicas(1).build()
);
IndexRequest req = new IndexRequest().id("");
Exception e = expectThrows(IllegalArgumentException.class, () -> indexRouting.process(req));
assertThat(e.getMessage(), equalTo("if _id is specified it must not be empty"));
}
public void testSimpleRoutingAcceptsId() {
IndexRouting indexRouting = IndexRouting.fromIndexMetadata(
IndexMetadata.builder("test").settings(settings(Version.CURRENT)).numberOfShards(2).numberOfReplicas(1).build()
);
String id = randomAlphaOfLength(10);
IndexRequest req = new IndexRequest().id(id);
indexRouting.process(req);
assertThat(req.id(), equalTo(id));
assertThat(req.getAutoGeneratedTimestamp(), equalTo(IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP));
}
public void testSimpleRoutingAssignedRandomId() {
IndexRouting indexRouting = IndexRouting.fromIndexMetadata(
IndexMetadata.builder("test").settings(settings(Version.CURRENT)).numberOfShards(2).numberOfReplicas(1).build()
);
IndexRequest req = new IndexRequest();
indexRouting.process(req);
req.checkAutoIdWithOpTypeCreateSupportedByVersion(null);
assertThat(req.id(), not(nullValue()));
assertThat(req.getAutoGeneratedTimestamp(), not(equalTo(IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP)));
}
public void testGenerateShardId() {
int[][] possibleValues = new int[][] { { 8, 4, 2 }, { 20, 10, 2 }, { 36, 12, 3 }, { 15, 5, 1 } };
for (int i = 0; i < 10; i++) {
@ -129,7 +165,10 @@ public class IndexRoutingTests extends ESTestCase {
}
public void testPartitionedIndex() {
// make sure the same routing value always has each _id fall within the configured partition size
/*
* make sure the same routing value always has each _id fall within the
* configured partition size
*/
for (int shards = 1; shards < 5; shards++) {
for (int partitionSize = 1; partitionSize == 1 || partitionSize < shards; partitionSize++) {
IndexRouting indexRouting = IndexRouting.fromIndexMetadata(
@ -414,8 +453,8 @@ public class IndexRoutingTests extends ESTestCase {
/**
* Extract a shardId from a "simple" {@link IndexRouting} using a randomly
* chosen method. All of the random methods <strong>should</strong> return
* the same results.
* chosen method. All of the random methods <strong>should</strong> return the
* same results.
*/
private int shardIdFromSimple(IndexRouting indexRouting, String id, @Nullable String routing) {
return switch (between(0, 3)) {
@ -427,16 +466,25 @@ public class IndexRoutingTests extends ESTestCase {
};
}
public void testRoutingPathSpecifiedRouting() throws IOException {
IndexRouting routing = indexRoutingForPath(between(1, 5), randomAlphaOfLength(5));
Exception e = expectThrows(
IllegalArgumentException.class,
() -> routing.indexShard(null, randomAlphaOfLength(5), XContentType.JSON, source(Map.of()))
);
assertThat(
e.getMessage(),
equalTo("indexing with a specified routing is not supported because the destination index [test] is in time series mode")
);
public void testRoutingAllowsId() {
IndexRouting indexRouting = indexRoutingForPath(between(1, 5), randomAlphaOfLength(5));
String id = randomAlphaOfLength(5);
IndexRequest req = new IndexRequest().id(id);
indexRouting.process(req);
assertThat(req.id(), equalTo(id));
}
/**
* {@code routing_path} based {@link IndexRouting} implementations do
* not assign an {@code _id} on the coordinating node; instead they
* rely on the {@link IdFieldMapper} implementation to assign the
* {@code _id} as part of parsing the document.
*/
public void testRoutingPathLeavesIdNull() {
IndexRouting indexRouting = indexRoutingForPath(between(1, 5), randomAlphaOfLength(5));
IndexRequest req = new IndexRequest();
indexRouting.process(req);
assertThat(req.id(), nullValue());
}
public void testRoutingPathEmptySource() throws IOException {
@ -466,22 +514,19 @@ public class IndexRoutingTests extends ESTestCase {
assertThat(e.getMessage(), equalTo("update is not supported because the destination index [test] is in time series mode"));
}
public void testRoutingPathDelete() throws IOException {
IndexRouting routing = indexRoutingForPath(between(1, 5), "foo");
public void testRoutingIndexWithRouting() throws IOException {
IndexRouting indexRouting = indexRoutingForPath(5, "foo");
String value = randomAlphaOfLength(5);
BytesReference source = source(Map.of("foo", value));
String docRouting = randomAlphaOfLength(5);
Exception e = expectThrows(
IllegalArgumentException.class,
() -> routing.deleteShard(randomAlphaOfLength(5), randomBoolean() ? null : randomAlphaOfLength(5))
() -> indexRouting.indexShard(randomAlphaOfLength(5), docRouting, XContentType.JSON, source)
);
assertThat(e.getMessage(), equalTo("delete is not supported because the destination index [test] is in time series mode"));
}
public void testRoutingPathGet() throws IOException {
IndexRouting routing = indexRoutingForPath(between(1, 5), "foo");
Exception e = expectThrows(
IllegalArgumentException.class,
() -> routing.getShard(randomAlphaOfLength(5), randomBoolean() ? null : randomAlphaOfLength(5))
assertThat(
e.getMessage(),
equalTo("specifying routing is not supported because the destination index [test] is in time series mode")
);
assertThat(e.getMessage(), equalTo("get is not supported because the destination index [test] is in time series mode"));
}
public void testRoutingPathCollectSearchWithRouting() throws IOException {
@ -555,6 +600,29 @@ public class IndexRoutingTests extends ESTestCase {
assertIndexShard(routing, Map.of("dim.a", "a"), 4);
}
public void testRoutingPathReadWithInvalidString() throws IOException {
int shards = between(2, 1000);
IndexRouting indexRouting = indexRoutingForPath(shards, "foo");
Exception e = expectThrows(ResourceNotFoundException.class, () -> shardIdForReadFromSourceExtracting(indexRouting, "!@#"));
assertThat(e.getMessage(), equalTo("invalid id [!@#] for index [test] in time series mode"));
}
public void testRoutingPathReadWithShortString() throws IOException {
int shards = between(2, 1000);
IndexRouting indexRouting = indexRoutingForPath(shards, "foo");
Exception e = expectThrows(ResourceNotFoundException.class, () -> shardIdForReadFromSourceExtracting(indexRouting, ""));
assertThat(e.getMessage(), equalTo("invalid id [] for index [test] in time series mode"));
}
/**
* Extract a shardId from an {@link IndexRouting} that extracts routing
* from the _source, using a randomly chosen method. All of the random
* methods <strong>should</strong> return the same results.
*/
private int shardIdForReadFromSourceExtracting(IndexRouting indexRouting, String id) {
return randomBoolean() ? indexRouting.deleteShard(id, null) : indexRouting.getShard(id, null);
}
private IndexRouting indexRoutingForPath(int shards, String path) {
return indexRoutingForPath(Version.CURRENT, shards, path);
}
@ -569,8 +637,23 @@ public class IndexRoutingTests extends ESTestCase {
);
}
private void assertIndexShard(IndexRouting routing, Map<String, Object> source, int expected) throws IOException {
assertThat(routing.indexShard(randomAlphaOfLength(5), null, XContentType.JSON, source(source)), equalTo(expected));
private void assertIndexShard(IndexRouting routing, Map<String, Object> source, int expectedShard) throws IOException {
byte[] suffix = randomSuffix();
BytesReference sourceBytes = source(source);
assertThat(routing.indexShard(randomAlphaOfLength(5), null, XContentType.JSON, sourceBytes), equalTo(expectedShard));
IndexRouting.ExtractFromSource r = (IndexRouting.ExtractFromSource) routing;
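// The id built from the source must route gets and deletes back to the shard
// that indexed the document, and building it from a flattened copy of the
// source must yield the same id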
String idFromSource = r.createId(XContentType.JSON, sourceBytes, suffix);
assertThat(shardIdForReadFromSourceExtracting(routing, idFromSource), equalTo(expectedShard));
String idFromFlattened = r.createId(flatten(source), suffix);
assertThat(idFromFlattened, equalTo(idFromSource));
}
private byte[] randomSuffix() {
byte[] suffix = new byte[between(0, 10)];
for (int i = 0; i < suffix.length; i++) {
suffix[i] = randomByte();
}
return suffix;
}
private BytesReference source(Map<String, Object> doc) throws IOException {
@ -587,6 +670,23 @@ public class IndexRoutingTests extends ESTestCase {
);
}
private Map<String, Object> flatten(Map<String, Object> m) {
Map<String, Object> result = new HashMap<>();
flatten(result, null, m);
return result;
}
private void flatten(Map<String, Object> result, String path, Map<?, ?> m) {
for (Map.Entry<?, ?> e : m.entrySet()) {
String subPath = path == null ? e.getKey().toString() : path + "." + e.getKey();
if (e.getValue() instanceof Map<?, ?> subM) {
flatten(result, subPath, subM);
} else {
result.put(subPath, e.getValue());
}
}
}
/**
* Build the hash we expect from the extractor.
*/


@ -22,6 +22,7 @@ import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.lucene.uid.VersionsAndSeqNoResolver.DocIdAndVersion;
import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.mapper.SeqNoFieldMapper;
import org.elasticsearch.index.mapper.VersionFieldMapper;
import org.elasticsearch.test.ESTestCase;
@ -43,7 +44,7 @@ public class VersionLookupTests extends ESTestCase {
.setMergePolicy(NoMergePolicy.INSTANCE)
);
Document doc = new Document();
doc.add(new Field(IdFieldMapper.NAME, "6", IdFieldMapper.Defaults.FIELD_TYPE));
doc.add(new Field(IdFieldMapper.NAME, "6", ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
doc.add(new NumericDocValuesField(VersionFieldMapper.NAME, 87));
doc.add(new NumericDocValuesField(SeqNoFieldMapper.NAME, randomNonNegativeLong()));
doc.add(new NumericDocValuesField(SeqNoFieldMapper.PRIMARY_TERM_NAME, randomLongBetween(1, Long.MAX_VALUE)));
@ -78,7 +79,7 @@ public class VersionLookupTests extends ESTestCase {
Directory dir = newDirectory();
IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER).setMergePolicy(NoMergePolicy.INSTANCE));
Document doc = new Document();
doc.add(new Field(IdFieldMapper.NAME, "6", IdFieldMapper.Defaults.FIELD_TYPE));
doc.add(new Field(IdFieldMapper.NAME, "6", ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
doc.add(new NumericDocValuesField(VersionFieldMapper.NAME, 87));
doc.add(new NumericDocValuesField(SeqNoFieldMapper.NAME, randomNonNegativeLong()));
doc.add(new NumericDocValuesField(SeqNoFieldMapper.PRIMARY_TERM_NAME, randomLongBetween(1, Long.MAX_VALUE)));


@ -19,6 +19,7 @@ import org.elasticsearch.Version;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.lucene.index.ElasticsearchDirectoryReader;
import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.mapper.SeqNoFieldMapper;
import org.elasticsearch.index.mapper.VersionFieldMapper;
import org.elasticsearch.index.shard.ShardId;
@ -56,7 +57,7 @@ public class VersionsTests extends ESTestCase {
assertThat(loadDocIdAndVersion(directoryReader, new Term(IdFieldMapper.NAME, "1"), randomBoolean()), nullValue());
Document doc = new Document();
doc.add(new Field(IdFieldMapper.NAME, "1", IdFieldMapper.Defaults.FIELD_TYPE));
doc.add(new Field(IdFieldMapper.NAME, "1", ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
doc.add(new NumericDocValuesField(VersionFieldMapper.NAME, 1));
doc.add(new NumericDocValuesField(SeqNoFieldMapper.NAME, randomNonNegativeLong()));
doc.add(new NumericDocValuesField(SeqNoFieldMapper.PRIMARY_TERM_NAME, randomLongBetween(1, Long.MAX_VALUE)));
@ -65,7 +66,7 @@ public class VersionsTests extends ESTestCase {
assertThat(loadDocIdAndVersion(directoryReader, new Term(IdFieldMapper.NAME, "1"), randomBoolean()).version, equalTo(1L));
doc = new Document();
Field uid = new Field(IdFieldMapper.NAME, "1", IdFieldMapper.Defaults.FIELD_TYPE);
Field uid = new Field(IdFieldMapper.NAME, "1", ProvidedIdFieldMapper.Defaults.FIELD_TYPE);
Field version = new NumericDocValuesField(VersionFieldMapper.NAME, 2);
doc.add(uid);
doc.add(version);
@ -103,12 +104,12 @@ public class VersionsTests extends ESTestCase {
for (int i = 0; i < 4; ++i) {
// Nested
Document doc = new Document();
doc.add(new Field(IdFieldMapper.NAME, "1", IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
doc.add(new Field(IdFieldMapper.NAME, "1", ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
docs.add(doc);
}
// Root
Document doc = new Document();
doc.add(new Field(IdFieldMapper.NAME, "1", IdFieldMapper.Defaults.FIELD_TYPE));
doc.add(new Field(IdFieldMapper.NAME, "1", ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
NumericDocValuesField version = new NumericDocValuesField(VersionFieldMapper.NAME, 5L);
doc.add(version);
doc.add(new NumericDocValuesField(SeqNoFieldMapper.NAME, randomNonNegativeLong()));
@ -141,7 +142,7 @@ public class VersionsTests extends ESTestCase {
Directory dir = newDirectory();
IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER));
Document doc = new Document();
doc.add(new Field(IdFieldMapper.NAME, "6", IdFieldMapper.Defaults.FIELD_TYPE));
doc.add(new Field(IdFieldMapper.NAME, "6", ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
doc.add(new NumericDocValuesField(VersionFieldMapper.NAME, 87));
doc.add(new NumericDocValuesField(SeqNoFieldMapper.NAME, randomNonNegativeLong()));
doc.add(new NumericDocValuesField(SeqNoFieldMapper.PRIMARY_TERM_NAME, randomLongBetween(1, Long.MAX_VALUE)));
@ -168,7 +169,7 @@ public class VersionsTests extends ESTestCase {
Directory dir = newDirectory();
IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Lucene.STANDARD_ANALYZER));
Document doc = new Document();
doc.add(new Field(IdFieldMapper.NAME, "6", IdFieldMapper.Defaults.FIELD_TYPE));
doc.add(new Field(IdFieldMapper.NAME, "6", ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
doc.add(new NumericDocValuesField(VersionFieldMapper.NAME, 87));
doc.add(new NumericDocValuesField(SeqNoFieldMapper.NAME, randomNonNegativeLong()));
doc.add(new NumericDocValuesField(SeqNoFieldMapper.PRIMARY_TERM_NAME, randomLongBetween(1, Long.MAX_VALUE)));


@ -52,7 +52,6 @@ import org.elasticsearch.index.engine.Engine;
import org.elasticsearch.index.engine.InternalEngineFactory;
import org.elasticsearch.index.engine.InternalEngineTests;
import org.elasticsearch.index.fielddata.IndexFieldDataCache;
import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.MapperRegistry;
import org.elasticsearch.index.mapper.ParsedDocument;
import org.elasticsearch.index.mapper.Uid;
@ -195,7 +194,7 @@ public class IndexModuleTests extends ESTestCase {
mapperRegistry,
new IndicesFieldDataCache(settings, listener),
writableRegistry(),
IdFieldMapper.NO_FIELD_DATA,
module.indexSettings().getMode().buildNoFieldDataIdFieldMapper(),
null,
indexDeletionListener,
emptyMap()


@ -22,7 +22,6 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.analysis.IndexAnalyzers;
import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.MapperRegistry;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.similarity.SimilarityService;
@ -91,7 +90,7 @@ public class CodecTests extends ESTestCase {
similarityService,
mapperRegistry,
() -> null,
IdFieldMapper.NO_FIELD_DATA,
settings.getMode().buildNoFieldDataIdFieldMapper(),
ScriptCompiler.NONE
);
return new CodecService(service);


@ -104,6 +104,7 @@ import org.elasticsearch.index.mapper.LuceneDocument;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.MappingLookup;
import org.elasticsearch.index.mapper.ParsedDocument;
import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.mapper.SeqNoFieldMapper;
import org.elasticsearch.index.mapper.SourceFieldMapper;
import org.elasticsearch.index.mapper.Uid;
@ -1532,7 +1533,7 @@ public class InternalEngineTests extends EngineTestCase {
)
) {
org.apache.lucene.document.Document doc = new org.apache.lucene.document.Document();
doc.add(new Field(IdFieldMapper.NAME, "1", IdFieldMapper.Defaults.FIELD_TYPE));
doc.add(new Field(IdFieldMapper.NAME, "1", ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
doc.add(new NumericDocValuesField(VersionFieldMapper.NAME, -1));
doc.add(new NumericDocValuesField(SeqNoFieldMapper.NAME, 1));
doc.add(new NumericDocValuesField(SeqNoFieldMapper.PRIMARY_TERM_NAME, 1));
@ -5469,7 +5470,7 @@ public class InternalEngineTests extends EngineTestCase {
)
) {
final String id = "id";
final Field uidField = new Field("_id", id, IdFieldMapper.Defaults.FIELD_TYPE);
final Field uidField = new Field("_id", id, ProvidedIdFieldMapper.Defaults.FIELD_TYPE);
final Field versionField = new NumericDocValuesField("_version", 0);
final SeqNoFieldMapper.SequenceIDFields seqID = SeqNoFieldMapper.SequenceIDFields.emptySeqID();
final LuceneDocument document = new LuceneDocument();


@ -287,7 +287,7 @@ public class DocumentMapperTests extends MapperServiceTestCase {
DocumentMapper documentMapper = DocumentMapper.createEmpty(mapperService);
assertEquals("{\"_doc\":{}}", Strings.toString(documentMapper.mapping()));
assertTrue(documentMapper.mappers().hasMappings());
assertNotNull(documentMapper.idFieldMapper());
assertNotNull(documentMapper.mappers().getMapper(IdFieldMapper.NAME));
assertNotNull(documentMapper.sourceMapper());
assertNotNull(documentMapper.IndexFieldMapper());
List<Class<?>> metadataMappers = new ArrayList<>(documentMapper.mappers().getMapping().getMetadataMappersMap().keySet());
@ -297,10 +297,10 @@ public class DocumentMapperTests extends MapperServiceTestCase {
matchesList().item(DataStreamTimestampFieldMapper.class)
.item(DocCountFieldMapper.class)
.item(FieldNamesFieldMapper.class)
.item(IdFieldMapper.class)
.item(IgnoredFieldMapper.class)
.item(IndexFieldMapper.class)
.item(NestedPathFieldMapper.class)
.item(ProvidedIdFieldMapper.class)
.item(RoutingFieldMapper.class)
.item(SeqNoFieldMapper.class)
.item(SourceFieldMapper.class)


@ -468,14 +468,14 @@ public class DocumentParserTests extends MapperServiceTestCase {
// Nested document:
assertNotNull(result.docs().get(0).getField(IdFieldMapper.NAME));
assertEquals(Uid.encodeId("1"), result.docs().get(0).getField(IdFieldMapper.NAME).binaryValue());
assertEquals(IdFieldMapper.Defaults.NESTED_FIELD_TYPE, result.docs().get(0).getField(IdFieldMapper.NAME).fieldType());
assertEquals(ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE, result.docs().get(0).getField(IdFieldMapper.NAME).fieldType());
assertNotNull(result.docs().get(0).getField(NestedPathFieldMapper.NAME));
assertEquals("foo", result.docs().get(0).getField(NestedPathFieldMapper.NAME).stringValue());
assertEquals("value1", result.docs().get(0).getField("foo.bar").binaryValue().utf8ToString());
// Root document:
assertNotNull(result.docs().get(1).getField(IdFieldMapper.NAME));
assertEquals(Uid.encodeId("1"), result.docs().get(1).getField(IdFieldMapper.NAME).binaryValue());
assertEquals(IdFieldMapper.Defaults.FIELD_TYPE, result.docs().get(1).getField(IdFieldMapper.NAME).fieldType());
assertEquals(ProvidedIdFieldMapper.Defaults.FIELD_TYPE, result.docs().get(1).getField(IdFieldMapper.NAME).fieldType());
assertNull(result.docs().get(1).getField(NestedPathFieldMapper.NAME));
assertEquals("value2", result.docs().get(1).getField("baz").binaryValue().utf8ToString());
}
@ -1536,7 +1536,7 @@ public class DocumentParserTests extends MapperServiceTestCase {
DocumentMapper builtDocMapper = createDocumentMapper(builtMapping);
BytesReference json = new BytesArray(copyToBytesFromClasspath("/org/elasticsearch/index/mapper/simple/test1.json"));
LuceneDocument doc = builtDocMapper.parse(new SourceToParse("1", json, XContentType.JSON)).rootDoc();
assertThat(doc.getBinaryValue(builtDocMapper.idFieldMapper().name()), equalTo(Uid.encodeId("1")));
assertThat(doc.getBinaryValue(IdFieldMapper.NAME), equalTo(Uid.encodeId("1")));
assertThat(doc.get(builtDocMapper.mappers().getMapper("name.first").name()), equalTo("shay"));
}
@ -1548,7 +1548,7 @@ public class DocumentParserTests extends MapperServiceTestCase {
BytesReference json = new BytesArray(copyToBytesFromClasspath("/org/elasticsearch/index/mapper/simple/test1.json"));
LuceneDocument doc = docMapper.parse(new SourceToParse("1", json, XContentType.JSON)).rootDoc();
assertThat(doc.getBinaryValue(docMapper.idFieldMapper().name()), equalTo(Uid.encodeId("1")));
assertThat(doc.getBinaryValue(IdFieldMapper.NAME), equalTo(Uid.encodeId("1")));
assertThat(doc.get(docMapper.mappers().getMapper("name.first").name()), equalTo("shay"));
}
@ -1557,7 +1557,7 @@ public class DocumentParserTests extends MapperServiceTestCase {
DocumentMapper docMapper = createDocumentMapper(mapping);
BytesReference json = new BytesArray(copyToBytesFromClasspath("/org/elasticsearch/index/mapper/simple/test1-notype-noid.json"));
LuceneDocument doc = docMapper.parse(new SourceToParse("1", json, XContentType.JSON)).rootDoc();
assertThat(doc.getBinaryValue(docMapper.idFieldMapper().name()), equalTo(Uid.encodeId("1")));
assertThat(doc.getBinaryValue(IdFieldMapper.NAME), equalTo(Uid.encodeId("1")));
assertThat(doc.get(docMapper.mappers().getMapper("name.first").name()), equalTo("shay"));
}


@ -21,7 +21,9 @@ import org.mockito.Mockito;
public class IdFieldTypeTests extends ESTestCase {
public void testRangeQuery() {
MappedFieldType ft = new IdFieldMapper.IdFieldType(() -> false);
MappedFieldType ft = randomBoolean()
? new ProvidedIdFieldMapper.IdFieldType(() -> false)
: new TsidExtractingIdFieldMapper.IdFieldType();
IllegalArgumentException e = expectThrows(
IllegalArgumentException.class,
() -> ft.rangeQuery(null, null, randomBoolean(), randomBoolean(), null, null, null, null)
@ -31,26 +33,33 @@ public class IdFieldTypeTests extends ESTestCase {
public void testTermsQuery() {
SearchExecutionContext context = Mockito.mock(SearchExecutionContext.class);
Settings indexSettings = Settings.builder()
Settings.Builder indexSettings = Settings.builder()
.put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT)
.put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 0)
.put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1)
.put(IndexMetadata.SETTING_INDEX_UUID, UUIDs.randomBase64UUID())
.build();
.put(IndexMetadata.SETTING_INDEX_UUID, UUIDs.randomBase64UUID());
if (randomBoolean()) {
indexSettings.put(IndexSettings.MODE.getKey(), "time_series");
indexSettings.put(IndexMetadata.INDEX_ROUTING_PATH.getKey(), "foo");
}
IndexMetadata indexMetadata = IndexMetadata.builder(IndexMetadata.INDEX_UUID_NA_VALUE).settings(indexSettings).build();
IndexSettings mockSettings = new IndexSettings(indexMetadata, Settings.EMPTY);
Mockito.when(context.getIndexSettings()).thenReturn(mockSettings);
Mockito.when(context.indexVersionCreated()).thenReturn(indexSettings.getAsVersion(IndexMetadata.SETTING_VERSION_CREATED, null));
MappedFieldType ft = new IdFieldMapper.IdFieldType(() -> false);
Mockito.when(context.indexVersionCreated()).thenReturn(Version.CURRENT);
MappedFieldType ft = new ProvidedIdFieldMapper.IdFieldType(() -> false);
Query query = ft.termQuery("id", context);
assertEquals(new TermInSetQuery("_id", Uid.encodeId("id")), query);
}
public void testIsAggregatable() {
MappedFieldType ft = new IdFieldMapper.IdFieldType(() -> false);
MappedFieldType ft = new ProvidedIdFieldMapper.IdFieldType(() -> false);
assertFalse(ft.isAggregatable());
ft = new IdFieldMapper.IdFieldType(() -> true);
ft = new ProvidedIdFieldMapper.IdFieldType(() -> true);
assertTrue(ft.isAggregatable());
ft = new TsidExtractingIdFieldMapper.IdFieldType();
assertFalse(ft.isAggregatable());
}
}


@ -44,7 +44,7 @@ public class MappingParserTests extends MapperServiceTestCase {
scriptService,
indexAnalyzers,
indexSettings,
IdFieldMapper.NO_FIELD_DATA
indexSettings.getMode().buildNoFieldDataIdFieldMapper()
);
Map<String, MetadataFieldMapper.TypeParser> metadataMapperParsers = mapperRegistry.getMetadataMapperParsers(
indexSettings.getIndexVersionCreated()


@ -13,7 +13,9 @@ import org.elasticsearch.Version;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.compress.CompressedXContent;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.analysis.AnalyzerScope;
import org.elasticsearch.index.analysis.IndexAnalyzers;
import org.elasticsearch.index.analysis.NamedAnalyzer;
@ -251,6 +253,8 @@ public class ParametrizedMapperTests extends MapperServiceTestCase {
Collections.emptyMap()
);
when(mapperService.getIndexAnalyzers()).thenReturn(indexAnalyzers);
IndexSettings indexSettings = createIndexSettings(version, Settings.EMPTY);
when(mapperService.getIndexSettings()).thenReturn(indexSettings);
MappingParserContext pc = new MappingParserContext(s -> null, s -> {
if (Objects.equals("keyword", s)) {
return KeywordFieldMapper.PARSER;
@ -267,7 +271,7 @@ public class ParametrizedMapperTests extends MapperServiceTestCase {
ScriptCompiler.NONE,
mapperService.getIndexAnalyzers(),
mapperService.getIndexSettings(),
IdFieldMapper.NO_FIELD_DATA
mapperService.getIndexSettings().getMode().buildNoFieldDataIdFieldMapper()
);
if (fromDynamicTemplate) {
pc = new MappingParserContext.DynamicTemplateParserContext(pc);


@ -20,12 +20,11 @@ import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import static org.elasticsearch.index.mapper.IdFieldMapper.ID_FIELD_DATA_DEPRECATION_MESSAGE;
import static org.hamcrest.Matchers.containsString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
public class IdFieldMapperTests extends MapperServiceTestCase {
public class ProvidedIdFieldMapperTests extends MapperServiceTestCase {
public void testIncludeInObjectNotAllowed() throws Exception {
DocumentMapper docMapper = createDocumentMapper(mapping(b -> {}));
@ -50,7 +49,7 @@ public class IdFieldMapperTests extends MapperServiceTestCase {
boolean[] enabled = new boolean[1];
MapperService mapperService = createMapperService(() -> enabled[0], mapping(b -> {}));
IdFieldMapper.IdFieldType ft = (IdFieldMapper.IdFieldType) mapperService.fieldType("_id");
ProvidedIdFieldMapper.IdFieldType ft = (ProvidedIdFieldMapper.IdFieldType) mapperService.fieldType("_id");
IllegalArgumentException exc = expectThrows(
IllegalArgumentException.class,
@ -61,7 +60,7 @@ public class IdFieldMapperTests extends MapperServiceTestCase {
enabled[0] = true;
ft.fielddataBuilder("test", () -> { throw new UnsupportedOperationException(); }).build(null, null);
assertWarnings(ID_FIELD_DATA_DEPRECATION_MESSAGE);
assertWarnings(ProvidedIdFieldMapper.ID_FIELD_DATA_DEPRECATION_MESSAGE);
assertTrue(ft.isAggregatable());
}
@ -75,7 +74,7 @@ public class IdFieldMapperTests extends MapperServiceTestCase {
SearchLookup lookup = new SearchLookup(mapperService::fieldType, fieldDataLookup());
SearchExecutionContext searchExecutionContext = mock(SearchExecutionContext.class);
when(searchExecutionContext.lookup()).thenReturn(lookup);
IdFieldMapper.IdFieldType ft = (IdFieldMapper.IdFieldType) mapperService.fieldType("_id");
ProvidedIdFieldMapper.IdFieldType ft = (ProvidedIdFieldMapper.IdFieldType) mapperService.fieldType("_id");
ValueFetcher valueFetcher = ft.valueFetcher(searchExecutionContext, null);
IndexSearcher searcher = newSearcher(iw);
LeafReaderContext context = searcher.getIndexReader().leaves().get(0);


@ -10,8 +10,9 @@ package org.elasticsearch.index.mapper;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.io.stream.ByteArrayStreamInput;
import org.elasticsearch.core.CheckedFunction;
import org.elasticsearch.core.CheckedConsumer;
import org.elasticsearch.index.IndexMode;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.xcontent.XContentBuilder;
@ -43,7 +44,7 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
private DocumentMapper createDocumentMapper(String routingPath, XContentBuilder mappings) throws IOException {
return createMapperService(
getIndexSettingsBuilder().put(IndexSettings.MODE.getKey(), IndexMode.TIME_SERIES.name())
.put(MapperService.INDEX_MAPPING_DIMENSION_FIELDS_LIMIT_SETTING.getKey(), 200) // Increase dimension limit
.put(MapperService.INDEX_MAPPING_DIMENSION_FIELDS_LIMIT_SETTING.getKey(), 200) // Allow tests that use many dimensions
.put(IndexMetadata.INDEX_ROUTING_PATH.getKey(), routingPath)
.put(IndexSettings.TIME_SERIES_START_TIME.getKey(), "2021-04-28T00:00:00Z")
.put(IndexSettings.TIME_SERIES_END_TIME.getKey(), "2021-04-29T00:00:00Z")
@ -52,10 +53,16 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
).documentMapper();
}
private ParsedDocument parseDocument(DocumentMapper docMapper, CheckedFunction<XContentBuilder, XContentBuilder, IOException> f)
throws IOException {
private ParsedDocument parseDocument(DocumentMapper docMapper, CheckedConsumer<XContentBuilder, IOException> f) throws IOException {
// Add the @timestamp field required by DataStreamTimestampFieldMapper for all time series indices
return docMapper.parse(source(b -> f.apply(b).field("@timestamp", "2021-10-01")));
return docMapper.parse(source(null, b -> {
f.accept(b);
b.field("@timestamp", "2021-10-01");
}, null));
}
private BytesRef parseAndGetTsid(DocumentMapper docMapper, CheckedConsumer<XContentBuilder, IOException> f) throws IOException {
return parseDocument(docMapper, f).rootDoc().getBinaryValue(TimeSeriesIdFieldMapper.NAME);
}
public void testEnabledInTimeSeriesMode() throws Exception {
@ -84,7 +91,7 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
).documentMapper();
assertThat(docMapper.metadataMapper(TimeSeriesIdFieldMapper.class), is(nullValue()));
ParsedDocument doc = docMapper.parse(source(b -> b.field("field", "value")));
ParsedDocument doc = docMapper.parse(source("id", b -> b.field("field", "value"), null));
assertThat(doc.rootDoc().getBinaryValue("_tsid"), is(nullValue()));
assertThat(doc.rootDoc().get("field"), equalTo("value"));
}
@ -115,12 +122,12 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
.endObject();
}));
ParsedDocument doc = parseDocument(
BytesRef tsid = parseAndGetTsid(
docMapper,
b -> b.field("a", "foo").field("b", "bar").field("c", "baz").startObject("o").field("e", "bort").endObject()
);
assertMap(
TimeSeriesIdFieldMapper.decodeTsid(new ByteArrayStreamInput(doc.rootDoc().getBinaryValue("_tsid").bytes)),
TimeSeriesIdFieldMapper.decodeTsid(new BytesArray(tsid).streamInput()),
matchesMap().entry("a", "foo").entry("o.e", "bort")
);
}
@ -128,7 +135,7 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
public void testUnicodeKeys() throws IOException {
String fire = new String(new int[] { 0x1F525 }, 0, 1);
String coffee = "\u2615";
DocumentMapper docMapper = createDocumentMapper("a", mapping(b -> {
DocumentMapper docMapper = createDocumentMapper(fire + "," + coffee, mapping(b -> {
b.startObject(fire).field("type", "keyword").field("time_series_dimension", true).endObject();
b.startObject(coffee).field("type", "keyword").field("time_series_dimension", true).endObject();
}));
@ -170,13 +177,14 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
}
public void testKeywordNull() throws IOException {
DocumentMapper docMapper = createDocumentMapper(
"a",
mapping(b -> { b.startObject("a").field("type", "keyword").field("time_series_dimension", true).endObject(); })
);
DocumentMapper docMapper = createDocumentMapper("r", mapping(b -> {
b.startObject("r").field("type", "keyword").field("time_series_dimension", true).endObject();
b.startObject("a").field("type", "keyword").field("time_series_dimension", true).endObject();
}));
Exception e = expectThrows(MapperParsingException.class, () -> parseDocument(docMapper, b -> b.field("a", (String) null)));
assertThat(e.getCause().getMessage(), equalTo("Dimension fields are missing."));
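// A null dimension value now produces the same tsid as omitting the field entirely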
BytesRef withNull = parseAndGetTsid(docMapper, b -> b.field("r", "foo").field("a", (String) null));
BytesRef withoutField = parseAndGetTsid(docMapper, b -> b.field("r", "foo"));
assertThat(withNull, equalTo(withoutField));
}
/**
@ -196,13 +204,16 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
.endObject();
}));
ParsedDocument doc = parseDocument(
docMapper,
b -> b.field("a", 1L).field("b", -1).field("c", "baz").startObject("o").field("e", 1234).endObject()
);
BytesRef tsid = parseAndGetTsid(docMapper, b -> {
b.field("kw", "kw");
b.field("a", 1L);
b.field("b", -1);
b.field("c", "baz");
b.startObject("o").field("e", 1234).endObject();
});
assertMap(
TimeSeriesIdFieldMapper.decodeTsid(new ByteArrayStreamInput(doc.rootDoc().getBinaryValue("_tsid").bytes)),
matchesMap().entry("a", 1L).entry("o.e", 1234L)
TimeSeriesIdFieldMapper.decodeTsid(new BytesArray(tsid).streamInput()),
matchesMap().entry("kw", "kw").entry("a", 1L).entry("o.e", 1234L)
);
}
@ -214,17 +225,20 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
Exception e = expectThrows(MapperParsingException.class, () -> parseDocument(docMapper, b -> b.field("a", "not_a_long")));
assertThat(
e.getMessage(),
equalTo("failed to parse field [a] of type [long] in document with id '1'. Preview of field's value: 'not_a_long'")
// TODO describe the document instead of "null"
equalTo("failed to parse field [a] of type [long] in document with id 'null'. Preview of field's value: 'not_a_long'")
);
}
public void testLongNull() throws IOException {
DocumentMapper docMapper = createDocumentMapper("b", mapping(b -> {
DocumentMapper docMapper = createDocumentMapper("r", mapping(b -> {
b.startObject("r").field("type", "keyword").field("time_series_dimension", true).endObject();
b.startObject("a").field("type", "long").field("time_series_dimension", true).endObject();
b.startObject("b").field("type", "keyword").field("time_series_dimension", true).endObject();
}));
Exception e = expectThrows(MapperParsingException.class, () -> parseDocument(docMapper, b -> b.field("a", (Long) null)));
assertThat(e.getCause().getMessage(), equalTo("Dimension fields are missing."));
BytesRef withNull = parseAndGetTsid(docMapper, b -> b.field("r", "foo").field("a", (Long) null));
BytesRef withoutField = parseAndGetTsid(docMapper, b -> b.field("r", "foo"));
assertThat(withNull, equalTo(withoutField));
}
/**
@ -244,13 +258,16 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
.endObject();
}));
ParsedDocument doc = parseDocument(
docMapper,
b -> b.field("a", 1L).field("b", -1).field("c", "baz").startObject("o").field("e", Integer.MIN_VALUE).endObject()
);
BytesRef tsid = parseAndGetTsid(docMapper, b -> {
b.field("kw", "kw");
b.field("a", 1L);
b.field("b", -1);
b.field("c", "baz");
b.startObject("o").field("e", Integer.MIN_VALUE).endObject();
});
assertMap(
TimeSeriesIdFieldMapper.decodeTsid(new ByteArrayStreamInput(doc.rootDoc().getBinaryValue("_tsid").bytes)),
matchesMap().entry("a", 1L).entry("o.e", (long) Integer.MIN_VALUE)
TimeSeriesIdFieldMapper.decodeTsid(new BytesArray(tsid).streamInput()),
matchesMap().entry("kw", "kw").entry("a", 1L).entry("o.e", (long) Integer.MIN_VALUE)
);
}
@ -262,7 +279,7 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
Exception e = expectThrows(MapperParsingException.class, () -> parseDocument(docMapper, b -> b.field("a", "not_an_int")));
assertThat(
e.getMessage(),
equalTo("failed to parse field [a] of type [integer] in document with id '1'. Preview of field's value: 'not_an_int'")
equalTo("failed to parse field [a] of type [integer] in document with id 'null'. Preview of field's value: 'not_an_int'")
);
}
@ -275,7 +292,7 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
assertThat(
e.getMessage(),
equalTo(
"failed to parse field [a] of type [integer] in document with id '1'. Preview of field's value: '" + Long.MAX_VALUE + "'"
"failed to parse field [a] of type [integer] in document with id 'null'. Preview of field's value: '" + Long.MAX_VALUE + "'"
)
);
}
@ -297,13 +314,16 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
.endObject();
}));
ParsedDocument doc = parseDocument(
docMapper,
b -> b.field("a", 1L).field("b", -1).field("c", "baz").startObject("o").field("e", Short.MIN_VALUE).endObject()
);
BytesRef tsid = parseAndGetTsid(docMapper, b -> {
b.field("kw", "kw");
b.field("a", 1L);
b.field("b", -1);
b.field("c", "baz");
b.startObject("o").field("e", Short.MIN_VALUE).endObject();
});
assertMap(
TimeSeriesIdFieldMapper.decodeTsid(new ByteArrayStreamInput(doc.rootDoc().getBinaryValue("_tsid").bytes)),
matchesMap().entry("a", 1L).entry("o.e", (long) Short.MIN_VALUE)
TimeSeriesIdFieldMapper.decodeTsid(new BytesArray(tsid).streamInput()),
matchesMap().entry("kw", "kw").entry("a", 1L).entry("o.e", (long) Short.MIN_VALUE)
);
}
@ -315,7 +335,7 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
Exception e = expectThrows(MapperParsingException.class, () -> parseDocument(docMapper, b -> b.field("a", "not_a_short")));
assertThat(
e.getMessage(),
equalTo("failed to parse field [a] of type [short] in document with id '1'. Preview of field's value: 'not_a_short'")
equalTo("failed to parse field [a] of type [short] in document with id 'null'. Preview of field's value: 'not_a_short'")
);
}
@ -327,7 +347,9 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
Exception e = expectThrows(MapperParsingException.class, () -> parseDocument(docMapper, b -> b.field("a", Long.MAX_VALUE)));
assertThat(
e.getMessage(),
equalTo("failed to parse field [a] of type [short] in document with id '1'. Preview of field's value: '" + Long.MAX_VALUE + "'")
equalTo(
"failed to parse field [a] of type [short] in document with id 'null'. Preview of field's value: '" + Long.MAX_VALUE + "'"
)
);
}
@ -348,13 +370,16 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
.endObject();
}));
ParsedDocument doc = parseDocument(
docMapper,
b -> b.field("a", 1L).field("b", -1).field("c", "baz").startObject("o").field("e", (int) Byte.MIN_VALUE).endObject()
);
BytesRef tsid = parseAndGetTsid(docMapper, b -> {
b.field("kw", "kw");
b.field("a", 1L);
b.field("b", -1);
b.field("c", "baz");
b.startObject("o").field("e", (int) Byte.MIN_VALUE).endObject();
});
assertMap(
TimeSeriesIdFieldMapper.decodeTsid(new ByteArrayStreamInput(doc.rootDoc().getBinaryValue("_tsid").bytes)),
matchesMap().entry("a", 1L).entry("o.e", (long) Byte.MIN_VALUE)
TimeSeriesIdFieldMapper.decodeTsid(new BytesArray(tsid).streamInput()),
matchesMap().entry("kw", "kw").entry("a", 1L).entry("o.e", (long) Byte.MIN_VALUE)
);
}
@ -366,7 +391,7 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
Exception e = expectThrows(MapperParsingException.class, () -> parseDocument(docMapper, b -> b.field("a", "not_a_byte")));
assertThat(
e.getMessage(),
equalTo("failed to parse field [a] of type [byte] in document with id '1'. Preview of field's value: 'not_a_byte'")
equalTo("failed to parse field [a] of type [byte] in document with id 'null'. Preview of field's value: 'not_a_byte'")
);
}
@ -378,7 +403,9 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
Exception e = expectThrows(MapperParsingException.class, () -> parseDocument(docMapper, b -> b.field("a", Long.MAX_VALUE)));
assertThat(
e.getMessage(),
equalTo("failed to parse field [a] of type [byte] in document with id '1'. Preview of field's value: '" + Long.MAX_VALUE + "'")
equalTo(
"failed to parse field [a] of type [byte] in document with id 'null'. Preview of field's value: '" + Long.MAX_VALUE + "'"
)
);
}
@ -399,13 +426,16 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
.endObject();
}));
ParsedDocument doc = parseDocument(
docMapper,
b -> b.field("a", "192.168.0.1").field("b", -1).field("c", "baz").startObject("o").field("e", "255.255.255.1").endObject()
);
ParsedDocument doc = parseDocument(docMapper, b -> {
b.field("kw", "kw");
b.field("a", "192.168.0.1");
b.field("b", -1);
b.field("c", "baz");
b.startObject("o").field("e", "255.255.255.1").endObject();
});
assertMap(
TimeSeriesIdFieldMapper.decodeTsid(new ByteArrayStreamInput(doc.rootDoc().getBinaryValue("_tsid").bytes)),
matchesMap().entry("a", "192.168.0.1").entry("o.e", "255.255.255.1")
matchesMap().entry("kw", "kw").entry("a", "192.168.0.1").entry("o.e", "255.255.255.1")
);
}
@ -417,7 +447,7 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
Exception e = expectThrows(MapperParsingException.class, () -> parseDocument(docMapper, b -> b.field("a", "not_an_ip")));
assertThat(
e.getMessage(),
equalTo("failed to parse field [a] of type [ip] in document with id '1'. Preview of field's value: 'not_an_ip'")
equalTo("failed to parse field [a] of type [ip] in document with id 'null'. Preview of field's value: 'not_an_ip'")
);
}
@ -425,8 +455,6 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
* Tests when the total of the tsid is more than 32k.
*/
public void testVeryLarge() throws IOException {
// By default, only 16 dimension fields are allowed. To support 100 dimension fields
// we must increase 'index.mapping.dimension_fields.limit'
DocumentMapper docMapper = createDocumentMapper("b", mapping(b -> {
b.startObject("b").field("type", "keyword").field("time_series_dimension", true).endObject();
for (int i = 0; i < 100; i++) {
@ -436,12 +464,12 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
String large = "many words ".repeat(80);
Exception e = expectThrows(MapperParsingException.class, () -> parseDocument(docMapper, b -> {
b.field("b", "foo");
for (int i = 0; i < 100; i++) {
b.field("d" + i, large);
}
return b;
}));
assertThat(e.getCause().getMessage(), equalTo("_tsid longer than [32766] bytes [88691]."));
assertThat(e.getCause().getMessage(), equalTo("_tsid longer than [32766] bytes [88698]."));
}
/**
@ -457,7 +485,7 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
String a = randomAlphaOfLength(10);
int b = between(1, 100);
int c = between(0, 2);
- CheckedFunction<XContentBuilder, XContentBuilder, IOException> fields = d -> d.field("a", a).field("b", b).field("c", (long) c);
+ CheckedConsumer<XContentBuilder, IOException> fields = d -> d.field("a", a).field("b", b).field("c", (long) c);
ParsedDocument doc1 = parseDocument(docMapper, fields);
ParsedDocument doc2 = parseDocument(docMapper, fields);
assertThat(doc1.rootDoc().getBinaryValue("_tsid").bytes, equalTo(doc2.rootDoc().getBinaryValue("_tsid").bytes));
@ -517,7 +545,7 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
String a = randomAlphaOfLength(10);
int b = between(1, 100);
- CheckedFunction<XContentBuilder, XContentBuilder, IOException> fields = d -> d.field("a", a).field("b", b);
+ CheckedConsumer<XContentBuilder, IOException> fields = d -> d.field("a", a).field("b", b);
ParsedDocument doc1 = parseDocument(docMapper, fields);
ParsedDocument doc2 = parseDocument(docMapper, fields);
assertThat(doc1.rootDoc().getBinaryValue("_tsid").bytes, equalTo(doc2.rootDoc().getBinaryValue("_tsid").bytes));
@ -557,7 +585,7 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
String a = randomAlphaOfLength(10);
int b = between(1, 100);
int c = between(5, 500);
- CheckedFunction<XContentBuilder, XContentBuilder, IOException> fields = d -> d.field("a", a).field("b", b).field("c", c);
+ CheckedConsumer<XContentBuilder, IOException> fields = d -> d.field("a", a).field("b", b).field("c", c);
ParsedDocument doc1 = parseDocument(docMapper1, fields);
ParsedDocument doc2 = parseDocument(docMapper2, fields);
assertThat(doc1.rootDoc().getBinaryValue("_tsid").bytes, not(doc2.rootDoc().getBinaryValue("_tsid").bytes));
@ -580,15 +608,4 @@ public class TimeSeriesIdFieldMapperTests extends MetadataMapperTestCase {
ParsedDocument doc2 = parseDocument(docMapper, d -> d.field("a", a).field("b", b).field("c", c));
assertThat(doc1.rootDoc().getBinaryValue("_tsid").bytes, not(doc2.rootDoc().getBinaryValue("_tsid").bytes));
}
- public void testEmpty() throws IOException {
-     DocumentMapper docMapper = createDocumentMapper("a", mapping(b -> {
-         b.startObject("a").field("type", "keyword").field("time_series_dimension", true).endObject();
-         b.startObject("b").field("type", "integer").field("time_series_dimension", true).endObject();
-         b.startObject("c").field("type", "integer").field("time_series_dimension", true).endObject();
-     }));
-     Exception e = expectThrows(MapperParsingException.class, () -> parseDocument(docMapper, d -> d));
-     assertThat(e.getCause().getMessage(), equalTo("Dimension fields are missing."));
- }
}
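For anyone skimming the test diff above: almost every assertion follows the same round trip of parsing a document, reading the binary `_tsid`, and decoding it back into a map of dimension names and values. A minimal sketch of that pattern, assuming the `decodeTsid(StreamInput)` signature these tests call and a `Map<String, Object>` return type:

```
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.index.mapper.ParsedDocument;
import org.elasticsearch.index.mapper.TimeSeriesIdFieldMapper;

import java.io.IOException;
import java.util.Map;

class TsidDecodeSketch {
    // Sketch only: decode the binary _tsid written at parse time back into
    // dimension names and values, the way the assertions above do.
    static Map<String, Object> dimensionsOf(ParsedDocument doc) throws IOException {
        return TimeSeriesIdFieldMapper.decodeTsid(
            new BytesArray(doc.rootDoc().getBinaryValue("_tsid")).streamInput()
        );
    }
}
```

Documents with equal dimension values decode to equal maps, which is why the "same `_tsid`" tests can compare the raw bytes directly.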


@ -0,0 +1,491 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/
package org.elasticsearch.index.mapper;
import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.cluster.routing.IndexRouting;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.inject.name.Named;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.core.CheckedConsumer;
import org.elasticsearch.core.Nullable;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.test.VersionUtils;
import org.elasticsearch.xcontent.XContentBuilder;
import org.elasticsearch.xcontent.XContentType;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import static org.hamcrest.Matchers.equalTo;
public class TsidExtractingIdFieldMapperTests extends MetadataMapperTestCase {
private static class TestCase {
private final String name;
private final String expectedId;
private final CheckedConsumer<XContentBuilder, IOException> source;
private final List<CheckedConsumer<XContentBuilder, IOException>> equivalentSources = new ArrayList<>();
TestCase(String name, String expectedId, CheckedConsumer<XContentBuilder, IOException> source) {
this.name = name;
this.expectedId = expectedId;
this.source = source;
}
public TestCase and(CheckedConsumer<XContentBuilder, IOException> equivalentSource) {
this.equivalentSources.add(equivalentSource);
return this;
}
@Override
public String toString() {
return name;
}
}
@ParametersFactory
public static Iterable<Object[]> params() {
List<TestCase> items = new ArrayList<>();
/*
* If these values change then ids for individual samples will shift. You may
* modify them with a new index created version, but when you do you must copy
* this test and continue to support the versions here so Elasticsearch can
* continue to read older indices.
*/
// Dates
items.add(new TestCase("2022-01-01T01:00:00Z", "XsFI2ezm5OViFixWgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
}));
items.add(new TestCase("2022-01-01T01:00:01Z", "XsFI2ezm5OViFixWaI4mE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:01Z");
b.field("r1", "cat");
}));
items.add(new TestCase("1970-01-01T00:00:00Z", "XsFI2ezm5OViFixWAAAAAAAAAAA", b -> {
b.field("@timestamp", "1970-01-01T00:00:00Z");
b.field("r1", "cat");
}));
items.add(new TestCase("-9998-01-01T00:00:00Z", "XsFI2ezm5OViFixWABhgBIKo_v8", b -> {
b.field("@timestamp", "-9998-01-01T00:00:00Z");
b.field("r1", "cat");
}));
items.add(new TestCase("9998-01-01T00:00:00Z", "XsFI2ezm5OViFixWAIS9ImnmAAA", b -> {
b.field("@timestamp", "9998-01-01T00:00:00Z");
b.field("r1", "cat");
}));
// routing keywords
items.add(new TestCase("r1", "XsFI2ezm5OViFixWgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("k1", (String) null);
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("L1", (Long) null);
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("i1", (Integer) null);
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("s1", (Short) null);
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("b1", (Byte) null);
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("ip1", (String) null);
}));
items.add(new TestCase("r2", "1y-UzdYi98F0UVRigIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r2", "cat");
}));
items.add(new TestCase("o.r3", "zh4dcftpIU55Ond-gIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.startObject("o").field("r3", "cat").endObject();
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("o.r3", "cat");
}));
// non-routing keyword
items.add(new TestCase("k1=dog", "XsFI2dL8sZeQhBgxgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("k1", "dog");
}));
items.add(new TestCase("k1=pumpkin", "XsFI2VlD6_SkSo4MgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("k1", "pumpkin");
}));
items.add(new TestCase("k1=empty string", "XsFI2aBA6UgrxLRqgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("k1", "");
}));
items.add(new TestCase("k2", "XsFI2W2e5Ycw0o5_gIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("k2", "dog");
}));
items.add(new TestCase("o.k3", "XsFI2ZAfOI6DMQhFgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.startObject("o").field("k3", "dog").endObject();
}));
items.add(new TestCase("o.r3", "zh4dcbFtT1qHtjl8gIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.startObject("o");
{
b.field("r3", "cat");
b.field("k3", "dog");
}
b.endObject();
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("o.r3", "cat");
b.startObject("o").field("k3", "dog").endObject();
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.startObject("o").field("r3", "cat").endObject();
b.field("o.k3", "dog");
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("o.r3", "cat");
b.field("o.k3", "dog");
}));
// long
items.add(new TestCase("L1=1", "XsFI2eGMFOYjW7LLgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("L1", 1);
}));
items.add(new TestCase("L1=min", "XsFI2f9V0yuDfkRWgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("L1", Long.MIN_VALUE);
}));
items.add(new TestCase("L2=1234", "XsFI2S8PYEBSm6QYgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("L2", 1234);
}));
items.add(new TestCase("o.L3=max", "zh4dcaI-57LdG7-cgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.startObject("o");
{
b.field("r3", "cat");
b.field("L3", Long.MAX_VALUE);
}
b.endObject();
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("o.r3", "cat");
b.startObject("o").field("L3", Long.MAX_VALUE).endObject();
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.startObject("o").field("r3", "cat").endObject();
b.field("o.L3", Long.MAX_VALUE);
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("o.r3", "cat");
b.field("o.L3", Long.MAX_VALUE);
}));
// int
items.add(new TestCase("i1=1", "XsFI2R3LiMZSeUGKgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("i1", 1);
}));
items.add(new TestCase("i1=min", "XsFI2fC7DMEVFaU9gIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("i1", Integer.MIN_VALUE);
}));
items.add(new TestCase("i2=1234", "XsFI2ZVte8HK90RJgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("i2", 1324);
}));
items.add(new TestCase("o.i3=max", "zh4dcQy_QJRCqIx7gIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.startObject("o");
{
b.field("r3", "cat");
b.field("i3", Integer.MAX_VALUE);
}
b.endObject();
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("o.r3", "cat");
b.startObject("o").field("i3", Integer.MAX_VALUE).endObject();
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.startObject("o").field("r3", "cat").endObject();
b.field("o.i3", Integer.MAX_VALUE);
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("o.r3", "cat");
b.field("o.i3", Integer.MAX_VALUE);
}));
// short
items.add(new TestCase("s1=1", "XsFI2axCr11Q93m7gIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("s1", 1);
}));
items.add(new TestCase("s1=min", "XsFI2Rbs9Ua9BH1wgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("s1", Short.MIN_VALUE);
}));
items.add(new TestCase("s2=1234", "XsFI2SBKaLBqXMBYgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("s2", 1234);
}));
items.add(new TestCase("o.s3=max", "zh4dcYIFo98LQWs4gIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.startObject("o");
{
b.field("r3", "cat");
b.field("s3", Short.MAX_VALUE);
}
b.endObject();
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("o.r3", "cat");
b.startObject("o").field("s3", Short.MAX_VALUE).endObject();
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.startObject("o").field("r3", "cat").endObject();
b.field("o.s3", Short.MAX_VALUE);
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("o.r3", "cat");
b.field("o.s3", Short.MAX_VALUE);
}));
// byte
items.add(new TestCase("b1=1", "XsFI2dDrcWaf3zDPgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("b1", 1);
}));
items.add(new TestCase("b1=min", "XsFI2cTzLrNqHtxngIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("b1", Byte.MIN_VALUE);
}));
items.add(new TestCase("b2=12", "XsFI2Sb77VB9AswjgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("b2", 12);
}));
items.add(new TestCase("o.s3=max", "zh4dcfFauKzj6lgxgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.startObject("o");
{
b.field("r3", "cat");
b.field("b3", Byte.MAX_VALUE);
}
b.endObject();
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("o.r3", "cat");
b.startObject("o").field("b3", Byte.MAX_VALUE).endObject();
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.startObject("o").field("r3", "cat").endObject();
b.field("o.b3", Byte.MAX_VALUE);
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("o.r3", "cat");
b.field("o.b3", Byte.MAX_VALUE);
}));
// ip
items.add(new TestCase("ip1=192.168.0.1", "XsFI2dJ1cyrrjNa2gIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("ip1", "192.168.0.1");
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("ip1", "::ffff:c0a8:1");
}));
items.add(new TestCase("ip1=12.12.45.254", "XsFI2ZUAcRxOwhHKgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("ip1", "12.12.45.254");
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("ip1", "::ffff:c0c:2dfe");
}));
items.add(new TestCase("ip2=FE80:CD00:0000:0CDE:1257:0000:211E:729C", "XsFI2XTGWAekP_oGgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("r1", "cat");
b.field("ip2", "FE80:CD00:0000:0CDE:1257:0000:211E:729C");
}));
items.add(new TestCase("o.ip3=2001:db8:85a3:8d3:1319:8a2e:370:7348", "zh4dcU_FSGP9GuHjgIomE34BAAA", b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.startObject("o");
{
b.field("r3", "cat");
b.field("ip3", "2001:db8:85a3:8d3:1319:8a2e:370:7348");
}
b.endObject();
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("o.r3", "cat");
b.startObject("o").field("ip3", "2001:db8:85a3:8d3:1319:8a2e:370:7348").endObject();
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.startObject("o").field("r3", "cat").endObject();
b.field("o.ip3", "2001:db8:85a3:8d3:1319:8a2e:370:7348");
}).and(b -> {
b.field("@timestamp", "2022-01-01T01:00:00Z");
b.field("o.r3", "cat");
b.field("o.ip3", "2001:db8:85a3:8d3:1319:8a2e:370:7348");
}));
return items.stream().map(td -> new Object[] { td }).toList();
}
private final TestCase testCase;
public TsidExtractingIdFieldMapperTests(@Named("testCase") TestCase testCase) throws IOException {
this.testCase = testCase;
}
public void testExpectedId() throws IOException {
assertThat(parse(null, mapperService(), testCase.source).id(), equalTo(testCase.expectedId));
}
public void testProvideExpectedId() throws IOException {
assertThat(parse(testCase.expectedId, mapperService(), testCase.source).id(), equalTo(testCase.expectedId));
}
public void testProvideWrongId() throws IOException {
String wrongId = testCase.expectedId + "wrong";
Exception e = expectThrows(MapperParsingException.class, () -> parse(wrongId, mapperService(), testCase.source));
assertThat(
e.getCause().getMessage(),
equalTo(
"_id must be unset or set to ["
+ testCase.expectedId
+ "] but was ["
+ testCase.expectedId
+ "wrong] because [index] is in time_series mode"
)
);
}
public void testEquivalentSources() throws IOException {
MapperService mapperService = mapperService();
for (CheckedConsumer<XContentBuilder, IOException> equivalent : testCase.equivalentSources) {
assertThat(parse(null, mapperService, equivalent).id(), equalTo(testCase.expectedId));
}
}
private ParsedDocument parse(@Nullable String id, MapperService mapperService, CheckedConsumer<XContentBuilder, IOException> source)
throws IOException {
try (XContentBuilder builder = XContentBuilder.builder(randomFrom(XContentType.values()).xContent())) {
builder.startObject();
source.accept(builder);
builder.endObject();
SourceToParse sourceToParse = new SourceToParse(id, BytesReference.bytes(builder), builder.contentType());
return mapperService.documentParser().parseDocument(sourceToParse, mapperService.mappingLookup());
}
}
public void testRoutingPathCompliant() throws IOException {
Version version = VersionUtils.randomIndexCompatibleVersion(random());
IndexRouting indexRouting = createIndexSettings(version, indexSettings(version)).getIndexRouting();
int indexShard = indexShard(indexRouting);
assertThat(indexRouting.getShard(testCase.expectedId, null), equalTo(indexShard));
assertThat(indexRouting.deleteShard(testCase.expectedId, null), equalTo(indexShard));
}
private int indexShard(IndexRouting indexRouting) throws IOException {
try (XContentBuilder builder = XContentBuilder.builder(randomFrom(XContentType.values()).xContent())) {
builder.startObject();
testCase.source.accept(builder);
builder.endObject();
return indexRouting.indexShard(null, null, builder.contentType(), BytesReference.bytes(builder));
}
}
private Settings indexSettings(Version version) {
return Settings.builder()
.put(IndexSettings.MODE.getKey(), "time_series")
.put(IndexMetadata.SETTING_VERSION_CREATED, version)
.put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, between(1, 100))
.put(IndexSettings.TIME_SERIES_START_TIME.getKey(), "-9999-01-01T00:00:00Z")
.put(IndexSettings.TIME_SERIES_END_TIME.getKey(), "9999-01-01T00:00:00Z")
.put(IndexMetadata.INDEX_ROUTING_PATH.getKey(), "r1,r2,o.r3")
.put(MapperService.INDEX_MAPPING_DIMENSION_FIELDS_LIMIT_SETTING.getKey(), 100)
.build();
}
private MapperService mapperService() throws IOException {
Version version = VersionUtils.randomIndexCompatibleVersion(random());
return createMapperService(indexSettings(version), mapping(b -> {
b.startObject("r1").field("type", "keyword").field("time_series_dimension", true).endObject();
b.startObject("r2").field("type", "keyword").field("time_series_dimension", true).endObject();
b.startObject("k1").field("type", "keyword").field("time_series_dimension", true).endObject();
b.startObject("k2").field("type", "keyword").field("time_series_dimension", true).endObject();
b.startObject("L1").field("type", "long").field("time_series_dimension", true).endObject();
b.startObject("L2").field("type", "long").field("time_series_dimension", true).endObject();
b.startObject("i1").field("type", "integer").field("time_series_dimension", true).endObject();
b.startObject("i2").field("type", "integer").field("time_series_dimension", true).endObject();
b.startObject("s1").field("type", "short").field("time_series_dimension", true).endObject();
b.startObject("s2").field("type", "short").field("time_series_dimension", true).endObject();
b.startObject("b1").field("type", "byte").field("time_series_dimension", true).endObject();
b.startObject("b2").field("type", "byte").field("time_series_dimension", true).endObject();
b.startObject("ip1").field("type", "ip").field("time_series_dimension", true).endObject();
b.startObject("ip2").field("type", "ip").field("time_series_dimension", true).endObject();
b.startObject("o").startObject("properties");
{
b.startObject("r3").field("type", "keyword").field("time_series_dimension", true).endObject();
b.startObject("k3").field("type", "keyword").field("time_series_dimension", true).endObject();
b.startObject("L3").field("type", "long").field("time_series_dimension", true).endObject();
b.startObject("i3").field("type", "integer").field("time_series_dimension", true).endObject();
b.startObject("s3").field("type", "short").field("time_series_dimension", true).endObject();
b.startObject("b3").field("type", "byte").field("time_series_dimension", true).endObject();
b.startObject("ip3").field("type", "ip").field("time_series_dimension", true).endObject();
}
b.endObject().endObject();
}));
}
@Override
protected String fieldName() {
return IdFieldMapper.NAME;
}
@Override
protected void registerParameters(ParameterChecker checker) throws IOException {}
}
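A note on the hard-coded ids in this test: they follow the layout described at the top of this PR, 4 bytes of routing hash, then 8 bytes of dimension hash, then 8 bytes of `@timestamp`, base 64 encoded. Here's a rough sketch of splitting one apart; the little-endian byte order is inferred from the expected values above (the `1970-01-01T00:00:00Z` case ends in all zero bytes), so treat this as illustrative rather than a supported API:

```
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Base64;

class TsdbIdSketch {
    public static void main(String[] args) {
        // One of the expected ids above: r1=cat at 2022-01-01T01:00:00Z.
        byte[] raw = Base64.getUrlDecoder().decode("XsFI2ezm5OViFixWgIomE34BAAA");
        ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
        int routingHash = buf.getInt();       // 4 bytes: enough to pick the shard on GET and DELETE
        long dimensionHash = buf.getLong();   // 8 bytes: hash of the time series dimensions
        long timestampMillis = buf.getLong(); // 8 bytes: 1640998800000, i.e. 2022-01-01T01:00:00Z
        System.out.printf("routing=%08x tsid=%016x @timestamp=%d%n", routingHash, dimensionHash, timestampMillis);
    }
}
```

That first component is also why `testRoutingPathCompliant` can resolve the shard for a GET or DELETE from the id alone, without the document source.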


@ -81,7 +81,7 @@ public class TypeParsersTests extends ESTestCase {
ScriptCompiler.NONE,
mapperService.getIndexAnalyzers(),
mapperService.getIndexSettings(),
- IdFieldMapper.NO_FIELD_DATA
+ ProvidedIdFieldMapper.NO_FIELD_DATA
);
TextFieldMapper.PARSER.parse("some-field", fieldNode, olderContext);
@ -107,7 +107,7 @@ public class TypeParsersTests extends ESTestCase {
ScriptCompiler.NONE,
mapperService.getIndexAnalyzers(),
mapperService.getIndexSettings(),
- IdFieldMapper.NO_FIELD_DATA
+ ProvidedIdFieldMapper.NO_FIELD_DATA
);
IllegalArgumentException e = expectThrows(


@ -38,7 +38,6 @@ import org.elasticsearch.index.fielddata.ScriptDocValues;
import org.elasticsearch.index.fielddata.SortedBinaryDocValues;
import org.elasticsearch.index.fielddata.plain.AbstractLeafOrdinalsFieldData;
import org.elasticsearch.index.mapper.FieldMapper;
- import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.IndexFieldMapper;
import org.elasticsearch.index.mapper.KeywordFieldMapper;
import org.elasticsearch.index.mapper.KeywordScriptFieldType;
@ -443,7 +442,7 @@ public class SearchExecutionContextTests extends ESTestCase {
ScriptCompiler.NONE,
indexAnalyzers,
indexSettings,
- new IdFieldMapper(() -> true)
+ indexSettings.getMode().buildIdFieldMapper(() -> true)
)
);
return mapperService;


@ -148,7 +148,7 @@ public class IndexLevelReplicationTests extends ESIndexLevelReplicationTestCase
try (ReplicationGroup shards = createGroup(0)) {
shards.startAll();
final IndexRequest originalRequest = new IndexRequest(index.getName()).source("{}", XContentType.JSON);
- originalRequest.process();
+ originalRequest.autoGenerateId();
final IndexRequest retryRequest = copyIndexRequest(originalRequest);
retryRequest.onRetry();
shards.index(retryRequest);


@ -611,7 +611,7 @@ public class RecoveryDuringReplicationTests extends ESIndexLevelReplicationTestC
List<IndexRequest> replicationRequests = new ArrayList<>();
for (int numDocs = between(1, 10), i = 0; i < numDocs; i++) {
final IndexRequest indexRequest = new IndexRequest(index.getName()).source("{}", XContentType.JSON);
- indexRequest.process();
+ indexRequest.autoGenerateId();
final IndexRequest copyRequest;
if (randomBoolean()) {
copyRequest = copyIndexRequest(indexRequest);


@ -129,7 +129,11 @@ public class IndexingOperationListenerTests extends ESTestCase {
ParsedDocument doc = InternalEngineTests.createParsedDoc("1", null);
Engine.Delete delete = new Engine.Delete("1", new Term("_id", Uid.encodeId(doc.id())), randomNonNegativeLong());
Engine.Index index = new Engine.Index(new Term("_id", Uid.encodeId(doc.id())), randomNonNegativeLong(), doc);
- compositeListener.postDelete(randomShardId, delete, new Engine.DeleteResult(1, 0, SequenceNumbers.UNASSIGNED_SEQ_NO, true));
+ compositeListener.postDelete(
+     randomShardId,
+     delete,
+     new Engine.DeleteResult(1, 0, SequenceNumbers.UNASSIGNED_SEQ_NO, true, delete.id())
+ );
assertEquals(0, preIndex.get());
assertEquals(0, postIndex.get());
assertEquals(0, postIndexException.get());
@ -153,7 +157,11 @@ public class IndexingOperationListenerTests extends ESTestCase {
assertEquals(2, postDelete.get());
assertEquals(2, postDeleteException.get());
- compositeListener.postIndex(randomShardId, index, new Engine.IndexResult(0, 0, SequenceNumbers.UNASSIGNED_SEQ_NO, false));
+ compositeListener.postIndex(
+     randomShardId,
+     index,
+     new Engine.IndexResult(0, 0, SequenceNumbers.UNASSIGNED_SEQ_NO, false, index.id())
+ );
assertEquals(0, preIndex.get());
assertEquals(2, postIndex.get());
assertEquals(0, postIndexException.get());


@ -40,6 +40,7 @@ import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.LuceneDocument;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.ParsedDocument;
+ import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.mapper.SeqNoFieldMapper;
import org.elasticsearch.index.mapper.Uid;
import org.elasticsearch.index.seqno.RetentionLeases;
@ -537,7 +538,7 @@ public class RefreshListenersTests extends ESTestCase {
final Term uid = new Term(IdFieldMapper.NAME, Uid.encodeId(id));
LuceneDocument document = new LuceneDocument();
document.add(new TextField("test", testFieldValue, Field.Store.YES));
- Field idField = new Field(uid.field(), uid.bytes(), IdFieldMapper.Defaults.FIELD_TYPE);
+ Field idField = new Field(uid.field(), uid.bytes(), ProvidedIdFieldMapper.Defaults.FIELD_TYPE);
Field versionField = new NumericDocValuesField("_version", Versions.MATCH_ANY);
SeqNoFieldMapper.SequenceIDFields seqID = SeqNoFieldMapper.SequenceIDFields.emptySeqID();
document.add(idField);


@ -943,10 +943,6 @@ public class TranslogTests extends ESTestCase {
return new Term("_id", Uid.encodeId(doc.id()));
}
- private Term newUid(String id) {
-     return new Term("_id", Uid.encodeId(id));
- }
public void testVerifyTranslogIsNotDeleted() throws IOException {
assertFileIsPresent(translog, 1);
translog.add(new Translog.Index("1", 0, primaryTerm.get(), new byte[] { 1 }));
@ -3340,7 +3336,7 @@ public class TranslogTests extends ESTestCase {
seqID.seqNo.setLongValue(randomSeqNum);
seqID.seqNoDocValue.setLongValue(randomSeqNum);
seqID.primaryTerm.setLongValue(randomPrimaryTerm);
Field idField = new Field("_id", Uid.encodeId("1"), IdFieldMapper.Defaults.FIELD_TYPE);
Field idField = IdFieldMapper.standardIdField("1");
Field versionField = new NumericDocValuesField("_version", 1);
LuceneDocument document = new LuceneDocument();
document.add(new TextField("value", "test", Field.Store.YES));
@ -3365,7 +3361,7 @@ public class TranslogTests extends ESTestCase {
SequenceNumbers.UNASSIGNED_SEQ_NO,
0
);
- Engine.IndexResult eIndexResult = new Engine.IndexResult(1, randomPrimaryTerm, randomSeqNum, true);
+ Engine.IndexResult eIndexResult = new Engine.IndexResult(1, randomPrimaryTerm, randomSeqNum, true, eIndex.id());
Translog.Index index = new Translog.Index(eIndex, eIndexResult);
Version wireVersion = VersionUtils.randomVersionBetween(random(), Version.CURRENT.minimumCompatibilityVersion(), Version.CURRENT);
@ -3389,7 +3385,7 @@ public class TranslogTests extends ESTestCase {
SequenceNumbers.UNASSIGNED_SEQ_NO,
0
);
- Engine.DeleteResult eDeleteResult = new Engine.DeleteResult(2, randomPrimaryTerm, randomSeqNum, true);
+ Engine.DeleteResult eDeleteResult = new Engine.DeleteResult(2, randomPrimaryTerm, randomSeqNum, true, doc.id());
Translog.Delete delete = new Translog.Delete(eDelete, eDeleteResult);
out = new BytesStreamOutput();


@ -256,12 +256,12 @@ public class RecoverySourceHandlerTests extends ESTestCase {
final int initialNumberOfDocs = randomIntBetween(10, 1000);
for (int i = 0; i < initialNumberOfDocs; i++) {
final Engine.Index index = getIndex(Integer.toString(i));
- operations.add(new Translog.Index(index, new Engine.IndexResult(1, 1, SequenceNumbers.UNASSIGNED_SEQ_NO, true)));
+ operations.add(new Translog.Index(index, new Engine.IndexResult(1, 1, SequenceNumbers.UNASSIGNED_SEQ_NO, true, index.id())));
}
final int numberOfDocsWithValidSequenceNumbers = randomIntBetween(10, 1000);
for (int i = initialNumberOfDocs; i < initialNumberOfDocs + numberOfDocsWithValidSequenceNumbers; i++) {
final Engine.Index index = getIndex(Integer.toString(i));
- operations.add(new Translog.Index(index, new Engine.IndexResult(1, 1, i - initialNumberOfDocs, true)));
+ operations.add(new Translog.Index(index, new Engine.IndexResult(1, 1, i - initialNumberOfDocs, true, index.id())));
}
final long startingSeqNo = randomIntBetween(0, numberOfDocsWithValidSequenceNumbers - 1);
final long endingSeqNo = randomLongBetween(startingSeqNo, numberOfDocsWithValidSequenceNumbers - 1);
@ -330,7 +330,7 @@ public class RecoverySourceHandlerTests extends ESTestCase {
final List<Translog.Operation> ops = new ArrayList<>();
for (int numOps = between(1, 256), i = 0; i < numOps; i++) {
final Engine.Index index = getIndex(Integer.toString(i));
- ops.add(new Translog.Index(index, new Engine.IndexResult(1, 1, i, true)));
+ ops.add(new Translog.Index(index, new Engine.IndexResult(1, 1, i, true, index.id())));
}
final AtomicBoolean wasFailed = new AtomicBoolean();
RecoveryTargetHandler recoveryTarget = new TestRecoveryTargetHandler() {
@ -464,10 +464,10 @@ public class RecoverySourceHandlerTests extends ESTestCase {
assertThat(receivedSeqNos, equalTo(sentSeqNos));
}
- private Engine.Index getIndex(final String id) {
+ private Engine.Index getIndex(String id) {
final LuceneDocument document = new LuceneDocument();
document.add(new TextField("test", "test", Field.Store.YES));
final Field idField = new Field("_id", Uid.encodeId(id), IdFieldMapper.Defaults.FIELD_TYPE);
final Field idField = IdFieldMapper.standardIdField(id); // TODO tsdbid field could be different.
final Field versionField = new NumericDocValuesField("_version", Versions.MATCH_ANY);
final SeqNoFieldMapper.SequenceIDFields seqID = SeqNoFieldMapper.SequenceIDFields.emptySeqID();
document.add(idField);


@ -54,6 +54,7 @@ import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.NestedPathFieldMapper;
import org.elasticsearch.index.mapper.NumberFieldMapper;
import org.elasticsearch.index.mapper.ObjectMapper;
+ import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.mapper.SeqNoFieldMapper;
import org.elasticsearch.index.mapper.TimeSeriesIdFieldMapper;
import org.elasticsearch.index.mapper.Uid;
@ -660,13 +661,13 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
// Root docs
Document root;
root = new Document();
root.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), IdFieldMapper.Defaults.FIELD_TYPE));
root.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
root.add(sequenceIDFields.primaryTerm);
root.add(new StringField(rootNameField, new BytesRef("Ballpoint"), Field.Store.NO));
documents.add(root);
root = new Document();
root.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), IdFieldMapper.Defaults.FIELD_TYPE));
root.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
root.add(new StringField(rootNameField, new BytesRef("Notebook"), Field.Store.NO));
root.add(sequenceIDFields.primaryTerm);
documents.add(root);
@ -714,13 +715,13 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
// Root docs
Document root;
root = new Document();
root.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), IdFieldMapper.Defaults.FIELD_TYPE));
root.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
root.add(sequenceIDFields.primaryTerm);
root.add(new StringField(rootNameField, new BytesRef("Ballpoint"), Field.Store.NO));
documents.add(root);
root = new Document();
root.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), IdFieldMapper.Defaults.FIELD_TYPE));
root.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
root.add(new StringField(rootNameField, new BytesRef("Notebook"), Field.Store.NO));
root.add(sequenceIDFields.primaryTerm);
documents.add(root);
@ -3066,7 +3067,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
private Document createNestedDocument(String id, String nestedPath, Object... rawFields) {
assert rawFields.length % 2 == 0;
Document doc = new Document();
- doc.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
+ doc.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
doc.add(new Field(NestedPathFieldMapper.NAME, nestedPath, NestedPathFieldMapper.Defaults.FIELD_TYPE));
Object[] fields = new Object[rawFields.length];
for (int i = 0; i < fields.length; i += 2) {


@ -39,6 +39,7 @@ import org.elasticsearch.index.mapper.NestedObjectMapper;
import org.elasticsearch.index.mapper.NestedPathFieldMapper;
import org.elasticsearch.index.mapper.NumberFieldMapper;
import org.elasticsearch.index.mapper.ObjectMapper;
+ import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.mapper.SeqNoFieldMapper;
import org.elasticsearch.index.mapper.Uid;
import org.elasticsearch.index.query.MatchAllQueryBuilder;
@ -178,7 +179,9 @@ public class NestedAggregatorTests extends AggregatorTestCase {
expectedNestedDocs += numNestedDocs;
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), IdFieldMapper.Defaults.FIELD_TYPE));
+ document.add(
+     new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), ProvidedIdFieldMapper.Defaults.FIELD_TYPE)
+ );
document.add(new Field(NestedPathFieldMapper.NAME, "test", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);
documents.add(document);
@ -232,7 +235,9 @@ public class NestedAggregatorTests extends AggregatorTestCase {
expectedNestedDocs += numNestedDocs;
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), IdFieldMapper.Defaults.FIELD_TYPE));
+ document.add(
+     new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), ProvidedIdFieldMapper.Defaults.FIELD_TYPE)
+ );
document.add(new Field(NestedPathFieldMapper.NAME, "test", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);
documents.add(document);
@ -284,7 +289,9 @@ public class NestedAggregatorTests extends AggregatorTestCase {
expectedNestedDocs += numNestedDocs;
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), IdFieldMapper.Defaults.FIELD_TYPE));
+ document.add(
+     new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), ProvidedIdFieldMapper.Defaults.FIELD_TYPE)
+ );
document.add(new Field(NestedPathFieldMapper.NAME, "test", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);
documents.add(document);
@ -331,19 +338,19 @@ public class NestedAggregatorTests extends AggregatorTestCase {
// 1 segment with, 1 root document, with 3 nested sub docs
Document document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_field", NestedPathFieldMapper.Defaults.FIELD_TYPE));
documents.add(document);
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_field", NestedPathFieldMapper.Defaults.FIELD_TYPE));
documents.add(document);
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_field", NestedPathFieldMapper.Defaults.FIELD_TYPE));
documents.add(document);
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), IdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "test", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);
documents.add(document);
@ -354,11 +361,11 @@ public class NestedAggregatorTests extends AggregatorTestCase {
// 1 segment with:
// 1 document, with 1 nested subdoc
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_field", NestedPathFieldMapper.Defaults.FIELD_TYPE));
documents.add(document);
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), IdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "test", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);
documents.add(document);
@ -366,11 +373,11 @@ public class NestedAggregatorTests extends AggregatorTestCase {
documents.clear();
// and 1 document, with 1 nested subdoc
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("3"), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("3"), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_field", NestedPathFieldMapper.Defaults.FIELD_TYPE));
documents.add(document);
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("3"), IdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("3"), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "test", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);
documents.add(document);
@ -604,19 +611,19 @@ public class NestedAggregatorTests extends AggregatorTestCase {
try (RandomIndexWriter iw = new RandomIndexWriter(random(), directory)) {
List<Document> documents = new ArrayList<>();
Document document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_field", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(new SortedDocValuesField("key", new BytesRef("key1")));
document.add(new SortedDocValuesField("value", new BytesRef("a1")));
documents.add(document);
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_field", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(new SortedDocValuesField("key", new BytesRef("key2")));
document.add(new SortedDocValuesField("value", new BytesRef("b1")));
documents.add(document);
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), IdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("1"), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "_doc", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);
documents.add(document);
@ -625,19 +632,19 @@ public class NestedAggregatorTests extends AggregatorTestCase {
documents.clear();
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_field", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(new SortedDocValuesField("key", new BytesRef("key1")));
document.add(new SortedDocValuesField("value", new BytesRef("a2")));
documents.add(document);
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_field", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(new SortedDocValuesField("key", new BytesRef("key2")));
document.add(new SortedDocValuesField("value", new BytesRef("b2")));
documents.add(document);
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), IdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("2"), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "_doc", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);
documents.add(document);
@ -646,19 +653,19 @@ public class NestedAggregatorTests extends AggregatorTestCase {
documents.clear();
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("3"), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("3"), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_field", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(new SortedDocValuesField("key", new BytesRef("key1")));
document.add(new SortedDocValuesField("value", new BytesRef("a3")));
documents.add(document);
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("3"), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("3"), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_field", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(new SortedDocValuesField("key", new BytesRef("key2")));
document.add(new SortedDocValuesField("value", new BytesRef("b3")));
documents.add(document);
document = new Document();
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("3"), IdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(IdFieldMapper.NAME, Uid.encodeId("3"), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "_doc", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);
documents.add(document);
@ -728,7 +735,9 @@ public class NestedAggregatorTests extends AggregatorTestCase {
generateDocuments(documents, numNestedDocs, i, NESTED_OBJECT, VALUE_FIELD_NAME);
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), IdFieldMapper.Defaults.FIELD_TYPE));
+ document.add(
+     new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), ProvidedIdFieldMapper.Defaults.FIELD_TYPE)
+ );
document.add(new Field(NestedPathFieldMapper.NAME, "test", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);
documents.add(document);
@ -768,7 +777,9 @@ public class NestedAggregatorTests extends AggregatorTestCase {
expectedNestedDocs += 1;
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), IdFieldMapper.Defaults.FIELD_TYPE));
+ document.add(
+     new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), ProvidedIdFieldMapper.Defaults.FIELD_TYPE)
+ );
document.add(new Field(NestedPathFieldMapper.NAME, "test", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);
documents.add(document);
@ -860,7 +871,7 @@ public class NestedAggregatorTests extends AggregatorTestCase {
documents.get(r).add(new SortedNumericDocValuesField("reseller_id", r));
}
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(p)), IdFieldMapper.Defaults.FIELD_TYPE));
+ document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(p)), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "test", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);
document.add(new SortedNumericDocValuesField("product_id", p));
@ -888,7 +899,9 @@ public class NestedAggregatorTests extends AggregatorTestCase {
double[] values = new double[numNestedDocs];
for (int nested = 0; nested < numNestedDocs; nested++) {
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(id)), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
+ document.add(
+     new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(id)), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE)
+ );
document.add(new Field(NestedPathFieldMapper.NAME, path, NestedPathFieldMapper.Defaults.FIELD_TYPE));
long value = randomNonNegativeLong() % 10000;
document.add(new SortedNumericDocValuesField(fieldName, value));
@ -903,14 +916,14 @@ public class NestedAggregatorTests extends AggregatorTestCase {
for (int numPage : numPages) {
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
+ document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_chapters", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(new SortedNumericDocValuesField("num_pages", numPage));
documents.add(document);
}
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), IdFieldMapper.Defaults.FIELD_TYPE));
+ document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "book", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);
for (String author : authors) {


@ -21,6 +21,7 @@ import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.NestedPathFieldMapper;
import org.elasticsearch.index.mapper.NumberFieldMapper;
import org.elasticsearch.index.mapper.ObjectMapper;
+ import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.mapper.SeqNoFieldMapper;
import org.elasticsearch.index.mapper.Uid;
import org.elasticsearch.search.aggregations.AggregationBuilder;
@ -96,14 +97,20 @@ public class ReverseNestedAggregatorTests extends AggregatorTestCase {
for (int nested = 0; nested < numNestedDocs; nested++) {
Document document = new Document();
document.add(
- new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), IdFieldMapper.Defaults.NESTED_FIELD_TYPE)
+ new Field(
+     IdFieldMapper.NAME,
+     Uid.encodeId(Integer.toString(i)),
+     ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE
+ )
);
document.add(new Field(NestedPathFieldMapper.NAME, NESTED_OBJECT, NestedPathFieldMapper.Defaults.FIELD_TYPE));
documents.add(document);
expectedNestedDocs++;
}
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), IdFieldMapper.Defaults.FIELD_TYPE));
+ document.add(
+     new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), ProvidedIdFieldMapper.Defaults.FIELD_TYPE)
+ );
document.add(new Field(NestedPathFieldMapper.NAME, "test", NestedPathFieldMapper.Defaults.FIELD_TYPE));
long value = randomNonNegativeLong() % 10000;
document.add(new SortedNumericDocValuesField(VALUE_FIELD_NAME, value));
@ -157,13 +164,19 @@ public class ReverseNestedAggregatorTests extends AggregatorTestCase {
for (int nested = 0; nested < numNestedDocs; nested++) {
Document document = new Document();
document.add(
- new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), IdFieldMapper.Defaults.NESTED_FIELD_TYPE)
+ new Field(
+     IdFieldMapper.NAME,
+     Uid.encodeId(Integer.toString(i)),
+     ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE
+ )
);
document.add(new Field(NestedPathFieldMapper.NAME, NESTED_OBJECT, NestedPathFieldMapper.Defaults.FIELD_TYPE));
documents.add(document);
}
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), IdFieldMapper.Defaults.FIELD_TYPE));
+ document.add(
+     new Field(IdFieldMapper.NAME, Uid.encodeId(Integer.toString(i)), ProvidedIdFieldMapper.Defaults.FIELD_TYPE)
+ );
document.add(new Field(NestedPathFieldMapper.NAME, "test", NestedPathFieldMapper.Defaults.FIELD_TYPE));
long value = randomNonNegativeLong() % 10000;


@ -36,6 +36,7 @@ import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.NestedPathFieldMapper;
import org.elasticsearch.index.mapper.NumberFieldMapper;
import org.elasticsearch.index.mapper.ObjectMapper;
+ import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.mapper.RangeFieldMapper;
import org.elasticsearch.index.mapper.RangeType;
import org.elasticsearch.index.mapper.SeqNoFieldMapper;
@ -490,14 +491,14 @@ public class RareTermsAggregatorTests extends AggregatorTestCase {
for (int nestedValue : nestedValues) {
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
+ document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_object", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(new SortedNumericDocValuesField("nested_value", nestedValue));
documents.add(document);
}
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), IdFieldMapper.Defaults.FIELD_TYPE));
+ document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "docs", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(new SortedNumericDocValuesField("value", value));
document.add(sequenceIDFields.primaryTerm);


@ -57,6 +57,7 @@ import org.elasticsearch.index.mapper.NumberFieldMapper;
import org.elasticsearch.index.mapper.NumberFieldMapper.NumberFieldType;
import org.elasticsearch.index.mapper.NumberFieldMapper.NumberType;
import org.elasticsearch.index.mapper.ObjectMapper;
+ import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.mapper.RangeFieldMapper;
import org.elasticsearch.index.mapper.RangeType;
import org.elasticsearch.index.mapper.SeqNoFieldMapper;
@ -2139,14 +2140,14 @@ public class TermsAggregatorTests extends AggregatorTestCase {
for (int nestedValue : nestedValues) {
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
+ document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_object", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(new SortedNumericDocValuesField("nested_value", nestedValue));
documents.add(document);
}
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), IdFieldMapper.Defaults.FIELD_TYPE));
+ document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "docs", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(new SortedNumericDocValuesField("value", value));
document.add(sequenceIDFields.primaryTerm);
@ -2166,7 +2167,7 @@ public class TermsAggregatorTests extends AggregatorTestCase {
for (int i = 0; i < tags.length; i++) {
List<IndexableField> document = new ArrayList<>();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), IdFieldMapper.Defaults.NESTED_FIELD_TYPE));
+ document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), ProvidedIdFieldMapper.Defaults.NESTED_FIELD_TYPE));
document.add(new Field(NestedPathFieldMapper.NAME, "nested_object", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(new SortedDocValuesField("tag", new BytesRef(tags[i])));
@ -2175,7 +2176,7 @@ public class TermsAggregatorTests extends AggregatorTestCase {
}
List<IndexableField> document = new ArrayList<>();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), IdFieldMapper.Defaults.FIELD_TYPE));
+ document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
document.addAll(doc(animalFieldType, animal));
document.add(new Field(NestedPathFieldMapper.NAME, "docs", NestedPathFieldMapper.Defaults.FIELD_TYPE));
document.add(sequenceIDFields.primaryTerm);


@ -31,6 +31,7 @@ import org.apache.lucene.util.BytesRef;
import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.KeywordFieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
+ import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.mapper.Uid;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.aggregations.Aggregation;
@ -137,7 +138,7 @@ public class TopHitsAggregatorTests extends AggregatorTestCase {
private Document document(String id, String... stringValues) {
Document document = new Document();
- document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), IdFieldMapper.Defaults.FIELD_TYPE));
+ document.add(new Field(IdFieldMapper.NAME, Uid.encodeId(id), ProvidedIdFieldMapper.Defaults.FIELD_TYPE));
for (String stringValue : stringValues) {
document.add(new Field("string", stringValue, KeywordFieldMapper.Defaults.FIELD_TYPE));
document.add(new SortedSetDocValuesField("string", new BytesRef(stringValue)));


@ -14,7 +14,6 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
import org.elasticsearch.env.Environment;
import org.elasticsearch.index.analysis.IndexAnalyzers;
- import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.MapperRegistry;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.similarity.SimilarityService;
@ -65,7 +64,7 @@ public class MapperTestUtils {
similarityService,
mapperRegistry,
() -> null,
- IdFieldMapper.NO_FIELD_DATA,
+ indexSettings.getMode().buildNoFieldDataIdFieldMapper(),
ScriptCompiler.NONE
);
}


@ -79,6 +79,7 @@ import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.Mapping;
import org.elasticsearch.index.mapper.MappingLookup;
import org.elasticsearch.index.mapper.ParsedDocument;
+ import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.mapper.SeqNoFieldMapper;
import org.elasticsearch.index.mapper.SourceFieldMapper;
import org.elasticsearch.index.mapper.SourceToParse;
@ -387,8 +388,8 @@ public abstract class EngineTestCase extends ESTestCase {
BytesReference source,
Mapping mappingUpdate,
boolean recoverySource
- ) {
-     Field uidField = new Field("_id", Uid.encodeId(id), IdFieldMapper.Defaults.FIELD_TYPE);
+ ) { // TODO try with TsdbIdFieldMapper
+     Field uidField = new Field("_id", Uid.encodeId(id), ProvidedIdFieldMapper.Defaults.FIELD_TYPE);
Field versionField = new NumericDocValuesField("_version", 0);
SeqNoFieldMapper.SequenceIDFields seqID = SeqNoFieldMapper.SequenceIDFields.emptySeqID();
document.add(uidField);


@ -17,7 +17,6 @@ import org.elasticsearch.index.analysis.AnalysisRegistry;
import org.elasticsearch.index.analysis.AnalyzerScope;
import org.elasticsearch.index.analysis.IndexAnalyzers;
import org.elasticsearch.index.analysis.NamedAnalyzer;
- import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.MapperRegistry;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.SourceToParse;
@ -60,7 +59,7 @@ public class TranslogHandler implements Engine.TranslogRecoveryRunner {
similarityService,
mapperRegistry,
() -> null,
- IdFieldMapper.NO_FIELD_DATA,
+ indexSettings.getMode().buildNoFieldDataIdFieldMapper(),
null
);
}


@ -201,7 +201,7 @@ public abstract class MapperServiceTestCase extends ESTestCase {
similarityService,
mapperRegistry,
() -> { throw new UnsupportedOperationException(); },
- new IdFieldMapper(idFieldDataEnabled),
+ indexSettings.getMode().buildIdFieldMapper(idFieldDataEnabled),
this::compileScript
);
}
@ -239,11 +239,14 @@ public abstract class MapperServiceTestCase extends ESTestCase {
}
}
+ /**
+  * Build a {@link SourceToParse} with an id.
+  */
protected final SourceToParse source(CheckedConsumer<XContentBuilder, IOException> build) throws IOException {
return source("1", build, null);
}
- protected final SourceToParse source(String id, CheckedConsumer<XContentBuilder, IOException> build, @Nullable String routing)
+ protected final SourceToParse source(@Nullable String id, CheckedConsumer<XContentBuilder, IOException> build, @Nullable String routing)
throws IOException {
return source("test", id, build, routing, Map.of());
}
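
Making the `id` parameter nullable lets tests exercise parse-time id generation instead of always forcing `"1"`. A hedged usage sketch (assumes a mapper service built in time-series mode; the field names are examples):
```
// With a null id the _id is no longer supplied by the test; for a tsdb
// index it is derived while parsing, so the parsed document still has one.
ParsedDocument doc = mapperService.documentMapper()
    .parse(source(null, b -> b.field("@timestamp", "2021-12-29T19:25:05Z").field("dim", "foobar"), null));
```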


@@ -875,7 +875,7 @@ public abstract class ESIndexLevelReplicationTestCase extends IndexShardTestCase
     ) {
         for (BulkItemRequest itemRequest : request.items()) {
             if (itemRequest.request() instanceof IndexRequest) {
-                ((IndexRequest) itemRequest.request()).process();
+                ((IndexRequest) itemRequest.request()).process(primary.indexSettings().getIndexRouting());
             }
         }
         final PlainActionFuture<Releasable> permitAcquiredFuture = new PlainActionFuture<>();
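
`process` can no longer decide by itself whether to auto-generate a missing `_id`; that decision now belongs to the index's routing, which is why the test harness has to hand it `primary.indexSettings().getIndexRouting()`. A stand-alone sketch of the split, with stand-in types rather than the real `IndexRequest`/`IndexRouting` API:
```
import java.util.UUID;

interface DocRouting {
    String generateIdIfNeeded(String currentId);
}

// Standard indices mint an id for requests that lack one (the real code
// uses a time-based UUID; random is just for the sketch).
final class StandardRouting implements DocRouting {
    public String generateIdIfNeeded(String currentId) {
        return currentId != null ? currentId : UUID.randomUUID().toString();
    }
}

// Time-series indices leave a missing id alone: it gets built from the
// dimensions and @timestamp when the document is parsed.
final class TimeSeriesRouting implements DocRouting {
    public String generateIdIfNeeded(String currentId) {
        return currentId;
    }
}

final class Request {
    String id;

    void process(DocRouting routing) {
        id = routing.generateIdIfNeeded(id);
    }
}
```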


@@ -410,6 +410,7 @@ public abstract class AggregatorTestCase extends ESTestCase {
         IndexShard indexShard = mock(IndexShard.class);
         when(indexShard.shardId()).thenReturn(new ShardId("test", "test", 0));
+        when(indexShard.indexSettings()).thenReturn(indexSettings);
         when(ctx.indexShard()).thenReturn(indexShard);
         return new SubSearchContext(ctx);
     }
@@ -991,7 +992,8 @@
             source.put("doc_values", "true");
         }
-        Mapper.Builder builder = mappedType.getValue().parse(fieldName, source, new MockParserContext());
+        IndexSettings indexSettings = createIndexSettings();
+        Mapper.Builder builder = mappedType.getValue().parse(fieldName, source, new MockParserContext(indexSettings));
         FieldMapper mapper = (FieldMapper) builder.build(MapperBuilderContext.ROOT);
         MappedFieldType fieldType = mapper.fieldType();
@@ -1174,8 +1176,8 @@
     }

     private static class MockParserContext extends MappingParserContext {
-        MockParserContext() {
-            super(null, null, null, Version.CURRENT, null, null, ScriptCompiler.NONE, null, null, null);
+        MockParserContext(IndexSettings indexSettings) {
+            super(null, null, null, Version.CURRENT, null, null, ScriptCompiler.NONE, null, indexSettings, null);
         }

         @Override


@@ -41,7 +41,6 @@ import org.elasticsearch.index.analysis.IndexAnalyzers;
 import org.elasticsearch.index.cache.bitset.BitsetFilterCache;
 import org.elasticsearch.index.fielddata.IndexFieldDataCache;
 import org.elasticsearch.index.fielddata.IndexFieldDataService;
-import org.elasticsearch.index.mapper.IdFieldMapper;
 import org.elasticsearch.index.mapper.MapperRegistry;
 import org.elasticsearch.index.mapper.MapperService;
 import org.elasticsearch.index.query.SearchExecutionContext;
@@ -403,7 +402,7 @@ public abstract class AbstractBuilderTestCase extends ESTestCase {
             similarityService,
             mapperRegistry,
             () -> createShardContext(null),
-            IdFieldMapper.NO_FIELD_DATA,
+            idxSettings.getMode().buildNoFieldDataIdFieldMapper(),
             ScriptCompiler.NONE
         );
         IndicesFieldDataCache indicesFieldDataCache = new IndicesFieldDataCache(nodeSettings, new IndexFieldDataCache.Listener() {


@@ -2314,7 +2314,7 @@ public final class InternalTestCluster extends TestCluster {
         IndexRouting indexRouting = IndexRouting.fromIndexMetadata(clusterState.metadata().getIndexSafe(index));
         while (true) {
             String routing = RandomStrings.randomAsciiLettersOfLength(random, 10);
-            if (shard == indexRouting.indexShard(null, routing, null, null)) {
+            if (shard == indexRouting.indexShard("id", routing, null, null)) {
                 return routing;
             }
         }
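
The first argument can no longer be null because shard selection for a tsdb index reads the routing hash out of the id itself, so the test passes a dummy `"id"`. A rough self-contained sketch of that decode; the URL-safe alphabet, big-endian byte order, and plain modulo bucketing are my assumptions:
```
import java.util.Base64;

final class TsdbShardGuess {
    // A tsdb _id is base64; its leading four bytes carry the routing hash,
    // so resolving the shard only needs those, not the whole document.
    static int shard(String encodedId, int shardCount) {
        byte[] raw = Base64.getUrlDecoder().decode(encodedId);
        int hash = ((raw[0] & 0xff) << 24) | ((raw[1] & 0xff) << 16) | ((raw[2] & 0xff) << 8) | (raw[3] & 0xff);
        return Math.floorMod(hash, shardCount);
    }
}
```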


@@ -261,15 +261,7 @@
         );
         for (int i = 0; i < numDocs; i++) {
             logger.info("Indexing doc [{}]", i);
-            index(
-                client(),
-                leaderIndexName,
-                Integer.toString(i),
-                "@timestamp",
-                basetime + TimeUnit.SECONDS.toMillis(i * 10),
-                "dim",
-                "foobar"
-            );
+            index(client(), leaderIndexName, null, "@timestamp", basetime + TimeUnit.SECONDS.toMillis(i * 10), "dim", "foobar");
         }
         refresh(leaderIndexName);
         verifyDocuments(client(), leaderIndexName, numDocs);
@@ -306,31 +298,30 @@
         pauseFollow(followIndexName);
         resumeFollow(followIndexName);
         try (RestClient leaderClient = buildLeaderClient()) {
-            int id = numDocs;
             index(
                 leaderClient,
                 leaderIndexName,
-                Integer.toString(id),
+                null,
                 "@timestamp",
-                basetime + TimeUnit.SECONDS.toMillis(id * 10),
+                basetime + TimeUnit.SECONDS.toMillis(numDocs * 10),
                 "dim",
                 "foobar"
             );
             index(
                 leaderClient,
                 leaderIndexName,
-                Integer.toString(id + 1),
+                null,
                 "@timestamp",
-                basetime + TimeUnit.SECONDS.toMillis(id * 10 + 10),
+                basetime + TimeUnit.SECONDS.toMillis(numDocs * 10 + 10),
                 "dim",
                 "foobar"
             );
             index(
                 leaderClient,
                 leaderIndexName,
-                Integer.toString(id + 2),
+                null,
                 "@timestamp",
-                basetime + TimeUnit.SECONDS.toMillis(id * 10 + 20),
+                basetime + TimeUnit.SECONDS.toMillis(numDocs * 10 + 20),
                 "dim",
                 "foobar"
             );


@@ -57,7 +57,7 @@ public class ESCCRRestTestCase extends ESRestTestCase {
             document.field((String) fields[i], fields[i + 1]);
         }
         document.endObject();
-        final Request request = new Request("POST", "/" + index + "/_doc/" + id);
+        final Request request = new Request("POST", "/" + index + "/_doc" + (id == null ? "" : "/" + id));
         request.setJsonEntity(Strings.toString(document));
         assertOK(client.performRequest(request));
     }
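
With the nullable id the same helper now covers both shapes of the indexing URL; index names and field values below are examples:
```
// Explicit id: POST /leader_idx/_doc/7
index(client(), "leader_idx", "7", "field", "value");

// No id: POST /tsdb_idx/_doc, letting the server assign or derive the _id
index(client(), "tsdb_idx", null, "@timestamp", "2021-12-29T19:25:05Z", "dim", "foobar");
```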


@@ -83,7 +83,7 @@ public class FollowingEngine extends InternalEngine {
                 index.seqNo(),
                 lookupPrimaryTerm(index.seqNo())
             );
-            return IndexingStrategy.skipDueToVersionConflict(error, false, index.version());
+            return IndexingStrategy.skipDueToVersionConflict(error, false, index.version(), index.id());
         } else {
             return planIndexingAsNonPrimary(index);
         }
@@ -99,7 +99,7 @@
                 delete.seqNo(),
                 lookupPrimaryTerm(delete.seqNo())
             );
-            return DeletionStrategy.skipDueToVersionConflict(error, delete.version(), false);
+            return DeletionStrategy.skipDueToVersionConflict(error, delete.version(), false, delete.id());
         } else {
             return planDeletionAsNonPrimary(delete);
         }
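
Both skip paths now thread the document id into the failure plan, presumably so the version-conflict error can name the document even when its id was derived rather than provided. A minimal stand-in sketch of the shape (not the real `IndexingStrategy`/`DeletionStrategy` signatures):
```
// The extra id parameter travels with the plan so the eventual
// version-conflict message can include it.
record SkipPlan(Exception failure, boolean currentNotFound, long version, String id) {
    static SkipPlan skipDueToVersionConflict(Exception failure, boolean currentNotFound, long version, String id) {
        return new SkipPlan(failure, currentNotFound, version, id);
    }
}
```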