Currently, the raw path is only available from the RestRequest, which makes
it harder to evaluate whether a handler supports streaming. This commit
moves the raw path into the pre-request stage to simplify the
streaming-support logic.
* Initial hello-world entitlements agent
* Respond to Ryan's comments
* License header
* Fix forbidden APIs setup
* Rename EntitlementAgent
* Automated refactor missed one
* Automated rename really let me down here
* Very serious test name
* README files for the new modules
* Use "tasks.named('jar')"
Co-authored-by: Rene Groeschke <rene@breskeby.com>
* Use 'tasks.named('test')'
Co-authored-by: Rene Groeschke <rene@breskeby.com>
* More deferral of gradle tasks
Co-authored-by: Rene Groeschke <rene@breskeby.com>
* Even more deferral
Co-authored-by: Rene Groeschke <rene@breskeby.com>
* Fix gradle syntax for javaagent arg
---------
Co-authored-by: Rene Groeschke <rene@breskeby.com>
We may have shut a shard down while merges were still pending (or
adjusted the merge policy while the shard was down) meaning that after
recovery its segments do not reflect the desired state according to the
merge policy. With this commit we invoke `IndexWriter#maybeMerge()` at
the end of recovery to check for, and execute, any such lost merges.
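As a rough sketch of the idea (the method and call-site names here are illustrative, not the actual recovery code), the check boils down to asking Lucene's `IndexWriter` to re-evaluate its merge policy once recovery completes:
```
import java.io.IOException;
import org.apache.lucene.index.IndexWriter;

// Illustrative only: invoked once recovery has finished, this lets the merge
// policy look at the recovered segments and schedule any merges that were
// pending when the shard was shut down (or that a changed policy now wants).
void checkForLostMerges(IndexWriter indexWriter) throws IOException {
    indexWriter.maybeMerge();
}
```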
Extensible plugins use a custom classloader for other plugin jars. When
extensible plugins were first added, the transport client still existed,
and elasticsearch plugins did not exist in the transport client (at
least not the ones that create classloaders). Yet the transport client
still created a PluginsService. An indirection was used to avoid
creating separate classloaders when the transport client had created the
PluginsService.
The transport client was removed in 8.0, but the indirection still
exists. This commit removes that indirection layer.
Source-only snapshots do not support indices that do not retain the
original source, including indices using synthetic source. This change
adds a YAML test to verify this behavior.
Closes #112735
Now that 8.x is branched from main, all transport version changes must be
backported until 9.0 is ready to diverge. This commit adds a test which
ensures transport versions are densely packed, i.e. there are no gaps at
the granularity at which the version id is bumped (multiples of 1000).
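A minimal sketch of such a density check, assuming ids are bumped in multiples of 1000 and that the test can collect every registered id into a list (the method name below is a stand-in, not the real test):
```
import java.util.List;

// Hypothetical sketch: the ids parameter stands in for however the real test
// collects the registered transport version ids; the assertion just verifies
// that no 1000-sized slot is skipped between consecutive ids.
void assertDenselyPacked(List<Integer> ids) {
    List<Integer> sorted = ids.stream().sorted().toList();
    for (int i = 1; i < sorted.size(); i++) {
        int previousSlot = sorted.get(i - 1) / 1000;
        int currentSlot = sorted.get(i) / 1000;
        assert currentSlot - previousSlot <= 1
            : "gap between transport version ids " + sorted.get(i - 1) + " and " + sorted.get(i);
    }
}
```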
Almost every implementation of `AckedRequest` is an
`AcknowledgedRequest` too, and the distinction is rather confusing.
Moreover the other implementations of `AckedRequest` are a potential
source of `null` timeouts that we'd like to get rid of. This commit
simplifies the situation by dropping the unnecessary `AckedRequest`
interface entirely.
Adding warnings like
```
Date format [MMMM] contains textual field specifiers that could change in JDK 23
```
to failing tests, due to recently introduced changes around the Locale
Provider.
Fixes: #113226
Fixes: #113227
Fixes: #113198
Fixes: #113199
Fixes: #113200
This corrects the "year" unit diffing, switching it from the current integer
subtraction to a chrono subtraction. Consequently, two dates are now (at
least) one year apart only if (at least) a full calendar year separates
them. The previous implementation simply subtracted the year parts of the
dates.
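A worked illustration of the difference, using plain `java.time` rather than the actual ES|QL code:
```
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

LocalDate from = LocalDate.of(2023, 12, 31);
LocalDate to = LocalDate.of(2024, 1, 1);

// Previous behavior, subtracting the year parts: 2024 - 2023 = 1
long byYearPart = to.getYear() - from.getYear();

// Chrono subtraction: not a full calendar year apart, so the difference is 0
long byChrono = ChronoUnit.YEARS.between(from, to);
```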
Note: this diverges from ES SQL's implementation of the same function, which
is itself aligned with MS SQL's implementation and works like an integer
subtraction.
Fixes #112482.
This method is quite hot in some use-cases because it's used by
most string writing to transport messages. Overriding the default
implementation for cases where we can write straight to the
page instead of going through an intermediary buffer speeds up
the method by more than 2x, saving lots of cycles, especially
on transport threads.
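As a rough, hypothetical sketch of the optimization (the field and method names below are illustrative, not the actual StreamOutput code), the fast path encodes ASCII strings directly into the current page and only falls back to the generic path otherwise:
```
// Illustrative only: currentPage/pageOffset stand in for whatever backing
// buffer the stream is writing to, and isAllAscii()/writeStringSlow() for a
// hypothetical fast-path check and the existing default implementation that
// goes through an intermediary buffer.
void writeString(String value) throws IOException {
    int length = value.length();
    if (isAllAscii(value) && pageOffset + length <= currentPage.length) {
        for (int i = 0; i < length; i++) {
            currentPage[pageOffset++] = (byte) value.charAt(i);
        }
    } else {
        writeStringSlow(value); // hypothetical fallback via a scratch buffer
    }
}
```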
If we don't actually execute this phase, we shouldn't fork it
unnecessarily. We can compute the RankFeaturePhaseRankCoordinatorContext
on the transport thread and move on to fetch without forking.
Fetch itself will then fork, and we can run the reduce as part of fetch instead of in
a separate search-pool task (this is how it worked until the recent introduction
of RankFeaturePhase; this commit fixes that regression).
The only usage of `MappedFieldType#extractTerm` comes from `SpanTermQueryBuilder`
which attempts to extract a single term from a generic Query obtained from calling
`MappedFieldType#termQuery`. We can move this logic directly within its only caller,
and instead of using instanceof checks, we can rely on the query visitor API.
This additionally allows us to remove one of the leftover usages of TermInSetQuery#getTermData,
which is deprecated in Lucene.
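A minimal sketch of the visitor-based extraction, with illustrative names rather than the actual SpanTermQueryBuilder code:
```
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryVisitor;

// Illustrative only: rather than instanceof checks on the concrete Query
// class, let the query report its terms through the visitor API and collect
// whatever it reports.
static List<Term> extractTerms(Query query) {
    List<Term> terms = new ArrayList<>();
    query.visit(new QueryVisitor() {
        @Override
        public void consumeTerms(Query q, Term... queryTerms) {
            terms.addAll(List.of(queryTerms));
        }
    });
    return terms;
}
```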
The failure store status is a flag that indicates how the failure store was used, or how it could have been used had it been enabled. The user can be informed about the usage of the failure store in the following way:
When relevant, we add the optional field `failure_store`. The field will be omitted when the use of the failure store is not relevant, for example, if a document was successfully indexed in a data stream, if a failure concerns an index, or if the opType is not index or create. In more detail:
- when we have a “success” create/index response, the field `failure_store` will not be present if the document was indexed into a backing index. Otherwise, if it got stored in the failure store, it will have the value `used`.
- when we have a “rejected” create/index response, meaning the document was not persisted in Elasticsearch, we return the field `failure_store` with one of two values: `not_enabled`, if the document could have ended up in the failure store had it been enabled, or `failed`, if something went wrong and the document was not persisted in the failure store, for example, because the cluster is out of space and in read-only mode.
We chose to make it an optional field to reduce the impact of this field on a bulk response. The value will exist in the Java object, but it will not be returned to the user. The only values that will be displayed are:
- `used`: meaning this document was indexed in the failure store.
- `not_enabled`: meaning this document was rejected but could have been stored in the failure store had it been enabled.
- `failed`: meaning this document was rejected and also failed to be stored in the failure store.
Example:
```
"errors": true,
"took": 202,
"items": [
{
"create": {
"_index": ".fs-my-ds-2024.09.04-000002",
"_id": "iRDDvJEB_J3Inuia2zgH",
"_version": 1,
"result": "created",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"_seq_no": 6,
"_primary_term": 1,
"status": 201,
"failure_store": "used"
}
},
{
"create": {
"_index": "ds-no-fs",
"_id": "hxDDvJEB_J3Inuia2jj3",
"status": 400,
"error": {
"type": "document_parsing_exception",
"reason": "[1:153] failed to parse field [count] of type [long] in document with id 'hxDDvJEB_J3Inuia2jj3'. Preview of field's value: 'bla'",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "For input string: \"bla\""
}
}
},
"failure_store": "not_enabled"
},
{
"create": {
"_index": ".ds-my-ds-2024.09.04-000001",
"_id": "iBDDvJEB_J3Inuia2jj3",
"_version": 1,
"result": "created",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"_seq_no": 7,
"_primary_term": 1,
"status": 201
}
}
]
```
Now that main has a minimum compile version of Java 21, native access no
longer needs JNA. This commit removes JNA as a dependency, and moves the
jdk implementation into the main source set. It also slightly adjusts
the Mrjar plugin so that the main source set also supports preview
features, like the other numbered source sets.
Several `TransportNodesAction` implementations do some kind of top-level
computation in addition to fanning out requests to individual nodes.
Today they all have to do this once the node-level fanout is complete,
but in most cases the top-level computation can happen in parallel with
the fanout. This commit adds support for an additional `ActionContext`
object, created when starting to process the request and exposed to
`newResponseAsync()` at the end, to allow this parallelization.
All implementations use `(Void) null` for this param, except for
`TransportClusterStatsAction` which now parallelizes the computation of
the cluster-state-based stats with the node-level fanout.
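A hypothetical sketch of the pattern (plain `CompletableFuture` with made-up names, not the actual `TransportNodesAction` API): the top-level computation starts when the request arrives and is joined with the node responses once the fan-out completes:
```
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Illustrative only: computeTopLevelStats(), fanOutToNodes(), and the
// Request/Response/TopLevelStats/NodeResponse types are stand-ins for the
// action-specific pieces; the point is that the top-level work no longer
// waits for the node-level fan-out to finish before starting.
Response execute(Request request) {
    CompletableFuture<TopLevelStats> topLevel =
        CompletableFuture.supplyAsync(() -> computeTopLevelStats(request));
    List<NodeResponse> nodeResponses = fanOutToNodes(request); // runs in parallel with topLevel
    return new Response(topLevel.join(), nodeResponses);
}
```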