* Remove as many assembler-related redirects as possible
* Update docs/redirects.yml
* Delete more unused temp redirects
* Remove more redirects
* Remove all redirects to see remaining errors
In particular:
* Remove all links (both asciidoc and markdown) from the JSON definition files.
* This required a two-phase edit: first converting asciidoc links to markdown links, and then replacing the markdown links with their plain display text. Two phases were needed because asciidoc links do not carry the display text, and because some links were already markdown.
* Split predicates into is_null and is_not_null
* We kept the old combined version because the main docs still use it, so now we have both the combined and the separate versions, and Kibana can select whichever version it wants.
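For illustration, the predicates that are now documented separately are the standard ES|QL `IS NULL` and `IS NOT NULL`; a minimal query using both through the `_query` endpoint, with placeholder index and field names:

```
POST /_query
{
  "query": "FROM my-index | WHERE last_name IS NULL OR first_name IS NOT NULL | LIMIT 10"
}
```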
With #123610 we disabled parallel collection for field and script sorted top hits,
aligning its behaviour with that of top level search. This was mainly to work around
a bug in script sorting that did not support inter-segment concurrency.
The bug with script sort has been fixed with #123757 and concurrency re-enabled for it.
While sort by field is not optimized for search concurrency, top hits benefits from it,
and disabling concurrency for sort by field in top hits caused performance
regressions in our nightly benchmarks.
This commit re-enables concurrency for top hits when sort by field is used. This
reintroduces a discrepancy between top level search and top hits, in that concurrency
is applied for top hits even though sort by field normally disables it. The key differences
are the context in which sorting is applied, and the fact that concurrency is disabled
for top level searches only for performance reasons, not for functional ones.
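For reference, the affected case is a `top_hits` aggregation sorted by a field, for example (index, field, and aggregation names are illustrative):

```
POST /my-index/_search
{
  "size": 0,
  "aggs": {
    "by_category": {
      "terms": { "field": "category" },
      "aggs": {
        "latest": {
          "top_hits": {
            "sort": [ { "timestamp": { "order": "desc" } } ],
            "size": 3
          }
        }
      }
    }
  }
}
```

With this change, such a request may again be collected concurrently across segments, while an equivalent top level search sorted by `timestamp` still disables concurrency.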
Optimize calculating the usage of ILM policies in the `GET _ilm/policy` and `GET _ilm/policy/<policy_id>` endpoints by extracting a separate class that pre-computes some parts on initialization (i.e. only once per request) and then uses those pre-computed parts when calculating the usage for an individual policy. By precomputing these parts, the class trades a small amount of additional memory for a significant improvement in overall processing time.
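As a hedged illustration of the affected endpoints (policy, index, and template names are placeholders, and the response values are illustrative), the computed usage shows up in the `in_use_by` section of the response:

```
GET _ilm/policy/my-policy
```

```
{
  "my-policy": {
    "version": 1,
    "modified_date": "2025-03-27T12:00:00.000Z",
    "policy": {
      "phases": {}
    },
    "in_use_by": {
      "indices": [ "my-index-000001" ],
      "data_streams": [],
      "composable_templates": [ "my-template" ]
    }
  }
}
```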
This change moves the query phase to a single round-trip per node, just like can_match and field_caps already work.
As a result of executing multiple shard queries from a single request, we can also partially reduce each node's query results on the data node side before responding to the coordinating node.
Consequently, this change significantly reduces the impact of network latency on end-to-end query performance, and reduces both the amount of work done (memory and CPU) on the coordinating node and the network traffic, by factors of up to the number of shards per data node.
Benchmarking shows improvements of up to orders of magnitude in heap usage and network traffic when querying across a larger number of shards.
This primarily splits the old preview:true warning from the newer applies_to approach. Since all of our current applies_to examples are actually just behaviour modifications of existing functions, we do not use the official docs {applies_to} syntax. However, there is code to make use of that syntax in the case where we have an entirely new function that will appear in a new version.
Co-authored-by: Alexander Spies <alexander.spies@elastic.co>
Previously, if an aggregate_metric_double field was present among the fields and
you tried to sort on any field in ES|QL (not necessarily on the agg_metric itself),
it would break the results.
This commit doesn't add support for sorting _on_ aggregate_metric_double
(it is unclear what aspect would be sorted), but it fixes the previous
behavior.
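A hedged sketch of the kind of query that previously broke (index and field names are placeholders; assume the index also maps an aggregate_metric_double field that is not referenced in the sort):

```
POST /_query
{
  "query": "FROM my-metrics-index | SORT @timestamp DESC | LIMIT 10"
}
```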
Appends the FailedShardEntry request to the 'shard-failed'
task source string in ShardFailedTransportHandler.messageReceived().
This information will now be available in the 'source' string for
shard failed task entries in the Cluster Pending Tasks API response.
This source string change matches what is done in the
ShardStartedTransportHandler.
Closes #102606.
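As a hedged illustration, such an entry might appear in the pending tasks response as follows (the appended failure details shown here are illustrative, not the literal format):

```
GET _cluster/pending_tasks
```

```
{
  "tasks": [
    {
      "insert_order": 101,
      "priority": "HIGH",
      "source": "shard-failed [my-index][0], ... (failure details from the FailedShardEntry request)",
      "time_in_queue_millis": 86,
      "time_in_queue": "86ms"
    }
  ]
}
```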
This patch builds on the work in #113757, #122999, #124594, and #125529 to
natively store array offsets for unsigned long fields instead of falling
back to ignored source when `synthetic_source_keep: arrays` is set.
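A minimal mapping sketch of the case this applies to (index and field names are placeholders, and how synthetic source is enabled may vary by version):

```
PUT /my-index
{
  "settings": {
    "index.mapping.source.mode": "synthetic"
  },
  "mappings": {
    "properties": {
      "counters": {
        "type": "unsigned_long",
        "synthetic_source_keep": "arrays"
      }
    }
  }
}
```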
This allows a `rescore_vector: {oversample: 0}` to indicate bypassing
oversampling and rescoring.
This is useful for:
- Updating a quantized mapping to turn off automatic rescoring
- Bypassing oversampling at query time in an ad-hoc manner if it is on by default in the mapping
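A hedged query-time example (index, field, and vector values are placeholders):

```
POST /my-index/_search
{
  "knn": {
    "field": "embedding",
    "query_vector": [0.12, -0.43, 0.91],
    "k": 10,
    "num_candidates": 50,
    "rescore_vector": { "oversample": 0 }
  }
}
```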
closes: https://github.com/elastic/elasticsearch/issues/125157
Load field caps from store if they haven't been initialised through a refresh yet.
Keep the plain reads so as not to affect performance characteristics too much on the good path, but protect against confusing races when loading field infos now (these probably should have been ordered stores in the first place, but they were safe due to other locks/volatiles on the refresh path).
Closes #125483
Adds the `original_types` to the description of ESQL's `unsupported`
fields. This looks like:
```
{
  "name" : "a",
  "type" : "unsupported",
  "original_types" : [
    "long",
    "text"
  ]
}
```
for union types. And like:
```
{
  "name" : "a",
  "type" : "unsupported",
  "original_types" : [
    "date_range"
  ]
}
```
for truly unsupported types.
This information is useful for the UI. For union types it can suggest
that users append a cast.
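For example, for the union-typed field above, a UI might suggest a cast like the following (the index pattern and cast target are illustrative):

```
POST /_query
{
  "query": "FROM my-index-* | EVAL a = a::long | WHERE a > 100 | LIMIT 10"
}
```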
The dataset name for the deprecation logs index was previously renamed
from `deprecation.elasticsearch` to `elasticsearch.deprecation` in
order to follow the pattern of `product.group`. The deprecation index
template, however, was not updated. This causes indexing errors once
upgraded to 9.0 due to the dataset name having changed on a
constant_keyword field. In order to avoid that mismatch, this commit
renames the deprecation indexing datastream to match the dataset name.
The old template is kept in place, but marked as deprecated, so that any
deprecation logs written during upgrading to 9.x will continue to be
indexed into the old datastream.
Closes #125445
Follow up to #125345. If the query contained both a nanos and a millis comparison, we were formatting the dates incorrectly for the Lucene push-down. This PR adds a test and a fix for that case.
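A hedged sketch of the kind of query affected, comparing a `date_nanos` field and a millisecond `date` field in the same query (index and field names are placeholders):

```
POST /_query
{
  "query": "FROM my-index | WHERE nanos_field >= \"2023-10-23T12:00:00.000Z\" AND millis_field <= \"2023-10-23T13:00:00.000Z\""
}
```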
---------
Co-authored-by: elasticsearchmachine <infra-root+elasticsearchmachine@elastic.co>
This patch builds on the work in #113757, #122999, and #124594 to natively
store array offsets for boolean fields instead of falling back to ignored
source when `synthetic_source_keep: arrays` is set.
Fixes #125439
We were incorrectly formatting nanosecond dates when building Lucene queries. We had missed this in our testing because none of the CSV tests were running against Lucene. This happened because the date nanos test data includes multivalue fields. Our warning behavior for multivalue fields is inconsistent between queries run in Lucene and queries run in pure ES|QL without pushdown. Our warning tests, however, require that the specified warnings be present in all execution paths. When we first built the date nanos CSV tests, we worked around this by always using an MV function to unpack the multivalue fields. But we forgot that using an MV function prevents the entire query from being pushed down to Lucene, and thus that path wasn't being tested.
In this PR, I've duplicated many of the tests to have a version that doesn't use the MV function, and uses `warningRegex` instead of `warning`. The regex version does not fail if the header is absent, so it's safe to use in both modes. Rewriting the tests this way revealed several situations in which this bug can manifest, all of which are fixed in this PR. I cannot be confident that there aren't more paths that can trigger this bug and aren't covered by these tests, but I haven't found any yet.
I've left some trace level logging that I found helpful while debugging this.
---------
Co-authored-by: elasticsearchmachine <infra-root+elasticsearchmachine@elastic.co>
Instead of processing cluster state updates on the cluster state applier
thread, we fork to a different thread, so that the time ILM spends processing
a cluster state update does not affect the speed at which the cluster
can apply new cluster states. That does not mean we don't need to
optimize ILM's cluster state processing, as the overall amount of
processing is generally unaffected by this fork approach (unless we skip
some cluster states), but it does mean we're saving a significant amount
of processing on the critical cluster state applier thread.
Additionally, by running ILM's state processing asynchronously, we allow
ILM to skip some cluster states if the management thread pool is
saturated or ILM's processing is taking too long.