The completion_time is computed as the start_time (already present) plus the 'took'
time set in the SearchResponse object, and only when the isRunning status is false,
since 'took' is set even for in-progress searches.
We use the 'took' field because it is based on relative time rather than absolute wall-clock time,
which can go backwards due to NTP issues. See the comments about the SearchTimeProvider in
TransportSearchAction for details.
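A minimal sketch of that logic, with illustrative names (this is not the actual async search code):
```java
// Illustrative only: derive the completion time from the relative 'took' value,
// and only once the search has finished running.
final class CompletionTimeSketch {
    static Long completionTimeMillis(long startTimeMillis, long tookMillis, boolean isRunning) {
        if (isRunning) {
            // 'took' is set even for in-progress searches, so it can't be used yet
            return null;
        }
        // relative 'took' avoids wall-clock time, which can go backwards (e.g. NTP adjustments)
        return startTimeMillis + tookMillis;
    }
}
```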
Closes #88640
We used to track max_score in collapse when requested (track_scores=true)
or when there is no sort in collapse (see PR #27122), but this feature
was lost through refactoring and other changes.
This PR restores that feature.
Closes #97653
Add a mask_token field to the fill_mask config of _ml/trained_models.
This change enables users and Kibana to retrieve the particular mask token needed for a deployed model by adding a mask_token field to the GET _ml/trained_models API, as an enhancement to support kibana#159577.
There are situations in which the terminate_after functionality causes
collection to keep going although there is nothing left to collect,
with the only goal of incrementing the counter of collected docs and
eventually terminating early, which sets the `terminated_early` flag
in the search response to true.
When doc collection terminates early, we should instead honor the
corresponding `CollectionTerminatedException` that is thrown, and
adjust expectations around the fact that `terminate_after` affects the
actual collection of documents, meaning it can't be honored if
the threshold has not been reached by the time the collection terminates
early for other reasons.
This commit adjusts the QueryPhaseCollector behavior to do that, which
allows for some additional simplifications.
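A minimal sketch of the direction this takes, with illustrative names (this is not the actual QueryPhaseCollector, which tracks the count across segments and collectors): terminate_after can rely on Lucene's own early-termination mechanism instead of collecting docs just to reach the threshold.
```java
import java.io.IOException;

import org.apache.lucene.search.CollectionTerminatedException;
import org.apache.lucene.search.FilterLeafCollector;
import org.apache.lucene.search.LeafCollector;

// Illustrative only: stops collecting once the terminate_after threshold is reached
// by throwing CollectionTerminatedException, which Lucene handles by moving on to
// the next segment rather than propagating an error.
final class TerminateAfterLeafCollector extends FilterLeafCollector {
    private final int terminateAfter; // maximum number of docs to collect
    private int numCollected;         // a shared, cross-segment counter in the real code

    TerminateAfterLeafCollector(LeafCollector in, int terminateAfter) {
        super(in);
        this.terminateAfter = terminateAfter;
    }

    @Override
    public void collect(int doc) throws IOException {
        if (numCollected >= terminateAfter) {
            throw new CollectionTerminatedException();
        }
        numCollected++;
        super.collect(doc);
    }
}
```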
Closes #97269
Today the `current_node` parameter is given in several sample requests
illustrating how to explain an unassigned shard using the cluster
allocation explain API. This doesn't make sense: an unassigned shard has
no `current_node`. This commit removes the misleading parameter in these
cases.
Added a clusterAlias to the Painless execute Request object, so that index
expressions in the request of the form "myremote:myindex" will be parsed to
set clusterAlias to "myremote" and the index to "myindex".
If clusterAlias is null, then the request is executed against a shard on the local cluster, as before.
If clusterAlias is non-null, then the SingleShardTransportAction is sent to the remote cluster,
where it runs the full request (doing remote coordination). Note that the new clusterAlias
field is not Writeable, so when the request is sent to the remote cluster, the remote cluster only
sees the index name, not the clusterAlias (which it wouldn't know how to handle correctly).
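A minimal sketch of the parsing described above, with illustrative names (this is not the actual PainlessExecuteAction code):
```java
// Illustrative only: split an index expression of the form "myremote:myindex" into
// the cluster alias and the index name; no ':' means the local cluster.
final class IndexExpressionSketch {
    static String[] splitIndexExpression(String indexExpression) {
        int separator = indexExpression.indexOf(':');
        if (separator < 0) {
            return new String[] { null, indexExpression }; // local cluster, as before
        }
        return new String[] {
            indexExpression.substring(0, separator),   // clusterAlias, e.g. "myremote"
            indexExpression.substring(separator + 1)   // index, e.g. "myindex"
        };
    }
}
```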
Added a PainlessExecuteIT test that covers cross-cluster calls.
Updated the painless-execute-script end-user docs to indicate support for cross-cluster execution.
Today we document that tasks may not react to cancellations immediately,
but in practice it's surprising to users and kind of a bug if they run
for too long after being cancelled. This commit adds a little extra
detail about the information to collect to troubleshoot such a
situation.
Currently the prefix sent to the _terms_enum endpoint is not limited in size.
Since prefixes run against a keyword field and build automata, this can lead to high memory
consumption and the danger of running out of memory. This change checks the size of the prefix
early in the REST request and throws a validation error if it exceeds
IndexWriter.MAX_TERM_LENGTH, which is the same limit we apply to the length of
keyword field values anyway, so this comes at no loss of functionality.
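A minimal sketch of the kind of early check described above, with illustrative names (this is not the actual _terms_enum request validation):
```java
import java.nio.charset.StandardCharsets;

import org.apache.lucene.index.IndexWriter;

// Illustrative only: reject overly long prefixes before any automaton is built.
final class TermsEnumPrefixCheckSketch {
    static void validatePrefix(String prefix) {
        if (prefix != null && prefix.getBytes(StandardCharsets.UTF_8).length > IndexWriter.MAX_TERM_LENGTH) {
            throw new IllegalArgumentException(
                "prefix is longer than the maximum term length [" + IndexWriter.MAX_TERM_LENGTH + "]"
            );
        }
    }
}
```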
Closes #96572
Discovery, like cluster membership, can also be affected by network-like
issues (e.g. GC/VM pauses, dropped packets, and blocked threads), so this
commit duplicates the troubleshooting info across both places.
- Adds the TOC to the Elasticsearch docs landing page. Removes the right sidebar from the landing page.
- Removes the "View all Elastic docs" link from the bottom of the landing page
Co-authored-by: Abdon Pijpelink <abdon.pijpelink@elastic.co>
* Documentation for time-series geo_line
* Fix incorrect ids in geoline docs
* Some updates from review: added an image of a Kibana map, improved the first example, linked to TSDS, and added a section on line simplification with a link to Wikipedia.
* Diagrams of truncation versus simplification
The query phase uses a number of different collectors and combines them together, pretty much one per feature that the search API exposes: there is a collector for post_filter, one for min_score, one for terminate_after, and one for aggs. While this is very flexible, we always combine these collectors in the same way (e.g. terminate_after must be the first one, post_filter is only applied to top docs collection, min_score is applied to both aggs and top docs). This means that although we could compose collectors flexibly, we apply each feature in the same predictable way, which makes the composability unnecessary. Furthermore, composability causes complexity.
The terminate_after functionality is a clear example of complexity introduced as a consequence of having a complex collector tree: it relies on a multi collector, and throws an exception to force terminating the collection for all other collectors in the tree. If there were a single collector aware of post_filter, min_score and terminate_after at the same time, we could simply reuse Lucene's mechanism to terminate collection early (CollectionTerminatedException) instead of forcing termination by throwing an exception that Lucene does not handle.
Furthermore, MultiCollector is a complex and generic collector for combining multiple collectors together, while we only ever combine at most two collectors with it, which are more or less fixed (i.e. top docs and aggs).
This PR introduces a new top-level collector that is inspired by MultiCollector in that it holds the top docs and the optional aggs collector and applies post_filter, min_score, as well as terminate_after as part of its execution. This gives us a specialized collector for our needs, with less flexibility and more control. This surfaced some strange behaviour, which we may want to change as a follow-up, in how terminate_after makes us collect docs even when all possible collections have already terminated early. The goal of this PR, though, is to reach feature parity with the query phase before the refactoring, without any change of behaviour.
A nice benefit of this work is that it allows us to rely on CollectionTerminatedException for the terminate_after functionality. This simplifies the introduction of multi-threaded collector managers when it comes to handling exceptions.
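As a rough illustration of the single-collector idea, here is a simplified leaf-level sketch with illustrative names (not the actual QueryPhaseCollector): min_score filters docs out of both top docs and aggs, while post_filter (omitted) would gate only the top docs collection.
```java
import java.io.IOException;

import org.apache.lucene.search.LeafCollector;
import org.apache.lucene.search.Scorable;

// Illustrative only: a single leaf collector that feeds both the top docs and the
// optional aggs collectors, applying min_score to both.
final class CombinedLeafCollector implements LeafCollector {
    private final LeafCollector topDocs;
    private final LeafCollector aggs;   // may be null when the request has no aggs
    private final Float minScore;       // may be null when min_score is not set
    private Scorable scorer;

    CombinedLeafCollector(LeafCollector topDocs, LeafCollector aggs, Float minScore) {
        this.topDocs = topDocs;
        this.aggs = aggs;
        this.minScore = minScore;
    }

    @Override
    public void setScorer(Scorable scorer) throws IOException {
        this.scorer = scorer;
        topDocs.setScorer(scorer);
        if (aggs != null) {
            aggs.setScorer(scorer);
        }
    }

    @Override
    public void collect(int doc) throws IOException {
        if (minScore != null && scorer.score() < minScore) {
            return; // min_score excludes the doc from both top docs and aggs
        }
        // post_filter would only gate the topDocs.collect call here
        topDocs.collect(doc);
        if (aggs != null) {
            aggs.collect(doc);
        }
    }
}
```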
This adds IndexVersion to cluster state, alongside node version. This is needed so IndexVersion can be tracked across the cluster, allowing min/max supported index versions to be determined.
Added additional fields to SearchProfileResults for XContent output: node_id, cluster, index, shard_id.
The existing composite ID is parsed using the new parseProfileShardId method, which reverses
the SearchShardTarget.toString method.
No new information is added here, merely the splitting out of the four pieces of information
in the profile shards "composite" id created by SearchShardTarget.toString (see the sketch after the example output below).
Profile/shards output now has the form:
```
"profile": {
  "shards": [
    {
      "id": "[2m7SW9oIRrirdrwirM1mwQ][blogs][0]",
      "node_id": "2m7SW9oIRrirdrwirM1mwQ",
      "shard_id": "0",
      "index": "blogs",
      "cluster": "(local)",
      "searches": [ ... ]
      ...
    },
    {
      "id": "[UngEVXTBQL-7w5j_tftGAQ][remote1:blogs][2]",
      "node_id": "UngEVXTBQL-7w5j_tftGAQ",
      "shard_id": "2",
      "index": "blogs",
      "cluster": "remote1",
      "searches": [ ... ]
      ...
```
where the latter shard is on a remote cluster, which you can see from the prefix on the index name.
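A minimal sketch of what splitting the composite id could look like, with illustrative names (this is not the actual parseProfileShardId implementation):
```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only: reverse the "[nodeId][index][shardId]" form produced by
// SearchShardTarget.toString; a "cluster:index" prefix yields the cluster alias.
final class ProfileShardIdSketch {
    private static final Pattern COMPOSITE_ID = Pattern.compile("\\[(.+?)\\]\\[(.+?)\\]\\[(\\d+)\\]");

    static String[] split(String compositeId) {
        Matcher m = COMPOSITE_ID.matcher(compositeId);
        if (m.matches() == false) {
            throw new IllegalArgumentException("unexpected composite id: " + compositeId);
        }
        String nodeId = m.group(1);
        String index = m.group(2);
        String shardId = m.group(3);
        String cluster = "(local)";
        int colon = index.indexOf(':');
        if (colon >= 0) { // e.g. "remote1:blogs"
            cluster = index.substring(0, colon);
            index = index.substring(colon + 1);
        }
        return new String[] { nodeId, cluster, index, shardId };
    }
}
```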
Partially addresses #25896
Added yamlRestTest for the new fields in the profile response.