Moving https://github.com/elastic/elasticsearch/pull/103472 here.
---
👋 howdy, team!
Could we include "XFS quotas" as an example under "depending on OS or process level restrictions"? That would make this doc easier to find in search and help users understand how to investigate this potential lever's impact.
TIA!
* During ML maintenance, reset jobs in the reset state without a corresponding task.
* Update docs/changelog/106062.yaml
* Fix race condition in MlDailyMaintenanceServiceTests
* Fix log level
elasticsearch-certutil csr generates a private key and a certificate
signing request (CSR) file. It has always accepted the "--pass" command
line option, but ignored it and always generated an unencrypted private
key.
This commit fixes the utility so the --pass option is respected and the
private key is encrypted.
The tests for loading `Block`s from scripted fields could fail randomly
when the `RandomIndexWriter` shuffles the documents. This disables
merging and adds the documents as a block so their order is consistent.
Closes #106044
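For reference, a minimal sketch of the Lucene pattern this relies on (illustrative only, not the actual test code): with merging disabled and the documents added via `addDocuments`, they land in a single block whose relative order is deterministic.

```java
import java.util.List;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.NoMergePolicy;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class BlockOrderExample {
    public static void main(String[] args) throws Exception {
        try (Directory dir = new ByteBuffersDirectory()) {
            // Disable merging so segments are never rewritten behind our back.
            IndexWriterConfig config = new IndexWriterConfig().setMergePolicy(NoMergePolicy.INSTANCE);
            try (IndexWriter writer = new IndexWriter(dir, config)) {
                Document d1 = new Document();
                d1.add(new StringField("id", "1", Field.Store.YES));
                Document d2 = new Document();
                d2.add(new StringField("id", "2", Field.Store.YES));
                // RandomIndexWriter in Lucene's test framework may shuffle
                // independently added documents, but documents added as one
                // block stay contiguous and in insertion order.
                writer.addDocuments(List.of(d1, d2));
            }
        }
    }
}
```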
This computation involves parsing all the pipeline metadata on the
cluster applier thread. It's pretty expensive if there are lots of
pipelines, and seems mostly unnecessary because it's only needed for a
validation check when creating new processors.
* Reset job if existing reset fails (#106020)
* Try again to reset a job if waiting for completion of an existing reset task fails.
* Update docs/changelog/106020.yaml
* Update 106020.yaml
* Update docs/changelog/106020.yaml
* Improve code
* Trigger rebuild
These Java versions are EOL and no longer supported by Elasticsearch, so
we can remove them from our CI testing. We only need to support LTS
versions >= 17 and the latest bundled JDK version.
This uses the dedicated index block API in the docs for the shrink, split, and clone APIs, rather than putting the block in as a setting directly. The specialized API will wait for ongoing operations to finish, which is better during indexing operations.
Resolves #105831
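A hedged sketch of the difference using the low-level Java REST client (the host and index name are placeholders):

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class IndexBlockExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Preferred: the dedicated block API waits for ongoing write
            // operations to finish before the block takes effect.
            client.performRequest(new Request("PUT", "/my-index/_block/write"));

            // Discouraged for this use case: applying the block as a raw
            // setting does not wait for in-flight operations.
            Request settings = new Request("PUT", "/my-index/_settings");
            settings.setJsonEntity("{\"index.blocks.write\": true}");
            // client.performRequest(settings);
        }
    }
}
```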
When using a pre-filter with nested kNN vectors, it's treated like a
top-level filter, meaning it is applied over parent document fields.
However, there are times when a query filter is applied that may or may
not match internal nested or non-nested docs. We failed to handle this
case correctly, and users would receive an error.
closes: https://github.com/elastic/elasticsearch/issues/105901
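For illustration, a sketch of the kind of request that could previously fail (the index, field names, and vector here are hypothetical): a kNN search over a nested vector field with a pre-filter that may or may not match the nested docs.

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class NestedKnnFilterExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request search = new Request("POST", "/my-index/_search");
            // `paragraphs.vector` stands in for a dense_vector under a nested
            // `paragraphs` field; the pre-filter references a top-level
            // (parent) field and so may not match the internal nested docs.
            search.setJsonEntity("""
                {
                  "knn": {
                    "field": "paragraphs.vector",
                    "query_vector": [0.1, 0.2, 0.3],
                    "k": 5,
                    "num_candidates": 50,
                    "filter": { "term": { "category": "science" } }
                  }
                }
                """);
            Response response = client.performRequest(search);
            System.out.println(response.getStatusLine());
        }
    }
}
```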
Previously the `categorize_text` aggregation could throw an
exception if nested as a sub-aggregation of another aggregation
that produced empty buckets at the end of its results. This
change avoids this possibility.
Fixes #105836
This change ensures that the matches implementation of the `SourceConfirmedTextQuery` only checks the current document instead of calling advance on the two phase iterator. The latter tries to find the first doc that matches the query instead of restricting the search to the current doc. This can lead to abnormally slow highlighting if the query is very restrictive and the highlight is done on a non-matching document.
Closes #103298
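A rough sketch of the pattern behind the fix (simplified, not the actual implementation): check the two-phase iterator against the current document only, instead of letting it scan ahead for the next match.

```java
import java.io.IOException;

import org.apache.lucene.search.TwoPhaseIterator;

public class MatchesOnCurrentDoc {
    /**
     * Returns whether {@code doc} matches, without scanning past it.
     * Advancing a fresh iterator to {@code doc} and checking the returned
     * docid keeps the work bounded to the current document; letting the
     * iterator advance freely could walk the whole segment looking for the
     * next matching document, which is what made highlighting slow.
     */
    static boolean matchesCurrentDoc(TwoPhaseIterator twoPhase, int doc) throws IOException {
        return twoPhase.approximation().advance(doc) == doc && twoPhase.matches();
    }
}
```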
* ESQL: fix single valued query tests (#105986)
In some cases the tests for our lucene query that makes sure a field is
single-valued were asserting incorrect things about the stats that come
from the query. That was failing the test from time to time. This fixes
the assertion in those cases.
Closes #105918
* ESQL: Reenable svq tests
We fixed the test failure in #105986 but this snuck in.
Closes #105952
The GeoIP endpoint does not use the xpack http client. The GeoIP downloader uses the JDK's built-in cacerts.
If a customer is using a custom HTTPS endpoint, they need to provide the CA certificate in the JDK's cacerts, whether that is the JDK we bundle or their own JDK. Otherwise they will see something like
```
...PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target...
```
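As a hedged illustration (the endpoint URL is a placeholder), a quick way to check whether the running JDK's cacerts already trusts an endpoint:

```java
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLHandshakeException;

public class TruststoreCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://my-custom-geoip-endpoint.example.com/");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        try {
            conn.connect(); // succeeds only if the default cacerts trusts the server's CA
            System.out.println("trusted");
        } catch (SSLHandshakeException e) {
            // Same failure mode as the downloader: the CA must be imported
            // into the JDK's cacerts (e.g. with keytool) for downloads to work.
            System.out.println("not trusted: " + e.getMessage());
        } finally {
            conn.disconnect();
        }
    }
}
```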
(cherry picked from commit 30828a5680)
Co-authored-by: Jennie Soria <predogma@users.noreply.github.com>
* Update README.asciidoc
Updating the readme with the latest blurb from PMM and a reference to RAG, plus a few links to Search Labs content.
* Tweak verbiage
---------
Co-authored-by: Liam Thompson <32779855+leemthompo@users.noreply.github.com>
(cherry picked from commit 0b664dd4d4)
Co-authored-by: Serena Chou <serenachou@users.noreply.github.com>
First check whether the whole cluster supports a specific indicator (feature) before we mark that indicator as "unknown" when (meta)data is missing from the cluster state.
We seem to have a couple of checks to make sure we delete the data
stream when the last index reaches the delete step; however, these
checks seem a bit contradictory.
Namely, the first check makes use of `Index` equality (UUID included)
and the second just checks the index name. So if a data stream with just
one index (the write index) is restored from snapshot (different UUID)
we would've failed the first index equality check, gone through the
second check `dataStream.getWriteIndex().getName().equals(indexName)`,
and failed the delete step (in a non-retryable way :( ) because we don't
want to delete the write index of a data stream (but we really do if the
data stream has only one index).
This PR makes 2 changes:
1. Use index name equality everywhere in the step (we already looked up
the index abstraction and the parent data stream, so we know for sure
the managed index is part of the data stream).
2. Do not throw an exception when we get here via a write index that is
NOT the last index in the data stream, but report the exception so we
keep retrying this step (i.e. this enables our users to simply execute a
manual rollover, and the index is deleted by ILM eventually on retry).
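A minimal sketch of the resulting decision logic (simplified types and hypothetical names; the real step works against the cluster state):

```java
import java.util.List;

public class DeleteStepSketch {
    /**
     * Decide what the delete step should do for {@code managedIndex},
     * comparing indices by name only, since UUIDs change across snapshot
     * restores.
     */
    static String decide(List<String> dataStreamIndices, String writeIndexName, String managedIndex) {
        boolean isWriteIndex = writeIndexName.equals(managedIndex);
        if (isWriteIndex && dataStreamIndices.size() == 1) {
            // The write index is the only index: delete the whole data stream.
            return "DELETE_DATA_STREAM";
        }
        if (isWriteIndex) {
            // Write index but not the last one: report and retry instead of
            // failing the step permanently (a manual rollover unblocks it).
            return "RETRYABLE_ERROR";
        }
        return "DELETE_INDEX";
    }

    public static void main(String[] args) {
        System.out.println(decide(List.of(".ds-logs-000001"), ".ds-logs-000001", ".ds-logs-000001"));
    }
}
```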
The heap attack tests hit OOMs because the circuit breaker was
under-accounted. This was because the ProjectOperator retained
references to released blocks. Consequently, the released blocks
couldn't be GCed although we had already decreased their memory usage
in the circuit breaker.
Relates #10563
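A minimal sketch of the general fix pattern (the `Block` type here is a stand-in, not the actual operator code): drop references as soon as blocks are released, so the breaker accounting and the heap agree.

```java
import java.util.Arrays;

public class ReleaseAndClear {
    interface Block extends AutoCloseable {
        @Override
        void close(); // releases the block and decrements the circuit breaker
    }

    static void releaseAll(Block[] blocks) {
        for (Block block : blocks) {
            if (block != null) {
                block.close();
            }
        }
        // Without this, the array keeps the released blocks reachable: the
        // breaker says the memory is free, but the GC cannot reclaim it.
        Arrays.fill(blocks, null);
    }
}
```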
We only had a few mentions of 429 handling; this documents our expectation generically.
Co-authored-by: David Turner <david.turner@elastic.co>
Co-authored-by: Liam Thompson <32779855+leemthompo@users.noreply.github.com>
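For clients, the expectation boils down to retrying with backoff when a 429 is returned. A minimal sketch (the `sendRequest` call is a placeholder for a real HTTP request):

```java
import java.util.concurrent.ThreadLocalRandom;

public class RetryOn429 {
    /** Placeholder for a real HTTP call that returns the status code. */
    static int sendRequest() {
        return 429;
    }

    public static void main(String[] args) throws InterruptedException {
        long backoffMillis = 100;
        for (int attempt = 0; attempt < 8; attempt++) {
            if (sendRequest() != 429) {
                return; // success (or a non-retryable error handled elsewhere)
            }
            // Exponential backoff with jitter, capped at 30 seconds.
            Thread.sleep(backoffMillis + ThreadLocalRandom.current().nextLong(backoffMillis));
            backoffMillis = Math.min(backoffMillis * 2, 30_000);
        }
        throw new IllegalStateException("still throttled after retries");
    }
}
```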
The build handles platform specific code which may be for arm or x86.
Yet there are multiple ways to describe 64-bit x86, and the build
converts between them in several places. This commit consolidates on
the x64 nomenclature in most places, except where necessary (eg ML still
uses x86_64).
relates #105715
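A minimal sketch of the kind of normalization involved (illustrative only, not the build's actual code):

```java
public class Arch {
    /** Map the common aliases for 64-bit x86 onto the single name "x64". */
    static String normalize(String arch) {
        return switch (arch) {
            case "x86_64", "amd64", "x64" -> "x64";
            case "aarch64", "arm64" -> "aarch64";
            default -> throw new IllegalArgumentException("unsupported architecture: " + arch);
        };
    }

    public static void main(String[] args) {
        System.out.println(normalize(System.getProperty("os.arch")));
    }
}
```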