* Added query name to inference field metadata
* Fix build error
* Added query builder service
* Add query builder service to query rewrite context
* Updated match query to support querying semantic text fields
* Fix build error
* Fix NPE
* Update the POC to rewrite to a bool query when combined inference and non-inference fields
* Separate clause for each inference index (to avoid inference ID clashes)
* Simplify query builder service concept to a single default inference query
* Rename QueryBuilderService, remove query name from inference metadata
* Fix too many rewrite rounds error by injecting booleans in constructors for match query builder and semantic text
* Fix test compilation errors
* Fix tests
* Add yaml test for semantic match
* Add NodeFeature
* Fix license headers
* Spotless
* Updated getClass comparison in MatchQueryBuilder
* Cleanup
* Add Mock Inference Query Builder Service
* Spotless
* Cleanup
* Update docs/changelog/117839.yaml
* Update changelog
* Replace the default inference query builder with a query rewrite interceptor
* Cleanup
* Some more cleanup/renames
* Some more cleanup/renames
* Spotless
* Checkstyle
* Convert List<QueryRewriteInterceptor> to Map keyed on query name, error on query name collisions
* PR feedback - remove check on QueryRewriteContext class only
* PR feedback
* Remove intercept flag from MatchQueryBuilder and replace with wrapper
* Move feature to test feature
* Ensure interception happens only once
* Rename InterceptedQueryBuilderWrapper to AbstractQueryBuilderWrapper
* Add lenient field to SemanticQueryBuilder
* Clean up yaml test
* Add TODO comment
* Add comment
* Spotless
* Rename AbstractQueryBuilderWrapper back to InterceptedQueryBuilderWrapper
* Spotless
* Didn't mean to commit that
* Remove static class wrapping the InterceptedQueryBuilderWrapper
* Make InterceptedQueryBuilderWrapper part of QueryRewriteInterceptor
* Refactor the interceptor to be an internal plugin that cannot be used outside inference plugin
* Fix tests
* Spotless
* Minor cleanup
* C'mon spotless
* Test spotless
* Cleanup InternalQueryRewriter
* Change if statement to assert
* Simplify template of InterceptedQueryBuilderWrapper
* Change constructor of InterceptedQueryBuilderWrapper
* Refactor InterceptedQueryBuilderWrapper to extend QueryBuilder
* Cleanup
* Add test
* Spotless
* Rename rewrite to interceptAndRewrite in QueryRewriteInterceptor
* DOESN'T WORK - for testing
* Add comment
* Getting closer - match on single typed fields works now
* Deleted line by mistake
* Checkstyle
* Fix over-aggressive IntelliJ Refactor/Rename
* And another one
* Move SemanticMatchQueryRewriteInterceptor.SEMANTIC_MATCH_QUERY_REWRITE_INTERCEPTION_SUPPORTED to Test feature
* PR feedback
* Require query name with no default
* PR feedback & update test
* Add rewrite test
* Update server/src/main/java/org/elasticsearch/index/query/InnerHitContextBuilder.java
Co-authored-by: Mike Pellegrini <mike.pellegrini@elastic.co>
---------
Co-authored-by: Mike Pellegrini <mike.pellegrini@elastic.co>
This measurably improves BBQ by adjusting the underlying algorithm to an
optimized per vector scalar quantization.
This is a brand new way to quantize vectors. Instead of there being a
global set of upper and lower quantile bands, these are optimized and
calculated per individual vector. Additionally, vectors are centered on
a common centroid.
This allows for an almost 32x reduction in memory, and even better
recall than before at the cost of slightly increasing indexing time.
Additionally, this new approach is easily generalizable to various other
bit sizes (e.g. 2 bits, etc.). While not taken advantage of yet, we may
update our scalar quantized indices in the future to use this new
algorithm, giving significant boosts in recall.
The recall gains range from 2% to almost 10% on certain datasets, with
an additional 5-10% indexing cost when indexing with HNSW compared
with current BBQ.
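A rough Java sketch of the per-vector idea, assuming plain min/max bounds where the real algorithm optimizes the quantile bounds for each vector; illustrative only, not the actual BBQ code:
```
// Illustrative per-vector scalar quantization: center each vector on a shared
// centroid, then derive the quantization range from that vector alone instead
// of global quantile bands. (The shipped algorithm optimizes these bounds;
// plain min/max stands in here.)
static byte[] quantizePerVector(float[] vector, float[] centroid, int bits) {
    float[] centered = new float[vector.length];
    float min = Float.POSITIVE_INFINITY;
    float max = Float.NEGATIVE_INFINITY;
    for (int i = 0; i < vector.length; i++) {
        centered[i] = vector[i] - centroid[i];
        min = Math.min(min, centered[i]);
        max = Math.max(max, centered[i]);
    }
    int levels = (1 << bits) - 1; // 1 for single-bit, 3 for 2-bit, etc.
    float scale = max > min ? levels / (max - min) : 0f;
    byte[] quantized = new byte[vector.length];
    for (int i = 0; i < vector.length; i++) {
        quantized[i] = (byte) Math.round((centered[i] - min) * scale);
    }
    return quantized;
}
```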
This will make `TransportLocalClusterStateAction` wait for a new state
that is not blocked. This means we need a timeout (again). For
consistency's sake, we're reusing the REST param `master_timeout` for
this timeout as well.
The only class that was using `TransportLocalClusterStateAction` was
`TransportGetAliasesAction`, so its request needed to accept a timeout
again as well.
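A minimal polling sketch of that waiting behaviour, with hypothetical names (the real implementation listens for cluster state updates rather than polling):
```
import java.time.Duration;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

// Hypothetical sketch only, not the actual TransportLocalClusterStateAction code:
// wait until the local cluster state is no longer blocked, failing once the
// re-used master_timeout elapses.
public class AwaitUnblockedState {
    interface ClusterStateView { boolean blocked(); }

    static ClusterStateView await(Supplier<ClusterStateView> current, Duration masterTimeout)
            throws TimeoutException, InterruptedException {
        long deadline = System.nanoTime() + masterTimeout.toNanos();
        while (true) {
            ClusterStateView state = current.get();
            if (state.blocked() == false) {
                return state;
            }
            if (System.nanoTime() >= deadline) {
                throw new TimeoutException("timed out waiting for an unblocked cluster state");
            }
            Thread.sleep(50); // the real code subscribes to state updates instead of polling
        }
    }
}
```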
Historical features are now trivially true on v9 - so we can remove the features, and the check.
Historical features do not affect cluster state, so this has no compatibility restrictions.
* Refactor: treat "maybe" JVM options uniformly
* WIP
* Get entitlement running with bridge all the way through, with qualified
exports
* Cosmetic changes to SystemJvmOptions
* Disable entitlements by default
* Bridge module comments
* Fixup forbidden APIs
* spotless
* Rename EntitlementChecker
* Fixup InstrumenterTests
* exclude recursive dep
* Fix some compliance stuff
* Rename asm-provider
* Stop using bridge in InstrumenterTests
* Generalize readme for asm-provider
* InstrumenterTests doesn't need EntitlementCheckerHandle
* Better javadoc
* Call parseBoolean
* Add entitlement to internal module list
* Docs as requested by Lorenzo
* Changes from Jack
* Rename ElasticsearchEntitlementChecker
* Remove logging javadoc
* exportInitializationToAgent should reference EntitlementInitialization, not EntitlementBootstrap.
They're currently in the same module, but if that ever changes, this code would have become wrong.
* Some suggestions from Mark
---------
Co-authored-by: Ryan Ernst <ryan@iernst.net>
* Adding API to get list of service configurations
* Update docs/changelog/114862.yaml
* Fixing some configurations
* PR feedback -> Stream.of
* PR feedback -> singleton
* Renaming ServiceConfiguration to SettingsConfiguration. Adding TaskSettingsConfiguration
* Adding task type settings configuration to response
* PR feedback
The most relevant ES changes that upgrading to Lucene 10 requires are:
- use the appropriate IOContext
- Scorer / ScorerSupplier breaking changes
- Regex automata are no longer determinized by default
- minimize moved to test classes
- introduce Elasticsearch900Codec
- adjust slicing code according to the added support for intra-segment concurrency
- disable intra-segment concurrency in tests
- adjust accessor methods for many Lucene classes that became a record
- adapt to breaking changes in the analysis area
Co-authored-by: Christoph Büscher <christophbuescher@posteo.de>
Co-authored-by: Mayya Sharipova <mayya.sharipova@elastic.co>
Co-authored-by: ChrisHegarty <chegar999@gmail.com>
Co-authored-by: Brian Seeders <brian.seeders@elastic.co>
Co-authored-by: Armin Braun <me@obrown.io>
Co-authored-by: Panagiotis Bailis <pmpailis@gmail.com>
Co-authored-by: Benjamin Trent <4357155+benwtrent@users.noreply.github.com>
This removes the possibility for a plugin to provide factory retention settings. Factory retention settings have been deprecated and completely replaced by #111972.
Note: this feature is not in use. If someone wants to set global retention they can use the cluster settings as defined in #111972.
Including the cluster state in responses to the `POST _cluster/state`
API was deprecated in #90399 (v8.6.0), requiring callers to pass
`?metric=none` to avoid the deprecation warning. This commit adjusts the
behaviour as promised in v9 so that this API never returns the cluster
state, and deprecates the `?metric` parameter itself.
Closes #88978
Regardless of the JDK version, ES always uses the CLDR locale database from 9.0.0 onwards.
This also removes IsoCalendarDataProvider, which was used to override week-date calculations for the root locale only.
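A small Java example of how the week-date rules that IsoCalendarDataProvider used to override can be inspected; the printed values depend on the locale database in effect, so nothing here is asserted about specific results:
```
import java.time.LocalDate;
import java.time.temporal.WeekFields;
import java.util.Locale;

// Week-date arithmetic for the root locale now comes from the CLDR database
// rather than the IsoCalendarDataProvider override. This simply prints what
// is in effect for the running JVM.
public class RootLocaleWeeks {
    public static void main(String[] args) {
        WeekFields rootWeeks = WeekFields.of(Locale.ROOT);
        System.out.println("first day of week: " + rootWeeks.getFirstDayOfWeek());
        System.out.println("minimal days in first week: " + rootWeeks.getMinimalDaysInFirstWeek());
        LocalDate jan1 = LocalDate.of(2025, 1, 1);
        System.out.println("week of week-based year for 2025-01-01: "
            + jan1.get(rootWeeks.weekOfWeekBasedYear()));
    }
}
```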
Extensible plugins use a custom classloader for other plugin jars. When
extensible plugins were first added, the transport client still existed,
and elasticsearch plugins did not exist in the transport client (at
least not the ones that create classloaders). Yet the transport client
still created a PluginsService. An indirection was used to avoid
creating separate classloaders when the transport client had created the
PluginsService.
The transport client was removed in 8.0, but the indirection still
exists. This commit removes that indirection layer.
* Initial new injector
* Allow createComponents to return classes
* Downsample injection
* Remove more vestiges of subtype handling
* Lowercase logger
* Respond to code review comments
* Only one object per class
* Some additional cleanup incl spotless
* PR feedback
* Missed one
* Rename workQueue
* Remove Injector.addRecordContents
* TelemetryProvider requires us to inject an object using a supertype
* Address Simon's comments
* Clarify the reason for SuppressForbidden
* Make log indentation code less intrusive
Adds to the `GET _cluster/stats` endpoint information about the snapshot
repositories in use, including their types, whether they are read-only
or read-write, and for Azure repositories the kind of credentials in
use.
This commit moves the file preallocation functionality into
NativeAccess. The code is basically the same. One small tweak is that
instead of breaking Java access boundaries in order to get an open file
handle, the new code uses posix open directly.
relates #104876
Introduce an optional `k` param for the knn query.
If `k` is not set, the knn query keeps its previous behaviour:
- `num_candidates` docs are collected from each shard. These `num_candidates` docs
are used for combining with results from other queries and aggregations on each shard.
- docs from all shards are merged to produce the top global `size` results.
If `k` is set, the behaviour is instead the following:
- `k` docs are collected from each shard. These `k` docs are used for
combining results with other queries and aggregations on each shard.
- similarly, docs from all shards are merged to produce the top global `size`
results.
Having the `k` param makes it more intuitive for users to address their needs.
They also don't need to care about, and can skip, the `num_candidates` param for this
query, as it is more of an internal detail that tunes how knn search operates.
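To illustrate the merge step just described, here is a toy Java sketch with hypothetical types (not the actual knn query code): each shard contributes its own top `k` (or `num_candidates`) hits, and the coordinating node merges them into the global top `size`.
```
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Toy illustration of the per-shard collection and global merge; not ES code.
public class TopKMerge {
    record Hit(String id, float score) {}

    // Each inner list is one shard's top `k` hits; the result is the global top `size`.
    static List<Hit> mergeShardHits(List<List<Hit>> perShardTopK, int size) {
        List<Hit> all = new ArrayList<>();
        perShardTopK.forEach(all::addAll);
        all.sort(Comparator.comparingDouble(Hit::score).reversed());
        return all.subList(0, Math.min(size, all.size()));
    }
}
```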
Closes #108473
This commit adds `bit` vector support by adding `element_type: bit` for
vectors. This new element type works for indexed and non-indexed
vectors. Additionally, it works with `hnsw` and `flat` index types. No
quantization-based codec works with this element type; this is
consistent with `byte` vectors.
`bit` vectors accept up to `32768` dimensions and expect vectors
that are being indexed to be encoded either as a hexadecimal string or a
`byte[]` array where each element of the `byte` array represents `8`
bits of the vector.
`bit` vectors support script usage and regular query usage. When
indexed, all comparisons done are `xor` and `popcount` summations (aka,
hamming distance), and the scores are transformed and normalized given
the vector dimensions. Note, indexed bit vectors require `l2_norm` to be
the similarity.
For scripts, `l1norm` is the same as `hamming` distance and `l2norm` is
`sqrt(l1norm)`. `dotProduct` and `cosineSimilarity` are not supported.
Note, the dimensions expected by this element_type must always be
divisible by `8`, and the `byte[]` vectors provided at index time must
have size `dim/8`, where each byte element represents `8` bits of
the vector.
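As a small illustration of the encoding rules above (plain Java, not Elasticsearch code): a 64-dimensional `bit` vector occupies dim/8 = 8 bytes and can equally be written as a hexadecimal string.
```
import java.util.HexFormat;

// A 64-dimensional bit vector is 64 / 8 = 8 bytes; each byte carries 8 bits of
// the vector, and the same value can be supplied as a hexadecimal string.
public class BitVectorEncoding {
    public static void main(String[] args) {
        byte[] vector = new byte[] {(byte) 0b10110001, 0, 0, 0, 0, 0, 0, (byte) 0xFF};
        String hex = HexFormat.of().formatHex(vector);   // "b1000000000000ff"
        byte[] decoded = HexFormat.of().parseHex(hex);
        System.out.println(hex + " -> " + decoded.length * 8 + " bit dimensions");
    }
}
```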
closes: https://github.com/elastic/elasticsearch/issues/48322
* Mechanical package change in IntelliJ
* A couple of manual fixups
* Export plugins.loading to deprecation
* Put plugin-cli in a module so can export PluginsUtils to it.
This adds `hamming` distance, the pop-count of the `xor` of byte vectors, as a
first-class citizen in painless.
For byte vectors, this means that we can compute hamming distances via
script_score (aka, brute-force).
The implementation of `hamming` is the same as the one available in Lucene,
and when Lucene 9.11 is merged, we should update our logic where
applicable to utilize it.
NOTE: this does not yet add hamming distance as a metric for indexed
vectors. This will be a future PR after the Lucene 9.11 upgrade.
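For reference, a plain-Java sketch of the distance being described, assuming two byte vectors of equal length; the shipped implementation mirrors Lucene's:
```
// Hamming distance as described above: the pop-count of the XOR of two byte vectors.
static int hamming(byte[] a, byte[] b) {
    int distance = 0;
    for (int i = 0; i < a.length; i++) {
        distance += Integer.bitCount((a[i] ^ b[i]) & 0xFF);
    }
    return distance;
}
```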
Consistency of file settings is an important invariant. However, when
upgrading from Elasticsearch versions before file settings existed,
cluster state will not yet have the file settings metadata. If the first
node upgraded is not the master node, new nodes will never become ready
while they wait for file settings metadata to exist.
This commit adds a node feature for file settings to guard waiting on
file settings for readiness. Although file settings have existed since
8.4, the feature is not a historical feature because historical features
are not applied to the cluster state that the readiness check inspects.
In this case a historical feature is not needed, since clusters upgrading
from 8.4+ will already contain file settings metadata.
Previously, DocumentSizeReporter reported upon indexing completion in TransportShardBulkAction#onComplete.
This commit renames the method to onIndexingCompleted and moves that reporting to IndexEngine in the serverless plugin.
This will be followed up in a separate PR that will report in an Engine#index subclass (serverless).
This adds a `/_capabilities` REST endpoint for checking the capabilities of a cluster: which endpoints, parameters, and endpoint capabilities the cluster supports.
We introduce the plumbing so that a plugin can provide factory retention. This retention will take effect if there is no global retention provided by the user.
Without a plugin defining the factory retention, Elasticsearch will have no factory retention.
This commit adds an optimised int8 vector distance implementation for aarch64. Additional platforms (e.g. x64) will be added as a follow-up.
The vector distance implementation outperforms Lucene's Panama Vector implementation for binary comparisons by approx 5x (depending on the number of dimensions). It does so by means of compiler intrinsics built into a separate native library and linked via Panama's FFI. Comparisons are performed on off-heap mmap'ed vector data.
The implementation is currently only used during merging of scalar quantized segments, through a custom format ES814HnswScalarQuantizedVectorsFormat, but its usage will likely be expanded over time.
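As a point of reference, this is the plain-Java equivalent of the kind of int8 comparison the native kernel accelerates (illustrative only; the actual native code uses SIMD intrinsics over off-heap, mmap'ed vector data):
```
// Scalar int8 dot product: the baseline that the native aarch64 kernel speeds up.
static int dotProductInt8(byte[] a, byte[] b) {
    int sum = 0;
    for (int i = 0; i < a.length; i++) {
        sum += a[i] * b[i];
    }
    return sum;
}
```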
Co-authored-by: Benjamin Trent <ben.w.trent@gmail.com>
Co-authored-by: Lorenzo Dematté <lorenzo.dematte@elastic.co>
Co-authored-by: Mark Vieira <portugee@gmail.com>
Co-authored-by: Ryan Ernst <ryan@iernst.net>
This cuts over stored fields with `index.codec: best_speed` (default) to ZSTD with level 0 and blocks of at most 128 documents or 14kB, and `index.codec: best_compression` to ZSTD with level 3 and blocks of at most 2,048 documents or 240kB.
Compared with the current codecs, this would yield similar indexing speed, much better space efficiency and similar retrieval speed. Benchmarks on the `elastic/logs` track suggest 10% better storage efficiency and slightly faster ingestion.
The Lucene codec infrastructure records the codec on a per-segment basis and ensures that this change is backward-compatible. Segments will get progressively migrated to ZSTD as they get merged in the background.
Bindings for ZSTD are provided by the Panama FFI API on JDK21+ and JNA on older JDKs.
ZSTD support is currently behind a feature flag, so it won't be enabled immediately when this change gets merged; enabling it will need a follow-up change.
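A hypothetical sketch of the mapping described above, with made-up names rather than the real codec classes:
```
// Hypothetical summary of the two codec modes; names are illustrative only.
enum StoredFieldsCodecMode {
    BEST_SPEED(0, 128, 14 * 1024),          // index.codec: best_speed (default)
    BEST_COMPRESSION(3, 2048, 240 * 1024);  // index.codec: best_compression

    final int zstdLevel;
    final int maxDocsPerBlock;
    final int maxBlockSizeInBytes; // approximate, ~14 kB and ~240 kB

    StoredFieldsCodecMode(int zstdLevel, int maxDocsPerBlock, int maxBlockSizeInBytes) {
        this.zstdLevel = zstdLevel;
        this.maxDocsPerBlock = maxDocsPerBlock;
        this.maxBlockSizeInBytes = maxBlockSizeInBytes;
    }
}
```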
Co-authored-by: Mark Vieira <portugee@gmail.com>
Co-authored-by: Ryan Ernst <ryan@iernst.net>
This enhancement adds a new abstraction to the _search API called "retriever." A
retriever is something that returns top hits. This adds three initial retrievers called
"standard", "knn", and "rrf". The retrievers use a parser-only approach where they
are parsed and then translated into a SearchSourceBuilder to execute the actual
search.
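A conceptual sketch of the parser-only approach, using hypothetical types rather than the real retriever classes: a retriever is parsed from the request and then translated into the source that the search actually executes.
```
// Hypothetical types only, not the real RetrieverBuilder API: a retriever is
// parsed and then folded into the search source that is executed.
record SearchSource(String query, String knnSection) {}

interface Retriever {
    // Translate this retriever's definition of "top hits" into the executable source.
    SearchSource toSearchSource();
}

// The "standard" retriever maps directly onto the query section of the search source.
record StandardRetriever(String query) implements Retriever {
    @Override
    public SearchSource toSearchSource() {
        return new SearchSource(query, null);
    }
}
```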
---------
Co-authored-by: Mayya Sharipova <mayya.sharipova@elastic.co>
This adds the `DataStreamAutoShardingService` that will compute the
optimal number of shards for a data stream and return a recommendation
as to when to apply it (a time interval we call the cool down, which is 0
when the auto sharding recommendation can be applied immediately).
This also introduces a `DataStreamAutoShardingEvent` object that will be
stored in the data stream metadata to indicate the last auto sharding
event that was applied to a data stream. Its cluster state
representation looks like this:
```
"auto_sharding": {
"trigger_index_name": ".ds-logs-nginx-2024.02.12-000002",
"target_number_of_shards": 3,
"event_timestamp": 1707739707954
}
```
The auto sharding service is not used in this PR, so the auto sharding
event will not be stored in the data stream metadata, but the required
infrastructure to configure it is in place.
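A hypothetical sketch of what such a recommendation could look like as a value object (made-up name, not the actual class):
```
import java.time.Duration;

// Hypothetical shape of the service's output: a recommended shard count plus the
// cool-down interval after which it may be applied; a zero cool down means the
// recommendation can be applied immediately.
record AutoShardingRecommendation(int targetNumberOfShards, Duration coolDown) {
    boolean applyNow() {
        return coolDown.isZero();
    }
}
```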
Elasticsearch requires access to some native functions. Historically
this has been achieved with the JNA library. However, JNA is a
complicated, magical library, and has caused various problems booting
Elasticsearch over the years. The new Java Foreign Function and Memory
API allows calling native functions directly from Java. It also
has the advantage of tight integration with hotspot which can improve
performance of these functions (though performance of Elasticsearch's
native calls has never been much of an issue since they are mostly at
boot time).
This commit adds a new native lib that is internal to Elasticsearch. It
is built to use the Foreign Function API starting with Java 21, and
continues to use JNA with Java versions below that.
Only one function, checking whether Elasticsearch is running as root, is
migrated. Future changes will migrate other native functions.
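A minimal sketch of the kind of call the new native lib makes through the Foreign Function and Memory API (a preview API in Java 21, final in later releases); illustrative only, not the actual NativeAccess code, which also keeps the JNA path for older Java versions:
```
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.invoke.MethodHandle;
import static java.lang.foreign.ValueLayout.JAVA_INT;

// Ask libc for the effective user id and treat uid 0 as root.
public class RootCheckSketch {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        MethodHandle geteuid = linker.downcallHandle(
            linker.defaultLookup().find("geteuid").orElseThrow(),
            FunctionDescriptor.of(JAVA_INT));
        int euid = (int) geteuid.invoke();
        System.out.println("running as root: " + (euid == 0));
    }
}
```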