[Docs] Add docs for new semantic text query functionality (#119520) (#119883)

* Update docs with new semantic text functionality

* PR feedback

* PR feedback

* PR Feedback
Kathleen DeRusso 2025-01-09 11:39:00 -05:00 committed by GitHub
parent c505da9453
commit 13c4f5d593
4 changed files with 13 additions and 6 deletions


@@ -13,6 +13,7 @@ Long passages are <<auto-text-chunking, automatically chunked>> to smaller secti
The `semantic_text` field type specifies an inference endpoint identifier that will be used to generate embeddings.
You can create the inference endpoint by using the <<put-inference-api>>.
This field type and the <<query-dsl-semantic-query,`semantic` query>> type make it simpler to perform semantic search on your data.
The `semantic_text` field type may also be queried with <<query-dsl-match-query, match>>, <<query-dsl-sparse-vector-query, sparse_vector>> or <<query-dsl-knn-query, knn>> queries.
If you don't specify an inference endpoint, the `inference_id` field defaults to `.elser-2-elasticsearch`, a preconfigured endpoint for the elasticsearch service.
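For illustration, a minimal sketch of a mapping that relies on this default (the index name `my-index` and field name `inference_field` are placeholders, not part of the change above):

[source,console]
----
PUT my-index
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text"
      }
    }
  }
}
----

Because no `inference_id` is set, the field uses the preconfigured `.elser-2-elasticsearch` endpoint.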


@@ -8,7 +8,8 @@ Finds the _k_ nearest vectors to a query vector, as measured by a similarity
metric. _knn_ query finds nearest vectors through approximate search on indexed
dense_vectors. The preferred way to do approximate kNN search is through the
<<knn-search,top level knn section>> of a search request. _knn_ query is reserved for
expert cases, where there is a need to combine this query with other queries.
expert cases, where there is a need to combine this query with other queries, or
perform a kNN search against a <<semantic-text, semantic_text>> field.
[[knn-query-ex-request]]
==== Example request
@@ -77,7 +78,8 @@ POST my-image-index/_search
+
--
(Required, string) The name of the vector field to search against. Must be a
<<index-vectors-knn-search, `dense_vector` field with indexing enabled>>.
<<index-vectors-knn-search, `dense_vector` field with indexing enabled>>, or a
<<semantic-text, `semantic_text` field>> with a compatible dense vector inference model.
--
`query_vector`::
@@ -93,6 +95,7 @@ Either this or `query_vector_builder` must be provided.
--
(Optional, object) Query vector builder.
include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=knn-query-vector-builder]
If all queried fields are of type <<semantic-text, semantic_text>>, the inference ID associated with the `semantic_text` field may be inferred.
--
`k`::
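For illustration, a rough sketch of a `knn` query against a `semantic_text` field backed by a dense vector inference endpoint (index and field names are placeholders). Because the queried field is `semantic_text`, the `query_vector_builder` can omit `model_id` and rely on the inferred inference ID described above:

[source,console]
----
POST my-index/_search
{
  "query": {
    "knn": {
      "field": "inference_field",
      "k": 10,
      "num_candidates": 100,
      "query_vector_builder": {
        "text_embedding": {
          "model_text": "What is Elasticsearch?"
        }
      }
    }
  }
}
----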


@@ -10,6 +10,10 @@ provided text is analyzed before matching.
The `match` query is the standard query for performing a full-text search,
including options for fuzzy matching.
The `match` query also works against <<semantic-text, semantic_text>> fields.
However, when running a `match` query against a `semantic_text` field, options that
specifically target lexical search, such as `fuzziness` or `analyzer`, are ignored.
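For illustration, a sketch of such a query (placeholder index and field names); any lexical-only options like `fuzziness` would simply be ignored here:

[source,console]
----
GET my-index/_search
{
  "query": {
    "match": {
      "inference_field": {
        "query": "What is Elasticsearch?"
      }
    }
  }
}
----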
[[match-query-ex-request]]
==== Example request
@@ -296,4 +300,3 @@ The example above creates a boolean query:
that matches documents with the term `ny` or the conjunction `new AND york`.
By default the parameter `auto_generate_synonyms_phrase_query` is set to `true`.


@@ -11,7 +11,8 @@ This can be achieved with one of two strategies:
- Using an {nlp} model to convert query text into a list of token-weight pairs
- Sending in precalculated token-weight pairs as query vectors
These token-weight pairs are then used in a query against a <<sparse-vector,sparse vector>>.
These token-weight pairs are then used in a query against a <<sparse-vector,sparse vector>>
or a <<semantic-text, semantic_text>> field with a compatible sparse inference model.
At query time, query vectors are calculated using the same inference model that was used to create the tokens.
When querying, these query vectors are ORed together with their respective weights, which means scoring is effectively a <<vector-functions-dot-product,dot product>> calculation between stored dimensions and query dimensions.
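For illustration, a sketch of the second strategy, with precalculated token-weight pairs passed as a `query_vector` (the field name and weights are made up); scoring then reduces to the dot product of the stored and query dimensions:

[source,console]
----
GET my-index/_search
{
  "query": {
    "sparse_vector": {
      "field": "ml.tokens",
      "query_vector": {
        "elasticsearch": 2.2,
        "search": 1.4,
        "vector": 0.8
      }
    }
  }
}
----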
@@ -65,6 +66,7 @@ GET _search
It must be the same inference ID that was used to create the tokens from the input text.
Only one of `inference_id` and `query_vector` is allowed.
If `inference_id` is specified, `query` must also be specified.
If all queried fields are of type <<semantic-text, semantic_text>>, the inference ID associated with the `semantic_text` field will be inferred.
`query`::
(Optional, string) The query text you want to use for search.
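For illustration, a sketch of the first strategy run against a <<semantic-text, semantic_text>> field (placeholder index and field names); `inference_id` is omitted because it can be inferred from the field's configured endpoint, as noted above:

[source,console]
----
GET my-index/_search
{
  "query": {
    "sparse_vector": {
      "field": "inference_field",
      "query": "What is Elasticsearch?"
    }
  }
}
----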
@@ -291,5 +293,3 @@ GET my-index/_search
//TEST[skip: Requires inference]
NOTE: When performing <<modules-cross-cluster-search, cross-cluster search>>, inference is performed on the local cluster.