* New L2 normalizer added (sketched below)
* L2 score normalizer is registered
* Test case added to the YAML
* Documentation added
* Resolved checkstyle issues
* Update docs/changelog/128504.yaml
* Update docs/reference/elasticsearch/rest-apis/retrievers.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Score 0 test case added to check for corner cases
* Edited the markdown doc description
* Pruned the comment
* Renamed the variable
* Added comment to the class
* Unit tests added
* Spotless and checkstyle fixed
* Fixed build failure
* Fixed the forbidden test
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
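For context, a minimal sketch of what L2 score normalization conventionally computes (the standard definition, not lifted from this implementation): each score is divided by the L2 norm of all scores in the result set.
```
s_i' = \frac{s_i}{\sqrt{\sum_j s_j^2}}
```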
Begins adding support for running "tagged queries" to the compute
engine. Here, it's just the `LuceneSourceOperator` because that's
useful and contained.
Example time! Say you are running:
```
FROM foo
| STATS MAX(v) BY ROUND_TO(g, 0, 100, 1000, 100000)
```
It's *often* faster to run this as four queries:
* The docs that round to `0`
* The docs that round to `100`
* The docs that round to `1000`
* The docs that round to `100000`
This creates an ESQL operator that can run these queries, one after the
other and attach those tags.
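As a sketch, assuming `g` is non-negative, the four tagged queries behave like these plain filters (written as ESQL `WHERE` clauses standing in for the pushed-down Lucene queries):
```
FROM foo | WHERE g < 100                  | STATS MAX(v)  // tagged 0
FROM foo | WHERE g >= 100 AND g < 1000    | STATS MAX(v)  // tagged 100
FROM foo | WHERE g >= 1000 AND g < 100000 | STATS MAX(v)  // tagged 1000
FROM foo | WHERE g >= 100000              | STATS MAX(v)  // tagged 100000
```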
Aggs uses this trick and it's *way* faster when it can push down count
queries, but it's still faster even when it has to push doc-loading
queries. This implementation in `LuceneSourceOperator` is quite similar
to the doc-loading version in _search.
I don't have performance measurements yet because I haven't plugged this
into the language. In _search we call this `filter-by-filter` and enable
it when each group averages more than 5000 documents and when there
isn't a `_doc_count` field; outside those cases it's faster not to push.
I expect we'll be pretty similar.
Creates a `ROUND_TO` function that rounds its input to one of the
provided values. Like so:
```
ROUND_TO(v, 0, 5000, 10000, 20000, 40000, 100000)
v | ROUND_TO
0 | 0
100 | 0
6000 | 5000
45001 | 40000
999999 | 100000
```
For some sequences of numbers you could do this with the `/` operator,
but for arbitrary sequences of numbers you need `CASE`, which is quite
slow. And hard to read!
Rewriting the example above would look like:
```
CASE (
v < 5000, 0,
v < 10000, 5000,
v < 20000, 10000,
v < 40000, 20000,
v < 100000, 40000,
100000
)
```
Even better, this is *fast*:
```
(operation) Mode Cnt Score Error Units
round_to_4_via_case avgt 7 138.124 ± 0.738 ns/op
round_to_4 avgt 7 0.805 ± 0.011 ns/op
round_to_3 avgt 7 0.739 ± 0.011 ns/op
round_to_2 avgt 7 0.651 ± 0.009 ns/op
date_trunc avgt 7 2.425 ± 0.018 ns/op
```
I've included a comparison to `DATE_TRUNC` above because we should be
able to rewrite `DATE_TRUNC` into `ROUND_TO` when we know the date range
of the index. This doesn't do it now, but it should be possible.
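For example, a hypothetical rewrite, assuming the planner learned that the index only covers three known days (`ts` and the dates here are invented):
```
STATS MAX(v) BY DATE_TRUNC(1 day, ts)
// could become, given the known range:
STATS MAX(v) BY ROUND_TO(ts, TO_DATETIME("2025-05-01"), TO_DATETIME("2025-05-02"), TO_DATETIME("2025-05-03"))
```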
Documents that the VALUES aggregate function returns unique values
and points folks to the TOP aggregate function if they want to keep
dupes.
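A hedged illustration of the difference (index and field names invented):
```
FROM logs | STATS VALUES(code)           // each distinct code, once
FROM logs | STATS TOP(code, 100, "desc") // up to 100 codes, dupes kept
```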
Closes #128091
---------
Co-authored-by: Liam Thompson <32779855+leemthompo@users.noreply.github.com>
Types that parse arrays directly should not need to store values in `_ignored_source` if `synthetic_source_keep=arrays`. Since they have custom handling of arrays, storing in `_ignored_source` provides no benefit when there are multiple values of the type.
Initial Kibana definition files for commands, currently only providing license information. We leave the license field out if it works with BASIC, so the only two files that actually have a license line are:
* CHANGE_POINT: PLATINUM
* RRF: ENTERPRISE
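For illustration, a hedged sketch of the shape such a definition file entry might take (the layout is guessed; only the license value comes from this change):
```
{
  "name": "change_point",
  "license": "PLATINUM"
}
```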
Output function signature license requirements to Kibana definition files, and also test that this matches the actual licensing behaviour of the functions.
ES|QL functions that enforce license checks do so with the `LicenseAware` interface. This does not expose the function's license level, only whether the currently active license is sufficient for that function and its current signature (the data types passed in as fields). Rather than add to this interface, we've made the license-level information test-only. This means that if a function implements `LicenseAware`, it also needs to add a method to its test class specifying the license level for the signature being called. All functions are tested for compliance, so failing to add this results in test failure. And if the test license level does not match the enforced license, that too causes a failure.
Apache Lucene 10.2 exposes a new search strategy for executing filtered searches over HNSW graphs.
This PR switches to that strategy by default, as it generally provides a much better recall/latency Pareto frontier than our regular HNSW fanout search.
Additionally, a new tech-preview setting is provided to potentially revert to the old fanout behavior if issues arise.
This erroneously claimed that the example used a `drop` processor
(which drops whole documents) when it actually uses a `remove`
processor (which removes fields).
This speeds up loading from stored fields by opting more blocks into the
"sequential" strategy. This really kicks in when loading stored fields
like `text`, and when you need less than 100% of the documents but more
than, say, 10%. It's most useful when you need something like 99.9% of
the documents. Here are the perf numbers:
```
%100.0 {"took": 403 -> 401,"documents_found":1000000}
%099.9 {"took":3990 -> 436,"documents_found": 999000}
%099.0 {"took":4069 -> 440,"documents_found": 990000}
%090.0 {"took":3468 -> 421,"documents_found": 900000}
%030.0 {"took":1213 -> 152,"documents_found": 300000}
%020.0 {"took": 766 -> 104,"documents_found": 200000}
%010.0 {"took": 397 -> 55,"documents_found": 100000}
%009.0 {"took": 352 -> 375,"documents_found": 90000}
%008.0 {"took": 304 -> 317,"documents_found": 80000}
%007.0 {"took": 273 -> 287,"documents_found": 70000}
%005.0 {"took": 199 -> 204,"documents_found": 50000}
%001.0 {"took": 46 -> 46,"documents_found": 10000}
```
Let's explain this with an example. First, jump to `main` and load a
million documents:
```
rm -f /tmp/bulk
# Build a bulk body containing 1000 small docs
for a in {1..1000}; do
echo '{"index":{}}' >> /tmp/bulk
echo '{"text":"text '$(printf %04d $a)'"}' >> /tmp/bulk
done
# Drop any old index, then send the bulk body 1000 times - a million docs total
curl -s -uelastic:password -HContent-Type:application/json -XDELETE localhost:9200/test
for a in {1..1000}; do
echo -n $a:
curl -s -uelastic:password -HContent-Type:application/json -XPOST localhost:9200/test/_bulk?pretty --data-binary @/tmp/bulk | grep errors
done
# Merge to a single segment and refresh so everything is searchable
curl -s -uelastic:password -HContent-Type:application/json -XPOST localhost:9200/test/_forcemerge?max_num_segments=1
curl -s -uelastic:password -HContent-Type:application/json -XPOST localhost:9200/test/_refresh
echo
```
Now query them all. Run this a few times until it's stable:
```
echo -n "%100.0 "
curl -s -uelastic:password -HContent-Type:application/json -XPOST 'localhost:9200/_query?pretty' -d'{
"query": "FROM test | STATS SUM(LENGTH(text))",
"pragma": {
"data_partitioning": "shard"
}
}' | jq -c '{took, documents_found}'
```
Now fetch 99.9% of documents:
```
echo -n "%099.9 "
curl -s -uelastic:password -HContent-Type:application/json -XPOST 'localhost:9200/_query?pretty' -d'{
"query": "FROM test | WHERE NOT text.keyword IN (\"text 0998\") | STATS SUM(LENGTH(text))",
"pragma": {
"data_partitioning": "shard"
}
}' | jq -c '{took, documents_found}'
```
This should spit out something like:
```
%100.0 { "took":403,"documents_found":1000000}
%099.9 {"took":4098, "documents_found":999000}
```
We're loading *fewer* documents but it's slower! What in the world?!
If you dig into the profile you'll see that it's value loading:
```
$ curl -s -uelastic:password -HContent-Type:application/json -XPOST 'localhost:9200/_query?pretty' -d'{
"query": "FROM test | STATS SUM(LENGTH(text))",
"pragma": {
"data_partitioning": "shard"
},
"profile": true
}' | jq '.profile.drivers[].operators[] | select(.operator | contains("ValuesSourceReaderOperator"))'
{
"operator": "ValuesSourceReaderOperator[fields = [text]]",
"status": {
"readers_built": {
"stored_fields[requires_source:true, fields:0, sequential: true]": 222,
"text:column_at_a_time:null": 222,
"text:row_stride:BlockSourceReader.Bytes": 1
},
"values_loaded": 1000000,
"process_nanos": 370687157,
"pages_processed": 222,
"rows_received": 1000000,
"rows_emitted": 1000000
}
}
$ curl -s -uelastic:password -HContent-Type:application/json -XPOST 'localhost:9200/_query?pretty' -d'{
"query": "FROM test | WHERE NOT text.keyword IN (\"text 0998\") | STATS SUM(LENGTH(text))",
"pragma": {
"data_partitioning": "shard"
},
"profile": true
}' | jq '.profile.drivers[].operators[] | select(.operator | contains("ValuesSourceReaderOperator"))'
{
"operator": "ValuesSourceReaderOperator[fields = [text]]",
"status": {
"readers_built": {
"stored_fields[requires_source:true, fields:0, sequential: false]": 222,
"text:column_at_a_time:null": 222,
"text:row_stride:BlockSourceReader.Bytes": 1
},
"values_loaded": 999000,
"process_nanos": 3965803793,
"pages_processed": 222,
"rows_received": 999000,
"rows_emitted": 999000
}
}
```
It jumps from 370ms to almost four seconds! Loading fewer values! The
second big difference is in the `stored_fields` marker. In the second one
it's `sequential: false` and in the first `sequential: true`.
`sequential: true` uses Lucene's "merge" stored fields reader instead of
the default one. It's much more optimized at decoding sequences of
documents.
Previously we only enabled this reader when loading compact sequences of
documents - when the entire block looks like
```
1, 2, 3, 4, 5, ... 1230, 1231
```
If there were any gaps we wouldn't enable it. That was a very
conservative choice we made long ago without doing any experiments: we
knew it was faster without any gaps, but not otherwise. It turns out
it's a lot faster in a lot more cases. I've measured it as faster with
99% gaps, at least on simple documents. I'm a bit worried that this is
too aggressive, so I've made it configurable and set the default to use
the "merge" loader with 10% gaps. So we'd use the merge loader with a
block like:
```
1, 11, 21, 31, ..., 1231, 1241
```
This does two things:
- It describes what the `timezone` option actually does. The existing wording is misleading.
- It recommends avoiding short abbreviations for timezones such as `PST`. This has come up at least twice recently.
* [DOCS][9.0] Improve ESQL reference docs IA
- reorganized ES|QL reference documentation from flat list to logical hierarchy
- created three main sections: syntax reference, special fields, advanced operations
- renamed pages with more consistent and task-oriented titles
- aligned navigation titles with page content
- improved introductory text for each section
- used parallel phrasing for similar concepts
- clarified the relationship between reference docs and conceptual docs
Co-authored-by: Alexander Spies <alexander.spies@elastic.co>
- I trimmed the KEEP query in my final iteration in https://github.com/elastic/elasticsearch/pull/127215 but neglected to update the query itself, only the response. This fixes that so the query matches the response.
- 🚘 I also updated the table response to match other ESQL response tables
* [DOCS][ESQL] Cleanup and cross-reference LOOKUP JOIN reference and landing pages
**lookup-join.md (syntax reference)**:
- removed tip formatting for simpler direct link to landing page
- improved parameter formatting and descriptions
- fixed template variable from `{esql}` to `{{esql}}`
**esql-lookup-join.md (landing page)**:
- added "compare with enrich" section header
- simplified "how the command works" with clearer parameter explanation
- added code example in how it works section
- improved image alt text for accessibility
- organized example section with better context and SQL comparison
- added dropdown for sample tables to reduce visual clutter
- added "query" subheading for clearer organization
- included reference to additional examples in command reference
- removed excessive whitespace
* Improve example, add setup code
- replaced abstract employee/language example with security monitoring use case
- added setup instructions for creating test indices
- included sample data loading via bulk API
- new practical query example joining firewall logs with threat data (sketched below)
- simplified results output showing threat detection scenario
- added note about left-join behavior
- improved code comments and structure
- added required `index.mode: lookup` setting info
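A hedged sketch of the kind of query that example describes (index, field, and value names are invented; the lookup index must be created with `index.mode: lookup`):
```
FROM firewall_logs
| LOOKUP JOIN threat_list ON source.ip
| WHERE threat_level IS NOT NULL
| KEEP @timestamp, source.ip, threat_level
```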