Adds a simple script to run benchmarks for ESQL and collect their
results. The script has a `--test` mode which takes about ten minutes.
Running without `--test` takes about four hours and fifteen minutes.
To speed up `--test` I reworked the "self test" that each benchmark runs
to be optional and disabled in `--test` mode.
Build the jump table (disi) while iterating over SortedNumericDocValues to encode the values, instead of iterating over SortedNumericDocValues separately just to build the jump table.
When index sorting is active, this requires an additional merge sort.
Follow-up from #125403: use the single-shot-time benchmark mode,
which makes more sense for benchmarking force merge. The sample-time mode would invoke the benchmark method many times, and after the first invocation a force merge is a no-op.
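For reference, the mode is just a JMH annotation; a minimal sketch (the class name and setup here are illustrative, not the real benchmark):
```
import java.io.IOException;
import java.util.concurrent.TimeUnit;

import org.apache.lucene.index.IndexWriter;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// SingleShotTime measures one invocation per iteration, so the force
// merge really runs each time. Sample-time mode would call the method
// repeatedly, and every call after the first is a no-op.
@BenchmarkMode(Mode.SingleShotTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Benchmark)
public class ForceMergeBenchmark { // illustrative name
    IndexWriter indexWriter; // opened in setup code, omitted here

    @Benchmark
    public void forceMerge() throws IOException {
        indexWriter.forceMerge(1);
    }
}
```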
This removes all non-test usage of `Metadata.Builder.put(IndexMetadata.Builder)`
and replaces it with appropriate calls to the equivalent method on `ProjectMetadata.Builder`.
In most cases this _does not_ make the code project-aware, but it does
reduce the number of deprecated methods in use.
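For illustration, the mechanical shape of the change; the `getProject()` bridge used here is an assumption on my part, not necessarily the real accessor:
```
// before: deprecated and not project-aware
Metadata.Builder metadataBuilder = Metadata.builder(currentMetadata);
metadataBuilder.put(indexMetadataBuilder); // Metadata.Builder.put(IndexMetadata.Builder), deprecated

// after: the equivalent call on ProjectMetadata.Builder; getProject()
// is a hypothetical way to reach it and may differ from the real API
metadataBuilder.getProject().put(indexMetadataBuilder);
```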
The doc values codec iterates a few times over the doc values instance that needs to be written to disk. When merging with index sorting enabled, this is much more expensive, because each time the doc values instance is iterated a merge sort is performed (to produce the doc ids of the new segment in index-sort order).
There are several reasons why the doc values instance is iterated multiple times:
* To compute stats (number of values, number of docs with a value) required for writing values to disk.
* To write the bitset that indicates which documents have a value (indexed disi, jump table).
* To write the actual values to disk.
* To write the addresses to disk (in case docs have multiple values).
This applies to numeric doc values, but also to the ordinals of sorted (set) doc values.
This PR addresses the first reason the doc values instance needs to be iterated; a sketch of that stats pass follows below. The optimization only kicks in when merging, and only when the segments being merged also use the es87 doc values format, the codec version is the same, and there are no deletes. Note that this optimized merge is behind a feature flag for now.
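To make the cost concrete, the stats pass alone looks roughly like this (a sketch against Lucene's iterator API, not the actual codec code); each later pass asks for a fresh iterator, and under a sorted merge every fresh iterator repeats the merge sort:
```
import java.io.IOException;

import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.SortedNumericDocValues;
import org.apache.lucene.search.DocIdSetIterator;

// One full iteration just to count docs and values before anything is
// written. The disi/jump table and the values each need another pass.
static long[] computeStats(LeafReader leafReader, String field) throws IOException {
    SortedNumericDocValues values = leafReader.getSortedNumericDocValues(field);
    int numDocsWithField = 0;
    long numValues = 0;
    for (int doc = values.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = values.nextDoc()) {
        numDocsWithField++;
        numValues += values.docValueCount();
    }
    return new long[] { numDocsWithField, numValues };
}
```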
Speed up the TO_IP method by converting directly from utf-8 encoded
strings to the ip encoding. Previously we did:
```
utf-8 -> String -> InetAddress -> ip encoding
```
In a step towards solving #125460 this creates three IP parsing
functions: one that rejects leading zeros, one that interprets leading
zeros as decimal numbers, and one that interprets leading zeros as octal
numbers. IPs have historically been parsed in all three of those ways.
This plugs the "rejects leading zeros" parser into `TO_IP` because
that's the behavior it had before.
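A minimal sketch of the strict variant, working directly on the utf-8 bytes (names and error handling are illustrative; the real `TO_IP` produces the 16-byte ipv6-mapped encoding rather than 4 raw bytes):
```
// Parse a dotted-quad IPv4 straight from utf-8 bytes, rejecting leading
// zeros. Returns the 4 address bytes, or null if the input is invalid.
static byte[] parseIpv4RejectingLeadingZeros(byte[] utf8, int offset, int length) {
    byte[] out = new byte[4];
    int octet = 0, digits = 0, outIdx = 0;
    for (int i = offset; i < offset + length; i++) {
        byte b = utf8[i];
        if (b == '.') {
            if (digits == 0 || outIdx == 3) return null; // empty octet or too many dots
            out[outIdx++] = (byte) octet;
            octet = 0;
            digits = 0;
        } else if (b >= '0' && b <= '9') {
            if (digits > 0 && octet == 0) return null;   // "01" style leading zero
            octet = octet * 10 + (b - '0');
            if (octet > 255 || ++digits > 3) return null;
        } else {
            return null;                                 // not a digit or a dot
        }
    }
    if (digits == 0 || outIdx != 3) return null;         // trailing dot or too few octets
    out[outIdx] = (byte) octet;
    return out;
}
```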
Here is the performance:
```
Benchmark Score Error Units
leadingZerosAreDecimal 14.007 ± 0.093 ns/op
leadingZerosAreOctal 15.020 ± 0.373 ns/op
leadingZerosRejected 14.176 ± 3.861 ns/op
original 32.950 ± 1.062 ns/op
```
So this takes less than half the time it used to.
Make the conversion functions that process `BytesRef`s into `BytesRef`s
keep the `OrdinalBytesRefVector`s when processing. Let's use `TO_LOWER`
as an example. First, the performance numbers:
```
(operation)       Score   Error ->  Score  Error  Units
to_lower 30.662 ± 6.163 -> 30.048 ± 0.479 ns/op
to_lower_ords 30.773 ± 0.370 -> 0.025 ± 0.001 ns/op
to_upper 33.552 ± 0.529 -> 35.775 ± 1.799 ns/op
to_upper_ords 35.791 ± 0.658 -> 0.027 ± 0.001 ns/op
```
The test has 8192 positions containing alternating `foo` and `bar`.
Running `TO_LOWER` via ordinals is super duper fast: no longer
`O(positions)`, but `O(unique_values)`.
Let's paint some pictures! `OrdinalBytesRefVector` is a lookup table.
Like this:
```
+-------+----------+
| bytes | ordinals |
| ----- | -------- |
| FOO   | 0        |
| BAR   | 1        |
| BAZ   | 2        |
+-------+ 1        |
        | 1        |
        | 0        |
        +----------+
```
That lookup table is one block. When you read it you look up the
`ordinal` and match it to the `bytes`. Previously `TO_LOWER` would
process each value one at a time and make:
```
bytes
-----
foo
bar
baz
bar
bar
foo
```
So it'd run `TO_LOWER` once per position and build a plain table with no
lookup. With this change `TO_LOWER` will now make:
```
+-------+----------+
| bytes | ordinals |
| ----- | -------- |
| foo   | 0        |
| bar   | 1        |
| baz   | 2        |
+-------+ 1        |
        | 1        |
        | 0        |
        +----------+
```
We don't even have to copy the `ordinals` - we can reuse those from the
input and just bump the reference count. That's why this goes from
`O(positions)` to `O(unique_values)`.
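In code, the shape of the change is roughly this (a sketch; the accessor names on `OrdinalBytesRefVector` are assumptions):
```
// Run the function over the dictionary only, then reuse the ordinals.
// getDictionaryVector()/getOrdinalsVector() are illustrative names.
OrdinalBytesRefVector evalToLower(OrdinalBytesRefVector input) {
    BytesRefVector lowered = toLowerAll(input.getDictionaryVector()); // O(unique_values) work
    IntVector ordinals = input.getOrdinalsVector();
    ordinals.incRef();                       // reuse the input's ordinals, no copy
    return new OrdinalBytesRefVector(ordinals, lowered);
}
```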
Fix the benchmark for `EVAL`, which was failing because of a strange
logging error. The benchmarks really didn't want to run when we used
commons-logging. That's fine - we can use the ES logging facade thing. I
also added a test to the benchmarks which should run the self-tests for
`EVAL` on `gradle check`.
Speeds up the VALUES agg when collecting from many buckets.
Specifically, this speeds up the algorithm used to `finish` the
aggregation. Most specifically, this makes the algorithm more tolerant
of large numbers of groups being collected. The old algorithm was
`O(n^2)` in the number of groups; the new one is `O(n)`:
```
(groups)        before          ->  after
1 219.683 ± 1.069 -> 223.477 ± 1.990 ms/op
1000 426.323 ± 75.963 -> 463.670 ± 7.275 ms/op
100000 36690.871 ± 4656.350 -> 7800.332 ± 2775.869 ms/op
200000 89422.113 ± 2972.606 -> 21920.288 ± 3427.962 ms/op
400000 timed out at 10 minutes -> 40051.524 ± 2011.706 ms/op
```
The `1` group case was not changed at all; that difference is just noise
in the measurement. The small bump in the `1000` case is real, but
almost certainly worth it. The huge drop in the `100000` case is quite
real.
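The shape of that fix, in a deliberately simplified sketch (not the actual VALUES code; `allValues`, `emit`, and the value layout are placeholders), is to bucket the collected values by group in one pass instead of scanning everything once per group:
```
// Old shape, O(n * groups): scan every collected value once per group.
for (int group = 0; group < groupCount; group++) {
    for (long[] v : allValues) {           // v[0] = group, v[1] = value
        if (v[0] == group) {
            emit(group, v[1]);
        }
    }
}

// New shape, O(n): one pass bucketing values by group, then emit per bucket.
List<List<Long>> buckets = new ArrayList<>(groupCount);
for (int group = 0; group < groupCount; group++) {
    buckets.add(new ArrayList<>());
}
for (long[] v : allValues) {
    buckets.get((int) v[0]).add(v[1]);
}
```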
To avoid having AggregateMapper find aggregators by name via reflection, I'm making some changes:
- Make the suppliers have methods returning the intermediate states (see the sketch below)
- To allow this, the supplier constructors no longer receive the channels as params. Instead, their methods ask for them
- Most changes in this PR are a consequence of this
- After those changes, I'm leaving AggregateMapper still there, as it still converts AggregateFunctions to their NamedExpressions
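Roughly the direction of the new supplier shape (a sketch, not the exact interface):
```
import java.util.List;

// Hypothetical post-change supplier: channels move off the constructor
// and onto the methods, and intermediate state is exposed directly.
public interface AggregatorFunctionSupplier {
    List<IntermediateStateDesc> nonGroupingIntermediateStateDesc();
    List<IntermediateStateDesc> groupingIntermediateStateDesc();
    AggregatorFunction aggregator(DriverContext context, List<Integer> channels);
    GroupingAggregatorFunction groupingAggregator(DriverContext context, List<Integer> channels);
}
```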
```
                     before              after
(operation)         Score   Error      Score   Error  Units
coalesce_2_noop     75.949 ± 3.961 ->   0.010 ± 0.001 ns/op  99.9%
coalesce_2_eager    99.299 ± 6.959 ->   4.292 ± 0.227 ns/op  95.7%
coalesce_2_lazy    113.118 ± 5.747 ->  26.746 ± 0.954 ns/op  76.4%
```
We tend to advise folks that "COALESCE is faster than CASE", but, as of
8.16.0/https://github.com/elastic/elasticsearch/pull/112295, that wasn't true. I was working with someone a few
days ago to port a scripted_metric aggregation to ESQL and we saw
COALESCE taking ~60% of the time. That won't do.
The trouble is that CASE and COALESCE have to be *lazy*, meaning that
operations like:
```
COALESCE(a, 1 / b)
```
should never emit a warning if `a` is not `null`, even if `b` is `0`. In
8.16/https://github.com/elastic/elasticsearch/pull/112295 CASE grew an optimization where it could operate non-lazily
if it was flagged as "safe". This brings a similar optimization to
COALESCE; see it above as `coalesce_2_eager`, a 95.7% improvement.
It also brings an arguably more important optimization: entire-block
execution for COALESCE. The short version is that, if the first
parameter of COALESCE returns no nulls, we can return it without doing
anything lazily. There are a few more cases, but the upshot is that
COALESCE is pretty much *free* in cases where long strings of results
are `null` or not `null`. That's the `coalesce_2_noop` line.
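That fast path looks roughly like this (a sketch, not the real evaluator; `evalLazily` and the method shape are placeholders):
```
import java.util.function.Supplier;

// If the first argument came back as a vector there are no nulls at any
// position, so it is the answer; nothing lazy needs to run at all.
Block evalCoalesce(Block first, Supplier<Block> rest) {
    if (first.asVector() != null) { // vector == no nulls anywhere
        first.incRef();             // reuse the block, don't copy it
        return first;               // the coalesce_2_noop case
    }
    return evalLazily(first, rest); // eager or position-at-a-time paths
}
```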
Finally, when there are mixed null and non-null values we were using a
single builder with some fairly inefficient paths. This specializes them
per type and skips some slow null-checking where possible. That's the
`coalesce_2_lazy` result, a more modest 76.4%.
NOTE: These are percentage improvements on COALESCE itself, or COALESCE with some low-overhead operators like `+`. If COALESCE isn't taking a *ton* of time in your query, don't get particularly excited about this. It's fun though.
Closes #119953
* Exhaustive testParseFractionalNumber
* Refactor: encapsulate ByteSizeUnit constructor
* Refactor: store size in bytes
* Support up to 2 decimals in parsed ByteSizeValue
* Fix test for rounding up with no warnings
* ByteSizeUnit transport changes
* Update docs/changelog/120142.yaml
* Changelog details and impact
* Fix change log breaking.area
* Address PR comments
Closes https://github.com/elastic/elasticsearch/issues/119969
- Rename "pages_in/out" to "pages_received/emitted" to standardize the names across most operators
- **There are still "pages_processed" operators**; maybe it would make sense to also rename those?
- Add "pages_received/emitted" to the TopN operator, which was missing them
- Add "rows_received/emitted" to most operators
- Add a test to ensure all operators with a status provide those metrics
`fold` can be surprisingly heavy! The maximally efficient/paranoid thing
would be to fold each expression one time, in the constant folding rule,
and then store the result as a `Literal`. But this PR doesn't do that
because it's a big change. Instead, it creates the infrastructure for
tracking memory usage while folding and plugs it into as many places as
possible. That's not perfect, but it's better.
This infrastructure limits the allocations of fold much like the
`CircuitBreaker` infrastructure we use for values, but it's different
in a critical way: you don't manually free any of the values. This is
important because the plan itself isn't `Releasable`, which is required
when using a real `CircuitBreaker`. We could have tried to make the plan
releasable, but that'd be a huge change.
Right now there's a single limit of 5% of heap per query. We create the
limit at the start of query planning and use it throughout planning.
There are about 40 places that don't yet use it. We should get them
plugged in as quickly as we can manage. After that, we should look at
the maximally efficient/paranoid thing mentioned above: folding each
expression once during constant folding. That's an even bigger change,
one I'm not equipped to make on my own.
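A minimal sketch of the shape of that infrastructure, assuming a simple byte-counting context (the class name and exception type are illustrative); note there is deliberately no release call:
```
// Tracks fold allocations against a per-query budget. There's no free():
// the plan isn't Releasable, so bytes are never handed back.
class FoldBudget {
    private long remainingBytes = Runtime.getRuntime().maxMemory() / 20; // 5% of heap

    void trackAllocation(long bytes) {
        remainingBytes -= bytes;
        if (remainingBytes < 0) {
            throw new IllegalStateException("folding used more than 5% of heap in this query");
        }
    }
}
```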
This change introduces optional source filtering directly within source loaders (both synthetic and stored).
The main benefit is seen in synthetic source loaders, as synthetic fields are stored independently.
By filtering while loading the synthetic source, generating the source becomes linear in the number of fields that match the filter.
This update also modifies the get document API to apply source filters earlier, directly through the source loader.
The search API, however, is not affected by this change, since the loaded source is still used by other features (e.g., highlighting, fields, nested hits),
and source filtering is always applied as the final step.
A follow-up will be required to ensure careful handling of all search-related scenarios.
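Conceptually the loader-side filtering looks like this (an illustrative sketch, not the actual loader code; `FieldLayer` and the method names are made up):
```
// Synthetic source is assembled field by field, so the filter can skip
// whole fields up front; the cost is linear in the fields that match.
for (FieldLayer field : syntheticFieldLoaders) {
    if (filter == null || filter.matches(field.fullPath())) {
        field.write(builder); // only matching fields are materialized
    }
}
```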
This change loads all the modules and creates the module layers for plugins prior to entitlement
checking during the 2nd phase of bootstrap initialization. This will allow us to know what modules exist
for both validation and checking prior to actually loading any plugin classes (in a follow up change).
There are now two classes:
- `PluginsLoader`, which does the module loading and layer creation
- `PluginsService`, which uses a `PluginsLoader` to create the main plugin classes and start the plugins
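In use the split looks roughly like this (a hedged sketch; the constructor and factory shapes are assumptions, not the real signatures):
```
// Phase 1: resolve modules and build layers before any plugin class
// loads, so entitlement validation can see every module.
PluginsLoader pluginsLoader = PluginsLoader.createPluginsLoader(modulesDir, pluginsDir);

// Phase 2: instantiate and start the plugins using the prepared layers.
PluginsService pluginsService = new PluginsService(settings, configPath, pluginsLoader);
```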
Static fields don't do well in Gradle with configuration cache enabled.
- Use buildParams extension in build scripts
- Keep BuildParams.ci for now for easy serverless migration
- Tweak testing doc
Noticed during a code review that added yet another one of these:
we have quite a few instances of duplicate noop implementations,
so let's make tests a little less verbose here.
Technically the constant is test-only, but it felt right to just leave it
on the interface.