This implements the `MV_DEDUPE` function that removes duplicates from
multivalued fields. It wasn't strictly in our list of things we need in
the first release, but I'm grabbing this now because I realized I needed
very similar infrastructure when I was trying to build grouping by
multivalued fields. In fact, I realized that I could use our
StringTemplate code generation to generate most of the complex parts.
This generates the actual body of `MV_DEDUPE`'s implementation and the
body of the `Block`-accepting `BlockHash` implementations. It'll be
useful in the final step for grouping by multivalued fields.
I also got pretty curious about whether the `O(n^2)` or `O(n*log(n))`
algorithm for deduplication is faster. I'd been assuming that for all
reasonably sized inputs the `O(n^2)` bubble-sort-like selection
algorithm was faster. So I measured it. And it's mostly true: even for
`BytesRef`, the selection algorithm is faster up to about a dozen
entries, thanks to its lower overhead. Anyway, to measure it I had to
implement the copy-and-sort `O(n*log(n))` algorithm. So while I was
there I plugged it in and selected it in cases where the number of
inputs is large and the selection algorithm is likely to be slower.
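To make the trade-off concrete, here's a minimal sketch of the two
strategies for `long` values. This is illustrative only; the real
implementations are generated by the StringTemplate templates and work
on `Block`s:
```java
import java.util.Arrays;

class MvDedupeSketch {
    // O(n^2) "selection" style: for each value, linearly scan what's
    // already been copied and only append it if it hasn't been seen.
    // Low overhead, no sort; wins for small inputs.
    static long[] dedupeQuadratic(long[] values) {
        long[] out = new long[values.length];
        int size = 0;
        outer: for (long v : values) {
            for (int i = 0; i < size; i++) {
                if (out[i] == v) {
                    continue outer;
                }
            }
            out[size++] = v;
        }
        return Arrays.copyOf(out, size);
    }

    // O(n*log(n)) copy-and-sort: sort a copy, then collapse adjacent
    // duplicates. Pays for the sort up front but scales to large inputs.
    static long[] dedupeCopyAndSort(long[] values) {
        long[] copy = values.clone();
        Arrays.sort(copy);
        int size = 0;
        for (long v : copy) {
            if (size == 0 || copy[size - 1] != v) {
                copy[size++] = v;
            }
        }
        return Arrays.copyOf(copy, size);
    }
}
```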
This adds IndexVersion to cluster state, alongside node version. This is needed so IndexVersion can be tracked across the cluster, allowing min/max supported index versions to be determined.
Convert Avg into a SurrogateExpression and introduce dedicated rule
for handling surrogate AggregateFunction
Remove Avg implementation
Use sum instead of avg in some planning test
Add dataType case for Div operator
Relates ESQL-747
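The surrogate rewrite itself boils down to planning `AVG(x)` as
`SUM(x) / COUNT(x)` (hence the new `Div` data type case). A hedged
sketch of the idea, with stand-in expression nodes rather than the
actual QL tree classes:
```java
class AvgSurrogateSketch {
    // Stand-in expression nodes, just to make the rewrite shape concrete;
    // the real rule works on the QL expression tree.
    sealed interface Expr permits Field, Sum, Count, Div, Avg {}
    record Field(String name) implements Expr {}
    record Sum(Expr field) implements Expr {}
    record Count(Expr field) implements Expr {}
    record Div(Expr left, Expr right) implements Expr {}
    record Avg(Expr field) implements Expr {}

    // The surrogate rule: AVG(x) is planned as SUM(x) / COUNT(x), so no
    // dedicated Avg aggregator implementation is needed.
    static Expr surrogate(Avg avg) {
        return new Div(new Sum(avg.field()), new Count(avg.field()));
    }
}
```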
In SQL `AVG(foo)` is `null` if there are no values for `foo`. Same for
`MIN(foo)` and `MAX(foo)`. In fact, the only functions that don't return
`null` on empty inputs seem to be `COUNT` and `COUNT(DISTINCT)`.
This flips our non-grouping aggs to have the same behavior because it's
both more expected and fits better with other things we're building.
This *is* different from Elasticsearch's aggs. But it's different in a
good way. It also lines up more closely with the way that our grouping
aggs work.
This also revives the broken `AggregatorBenchmark` so that I could get
performance figures for this change. And it's within the margin of
error:
```
(blockType) (grouping) (op) Mode Cnt Before After Units
vector_longs none sum avgt 7 0.440 ± 0.017 0.397 ± 0.003 ns/op
half_null_longs none sum avgt 7 5.785 ± 0.022 5.861 ± 0.134 ns/op
```
I expected a small slowdown on the `half_null_longs` line and I do see
one, but it's within the margin of error. Either way, that's not the
line we've optimized nearly as heavily. We'll loop back around to it
eventually.
Closes ESQL-1297
* Initial import for TDigest forking.
* Fix MedianTest.
More work needed for TDigestPercentile*Tests and the TDigestTest (and
the rest of the tests) in the tdigest lib to pass.
* Fix Dist.
* Fix AVLTreeDigest.quantile to match Dist for uniform centroids.
* Update docs/changelog/96086.yaml
* Fix `MergingDigest.quantile` to match `Dist` on uniform distribution.
* Add merging to TDigestState.hashCode and .equals.
Remove wrong asserts from tests and MergingDigest.
* Fix style violations for tdigest library.
* Fix typo.
* Fix more style violations.
* Fix more style violations.
* Fix remaining style violations in tdigest library.
* Update results in docs based on the forked tdigest.
* Fix YAML tests in aggs module.
* Fix YAML tests in x-pack/plugin.
* Skip failing V7 compat tests in modules/aggregations.
* Fix TDigest library unittests.
Remove redundant serializing interfaces from the library.
* Remove YAML test versions for older releases.
These tests don't address compatibility issues in mixed cluster tests as
the latter contain a mix of older and newer nodes, so the output depends
on which node is picked as a data node since the forked TDigest library
is not backwards compatible (produces slightly different results).
* Fix test failures in docs and mixed cluster.
* Reduce buffer sizes in MergingDigest to avoid oom.
* Exclude more failing V7 compatibility tests.
* Update results for JdbcCsvSpecIT tests.
* Update results for JdbcDocCsvSpecIT tests.
* Revert unrelated change.
* More test fixes.
* Use version skips instead of blacklisting in mixed cluster tests.
* Switch TDigestState back to AVLTreeDigest.
* Update docs and tests with AVLTreeDigest output.
* Update flaky test.
* Remove dead code, esp around tracking of incoming data.
* Update docs/changelog/96086.yaml
* Delete docs/changelog/96086.yaml
* Remove explicit compression calls.
This was added to prevent concurrency tests from failing, but it leads
to reduced precision. Submit this to see if the concurrency tests are
still failing.
* Revert "Remove explicit compression calls."
This reverts commit 5352c96f65.
* Remove explicit compression calls to MedianAbsoluteDeviation input.
* Add unittests for AVL and merging digest accuracy.
* Fix spotless violations.
* Delete redundant tests and benchmarks.
* Fix spotless violation.
* Use the old implementation of AVLTreeDigest.
The latest library version is 50% slower and less accurate, as verified
by ComparisonTests.
* Update docs with latest percentile results.
* Update docs with latest percentile results.
* Remove repeated compression calls.
* Update more percentile results.
* Use approximate percentile values in integration tests.
This helps with mixed cluster tests, where some of the tests were
blocked.
* Fix expected percentile value in test.
* Revert in-place node updates in AVL tree.
Update quantile calculations between centroids and min/max values to
match v.3.2.
* Add SortingDigest and HybridDigest.
The SortingDigest tracks all samples in an ArrayList that
gets sorted for quantile calculations. This approach
provides perfectly accurate results and is the most
efficient implementation for up to millions of samples,
at the cost of bloated memory footprint.
The HybridDigest uses a SortingDigest for small sample
populations, then switches to a MergingDigest. This
approach combines the best performance and perfectly
accurate results for small sample counts with very good
performance and acceptable accuracy for effectively
unbounded sample counts.
* Remove deps to the 3.2 library.
* Remove unused licenses for tdigest.
* Revert changes for SortingDigest and HybridDigest.
These will be submitted in a follow-up PR for enabling MergingDigest.
* Remove unused Histogram classes and unit tests.
Delete dead and commented out code, make the remaining tests run
reasonably fast. Remove unused annotations, esp. SuppressWarnings.
* Remove Comparison class, not used.
* Revert "Revert changes for SortingDigest and HybridDigest."
This reverts commit 2336b11598.
* Use HybridDigest as default tdigest implementation
Add SortingDigest as a simple structure for percentile calculations that
tracks all data points in a sorted array. This is a fast and perfectly
accurate solution, at the cost of bloated memory allocation.
Add HybridDigest that uses SortingDigest for small sample counts, then
switches to MergingDigest. This approach delivers extreme
performance and accuracy for small populations while scaling
indefinitely and maintaining acceptable performance and accuracy with
constant memory allocation (15kB by default).
Provide knobs to switch back to AVLTreeDigest, either per query or
through ClusterSettings.
* Small fixes.
* Add javadoc and tests.
* Add javadoc and tests.
* Remove special logic for singletons in the boundaries.
While this helps with the case where the digest contains only
singletons (perfect accuracy), it has a major problem
(non-monotonic quantile function) when the first singleton is followed
by a non-singleton centroid. It's preferable to revert to the old
version from 3.2; inaccuracies in a singleton-only digest should be
mitigated by using a sorted array for small sample counts.
* Revert changes to expected values in tests.
This is due to restoring quantile functions to match head.
* Revert changes to expected values in tests.
This is due to restoring quantile functions to match head.
* Tentatively restore percentile rank expected results.
* Use cdf version from 3.2
Update Dist.cdf to use interpolation, use the same cdf
version in AVLTreeDigest and MergingDigest.
* Revert "Tentatively restore percentile rank expected results."
This reverts commit 7718dbba59.
* Revert remaining changes compared to main.
* Revert excluded V7 compat tests.
* Exclude V7 compat tests still failing.
* Exclude V7 compat tests still failing.
* Remove ClusterSettings tentatively.
* Restore bySize function in TDigest and subclasses.
* Update Dist.cdf to match the rest.
Update tests.
* Revert outdated test changes.
* Revert outdated changes.
* Small fixes.
* Update docs/changelog/96794.yaml
* Make HybridDigest the default implementation.
* Update boxplot documentation.
* Restore AVLTreeDigest as the default in TDigestState.
TDigest.createHybridDigest now returns the right type.
The switch in TDigestState will happen in a separate PR
as it requires many test updates.
* Use execution_hint in tdigest spec.
* Fix Dist.cdf for empty digest.
* Pass ClusterSettings through SearchExecutionContext.
* Bump up TransportVersion.
* Bump up TransportVersion for real.
* HybridDigest uses its final implementation during deserialization.
* Restore the right TransportVersion in TDigestState.read
* Add dummy SearchExecutionContext factory for tests.
* Use TDigestExecutionHint instead of strings.
* Remove check for null context.
* Add link to TDigest javadoc.
* Use NodeSettings directly.
* Init executionHint to null, set before using.
* Update docs/changelog/96943.yaml
* Pass initialized executionHint to createEmptyPercentileRanksAggregator.
* Initialize TDigestExecutionHint.SETTING to "DEFAULT".
* Initialize TDigestExecutionHint to null.
* Use readOptionalWriteable/writeOptionalWriteable.
Move test-only SearchExecutionContext method in helper class under
test.
* Bump up TransportVersion.
* Small fixes.
Preparation for aggs to allow for consumption of multiple input
channels, and output of more than one Block.
The salient change can be seen in difference to the AggregatorFunction
and GroupingAggregatorFunction interfaces, e.g.:
```diff
- void addIntermediateInput(Block block);
- Block evaluateIntermediate();
- Block evaluateFinal();
---
+ void addIntermediateInput(Page page);
+ void evaluateIntermediate(Block[] blocks, int offset);
+ void evaluateFinal(Block[] blocks, int offset);
```
`addIntermediateInput` accepts a `Page` (rather than a `Block`), to
allow the aggregator function to consume multiple channels.
The `evaluateIntermediate` and `evaluateFinal` methods accept a block
array and offset, to allow the aggregator function to populate several
array elements.
For now, aggs continue to just use a single input channel and output
just a single block. A follow on change will refactor this.
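For illustration, here's a hedged sketch of what final evaluation of a
single-output agg might look like under the new shape (the
block-building call is illustrative, not the exact generated code):
```java
@Override
public void evaluateFinal(Block[] blocks, int offset) {
    // This agg produces one output block, so it fills exactly the slot
    // it was assigned; a multi-block agg would fill several consecutive
    // slots starting at offset.
    blocks[offset] = LongBlock.newConstantBlockWith(result, 1);
}
```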
This flips the resolution of aggs from a tree of switch statements into
method calls on the function objects. This gives us much more control
over how we construct the aggs, making it much simpler to flow parameters
through the system and easier to make sure that only appropriate aggs
run in the right spot.
This commit changes access to the latest TransportVersion constant to
use a static method instead of a public static field. By encapsulating
the field we will be able to (in a followup) lazily determine what the
latest is, outside of clinit.
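A minimal sketch of the change (names and ids are illustrative, not the
actual class):
```java
public final class TransportVersion {
    // Previously exposed as `public static final TransportVersion CURRENT`;
    // now private, so a follow-up can compute it lazily instead of in clinit.
    private static final TransportVersion CURRENT = new TransportVersion(8_500_000);

    private final int id;

    private TransportVersion(int id) {
        this.id = id;
    }

    // Callers use the accessor instead of reading the field directly.
    public static TransportVersion current() {
        return CURRENT;
    }
}
```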
Lucene has integrated hardware-accelerated vector calculations. Meaning,
calculations like `dot_product` can be much faster when using the
Lucene-defined functions.
When a `dense_vector` is indexed, we already support this. However, when
`index: false` we store float vectors as binary fields in Lucene and
decode them ourselves. Meaning, we don't use the underlying Lucene
structures or functions.
To take advantage of the large performance boost, this PR refactors the
binary vector values in the following way:
- Eagerly decode the binary blobs when iterated
- Call the Lucene defined VectorUtil functions when possible
related to: https://github.com/elastic/elasticsearch/issues/96370
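As a hedged sketch of the decode-then-delegate idea (not the exact PR
code; the byte order shown is illustrative, and `VectorUtil.dotProduct`
is the Lucene routine in question):
```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

import org.apache.lucene.util.VectorUtil;

class DecodedVectorSketch {
    // Eagerly decode the stored binary blob into a float[] once, then
    // reuse Lucene's accelerated routine instead of a hand-rolled loop.
    static float dotProduct(byte[] storedBlob, int dims, float[] queryVector) {
        float[] vector = new float[dims];
        ByteBuffer.wrap(storedBlob).order(ByteOrder.LITTLE_ENDIAN).asFloatBuffer().get(vector);
        return VectorUtil.dotProduct(queryVector, vector);
    }
}
```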
This shows a way we can move the construction of aggregators away from
many switch/case statements into the ESQL functions themselves. This
should give us a bunch more control:
1. Instead of enabling aggregations based on which methods exist, we can enable
them for specific data types. We no longer, for example, have to support
`SUM` for dates.
2. The functions can provide additional context without creating any
sort of general context passing mechanism - we're just using closures.
This makes the `precision` parameter fairly clean to pass down to the
`COUNT_DISTINCT` agg.
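A hedged sketch of the closure idea (the names are illustrative, not
the actual classes):
```java
import java.util.function.Supplier;

class CountDistinctClosureSketch {
    // The function builds its own aggregator supplier, capturing
    // parameters like `precision` in the closure instead of threading
    // them through a general-purpose context object.
    static Supplier<AggregatorFunction> toAggregator(int channel, int precision) {
        return () -> new CountDistinctAggregatorFunction(channel, precision);
    }

    interface AggregatorFunction {}

    record CountDistinctAggregatorFunction(int channel, int precision) implements AggregatorFunction {}
}
```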
A driver-local context that is shared across operators.

Operators in the same driver pipeline are executed in a single-threaded fashion. A driver context has a set of mutating methods that can be used to store and share values across these operators, or even outside the Driver. When the Driver is finished, it finishes the context. Finishing the context effectively takes a snapshot of the driver context values so that they can be exposed outside the Driver. The net result is that the driver context can be mutated freely, without contention, by the thread executing the pipeline of operators until it is finished. The context must be finished by the thread running the Driver, when the Driver is finished.

Releasables can be added to and removed from the context by operators in the same driver pipeline. This allows operators to "transfer ownership" of a shared resource across operators (and even across Drivers), while ensuring that the resource can be correctly released when no longer needed.

Currently the context only supports releasables, but additional driver-local state can be added later, like, say, warnings from the operators.
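A minimal sketch of the lifecycle described above (the real class has
more state; `Releasable` here is a stand-in for the Elasticsearch
interface):
```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

class DriverContextSketch {
    private final Set<Releasable> releasables = Collections.newSetFromMap(new IdentityHashMap<>());
    private Set<Releasable> snapshot; // non-null once finished

    // Mutating methods: only called by the single thread running the pipeline.
    void addReleasable(Releasable r) {
        assert snapshot == null : "already finished";
        releasables.add(r);
    }

    boolean removeReleasable(Releasable r) {
        return releasables.remove(r);
    }

    // Called by the thread running the Driver when the Driver finishes;
    // takes the snapshot that can safely be exposed outside the Driver.
    void finish() {
        snapshot = Set.copyOf(releasables);
    }

    Set<Releasable> snapshot() {
        return snapshot;
    }

    interface Releasable {
        void close();
    }
}
```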
This PR adds support for passing the `precision_threshold` parameter to
the `count_distinct` aggregation, similar to what is supported for the
[`cardinality`
aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-cardinality-aggregation.html#_precision_control).
```
from employees | stats h = count_distinct(height, 3000);
```
The parameter is **optional**; if omitted, the **default value is
`3000`**.
The PR adds an `Object[]` array to the constructors of the
`AggregatorFunction` classes, through which we pass all parameters of
the function so that each aggregator can read its own parameters.
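A hypothetical illustration of that plumbing (the real classes are
code-generated; names here are made up):
```java
class CountDistinctLongAggregatorSketch {
    private final int channel;
    private final int precision;

    CountDistinctLongAggregatorSketch(int channel, Object[] parameters) {
        this.channel = channel;
        // precision_threshold is this agg's only parameter; fall back to
        // the documented default of 3000 when omitted.
        this.precision = parameters.length > 0 ? ((Number) parameters[0]).intValue() : 3000;
    }
}
```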
### Add `BigArrays` support to `Aggregator` classes
As I was adding the parameter modifications to the
`AggregatorImplementer` class, I wired in support for `BigArrays`, which
was needed for the HLL state. Until now, only the `GroupingAggregator`
classes had `BigArrays` support.
This removes the unused `elementType` member from `EvalOperator`. Now
the functions it calls are responsible for building the new blocks so
the operator itself doesn't need to know the types it's operating on.
Support geometry and streaming simplification
There are many opportunities to enable geometry simplification in Elasticsearch, both as an explicit feature available to users, and as an internal optimization technique for reducing memory consumption for complex geometries. For the latter case, it can even be considered a bug fix. This PR provides support for constraining Line and LinearRing sizes to a fixed number of points, and thereby a fixed amount of memory usage.
Consider, for example, the geo_line aggregation. This is similar to the top-10 aggregation, but allows the top-10k (ten thousand) points to be aggregated. This is not only a lot of memory, but can still cause unwanted line truncation for very large geometries. Line simplification is a solution to this. It is likely that a much smaller limit than 10k would suffice, while at the same time not truncating the geometry at all, so we fix a bug (truncation) while improving memory usage (pull limit from 10k down to perhaps just 1k).
This PR provides two APIs:
Streaming:
* By using the simplifier.consume(x, y) method on a stream of points, the total memory used is limited to a linear function of k, the total number of points to retain. This algorithm is at its heart based on the Visvalingam–Whyatt algorithm, with concepts from https://bost.ocks.org/mike/simplify/ and in particular the detailed streaming discussions in the paper at https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.7132&rep=rep1&type=pdf
Full-geometry:
* Simplifying full geometries using the simplifier.simplify(geometry) method can work with most geometry types, even GeometryCollection, but:
- Some geometries do not get simplified because it makes no sense to: Point, Circle, Rectangle
- The maxPoints parameter is used as is to apply to the main component (shell for polygons, largest geometry for multi-polygons and geometry collections), and all other sub-components (holes in polygons, etc.) are simplified to a scaled down version of the maxPoints, scaled by the relative size of the sub-component to the main component.
* The simplification itself is done on each Line and LinearRing component using the same streaming algorithm above. Since we use the Visvalingam–Whyatt algorithm, this applies to both streaming and full-geometry simplification with the same essential result, but with better control over memory than normal full-geometry simplifiers.
The basic algorithm for simplification on a stream of points requires maintaining two data structures:
* an array of all currently simplified points (implicitly ordered in stream order)
* a priority queue of all but the two end points with an estimated error on each that expresses the cost of removing that point from the line
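For a concrete picture, here's a minimal sketch of the streaming
approach with a fixed budget of `maxPoints`. It is illustrative only:
the real implementation uses the priority queue described above, while
this sketch does a linear scan for the cheapest point for brevity:
```java
import java.util.ArrayList;
import java.util.List;

class StreamingSimplifierSketch {
    private final int maxPoints;
    private final List<double[]> points = new ArrayList<>(); // {x, y}

    StreamingSimplifierSketch(int maxPoints) {
        this.maxPoints = maxPoints;
    }

    void consume(double x, double y) {
        points.add(new double[] { x, y });
        if (points.size() <= maxPoints) {
            return;
        }
        // Budget exceeded: drop the interior point whose removal changes
        // the line the least, i.e. the one whose triangle with its two
        // neighbors has the smallest area (Visvalingam-Whyatt).
        int cheapest = 1;
        double minArea = Double.POSITIVE_INFINITY;
        for (int i = 1; i < points.size() - 1; i++) {
            double a = triangleArea(points.get(i - 1), points.get(i), points.get(i + 1));
            if (a < minArea) {
                minArea = a;
                cheapest = i;
            }
        }
        points.remove(cheapest); // the two end points are always retained
    }

    private static double triangleArea(double[] a, double[] b, double[] c) {
        return Math.abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2;
    }

    List<double[]> simplified() {
        return points;
    }
}
```
Memory stays a linear function of `maxPoints` no matter how many points
are consumed, which is the property the streaming API is after.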
This makes the `toString` the same as the builder description and moves
a test from `OperatorTests` into the tests for the
`ValuesSourceReaderOperator` itself.
* Use LongSupplier in AllocationService for nano time
AllocationService#currentNanoTime relies on System.nanoTime().
Instead we should inject time using ThreadPool#relativeTimeInNanos.
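A sketch of the injection (simplified; production would wire in
`threadPool::relativeTimeInNanos`, tests a deterministic clock):
```java
import java.util.function.LongSupplier;

class AllocationServiceSketch {
    private final LongSupplier nanoTimeSupplier;

    AllocationServiceSketch(LongSupplier nanoTimeSupplier) {
        this.nanoTimeSupplier = nanoTimeSupplier;
    }

    private long currentNanoTime() {
        return nanoTimeSupplier.getAsLong(); // previously System.nanoTime()
    }
}
```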
This follows on from #94157, to add transport version information to the cluster state diff too. A subsequent PR will deal with proactively fixing any inferred transport versions in the cases where transport information is not in the serialized cluster state or state diff. Without that PR, the reported minimum transport version might actually be lower than the min transport version in the cluster for any releases >8.8.0, until those nodes are restarted.
Note that we use the encoding as follows:
* for values taking [33, 40] bits per value encode using 40 bits per value
* for values taking [41, 48] bits per value encode using 48 bits per value
* for values taking [49, 56] bits per value encode using 56 bits per value
This is an improvement over the encoding used by ForUtils that does
not apply any compression for values taking more than 32 bits per value.
Note that 40, 48 and 56 bits per value represent exact multiples of bytes
(40 bits per value = 5 bytes, 48 bits per value = 6 bytes and 56 bits per
value = 7 bytes). As a result we always write values using 3, 2 or 1 byte
less than the 8 bytes required for a long value.
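A hedged sketch of the rounding rule (the helper name is illustrative,
not the actual ForUtil code):
```java
class ByteAlignedWidthSketch {
    static int roundBitsPerValue(int bitsRequired) {
        if (bitsRequired <= 32) {
            return bitsRequired; // handled by the existing <=32-bit encodings
        }
        if (bitsRequired <= 40) {
            return 40; // 5 bytes per value
        }
        if (bitsRequired <= 48) {
            return 48; // 6 bytes per value
        }
        if (bitsRequired <= 56) {
            return 56; // 7 bytes per value
        }
        return 64; // no compression: the full 8 bytes of a long
    }
}
```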
We also apply compression to gauge metrics under the assumption that
compressing values taking more than 32 bits per value works well for
floating point values, because of the way floating point values
are represented (IEEE 754 format).
This changes the serialization format for queries - when the index version is >=8.8.0, it serializes the actual transport version used into the stream. For BwC with old query formats, it uses the mapped TransportVersion for the index version.
This can be modified later if needed to re-interpret the vint used to store TransportVersion to something else, allowing the format to be further modified if necessary.
Fixes #82794. Upgrade the spotless plugin, which addresses the issue
around formatting `instanceof` expressions. Formatting of statements
including lambdas seems to have improved too.
This swaps the implementation for the arithmetic operations from the one
shared with QL to ones generated by our `ExpressionEvaluator` generator.
These should be both faster than the QL implementations and will be
compatible with block-at-a-time execution. This *should* be the last
thing blocking conversion to block-at-a-time execution. *should* be.
This allows us to be more conservative about what needs to be loaded
when using the fields API, and opens up the possibility of avoiding
using stored fields or source altogether if we can use doc values to
fetch values.
This commit also uses this new information from ValueFetchers to
more efficiently preload stored fields for the `fields` API, while
still allowing the lazy loading of individual fields if they are asked
for by scripts or runtime fields which cannot be introspected.
IndexAnalyzers is currently always a concrete class wrapping several
Maps of NamedAnalyzers. This means that whenever it is used it needs
to instantiate all of its component analyzers, making testing much heavier
than it needs to be. It also means that things like overriding analysis for
legacy indexes is pushed into mapper parameters, rather than being
handled in a single place.
This commit makes IndexAnalyzers into an interface, with an anonymous
concrete implementation that handles reloading and closing for index
shards.
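A hedged sketch of the reshaping (simplified; the real interface exposes
index, search, and search-quote analyzers, and `NamedAnalyzer` below is
a stand-in for the Elasticsearch class):
```java
import java.io.Closeable;
import java.util.Map;

interface IndexAnalyzersSketch extends Closeable {
    NamedAnalyzer get(String name);

    // Stand-in for org.elasticsearch.index.analysis.NamedAnalyzer.
    interface NamedAnalyzer extends Closeable {
        @Override
        void close();
    }

    // Tests can back the interface with a tiny map instead of
    // instantiating every component analyzer.
    static IndexAnalyzersSketch of(Map<String, NamedAnalyzer> analyzers) {
        return new IndexAnalyzersSketch() {
            @Override
            public NamedAnalyzer get(String name) {
                return analyzers.get(name);
            }

            @Override
            public void close() {
                analyzers.values().forEach(NamedAnalyzer::close);
            }
        };
    }
}
```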