* Exhaustive testParseFractionalNumber
* Refactor: encapsulate ByteSizeUnit constructor
* Refactor: store size in bytes
* Support up to 2 decimals in parsed ByteSizeValue
* Fix test for rounding up with no warnings
* ByteSizeUnit transport changes
* Update docs/changelog/120142.yaml
* Changelog details and impact
* Fix change log breaking.area
* Address PR comments
Closes https://github.com/elastic/elasticsearch/issues/119969
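The fractional parsing described above can be sketched roughly like this (a hypothetical helper, not the actual `ByteSizeValue` code; `RoundingMode.UP` is an assumption based on the "rounding up" commit above):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public final class FractionalByteSize {
    // Hypothetical helper: parse the numeric part of e.g. "1.25gb", allow at
    // most two decimals, and store the result in bytes.
    static long parseToBytes(String value, String unitSuffix, long unitBytes) {
        String number = value.substring(0, value.length() - unitSuffix.length()).trim();
        BigDecimal decimal = new BigDecimal(number);
        if (decimal.scale() > 2) {
            throw new IllegalArgumentException("too many decimals: " + value);
        }
        // The rounding mode here is an assumption, not the confirmed behavior.
        return decimal.multiply(BigDecimal.valueOf(unitBytes))
            .setScale(0, RoundingMode.UP)
            .longValueExact();
    }
}
```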
- Renamed "pages_in/out" to "pages_received/emitted", to standardize the names across most operators
- **There are still "pages_processed" operators**; maybe it would make sense to also rename those?
- Added "pages_received/emitted" to the TopN operator, which was missing them
- Added "rows_received/emitted" to most operators
- Added a test to ensure all operators with status provide those metrics
`fold` can be surprisingly heavy! The maximally efficient/paranoid thing
would be to fold each expression one time, in the constant folding rule,
and then store the result as a `Literal`. But this PR doesn't do that
because it's a big change. Instead, it creates the infrastructure for
tracking memory usage for folding and plugs it into as many places as
possible. That's not perfect, but it's better.
This infrastructure limits the allocations of fold, similar to the
`CircuitBreaker` infrastructure we use for values, but it's different
in a critical way: you don't manually free any of the values. This is
important because the plan itself isn't `Releasable`, which is required
when using a real CircuitBreaker. We could have tried to make the plan
releasable, but that'd be a huge change.
Right now there's a single limit of 5% of heap per query. We create the
limit at the start of query planning and use it throughout planning.
There are about 40 places that don't yet use it. We should get them
plugged in as quickly as we can manage. After that, we should look at the
maximally efficient/paranoid thing that I mentioned above: waiting for
constant folding. That's an even bigger change, one I'm not equipped
to make on my own.
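The memory-limit idea can be sketched as a tiny add-only accounting class (illustrative names only; the real infrastructure is richer than this):

```java
// Illustrative sketch of a one-way allocation limit for constant folding:
// callers only ever add, never free, because the folded plan nodes are not
// Releasable and so cannot drive a real CircuitBreaker.
public final class FoldLimit {
    private final long maxBytes;
    private long used;

    FoldLimit(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    // Called before each allocation made while folding an expression.
    void track(long bytes) {
        used += bytes;
        if (used > maxBytes) {
            throw new IllegalStateException("folding used more than " + maxBytes + " bytes");
        }
    }
}
```

A single instance would be created at the start of planning and threaded through every folding site, matching the "5% of heap per query" limit described above.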
This change introduces optional source filtering directly within source loaders (both synthetic and stored).
The main benefit is seen in synthetic source loaders, as synthetic fields are stored independently.
By filtering while loading the synthetic source, generating the source becomes linear in the number of fields that match the filter.
This update also modifies the get document API to apply source filters earlier, directly through the source loader.
The search API, however, is not affected in this change, since the loaded source is still used by other features (e.g., highlighting, fields, nested hits),
and source filtering is always applied as the final step.
A follow-up will be required to ensure careful handling of all search-related scenarios.
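A toy illustration of the benefit: when filtering happens while loading, building the source is linear in the number of matching fields rather than in the total source size (all names here are illustrative, not the actual loader API):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public final class FilteredSourceSketch {
    // Hypothetical model: each synthetic field is stored independently, so we
    // can iterate only the requested fields instead of the whole source.
    static Map<String, Object> loadFiltered(Map<String, ?> storedFields, Set<String> includes) {
        Map<String, Object> source = new LinkedHashMap<>();
        for (String field : includes) {              // linear in matching fields
            Object value = storedFields.get(field);
            if (value != null) {
                source.put(field, value);
            }
        }
        return source;
    }
}
```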
This change loads all the modules and creates the module layers for plugins prior to entitlement
checking during the 2nd phase of bootstrap initialization. This will allow us to know what modules exist
for both validation and checking prior to actually loading any plugin classes (in a follow-up change).
There are now two classes:
- PluginsLoader, which does the module loading and layer creation
- PluginsService, which uses a PluginsLoader to create the main plugin classes and start the plugins
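For reference, layer creation for a plugin directory boils down to the standard JDK module APIs, roughly like this sketch (the actual PluginsLoader resolution logic is more involved):

```java
import java.lang.module.Configuration;
import java.lang.module.ModuleFinder;
import java.nio.file.Path;
import java.util.List;
import java.util.Set;

public final class PluginLayerSketch {
    // Resolve the modules found in pluginDir against the boot layer and define
    // a layer whose classes are loaded by a single new class loader.
    static ModuleLayer defineLayer(Path pluginDir, Set<String> roots) {
        ModuleFinder finder = ModuleFinder.of(pluginDir);
        Configuration parent = ModuleLayer.boot().configuration();
        Configuration cfg = parent.resolve(finder, ModuleFinder.of(), roots);
        return ModuleLayer
            .defineModulesWithOneLoader(cfg, List.of(ModuleLayer.boot()), ClassLoader.getSystemClassLoader())
            .layer();
    }
}
```

Because the `Configuration` is built before any plugin class loads, the set of modules is known up front, which is what makes entitlement validation possible prior to class loading.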
Static fields don't do well in Gradle with configuration cache enabled.
- Use buildParams extension in build scripts
- Keep BuildParams.ci for now for easy serverless migration
- Tweak testing doc
Noticed during a code review that added yet another one of these:
We have quite a few instances of duplicate noop implementations;
let's make tests a little less verbose here.
Technically the constant is test-only but it felt right to just leave it
on the interface.
* Add benchmark for IndexNameExpressionResolver
* Extract IndicesRequest in a local class
* Added one more benchmark to capture a mixed request
---------
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
The libs projects are configured to all begin with `elasticsearch-`.
While it is desirable for the artifacts to contain this consistent
prefix, it means the project names don't match up with their
directories. Additionally, it creates complexities for subproject naming
that must be manually adjusted.
This commit adjusts the project names for those under libs to be their
directory names. The resulting artifacts for these libs are kept the
same, all beginning with `elasticsearch-`.
The most relevant ES changes that upgrading to Lucene 10 requires are:
- use the appropriate IOContext
- Scorer / ScorerSupplier breaking changes
- Regex automata are no longer determinized by default
- minimize moved to test classes
- introduce Elasticsearch900Codec
- adjust slicing code according to the added support for intra-segment concurrency
- disable intra-segment concurrency in tests
- adjust accessor methods for many Lucene classes that became a record
- adapt to breaking changes in the analysis area
Co-authored-by: Christoph Büscher <christophbuescher@posteo.de>
Co-authored-by: Mayya Sharipova <mayya.sharipova@elastic.co>
Co-authored-by: ChrisHegarty <chegar999@gmail.com>
Co-authored-by: Brian Seeders <brian.seeders@elastic.co>
Co-authored-by: Armin Braun <me@obrown.io>
Co-authored-by: Panagiotis Bailis <pmpailis@gmail.com>
Co-authored-by: Benjamin Trent <4357155+benwtrent@users.noreply.github.com>
This speeds up grouping by bytes valued fields (keyword, text, ip, and
wildcard) when the input is an ordinal block:
```
bytes_refs 22.213 ± 0.322 -> 19.848 ± 0.205 ns/op (*maybe* real, maybe noise. still good)
ordinal didn't exist -> 2.988 ± 0.011 ns/op
```
I see this as 20ns -> 3ns, an 85% speed up. We never had the ordinals
branch before so I'm expecting the same performance there - about 20ns
per op.
This also speeds up grouping by a pair of byte valued fields:
```
two_bytes_refs 83.112 ± 42.348 -> 46.521 ± 0.386 ns/op
two_ordinals 83.531 ± 23.473 -> 8.617 ± 0.105 ns/op
```
The speed up is much better when the fields are ordinals because hashing
bytes is comparatively slow.
I believe the ordinals case is quite common. I've run into it in quite a
few profiles.
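Why ordinals help can be seen in a toy model: the per-row work becomes an array increment, and each distinct value is hashed at most once per dictionary entry (illustrative code, not the actual BlockHash implementation):

```java
import java.util.HashMap;
import java.util.Map;

public final class OrdinalGrouping {
    // dictionary[i] is the bytes value behind ordinal i; ordinals[row] indexes
    // into it. Counting per ordinal avoids hashing the bytes on every row.
    static Map<String, Long> countsByOrdinal(String[] dictionary, int[] ordinals) {
        long[] counts = new long[dictionary.length];
        for (int ord : ordinals) {
            counts[ord]++;                             // per-row work: an array bump
        }
        Map<String, Long> result = new HashMap<>();
        for (int i = 0; i < dictionary.length; i++) {
            if (counts[i] > 0) {
                result.put(dictionary[i], counts[i]);  // hash each distinct value once
            }
        }
        return result;
    }
}
```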
Lots of effectively singleton objects here and fields that can be made
static; this saves a little more on the per-index overhead and might
reveal further simplifications.
Final (I wish) part of https://github.com/elastic/elasticsearch/issues/99815
Also fixes https://github.com/elastic/elasticsearch/issues/113916
## Steps
1. Migrate TDigest classes to use a custom Array implementation. Temporarily use a simple array wrapper (https://github.com/elastic/elasticsearch/pull/112810)
2. Implement CircuitBreaking in the `WrapperTDigestArrays` class. Add Releasable/AutoCloseable and ensure everything is closed (https://github.com/elastic/elasticsearch/pull/113105)
3. Pass the CircuitBreaker as a parameter to TDigestState from wherever it's being used (https://github.com/elastic/elasticsearch/pull/113387)
   - ESQL: Pass a real CB
   - Other aggs: Use the deprecated methods on `TDigestState`, which will use a no-op CB instead
4. Account for the remaining TDigest classes' size ("SHALLOW_SIZE") (This PR)
Every step should be safely mergeable to main:
- The first and second steps should have no impact.
- The third and fourth ones will start increasing the CB count partially.
## Remarks
As TDigests are releasable now, I had to refactor all tests, adding try-with-resources or direct close() calls. That added a lot of changes, but most of them are trivial.
Aside from that, TDigestStates are now closed in ESQL. Old aggregations
don't close them, as that's not trivial. However, since they use the
NoopCircuitBreaker, there's no problem with it: there's nothing to be
closed.
## _Remarks 2_
I tried to follow the same pattern in how everything is accounted. On each TDigest class:
- A static constant "SHALLOW_SIZE" with the object weight
- A field `AtomicBoolean closed` to ensure an idempotent `close()`
- A static `create()` method that accounts for SHALLOW_SIZE and returns a new instance. And the important part: on exception, it discounts SHALLOW_SIZE again
- A `ramBytesUsed()` (Accountable interface), barely used for anything really, but some assertions I believe
- A constructor that closes everything it created on exception (if it creates an array, and the next array surpasses the CB limit, the first one must be closed)
- And a close() that will, well, close everything and discount SHALLOW_SIZE
A lot of steps to make sure everything works well in this multi-level structure, but I believe the result is quite clean.
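A minimal standalone sketch of that accounting pattern, using a plain `AtomicLong` as a stand-in for the real CircuitBreaker (class and constant values are illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

public final class AccountedDigest implements AutoCloseable {
    static final long SHALLOW_SIZE = 48;          // illustrative object weight
    private final AtomicLong breaker;             // stands in for the CircuitBreaker
    private final AtomicBoolean closed = new AtomicBoolean(false);

    static AccountedDigest create(AtomicLong breaker) {
        breaker.addAndGet(SHALLOW_SIZE);          // account before construction
        try {
            return new AccountedDigest(breaker);
        } catch (RuntimeException e) {
            breaker.addAndGet(-SHALLOW_SIZE);     // discount again on failure
            throw e;
        }
    }

    private AccountedDigest(AtomicLong breaker) {
        this.breaker = breaker;
    }

    @Override
    public void close() {
        if (closed.compareAndSet(false, true)) {  // idempotent close
            breaker.addAndGet(-SHALLOW_SIZE);
        }
    }
}
```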
This adds a test to *every* agg for when it's entirely filtered away and
another for when filtering is enabled but unused. I'll follow up with
another test later for partial filtering.
That test caught a bug where some aggs would think they'd been `seen`
when they hadn't. This fixes that too.
Part of https://github.com/elastic/elasticsearch/issues/99815
## Steps
1. Migrate TDigest classes to use a custom Array implementation. Temporarily use a simple array wrapper (https://github.com/elastic/elasticsearch/pull/112810)
2. Implement CircuitBreaking in the `MemoryTrackingTDigestArrays` class. Add `Releasable` and ensure it's always closed within TDigest (This PR)
3. Pass the CircuitBreaker as a parameter to TDigestState from wherever it's being used
4. Account remaining TDigest classes size ("SHALLOW_SIZE")
Every step should be safely mergeable to main:
- The first and second steps should have no impact.
- The third and fourth ones will start increasing the CB count partially.
## Remarks
To simplify testing the CircuitBreaker, I added a helper method + `@After` to ESTestCase.
Right now CBs are usually tested through MockBigArrays, e.g.:
f7a0196b45/x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/expression/function/AbstractFunctionTestCase.java (L1263-L1265)
So I guess there was no need for this yet. But I may have missed something somewhere.
Also, I'm separating this PR from "step 3", as integrating the CB into the current usages may require some refactoring of external code, which may be somewhat more _dangerous_.
This speeds up the `CASE` function when it has two or three arguments
and both of the arguments are constants or fields. This works because
`CASE` is lazy so it can avoid warnings in cases like
```
CASE(foo != 0, 2 / foo, 1)
```
And, in the case where the function is *very* slow, it can avoid the
computations.
But if the lhs and rhs of the `CASE` are constant then there isn't any
work to avoid.
The performance improvement is pretty substantial:
```
(operation) Before Error After Error Units
case_1_lazy 97.422 ± 1.048 101.571 ± 0.737 ns/op
case_1_eager 79.312 ± 1.190 4.601 ± 0.049 ns/op
```
The top line is a `CASE` that has to be lazy - it shouldn't change. The
4 nanos change here is noise. The eager version improves by about 94%.
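The eager path can be illustrated with a toy sketch: when both branches are constants there is no work or warning to defer, so the result is a simple per-row select (names are illustrative, not the actual ESQL evaluator classes):

```java
public final class EagerCase {
    // Eagerly evaluate CASE(condition, lhs, rhs) for a block of rows when both
    // branches are constants: a plain select, with none of the lazy machinery.
    static long[] caseEager(boolean[] condition, long lhs, long rhs) {
        long[] result = new long[condition.length];
        for (int i = 0; i < condition.length; i++) {
            result[i] = condition[i] ? lhs : rhs;
        }
        return result;
    }
}
```

The lazy path has to evaluate each branch only for the rows that select it (to suppress warnings like the `2 / foo` example above); the eager path skips all of that bookkeeping.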
Part of https://github.com/elastic/elasticsearch/issues/99815
## Steps
1. Migrate TDigest classes to use a custom Array implementation. Temporarily use a simple array wrapper (This PR)
2. Implement a BigArrays class and replace the wrapper with it. Add Releasable/AutoCloseable and ensure everything is closed
3. Account remaining TDigest classes size
Every step should be safely mergeable to main:
- The first one should have no impact.
- The second and third ones will start increasing the CB count partially.
## Considerations
The third step will probably require some other interface to manually count used memory _before_ creation of the classes. Something like a `TDigestCircuitBreaker`.
After building this one, I've started considering whether it would make sense to just do the breaker and not migrate things to BigArrays: simply call `breaker.increase(...)` before array creation.
The pros I see of BigArrays:
- Automatically verified in ESTestCases. Which means tests will fail unless everything is correctly `.close()`d
- Automatic byte counting, without added calculations in the TDigests (apart from the close() ones)
And the cons:
- Filling the code with .get()/.set()
- `.sort()` will require a custom implementation for BigArrays. Same for the `.set()` method that copies a range of an array to another
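For illustration, the kind of thin array wrapper step 1 introduces might look like this (interface and names are hypothetical, not the PR's actual classes):

```java
public interface DoubleArraySketch {
    double get(int index);

    void set(int index, double value);

    int size();

    // Wrap a plain double[]; a memory-tracked implementation (BigArrays or a
    // breaker-accounting one) could be swapped in behind the same interface.
    static DoubleArraySketch wrap(double[] backing) {
        return new DoubleArraySketch() {
            public double get(int i) {
                return backing[i];
            }

            public void set(int i, double v) {
                backing[i] = v;
            }

            public int size() {
                return backing.length;
            }
        };
    }
}
```

The indirection is exactly the "filling the code with .get()/.set()" cost listed in the cons above.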
## Benchmarks
This is a comparison of the benchmarks between main (left) and this branch (right). Bigger is worse:
```
TDigest
(compression) (distribution) (tdigestFactory) Score Error -> Score Error Units
100 NORMAL MERGE 0,157 ± 0,248 -> 0,170 ± 0,071 us/op
100 NORMAL AVL_TREE 0,263 ± 0,078 -> 0,309 ± 0,170 us/op
100 NORMAL HYBRID 0,167 ± 0,040 -> 0,169 ± 0,042 us/op
100 GAUSSIAN MERGE 0,159 ± 0,041 -> 0,163 ± 0,070 us/op
100 GAUSSIAN AVL_TREE 0,339 ± 0,127 -> 0,336 ± 0,029 us/op
100 GAUSSIAN HYBRID 0,163 ± 0,049 -> 0,167 ± 0,072 us/op
300 NORMAL MERGE 0,174 ± 0,044 -> 0,183 ± 0,031 us/op
300 NORMAL AVL_TREE 0,443 ± 0,084 -> 0,438 ± 0,079 us/op
300 NORMAL HYBRID 0,180 ± 0,059 -> 0,184 ± 0,039 us/op
300 GAUSSIAN MERGE 0,167 ± 0,040 -> 0,173 ± 0,054 us/op
300 GAUSSIAN AVL_TREE 0,403 ± 0,098 -> 0,387 ± 0,125 us/op
300 GAUSSIAN HYBRID 0,183 ± 0,031 -> 0,178 ± 0,049 us/op
```
```
StableSort
(sortDirection) Score Error -> Score Error Units
0 16,435 ± 1,443 -> 15,909 ± 0,745 ms/op
1 5,237 ± 0,184 -> 4,994 ± 0,461 ms/op
-1 5,458 ± 0,398 -> 4,696 ± 0,265 ms/op
```
There's barely any relevant effect that I can see.
The native platform dir can be found using a TestUtil method, but the
benchmarks were trying to construct it on their own. This commit switches
to using the util method.
This adds a `Block#keepMask(BooleanVector)` method that will make a new
block, keeping all of the values where the vector is `true` and
`null`ing all of the values where the vector is `false`.
This will be useful for implementing partial aggregation application
like `| STATS MAX(a WHERE b > 1), MIN(j WHERE b > 2) BY bar`. Or however
the syntax ends up being. We already skip `null` group keys and we can
evaluate the `b > 2` bits to a mask pretty easily. It should also be
useful in optimizing `CASE(a > 2, foo)` - but only when the RHS of the
CASE is `null` and the LHS is a constant or constant-like.
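The semantics can be modeled on a plain array (the real API operates on `Block` and `BooleanVector`, not boxed arrays):

```java
public final class KeepMaskSketch {
    // Keep values where the mask is true; null out values where it is false.
    static Integer[] keepMask(Integer[] values, boolean[] mask) {
        Integer[] out = new Integer[values.length];
        for (int i = 0; i < values.length; i++) {
            out[i] = mask[i] ? values[i] : null;
        }
        return out;
    }
}
```

Because aggs already skip `null` group keys, masking to `null` is enough to make `STATS ... WHERE` style filters work without touching the grouping machinery.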
This is something that's very optimizable. I haven't really optimized
it in this PR, but it should be possible to speed this up a ton and
remove a lot of copying. Here's where the benchmarks start:
```
(dataTypeAndBlockKind) Mode Cnt Score Error Units
int/array avgt 7 3.705 ± 0.153 ns/op
int/vector avgt 7 3.234 ± 0.078 ns/op
```
That's about the same speed as reading the block. In a few of these
cases I expect we can get them to constant performance rather than
per-record performance.
Native libraries in Java are loaded by calling System.loadLibrary. This
method inspects paths in the java.library.path to find the requested
library. Elasticsearch previously used this to find libsystemd, but now
the only remaining use is to set the additional platform directory in
which Elasticsearch keeps its own native libraries.
One issue with setting java.library.path is that it's not set for the CLI
process, which makes loading the native library infrastructure from CLIs
difficult. This commit reworks how Elasticsearch native libraries are
found in order to avoid needing to set java.library.path. There are two
cases. The simplest is production, where the working directory is the
Elasticsearch installation directory, so the platform specific directory
can be constructed. The second case is for tests where we don't have an
installation. We already pass in java.library.path there, so this change
renames the system property to be a test specific property that the new
loading infrastructure looks for.
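The production case described above amounts to deriving a platform-specific directory from the working directory; a hedged sketch (the directory layout below is illustrative of the idea, not the actual Elasticsearch layout):

```java
import java.nio.file.Path;
import java.util.Locale;

public final class NativeLibDir {
    // Build a platform-specific subdirectory under the install dir, e.g.
    // <install>/lib/platform/linux-amd64, so System.load can be called with an
    // absolute path instead of relying on java.library.path.
    static Path platformDir(Path installDir) {
        String os = System.getProperty("os.name").toLowerCase(Locale.ROOT);
        if (os.startsWith("mac")) {
            os = "darwin";
        } else if (os.startsWith("windows")) {
            os = "windows";
        }
        String arch = System.getProperty("os.arch");
        return installDir.resolve("lib").resolve("platform").resolve(os + "-" + arch);
    }
}
```

In the test case, the same loading code would instead read the renamed test-specific system property to find the directory.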