Elasticsearch Microbenchmark Suite
This directory contains the microbenchmark suite of Elasticsearch. It relies on JMH.
Purpose
We do not want to microbenchmark everything but the kitchen sink and should typically rely on our macrobenchmarks with Rally. Microbenchmarks are intended to spot performance regressions in performance-critical components. The microbenchmark suite is also handy for ad-hoc microbenchmarks, but please remove them again before merging your PR.
Getting Started
Just run `gradlew -p benchmarks run` from the project root directory. It will build all microbenchmarks, execute them and print the result.
Running Microbenchmarks
Running via an IDE is not supported as the results are meaningless because we have no control over the JVM running the benchmarks.
If you want to run a specific benchmark class like, say, `MemoryStatsBenchmark`, you can use `--args`:
gradlew -p benchmarks run --args 'MemoryStatsBenchmark'
Everything inside the `'` quotes gets sent on the command line to JMH.
You can set benchmark parameters with `-p`:
gradlew -p benchmarks/ run --args 'RoundingBenchmark.round -prounder=es -prange="2000-10-01 to 2000-11-01" -pzone=America/New_York -pinterval=10d -pcount=1000000'
The benchmark code defines default values for the parameters, so if you leave any out JMH will run with each default value, one after the other. This will run with `interval` set to `calendar year`, then `calendar hour`, then `10d`, then `5d`, then `1h`:
gradlew -p benchmarks/ run --args 'RoundingBenchmark.round -prounder=es -prange="2000-10-01 to 2000-11-01" -pzone=America/New_York -pcount=1000000'
Adding Microbenchmarks
Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the JMH samples.
In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and end the class name of a benchmark with `Benchmark`. To have JMH execute a benchmark, annotate the respective methods with `@Benchmark`.
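As a quick orientation, here is a minimal sketch of what such a class could look like. The class name and the measured method are hypothetical; they only illustrate the naming convention and the `@Benchmark` annotation.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// Hypothetical example: the class name ends with "Benchmark" so it follows
// the suite's naming convention, and JMH runs the method annotated with
// @Benchmark.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class StringJoinBenchmark {
    private final String[] parts = { "lorem", "ipsum", "dolor", "sit", "amet" };

    @Benchmark
    public String join() {
        // Return the result so the JIT cannot eliminate the work as dead code.
        return String.join(",", parts);
    }
}
```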
Tips and Best Practices
To get realistic results, you should exercise care when running benchmarks. Here are a few tips:
Do
- Ensure that the system executing your microbenchmarks has as little load as possible. Shutdown every process that can cause unnecessary runtime jitter. Watch the `Error` column in the benchmark results to see the run-to-run variance.
- Ensure to run enough warmup iterations to get the benchmark into a stable state. If you are unsure, don't change the defaults.
- Avoid CPU migrations by pinning your benchmarks to specific CPU cores. On Linux you can use `taskset`.
- Fix the CPU frequency to avoid Turbo Boost from kicking in and skewing your results. On Linux you can use `cpufreq-set` and the `performance` CPU governor.
- Vary the problem input size with `@Param` (see the sketch after this list).
- Use the integrated profilers in JMH to dig deeper if benchmark results do not match your hypotheses:
  - Add `-prof gc` to the options to check whether the garbage collector runs during a microbenchmark and skews your results. If so, try to force a GC between runs (`-gc true`) but watch out for the caveats.
  - Add `-prof perf` or `-prof perfasm` (both only available on Linux, see Disassembling below) to see hotspots.
  - Add `-prof async` to see hotspots.
- Have your benchmarks peer-reviewed.
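The following sketch shows how `@Param` can vary the input size and how warmup and measurement settings can be made explicit when the defaults really are not enough. The class name, parameter values, and iteration counts are hypothetical, not a recommendation.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

// Hypothetical example: @Param runs the benchmark once per input size, and
// the explicit @Warmup/@Measurement settings override the JMH defaults.
@Fork(1)
@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 10, time = 1, timeUnit = TimeUnit.SECONDS)
@State(Scope.Benchmark)
public class SumBenchmark {
    @Param({ "100", "10000", "1000000" })
    private int size;

    private long[] values;

    @Setup
    public void setup() {
        values = new long[size];
        for (int i = 0; i < size; i++) {
            values[i] = i;
        }
    }

    @Benchmark
    public long sum() {
        long total = 0;
        for (long v : values) {
            total += v;
        }
        return total;
    }
}
```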
Don't
- Blindly believe the numbers that your microbenchmark produces but verify them by measuring e.g. with `-prof perfasm`.
- Run more threads than your number of CPU cores (in case you run multi-threaded microbenchmarks).
- Look only at the `Score` column and ignore `Error`. Instead, take countermeasures to keep `Error` low / variance explainable.
Disassembling
NOTE: Linux only. Sorry Mac and Windows.
Disassembling is fun! Maybe not always useful, but always fun! Generally, you'll want to install `perf` and the JDK's `hsdis`.
`perf` is generally available via `apt-get install perf` or `pacman -S perf`. `hsdis` you'll want to compile from source; that is a little more involved. This worked on 2020-08-01:
git clone git@github.com:openjdk/jdk.git
cd jdk
git checkout jdk-17-ga
cd src/utils/hsdis
# Get a known good binutils
wget https://ftp.gnu.org/gnu/binutils/binutils-2.35.tar.gz
tar xf binutils-2.35.tar.gz
make BINUTILS=binutils-2.35 ARCH=amd64
sudo cp build/linux-amd64/hsdis-amd64.so /usr/lib/jvm/java-17-openjdk/lib/server/
If you want to disassemble a single method do something like this:
gradlew -p benchmarks run --args ' MemoryStatsBenchmark -jvmArgs "-XX:+UnlockDiagnosticVMOptions -XX:CompileCommand=print,*.yourMethodName -XX:PrintAssemblyOptions=intel"'
If you want `perf` to find the hot methods for you, then add `-prof perfasm`.
Async Profiler
Note: Linux and Mac only. Sorry Windows.
IMPORTANT: The 2.0 version of the profiler doesn't seem to be compatible with JMH as of 2021-04-30.
The async profiler is neat because it does not suffer from the safepoint bias problem. And because it makes pretty flame graphs!
Let user processes read performance stuff:
sudo bash
echo 0 > /proc/sys/kernel/kptr_restrict
echo 1 > /proc/sys/kernel/perf_event_paranoid
exit
Grab the async profiler from https://github.com/jvm-profiling-tools/async-profiler and run `prof async` like so:
gradlew -p benchmarks/ run --args 'LongKeyedBucketOrdsBenchmark.multiBucket -prof "async:libPath=/home/nik9000/Downloads/async-profiler-3.0-29ee888-linux-x64/lib/libasyncProfiler.so;dir=/tmp/prof;output=flamegraph"'
Note: As of January 2025 the latest release of async profiler doesn't work with our JDK but the nightly is fine.
If you are on Mac, this'll warn you that you downloaded the shared library from the internet. You'll need to go to settings and allow it to run.
The profiler tells you it'll be more accurate if you install debug symbols with the JVM. I didn't, and the results looked pretty good to me. (2021-02-01)