Performance journey docs (#140034)

* journey docs

* link

* code review

* cc

* cc
Liza Katz 2022-09-06 12:58:57 +03:00 committed by GitHub
parent 725d1280ea
commit a93f5d9986

@@ -37,7 +37,7 @@ Each document in the index has the following structure:
"_source": {
"timestamp": "2022-08-31T11:29:58.275Z"
"event_type": "performance_metric", // All events share a common event type to simplify mapping
-    "eventName": "dashboard_loaded", // Event name as specified when reporting it
+    "eventName": APP_ACTION, // Event name as specified when reporting it
"duration": 736, // Event duration as specified when reporting it
"context": { // Context holds information identifying the deployment, version, application and page that generated the event
"version": "8.5.0-SNAPSHOT",
@@ -159,6 +159,55 @@ to follow if it's important for you to look inside of a specific event e.g. `pag
- **Keep performance in mind**. Reporting the performance of Kibana should never harm its own performance.
Avoid sending events too frequently (`onMouseMove`) or adding serialized JSON objects (whole `SavedObjects`) into the meta object.
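One way to honor this guideline for high-frequency signals is to throttle the reporting call. The sketch below is illustrative only: `reportEvent` stands in for whatever analytics client your plugin uses and is not a real Kibana API.

```
// Hypothetical sketch: a throttled reporter that drops events arriving
// faster than `intervalMs`, so a noisy source (e.g. mouse movement)
// cannot flood the telemetry pipeline.
type PerfEvent = { eventName: string; duration: number; meta?: Record<string, unknown> };

function createThrottledReporter(
  reportEvent: (event: PerfEvent) => void,
  intervalMs: number
) {
  let lastSent = -Infinity;
  return (event: PerfEvent) => {
    const now = Date.now();
    if (now - lastSent >= intervalMs) {
      lastSent = now;
      // Keep `meta` small and flat -- never a serialized SavedObject.
      reportEvent(event);
    }
  };
}
```

Two reports within the same interval result in a single event being sent.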
### Benchmarking performance on CI
One of the use cases for event-based telemetry is benchmarking the performance of features over time.
To keep track of their stability, the #kibana-performance team has developed a special set of
functional tests called `Journeys`. These journeys execute a UI workflow and allow the telemetry to be
reported to a cluster where it can then be analysed.
These journeys run on key branches (main and release versions) on dedicated machines to produce results
that are as stable and reproducible as possible.
#### Machine specifications
All benchmarks are run on bare-metal machines with the [following specifications](https://www.hetzner.com/dedicated-rootserver/ex100):
- CPU: Intel® Core™ i9-12900K
- RAM: 128 GB
- SSD: 1.92 TB Datacenter Gen4 NVMe
Since the tests are run on a local machine, realistic network throttling is also applied to
simulate a real-life internet connection. This means that all requests have a [fixed latency and limited bandwidth](https://github.com/elastic/kibana/blob/main/x-pack/test/performance/services/performance.ts#L157).
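To get a feel for what this throttling does to response times, the sketch below computes the expected cost of a download under a fixed latency and a bandwidth cap. The numbers are placeholders for illustration, not the values used on CI (see the linked `performance.ts` for those).

```
// Illustrative model of throttled network conditions: every request pays
// a fixed latency, plus transfer time proportional to payload size.
const THROTTLING = {
  latencyMs: 100,               // placeholder: added to every request
  downloadBytesPerSec: 750_000, // placeholder: ~6 Mbit/s
};

function estimatedResponseMs(payloadBytes: number): number {
  const transferMs = (payloadBytes / THROTTLING.downloadBytesPerSec) * 1000;
  return THROTTLING.latencyMs + transferMs;
}
```

Under these placeholder values, a 750 KB response takes about 1.1 seconds, which is why payload size shows up directly in journey durations.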
#### Journey implementation
If you would like to keep track of the stability of your events, implement a journey by adding a functional
test to the `x-pack/test/performance/journeys` folder.
The telemetry reported during the execution of these journeys is sent to the `telemetry-v2-staging` cluster,
along with the execution context. Use the `context.labels.ciBuildName` label to filter events down to only those
originating from performance runs and to visualize the duration of events (or their breakdowns).
For troubleshooting purposes, run the journey locally with:
```
node scripts/functional_test_runner --config x-pack/test/performance/journeys/$YOUR_JOURNEY_NAME/config.ts
```
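Conceptually, a journey is an ordered list of named UI steps whose durations are measured and reported. The toy sketch below shows only that idea; none of these names are real Kibana APIs, and the actual journey format is defined by the files in `x-pack/test/performance/journeys`.

```
// Toy model of a journey: run named steps in order and record each
// step's duration, mirroring what the functional tests report as
// telemetry. Not a real Kibana API.
type Step = { name: string; run: () => void };

function runJourney(steps: Step[]): Array<{ name: string; durationMs: number }> {
  return steps.map(({ name, run }) => {
    const start = Date.now();
    run();
    return { name, durationMs: Date.now() - start };
  });
}
```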
#### Analyzing journey results
- Be sure to narrow your analysis down to performance events by specifying a filter `context.labels.ciBuildName: kibana-single-user-performance`.
Otherwise you might be looking at results originating from different hardware.
- You can look at the results of a specific journey by filtering on `context.labels.journeyName`.
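Combined, the two filters above can be expressed as a single KQL query; the journey name here is only an example:

```
context.labels.ciBuildName : "kibana-single-user-performance" and context.labels.journeyName : "flight_dashboard"
```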
Please contact the #kibana-performance team if you need more help visualising and tracking the results.
### Production performance tracking
All users who have opted in to report telemetry will report event-based telemetry as well.
The data is available for analysis on the production telemetry cluster.
# Analytics Client
Holds the public APIs to report events, enrich the events' context and set up the transport mechanisms. Please check out the package documentation to get more information about