[[development-functional-tests]]
== Functional Testing

We use functional tests to make sure the {kib} UI works as expected. They replace hours of manual testing by automating user interaction. To have better control over our functional test environment, and to make it more accessible to plugin authors, {kib} uses a tool called the `FunctionalTestRunner`.

[discrete]
=== Running functional tests

The `FunctionalTestRunner` (FTR) is very bare bones and gets most of its functionality from its config file. The {kib} repo contains many FTR config files which use slightly different configurations for the {kib} server or {es}, have different test files, and potentially other config differences. You can find a manifest of all the FTR config files in `.buildkite/ftr_configs.yml`. If you’re writing a plugin outside the {kib} repo, you will have your own config file.
See <<external-plugin-functional-tests>> for more info.

There are three ways to run the tests depending on your goals:

1. Easiest option:
** Description: Starts up {kib} & {es} servers, followed by running tests. This is much slower when running the tests multiple times because of the slow startup time for the servers. Recommended for single runs.
** `node scripts/functional_tests`
*** does everything in a single command, including running {es} and {kib} locally
*** tears down everything after the tests run
*** exit code reports success/failure of the tests

2. Best for development:
** Description: Two commands, run in separate terminals, separate the components that are long-running and slow from those that are ephemeral and fast. Tests can be re-run much faster, and this still runs {es} & {kib} locally (see the sketch after this list for the typical workflow).
** `node scripts/functional_tests_server`
*** starts {es} and {kib} servers
*** slow to start
*** can be reused for multiple executions of the tests, thereby saving some time when re-running tests
*** automatically restarts the {kib} server when relevant changes are detected
** `node scripts/functional_test_runner`
*** runs the tests against {kib} & {es} servers that were started by `node scripts/functional_tests_server`
*** exit code reports success or failure of the tests

3. Custom option:
** Description: Runs tests against instances of {es} & {kib} started some other way (like Elastic Cloud, or an instance you are managing in some other way).
** Just executes the functional tests
** URL, credentials, etc. for {es} and {kib} are specified via environment variables
** When running against an Elastic Cloud instance, two additional environment variables are required: `TEST_CLOUD` and `ES_SECURITY_ENABLED`
** You must run the same branch of tests as the version of {kib} you're testing. To run against a previous minor version use the option `--es-version <instance version>`
** To run a specific configuration use the option `--config <configuration file>`
** Here's an example that runs against an Elastic Cloud instance
+
["source","shell"]
----------
export TEST_KIBANA_URL=https://elastic:password@my-kbn-cluster.elastic-cloud.com:443
export TEST_ES_URL=https://elastic:password@my-es-cluster.elastic-cloud.com:443

export TEST_CLOUD=1
export ES_SECURITY_ENABLED=1

node scripts/functional_test_runner [--config <config>] [--es-version <instance version>]
----------

** Or you can override any or all of these individual parts of the URL and leave the others at their default values.
+
["source","shell"]
----------
export TEST_KIBANA_PROTOCOL=https
export TEST_KIBANA_HOSTNAME=my-kibana-instance.internal.net
export TEST_KIBANA_PORT=443
export TEST_KIBANA_USER=kibana
export TEST_KIBANA_PASS=<password>

export TEST_ES_PROTOCOL=http
export TEST_ES_HOSTNAME=my-es-cluster.internal.net
export TEST_ES_PORT=9200
export TEST_ES_USER=elastic
export TEST_ES_PASS=<password>
node scripts/functional_test_runner
----------

** Selenium tests are run in headless mode on CI. Locally the same tests will be executed in a real browser. You can activate headless mode by setting the environment variable:
+
["source", "shell"]
----------
export TEST_BROWSER_HEADLESS=1
----------

** If you are using Google Chrome, you can slow down the local network connection to verify test stability:
+
["source", "shell"]
----------
export TEST_THROTTLE_NETWORK=1
----------

** When running against a Cloud deployment, some tests are not applicable. To skip tests that do not apply, use `--exclude-tag`. An example shell file can be found at: {kibana-blob}test/scripts/jenkins_cloud.sh[test/scripts/jenkins_cloud.sh]
+
["source", "shell"]
----------
node scripts/functional_test_runner --exclude-tag skipCloud
----------
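
For reference, the development workflow described in option 2 above usually looks something like this, with each command running in its own terminal:

["source","shell"]
----------
# terminal 1: start the Elasticsearch and Kibana servers and leave them running
node scripts/functional_tests_server

# terminal 2: run (and re-run) the tests against the servers started above
node scripts/functional_test_runner
----------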

[discrete]
==== More about `node scripts/functional_test_runner`

When run without any arguments the `FunctionalTestRunner` automatically loads the configuration in the standard location, but you can override that behavior with the `--config` flag. You can specify multiple configurations by passing multiple `--config` arguments.

* `--config test/functional/apps/app-name/config.js` starts {es} and {kib} servers with the WebDriver tests configured to run in Chrome for a specific app. For example, `--config test/functional/apps/home/config.js` starts {es} and {kib} servers with the WebDriver tests configured to run in Chrome for the home app.
* `--config test/functional/config.firefox.js` starts {es} and {kib} servers with the WebDriver tests configured to run in Firefox.
* `--config test/api_integration/config.js` starts {es} and {kib} servers with the api integration tests configuration.
* `--config test/accessibility/config.ts` starts {es} and {kib} servers with the WebDriver tests configured to run an accessibility audit using https://www.deque.com/axe/[axe].

There are also command line flags for `--bail` and `--grep`, which behave just like their mocha counterparts. For instance, use `--grep=foo` to run only tests that match a regular expression.

Logging can also be customized with the `--quiet`, `--debug`, or `--verbose` flags.

There are also options like `--include` to run only the tests defined in a single file or set of files.

Run `node scripts/functional_test_runner --help` to see all available options.
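
For instance, an illustrative invocation that runs a single config in Chrome, filters the tests by name, stops on the first failure, and turns on debug logging might look like this (the grep pattern is just a placeholder):

["source","shell"]
----------
node scripts/functional_test_runner --config test/functional/apps/home/config.js --grep=home --bail --debug
----------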

[discrete]
=== Writing functional tests

[discrete]
==== Environment

The tests are written in https://mochajs.org[mocha] using https://github.com/elastic/kibana/tree/main/packages/kbn-expect[@kbn/expect] for assertions.

We use the https://www.w3.org/TR/webdriver1/[WebDriver Protocol] to run tests in both Chrome and Firefox with the help of https://sites.google.com/a/chromium.org/chromedriver/[chromedriver] and https://firefox-source-docs.mozilla.org/testing/geckodriver/[geckodriver]. When the `FunctionalTestRunner` launches, the `remote` service creates a new WebDriver session, which starts the driver and a stripped-down browser instance. We use the `browser` service and the `webElementWrapper` class to wrap the https://seleniumhq.github.io/selenium/docs/api/javascript/module/selenium-webdriver/[WebDriver API].

The `FunctionalTestRunner` automatically transpiles functional tests using babel, so that tests can use the same ECMAScript features that {kib} source code uses. See {kibana-blob}STYLEGUIDE.mdx[STYLEGUIDE.mdx].

[discrete]
==== Definitions

**Provider:**

Code run by the `FunctionalTestRunner` is wrapped in a function so it can be passed around via config files and be parameterized. Any of these Provider functions may be asynchronous and should return/resolve-to the value they are meant to _provide_. Provider functions will always be called with a single argument: a provider API (see the <<functional_test_runner_provider_api,Provider API Section>>).

A config provider:

["source","js"]
-----------
// config and test files use `export default`
export default function (/* { providerAPI } */) {
  return {
    // ...
  }
}
-----------

**Service**:::
A Service is a named singleton created using a subclass of `FtrService`. Tests and other services can retrieve service instances by asking for them by name. All functionality except the mocha API is exposed via services. When you write your own functional tests, check for existing services that help with the interactions you're looking to execute, and add new services for interactions which aren't already encoded in a service.

**Service Providers**:::
For legacy purposes, and for when creating a subclass of `FtrService` is inconvenient, you can also create services using a "Service Provider". These are functions which create service instances and return them. These instances are cached and provided to tests. Currently these providers may also return a Promise for the service instance, allowing the service to do some setup work before tests run. We expect to fully deprecate and remove support for async service providers in the near future and instead require that services use the `lifecycle` service to run setup before tests. Providers which return instances of classes other than `FtrService` will likely remain supported for as long as possible.

**Page objects**:::
Page objects are functionally equivalent to services, except they are loaded with a slightly different mechanism and generally defined separately from services. When you write your own functional tests you might want to write some of your services as Page objects, but it is not required.

**Test Files**:::
The `FunctionalTestRunner`'s primary purpose is to execute test files. These files export a Test Provider that is called with a Provider API but is not expected to return a value. Instead Test Providers define a suite using https://mochajs.org/#bdd[mocha's BDD interface].

**Test Suite**:::
A test suite is a collection of tests defined by calling `describe()`, and then populated with tests and setup/teardown hooks by calling `it()`, `before()`, `beforeEach()`, etc. Every test file must define only one top level test suite, and test suites can have as many nested test suites as they like.

**Tags**:::
Use tags in the `describe()` function to group functional tests. Tags include:
* `ciGroup{id}` - Assigns the test suite to a specific CI worker
* `skipCloud` and `skipFirefox` - Excludes the test suite from running on Cloud or Firefox
* `includeFirefox` - Groups tests that run on Chrome and Firefox

**Cross-browser testing**:::
On CI, all the functional tests are executed in Chrome by default. To also run a suite against Firefox, assign the `includeFirefox` tag:

["source","js"]
-----------
// on CI the test suite will be run twice: in Chrome and in Firefox
describe('My Cross-browser Test Suite', function () {
  this.tags('includeFirefox');

  it('My First Test');
});
-----------

If the tests do not apply to Firefox, assign the `skipFirefox` tag.

To run tests on Firefox locally, use `config.firefox.js`:

["source","shell"]
-----------
node scripts/functional_test_runner --config test/functional/config.firefox.js
-----------

[discrete]
==== Using the test_user service

Tests should run at the positive security boundary condition, meaning that they should be run with the minimum privileges required (and documented) and not as the superuser.
This prevents the type of regression where additional privileges accidentally become required to perform the same action.

The functional UI tests now default to logging in with a user named `test_user`, and the roles of this user can be changed dynamically without logging in and out.

In order to achieve this a new service was introduced called `createTestUserService` (see `test/common/services/security/test_user.ts`). The purpose of this test user service is to create the roles defined in the test config files and to expose the `setRoles()` and `restoreDefaults()` methods.

An example of how to set the roles is shown below:

`await security.testUser.setRoles(['kibana_user', 'kibana_date_nanos']);`

Here we are setting the `test_user` to have the `kibana_user` role and also a role that grants access to a specific data index (`kibana_date_nanos`).

Tests should normally call `setRoles()` in the `before()` hook and `restoreDefaults()` in the `after()` hook, as in the sketch below.
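
A minimal sketch of that pattern, assuming the suite has loaded the security service with `const security = getService('security');`:

["source","js"]
-----------
describe('My Test Suite', () => {
  before(async () => {
    // give test_user only the privileges these tests need
    await security.testUser.setRoles(['kibana_user', 'kibana_date_nanos']);
  });

  after(async () => {
    // return test_user to its default roles for the next suite
    await security.testUser.restoreDefaults();
  });

  // tests go here
});
-----------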

[discrete]
==== Anatomy of a test file

This annotated example file shows the basic structure every test suite uses. It starts by importing https://github.com/elastic/kibana/tree/main/packages/kbn-expect[`@kbn/expect`] and defining its default export: an anonymous Test Provider. The test provider then destructures the Provider API for the `getService()` and `getPageObjects()` functions. It uses these functions to collect the dependencies of this suite. The rest of the test file will look pretty normal to mocha.js users. `describe()`, `it()`, `before()` and the lot are used to define suites that happen to automate a browser via services and objects of type `PageObject`.

["source","js"]
----
import expect from '@kbn/expect';
// test files must `export default` a function that defines a test suite
export default function ({ getService, getPageObjects }) {

  // most test files will start off by loading some services
  const retry = getService('retry');
  const testSubjects = getService('testSubjects');
  const esArchiver = getService('esArchiver');
  const kibanaServer = getService('kibanaServer');

  // for historical reasons, PageObjects are loaded in a single API call
  // and returned on an object with a key/value for each requested PageObject
  const PageObjects = getPageObjects(['common', 'visualize']);

  // every file must define a top-level suite before defining hooks/tests
  describe('My Test Suite', () => {

    // most suites start with a before hook that navigates to a specific
    // app/page and restores some archives into {es} with esArchiver
    before(async () => {
      await Promise.all([
        // start by clearing Saved Objects from the .kibana index
        kibanaServer.savedObjects.cleanStandardList(),
        // load some basic log data only if the index doesn't exist
        esArchiver.loadIfNeeded('test/functional/fixtures/es_archiver/makelogs')
      ]);
      // go to the page described by `apps.visualize` in the config
      await PageObjects.common.navigateTo('visualize');
    });

    // right after the before() hook definition, add the teardown steps
    // that will tidy up {es} for other test suites
    after(async () => {
      // we clear Kibana Saved Objects but not the makelogs
      // archive because we don't make any changes to it, and subsequent
      // suites could use it if they call `.loadIfNeeded()`.
      await kibanaServer.savedObjects.cleanStandardList();
    });

    // This series of tests illustrates how tests generally verify
    // one step of a larger process and then move on to the next in
    // a new test, each step building on top of the previous
    it('Vis Listing Page is empty');
    it('Create a new vis');
    it('Shows new vis in listing page');
    it('Opens the saved vis');
    it('Respects time filter changes');
    it(...
  });
}
----

[discrete]
[[functional_test_runner_provider_api]]
=== Provider API

The first and only argument to all providers is a Provider API Object. This object can be used to load service/page objects and config/test files.

Within config files the API has the following properties:

[horizontal]
`log`::: An instance of the `ToolingLog` that is ready for use
`readConfigFile(path)`::: Returns a promise that will resolve to a Config instance that provides the values from the config file at `path`

Within service and PageObject Providers the API is:

[horizontal]
`getService(name)`::: Load and return the singleton instance of a service by name
`getPageObjects(names)`::: Load the singleton instances of `PageObject`s and collect them on an object where each name is the key to the singleton instance of that PageObject

Within a test Provider the API is exactly the same as the service provider API but with an additional method:

[horizontal]
`loadTestFile(path)`::: Load the test file at path in place. Use this method to nest suites from other files into a higher-level suite (see the example below)
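
For example, a top-level test file can use `loadTestFile(path)` to compose one suite out of several other files (the file names below are placeholders):

["source","js"]
-----------
export default function ({ loadTestFile }) {
  describe('my app', function () {
    loadTestFile(require.resolve('./feature_one'));
    loadTestFile(require.resolve('./feature_two'));
  });
}
-----------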

[discrete]
=== Service Index

[discrete]
==== Built-in Services

The `FunctionalTestRunner` comes with three built-in services:

**config:**:::
// * Source: {kibana-blob}src/functional_test_runner/lib/config/config.ts[src/functional_test_runner/lib/config/config.ts]
// * Schema: {kibana-blob}src/functional_test_runner/lib/config/schema.ts[src/functional_test_runner/lib/config/schema.ts]
* Use `config.get(path)` to read any value from the config file

**log:**:::
// * Source: {kibana-blob}packages/kbn-dev-utils/src/tooling_log/tooling_log.js[packages/kbn-dev-utils/src/tooling_log/tooling_log.js]
* `ToolingLog` instances are readable streams. The instance provided by this service is automatically piped to stdout by the `FunctionalTestRunner` CLI
* `log.verbose()`, `log.debug()`, `log.info()`, and `log.warning()` all work just like `console.log()` but produce more organized output

**lifecycle:**:::
// * Source: {kibana-blob}src/functional_test_runner/lib/lifecycle.ts[src/functional_test_runner/lib/lifecycle.ts]
* Designed primarily for use in services
* Exposes lifecycle events for basic coordination. Handlers can return a promise and resolve/fail asynchronously
* Phases include: `beforeLoadTests`, `beforeTests`, `beforeEachTest`, `cleanup`
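
As a quick illustration, a test or service might combine the `config` and `log` services like this (the config key is only an example of a value commonly present in FTR config files):

["source","js"]
-----------
const config = getService('config');
const log = getService('log');

// read a value out of the loaded config file
const kbnServerArgs = config.get('kbnTestServer.serverArgs');

// log.debug output only shows up with the --debug or --verbose flags
log.debug('Kibana server args:', kbnServerArgs);
-----------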

[discrete]
==== {kib} Services

The {kib} functional tests define the vast majority of the actual functionality used by tests.

**browser**:::
// * Source: {kibana-blob}test/functional/services/browser.ts[test/functional/services/browser.ts]
* Higher level wrapper for the `remote` service, which exposes available browser actions
* Popular methods:
** `browser.getWindowSize()`
** `browser.refresh()`

**testSubjects:**:::
// * Source: {kibana-blob}test/functional/services/test_subjects.ts[test/functional/services/test_subjects.ts]
* Test subjects are elements that are tagged specifically for selecting from tests
* Use `testSubjects` over CSS selectors when possible
* Usage:
** Tag your test subject with a `data-test-subj` attribute:
+
["source","html"]
-----------
<div id="container">
  <button id="clickMe" data-test-subj="containerButton" />
</div>
-----------
+
** Click this button using the `testSubjects` helper:
+
["source","js"]
-----------
await testSubjects.click('containerButton');
-----------
+
* Popular methods:
** `testSubjects.find(testSubjectSelector)` - Find a test subject in the page; throw if it can't be found after some time
** `testSubjects.click(testSubjectSelector)` - Click a test subject in the page; throw if it can't be found after some time

**find:**:::
// * Source: {kibana-blob}test/functional/services/find.ts[test/functional/services/find.ts]
* Helpers for `remote.findBy*` methods that log and manage timeouts
* Popular methods:
** `find.byCssSelector()`
** `find.allByCssSelector()`

**retry:**:::
// * Source: {kibana-blob}test/common/services/retry/retry.ts[test/common/services/retry/retry.ts]
* Helpers for retrying operations
* Popular methods:
** `retry.try(fn, onFailureBlock)` - Execute `fn` in a loop until it succeeds or the default timeout elapses. The optional `onFailureBlock` is executed before each retry attempt.
** `retry.tryForTime(ms, fn, onFailureBlock)` - Execute `fn` in a loop until it succeeds or `ms` milliseconds elapses. The optional `onFailureBlock` is executed before each retry attempt.

**kibanaServer:**:::
// * Source: {kibana-blob}test/common/services/kibana_server/kibana_server.js[test/common/services/kibana_server/kibana_server.js]
* Helpers for interacting with {kib}'s server
* Commonly used methods:
** `kibanaServer.uiSettings.update()`
** `kibanaServer.version.get()`
** `kibanaServer.status.getOverallState()`

**esArchiver:**:::
// * Source: {kibana-blob}test/common/services/es_archiver.ts[test/common/services/es_archiver.ts]
* Load/unload archives created with the `esArchiver`
* Popular methods:
** `esArchiver.load(path)`
** `esArchiver.loadIfNeeded(path)`
** `esArchiver.unload(path)`

A full list of the services used in functional tests can be found here: {kibana-blob}test/functional/services[test/functional/services]


**Low-level utilities:**:::
* es
// ** Source: {kibana-blob}test/common/services/es.ts[test/common/services/es.ts]
** {es} client
** Higher level options: `kibanaServer.uiSettings` or `esArchiver`
* remote
// ** Source: {kibana-blob}test/functional/services/remote/remote.ts[test/functional/services/remote/remote.ts]
** Instance of the https://seleniumhq.github.io/selenium/docs/api/javascript/module/selenium-webdriver/index_exports_WebDriver.html[WebDriver] class
** Responsible for all communication with the browser
** To perform browser actions, use the `remote` service
** For searching for and manipulating DOM elements, use the `testSubjects` and `find` services
** See the https://seleniumhq.github.io/selenium/docs/api/javascript/[selenium-webdriver docs] for the full API.
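
As a small illustration of how these services compose inside a test, here is a sketch that retries a click until it succeeds (the `mySaveButton` test subject is hypothetical):

["source","js"]
-----------
const retry = getService('retry');
const testSubjects = getService('testSubjects');

// retry.try loops until the callback stops throwing or the default timeout elapses
await retry.try(async () => {
  // throws if the test subject can't be found in time, triggering another attempt
  await testSubjects.click('mySaveButton');
});
-----------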

[discrete]
==== Custom Services

Services are intentionally generic. They can be literally anything (even nothing). Some services have helpers for interacting with specific types of UI elements, like `pointSeriesVis`, and others are more foundational, like `log` or `config`. Whenever you want to provide some functionality in a reusable package, consider making a custom service.

To create a custom service `somethingUseful`:

* Create a `test/functional/services/something_useful.js` file that looks like this:
+
["source","js"]
-----------
// Services are defined by Provider functions that receive the ServiceProviderAPI
export function SomethingUsefulProvider({ getService }) {
  const log = getService('log');

  class SomethingUseful {
    doSomething() {
    }
  }
  return new SomethingUseful();
}
-----------
+
* Re-export your provider from `services/index.js`
* Import it into `src/functional/config.base.js` and add it to the services config:
+
["source","js"]
-----------
import { SomethingUsefulProvider } from './services';

export default function () {
  return {
    // … truncated ...
    services: {
      somethingUseful: SomethingUsefulProvider
    }
  }
}
-----------
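
As noted in the Definitions section above, services can also be written as subclasses of `FtrService`. A minimal sketch of the same service in that style, assuming the `FtrService` base class is available from the local `ftr_provider_context` module, exposes the provider API as `this.ctx`, and that the class is registered in the `services` config in place of the provider function:

["source","js"]
-----------
import { FtrService } from '../ftr_provider_context';

export class SomethingUsefulService extends FtrService {
  // other services are loaded through the provider API exposed on `this.ctx`
  log = this.ctx.getService('log');

  doSomething() {
    this.log.debug('doing something useful');
  }
}
-----------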

[discrete]
=== PageObjects

The purpose for each PageObject is pretty self-explanatory. The visualize PageObject provides helpers for interacting with the visualize app, dashboard is the same for the dashboard app, and so on.

One exception is the "common" PageObject. A holdover from the intern implementation, the common PageObject is a collection of helpers useful across pages. Now that we have shareable services, and those services can be shared with other `FunctionalTestRunner` configurations, we will continue to move functionality out of the common PageObject and into services.

Please add new methods to existing or new services rather than further expanding the CommonPage class.

[discrete]
=== Gotchas

Remember that you can’t run an individual test in the file (`it` block) because the whole `describe` needs to be run in order. There should only be one top level `describe` in a file.

[discrete]
==== Functional Test Timing

Another important gotcha is writing stable tests by being mindful of timing. All methods on `remote` run asynchronously. It’s better to write interactions that wait for changes on the UI to appear before moving on to the next step.

For example, rather than writing an interaction that simply clicks a button, write an interaction with a higher-level purpose in mind:

Bad example: `PageObjects.app.clickButton()`

["source","js"]
-----------
class AppPage {
  // what can people who call this method expect from the
  // UI after the promise resolves? Since the reaction to most
  // clicks is asynchronous the behavior is dependent on timing
  // and likely to cause tests that fail unexpectedly
  async clickButton () {
    await testSubjects.click('menuButton');
  }
}
-----------

Good example: `PageObjects.app.openMenu()`

["source","js"]
-----------
class AppPage {
  // unlike `clickButton()`, callers of `openMenu()` know
  // the state that the UI will be in before they move on to
  // the next step
  async openMenu () {
    await testSubjects.click('menuButton');
    await testSubjects.exists('menu');
  }
}
-----------

Writing in this way will ensure your test timings are not flaky or based on assumptions about UI updates after interactions.

[discrete]
=== Debugging

From the command line run:

["source","shell"]
-----------
node --inspect-brk scripts/functional_test_runner
-----------

This prints out a URL that you can visit in Chrome and debug your functional tests in the browser.

You can also see additional logs in the terminal by running the `FunctionalTestRunner` with the `--debug` or `--verbose` flag. Add more logs with statements in your tests like:

["source","js"]
-----------
// load the log service
const log = getService('log');

// log.debug only writes when using the `--debug` or `--verbose` flag.
log.debug('done clicking menu');
-----------

[discrete]
=== MacOS testing performance tip

macOS users on a machine with a discrete graphics card may see significant speedups (up to 2x) when running tests by changing their terminal emulator's GPU settings. In iTerm2:
* Open Preferences (Command + ,)
* In the General tab, under the "Magic" section, ensure "GPU rendering" is checked
* Open "Advanced GPU Settings..."
* Uncheck the "Prefer integrated to discrete GPU" option
* Restart iTerm

[discrete]
== Flaky Test Runner

If your functional tests are flaky then the Operations team might skip them and ask that you make them less flaky before enabling them once again. This process usually involves looking at the failures which are logged on the relevant GitHub issue and finding incorrect assumptions or conditions which need to be awaited at some point in the test. To determine whether your changes make the test fail less often, you can run your tests in the Flaky Test Runner. This tool runs up to 500 executions of a specific ciGroup. To start a build of the Flaky Test Runner, create a PR with your changes, then visit https://ci-stats.kibana.dev/trigger_flaky_test_runner, select your PR, choose the CI Group that your tests are in, and trigger the build.

This will take you to Buildkite where your build will run and tell you if it failed in any execution.

A flaky test may only fail once in 1000 runs, so keep this in mind and make sure you use enough executions to really prove that a test isn't flaky anymore.