Elastic License Functionality
This directory tree contains files subject to the Elastic License. The files subject to the Elastic License are grouped in this directory to clearly separate them from files licensed under the Apache License 2.0.
Development
By default, Kibana will run with X-Pack installed as mentioned in the contributing guide.
Elasticsearch will run with a basic license. To run with a trial license, including security, you can specify that with the yarn es command.
Example: yarn es snapshot --license trial --password changeme
By default, this will also set the password for native realm accounts to the password provided (changeme by default). This includes that of the kibana_system user, which elasticsearch.username defaults to in development. If you wish to specify a password for a given native realm account, you can do that like so: --password.kibana_system=notsecure
Testing
Running specific tests
| Test runner | Test location | Runner command (working directory is kibana/x-pack) |
|---|---|---|
| Jest | `x-pack/**/*.test.js`, `x-pack/**/*.test.ts` | `cd x-pack && node scripts/jest -t regexp [test path]` |
| Functional | `x-pack/test/*integration/**/config.js`, `x-pack/test/*functional/config.js`, `x-pack/test/accessibility/config.js` | `node scripts/functional_tests_server --config x-pack/test/[directory]/config.js`, then `node scripts/functional_test_runner --config x-pack/test/[directory]/config.js --grep=regexp` |
Examples:
- Run the jest test case whose description matches 'filtering should skip values of null':
cd x-pack && yarn test:jest -t 'filtering should skip values of null' plugins/ml/public/application/explorer/explorer_charts/explorer_charts_container_service.test.js
- Run the x-pack api integration test case whose description matches the given string:
node scripts/functional_tests_server --config x-pack/test/api_integration/config.ts
node scripts/functional_test_runner --config x-pack/test/api_integration/config.ts --grep='apis Monitoring Beats list with restarted beat instance should load multiple clusters'
In addition to providing a regular expression argument, specific tests can also be run by appending .only to an it or describe function block, e.g. changing describe( to describe.only(.
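For example, a focused Jest test might look like the following (the file name and assertions here are purely illustrative, not taken from an actual suite):

```ts
// explorer_charts.test.ts (hypothetical) — with .only, Jest runs just this block
describe.only('filtering', () => {
  it('should skip values of null', () => {
    const values = [1, null, 2].filter((value) => value !== null);
    expect(values).toEqual([1, 2]);
  });
});
```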
Running all tests
You can run unit tests by running:
yarn test
If you want to run tests only for a specific plugin (to save some time), you can run:
yarn test --plugins <plugin>[,<plugin>]* # where <plugin> is "reporting", etc.
Running server unit tests
You can run mocha unit tests by running:
yarn test:mocha
Running functional tests
For more info, see the Elastic functional test development guide.
The functional UI tests, the API integration tests, and the SAML API integration tests are all run against a live browser, Kibana, and Elasticsearch install. Each set of tests is specified with a unique config that describes how to start the Elasticsearch server, the Kibana server, and what tests to run against them. The sets of tests that exist today are functional UI tests (specified by this config), API integration tests (specified by this config), and SAML API integration tests (specified by this config).
The script runs all sets of tests sequentially like so:
- builds Elasticsearch and X-Pack
- runs Elasticsearch with X-Pack
- starts up the Kibana server with X-Pack
- runs the functional UI tests against those servers
- tears down the servers
- repeats the same process for the API and SAML API integration test configs.
To do all of this in a single command run:
node scripts/functional_tests
Developing functional UI tests
If you are developing functional tests then you probably don't want to rebuild Elasticsearch and wait for all that setup on every test run, so instead use this command to build and start just the Elasticsearch and Kibana servers:
node scripts/functional_tests_server
After the servers are started, open a new terminal and run this command to run just the tests (without tearing down Elasticsearch or Kibana):
node scripts/functional_test_runner
For both of the above commands, it's crucial that you pass in --config to specify the same config file to both commands. This makes sure that the right tests will run against the right servers. Typically a set of tests and server configuration go together.
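For example, to develop against the functional UI suite, start the servers and the runner with the same config (the path below is shown for illustration; use whichever suite you are working on):
node scripts/functional_tests_server --config x-pack/test/functional/config.js
node scripts/functional_test_runner --config x-pack/test/functional/config.js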
Read more about how the scripts work here.
For a deeper dive, read more about the way functional tests and servers work here.
Running API integration tests
API integration tests are run with a unique setup usually without UI assets built for the Kibana server.
API integration tests are intended to test only the programmatic API exposed by Kibana. There is no need to run a browser and simulate user actions, which significantly reduces execution time. In addition, the configuration for API integration tests typically sets optimize.enabled=false for Kibana because UI assets are usually not needed for these tests.
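As a rough illustration of how that flag gets applied (assuming the usual pattern of extending a shared base config; the base path and option names below are assumptions for this sketch, not copied from a specific file), a config can pass it through the Kibana test server args:

```ts
// config.ts — illustrative sketch only, not an actual x-pack config file
export default async function ({
  readConfigFile,
}: {
  readConfigFile: (path: string) => Promise<any>;
}) {
  // Extend a shared base config (this path is an assumption for the sketch).
  const baseConfig = await readConfigFile(require.resolve('../api_integration/config'));

  return {
    ...baseConfig.getAll(),
    kbnTestServer: {
      ...baseConfig.get('kbnTestServer'),
      serverArgs: [
        ...baseConfig.get('kbnTestServer.serverArgs'),
        // No browser is involved, so skip building UI assets.
        '--optimize.enabled=false',
      ],
    },
  };
}
```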
To run only the API integration tests:
node scripts/functional_tests --config test/api_integration/config
Running SAML API integration tests
We also have SAML API integration tests which set up Elasticsearch and Kibana with SAML support. Run only API integration tests with SAML enabled like so:
node scripts/functional_tests --config test/saml_api_integration/config
Running Jest integration tests
Jest integration tests can be used to test behavior with Elasticsearch and the Kibana server.
node scripts/jest_integration
An example test exists at test_utils/jest/integration_tests/example_integration.test.ts
Running Reporting functional tests
See here for more information on running reporting tests.