## Summary
Follow-up to #132590.
Part of #181111.
This updates the developer examples for `@kbn/ml-response-stream` to
include a variant with a full Redux Toolkit setup. To support this,
`@kbn/ml-response-stream` now includes a generic slice `streamSlice`
that can be reused, so the actions that get streamed via NDJSON can be
shared across server and client.
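To illustrate the idea, here is a minimal sketch of such a shared Redux Toolkit slice; the slice name, state shape and action names are made up for this example and are not the actual package API:

```ts
import { createSlice, type PayloadAction } from '@reduxjs/toolkit';

interface StreamState {
  progress: number;
  entities: Record<string, number>;
}

const initialState: StreamState = { progress: 0, entities: {} };

// Hypothetical slice: defined once, imported on the server to create the
// NDJSON action lines and on the client to reduce them into store state.
export const exampleStreamSlice = createSlice({
  name: 'exampleStream',
  initialState,
  reducers: {
    updateProgress: (state, action: PayloadAction<number>) => {
      state.progress = action.payload;
    },
    addToEntity: (state, action: PayloadAction<{ entity: string; value: number }>) => {
      const { entity, value } = action.payload;
      state.entities[entity] = (state.entities[entity] ?? 0) + value;
    },
  },
});

export const { updateProgress, addToEntity } = exampleStreamSlice.actions;
```

Because both sides import the same action creators, the serialized NDJSON lines pushed by the server match exactly what the client-side store expects.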
Functional tests for the examples were added as well. To run them, you
can use the following commands:
```
# Start the test server (can continue running)
node scripts/functional_tests_server.js --config test/examples/config.js
# Start a test run
node scripts/functional_test_runner.js --config test/examples/config.js
```
### Checklist
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
- [x] This was checked for breaking API changes and was [labeled
appropriately](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)
Improves the `flushFix` behaviour for Log Rate Analysis. Previously the
setting added an additional ~4KB dummy payload to each object returned
as NDJSON. For the dataset used for testing this, that resulted in an
overall response payload of ~900KB; for comparison, without `flushFix`
the response size was ~40KB in this case. This PR changes the behaviour
to only send a dummy payload every 500ms, and only if the real data sent
in the previous 500ms wasn't bigger than 4KB. Depending on the speed of
the response, this brings the overall response payload down to ~300KB
(Cloud, uncached), ~150KB (Cloud, cached) or even ~70KB (local cluster)
for the same dataset.
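A sketch of the throttled flush-fix idea (the helper names and the exact buffer handling are assumptions for illustration, not the package internals):

```ts
// Assumed push helper that writes one NDJSON line to the response stream.
declare function push(ndjsonLine: string): void;

const FLUSH_PAYLOAD_SIZE = 4096; // ~4KB of padding to defeat proxy buffering
const FLUSH_INTERVAL_MS = 500;

let bytesSinceLastFlushCheck = 0;

// Called for every real NDJSON line pushed to the stream.
function trackRealPayload(ndjsonLine: string) {
  bytesSinceLastFlushCheck += Buffer.byteLength(ndjsonLine, 'utf8');
}

// Instead of padding every single object, check every 500ms and only emit a
// dummy payload when not enough real data went through in that window.
const flushInterval = setInterval(() => {
  if (bytesSinceLastFlushCheck < FLUSH_PAYLOAD_SIZE) {
    push(`{"flushPayload":"${'0'.repeat(FLUSH_PAYLOAD_SIZE)}"}\n`);
  }
  bytesSinceLastFlushCheck = 0;
}, FLUSH_INTERVAL_MS);

// clearInterval(flushInterval) once the stream ends.
```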
Migrates deprecated EUI components in the response stream developer
example plugin. Also adds a breadcrumb to the example so users can navigate back to the overview page.
- Originally, Kibana's `http` service did not support receiving streams,
which is why plain `fetch` was used for this. That was fixed in
#158678, so this PR updates the streaming helpers to use Kibana's `http`
service from now on.
- The PR also breaks out the response stream code into its own package
and restructures it to separate client- and server-side code. This brings
down the `aiops` bundle size by `~300KB`! 🥳
- The approach to client-side throttling/buffering was also revamped:
throttling inside the generator function had the problem that it always
waited for the timeout. The buffering was therefore removed from
`fetchStream`; instead, `useThrottle` from `react-use` is applied to the
reduced `data` in `useFetchStream` (see the sketch after this list).
Loading Log Rate Analysis results got a lot snappier with this update!
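A simplified sketch of that hook setup; the hook signature shown here is illustrative, only `useThrottle` from `react-use` is taken as given:

```ts
import { useReducer, type Reducer } from 'react';
import { useThrottle } from 'react-use';

// Hypothetical shape of a fetch-stream hook: the stream dispatches every
// received action immediately, and only the value handed to the UI gets
// throttled, so no data is buffered inside the generator anymore.
export function useFetchStreamExample<S, A>(reducer: Reducer<S, A>, initialState: S) {
  const [data, dispatch] = useReducer(reducer, initialState);

  // Re-render consumers at most every 100ms with the latest reduced state.
  const throttledData = useThrottle(data, 100);

  return { data: throttledData, dispatch };
}
```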
- Adds `compressResponse` and `flushFix` flags to the request body to be able to overrule compression settings inferred from headers.
- Updates the developer examples with a toggle to run requests with compression enabled or disabled.
- Adds support for backpressure handling for response streams (see the sketch after this list).
- The backpressure update includes a fix where uncompressed streams would never start streaming to the client.
- The analysis endpoint for Explain Log Rate Spikes now includes a ping every 10 seconds to keep the stream alive.
- Integration tests were updated to test both uncompressed and compressed streaming.
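A minimal sketch of what backpressure-aware writes look like on the server, using standard Node.js stream APIs (the helper name is made up):

```ts
import { once } from 'events';
import type { Writable } from 'stream';

// Respect the return value of write(): once the internal buffer is full,
// pause pushing until the stream emits 'drain', so slow clients don't cause
// unbounded memory growth on the server.
async function pushWithBackpressure(stream: Writable, ndjsonLine: string) {
  if (!stream.write(ndjsonLine)) {
    await once(stream, 'drain');
  }
}
```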
- Adds a check to the aiops API endpoints to only allow requests with an active platinum license.
- Adds integration tests for the basic license, where the endpoints should return permission denied.
- Improved error handling:
  - Low-level errors (like an invalid argument pushed to a stream) are now logged to the Kibana server's console. Because of how HTTP streams work, we cannot really emit a useful error to an already running stream on the client, so the stream will just abort while the Kibana server logs an error.
  - Higher-level errors on the application level (for example when the index to run the analysis on turns out not to exist) are now pushed to the stream as an error type action so the UI can be updated accordingly (see the sketch below). Note that this PR only updates the API and the corresponding tests to support this; the UI doesn't make use of it yet.
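A sketch of what such an error action could look like (the action name and shape are illustrative, not the actual API):

```ts
// Illustrative error action shared between server and client.
interface AddErrorAction {
  type: 'add_error';
  payload: string;
}

export function addErrorAction(error: string): AddErrorAction {
  return { type: 'add_error', payload: error };
}

// Server side, e.g. when the target index does not exist:
// push(addErrorAction(`The index '${index}' does not exist.`));
//
// Client side, the reducer can store the error and the UI can render it.
```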
This creates a `response_stream` plugin in Kibana's `/examples` section. The plugin demonstrates API endpoints that can stream data chunks over a single request, with support for gzip compression; gzip streams get decompressed natively by browsers. The plugin covers two use cases to get started: streaming a raw string, as well as a more complex example that streams Redux-like actions to the client which update React state via `useReducer()`.
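To sketch the server-side compression idea (Node.js `zlib` is taken as given; the route handler shape in the comment is a simplified assumption, not the example's actual code):

```ts
import { createGzip } from 'zlib';
import { PassThrough } from 'stream';

// Write NDJSON lines into a pass-through stream and pipe it through gzip; with
// `content-encoding: gzip` set, browsers decompress the chunks transparently.
const source = new PassThrough();
const compressed = source.pipe(createGzip());

// Simplified idea of returning the compressed stream as the response body:
// return response.ok({
//   body: compressed,
//   headers: {
//     'content-type': 'application/x-ndjson',
//     'content-encoding': 'gzip',
//   },
// });

source.write(JSON.stringify({ type: 'update_progress', payload: 1 }) + '\n');
source.end();
```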