- Originally Kibana's `http` service did not support receiving streams, which is why we used plain `fetch` for this. This has been fixed in #158678, so this PR updates the streaming helpers to use Kibana's `http` service from now on (see the first sketch after this list).
- The PR also breaks out the response stream code into its own package and restructures it to separate client- and server-side code. This brings down the `aiops` bundle size by `~300KB`! 🥳
- The approach to client-side throttling/buffering was also revamped: throttling inside the generator function had an issue where it always waited for the full timeout. The buffering is now removed from `fetchStream`; instead, `useThrottle` from `react-use` is applied to the reduced `data` in `useFetchStream` (see the second sketch after this list). Loading log rate analysis results got a lot snappier with this update!
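As context for the first bullet, here is a rough sketch of reading a streamed response via the `http` service. The `asResponse`/`rawResponse` option names reflect my reading of #158678 and are assumptions, not necessarily the final helper code:

```ts
import type { HttpSetup } from '@kbn/core/public';

// Sketch only: fetch via Kibana's http service without consuming the
// body, then read the web ReadableStream chunk by chunk.
export async function* streamFetch(http: HttpSetup, endpoint: string) {
  const response = await http.post(endpoint, {
    asResponse: true, // resolve with the full HttpResponse object
    rawResponse: true, // assumption: leave the body unconsumed for streaming
  });
  const reader = response.response?.body?.getReader();
  if (!reader) return;
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // `stream: true` handles multi-byte characters split across chunks.
    yield decoder.decode(value, { stream: true });
  }
}
```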
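And a minimal sketch of the new throttling pattern from the last bullet (names and the 100ms interval are simplified for illustration, not the package's exact implementation):

```tsx
import { useReducer } from 'react';
import { useThrottle } from 'react-use';

// Sketch: throttle the reduced stream state instead of buffering inside
// the generator function.
export function useFetchStreamSketch<T, A>(
  reducer: (state: T, action: A) => T,
  initialState: T
) {
  const [data, dispatch] = useReducer(reducer, initialState);
  // Re-renders happen at most every 100ms, but unlike throttling inside
  // the generator, the latest state is never held back by a full timeout.
  const throttledData = useThrottle(data, 100);
  return { data: throttledData, dispatch };
}
```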
Adds versioning to the AIOps API.
Versions are added to the server-side routes and to the client-side
functions that call them.
Updates the API tests to add the API version to the request headers.
The single API endpoint is already internal and has now been given
version '1'.
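For illustration, the route registration roughly follows Kibana's versioned router pattern; the handler and validation here are placeholders, not the actual aiops implementation:

```ts
import type { IRouter } from '@kbn/core/server';

export function defineExplainLogRateSpikesRoute(router: IRouter) {
  router.versioned
    .post({
      path: '/internal/aiops/explain_log_rate_spikes',
      access: 'internal',
    })
    .addVersion(
      {
        version: '1',
        validate: false, // the real route validates its request body
      },
      async (context, request, response) => {
        // Placeholder handler; the real one sets up the response stream.
        return response.ok();
      }
    );
}
```

On the client side, the corresponding `http` calls pass the matching `version`, which the API tests mirror via the version request header.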
**Internal APIs**
`/internal/aiops/explain_log_rate_spikes`
- Adds `compressResponse` and `flushFix` flags to the request body to allow overriding the compression settings inferred from the request headers (see the compression sketch after this list).
- Updates the developer examples with a toggle to run requests with compression enabled or disabled.
- Adds support for backpressure handling for response streams (see the backpressure sketch after this list).
- The backpressure update includes a fix for an issue where uncompressed streams would never start streaming to the client.
- The analysis endpoint for Explain Log Rate Spikes now includes a ping every 10 seconds to keep the stream alive.
- Integration tests were updated to test both uncompressed and compressed streaming.
- Adds a check to the aiops API endpoints to only allow requests with an active platinum license.
- Adds integration tests for the basic license, where the endpoints should return permission denied.
- Improved error handling:
  - Low-level errors (like an invalid argument pushed to a stream) are now logged to the Kibana server's console. Because of the way HTTP streams work, we cannot emit a useful error to a stream that is already running to the client, so the stream will just abort, but the Kibana server will log an error.
  - Higher-level errors on the application level (like finding out that the index to run the analysis on does not exist) are now pushed to the stream as an error type action so we can update the UI accordingly (see the action-shape sketch below). Note this PR only updates the API and the corresponding tests to support this; the UI doesn't make use of it yet.
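The compression override mentioned above can be pictured like this sketch; the decision logic is illustrative, not the exact server code:

```ts
// Sketch: an explicit `compressResponse` flag in the request body wins
// over whatever the `accept-encoding` header would imply.
function shouldCompressResponse(
  acceptEncoding: string | undefined,
  compressResponse?: boolean
): boolean {
  if (compressResponse !== undefined) {
    return compressResponse;
  }
  return acceptEncoding?.includes('gzip') ?? false;
}
```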
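The backpressure handling boils down to respecting Node's `stream.write()` return value, roughly like this sketch (the actual helper lives in the response stream code):

```ts
import { PassThrough } from 'stream';

// Sketch: if write() returns false the internal buffer is full, so
// queue further chunks and resume once the stream emits 'drain',
// instead of letting a slow client cause unbounded buffering.
export function createBackpressuredPush(stream: PassThrough) {
  const queue: string[] = [];
  let waitingForDrain = false;

  const flush = () => {
    while (queue.length > 0) {
      const ok = stream.write(queue.shift()!);
      if (!ok) {
        waitingForDrain = true;
        stream.once('drain', () => {
          waitingForDrain = false;
          flush();
        });
        return;
      }
    }
  };

  return (chunk: string) => {
    queue.push(chunk);
    if (!waitingForDrain) {
      flush();
    }
  };
}
```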
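The error type action pushed to the stream can be pictured like this; the exact action shape is an assumption based on the description above:

```ts
// Sketch: application-level errors travel down the stream as a
// Redux-like action so the client reducer can surface them in the UI.
interface ApiActionError {
  type: 'error';
  payload: string; // e.g. the message that an index does not exist
}

function addErrorAction(message: string): ApiActionError {
  return { type: 'error', payload: message };
}
```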
This creates a `response_stream` plugin in the Kibana `/examples` section. The plugin demonstrates API endpoints that can stream chunks of data with a single request, with gzip compression support; gzip streams are decompressed natively by browsers. The plugin demonstrates two use cases to get started: streaming a raw string, and a more complex example that streams Redux-like actions to the client, which update React state via `useReducer()` (sketched below).
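A condensed sketch of the Redux-like actions use case; the action names and the newline-delimited JSON protocol are illustrative, not the plugin's exact code:

```tsx
import { useReducer } from 'react';

// Sketch: each streamed NDJSON line is parsed into an action and
// dispatched to a plain reducer that owns the accumulated state.
interface AddChunkAction {
  type: 'add_chunk';
  payload: string;
}

function reducer(state: string[], action: AddChunkAction): string[] {
  switch (action.type) {
    case 'add_chunk':
      return [...state, action.payload];
    default:
      return state;
  }
}

export function useStreamedChunks() {
  const [chunks, dispatch] = useReducer(reducer, []);
  // For each line received from the stream, the caller would run:
  // dispatch(JSON.parse(line) as AddChunkAction);
  return { chunks, dispatch };
}
```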