## Summary
Resolves https://github.com/elastic/kibana/issues/188043
This PR adds a new connector that integrates with Elastic
Inference Endpoints via the [Inference
APIs](https://www.elastic.co/guide/en/elasticsearch/reference/current/inference-apis.html).
The lifecycle of the Inference Endpoint is managed by handlers registered by the
connector:
- `preSaveHook` -
[creates](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-inference-api.html)
a new Inference Endpoint in connector create mode (`isEdit === false`)
and
[deletes](https://www.elastic.co/guide/en/elasticsearch/reference/current/delete-inference-api.html)+[creates](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-inference-api.html)
it in connector edit mode (`isEdit === true`)
- `postSaveHook` - checks whether the connector SO was created/updated and, if
not, removes the Inference Endpoint created in `preSaveHook`
- `postDeleteHook` -
[deletes](https://www.elastic.co/guide/en/elasticsearch/reference/current/delete-inference-api.html)
the Inference Endpoint when the connector is deleted.
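
For illustration, here is a minimal sketch of what the hook logic does, assuming simplified hook signatures, a simplified config shape, and raw calls to the Inference REST APIs; the hook names come from this PR, everything else is illustrative:

```ts
// Sketch only: hook names are from this PR; signatures, config shape, and the
// direct transport calls are simplified assumptions for illustration.
import type { ElasticsearchClient } from '@kbn/core/server';

interface InferenceConnectorConfig {
  provider: string; // e.g. 'openai'
  taskType: string; // e.g. 'completion', 'text_embedding'
  inferenceId: string; // id of the Inference Endpoint backing this connector
  providerConfig: Record<string, unknown>;
}

// preSaveHook: create the Inference Endpoint on create; delete + re-create on edit.
async function preSaveHook(
  esClient: ElasticsearchClient,
  config: InferenceConnectorConfig,
  secrets: Record<string, unknown>,
  isEdit: boolean
): Promise<void> {
  const path = `/_inference/${config.taskType}/${config.inferenceId}`;
  if (isEdit) {
    // Endpoints are not updated in place: drop the old endpoint before re-creating it.
    await esClient.transport.request({ method: 'DELETE', path });
  }
  await esClient.transport.request({
    method: 'PUT',
    path,
    body: {
      service: config.provider,
      service_settings: { ...config.providerConfig, ...secrets },
    },
  });
}

// postDeleteHook: remove the Inference Endpoint when the connector itself is deleted.
async function postDeleteHook(
  esClient: ElasticsearchClient,
  config: InferenceConnectorConfig
): Promise<void> {
  await esClient.transport.request({
    method: 'DELETE',
    path: `/_inference/${config.taskType}/${config.inferenceId}`,
  });
}
```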
In Kibana Stack Management Connectors, it is represented by a new
card (with a Technical preview badge):
<img width="1261" alt="Screenshot 2024-09-27 at 2 11 12 PM"
src="https://github.com/user-attachments/assets/dcbcce1f-06e7-4d08-8b77-0ba4105354f8">
To simplify future integration with AI Assistants, the connector UI
consists of two main parts: the provider selector and the required
provider settings, which are always displayed,
<img width="862" alt="Screenshot 2024-10-07 at 7 59 09 AM"
src="https://github.com/user-attachments/assets/87bae493-c642-479e-b28f-6150354608dd">
and Additional options, which contains the optional provider settings and
the Task Type configuration:
<img width="861" alt="Screenshot 2024-10-07 at 8 00 15 AM"
src="https://github.com/user-attachments/assets/2341c034-6198-4731-8ce7-e22e6c6fb20f">
The subActions correspond to the different task types the Inference API
supports. Each task type has its own Inference Perform params.
Currently added:
- completion & completionStream
- rerank
- text_embedding
- sparse_embedding
Follow-up work:
1. Collapse/expand Additional options when the connector flyout/modal
has an AI Assistant as its context (passed through the extended context
implementation at the connector framework level)
2. Add support for additional params for the Completion subAction, so
functions can be passed
3. Add support for a token usage Dashboard once the Inference API
includes the used token count in the response
4. Add functionality and UX for migrating from the existing specific AI
connectors to the Inference connector with the proper provider and
completion task
5. Integrate the connector with the AI Assistants
---------
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: István Zoltán Szabó <istvan.szabo@elastic.co>
Co-authored-by: Liam Thompson <32779855+leemthompo@users.noreply.github.com>
Co-authored-by: Steph Milovic <stephanie.milovic@elastic.co>