[DOCS] Playground updates for 8.15.0 (#188842)

- Updated/added screenshots
- Updated text per UI changes
- Introduced separate chat and query modes with descriptions and updated
interface images
- Added "View and download Python code" section with screenshot of new
button
- Updated "Balancing cost and latency" section title to include result
quality


### [URL preview](https://kibana_bk_188842.docs-preview.app.elstc.co/guide/en/kibana/master/playground.html)
This commit is contained in:
Liam Thompson 2024-07-24 17:07:27 +01:00 committed by GitHub
parent 9422ef9977
commit 58c4be1d2e
10 changed files with 62 additions and 23 deletions

(Binary image diffs not shown: one screenshot updated from 234 KiB to 216 KiB; new images added at 7.2 KiB, 122 KiB, 164 KiB, and 9 KiB; images of 116 KiB and 150 KiB removed.)

View file

@@ -93,6 +93,9 @@ a|
 [[playground-getting-started]]
 == Getting started
+[.screenshot]
+image::get-started.png[width=600]
 [float]
 [[playground-getting-started-connect]]
 === Connect to LLM provider
@@ -100,7 +103,7 @@ a|
 To get started with {x}, you need to create a <<action-types,connector>> for your LLM provider.
 Follow these steps on the {x} landing page:
-. Under *Connect to LLM*, click *Create connector*.
+. Under *Connect to an LLM*, click *Create connector*.
 . Select your *LLM provider*.
 . *Name* your connector.
 . Select a *URL endpoint* (or use the default).
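As an aside, connectors can also be created programmatically through the Kibana HTTP API rather than the UI. The sketch below only builds a hypothetical request body: the connector type id, config keys, endpoint URL, and path are assumptions to verify against the connectors API documentation for your provider.

```python
# Hypothetical connector-creation request body; the ".gen-ai" type id,
# "apiUrl" config key, and endpoint are assumptions for an OpenAI-style
# connector, not values confirmed by this page.
import json

body = {
    "name": "my-llm-connector",  # illustrative connector name
    "connector_type_id": ".gen-ai",  # assumed OpenAI connector type
    "config": {"apiUrl": "https://api.openai.com/v1/chat/completions"},
    "secrets": {"apiKey": "<api-key>"},  # never commit real keys
}
print(json.dumps(body, indent=2))
# POST this body to /api/actions/connector with the kbn-xsrf header set.
```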
@@ -108,7 +111,7 @@ Follow these steps on the {x} landing page:
 [TIP]
 ====
-If you need to update a connector, or add a new one, click the wrench button (🔧) under *Model settings*.
+If you need to update a connector, or add a new one, click the 🔧 *Manage* button beside *Model settings*.
 ====
[float]
@@ -149,22 +152,48 @@ POST /_bulk
 We've also provided some Jupyter notebooks to easily ingest sample data into {es}.
 Find these in the https://github.com/elastic/elasticsearch-labs/blob/main/notebooks/ingestion-and-chunking[elasticsearch-labs] repository.
 These notebooks use the official {es} Python client.
-// TODO: [The above link will be broken until https://github.com/elastic/elasticsearch-labs/pull/232 is merged]
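The `POST /_bulk` flow those notebooks automate can be sketched with only the standard library: the bulk API takes newline-delimited JSON, alternating an action line with a document line. The index name and documents below are illustrative placeholders.

```python
# Build the NDJSON body for POST /_bulk; index name and docs are
# illustrative, not taken from the docs above.
import json

docs = [
    {"name": "Snow Crash", "author": "Neal Stephenson"},
    {"name": "The Left Hand of Darkness", "author": "Ursula K. Le Guin"},
]

def bulk_body(index, documents):
    """Interleave one action line and one source line per document."""
    lines = []
    for doc in documents:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # a bulk body must end with a newline

print(bulk_body("books", docs))
```

In practice the notebooks use the Python client, whose `helpers.bulk` builds an equivalent body for you.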
 [float]
 [[playground-getting-started-index]]
 === Select {es} indices
 Once you've connected to your LLM provider, it's time to choose the data you want to search.
 Follow the steps under *Select indices*:
-. Select one or more {es} indices under *Add index*.
-. Click *Start* to launch the chat interface.
-+
+. Click *Add data sources*.
+. Select one or more {es} indices.
+. Click *Save and continue* to launch the chat interface.
 [TIP]
 ====
 You can always add or remove indices later by selecting the *Data* button from the main {x} UI.
 [.screenshot]
-image::select-indices.png[width=400]
+image::images/data-button.png[width=100]
 ====
-Learn more about the underlying {es} queries used to search your data in <<playground-query>>.
+[float]
+[[playground-getting-started-chat-query-modes]]
+=== Chat and query modes
+Since 8.15.0 (and earlier for {es} Serverless), the main {x} UI has two modes:
+* *Chat mode*: The default mode, where you can chat with your data via the LLM.
+* *Query mode*: View and modify the {es} query generated by the chat interface.
+The *chat mode* is selected when you first set up your {x} instance.
+[.screenshot]
+image::images/chat-interface.png[width=700]
+To switch to *query mode*, select *Query* from the main UI.
+[.screenshot]
+image::images/query-interface.png[width=700]
+[TIP]
+====
+Learn more about the underlying {es} queries used to search your data in <<playground-query>>.
+====
 [float]
 [[playground-getting-started-setup-chat]]
@@ -172,9 +201,6 @@ Learn more about the underlying {es} queries used to search your data in <<playg
 You can start chatting with your data immediately, but you might want to tweak some defaults first.
-[.screenshot]
-image::chat-interface.png[]
 You can adjust the following under *Model settings*:
 * *Model*. The model used for generating responses.
@@ -194,6 +220,20 @@ Click *✨ Regenerate* to resend the last query to the model for a fresh respons
 Click *⟳ Clear chat* to clear chat history and start a new conversation.
 ====
+[float]
+[[playground-getting-started-view-code]]
+=== View and download Python code
+Use the *View code* button to see the Python code that powers the chat interface.
+You can integrate it into your own application, modifying as needed.
+We currently support two implementation options:
+* {es} Python Client + LLM provider
+* LangChain + LLM provider
+[.screenshot]
+image::images/view-code-button.png[width=100]
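The exported code itself is not shown in this diff, but both implementation options follow the same retrieval-augmented pattern: fetch context from {es}, then pass it to the LLM. The sketch below illustrates only that shape; the function and field names are illustrative, not Playground's actual output.

```python
# Illustrative RAG prompt assembly: combine retrieved document
# snippets with the user's question before calling an LLM.
# Names here are placeholders, not Playground-generated code.

def build_prompt(question, hits):
    """Join retrieved snippets, then append the user question."""
    context = "\n".join(hit["_source"]["content"] for hit in hits)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

hits = [{"_source": {"content": "Elasticsearch is a search engine."}}]
print(build_prompt("What is Elasticsearch?", hits))
```

In the real exported code, `hits` would come from an {es} client search call and the prompt would go to your LLM provider's SDK (or through LangChain).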
[float]
[[playground-next-steps]]
=== Next steps
@@ -202,7 +242,7 @@ Once you've got {x} up and running, and you've tested out the chat interface, yo
 * <<playground-context>>
 * <<playground-query>>
-* <<playground-troubleshooting>>
+* <<playground-troubleshooting>>
 include::playground-context.asciidoc[]
 include::playground-query.asciidoc[]

View file

@@ -56,9 +56,9 @@ Refer to the following Python notebooks for examples of how to chunk your docume
 [float]
 [[playground-context-balance]]
-=== Balancing cost and latency
+=== Balancing cost/latency and result quality
-Here are some general recommendations for balancing cost and latency with different context sizes:
+Here are some general recommendations for balancing cost/latency and result quality with different context sizes:
 Optimize context length::
 Determine the optimal context length through empirical testing.
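The "optimize context length" advice can be illustrated with a minimal trimming helper: cap how much retrieved text reaches the model, then tune the cap empirically. The chars-per-token ratio below is a rough assumption, not a documented rule.

```python
# Illustrative context-budget helper; the 4-chars-per-token
# heuristic is an assumption for the sketch, not a real tokenizer.

def trim_context(passages, max_tokens=1024):
    """Keep leading passages until a rough token budget is exhausted."""
    budget = max_tokens * 4  # crude characters-per-token estimate
    kept, used = [], 0
    for passage in passages:
        if used + len(passage) > budget:
            break
        kept.append(passage)
        used += len(passage)
    return kept

print(len(trim_context(["a" * 3000, "b" * 2000], max_tokens=1024)))
```

Raising `max_tokens` tends to improve answer quality at higher cost and latency; lowering it does the opposite, which is the trade-off this section describes.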

View file

@@ -24,7 +24,10 @@ In this simple example, the `books` index has two fields: `author` and `name`.
 Selecting a field adds it to the `fields` array in the query.
 [.screenshot]
-image::images/edit-query.png[View and modify queries]
+image::images/query-interface.png[View and modify queries]
+Certain fields in your documents may be hidden.
+Learn more about <<playground-hidden-fields, hidden fields>>.
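For illustration, a query with both fields of the `books` example selected might be shaped as below; the exact query Playground generates can differ, so treat the structure as a sketch of where the `fields` array sits rather than real output.

```python
# Sketch of a generated query for the `books` example with the
# `author` and `name` fields selected; structure is illustrative.
import json

selected_fields = ["author", "name"]
query = {
    "retriever": {
        "standard": {
            "query": {
                "multi_match": {
                    "query": "{query}",  # placeholder for the user's question
                    "fields": selected_fields,  # selected fields land here
                }
            }
        }
    }
}
print(json.dumps(query, indent=2))
```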
[float]
[[playground-query-relevance]]
@@ -38,17 +41,13 @@ Refer to <<playground-context, Optimize context>> for more information.
 <<playground-troubleshooting, Troubleshooting>> provides tips on how to diagnose and fix relevance issues.
-[.screenshot]
 [NOTE]
 ====
 {x} uses the {ref}/retriever.html[`retriever`] syntax for {es} queries.
 Retrievers make it easier to compose and test different retrieval strategies in your search pipelines.
-====
-// TODO: uncomment and add to note once following page is live
-Retrievers make it easier to compose and test different retrieval strategies in your search pipelines.
+Refer to {ref}/retrievers-overview.html[documentation] for a high level overview of retrievers.
 ====
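A minimal sketch of what "composing retrieval strategies" looks like with the retriever syntax: two standard retrievers combined via reciprocal rank fusion (`rrf`). The field names reuse the earlier `books` example; the query text placeholder is illustrative.

```python
# Illustrative retriever composition: two match queries fused
# with rrf. Field names come from the `books` example.
import json

query = {
    "retriever": {
        "rrf": {
            "retrievers": [
                {"standard": {"query": {"match": {"name": "{query}"}}}},
                {"standard": {"query": {"match": {"author": "{query}"}}}},
            ]
        }
    }
}
print(json.dumps(query, indent=2))
```

Swapping one inner retriever for a different strategy (for example, a kNN retriever) changes the pipeline without restructuring the rest of the query, which is the composability the note describes.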
[float]
[[playground-hidden-fields]]
=== Hidden fields