////

[source,console]
----
DELETE _ingest/pipeline/*_embeddings_pipeline
----
// TEST
// TEARDOWN

////

// tag::cohere[]

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/cohere_embeddings_pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "cohere_embeddings", <1>
        "input_output": { <2>
          "input_field": "content",
          "output_field": "content_embedding"
        }
      }
    }
  ]
}
--------------------------------------------------
<1> The name of the inference endpoint you created by using the
<<put-inference-api>>; it is referred to as `inference_id` in that step.
<2> Configuration object that defines the `input_field` for the {infer} process
and the `output_field` that will contain the {infer} results.
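
Before you reindex data through the pipeline, you can verify that it produces
the expected `content_embedding` field with the <<simulate-pipeline-api>>. The
sample document below is only illustrative; any document that contains a
`content` field works:

[source,console]
----
POST _ingest/pipeline/cohere_embeddings_pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "content": "Example text to create an embedding for"
      }
    }
  ]
}
----
// TEST[skip:requires an inference endpoint]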

// end::cohere[]

// tag::elser[]

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/elser_embeddings_pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "elser_embeddings", <1>
        "input_output": { <2>
          "input_field": "content",
          "output_field": "content_embedding"
        }
      }
    }
  ]
}
--------------------------------------------------
<1> The name of the inference endpoint you created by using the
<<put-inference-api>>; it is referred to as `inference_id` in that step.
<2> Configuration object that defines the `input_field` for the {infer} process
and the `output_field` that will contain the {infer} results.
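
NOTE: ELSER produces sparse embeddings (token-weight pairs) rather than dense
vectors, so the `content_embedding` field is mapped as a `sparse_vector` field
in the destination index.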

// end::elser[]

// tag::hugging-face[]

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/hugging_face_embeddings_pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "hugging_face_embeddings", <1>
        "input_output": { <2>
          "input_field": "content",
          "output_field": "content_embedding"
        }
      }
    }
  ]
}
--------------------------------------------------
<1> The name of the inference endpoint you created by using the
<<put-inference-api>>; it is referred to as `inference_id` in that step.
<2> Configuration object that defines the `input_field` for the {infer} process
and the `output_field` that will contain the {infer} results.

// end::hugging-face[]

// tag::openai[]

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/openai_embeddings_pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "openai_embeddings", <1>
        "input_output": { <2>
          "input_field": "content",
          "output_field": "content_embedding"
        }
      }
    }
  ]
}
--------------------------------------------------
<1> The name of the inference endpoint you created by using the
<<put-inference-api>>; it is referred to as `inference_id` in that step.
<2> Configuration object that defines the `input_field` for the {infer} process
and the `output_field` that will contain the {infer} results.

// end::openai[]

// tag::azure-openai[]

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/azure_openai_embeddings_pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "azure_openai_embeddings", <1>
        "input_output": { <2>
          "input_field": "content",
          "output_field": "content_embedding"
        }
      }
    }
  ]
}
--------------------------------------------------
<1> The name of the inference endpoint you created by using the
<<put-inference-api>>; it is referred to as `inference_id` in that step.
<2> Configuration object that defines the `input_field` for the {infer} process
and the `output_field` that will contain the {infer} results.

// end::azure-openai[]

// tag::azure-ai-studio[]

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/azure_ai_studio_embeddings_pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "azure_ai_studio_embeddings", <1>
        "input_output": { <2>
          "input_field": "content",
          "output_field": "content_embedding"
        }
      }
    }
  ]
}
--------------------------------------------------
<1> The name of the inference endpoint you created by using the
<<put-inference-api>>; it is referred to as `inference_id` in that step.
<2> Configuration object that defines the `input_field` for the {infer} process
and the `output_field` that will contain the {infer} results.

// end::azure-ai-studio[]

// tag::google-vertex-ai[]

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/google_vertex_ai_embeddings_pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "google_vertex_ai_embeddings", <1>
        "input_output": { <2>
          "input_field": "content",
          "output_field": "content_embedding"
        }
      }
    }
  ]
}
--------------------------------------------------
<1> The name of the inference endpoint you created by using the
<<put-inference-api>>; it is referred to as `inference_id` in that step.
<2> Configuration object that defines the `input_field` for the {infer} process
and the `output_field` that will contain the {infer} results.

// end::google-vertex-ai[]

// tag::mistral[]

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/mistral_embeddings_pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "mistral_embeddings", <1>
        "input_output": { <2>
          "input_field": "content",
          "output_field": "content_embedding"
        }
      }
    }
  ]
}
--------------------------------------------------
<1> The name of the inference endpoint you created by using the
<<put-inference-api>>; it is referred to as `inference_id` in that step.
<2> Configuration object that defines the `input_field` for the {infer} process
and the `output_field` that will contain the {infer} results.

// end::mistral[]

// tag::amazon-bedrock[]

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/amazon_bedrock_embeddings_pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "amazon_bedrock_embeddings", <1>
        "input_output": { <2>
          "input_field": "content",
          "output_field": "content_embedding"
        }
      }
    }
  ]
}
--------------------------------------------------
<1> The name of the inference endpoint you created by using the
<<put-inference-api>>; it is referred to as `inference_id` in that step.
<2> Configuration object that defines the `input_field` for the {infer} process
and the `output_field` that will contain the {infer} results.

// end::amazon-bedrock[]

// tag::alibabacloud-ai-search[]

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/alibabacloud_ai_search_embeddings_pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "alibabacloud_ai_search_embeddings", <1>
        "input_output": { <2>
          "input_field": "content",
          "output_field": "content_embedding"
        }
      }
    }
  ]
}
--------------------------------------------------
<1> The name of the inference endpoint you created by using the
<<put-inference-api>>; it is referred to as `inference_id` in that step.
<2> Configuration object that defines the `input_field` for the {infer} process
and the `output_field` that will contain the {infer} results.

// end::alibabacloud-ai-search[]