[[pipeline-processor]]
=== Pipeline processor
++++
<titleabbrev>Pipeline</titleabbrev>
++++

Executes another pipeline.

[[pipeline-options]]
.Pipeline Options
[options="header"]
|======
| Name     | Required | Default | Description
| `name`   | yes      | -       | The name of the pipeline to execute. Supports <<template-snippets,template snippets>>.
include::common-options.asciidoc[]
|======

[source,js]
--------------------------------------------------
{
  "pipeline": {
    "name": "inner-pipeline"
  }
}
--------------------------------------------------
// NOTCONSOLE
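
Because `name` supports <<template-snippets,template snippets>>, the pipeline
to execute can also be selected per document. A minimal sketch, assuming a
hypothetical document field `pipeline_name` that holds the name of the target
pipeline:

[source,js]
--------------------------------------------------
{
  "pipeline": {
    "name": "{{pipeline_name}}"
  }
}
--------------------------------------------------
// NOTCONSOLE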

The name of the current pipeline can be accessed from the `_ingest.pipeline` ingest metadata key.
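
For example, a `set` processor can copy this key into the document through a
template snippet, recording which pipeline processed it. A minimal sketch,
assuming a hypothetical target field `processed_by`:

[source,js]
--------------------------------------------------
{
  "set": {
    "field": "processed_by",
    "value": "{{_ingest.pipeline}}"
  }
}
--------------------------------------------------
// NOTCONSOLE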

The following example shows how to use this processor to nest pipelines.

Define an inner pipeline:

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/pipelineA
{
  "description" : "inner pipeline",
  "processors" : [
    {
      "set" : {
        "field": "inner_pipeline_set",
        "value": "inner"
      }
    }
  ]
}
--------------------------------------------------

Define another pipeline that uses the previously defined inner pipeline:

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/pipelineB
{
  "description" : "outer pipeline",
  "processors" : [
    {
      "pipeline" : {
        "name": "pipelineA"
      }
    },
    {
      "set" : {
        "field": "outer_pipeline_set",
        "value": "outer"
      }
    }
  ]
}
--------------------------------------------------
// TEST[continued]

Now, indexing a document with the outer pipeline applied also executes the
inner pipeline from within it:

[source,console]
--------------------------------------------------
PUT /my-index/_doc/1?pipeline=pipelineB
{
  "field": "value"
}
--------------------------------------------------
// TEST[continued]

Response from the index request:

[source,console-result]
--------------------------------------------------
{
  "_index": "my-index",
  "_type": "_doc",
  "_id": "1",
  "_version": 1,
  "result": "created",
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 66,
  "_primary_term": 1
}
--------------------------------------------------
// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no": $body._seq_no/ s/"_primary_term": 1/"_primary_term": $body._primary_term/]

Indexed document:

[source,js]
--------------------------------------------------
{
  "field": "value",
  "inner_pipeline_set": "inner",
  "outer_pipeline_set": "outer"
}
--------------------------------------------------
// NOTCONSOLE