Add an `ignore_missing_pipeline` option to the `pipeline` processor. This
controls whether the `pipeline` processor fails with an error when no
pipeline with the name specified in the `name` option exists.
This enhancement is useful for setting up a pipeline infrastructure with
extension points that can be overridden lazily, so that custom
pre-processing can be added to specific cluster setups at a later point
in time.
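A minimal sketch of how the new option might be used (the pipeline and
processor names below are illustrative assumptions, not taken from this
change):
PUT /_ingest/pipeline/routing-pipeline
{
  "processors": [
    {
      "pipeline": {
        "name": "custom-pre-processing",
        "ignore_missing_pipeline": true
      }
    }
  ]
}
With `ignore_missing_pipeline` set to `true`, documents should pass
through this processor unchanged until a pipeline named
`custom-pre-processing` is actually created.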
Relates to #87323
Relates to #77823
This does the following:
- Updates several asciidoc files that contained code snippets with
invalid JSON, most involving unnecessary trailing commas (see the
illustrative example after this list).
- Switches from the Groovy JSON parser to the Jackson parser, in line
with the general goal of eliminating the dependency on Groovy.
- Makes build-time testing of JSON validity stricter.
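For example, a snippet of the kind being corrected (purely illustrative,
not an actual excerpt from the docs) would be:
{
  "query": {
    "match_all": {},
  }
}
where the trailing comma after `{}` is invalid JSON and would be flagged
by the stricter build-time check.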
Note that this update still allows backslash escaping for any
character. Currently that matters because of the file
"docs/reference/ml/anomaly-detection/apis/get-datafeed-stats.asciidoc",
specifically this part:
"attributes" : {
"ml.machine_memory" :
"$body.datafeeds.0.node.attributes.ml\.machine_memory",
"ml.max_open_jobs" : "512"
}
It's not clear to me what change, if any, is appropriate there, so I've
left the escaped period in place and configured the parser to ignore it
for the time being.
This commit adds the name of the current pipeline to ingest metadata.
The pipeline name is accessible under the key `_ingest.pipeline`.
Example usage in pipeline:
PUT /_ingest/pipeline/2
{
  "processors": [
    {
      "set": {
        "field": "pipeline_name",
        "value": "{{_ingest.pipeline}}"
      }
    }
  ]
}
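As a usage sketch, the effect can be checked with the simulate API (the
sample document below is an assumption):
POST /_ingest/pipeline/2/_simulate
{
  "docs": [
    { "_source": { "message": "hello" } }
  ]
}
The simulated document should come back with `pipeline_name` set to the
name of the executing pipeline, here `2`.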
Closes #42106
This commit breaks the single ingest docs file into multiple files,
factoring out the processor docs into a documentation file per
processor. This will help make this content easier to maintain.