Add docs about ingest pipeline converter (#8426)

Add fix from review
DeDe Morton 2017-10-03 11:24:25 -07:00 committed by GitHub
parent 754b935302
commit dc2852b66c
3 changed files with 101 additions and 19 deletions


@@ -18,6 +18,9 @@ include::static/managing-multiline-events.asciidoc[]
:edit_url: https://github.com/elastic/logstash/edit/5.6/docs/static/glob-support.asciidoc
include::static/glob-support.asciidoc[]
:edit_url: https://github.com/elastic/logstash/edit/5.6/docs/static/ingest-convert.asciidoc
include::static/ingest-convert.asciidoc[]
// Working with Logstash Modules
include::static/modules.asciidoc[]


@@ -15,26 +15,15 @@ you'll need to use Logstash.
[float]
[[graduating-to-Logstash]]
=== Using Logstash instead of Ingest Node
You may need to use Logstash instead of ingest pipelines if you want to:
* Use multiple outputs. Ingest pipelines were designed to only support
Elasticsearch as an output, but you may want to use more than one output. For
example, you may want to archive your incoming data to S3 as well as indexing
it in Elasticsearch.
* Use the <<persistent-queues,persistent queue>> feature to handle spikes when
ingesting data (from Beats and other sources).
* Take advantage of the richer transformation capabilities in Logstash, such as
external lookups.
Logstash provides an <<ingest-converter,ingest pipeline conversion tool>>
to help you migrate ingest pipeline definitions to Logstash configs. However,
the tool does not currently support all the processors that are available for
ingest node. You can follow the steps in this section to configure Filebeat and
build Logstash pipeline configurations that are equivalent to the ingest node
pipelines available with the Filebeat modules. Then you'll be able to use the
same dashboards available with Filebeat to visualize your data in Kibana.
Follow the steps in this section to build and run Logstash configurations that
provide capabilities similar to Filebeat modules.
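A common first step when moving off ingest pipelines is pointing Filebeat at Logstash rather than directly at Elasticsearch. A minimal, illustrative `filebeat.yml` fragment, assuming Logstash runs a beats input on the conventional port 5044 (host and port here are assumptions, not values from this doc):

[source,yaml]
-----
output.logstash:
  # Host and port of the Logstash beats input (assumed values).
  hosts: ["localhost:5044"]

# Comment out the elasticsearch output so events flow only through Logstash:
#output.elasticsearch:
#  hosts: ["localhost:9200"]
-----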

docs/static/ingest-convert.asciidoc (new file, 90 additions)

@@ -0,0 +1,90 @@
[[ingest-converter]]
=== Converting Ingest Node Pipelines
After implementing {ref}/ingest.html[ingest] pipelines to parse your data, you
might decide that you want to take advantage of the richer transformation
capabilities in Logstash. For example, you may need to use Logstash instead of
ingest pipelines if you want to:
* Ingest from more inputs. Logstash can natively ingest data from many other
sources like TCP, UDP, syslog, and relational databases.
* Use multiple outputs. Ingest node was designed to only support Elasticsearch
as an output, but you may want to use more than one output. For example, you may
want to archive your incoming data to S3 as well as indexing it in
Elasticsearch.
* Take advantage of the richer transformation capabilities in Logstash, such as
external lookups.
* Use the persistent queue feature to handle spikes when ingesting data (from
Beats and other sources).
To make it easier for you to migrate your configurations, Logstash provides an
ingest pipeline conversion tool. The conversion tool takes the ingest pipeline
definition as input and, when possible, creates the equivalent Logstash
configuration as output.
See <<ingest-converter-limitations>> for a full list of tool limitations.
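The input the tool consumes is an ordinary ingest pipeline definition in JSON. A hypothetical, minimal example of such a file (the field names and pattern are illustrative, not taken from the Filebeat modules):

[source,json]
-----
{
  "description": "Parse apache-style access logs (illustrative example)",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{COMBINEDAPACHELOG}"]
      }
    },
    {
      "date": {
        "field": "timestamp",
        "formats": ["dd/MMM/yyyy:HH:mm:ss Z"]
      }
    }
  ]
}
-----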
[[ingest-converter-run]]
==== Running the tool
You'll find the conversion tool in the `bin` directory of your Logstash
installation. See <<dir-layout>> to find the location of `bin` on your system.
To run the conversion tool, use the following command:
[source,shell]
-----
bin/ingest-convert.sh --input INPUT_FILE_URI --output OUTPUT_FILE_URI [--append-stdio]
-----
Where:
* `INPUT_FILE_URI` is a file URI that specifies the full path to the JSON file
that defines the ingest node pipeline.
* `OUTPUT_FILE_URI` is the file URI of the Logstash DSL file that will be
generated by the tool.
* `--append-stdio` is an optional flag that adds stdin and stdout sections to
the config instead of adding the default Elasticsearch output.
This command expects a file URI, so make sure you use forward slashes and
specify the full path to the file.
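If you need to build the URI from an existing path, a small POSIX shell helper (not part of Logstash; assumes GNU `realpath` is available) can make the path absolute and prepend the `file://` scheme:

```shell
# Build a file:// URI from a path. realpath -m makes the path absolute
# without requiring the file to exist yet.
to_file_uri() {
  printf 'file://%s\n' "$(realpath -m "$1")"
}

to_file_uri /tmp/ingest/apache.json   # file:///tmp/ingest/apache.json
```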
For example:
[source,text]
-----
bin/ingest-convert.sh --input file:///tmp/ingest/apache.json --output file:///tmp/ingest/apache.conf
-----
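For a grok-plus-date pipeline, the generated Logstash DSL is broadly of this shape (an illustrative sketch under the assumptions above, not verbatim tool output):

[source,text]
-----
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
-----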
[[ingest-converter-limitations]]
==== Limitations
* Painless script conversion is not supported.
* Only a subset of the available processors is
<<ingest-converter-supported-processors,supported>> for conversion. For
processors that are not supported, the tool produces a warning and continues
with a best-effort conversion.
[[ingest-converter-supported-processors]]
==== Supported Processors
The following ingest node processors are currently supported for conversion by
the tool:
* Append
* Convert
* Date
* GeoIP
* Grok
* Gsub
* Json
* Lowercase
* Rename
* Set
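To give a sense of the mapping for one of these processors, here is a Rename processor definition and its natural Logstash counterpart (a sketch with made-up field names; the tool's actual output may differ):

[source,json]
-----
{ "rename": { "field": "hostname", "target_field": "host.name" } }
-----

[source,text]
-----
mutate { rename => { "hostname" => "host.name" } }
-----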