[[filebeat-modules]]
== Working with {filebeat} Modules

{filebeat} comes packaged with pre-built
{filebeat-ref}/filebeat-modules.html[modules] that contain the configurations
needed to collect, parse, enrich, and visualize data from various log file
formats. Each {filebeat} module consists of one or more filesets that contain
ingest node pipelines, {es} templates, {filebeat} input configurations, and
{kib} dashboards.

You can use {filebeat} modules with {ls}, but you need to do some extra setup.
The simplest approach is to <<use-ingest-pipelines,set up and use the ingest
pipelines>> provided by {filebeat}.

/////
//Commenting out this section until we can update docs to use ECS-compliant.
//fields for 7.0
//
//If the ingest pipelines don't meet your
//requirements, you can
//<<logstash-config-for-filebeat-modules,create {ls} configurations>> to use
//instead of the ingest pipelines.
//
//Either approach allows you to use the configurations, index templates, and
//dashboards available with {filebeat} modules, as long as you maintain the
//field structure expected by the index and dashboards.
/////

[[use-ingest-pipelines]]
=== Use ingest pipelines for parsing

When you use {filebeat} modules with {ls}, you can use the ingest pipelines
provided by {filebeat} to parse the data. You need to load the pipelines
into {es} and configure {ls} to use them.

*To load the ingest pipelines:*

On the system where {filebeat} is installed, run the `setup` command with the
`--pipelines` option specified to load ingest pipelines for specific modules.
For example, the following command loads ingest pipelines for the system and
nginx modules:

[source,shell]
-----
filebeat setup --pipelines --modules nginx,system
-----

A connection to {es} is required for this setup step because {filebeat} needs to
load the ingest pipelines into {es}. If necessary, you can temporarily disable
your configured output and enable the {es} output before running the command.
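
For example, if `filebeat.yml` normally sends events to {ls}, you can override
both outputs on the command line for this one-off setup run. Here is a minimal
sketch that uses {filebeat}'s standard `-E` setting overrides; the {es} host
shown is a placeholder for your own:

[source,shell]
-----
filebeat setup --pipelines --modules nginx,system \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["localhost:9200"]'
-----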

*To configure {ls} to use the pipelines:*

On the system where {ls} is installed, create a {ls} pipeline configuration
that reads from a {ls} input, such as {beats} or Kafka, and sends events to an
{es} output. Set the `pipeline` option in the {es} output to
`%{[@metadata][pipeline]}` to use the ingest pipelines that you loaded
previously.

Here's an example configuration that reads data from the Beats input and uses
{filebeat} ingest pipelines to parse data collected by modules:

[source,json]
-----
input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "https://061ab24010a2482e9d64729fdb0fd93a.us-east-1.aws.found.io:9243"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}" <1>
      action => "create" <2>
      pipeline => "%{[@metadata][pipeline]}" <3>
      user => "elastic"
      password => "secret"
    }
  } else {
    elasticsearch {
      hosts => "https://061ab24010a2482e9d64729fdb0fd93a.us-east-1.aws.found.io:9243"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}" <1>
      action => "create"
      user => "elastic"
      password => "secret"
    }
  }
}
-----
<1> If data streams are disabled in your configuration, set the `index` option to `%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}`. Data streams are enabled by default.
<2> If you are disabling the use of data streams in your configuration, you can
remove this setting, or set it to a different value as appropriate.
<3> Configures {ls} to select the correct ingest pipeline based on metadata
passed in the event.
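
On the {filebeat} side, events then need to reach this {ls} listener instead of
going directly to {es}. A minimal `filebeat.yml` sketch, assuming {ls} runs on
the same host and listens on the port configured in the Beats input above:

[source,yaml]
-----
filebeat.config.modules:
  # Load configurations for the enabled modules
  path: ${path.config}/modules.d/*.yml

output.logstash:
  # Point Filebeat at the Logstash Beats input shown above
  hosts: ["localhost:5044"]
-----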

See the {filebeat} {filebeat-ref}/filebeat-modules-overview.html[Modules]
documentation for more information about setting up and running modules.
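
For instance, before any module data flows at all, the modules themselves must
be enabled on the {filebeat} host. A quick sketch using the standard `modules`
subcommand, with module names matching the earlier examples:

[source,shell]
-----
filebeat modules enable nginx system
filebeat modules list
-----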

For a full example, see <<use-filebeat-modules-kafka>>.

/////
//Commenting out this section until we can update docs to use ECS-compliant.
//fields for 7.0
//
//[[logstash-config-for-filebeat-modules]]
//=== Use {ls} pipelines for parsing
//
//The examples in this section show how to build {ls} pipeline configurations that
//replace the ingest pipelines provided with {filebeat} modules. The pipelines
//take the data collected by {filebeat} modules, parse it into fields expected by
//the {filebeat} index, and send the fields to {es} so that you can visualize the
//data in the pre-built dashboards provided by {filebeat}.
//
//This approach is more time consuming than using the existing ingest pipelines to
//parse the data, but it gives you more control over how the data is processed.
//By writing your own pipeline configurations, you can do additional processing,
//such as dropping fields, after the fields are extracted, or you can move your
//load from {es} ingest nodes to {ls} nodes.
//
//Before deciding to replace the ingest pipelines with {ls} configurations,
//read <<use-ingest-pipelines>>.
//
//Here are some examples that show how to implement {ls} configurations to replace
//ingest pipelines:
//
//* <<parsing-apache2>>
//* <<parsing-mysql>>
//* <<parsing-nginx>>
//* <<parsing-system>>
//
//
//[[parsing-apache2]]
//==== Apache 2 Logs
//
//The {ls} pipeline configuration in this example shows how to ship and parse
//access and error logs collected by the
//{filebeat-ref}/filebeat-module-apache.html[`apache` {filebeat} module].
//
//[source,json]
//----------------------------------------------------------------------------
//include::filebeat_modules/apache2/pipeline.conf[]
//----------------------------------------------------------------------------
//
//
//[[parsing-mysql]]
//==== MySQL Logs
//
//The {ls} pipeline configuration in this example shows how to ship and parse
//error and slowlog logs collected by the
//{filebeat-ref}/filebeat-module-mysql.html[`mysql` {filebeat} module].
//
//[source,json]
//----------------------------------------------------------------------------
//include::filebeat_modules/mysql/pipeline.conf[]
//----------------------------------------------------------------------------
//
//
//[[parsing-nginx]]
//==== Nginx Logs
//
//The {ls} pipeline configuration in this example shows how to ship and parse
//access and error logs collected by the
//{filebeat-ref}/filebeat-module-nginx.html[`nginx` {filebeat} module].
//
//[source,json]
//----------------------------------------------------------------------------
//include::filebeat_modules/nginx/pipeline.conf[]
//----------------------------------------------------------------------------
//
//
//[[parsing-system]]
//==== System Logs
//
//The {ls} pipeline configuration in this example shows how to ship and parse
//system logs collected by the
//{filebeat-ref}/filebeat-module-system.html[`system` {filebeat} module].
//
//[source,json]
//----------------------------------------------------------------------------
//include::filebeat_modules/system/pipeline.conf[]
//----------------------------------------------------------------------------
/////
include::fb-ls-kafka-example.asciidoc[]