[[use-filebeat-modules-kafka]]
=== Example: Set up {filebeat} modules to work with Kafka and {ls}

This section shows how to set up {filebeat}
{filebeat-ref}/filebeat-modules-overview.html[modules] to work with {ls} when
you are using Kafka between {filebeat} and {ls} in your publishing pipeline.
The main goal of this example is to show how to load ingest pipelines from
{filebeat} and use them with {ls}.

The examples in this section show simple configurations with topic names hard
coded. For a full list of configuration options, see documentation about
configuring the <<plugins-inputs-kafka,Kafka input plugin>>. Also see
{filebeat-ref}/kafka-output.html[Configure the Kafka output] in the _{filebeat}
Reference_.

==== Set up and run {filebeat}

. If you haven't already set up the {filebeat} index template and sample {kib}
dashboards, run the {filebeat} `setup` command to do that now:
+
[source,shell]
----------------------------------------------------------------------
filebeat -e setup
----------------------------------------------------------------------
+
The `-e` flag is optional and sends output to standard error instead of syslog.
+
A connection to {es} and {kib} is required for this one-time setup
step because {filebeat} needs to create the index template in {es} and
load the sample dashboards into {kib}. For more information about configuring
the connection to {es}, see the Filebeat
{filebeat-ref}/filebeat-installation-configuration.html[quick start].
+
After the template and dashboards are loaded, you'll see the message `INFO
{kib} dashboards successfully loaded. Loaded dashboards`.
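+
For this one-time step, the connection settings might look like the following
in `filebeat.yml`. This is a minimal sketch; the hosts shown are placeholders
for your own {es} and {kib} endpoints:
+
[source,yaml]
-----
# Placeholder endpoints -- replace with your own Elasticsearch and Kibana hosts.
output.elasticsearch:
  hosts: ["localhost:9200"]
setup.kibana:
  host: "localhost:5601"
-----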
. Run the `modules enable` command to enable the modules that you want to run.
For example:
+
[source,shell]
----------------------------------------------------------------------
filebeat modules enable system
----------------------------------------------------------------------
+
You can further configure the module by editing the config file under the
{filebeat} `modules.d` directory. For example, if the log files are not in the
location expected by the module, you can set the `var.paths` option, as shown
in the example after the note below.
+
NOTE: You must enable at least one fileset in the module.
**Filesets are disabled by default.**
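+
For example, a `modules.d/system.yml` file that enables the `syslog` fileset
and points it at a custom log location might look like this sketch (the path
is a hypothetical example):
+
[source,yaml]
-----
- module: system
  # Enable the syslog fileset and override the default log location.
  syslog:
    enabled: true
    var.paths: ["/var/log/custom-syslog*"]  # hypothetical path
-----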
. Run the `setup` command with the `--pipelines` and `--modules` options
specified to load ingest pipelines for the modules you've enabled. This step
also requires a connection to {es}. If you want to use a {ls} pipeline instead of
ingest node to parse the data, skip this step.
+
[source,shell]
----------------------------------------------------------------------
filebeat setup --pipelines --modules system
----------------------------------------------------------------------
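+
If you've enabled more than one module, pass a comma-separated list of module
names. For example, assuming you've also enabled the `nginx` module:
+
[source,shell]
----------------------------------------------------------------------
filebeat setup --pipelines --modules system,nginx
----------------------------------------------------------------------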
. Configure {filebeat} to send log lines to Kafka. To do this, in the
`filebeat.yml` config file, disable the {es} output by commenting it out, and
enable the Kafka output. For example:
+
[source,yaml]
-----
#output.elasticsearch:
  #hosts: ["localhost:9200"]
output.kafka:
  hosts: ["kafka:9092"]
  topic: "filebeat"
  codec.json:
    pretty: false
-----
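+
To confirm that events are reaching the topic, you can read from it with the
console consumer that ships with Kafka. This assumes the broker is reachable
at `kafka:9092` and you're running from the Kafka installation directory:
+
[source,shell]
----------------------------------------------------------------------
bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic filebeat --from-beginning
----------------------------------------------------------------------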
. Start {filebeat}. For example:
+
[source,shell]
----------------------------------------------------------------------
filebeat -e
----------------------------------------------------------------------
+
{filebeat} will attempt to send messages to Kafka and continue until Kafka is
available to receive them.
+
NOTE: Depending on how you've installed {filebeat}, you might see errors
related to file ownership or permissions when you try to run {filebeat} modules.
See {beats-ref}/config-file-permissions.html[Config File Ownership and Permissions]
in the _Beats Platform Reference_ if you encounter errors related to file
ownership or permissions.

==== Create and start the {ls} pipeline

. On the system where {ls} is installed, create a {ls} pipeline configuration
that reads from a Kafka input and sends events to an {es} output:
+
--
[source,yaml]
-----
input {
  kafka {
    bootstrap_servers => "myhost:9092"
    topics => ["filebeat"]
    codec => json
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "https://myEShost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}" <1>
      user => "elastic"
      password => "secret"
    }
  } else {
    elasticsearch {
      hosts => "https://myEShost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "secret"
    }
  }
}
-----
<1> Set the `pipeline` option to `%{[@metadata][pipeline]}`. This setting
configures {ls} to select the correct ingest pipeline based on metadata
passed in the event.
/////
//Commenting out this section until we can update docs to use ECS-compliant.
//fields for 7.0
//
//If you want use a {ls} pipeline instead of ingest node to parse the data, see
//the `filter` and `output` settings in the examples under
//<<logstash-config-for-filebeat-modules>>.
/////
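
If you need to scale consumption across multiple {ls} instances or control
where consumption starts, the Kafka input accepts additional options. Here is
a sketch with illustrative values (not tuning recommendations):

[source,yaml]
-----
kafka {
  bootstrap_servers => "myhost:9092"
  topics => ["filebeat"]
  codec => json
  group_id => "logstash"          # instances that share a group_id split the partitions
  consumer_threads => 4           # illustrative value; align with the topic's partition count
  auto_offset_reset => "earliest" # start from the oldest message when no offset is stored
}
-----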
--
. Start {ls}, passing in the pipeline configuration file you just defined. For
example:
+
[source,shell]
----------------------------------------------------------------------
bin/logstash -f mypipeline.conf
----------------------------------------------------------------------
+
{ls} should start a pipeline and begin receiving events from the Kafka input.
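+
To check the pipeline configuration for syntax errors without starting {ls},
you can also run it with the `--config.test_and_exit` flag:
+
[source,shell]
----------------------------------------------------------------------
bin/logstash -f mypipeline.conf --config.test_and_exit
----------------------------------------------------------------------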

==== Visualize the data

To visualize the data in {kib}, launch the {kib} web interface by pointing your
browser to port 5601. For example, http://127.0.0.1:5601[http://127.0.0.1:5601].
Click *Dashboards*, and then view the {filebeat} dashboards.