
[[advanced-pipeline]]
=== Parsing Logs with Logstash
In <<first-event>>, you created a basic Logstash pipeline to test your Logstash setup. In the real world, a Logstash
pipeline is a bit more complex: it typically has one or more input, filter, and output plugins.
In this section, you create a Logstash pipeline that takes Apache web logs as input, parses those
logs to create specific, named fields from the logs, and writes the parsed data to an Elasticsearch cluster. Rather than
defining the pipeline configuration at the command line, you'll define the pipeline in a config file.
The following text represents the skeleton of a configuration pipeline:
[source,shell]
--------------------------------------------------------------------------------
# The # character at the beginning of a line indicates a comment. Use
# comments to describe your configuration.
input {
}
# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
#
# }
output {
}
--------------------------------------------------------------------------------
This skeleton is non-functional, because the input and output sections don't have any valid options defined.
To get started, copy and paste the skeleton configuration pipeline into a file named `first-pipeline.conf` in your home
Logstash directory. Then go https://download.elastic.co/demos/logstash/gettingstarted/logstash-tutorial.log.gz[here] to
download the sample data set used in this example. Unpack the file.
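For example, assuming `curl` and `gunzip` are available on your system, you can download and unpack the data set from the command line:
[source,shell]
--------------------------------------------------------------------------------
curl -L -O https://download.elastic.co/demos/logstash/gettingstarted/logstash-tutorial.log.gz
gunzip logstash-tutorial.log.gz
--------------------------------------------------------------------------------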
[float]
[[configuring-file-input]]
==== Configuring Logstash for File Input
NOTE: This example uses the file input plugin for convenience. To tail files in the real world, you'll use
Filebeat to ship log events to Logstash. You learn how to <<configuring-lsf,configure the Filebeat input plugin>> later
when you build a more sophisticated pipeline.
To begin your Logstash pipeline, configure the Logstash instance to read from a file by using the
{logstash}plugins-inputs-file.html[`file`] input plugin.
Edit the `first-pipeline.conf` file and replace the entire `input` section with the following text:
[source,json]
--------------------------------------------------------------------------------
input {
    file {
        path => "/path/to/file/*.log"
        start_position => beginning <1>
        ignore_older => 0 <2>
    }
}
--------------------------------------------------------------------------------
<1> The default behavior of the file input plugin is to monitor a file for new information, in a manner similar to the
UNIX `tail -f` command. To change this default behavior and process the entire file, we need to specify the position
where Logstash starts processing the file.
<2> The default behavior of the file input plugin is to ignore files whose last modification time is more than 86400 seconds (one day) in the past. To change this default behavior and process the tutorial file (which is probably much older than a day), we need to configure Logstash so that it does not ignore old files.
Replace `/path/to/file` with the absolute path to the location of `logstash-tutorial.log` in your file system.
[float]
[[configuring-grok-filter]]
==== Parsing Web Logs with the Grok Filter Plugin
The {logstash}plugins-filters-grok.html[`grok`] filter plugin is one of several plugins that are available by default in
Logstash. For details on how to manage Logstash plugins, see the <<working-with-plugins,reference documentation>> for
the plugin manager.
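For example, you can list the plugins bundled with your installation by running the plugin manager from your Logstash directory (the exact script name depends on your Logstash version; recent releases use `bin/logstash-plugin`):
[source,shell]
--------------------------------------------------------------------------------
bin/logstash-plugin list
--------------------------------------------------------------------------------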
The `grok` filter plugin enables you to parse the unstructured log data into something structured and queryable.
Because the `grok` filter plugin looks for patterns in the incoming log data, configuring the plugin requires you to
make decisions about how to identify the patterns that are of interest to your use case. A representative line from the
web server log sample looks like this:
[source,shell]
--------------------------------------------------------------------------------
83.149.9.216 - - [04/Jan/2015:05:13:42 +0000] "GET /presentations/logstash-monitorama-2013/images/kibana-search.png
HTTP/1.1" 200 203023 "http://semicomplete.com/presentations/logstash-monitorama-2013/" "Mozilla/5.0 (Macintosh; Intel
Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"
--------------------------------------------------------------------------------
The IP address at the beginning of the line is easy to identify, as is the timestamp in brackets. To parse the data, you can use the `%{COMBINEDAPACHELOG}` grok pattern, which structures lines from the Apache log using the following schema:
[horizontal]
*Information*:: *Field Name*
IP Address:: `clientip`
User ID:: `ident`
User Authentication:: `auth`
Timestamp:: `timestamp`
HTTP Verb:: `verb`
Request URI:: `request`
HTTP Version:: `httpversion`
HTTP Status Code:: `response`
Bytes served:: `bytes`
Referrer URL:: `referrer`
User agent:: `agent`
Edit the `first-pipeline.conf` file and replace the entire `filter` section with the following text:
[source,json]
--------------------------------------------------------------------------------
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}"}
    }
}
--------------------------------------------------------------------------------
After processing the log file with the grok pattern, the sample line will have the following JSON representation:
[source,json]
--------------------------------------------------------------------------------
{
    "clientip" : "83.149.9.216",
    "ident" : "-",
    "auth" : "-",
    "timestamp" : "04/Jan/2015:05:13:42 +0000",
    "verb" : "GET",
    "request" : "/presentations/logstash-monitorama-2013/images/kibana-search.png",
    "httpversion" : "1.1",
    "response" : "200",
    "bytes" : "203023",
    "referrer" : "http://semicomplete.com/presentations/logstash-monitorama-2013/",
    "agent" : "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"
}
--------------------------------------------------------------------------------
[float]
[[configuring-geoip-plugin]]
==== Enhancing Your Data with the Geoip Filter Plugin
In addition to parsing log data for better searches, filter plugins can derive supplementary information from existing
data. As an example, the {logstash}plugins-filters-geoip.html[`geoip`] plugin looks up IP addresses, derives geographic
location information from the addresses, and adds that location information to the logs.
Configure your Logstash instance to use the `geoip` filter plugin by adding the following lines to the `filter` section
of the `first-pipeline.conf` file:
[source,json]
--------------------------------------------------------------------------------
geoip {
    source => "clientip"
}
--------------------------------------------------------------------------------
The `geoip` plugin configuration requires you to specify the name of the source field that contains the IP address to look up. In this example, the `clientip` field contains the IP address.
Since filters are evaluated in sequence, make sure that the `geoip` section is after the `grok` section of
the configuration file and that both the `grok` and `geoip` sections are nested within the `filter` section
like this:
[source,json]
--------------------------------------------------------------------------------
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}"}
    }
    geoip {
        source => "clientip"
    }
}
--------------------------------------------------------------------------------
[float]
[[indexing-parsed-data-into-elasticsearch]]
==== Indexing Your Data into Elasticsearch
Now that the web logs are broken down into specific fields, the Logstash pipeline can index the data into an
Elasticsearch cluster. Edit the `first-pipeline.conf` file and replace the entire `output` section with the following
text:
[source,json]
--------------------------------------------------------------------------------
output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
    }
}
--------------------------------------------------------------------------------
With this configuration, Logstash uses the HTTP protocol to connect to Elasticsearch. The above example assumes that
Logstash and Elasticsearch are running on the same instance. You can specify a remote Elasticsearch instance by using
the `hosts` configuration to specify something like `hosts => [ "es-machine:9092" ]`.
[float]
[[testing-initial-pipeline]]
===== Testing Your Initial Pipeline
At this point, your `first-pipeline.conf` file has input, filter, and output sections properly configured, and looks
something like this:
[source,json]
--------------------------------------------------------------------------------
input {
    file {
        path => "/Users/myusername/tutorialdata/*.log"
        start_position => beginning
        ignore_older => 0
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}"}
    }
    geoip {
        source => "clientip"
    }
}
output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
    }
}
--------------------------------------------------------------------------------
To verify your configuration, run the following command:
[source,shell]
--------------------------------------------------------------------------------
bin/logstash -f first-pipeline.conf --config.test_and_exit
--------------------------------------------------------------------------------
The `--config.test_and_exit` option parses your configuration file and reports any errors. When the configuration file
passes the configuration test, start Logstash with the following command:
[source,shell]
--------------------------------------------------------------------------------
bin/logstash -f first-pipeline.conf
--------------------------------------------------------------------------------
Try a test query to Elasticsearch based on the fields created by the `grok` filter plugin:
[source,shell]
--------------------------------------------------------------------------------
curl -XGET 'localhost:9200/logstash-$DATE/_search?pretty&q=response:200'
--------------------------------------------------------------------------------
Replace $DATE with the current date, in YYYY.MM.DD format.
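If your shell supports command substitution, one convenient (though optional) way to fill in the date automatically is:
[source,shell]
--------------------------------------------------------------------------------
curl -XGET "localhost:9200/logstash-$(date +%Y.%m.%d)/_search?pretty&q=response:200"
--------------------------------------------------------------------------------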
You'll get multiple hits back. For example:
[source,json]
--------------------------------------------------------------------------------
{
    "took" : 4,
    "timed_out" : false,
    "_shards" : {
        "total" : 5,
        "successful" : 5,
        "failed" : 0
    },
    "hits" : {
        "total" : 98,
        "max_score" : 4.833623,
        "hits" : [ {
            "_index" : "logstash-2016.05.27",
            "_type" : "logs",
            "_id" : "AVT0nBiGe_tzyi1erg7-",
            "_score" : 4.833623,
            "_source" : {
                "request" : "/presentations/logstash-monitorama-2013/images/frontend-response-codes.png",
                "agent" : "\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36\"",
                "geoip" : {
                    "timezone" : "Europe/Moscow",
                    "ip" : "83.149.9.216",
                    "latitude" : 55.7522,
                    "continent_code" : "EU",
                    "city_name" : "Moscow",
                    "country_code2" : "RU",
                    "country_name" : "Russia",
                    "dma_code" : null,
                    "country_code3" : "RU",
                    "region_name" : "Moscow",
                    "location" : [ 37.6156, 55.7522 ],
                    "postal_code" : "101194",
                    "longitude" : 37.6156,
                    "region_code" : "MOW"
                },
                "auth" : "-",
                "ident" : "-",
                "verb" : "GET",
                "message" : "83.149.9.216 - - [04/Jan/2015:05:13:45 +0000] \"GET /presentations/logstash-monitorama-2013/images/frontend-response-codes.png HTTP/1.1\" 200 52878 \"http://semicomplete.com/presentations/logstash-monitorama-2013/\" \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36\"",
                "referrer" : "\"http://semicomplete.com/presentations/logstash-monitorama-2013/\"",
                "@timestamp" : "2016-05-27T23:45:50.828Z",
                "response" : "200",
                "bytes" : "52878",
                "clientip" : "83.149.9.216",
                "@version" : "1",
                "host" : "myexamplehost",
                "httpversion" : "1.1",
                "timestamp" : "04/Jan/2015:05:13:45 +0000"
            }
        },
        ...
--------------------------------------------------------------------------------
Try another search for the geographic information derived from the IP address:
[source,shell]
--------------------------------------------------------------------------------
curl -XGET 'localhost:9200/logstash-$DATE/_search?pretty&q=geoip.city_name:Buffalo'
--------------------------------------------------------------------------------
Replace $DATE with the current date, in YYYY.MM.DD format.
A few log entries come from Buffalo, so the query produces the following response:
[source,json]
--------------------------------------------------------------------------------
{
    "took" : 2,
    "timed_out" : false,
    "_shards" : {
        "total" : 5,
        "successful" : 5,
        "failed" : 0
    },
    "hits" : {
        "total" : 3,
        "max_score" : 1.0520113,
        "hits" : [ {
            "_index" : "logstash-2016.05.27",
            "_type" : "logs",
            "_id" : "AVT0nBiHe_tzyi1erg9T",
            "_score" : 1.0520113,
            "_source" : {
                "request" : "/blog/geekery/solving-good-or-bad-problems.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+semicomplete%2Fmain+%28semicomplete.com+-+Jordan+Sissel%29",
                "agent" : "\"Tiny Tiny RSS/1.11 (http://tt-rss.org/)\"",
                "geoip" : {
                    "timezone" : "America/New_York",
                    "ip" : "198.46.149.143",
                    "latitude" : 42.9864,
                    "continent_code" : "NA",
                    "city_name" : "Buffalo",
                    "country_code2" : "US",
                    "country_name" : "United States",
                    "dma_code" : 514,
                    "country_code3" : "US",
                    "region_name" : "New York",
                    "location" : [ -78.7279, 42.9864 ],
                    "postal_code" : "14221",
                    "longitude" : -78.7279,
                    "region_code" : "NY"
                },
                "auth" : "-",
                "ident" : "-",
                "verb" : "GET",
                "message" : "198.46.149.143 - - [04/Jan/2015:05:29:13 +0000] \"GET /blog/geekery/solving-good-or-bad-problems.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+semicomplete%2Fmain+%28semicomplete.com+-+Jordan+Sissel%29 HTTP/1.1\" 200 10756 \"-\" \"Tiny Tiny RSS/1.11 (http://tt-rss.org/)\"",
                "referrer" : "\"-\"",
                "@timestamp" : "2016-05-27T23:45:50.836Z",
                "response" : "200",
                "bytes" : "10756",
                "clientip" : "198.46.149.143",
                "@version" : "1",
                "host" : "myexamplehost",
                "httpversion" : "1.1",
                "timestamp" : "04/Jan/2015:05:29:13 +0000"
            }
        },
        ...
--------------------------------------------------------------------------------
[[multiple-input-output-plugins]]
=== Stitching Together Multiple Input and Output Plugins
The information you need to manage often comes from several disparate sources, and use cases can require multiple
destinations for your data. Your Logstash pipeline can use multiple input and output plugins to handle these
requirements.
In this section, you create a Logstash pipeline that takes input from a Twitter feed and the Filebeat client, then
sends the information to an Elasticsearch cluster as well as writing the information directly to a file.
[float]
[[twitter-configuration]]
==== Reading from a Twitter Feed
To add a Twitter feed, you use the {logstash}plugins-inputs-twitter.html[`twitter`] input plugin. To
configure the plugin, you need several pieces of information:
* A _consumer_ key, which uniquely identifies your Twitter app.
* A _consumer secret_, which serves as the password for your Twitter app.
* One or more _keywords_ to search in the incoming feed. The example shows using "cloud" as a keyword, but you can use whatever you want.
* An _oauth token_, which identifies the Twitter account using this app.
* An _oauth token secret_, which serves as the password of the Twitter account.
Visit https://dev.twitter.com/apps[https://dev.twitter.com/apps] to set up a Twitter account and generate your consumer
key and secret, as well as your access token and secret. See the docs for the {logstash}plugins-inputs-twitter.html[`twitter`] input plugin if you're not sure how to generate these keys.
Like you did earlier when you worked on <<advanced-pipeline>>, create a config file (called `second-pipeline.conf`) that
contains the skeleton of a configuration pipeline. If you want, you can reuse the file you created earlier, but make
sure you pass in the correct config file name when you run Logstash.
Add the following lines to the `input` section of the `second-pipeline.conf` file, substituting your values for the
placeholder values shown here:
[source,json]
--------------------------------------------------------------------------------
twitter {
    consumer_key => "enter_your_consumer_key_here"
    consumer_secret => "enter_your_secret_here"
    keywords => ["cloud"]
    oauth_token => "enter_your_access_token_here"
    oauth_token_secret => "enter_your_access_token_secret_here"
}
--------------------------------------------------------------------------------
[float]
[[configuring-lsf]]
==== The Filebeat Client
The https://github.com/elastic/beats/tree/master/filebeat[Filebeat] client is a lightweight, resource-friendly tool that
collects logs from files on the server and forwards these logs to your Logstash instance for processing. Filebeat is
designed for reliability and low latency. Filebeat uses the computing resources of the machine hosting the source data,
and the {logstash}plugins-inputs-beats.html[`Beats input`] plugin minimizes the
resource demands on the Logstash instance.
NOTE: In a typical use case, Filebeat runs on a separate machine from the machine running your
Logstash instance. For the purposes of this tutorial, Logstash and Filebeat are running on the
same machine.
The default Logstash installation includes the {logstash}plugins-inputs-beats.html[`Beats input`] plugin. To install
Filebeat on your data source machine, download the appropriate package from the Filebeat https://www.elastic.co/downloads/beats/filebeat[product page].
After installing Filebeat, you need to configure it. Open the `filebeat.yml` file located in your Filebeat installation
directory, and replace the contents with the following lines. Make sure `paths` points to your system log files:
[source,shell]
--------------------------------------------------------------------------------
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log <1>
  fields:
    type: syslog <2>
output.logstash:
  hosts: ["localhost:5043"]
--------------------------------------------------------------------------------
<1> Absolute path to the file or files that Filebeat processes.
<2> Adds a field called `type` with the value `syslog` to the event.
Save your changes.
To keep the configuration simple, you won't specify TLS/SSL settings as you would in a real world
scenario.
Configure your Logstash instance to use the Filebeat input plugin by adding the following lines to the `input` section
of the `second-pipeline.conf` file:
[source,json]
--------------------------------------------------------------------------------
beats {
    port => "5043"
}
--------------------------------------------------------------------------------
[float]
[[logstash-file-output]]
==== Writing Logstash Data to a File
You can configure your Logstash pipeline to write data directly to a file with the
{logstash}plugins-outputs-file.html[`file`] output plugin.
Configure your Logstash instance to use the `file` output plugin by adding the following lines to the `output` section
of the `second-pipeline.conf` file:
[source,json]
--------------------------------------------------------------------------------
file {
    path => "/path/to/target/file"
}
--------------------------------------------------------------------------------
[float]
[[multiple-es-nodes]]
==== Writing to Multiple Elasticsearch Nodes
Writing to multiple Elasticsearch nodes lightens the resource demands on a given Elasticsearch node, as well as
providing redundant points of entry into the cluster when a particular node is unavailable.
To configure your Logstash instance to write to multiple Elasticsearch nodes, edit the `output` section of the `second-pipeline.conf` file to read:
[source,json]
--------------------------------------------------------------------------------
output {
    elasticsearch {
        hosts => ["IP Address 1:port1", "IP Address 2:port2", "IP Address 3"]
    }
}
--------------------------------------------------------------------------------
Use the IP addresses of three non-master nodes in your Elasticsearch cluster in the `hosts` line. When the `hosts`
parameter lists multiple IP addresses, Logstash load-balances requests across the list of addresses. Also note that
the default port for Elasticsearch is `9200` and can be omitted in the configuration above.
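For example, with hypothetical node addresses (the addresses below are placeholders, not part of the tutorial data), the `output` section might look like this, where the third entry relies on the default port:
[source,json]
--------------------------------------------------------------------------------
output {
    elasticsearch {
        hosts => ["10.0.0.11:9200", "10.0.0.12:9200", "10.0.0.13"]
    }
}
--------------------------------------------------------------------------------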
[float]
[[testing-second-pipeline]]
===== Testing the Pipeline
At this point, your `second-pipeline.conf` file looks like this:
[source,json]
--------------------------------------------------------------------------------
input {
    twitter {
        consumer_key => "enter_your_consumer_key_here"
        consumer_secret => "enter_your_secret_here"
        keywords => ["cloud"]
        oauth_token => "enter_your_access_token_here"
        oauth_token_secret => "enter_your_access_token_secret_here"
    }
    beats {
        port => "5043"
    }
}
output {
    elasticsearch {
        hosts => ["IP Address 1:port1", "IP Address 2:port2", "IP Address 3"]
    }
    file {
        path => "/path/to/target/file"
    }
}
--------------------------------------------------------------------------------
Logstash is consuming data from the Twitter feed you configured, receiving data from Filebeat, and
indexing this information to three nodes in an Elasticsearch cluster as well as writing to a file.
At the data source machine, run Filebeat with the following command:
[source,shell]
--------------------------------------------------------------------------------
sudo ./filebeat -e -c filebeat.yml -d "publish"
--------------------------------------------------------------------------------
Filebeat will attempt to connect on port 5043. Until Logstash starts with an active Beats plugin, there
won't be any answer on that port, so any messages you see regarding failure to connect on that port are normal for now.
To verify your configuration, run the following command:
[source,shell]
--------------------------------------------------------------------------------
bin/logstash -f second-pipeline.conf --config.test_and_exit
--------------------------------------------------------------------------------
The `--config.test_and_exit` option parses your configuration file and reports any errors. When the configuration file
passes the configuration test, start Logstash with the following command:
[source,shell]
--------------------------------------------------------------------------------
bin/logstash -f second-pipeline.conf
--------------------------------------------------------------------------------
Use the `grep` utility to search in the target file to verify that information is present:
[source,shell]
--------------------------------------------------------------------------------
grep syslog /path/to/target/file
--------------------------------------------------------------------------------
Run an Elasticsearch query to find the same information in the Elasticsearch cluster:
[source,shell]
--------------------------------------------------------------------------------
curl -XGET 'localhost:9200/logstash-$DATE/_search?pretty&q=fields.type:syslog'
--------------------------------------------------------------------------------
Replace $DATE with the current date, in YYYY.MM.DD format.
To see data from the Twitter feed, try this query:
[source,shell]
--------------------------------------------------------------------------------
curl -XGET 'http://localhost:9200/logstash-$DATE/_search?pretty&q=client:iphone'
--------------------------------------------------------------------------------
Again, remember to replace $DATE with the current date, in YYYY.MM.DD format.