diff --git a/docs/1.2.0.beta1/codecs/dots.html b/docs/1.2.0.beta1/codecs/dots.html deleted file mode 100644 index ecdf6a55b..000000000 --- a/docs/1.2.0.beta1/codecs/dots.html +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: logstash docs for codecs/dots -layout: content_right ---- -

dots

-

Milestone: 1

- - - - -

Synopsis

- -This is what it might look like in your config file: - -
# with an input plugin:
-# you can also use this codec with an output.
-input { 
-  file { 
-    codec =>   dots {
-  }
-  }
-}
-
- -

Details

- - -
- -This is documentation from lib/logstash/codecs/dots.rb diff --git a/docs/1.2.0.beta1/codecs/json.html b/docs/1.2.0.beta1/codecs/json.html deleted file mode 100644 index 616415049..000000000 --- a/docs/1.2.0.beta1/codecs/json.html +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: logstash docs for codecs/json -layout: content_right ---- -

json

-

Milestone: 1

- -

This is the base class for logstash codecs.

- - -

Synopsis

- -This is what it might look like in your config file: - -
# with an input plugin:
-# you can also use this codec with an output.
-input { 
-  file { 
-    codec =>   json {
-  }
-  }
-}
-
- -

Details

- - -
- -This is documentation from lib/logstash/codecs/json.rb diff --git a/docs/1.2.0.beta1/codecs/json_spooler.html b/docs/1.2.0.beta1/codecs/json_spooler.html deleted file mode 100644 index 2888b8e53..000000000 --- a/docs/1.2.0.beta1/codecs/json_spooler.html +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: logstash docs for codecs/json_spooler -layout: content_right ---- -

json_spooler

-

Milestone: 1

- - - - -

Synopsis

- -This is what it might look like in your config file: - -
# with an input plugin:
-# you can also use this codec with an output.
-input { 
-  file { 
-    codec =>   json_spooler {
-      spool_size => ... # number (optional), default: 50
-  }
-  }
-}
-
- -

Details

- -

- - spool_size - - -

- - - - - - -
- -This is documentation from lib/logstash/codecs/json_spooler.rb diff --git a/docs/1.2.0.beta1/codecs/line.html b/docs/1.2.0.beta1/codecs/line.html deleted file mode 100644 index 931f6843c..000000000 --- a/docs/1.2.0.beta1/codecs/line.html +++ /dev/null @@ -1,70 +0,0 @@ ---- -title: logstash docs for codecs/line -layout: content_right ---- -

line

-

Milestone: 3

- -

Line-oriented text data.

- -

Decoding behavior: Only whole line events will be emitted.

- -

Encoding behavior: Each event will be emitted with a trailing newline.

- - -

Synopsis

- -This is what it might look like in your config file: - -
# with an input plugin:
-# you can also use this codec with an output.
-input { 
-  file { 
-    codec =>   line {
-      charset => ... # string, one of ["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-1251", "BINARY", "IBM437", "CP437", "IBM737", "CP737", "IBM775", "CP775", "CP850", "IBM850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "CP857", "IBM860", "CP860", "IBM861", "CP861", "IBM862", "CP862", "IBM863", "CP863", "IBM864", "CP864", "IBM865", "CP865", "IBM866", "CP866", "IBM869", "CP869", "Windows-1258", "CP1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "Big5-HKSCS:2008", "CP951", "stateless-ISO-2022-JP", "eucJP", "eucJP-ms", "euc-jp-ms", "CP51932", "eucKR", "eucTW", "GB2312", "EUC-CN", "eucCN", "GB12345", "CP936", "ISO-2022-JP", "ISO2022-JP", "ISO-2022-JP-2", "ISO2022-JP2", "CP50220", "CP50221", "ISO8859-1", "Windows-1252", "CP1252", "ISO8859-2", "Windows-1250", "CP1250", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "Windows-1256", "CP1256", "ISO8859-7", "Windows-1253", "CP1253", "ISO8859-8", "Windows-1255", "CP1255", "ISO8859-9", "Windows-1254", "CP1254", "ISO8859-10", "ISO8859-11", "TIS-620", "Windows-874", "CP874", "ISO8859-13", "Windows-1257", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "Windows-31J", "CP932", "csWindows31J", "SJIS", "PCK", "MacJapanese", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "UTF-7", "CP65000", "CP65001", "UTF8-MAC", "UTF-8-MAC", "UTF-8-HFS", "UTF-16", "UTF-32", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP1251", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", 
"UTF8-SoftBank", "SJIS-SoftBank", "locale", "external", "filesystem", "internal"] (optional), default: "UTF-8"
-      format => ... # string (optional)
-  }
-  }
-}
-
- -

Details

- -

- - charset - - -

- - - -

The character encoding used in this input. Examples include "UTF-8" -and "cp1252".

- -

This setting is useful if your log files are in Latin-1 (aka cp1252) -or in some character set other than UTF-8.

- -

This only affects "plain" format logs since json is UTF-8 already.

- -
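As a concrete illustration of why the charset setting matters, here is a small Python sketch (not logstash code) of interpreting bytes written by a Latin-1/cp1252 logger:

```python
raw = b"caf\xe9"             # "café" as a cp1252/Latin-1 logger writes it
# raw.decode("utf-8") would raise UnicodeDecodeError: a lone 0xE9 byte is
# not valid UTF-8, which is why the input must know the source charset.
text = raw.decode("cp1252")  # interpret the bytes using the charset setting
utf8 = text.encode("utf-8")  # b"caf\xc3\xa9" -- the UTF-8 form used internally
```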

- - format - - -

- - - -

Set the desired text format for encoding.

- - -
- -This is documentation from lib/logstash/codecs/line.rb diff --git a/docs/1.2.0.beta1/codecs/msgpack.html b/docs/1.2.0.beta1/codecs/msgpack.html deleted file mode 100644 index 174c1f496..000000000 --- a/docs/1.2.0.beta1/codecs/msgpack.html +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: logstash docs for codecs/msgpack -layout: content_right ---- -

msgpack

-

Milestone: 1

- - - - -

Synopsis

- -This is what it might look like in your config file: - -
# with an input plugin:
-# you can also use this codec with an output.
-input { 
-  file { 
-    codec =>   msgpack {
-      format => ... # string (optional), default: nil
-  }
-  }
-}
-
- -

Details

- -

- - format - - -

- - - - - - -
- -This is documentation from lib/logstash/codecs/msgpack.rb diff --git a/docs/1.2.0.beta1/codecs/multiline.html b/docs/1.2.0.beta1/codecs/multiline.html deleted file mode 100644 index 32dba9929..000000000 --- a/docs/1.2.0.beta1/codecs/multiline.html +++ /dev/null @@ -1,179 +0,0 @@ ---- -title: logstash docs for codecs/multiline -layout: content_right ---- -

multiline

-

Milestone: 1

- -

The multiline codec is for taking line-oriented text and merging multiple lines into a -single event.

- -

The original goal of this codec was to allow joining of multi-line messages -from files into a single event. For example - joining java exception and -stacktrace messages into a single event.

- -

The config looks like this:

- -
input {
-  stdin {
-    codec => multiline {
-      pattern => "pattern, a regexp"
-      negate => true or false
-      what => "previous" or "next"
-    }
-  }
-}
-
- -

The 'pattern' should match what you believe to be an indicator that the line -is part of a multi-line event.

- -

The 'what' must be "previous" or "next" and indicates the relation -to the multi-line event.

- -

The 'negate' can be "true" or "false" (defaults to false). If true, a -message not matching the pattern will constitute a match of the multiline -filter and the 'what' action will be applied. (The inverse also holds.)

- -

For example, java stack traces are multiline and usually have the message -starting at the far-left, then each subsequent line indented. Do this:

- -
input {
-  stdin {
-    codec => multiline {
-      pattern => "^\s"
-      what => "previous"
-    }
-  }
-}
-
- -

This says that any line starting with whitespace belongs to the previous line.

- -

Another example is C line continuations (backslash). Here's how to do that:

- -
filter {
-  multiline {
-    type => "somefiletype "
-    pattern => "\\$"
-    what => "next"
-  }
-}
-
- -
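The merging behavior the two examples above illustrate can be sketched in a few lines of Python. This is a hypothetical illustration of the 'pattern'/'negate'/'what' semantics, not the codec's actual implementation:

```python
import re

def merge_multiline(lines, pattern, what="previous", negate=False):
    """Merge line-oriented text into multi-line events.

    A line "matches" when the regexp matches it; 'negate' inverts that
    sense. With what="previous", matching lines are folded into the
    event before them; with what="next", they attach to the line after.
    """
    events, pending = [], []
    rx = re.compile(pattern)
    for line in lines:
        matched = bool(rx.search(line)) != negate
        if what == "previous":
            if matched and pending:
                pending.append(line)          # joins the previous event
            else:
                if pending:
                    events.append("\n".join(pending))
                pending = [line]              # starts a new event
        else:  # what == "next": a matching line belongs with what follows
            pending.append(line)
            if not matched:
                events.append("\n".join(pending))
                pending = []
    if pending:
        events.append("\n".join(pending))
    return events

# Java stack trace case: indented lines join the previous line.
trace = ["Exception in thread main", "  at foo()", "  at bar()", "INFO done"]
events = merge_multiline(trace, r"^\s", what="previous")
# events[0] holds the three stack-trace lines; events[1] is "INFO done"
```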

This is the base class for logstash codecs.

- - -

Synopsis

- -This is what it might look like in your config file: - -
# with an input plugin:
-# you can also use this codec with an output.
-input { 
-  file { 
-    codec =>   multiline {
-      charset => ... # string, one of ["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-1251", "BINARY", "IBM437", "CP437", "IBM737", "CP737", "IBM775", "CP775", "CP850", "IBM850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "CP857", "IBM860", "CP860", "IBM861", "CP861", "IBM862", "CP862", "IBM863", "CP863", "IBM864", "CP864", "IBM865", "CP865", "IBM866", "CP866", "IBM869", "CP869", "Windows-1258", "CP1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "Big5-HKSCS:2008", "CP951", "stateless-ISO-2022-JP", "eucJP", "eucJP-ms", "euc-jp-ms", "CP51932", "eucKR", "eucTW", "GB2312", "EUC-CN", "eucCN", "GB12345", "CP936", "ISO-2022-JP", "ISO2022-JP", "ISO-2022-JP-2", "ISO2022-JP2", "CP50220", "CP50221", "ISO8859-1", "Windows-1252", "CP1252", "ISO8859-2", "Windows-1250", "CP1250", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "Windows-1256", "CP1256", "ISO8859-7", "Windows-1253", "CP1253", "ISO8859-8", "Windows-1255", "CP1255", "ISO8859-9", "Windows-1254", "CP1254", "ISO8859-10", "ISO8859-11", "TIS-620", "Windows-874", "CP874", "ISO8859-13", "Windows-1257", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "Windows-31J", "CP932", "csWindows31J", "SJIS", "PCK", "MacJapanese", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "UTF-7", "CP65000", "CP65001", "UTF8-MAC", "UTF-8-MAC", "UTF-8-HFS", "UTF-16", "UTF-32", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP1251", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", 
"UTF8-SoftBank", "SJIS-SoftBank", "locale", "external", "filesystem", "internal"] (optional), default: "UTF-8"
-      negate => ... # boolean (optional), default: false
-      pattern => ... # string (required)
-      patterns_dir => ... # array (optional), default: []
-      what => ... # string, one of ["previous", "next"] (required)
-  }
-  }
-}
-
- -

Details

- -

- - charset - - -

- - - -

The character encoding used in this input. Examples include "UTF-8" -and "cp1252".

- -

This setting is useful if your log files are in Latin-1 (aka cp1252) -or in some character set other than UTF-8.

- -

This only affects "plain" format logs since json is UTF-8 already.

- -

- - negate - - -

- - - -

Negate the regexp pattern ('if not matched')

- -

- - pattern (required setting) - - -

- - - -

The regular expression to match

- -

- - patterns_dir - - -

- - - -

logstash ships by default with a bunch of patterns, so you don't -necessarily need to define this yourself unless you are adding additional -patterns.

- -

Pattern files are plain text with format:

- -
NAME PATTERN
-
- -

For example:

- -
NUMBER \d+
-
- -
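A pattern file is just a name, a space, and a regexp per line. A minimal Python sketch of a parser for this format (hypothetical, not logstash's actual loader) might look like:

```python
def parse_pattern_file(text):
    """Parse 'NAME PATTERN' lines into a dict of named regexps."""
    patterns = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, _, pattern = line.partition(" ")
        patterns[name] = pattern
    return patterns

# e.g. parse_pattern_file("NUMBER \\d+") yields {"NUMBER": "\\d+"}
```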

- - what (required setting) - - -

- - - -

If the pattern matched, does the event belong to the next or previous event?

- - -
- -This is documentation from lib/logstash/codecs/multiline.rb diff --git a/docs/1.2.0.beta1/codecs/noop.html b/docs/1.2.0.beta1/codecs/noop.html deleted file mode 100644 index 3be70251a..000000000 --- a/docs/1.2.0.beta1/codecs/noop.html +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: logstash docs for codecs/noop -layout: content_right ---- -

noop

-

Milestone: 1

- - - - -

Synopsis

- -This is what it might look like in your config file: - -
# with an input plugin:
-# you can also use this codec with an output.
-input { 
-  file { 
-    codec =>   noop {
-  }
-  }
-}
-
- -

Details

- - -
- -This is documentation from lib/logstash/codecs/noop.rb diff --git a/docs/1.2.0.beta1/codecs/oldlogstashjson.html b/docs/1.2.0.beta1/codecs/oldlogstashjson.html deleted file mode 100644 index 6df2dedfe..000000000 --- a/docs/1.2.0.beta1/codecs/oldlogstashjson.html +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: logstash docs for codecs/oldlogstashjson -layout: content_right ---- -

oldlogstashjson

-

Milestone: 1

- - - - -

Synopsis

- -This is what it might look like in your config file: - -
# with an input plugin:
-# you can also use this codec with an output.
-input { 
-  file { 
-    codec =>   oldlogstashjson {
-  }
-  }
-}
-
- -

Details

- - -
- -This is documentation from lib/logstash/codecs/oldlogstashjson.rb diff --git a/docs/1.2.0.beta1/codecs/plain.html b/docs/1.2.0.beta1/codecs/plain.html deleted file mode 100644 index be55d4e5c..000000000 --- a/docs/1.2.0.beta1/codecs/plain.html +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: logstash docs for codecs/plain -layout: content_right ---- -

plain

-

Milestone: 3

- -

The "plain" codec is for plain text with no delimiting between events.

- -

This is mainly useful on inputs and outputs that already have a defined -framing in their transport protocol (such as zeromq, rabbitmq, redis, etc)

- - -

Synopsis

- -This is what it might look like in your config file: - -
# with an input plugin:
-# you can also use this codec with an output.
-input { 
-  file { 
-    codec =>   plain {
-      charset => ... # string, one of ["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-1251", "BINARY", "IBM437", "CP437", "IBM737", "CP737", "IBM775", "CP775", "CP850", "IBM850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "CP857", "IBM860", "CP860", "IBM861", "CP861", "IBM862", "CP862", "IBM863", "CP863", "IBM864", "CP864", "IBM865", "CP865", "IBM866", "CP866", "IBM869", "CP869", "Windows-1258", "CP1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "Big5-HKSCS:2008", "CP951", "stateless-ISO-2022-JP", "eucJP", "eucJP-ms", "euc-jp-ms", "CP51932", "eucKR", "eucTW", "GB2312", "EUC-CN", "eucCN", "GB12345", "CP936", "ISO-2022-JP", "ISO2022-JP", "ISO-2022-JP-2", "ISO2022-JP2", "CP50220", "CP50221", "ISO8859-1", "Windows-1252", "CP1252", "ISO8859-2", "Windows-1250", "CP1250", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "Windows-1256", "CP1256", "ISO8859-7", "Windows-1253", "CP1253", "ISO8859-8", "Windows-1255", "CP1255", "ISO8859-9", "Windows-1254", "CP1254", "ISO8859-10", "ISO8859-11", "TIS-620", "Windows-874", "CP874", "ISO8859-13", "Windows-1257", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "Windows-31J", "CP932", "csWindows31J", "SJIS", "PCK", "MacJapanese", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "UTF-7", "CP65000", "CP65001", "UTF8-MAC", "UTF-8-MAC", "UTF-8-HFS", "UTF-16", "UTF-32", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP1251", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", 
"UTF8-SoftBank", "SJIS-SoftBank", "locale", "external", "filesystem", "internal"] (optional), default: "UTF-8"
-      format => ... # string (optional)
-  }
-  }
-}
-
- -

Details

- -

- - charset - - -

- - - -

The character encoding used in this input. Examples include "UTF-8" -and "cp1252".

- -

This setting is useful if your log files are in Latin-1 (aka cp1252) -or in some character set other than UTF-8.

- -

This only affects "plain" format logs since json is UTF-8 already.

- -

- - format - - -

- - - -

Set the message you wish to emit for each event. This supports sprintf -strings.

- -

This setting only affects outputs (encoding of events).

- - -
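The sprintf-style substitution that 'format' supports can be sketched as follows. This is a simplified Python illustration (top-level fields only, hypothetical helper name), not logstash's implementation:

```python
import re

def format_event(fmt, event):
    """Replace %{name} references in fmt with the event's field values."""
    return re.sub(r"%\{(\w+)\}",
                  lambda m: str(event.get(m.group(1), "")),
                  fmt)

line = format_event("%{host} said: %{message}",
                    {"host": "web1", "message": "hello"})
# line == "web1 said: hello"
```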
- -This is documentation from lib/logstash/codecs/plain.rb diff --git a/docs/1.2.0.beta1/codecs/rubydebug.html b/docs/1.2.0.beta1/codecs/rubydebug.html deleted file mode 100644 index 7fcba4c8a..000000000 --- a/docs/1.2.0.beta1/codecs/rubydebug.html +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: logstash docs for codecs/rubydebug -layout: content_right ---- -

rubydebug

-

Milestone: 3

- - - - -

Synopsis

- -This is what it might look like in your config file: - -
# with an input plugin:
-# you can also use this codec with an output.
-input { 
-  file { 
-    codec =>   rubydebug {
-  }
-  }
-}
-
- -

Details

- - -
- -This is documentation from lib/logstash/codecs/rubydebug.rb diff --git a/docs/1.2.0.beta1/codecs/spool.html b/docs/1.2.0.beta1/codecs/spool.html deleted file mode 100644 index 264fc565e..000000000 --- a/docs/1.2.0.beta1/codecs/spool.html +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: logstash docs for codecs/spool -layout: content_right ---- -

spool

-

Milestone: 1

- - - - -

Synopsis

- -This is what it might look like in your config file: - -
# with an input plugin:
-# you can also use this codec with an output.
-input { 
-  file { 
-    codec =>   spool {
-      spool_size => ... # number (optional), default: 50
-  }
-  }
-}
-
- -

Details

- -

- - spool_size - - -

- - - - - - -
- -This is documentation from lib/logstash/codecs/spool.rb diff --git a/docs/1.2.0.beta1/configuration.md b/docs/1.2.0.beta1/configuration.md deleted file mode 100644 index 35e1edbe6..000000000 --- a/docs/1.2.0.beta1/configuration.md +++ /dev/null @@ -1,247 +0,0 @@ ---- -title: Configuration Language - logstash -layout: content_right ---- -# LogStash Config Language - -The logstash config language aims to be simple. - -There are three main sections: inputs, filters, outputs. Each section has -configurations for each plugin available in that section. - -Example: - - # This is a comment. You should use comments to describe - # parts of your configuration. - input { - ... - } - - filter { - ... - } - - output { - ... - } - -## Filters and Ordering - -For a given event, filters are applied in the order of appearance in the -configuration file. - -## Comments - -Comments are as in ruby, perl, and python. They start with a '#' character. Example: - - # this is a comment - - input { # comments can appear at the end of a line, too - # ... - } - -## Plugins - -The input, filter, and output sections all let you configure plugins. Plugin -configuration consists of the plugin name followed by a block of settings for -that plugin. For example, how about two file inputs: - - input { - file { - path => "/var/log/messages" - type => "syslog" - } - - file { - path => "/var/log/apache/access.log" - type => "apache" - } - } - -The above configures two separate file inputs. Both set two -configuration settings each: path and type. Each plugin has different -settings for configuring it; see the documentation for your plugin to -learn what settings are available and what they mean. For example, the -[file input][fileinput] documentation will explain the meanings of the -path and type settings. - -[fileinput]: inputs/file - -## Value Types - -The documentation for a plugin may say that a configuration field has a -certain type. Examples include boolean, string, array, number, hash, -etc.
- -### Boolean - -A boolean must be either `true` or `false`. - -Examples: - - debug => true - -### String - -A string must be a single value. - -Example: - - name => "Hello world" - -Single, unquoted words are valid as strings, too, but you should use quotes. - -### Number - -Numbers must be valid numerics (floating point or integer are OK). - -Example: - - port => 33 - -### Array - -An array can be a single value or multiple values. If you specify the same -field multiple times, it appends to the array. - -Examples: - - path => [ "/var/log/messages", "/var/log/*.log" ] - path => "/data/mysql/mysql.log" - -The above makes 'path' a 3-element array including all 3 strings. - -### Hash - -A hash uses basically the same syntax as Ruby hashes. -The key and value are simply pairs, such as: - - match => { "field1" => "value1", "field2" => "value2", ... } - -## Field References - -All events have properties. For example, an apache access log would have things -like status code, request path, http verb, client ip, etc. Logstash calls these -properties "fields." - -In many cases, it is useful to be able to refer to a field by name. To do this, -you can use the logstash field reference syntax. - -By way of example, let us suppose we have this event: - - { - "agent": "Mozilla/5.0 (compatible; MSIE 9.0)", - "ip": "192.168.24.44", - "request": "/index.html", - "response": { - "status": 200, - "bytes": 52353 - }, - "ua": { - "os": "Windows 7" - } - } - -The syntax to access fields is `[fieldname]`. If you are only referring to a -top-level field, you can omit the `[]` and simply say `fieldname`. In the case -of nested fields, -like the "os" field above, you need the full path to that field: `[ua][os]`. - -## sprintf format - -This syntax is also used in what logstash calls 'sprintf format'. This format -allows you to refer to field values from within other strings.
For example, the -statsd output has an 'increment' setting, to allow you to keep a count of -apache logs by status code: - - output { - statsd { - increment => "apache.%{[response][status]}" - } - } - -You can also do time formatting in this sprintf format. Instead of specifying a field name, use the `+FORMAT` syntax where `FORMAT` is a [time format](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html). - -For example, if you want to use the file output to write to logs based on the -hour and the 'type' field: - - output { - file { - path => "/var/log/%{type}.%{+yyyy.MM.dd.HH}" - } - } - -## Conditionals - -Sometimes you only want a filter or output to process an event under -certain conditions. For that, you'll want to use a conditional! - -Conditionals in logstash look and act the same way they do in programming -languages. You have `if`, `else if` and `else` statements. Conditionals may be -nested if you need that. - -The syntax is as follows: - - if EXPRESSION { - ... - } else if EXPRESSION { - ... - } else { - ... - } - -What's an expression? Comparison tests, boolean logic, etc! - -The following comparison operators are supported: - -* equality, etc: == != < > <= >= -* regexp: =~ !~ -* inclusion: in - -The following boolean operators are supported: - -* and, or, nand, xor - -The following unary operators are supported: - -* ! - -Expressions may contain expressions. Expressions may be negated with `!`. -Expressions may be grouped with parentheses `(...)`. - -For example, if we want to remove the field `secret` if the field -`action` has a value of `login`: - - filter { - if [action] == "login" { - mutate { remove => "secret" } - } - } - -The above uses the field reference syntax to get the value of the -`action` field. It is compared against the text `login` and, when equal, -allows the mutate filter to delete the field named `secret`. - -How about a more complex example?
- -* alert nagios of any apache events with status 5xx -* record any 4xx status to elasticsearch -* record all status code hits via statsd - -How about telling nagios of any http event that has a status code of 5xx? - - output { - if [type] == "apache" { - if [status] =~ /^5\d\d/ { - nagios { ... } - } else if [status] =~ /^4\d\d/ { - elasticsearch { ... } - } - - statsd { increment => "apache.%{status}" } - } - } - -## Further Reading - -For more information, see [the plugin docs index](index) diff --git a/docs/1.2.0.beta1/docgen.rb b/docs/1.2.0.beta1/docgen.rb deleted file mode 100644 index 843c05d9b..000000000 --- a/docs/1.2.0.beta1/docgen.rb +++ /dev/null @@ -1,241 +0,0 @@ -require "rubygems" -require "erb" -require "optparse" -require "bluecloth" # for markdown parsing - -$: << Dir.pwd -$: << File.join(File.dirname(__FILE__), "..", "lib") - -require "logstash/config/mixin" -require "logstash/inputs/base" -require "logstash/codecs/base" -require "logstash/filters/base" -require "logstash/outputs/base" -require "logstash/version" - -class LogStashConfigDocGenerator - COMMENT_RE = /^ *#(?: (.*)| *$)/ - - def initialize - @rules = { - COMMENT_RE => lambda { |m| add_comment(m[1]) }, - /^ *class.*< *LogStash::(Outputs|Filters|Inputs|Codecs)::(Base|Threadable)/ => \ - lambda { |m| set_class_description }, - /^ *config +[^=].*/ => lambda { |m| add_config(m[0]) }, - /^ *milestone .*/ => lambda { |m| set_milestone(m[0]) }, - /^ *config_name .*/ => lambda { |m| set_config_name(m[0]) }, - /^ *flag[( ].*/ => lambda { |m| add_flag(m[0]) }, - /^ *(class|def|module) / => lambda { |m| clear_comments }, - } - end - - def parse(string) - clear_comments - buffer = "" - string.split(/\r\n|\n/).each do |line| - # Join long lines - if line =~ COMMENT_RE - # nothing - else - # Join extended lines - if line =~ /(, *$)|(\\$)|(\[ *$)/ - buffer += line.gsub(/\\$/, "") - next - end - end - - line = buffer + line - buffer = "" - - @rules.each do |re, action| - m = re.match(line) - if m - 
action.call(m) - end - end # RULES.each - end # string.split("\n").each - end # def parse - - def set_class_description - @class_description = @comments.join("\n") - clear_comments - end # def set_class_description - - def add_comment(comment) - @comments << comment - end # def add_comment - - def add_config(code) - # I just care about the 'config :name' part - code = code.sub(/,.*/, "") - - # call the code, which calls 'config' in this class. - # This will let us align comments with config options. - name, opts = eval(code) - - # TODO(sissel): This hack is only required until regexp configs - # are gone from logstash. - name = name.to_s unless name.is_a?(Regexp) - - description = BlueCloth.new(@comments.join("\n")).to_html - @attributes[name][:description] = description - clear_comments - end # def add_config - - def add_flag(code) - # call the code, which calls 'config' in this class. - # This will let us align comments with config options. - #p :code => code - fixed_code = code.gsub(/ do .*/, "") - #p :fixedcode => fixed_code - name, description = eval(fixed_code) - @flags[name] = description - clear_comments - end # def add_flag - - def set_config_name(code) - name = eval(code) - @name = name - end # def set_config_name - - def set_milestone(code) - @milestone = eval(code) - end - - # pretend to be the config DSL and just get the name - def config(name, opts={}) - return name, opts - end # def config - - # Pretend to support the flag DSL - def flag(*args, &block) - name = args.first - description = args.last - return name, description - end # def config - - # pretend to be the config dsl's 'config_name' method - def config_name(name) - return name - end # def config_name - - # pretend to be the config dsl's 'milestone' method - def milestone(m) - return m - end # def milestone - - def clear_comments - @comments.clear - end # def clear_comments - - def generate(file, settings) - @class_description = "" - @milestone = "" - @comments = [] - @attributes = Hash.new 
{ |h,k| h[k] = {} } - @flags = {} - - # local scoping for the monkeypatch belowg - attributes = @attributes - # Monkeypatch the 'config' method to capture - # Note, this monkeypatch requires us do the config processing - # one at a time. - #LogStash::Config::Mixin::DSL.instance_eval do - #define_method(:config) do |name, opts={}| - #p name => opts - #attributes[name].merge!(opts) - #end - #end - - # Loading the file will trigger the config dsl which should - # collect all the config settings. - load file - - # parse base first - parse(File.new(File.join(File.dirname(file), "base.rb"), "r").read) - - # Now parse the real library - code = File.new(file).read - - # inputs either inherit from Base or Threadable. - if code =~ /\< LogStash::Inputs::Threadable/ - parse(File.new(File.join(File.dirname(file), "threadable.rb"), "r").read) - end - - if code =~ /include LogStash::PluginMixins/ - mixin = code.gsub(/.*include LogStash::PluginMixins::(\w+)\s.*/m, '\1') - mixin.gsub!(/(.)([A-Z])/, '\1_\2') - mixin.downcase! - parse(File.new(File.join(File.dirname(file), "..", "plugin_mixins", "#{mixin}.rb")).read) - end - - parse(code) - - puts "Generating docs for #{file}" - - if @name.nil? - $stderr.puts "Missing 'config_name' setting in #{file}?" 
- return nil - end - - klass = LogStash::Config::Registry.registry[@name] - if klass.ancestors.include?(LogStash::Inputs::Base) - section = "input" - elsif klass.ancestors.include?(LogStash::Filters::Base) - section = "filter" - elsif klass.ancestors.include?(LogStash::Outputs::Base) - section = "output" - elsif klass.ancestors.include?(LogStash::Codecs::Base) - section = "codec" - end - - template_file = File.join(File.dirname(__FILE__), "plugin-doc.html.erb") - template = ERB.new(File.new(template_file).read, nil, "-") - - # descriptions are assumed to be markdown - description = BlueCloth.new(@class_description).to_html - - klass.get_config.each do |name, settings| - @attributes[name].merge!(settings) - end - sorted_attributes = @attributes.sort { |a,b| a.first.to_s <=> b.first.to_s } - klassname = LogStash::Config::Registry.registry[@name].to_s - name = @name - - synopsis_file = File.join(File.dirname(__FILE__), "plugin-synopsis.html.erb") - synopsis = ERB.new(File.new(synopsis_file).read, nil, "-").result(binding) - - if settings[:output] - dir = File.join(settings[:output], section + "s") - path = File.join(dir, "#{name}.html") - Dir.mkdir(settings[:output]) if !File.directory?(settings[:output]) - Dir.mkdir(dir) if !File.directory?(dir) - File.open(path, "w") do |out| - html = template.result(binding) - html.gsub!("1.2.0.beta1", LOGSTASH_VERSION) - html.gsub!("%PLUGIN%", @name) - out.puts(html) - end - else - puts template.result(binding) - end - end # def generate - -end # class LogStashConfigDocGenerator - -if __FILE__ == $0 - opts = OptionParser.new - settings = {} - opts.on("-o DIR", "--output DIR", - "Directory to output to; optional. 
If not specified,"\ - "we write to stdout.") do |val| - settings[:output] = val - end - - args = opts.parse(ARGV) - - args.each do |arg| - gen = LogStashConfigDocGenerator.new - gen.generate(arg, settings) - end -end diff --git a/docs/1.2.0.beta1/extending/example-add-a-new-filter.md b/docs/1.2.0.beta1/extending/example-add-a-new-filter.md deleted file mode 100644 index 458ae3d29..000000000 --- a/docs/1.2.0.beta1/extending/example-add-a-new-filter.md +++ /dev/null @@ -1,120 +0,0 @@ ---- -title: How to extend - logstash -layout: content_right ---- -# Add a new filter - -This document shows you how to add a new filter to logstash. - -For a general overview of how to add a new plugin, see [the extending -logstash](.) overview. - -## Write code. - -Let's write a 'hello world' filter. This filter will replace the 'message' in -the event with "Hello world!" - -First, logstash expects plugins in a certain directory structure: `logstash/TYPE/PLUGIN_NAME.rb` - -Since we're creating a filter, let's mkdir this: - - mkdir -p logstash/filters/ - cd logstash/filters - -Now add the code: - - # Call this file 'foo.rb' (in logstash/filters, as above) - require "logstash/filters/base" - require "logstash/namespace" - - class LogStash::Filters::Foo < LogStash::Filters::Base - - # Setting the config_name here is required. This is how you - # configure this filter from your logstash config. - # - # filter { - # foo { ... } - # } - config_name "foo" - # need to set a plugin_status - plugin_status "experimental" - - # Replace the message with this value. - config :message, :validate => :string - - public - def register - # nothing to do - end # def register - - public - def filter(event) - # return nothing unless there's an actual filter event - return unless filter?(event) - if @message - # Replace the event message with our message as configured in the - # config file. - # If no message is specified, do nothing. 
- event.message = @message - end - # filter_matched should go in the last line of our successful code - filter_matched(event) - end # def filter - end # class LogStash::Filters::Foo - -## Add it to your configuration - -For this simple example, let's just use stdin input and stdout output. -The config file looks like this: - - input { - stdin { type => "foo" } - } - filter { - foo { - type => "foo" - message => "Hello world!" - } - } - output { - stdout { } - } - -Call this file 'example.conf' - -## Tell logstash about it. - -Depending on how you installed logstash, you have a few ways of including this -plugin. - -You can use the agent flag --pluginpath flag to specify where the root of your -plugin tree is. In our case, it's the current directory. - - % logstash --pluginpath . -f example.conf - -If you use the jar release of logstash, you have an additional option - you can -include the plugin right in the jar file. - - # This command will take your 'logstash/filters/foo.rb' file - # and add it into the jar file. - % jar -uf logstash-1.2.0.beta1-flatjar.jar logstash/filters/foo.rb - - # Verify it's in the right location in the jar! - % jar tf logstash-1.2.0.beta1-flatjar.jar | grep foo.rb - logstash/filters/foo.rb - - % java -jar logstash-1.2.0.beta1-flatjar.jar agent -f example.conf - -## Example running - -In the example below, I typed in "the quick brown fox" after running the java -command. - - % java -jar logstash-1.2.0.beta1-flatjar.jar agent -f example.conf - the quick brown fox - 2011-05-12T01:05:09.495000Z stdin://snack.home/: Hello world! - -The output is the standard logstash stdout output, but in this case our "the -quick brown fox" message was replaced with "Hello world!" - -All done! 
:) diff --git a/docs/1.2.0.beta1/extending/index.md b/docs/1.2.0.beta1/extending/index.md deleted file mode 100644 index b9934d8d9..000000000 --- a/docs/1.2.0.beta1/extending/index.md +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: How to extend - logstash -layout: content_right ---- -# Extending logstash - -You can add your own input, output, or filter plugins to logstash. - -If you're looking to extend logstash today, please look at the existing plugins. - -## Good examples of plugins - -* [inputs/tcp](https://github.com/logstash/logstash/blob/master/lib/logstash/inputs/tcp.rb) -* [filters/multiline](https://github.com/logstash/logstash/blob/master/lib/logstash/filters/multiline.rb) -* [outputs/mongodb](https://github.com/logstash/logstash/blob/master/lib/logstash/outputs/mongodb.rb) - -## Common concepts - -* The `config_name` sets the name used in the config file. -* The `plugin_status` sets the status of the plugin for example `beta`. -* The `config` lines define config options. -* The `register` method is called per plugin instantiation. Do any of your initialization here. - -### Required modules - -All plugins should require the Logstash module. - - require 'logstash/namespace' - -### Plugin name - -Every plugin must have a name set with the `config_name` method. If this -is not specified plugins will fail to load with an error. - -### Plugin status - -Every plugin needs a status set using `plugin_status`. Valid values are -`stable`, `beta`, `experimental`, and `unsupported`. Plugins with either -the `experimental` and `unsupported` status will generate warnings when -used. - -### Config lines - -The `config` lines define configuration options and are constructed like -so: - - config :host, :validate => :string, :default => "0.0.0.0" - -The name of the option is specified, here `:host` and then the -attributes of the option. They can include `:validate`, `:default`, -`:required` (a Boolean `true` or `false`), and `:deprecated` (also a -Boolean). 
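-
-As an illustration, a `config` line combining several of these attributes might look like this (the option name and values are hypothetical):
-
-    config :port, :validate => :number, :default => 5000, :required => false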
- -## Inputs - -All inputs require the LogStash::Inputs::Base class: - - require 'logstash/inputs/base' - -Inputs have two methods: `register` and `run`. - -* Each input runs as its own thread. -* The `run` method is expected to run-forever. - -## Filters - -All filters require the LogStash::Filters::Base class: - - require 'logstash/filters/base' - -Filters have two methods: `register` and `filter`. - -* The `filter` method gets an event. -* Call `event.cancel` to drop the event. -* To modify an event, simply make changes to the event you are given. -* The return value is ignored. - -## Outputs - -All outputs require the LogStash::Outputs::Base class: - - require 'logstash/outputs/base' - -Outputs have two methods: `register` and `receive`. - -* The `register` method is called per plugin instantiation. Do any of your initialization here. -* The `receive` method is called when an event gets pushed to your output - -## Example: a new filter - -Learn by example how to [add a new filter to logstash](example-add-a-new-filter) - - diff --git a/docs/1.2.0.beta1/filters/advisor.html b/docs/1.2.0.beta1/filters/advisor.html deleted file mode 100644 index ac28914ac..000000000 --- a/docs/1.2.0.beta1/filters/advisor.html +++ /dev/null @@ -1,244 +0,0 @@ ---- -title: logstash docs for filters/advisor -layout: content_right ---- -

advisor

-

Milestone: 1

- -

INFORMATION: -The Advisor filter is designed to capture and correlate events. -Events must first be selected by a grep filter; Advisor can then pull out a copy of the first occurrence (like a clone), tagged "advisorfirst", -within the time_adv window. -After time_adv has elapsed, Advisor pulls out an event tagged "advisorinfo" that reports the number of identical events seen during time_adv. -INFORMATION ABOUT CLASS: -To do this job, a thread sleeps for time_adv. Events arriving at Advisor are assumed to be tagged, and an array stores the distinct events. -If an event is not present in the array, it is the first occurrence, and if the option is enabled Advisor pushes out a copy of it. -If the event is already present in the array, it is a repeat of an earlier event and is simply counted.
-USAGE: -This is an example of logstash config: -filter{ - advisor {

- -
time_adv => 1                     #(optional)
-send_first => true                #(optional)
-
- -

} -} -We analyze this: -time_adv => 1 -Means the time window after which the matched and collected events are pushed to outputs, tagged "advisorinfo". -send_first => true -Means the first distinct event that arrives at Advisor is pushed out as a clone copy, tagged "advisorfirst"

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  advisor {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    send_first => ... # boolean (optional), default: true
-    time_adv => ... # number (optional), default: 0
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  advisor {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  advisor {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  advisor {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  advisor {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - send_first - - -

- - - -

If true, the first occurrence of each distinct event will be pushed out as a copy

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - time_adv - - -

- - - -

If you do not set time_adv the plugin does nothing.

- -
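-
-<p>
-Putting the two options together, a minimal illustrative configuration (the values are examples, not recommendations):
-</p>
-
-<pre>
-filter {
-  advisor {
-    time_adv => 1
-    send_first => true
-  }
-}
-</pre>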

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/advisor.rb diff --git a/docs/1.2.0.beta1/filters/alter.html b/docs/1.2.0.beta1/filters/alter.html deleted file mode 100644 index 585fa5e7e..000000000 --- a/docs/1.2.0.beta1/filters/alter.html +++ /dev/null @@ -1,278 +0,0 @@ ---- -title: logstash docs for filters/alter -layout: content_right ---- -

alter

-

Milestone: 1

- -

The alter filter allows you to do general alterations to fields -that are not included in the normal mutate filter.

- -

NOTE: The functionality provided by this plugin is likely to -be merged into the 'mutate' filter in future versions.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  alter {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    coalesce => ... # array (optional)
-    condrewrite => ... # array (optional)
-    condrewriteother => ... # array (optional)
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  alter {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  alter {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - coalesce - - -

- - - -

Sets the value of field_name to the first nonnull expression among its arguments.

- -

Example:

- -
filter {
-  alter {
-    coalesce => [
-         "field_name", "value1", "value2", "value3", ...
-    ]
-  }
-}
-
- -

- - condrewrite - - -

- - - -

Change the content of the field to the specified value -if the actual content is equal to the expected one.

- -

Example:

- -
filter {
-  alter {
-    condrewrite => [ 
-         "field_name", "expected_value", "new_value" 
-         "field_name2", "expected_value2", "new_value2"
-         ....
-       ]
-  }
-}
-
- -

- - condrewriteother - - -

- - - -

Change the content of the field to the specified value -if the content of another field is equal to the expected one.

- -

Example:

- -
filter {
-  alter {
-    condrewriteother => [ 
-         "field_name", "expected_value", "field_name_to_change", "value",
-         "field_name2", "expected_value2", "field_name_to_change2", "value2",
-         ....
-    ]
-  }
-}
-
- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  alter {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  alter {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/alter.rb diff --git a/docs/1.2.0.beta1/filters/anonymize.html b/docs/1.2.0.beta1/filters/anonymize.html deleted file mode 100644 index a528ec6a1..000000000 --- a/docs/1.2.0.beta1/filters/anonymize.html +++ /dev/null @@ -1,237 +0,0 @@ ---- -title: logstash docs for filters/anonymize -layout: content_right ---- -

anonymize

-

Milestone: 1

- -

Anonymize fields by replacing values with a consistent hash.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  anonymize {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    algorithm => ... # string, one of ["SHA1", "SHA256", "SHA384", "SHA512", "MD5", "MURMUR3", "IPV4_NETWORK"] (required), default: "SHA1"
-    fields => ... # array (required)
-    key => ... # string (required)
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  anonymize {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  anonymize {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - algorithm (required setting) - - -

- - - -

digest/hash type

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - fields (required setting) - - -

- - - -

The fields to be anonymized

- -

- - key (required setting) - - -

- - - -

Hashing key -When using MURMUR3 the key is ignored but must still be set. -When using IPV4_NETWORK, the key is the subnet prefix length

- -
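-
-<p>
-For illustration only, a minimal anonymize configuration combining the required settings described above (the field name and key are example values):
-</p>
-
-<pre>
-filter {
-  anonymize {
-    algorithm => "SHA1"
-    fields => [ "clientip" ]
-    key => "my-secret-key"
-  }
-}
-</pre>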

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  anonymize {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  anonymize {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/anonymize.rb diff --git a/docs/1.2.0.beta1/filters/checksum.html b/docs/1.2.0.beta1/filters/checksum.html deleted file mode 100644 index 4f0ed5d25..000000000 --- a/docs/1.2.0.beta1/filters/checksum.html +++ /dev/null @@ -1,228 +0,0 @@ ---- -title: logstash docs for filters/checksum -layout: content_right ---- -

checksum

-

Milestone: 1

- -

This filter lets you create a checksum based on various parts -of the logstash event. -This can be useful for deduplicating messages or simply for providing -a custom unique identifier.

- -

This is VERY experimental and is largely a proof-of-concept

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  checksum {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    algorithm => ... # string, one of ["md5", "sha128", "sha256", "sha384"] (optional), default: "sha256"
-    keys => ... # array (optional), default: ["message", "@timestamp", "type"]
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  checksum {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  checksum {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - algorithm - - -

- - - - - -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - keys - - -

- - - -

A list of keys to use in creating the string to checksum. -Keys will be sorted before building the string; -keys and values will then be concatenated with pipe delimiters -and checksummed.

- -
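-
-<p>
-As an illustrative sketch, a checksum configuration using the keys option described above (the key names are example values):
-</p>
-
-<pre>
-filter {
-  checksum {
-    algorithm => "md5"
-    keys => [ "message", "host" ]
-  }
-}
-</pre>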

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  checksum {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  checksum {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/checksum.rb diff --git a/docs/1.2.0.beta1/filters/cipher.html b/docs/1.2.0.beta1/filters/cipher.html deleted file mode 100644 index a0a6ccbab..000000000 --- a/docs/1.2.0.beta1/filters/cipher.html +++ /dev/null @@ -1,389 +0,0 @@ ---- -title: logstash docs for filters/cipher -layout: content_right ---- -

cipher

-

Milestone: 1

- -

This filter takes a source field and applies a cipher or decipher to it before -storing the result in the target field.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  cipher {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    algorithm => ... # string (required)
-    base64 => ... # boolean (optional), default: true
-    cipher_padding => ... # string (optional)
-    iv => ... # string (optional)
-    key => ... # string (optional)
-    key_pad => ... #  (optional), default: "\x00"
-    key_size => ... # number (optional), default: 32
-    mode => ... # string (required)
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    source => ... # string (optional), default: "message"
-    target => ... # string (optional), default: "message"
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  cipher {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  cipher {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - algorithm (required setting) - - -

- - - -

The cipher algorithm

- -

A list of supported algorithms can be obtained by

- -
puts OpenSSL::Cipher.ciphers
-
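Beyond listing algorithm names, the encrypt-then-base64 / base64-then-decrypt roundtrip this filter performs can be sketched with Ruby's OpenSSL bindings. This is an illustrative sketch, not the plugin's own code; the algorithm, key, and message are example values.

```ruby
require "openssl"
require "base64"

algo = "AES-256-CBC"
key  = "0123456789abcdef0123456789abcdef"  # 32 bytes, as AES-256 requires

# Encrypt (what mode => "encrypt" does), then base64-encode the result
cipher = OpenSSL::Cipher.new(algo)
cipher.encrypt
cipher.key = key
iv = cipher.random_iv                       # generate and set a random IV
encrypted = cipher.update("hello world") + cipher.final
encoded = Base64.strict_encode64(encrypted)

# Decrypt (mode => "decrypt"): base64-decode first, then decipher
decipher = OpenSSL::Cipher.new(algo)
decipher.decrypt
decipher.key = key
decipher.iv = iv
plain = decipher.update(Base64.strict_decode64(encoded)) + decipher.final
# plain is "hello world" again
```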
- -

- - base64 - - -

- - - -

Do we have to perform a base64 decode or encode?

- -

If we are decrypting, base64 decode will be done before. -If we are encrypting, base64 will be done after.

- -

- - cipher_padding - - -

- - - -

Cipher padding to use. Enables or disables padding.

- -

By default encryption operations are padded using standard block padding -and the padding is checked and removed when decrypting. If the pad -parameter is zero then no padding is performed, the total amount of data -encrypted or decrypted must then be a multiple of the block size or an -error will occur.

- -

See EVP_CIPHER_CTX_set_padding for further information.

- -

We are using JRuby's OpenSSL, which defaults to PKCS5Padding. -If you want to change the padding, set this parameter. If you want to disable -padding, set this parameter to 0:

- -
filter { cipher { cipher_padding => 0 } }
-
- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - iv - - -

- - - -

The initialization vector to use

- -

The cipher modes CBC, CFB, OFB and CTR all need an "initialization -vector", or IV for short. ECB mode is the only mode that does not require -an IV, but there is almost no legitimate use case for this mode -because it does not sufficiently hide plaintext patterns.

- -

- - key - - -

- - - -

The key to use

- -

- - key_pad - - -

- - - -

The character used to pad the key

- -

- - key_size - - -

- - - -

The key size to pad the key to

- -

It depends on the cipher algorithm. If your key does not need -padding, do not set this parameter.

- -

For example, AES-256 requires a 32-character key:

- -
filter { cipher { key_size => 32 } }
-
- -

- - mode (required setting) - - -

- - - -

Encrypting or decrypting some data

- -

Valid values are encrypt or decrypt

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  cipher {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  cipher {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - source - - -

- - - -

The field on which to perform the filter

- -

For example, to use the message field (the default):

- -
filter { cipher { source => "message" } }
-
- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - target - - -

- - - -

The name of the field in which to store the result

- -

For example, to place the result into the crypt field:

- -
filter { cipher { target => "crypt" } }
-
- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/cipher.rb diff --git a/docs/1.2.0.beta1/filters/clone.html b/docs/1.2.0.beta1/filters/clone.html deleted file mode 100644 index a22b912a6..000000000 --- a/docs/1.2.0.beta1/filters/clone.html +++ /dev/null @@ -1,207 +0,0 @@ ---- -title: logstash docs for filters/clone -layout: content_right ---- -

clone

-

Milestone: 2

- -

The clone filter is for duplicating events. -A clone will be made for each type in the clone list. -The original event is left unchanged.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  clone {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    clones => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  clone {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  clone {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - clones - - -

- - - -

A new clone will be created with the given type for each type in this list.

- -
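-
-<p>
-For illustration, a clone configuration based on the description above; each entry in clones produces one copy of the event with its type set to that entry (the names are example values):
-</p>
-
-<pre>
-filter {
-  clone {
-    clones => [ "copy-for-archive", "copy-for-metrics" ]
-  }
-}
-</pre>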

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  clone {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  clone {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/clone.rb diff --git a/docs/1.2.0.beta1/filters/csv.html b/docs/1.2.0.beta1/filters/csv.html deleted file mode 100644 index 363055dff..000000000 --- a/docs/1.2.0.beta1/filters/csv.html +++ /dev/null @@ -1,258 +0,0 @@ ---- -title: logstash docs for filters/csv -layout: content_right ---- -

csv

-

Milestone: 2

- -

CSV filter. Takes an event field containing CSV data, parses it, -and stores it as individual fields (can optionally specify the names).

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  csv {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    columns => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    separator => ... # string (optional), default: ","
-    source => ... # string (optional), default: "message"
-    target => ... # string (optional)
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  csv {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  csv {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - columns - - -

- - - -

Define a list of column names (in the order they appear in the CSV, -as if it were a header line). If this is not specified or there -are not enough columns specified, the default column name is "columnX" -(where X is the field number, starting from 1).

- -
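-
-<p>
-For illustration, a csv configuration naming the columns as described above (the column names and separator are example values):
-</p>
-
-<pre>
-filter {
-  csv {
-    columns => [ "timestamp", "user", "action" ]
-    separator => ","
-  }
-}
-</pre>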

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  csv {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  csv {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - separator - - -

- - - -

Define the column separator value. If this is not specified, the default -is a comma ','. -Optional.

- -

- - source - - -

- - - -

The CSV data in the value of the source field will be expanded into a -data structure.

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - target - - -

- - - -

Define the target for placing the data. -Defaults to writing to the root of the event.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/csv.rb diff --git a/docs/1.2.0.beta1/filters/date.html b/docs/1.2.0.beta1/filters/date.html deleted file mode 100644 index 15b2bdd06..000000000 --- a/docs/1.2.0.beta1/filters/date.html +++ /dev/null @@ -1,298 +0,0 @@ ---- -title: logstash docs for filters/date -layout: content_right ---- -

date

-

Milestone: 3

- -

The date filter is used for parsing dates from fields and using that -date or timestamp as the timestamp for the event.

- -

For example, syslog events usually have timestamps like this:

- -
"Apr 17 09:32:01"
-
- -

You would use the date format "MMM dd HH:mm:ss" to parse this.

- -

The date filter is especially important for sorting events and for -backfilling old data. If you don't get the date correct in your -events, then searches for them later will likely return results out of order.

- -

In the absence of this filter, logstash will choose a timestamp based on the -first time it sees the event (at input time), if the timestamp is not already -set in the event. For example, with file input, the timestamp is set to the -time of each read.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  date {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    locale => ... # string (optional)
-    match => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    timezone => ... # string (optional)
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  date {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  date {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - locale - - -

- - - -

Specify a locale to be used for date parsing. If this is not specified, the -platform default will be used.

- -

The locale is mostly necessary to be set for parsing month names and -weekday names

- -
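A minimal sketch of setting the locale for month-name parsing (the 'logdate' field name is illustrative):

```
filter {
  date {
    locale => "en"
    match => [ "logdate", "MMM dd YYYY HH:mm:ss" ]
  }
}
```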

- - match - - -

- - - -

The date formats allowed are anything allowed by Joda-Time (java time -library): You can see the docs for this format here:

- -

joda.time.format.DateTimeFormat

- -

An array with field name first, and format patterns following, [ field, -formats... ]

- -

If your time field has multiple possible formats, you can do this:

- -
match => [ "logdate", "MMM dd YYY HH:mm:ss",
-          "MMM  d YYY HH:mm:ss", "ISO8601" ]
-
- -

The above will match a syslog (rfc3164) or iso8601 timestamp.

- -

There are a few special exceptions, the following format literals exist -to help you save time and ensure correctness of date parsing.

- - - - -

For example, if you have a field 'logdate' and with a value that looks like -'Aug 13 2010 00:03:44', you would use this configuration:

- -
filter {
-  date {
-    match => [ "logdate", "MMM dd YYYY HH:mm:ss" ]
-  }
-}
-
- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  date {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  date {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - timezone - - -

- - - -

Specify a canonical timezone ID to be used for date parsing. -The valid IDs are listed at http://joda-time.sourceforge.net/timezones.html. -This is useful when the timezone cannot be extracted from the value -and is not the platform default. -If this is not specified, the platform default will be used. -A canonical ID is preferable because it accounts for daylight saving time for you. -For example, America/Los_Angeles and Europe/Paris are valid IDs.

- -
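As a sketch, a timezone might be supplied alongside match like this (the 'logdate' field name is illustrative):

```
filter {
  date {
    match => [ "logdate", "MMM dd YYYY HH:mm:ss" ]
    timezone => "America/Los_Angeles"
  }
}
```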

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/date.rb diff --git a/docs/1.2.0.beta1/filters/dns.html b/docs/1.2.0.beta1/filters/dns.html deleted file mode 100644 index 164791ea1..000000000 --- a/docs/1.2.0.beta1/filters/dns.html +++ /dev/null @@ -1,294 +0,0 @@ ---- -title: logstash docs for filters/dns -layout: content_right ---- -

dns

-

Milestone: 2

- -

DNS Filter

- -

This filter will resolve any IP addresses from a field of your choosing.

- -

The DNS filter performs a lookup (either an A record/CNAME record lookup -or a reverse lookup at the PTR record) on records specified under the -"reverse" and "resolve" arrays.

- -

The config should look like this:

- -
filter {
-  dns {
-    type => 'type'
-    reverse => [ "source_host", "field_with_address" ]
-    resolve => [ "field_with_fqdn" ]
-    action => "replace"
-  }
-}
-
- -

Caveats: at the moment, there's no way to tune the timeout with the 'resolv' -core library. A fix appears to exist here:

- -

http://redmine.ruby-lang.org/issues/5100

- -

but isn't currently in JRuby.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  dns {
-    action => ... # string, one of ["append", "replace"] (optional), default: "append"
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    nameserver => ... # string (optional)
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    resolve => ... # array (optional)
-    reverse => ... # array (optional)
-    timeout => ... # int (optional), default: 2
-}
-
-}
-
- -

Details

- -

- - action - - -

- - - -

Determine what action to do: append or replace the values in the fields -specified under "reverse" and "resolve."

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  dns {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  dns {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - nameserver - - -

- - - -

Use a custom nameserver.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  dns {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  dns {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - resolve - - -

- - - -

Forward resolve one or more fields.

- -

- - reverse - - -

- - - -

Reverse resolve one or more fields.

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - timeout - - -

- - - -

TODO(sissel): make 'action' required? This was always the intent, but it -due to a typo it was never enforced. Thus the default behavior in past -versions was 'append' by accident. -resolv calls will be wrapped in a timeout instance

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/dns.rb diff --git a/docs/1.2.0.beta1/filters/drop.html b/docs/1.2.0.beta1/filters/drop.html deleted file mode 100644 index 85208fa22..000000000 --- a/docs/1.2.0.beta1/filters/drop.html +++ /dev/null @@ -1,204 +0,0 @@ ---- -title: logstash docs for filters/drop -layout: content_right ---- -

drop

-

Milestone: 1

- -

Drop filter.

- -

Drops everything that gets to this filter.

- -

This is best used in combination with conditionals, for example:

- -
filter {
-  if [loglevel] == "debug" { 
-    drop { } 
-  }
-}
-
- -

The above will only pass events to the drop filter if the loglevel field is -"debug". This will cause all events matching to be dropped.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  drop {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  drop {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  drop {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  drop {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  drop {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/drop.rb diff --git a/docs/1.2.0.beta1/filters/environment.html b/docs/1.2.0.beta1/filters/environment.html deleted file mode 100644 index 6b1277b61..000000000 --- a/docs/1.2.0.beta1/filters/environment.html +++ /dev/null @@ -1,206 +0,0 @@ ---- -title: logstash docs for filters/environment -layout: content_right ---- -

environment

-

Milestone: 1

- -

Set fields from environment variables

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  environment {
-    add_field => ... # hash (optional), default: {}
-    add_field_from_env => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  environment {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_field_from_env - - -

- - - -

Specify a hash mapping fields to environment variables: -a hash of field => environment variable.

- -
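As an illustrative sketch (the field and variable names here are hypothetical), the option takes field/variable pairs in the same style as the add_field examples:

```
filter {
  environment {
    add_field_from_env => [ "home_dir", "HOME" ]
  }
}
```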

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  environment {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  environment {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  environment {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/environment.rb diff --git a/docs/1.2.0.beta1/filters/gelfify.html b/docs/1.2.0.beta1/filters/gelfify.html deleted file mode 100644 index cbe1c35bc..000000000 --- a/docs/1.2.0.beta1/filters/gelfify.html +++ /dev/null @@ -1,191 +0,0 @@ ---- -title: logstash docs for filters/gelfify -layout: content_right ---- -

gelfify

-

Milestone: 2

- -

The GELFify filter parses RFC3164 severity levels to -corresponding GELF levels.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  gelfify {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  gelfify {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  gelfify {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  gelfify {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  gelfify {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/gelfify.rb diff --git a/docs/1.2.0.beta1/filters/geoip.html b/docs/1.2.0.beta1/filters/geoip.html deleted file mode 100644 index 1a0523ac3..000000000 --- a/docs/1.2.0.beta1/filters/geoip.html +++ /dev/null @@ -1,272 +0,0 @@ ---- -title: logstash docs for filters/geoip -layout: content_right ---- -

geoip

-

Milestone: 1

- -

Add GeoIP fields from Maxmind database

- -

GeoIP filter, adds information about geographical location of IP addresses. -This filter uses Maxmind GeoIP databases, have a look at -https://www.maxmind.com/app/geolite

- -

Logstash releases ship with the GeoLiteCity database made available from -Maxmind with a CCA-ShareAlike 3.0 license. For more details on geolite, see -http://www.maxmind.com/en/geolite.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  geoip {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    database => ... # a valid filesystem path (optional)
-    fields => ... # array (optional)
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    source => ... # string (optional)
-    target => ... # string (optional), default: "geoip"
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  geoip {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  geoip {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - database - - -

- - - -

GeoIP database file to use. Country, City, ASN, ISP and Organization -databases are supported.

- -

If not specified, this will default to the GeoLiteCity database that ships -with logstash.

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - fields - - -

- - - -

Array of geoip fields that we want to be included in our event.

- -

Possible fields depend on the database type. By default, all geoip fields -are included in the event.

- -

For the built in GeoLiteCity database, the following are available: -city_name, continent_code, country_code2, country_code3, country_name, -dma_code, ip, latitude, longitude, postal_code, region_name, timezone

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  geoip {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  geoip {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - source - - -

- - - -

The field containing the IP address; a hostname is also OK. If this field is an -array, only the first value will be used.

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - target - - -

- - - -

Specify the field into which you want the geoip data. -This can be useful, for example, if you have src_ip and dst_ip fields and want -information for both IPs.

- -
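To illustrate, separate targets could keep lookups for two hypothetical fields (src_ip and dst_ip) apart:

```
filter {
  geoip {
    source => "src_ip"
    target => "src_geoip"
  }
  geoip {
    source => "dst_ip"
    target => "dst_geoip"
  }
}
```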

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/geoip.rb diff --git a/docs/1.2.0.beta1/filters/grep.html b/docs/1.2.0.beta1/filters/grep.html deleted file mode 100644 index a92ce6cbc..000000000 --- a/docs/1.2.0.beta1/filters/grep.html +++ /dev/null @@ -1,278 +0,0 @@ ---- -title: logstash docs for filters/grep -layout: content_right ---- -

grep

-

Milestone: 3

- -

Grep filter. Useful for dropping events you don't want to pass, or -adding tags or fields to events that match.

- -

Events not matched are dropped. If 'negate' is set to true (defaults false), -then matching events are dropped.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  grep {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    drop => ... # boolean (optional), default: true
-    ignore_case => ... # boolean (optional), default: false
-    match => ... # hash (optional), default: {}
-    negate => ... # boolean (optional), default: false
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  grep {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  grep {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - drop - - -

- - - -

Drop events that don't match

- -

If this is set to false, no events will be dropped at all. Rather, the -requested tags and fields will be added to matching events, and -non-matching events will be passed through unchanged.

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - ignore_case - - -

- - - -

Use case-insensitive matching. Similar to 'grep -i'

- -

If enabled, ignore case distinctions in the patterns.

- -

- - match - - -

- - - -

A hash of matches of field => regexp. If multiple matches are specified, -all must match for the grep to be considered successful. Normal regular -expressions are supported here.

- -

For example:

- -
filter {
-  grep {
-    match => [ "message", "hello world" ]
-  }
-}
-
- -

The above will drop all events with a message not matching "hello world" as -a regular expression.

- -

- - negate - - -

- - - -

Negate the match. Similar to 'grep -v'

- -

If this is set to true, then any positive matches will result in the -event being cancelled and dropped. Non-matching will be allowed -through.

- -
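For instance, a sketch of inverting the usual behavior so that events matching a pattern are dropped (the "heartbeat" pattern here is illustrative):

```
filter {
  grep {
    match => [ "message", "heartbeat" ]
    negate => true
  }
}
```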

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  grep {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  grep {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/grep.rb diff --git a/docs/1.2.0.beta1/filters/grok.html b/docs/1.2.0.beta1/filters/grok.html deleted file mode 100644 index 736b2f70f..000000000 --- a/docs/1.2.0.beta1/filters/grok.html +++ /dev/null @@ -1,533 +0,0 @@ ---- -title: logstash docs for filters/grok -layout: content_right ---- -

grok

-

Milestone: 3

- -

Parse arbitrary text and structure it.

- -

Grok is currently the best way in logstash to parse crappy unstructured log -data into something structured and queryable.

- -

This tool is perfect for syslog logs, apache and other webserver logs, mysql -logs, and in general, any log format that is generally written for humans -and not computer consumption.

- -

Logstash ships with about 120 patterns by default. You can find them here: -https://github.com/logstash/logstash/tree/v1.2.0.beta1/patterns. You can add -your own trivially. (See the patterns_dir setting)

- -

If you need help building patterns to match your logs, you will find the -http://grokdebug.herokuapp.com tool quite useful!

- -

Grok Basics

- -

Grok works by combining text patterns into something that matches your -logs.

- -

The syntax for a grok pattern is %{SYNTAX:SEMANTIC}

- -

The SYNTAX is the name of the pattern that will match your text. For -example, "3.44" will be matched by the NUMBER pattern and "55.3.244.1" will -be matched by the IP pattern. The syntax is how you match.

- -

The SEMANTIC is the identifier you give to the piece of text being matched. -For example, "3.44" could be the duration of an event, so you could call it -simply 'duration'. Further, a string "55.3.244.1" might identify the client -making a request.

- -

Optionally you can add a data type conversion to your grok pattern. By default -all semantics are saved as strings. If you wish to convert a semantic's data type, -for example change a string to an integer, then suffix it with the target data type. -For example, %{NUMBER:num:int} converts the 'num' semantic from a string to an -integer. Currently the only supported conversions are int and float.

- -

Example

- -

With that idea of a syntax and semantic, we can pull out useful fields from a -sample log like this fictional http request log:

- -
55.3.244.1 GET /index.html 15824 0.043
-
- -

The pattern for this could be:

- -
%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}
-
- -

For a more realistic example, let's read these logs from a file:

- -
input {
-  file {
-    path => "/var/log/http.log"
-    type => "examplehttp"
-  }
-}
-filter {
-  grok {
-    type => "examplehttp"
-    match => [ "message", "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" ]
-  }
-}
-
- -

After the grok filter, the event will have a few extra fields in it:

- - - - -

Regular Expressions

- -

Grok sits on top of regular expressions, so any regular expressions are valid in grok as well. The regular expression library is Oniguruma, and you can see the full supported regexp syntax on the Oniguruma site.

- -

Custom Patterns

- -

Sometimes logstash doesn't have a pattern you need. For this, you have -a few options.

- -

First, you can use the Oniguruma syntax for 'named capture' which will -let you match a piece of text and save it as a field:

- -
(?<field_name>the pattern here)
-
- -

For example, postfix logs have a 'queue id' that is an 11-character -hexadecimal value. I can capture that easily like this:

- -
(?<queue_id>[0-9A-F]{11})
-
- -

Alternately, you can create a custom patterns file.

- - - - -

For example, doing the postfix queue id example as above:

- -
# in ./patterns/postfix 
-POSTFIX_QUEUEID [0-9A-F]{11}
-
- -

Then use the patterns_dir setting in this plugin to tell logstash where -your custom patterns directory is. Here's a full example with a sample log:

- -
Jan  1 06:25:43 mailserver14 postfix/cleanup[21403]: BEF25A72965: message-id=<20130101142543.5828399CCAF@mailserver14.example.com>
-
-filter {
-  grok {
-    patterns_dir => "./patterns"
-    match => [ "message", "%{SYSLOGBASE} %{POSTFIX_QUEUEID:queue_id}: %{GREEDYDATA:message}" ]
-  }
-}
-
- -

The above will match and result in the following fields:

- - - - -

The timestamp, logsource, program, and pid fields come from the -SYSLOGBASE pattern which itself is defined by other patterns.
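As an approximate sketch (the authoritative definitions live in the shipped patterns files and may differ), SYSLOGBASE is composed from other patterns roughly like:

```
SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}
SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:
```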

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  grok {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    break_on_match => ... # boolean (optional), default: true
-    drop_if_match => ... # boolean (optional), default: false
-    keep_empty_captures => ... # boolean (optional), default: false
-    match => ... # hash (optional), default: {}
-    named_captures_only => ... # boolean (optional), default: true
-    overwrite => ... # array (optional), default: []
-    patterns_dir => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    singles => ... # boolean (optional), default: true
-    tag_on_failure => ... # array (optional), default: ["_grokparsefailure"]
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  grok {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello", with the value above and the %{source} piece replaced with the corresponding value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  grok {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - break_on_match - - -

- - - -

Break on first match. The first successful match by grok will result in the -filter being finished. If you want grok to try all patterns (maybe you are -parsing different things), then set this to false.
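For example, a sketch that tries every pattern rather than stopping at the first match (the patterns here are illustrative):

```
filter {
  grok {
    break_on_match => false
    match => [
      "message", "%{IP:client}",
      "message", "%{NUMBER:duration}"
    ]
  }
}
```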

- -

- - drop_if_match - - -

- - - -

Drop if matched. Note, this feature may not stay. It is preferable to combine -grok + grep filters to do parsing + dropping.

- -

requested in: googlecode/issue/26

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - keep_empty_captures - - -

- - - -

If true, keep empty captures as event fields.

- -

- - match - - -

- - - -

A hash of matches of field => value

- -

For example:

- -
filter {
-  grok {
-    match => [ "message", "Duration: %{NUMBER:duration}" ]
-  }
-}
-
- -

- - named_captures_only - - -

- - - -

If true, only store named captures from grok.

- -

- - overwrite - - -

- - - -

The fields to overwrite.

- -

This allows you to overwrite a value in a field that already exists.

- -

For example, if you have a syslog line in the 'message' field, you can -overwrite the 'message' field with part of the match like so:

- -
filter {
-  grok {
-    match => [ 
-      "message",
-      "%{SYSLOGBASE} %{DATA:message}"
-    ]
-    overwrite => [ "message" ]
-  }
-}
-
- -

In this case, a line like "May 29 16:37:11 sadness logger: hello world" - will be parsed and 'hello world' will overwrite the original message.

- -

- - pattern - DEPRECATED - -

- - - -

Specify a pattern to parse with. This will match the 'message' field.

- -

If you want to match fields other than message, use the 'match' setting. Multiple patterns are fine.
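A sketch of migrating from the deprecated 'pattern' setting to 'match' (the pattern shown is illustrative):

```
filter {
  grok {
    # deprecated form:
    #   pattern => "%{IP:client}"
    # preferred form, explicit about the source field:
    match => [ "message", "%{IP:client}" ]
  }
}
```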

- -

- - patterns_dir - - -

- - - -

logstash ships by default with a bunch of patterns, so you don't -necessarily need to define this yourself unless you are adding additional -patterns.

- -

Pattern files are plain text with format:

- -
NAME PATTERN
-
- -

For example:

- -
NUMBER \d+
-
- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  grok {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  grok {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - singles - - -

- - - -

If true, make single-value fields simply that value, not an array -containing that one value.

- -

- - tag_on_failure - - -

- - - -

Tags to add to the event when there has been no successful match. Defaults to adding the '_grokparsefailure' tag.
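For example, a sketch that adds an extra, illustrative tag alongside the default when the match fails:

```
filter {
  grok {
    match => [ "message", "%{IP:client}" ]
    # 'unparsed_client_line' is a hypothetical tag name
    tag_on_failure => [ "_grokparsefailure", "unparsed_client_line" ]
  }
}
```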

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

- - -
- -This is documentation from lib/logstash/filters/grok.rb diff --git a/docs/1.2.0.beta1/filters/grokdiscovery.html b/docs/1.2.0.beta1/filters/grokdiscovery.html deleted file mode 100644 index 71490f411..000000000 --- a/docs/1.2.0.beta1/filters/grokdiscovery.html +++ /dev/null @@ -1,191 +0,0 @@ ---- -title: logstash docs for filters/grokdiscovery -layout: content_right ---- -

grokdiscovery

-

Milestone: 1

- -

TODO(sissel): This is not supported yet. There is a bug in grok discovery -that causes segfaults in libgrok.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  grokdiscovery {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  grokdiscovery {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello", with the value above and the %{source} piece replaced with the corresponding value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  grokdiscovery {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  grokdiscovery {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  grokdiscovery {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

- - -
- -This is documentation from lib/logstash/filters/grokdiscovery.rb diff --git a/docs/1.2.0.beta1/filters/json.html b/docs/1.2.0.beta1/filters/json.html deleted file mode 100644 index 6d5673db0..000000000 --- a/docs/1.2.0.beta1/filters/json.html +++ /dev/null @@ -1,250 +0,0 @@ ---- -title: logstash docs for filters/json -layout: content_right ---- -

json

-

Milestone: 2

- -

JSON filter. Takes a field that contains JSON and expands it into -an actual datastructure.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  json {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    source => ... # string (required)
-    target => ... # string (optional)
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  json {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello", with the value above and the %{source} piece replaced with the corresponding value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  json {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  json {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  json {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - source (required setting) - - -

- - - -

Config for json is:

- -
source => source_field
-
- -

For example, if you have JSON data in the message field:

- -
filter {
-  json {
-    source => "message"
-  }
-}
-
- -

The above would parse the JSON from the message field.

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - target - - -

- - - -

Define target for placing the data. If this setting is omitted, -the json data will be stored at the root of the event.

- -

For example if you want the data to be put in the 'doc' field:

- -
filter {
-  json {
-    target => "doc"
-  }
-}
-
- -

json in the value of the source field will be expanded into a -datastructure in the "target" field.

- -

Note: if the "target" field already exists, it will be overwritten.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

- - -
- -This is documentation from lib/logstash/filters/json.rb diff --git a/docs/1.2.0.beta1/filters/json_encode.html b/docs/1.2.0.beta1/filters/json_encode.html deleted file mode 100644 index d6b0fb2c6..000000000 --- a/docs/1.2.0.beta1/filters/json_encode.html +++ /dev/null @@ -1,223 +0,0 @@ ---- -title: logstash docs for filters/json_encode -layout: content_right ---- -

json_encode

-

Milestone: 2

- -

JSON encode filter. Takes a field and serializes it into JSON.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  json_encode {
-    /[A-Za-z0-9_@-]+/ => ... # string (optional)
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - /[A-Za-z0-9_@-]+/ - - -

- - - -

Config for json_encode is:

- - - - -

For example, if you have a field named 'foo', and you want to store the -JSON encoded string in 'bar', do this:

- -
filter {
-  json_encode {
-    foo => bar
-  }
-}
-
- -

Note: if the destination field ("bar" above) already exists, it will be overwritten.

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  json_encode {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello", with the value above and the %{source} piece replaced with the corresponding value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  json_encode {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  json_encode {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  json_encode {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

- - -
- -This is documentation from lib/logstash/filters/json_encode.rb diff --git a/docs/1.2.0.beta1/filters/kv.html b/docs/1.2.0.beta1/filters/kv.html deleted file mode 100644 index 4a57923dc..000000000 --- a/docs/1.2.0.beta1/filters/kv.html +++ /dev/null @@ -1,476 +0,0 @@ ---- -title: logstash docs for filters/kv -layout: content_right ---- -

kv

-

Milestone: 2

- -

This filter helps automatically parse messages which are of the 'foo=bar' -variety.

- -

For example, if you have a log message which contains 'ip=1.2.3.4 -error=REFUSED', you can parse those automatically by doing:

- -
filter {
-  kv { }
-}
-
- -

The above will result in a message of "ip=1.2.3.4 error=REFUSED" having -the fields:

- - - - -

This is great for postfix, iptables, and other types of logs that -tend towards 'key=value' syntax.

- -

Further, this can often be used to parse query parameters like -'foo=bar&baz=fizz' by setting the field_split to "&"

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  kv {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    default_keys => ... # hash (optional), default: {}
-    exclude_keys => ... # array (optional), default: []
-    field_split => ... # string (optional), default: " "
-    include_keys => ... # array (optional), default: []
-    prefix => ... # string (optional), default: ""
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    source => ... # string (optional), default: "message"
-    target => ... # string (optional)
-    trim => ... # string (optional)
-    trimkey => ... # string (optional)
-    value_split => ... # string (optional), default: "="
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  kv {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello", with the value above and the %{source} piece replaced with the corresponding value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  kv {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - default_keys - - -

- - - -

A hash that specifies the default keys and their values that should be added to the event in case these keys do not exist in the source field being parsed.

- -
filter {
-  kv {
-    default_keys => [ "from", "logstash@example.com",
-                     "to", "default@dev.null" ]
-  }
-}
-
- -

- - exclude_keys - - -

- - - -

An array that specifies the parsed keys which should not be added to event. -By default no keys will be excluded.

- -

For example, to exclude "from" and "to" from a source like "Hey, from=, to=def foo=bar", while the "foo" key is still added to the event:

- -
filter {
-  kv {
-    exclude_keys => [ "from", "to" ]
-  }
-}
-
- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - field_split - - -

- - - -

A string of characters to use as delimiters for parsing out key-value pairs.

- -

These characters form a regex character class and thus you must escape special regex characters like [ or ] with a backslash.

- -

Example with URL Query Strings

- -

Example, to split out the args from a url query string such as -'?pin=12345~0&d=123&e=foo@bar.com&oq=bobo&ss=12345':

- -
filter {
-  kv {
-    field_split => "&?"
-  }
-}
-
- -

The above splits on both "&" and "?" characters, giving you the following -fields:

- - - - -

- - include_keys - - -

- - - -

An array that specifies the parsed keys which should be added to event. -By default all keys will be added.

- -

For example, to include only "from" and "to" from a source like "Hey, from=, to=def foo=bar", while the "foo" key is not added to the event:

- -
filter {
-  kv {
-    include_keys => [ "from", "to" ]
-  }
-}
-
- -

- - prefix - - -

- - - -

A string to prepend to all of the extracted keys

- -

Example, to prepend arg_ to all keys:

- -
filter { kv { prefix => "arg_" } }
-
- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  kv {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  kv {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - source - - -

- - - -

The field to perform 'key=value' searching on.

- -

Example, to use the message field:

- -
filter { kv { source => "message" } }
-
- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - target - - -

- - - -

The name of the container to put all of the key-value pairs into

- -

If this setting is omitted, fields will be written to the root of the -event.

- -

Example, to place all keys into field kv:

- -
filter { kv { target => "kv" } }
-
- -

- - trim - - -

- - - -

A string of characters to trim from the value. This is useful if your values are wrapped in brackets or terminated by commas (like postfix logs).

- -

These characters form a regex character class and thus you must escape special regex characters like [ or ] with a backslash.

- -

Example, to strip '<' '>' '[' ']' and ',' characters from values:

- -
filter {
-  kv {
-    trim => "<>\[\],"
-  }
-}
-
- -

- - trimkey - - -

- - - -

A string of characters to trim from the key. This is useful if your keys are wrapped in brackets or start with spaces.

- -

These characters form a regex character class and thus you must escape special regex characters like [ or ] with a backslash.

- -

Example, to strip '<' '>' '[' ']' and ',' characters from keys:

- -
filter {
-  kv {
-    trimkey => "<>\[\],"
-  }
-}
-
- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

- -

- - value_split - - -

- - - -

A string of characters to use as delimiters for identifying key-value relations.

- -

These characters form a regex character class and thus you must escape special regex characters like [ or ] with a backslash.

- -

Example, to identify key-values such as -'key1:value1 key2:value2':

- -
filter { kv { value_split => ":" } }
-
- - -
- -This is documentation from lib/logstash/filters/kv.rb diff --git a/docs/1.2.0.beta1/filters/metaevent.html b/docs/1.2.0.beta1/filters/metaevent.html deleted file mode 100644 index c19277ab6..000000000 --- a/docs/1.2.0.beta1/filters/metaevent.html +++ /dev/null @@ -1,220 +0,0 @@ ---- -title: logstash docs for filters/metaevent -layout: content_right ---- -

metaevent

-

Milestone: 1

- - - - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  metaevent {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    followed_by_tags => ... # array (required)
-    period => ... # number (optional), default: 5
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  metaevent {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello", with the value above and the %{source} piece replaced with the corresponding value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  metaevent {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - followed_by_tags (required setting) - - -

- - - -

syntax: followed_by_tags => [ "tag", "tag" ]

- -

- - period - - -

- - - -

syntax: period => 60
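A minimal sketch combining both settings (the tag names are illustrative; consult the plugin source for the exact semantics):

```
filter {
  metaevent {
    followed_by_tags => [ "success", "failure" ]
    period => 60
  }
}
```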

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  metaevent {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  metaevent {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

- - -
- -This is documentation from lib/logstash/filters/metaevent.rb diff --git a/docs/1.2.0.beta1/filters/metrics.html b/docs/1.2.0.beta1/filters/metrics.html deleted file mode 100644 index c8faa0d56..000000000 --- a/docs/1.2.0.beta1/filters/metrics.html +++ /dev/null @@ -1,348 +0,0 @@ ---- -title: logstash docs for filters/metrics -layout: content_right ---- -

metrics

-

Milestone: 1

- -

The metrics filter is useful for aggregating metrics.

- -

For example, if you have a field 'response' that is -a http response code, and you want to count each -kind of response, you can do this:

- -
filter {
-  metrics {
-    meter => [ "http.%{response}" ]
-    add_tag => "metric"
-  }
-}
-
- -

Metrics are flushed every 5 seconds. Metrics appear as new events in the event stream and go through any filters that occur after this one, as well as outputs.

- -

In general, you will want to add a tag to your metrics and have an output -explicitly look for that tag.

- -

The event that is flushed will include every 'meter' and 'timer' -metric in the following way:

- -

'meter' values

- -

For a meter => "something" you will receive the following fields:

- - - - -

'timer' values

- -

For a timer => [ "thing", "%{duration}" ] you will receive the following fields:

- - - - -

Example: computing event rate

- -

For a simple example, let's track how many events per second are running -through logstash:

- -
input {
-  generator {
-    type => "generated"
-  }
-}
-
-filter {
-  metrics {
-    type => "generated"
-    meter => "events"
-    add_tag => "metric"
-  }
-}
-
-output {
-  stdout {
-    # only emit events with the 'metric' tag
-    tags => "metric"
-    message => "rate: %{events.rate_1m}"
-  }
-}
-
- -

Running the above:

- -
% java -jar logstash.jar agent -f example.conf
-rate: 23721.983566819246
-rate: 24811.395722536377
-rate: 25875.892745934525
-rate: 26836.42375967113
-
- -

We see the output includes our 'events' 1-minute rate.

- -

In the real world, you would emit this to graphite or another metrics store, -like so:

- -
output {
-  graphite {
-    metrics => [ "events.rate_1m", "%{events.rate_1m}" ]
-  }
-}
-
- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  metrics {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    ignore_older_than => ... # number (optional), default: 0
-    meter => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    timer => ... # hash (optional), default: {}
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

 If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  metrics {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

 If the event has field "somefield" == "hello" this filter, on success, would add the field "foo_hello" with the value above, and the %{source} piece replaced with that value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  metrics {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - ignore_older_than - - -

- - - -

Don't track events that have @timestamp older than some number of seconds.

- -

This is useful if you want to only include events that are near real-time -in your metrics.

- -

 For example, to only count events that are within 10 seconds of real-time, you would do this:

- -
filter {
-  metrics {
-    meter => [ "hits" ]
-    ignore_older_than => 10
-  }
-}
-
- -

- - meter - - -

- - - -

syntax: meter => [ "name of metric", "name of metric" ]

- -

- - remove_field - - -

- - - -

 If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  metrics {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  metrics {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - timer - - -

- - - -

syntax: timer => [ "name of metric", "%{time_value}" ]

- -

- - type - DEPRECATED - -

- - - -

 Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

- - -
- -This is documentation from lib/logstash/filters/metrics.rb diff --git a/docs/1.2.0.beta1/filters/multiline.html b/docs/1.2.0.beta1/filters/multiline.html deleted file mode 100644 index c34c11a4a..000000000 --- a/docs/1.2.0.beta1/filters/multiline.html +++ /dev/null @@ -1,284 +0,0 @@ ---- -title: logstash docs for filters/multiline -layout: content_right ---- -

multiline

-

Milestone: 3

- -

This filter was replaced by a codec.

- -

See the multiline codec instead.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  multiline {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    negate => ... # boolean (optional), default: false
-    pattern => ... # string (required)
-    patterns_dir => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    source => ... # string (optional), default: "message"
-    stream_identity => ... # string (optional), default: "%{host}-%{path}-%{type}"
-    what => ... # string, one of ["previous", "next"] (required)
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

 If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  multiline {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

 If the event has field "somefield" == "hello" this filter, on success, would add the field "foo_hello" with the value above, and the %{source} piece replaced with that value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  multiline {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - negate - - -

- - - - - -

- - pattern (required setting) - - -

- - - -

 Leave these config settings until we remove this filter entirely. The idea is that we want the register method to cause an abort, giving the user a clue to use the codec instead of the filter.

- -

- - patterns_dir - - -

- - - - - -

- - remove_field - - -

- - - -

 If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  multiline {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  multiline {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - source - - -

- - - - - -

- - stream_identity - - -

- - - - - -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

 Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

- -

- - what (required setting) - - -

- - - - - - -
- -This is documentation from lib/logstash/filters/multiline.rb diff --git a/docs/1.2.0.beta1/filters/mutate.html b/docs/1.2.0.beta1/filters/mutate.html deleted file mode 100644 index 053222747..000000000 --- a/docs/1.2.0.beta1/filters/mutate.html +++ /dev/null @@ -1,511 +0,0 @@ ---- -title: logstash docs for filters/mutate -layout: content_right ---- -

mutate

-

Milestone: 3

- -

The mutate filter allows you to do general mutations to fields. You -can rename, remove, replace, and modify fields in your events.

- -

TODO(sissel): Support regexp replacements like String#gsub ?

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  mutate {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    convert => ... # hash (optional)
-    gsub => ... # array (optional)
-    join => ... # hash (optional)
-    lowercase => ... # array (optional)
-    merge => ... # hash (optional)
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    rename => ... # hash (optional)
-    replace => ... # hash (optional)
-    split => ... # hash (optional)
-    strip => ... # array (optional)
-    update => ... # hash (optional)
-    uppercase => ... # array (optional)
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

 If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  mutate {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

 If the event has field "somefield" == "hello" this filter, on success, would add the field "foo_hello" with the value above, and the %{source} piece replaced with that value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  mutate {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - convert - - -

- - - -

Convert a field's value to a different type, like turning a string to an -integer. If the field value is an array, all members will be converted. -If the field is a hash, no action will be taken.

- -

Valid conversion targets are: integer, float, string

- -

Example:

- -
filter {
-  mutate {
-    convert => [ "fieldname", "integer" ]
-  }
-}
-
- -
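 The conversion rules above can be sketched in Ruby as follows; this is a simplified model of the behavior (the helper name is hypothetical), not the filter's actual code: ```ruby # Sketch of mutate's convert semantics: scalars are converted, array # members are converted element-wise, and hashes are left untouched. def convert_value(value, type) case value when Hash then value # no action on hashes when Array then value.map { |v| convert_value(v, type) } else case type when "integer" then value.to_i when "float" then value.to_f when "string" then value.to_s end end end convert_value("42", "integer") # => 42 convert_value(["1.5", "2.5"], "float") # => [1.5, 2.5] convert_value({ "a" => 1 }, "string") # => {"a"=>1} (unchanged) ```   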

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - gsub - - -

- - - -

 Convert a string field by applying a regular expression and a replacement. If the field is not a string, no action will be taken.

- -

 This configuration takes an array consisting of 3 elements per field/substitution.

- -

 Be aware of escaping any backslash in the config file.

- -

 For example:

- -
filter {
-  mutate {
-    gsub => [
-      # replace all forward slashes with underscore
-      "fieldname", "/", "_",
-
-      # replace backslashes, question marks, hashes, and minuses with
-      # dot
-      "fieldname2", "[\\?#-]", "."
-    ]
-  }
-}
-
- -
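 In Ruby terms, each triple behaves roughly like String#gsub applied to the named field, skipping non-string values. A sketch (the helper name is hypothetical, simplified from the real filter): ```ruby # Sketch of gsub: each (field, pattern, replacement) triple applies a # regex substitution; non-string values are left untouched. def apply_gsub(event, triples) triples.each_slice(3) do |field, pattern, replacement| value = event[field] next unless value.is_a?(String) # no action on non-strings event[field] = value.gsub(Regexp.new(pattern), replacement) end event end event = { "fieldname" => "a/b/c", "fieldname2" => "x?y#z-w" } apply_gsub(event, ["fieldname", "/", "_", "fieldname2", "[\\?#-]", "."]) event["fieldname"] # => "a_b_c" event["fieldname2"] # => "x.y.z.w" ``` Note how the doubled backslash in "[\\?#-]" survives string parsing to reach the regex engine as an escaped question mark.   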

- - join - - -

- - - -

 Join an array with a separator character. This does nothing on non-array fields.

- -

Example:

- -

filter {

- -
 mutate { 
-   join => ["fieldname", ","]
- }
-
- -

}

- -

- - lowercase - - -

- - - -

Convert a string to its lowercase equivalent

- -

Example:

- -
filter {
-  mutate {
-    lowercase => [ "fieldname" ]
-  }
-}
-
- -

- - merge - - -

- - - -

 Merge two fields, of arrays or hashes. String fields will be converted to an array, so: array + string will work; string + string will result in a 2-entry array in dest_field; array + hash will not work.

- -

Example:

- -
filter {
-  mutate { 
-     merge => ["dest_field", "added_field"]
-  }
-}
-
- -
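 The coercion rules can be sketched like this (a simplified model with a hypothetical helper name, not the filter's actual code): ```ruby # Sketch of merge: strings are promoted to one-element arrays, then # the added field is appended to the destination. Mixing array and # hash values is unsupported and left alone. def merge_fields(event, dest_field, added_field) dest = event[dest_field] added = event[added_field] dest = [dest] if dest.is_a?(String) added = [added] if added.is_a?(String) return event unless dest.is_a?(Array) && added.is_a?(Array) event[dest_field] = dest + added event end event = { "dest_field" => "x", "added_field" => "y" } merge_fields(event, "dest_field", "added_field") event["dest_field"] # => ["x", "y"] ```   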

- - remove - DEPRECATED - -

- - - -

Remove one or more fields.

- -

Example:

- -
filter {
-  mutate {
-    remove => [ "client" ]  # Removes the 'client' field
-  }
-}
-
- -

This option is deprecated, instead use remove_field option available in all -filters.

- -

- - remove_field - - -

- - - -

 If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  mutate {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  mutate {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - rename - - -

- - - -

Rename one or more fields.

- -

Example:

- -
filter {
-  mutate {
-    # Renames the 'HOSTORIP' field to 'client_ip'
-    rename => [ "HOSTORIP", "client_ip" ]
-  }
-}
-
- -

- - replace - - -

- - - -

Replace a field with a new value. The new value can include %{foo} strings -to help you build a new value from other parts of the event.

- -

Example:

- -
filter {
-  mutate {
-    replace => [ "message", "%{source_host}: My new message" ]
-  }
-}
-
- -

- - split - - -

- - - -

Split a field to an array using a separator character. Only works on string -fields.

- -

Example:

- -
filter {
-  mutate { 
-     split => ["fieldname", ","]
-  }
-}
-
- -
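 Conceptually this is Ruby's String#split on the field value, with non-string fields left alone. A sketch (hypothetical helper name): ```ruby # Sketch of split: only string fields are split; other types are # left as-is. def split_field(event, field, separator) value = event[field] event[field] = value.split(separator) if value.is_a?(String) event end event = { "fieldname" => "a,b,c", "count" => 3 } split_field(event, "fieldname", ",") split_field(event, "count", ",") # non-string: unchanged event["fieldname"] # => ["a", "b", "c"] event["count"] # => 3 ```   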

- - strip - - -

- - - -

 Strip whitespace from fields.

- -

Example:

- -
filter {
-  mutate { 
-     strip => ["field1", "field2"]
-  }
-}
-
- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

 Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

- -

- - update - - -

- - - -

Update an existing field with a new value. If the field does not exist, -then no action will be taken.

- -

Example:

- -
filter {
-  mutate {
-    update => [ "sample", "My new message" ]
-  }
-}
-
- -

- - uppercase - - -

- - - -

Convert a string to its uppercase equivalent

- -

Example:

- -
filter {
-  mutate {
-    uppercase => [ "fieldname" ]
-  }
-}
-
- - -
- -This is documentation from lib/logstash/filters/mutate.rb diff --git a/docs/1.2.0.beta1/filters/noop.html b/docs/1.2.0.beta1/filters/noop.html deleted file mode 100644 index 6a7883ebd..000000000 --- a/docs/1.2.0.beta1/filters/noop.html +++ /dev/null @@ -1,190 +0,0 @@ ---- -title: logstash docs for filters/noop -layout: content_right ---- -

noop

-

Milestone: 2

- -

No-op filter. This is used generally for internal/dev testing.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  noop {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

 If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  noop {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

 If the event has field "somefield" == "hello" this filter, on success, would add the field "foo_hello" with the value above, and the %{source} piece replaced with that value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  noop {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

 If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  noop {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  noop {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

 Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

- - -
- -This is documentation from lib/logstash/filters/noop.rb diff --git a/docs/1.2.0.beta1/filters/prune.html b/docs/1.2.0.beta1/filters/prune.html deleted file mode 100644 index 4c8508d6a..000000000 --- a/docs/1.2.0.beta1/filters/prune.html +++ /dev/null @@ -1,308 +0,0 @@ ---- -title: logstash docs for filters/prune -layout: content_right ---- -

prune

-

Milestone: 1

- -

 The prune filter is for pruning event data from @fields based on a whitelist or blacklist of field names or their values (names and values can also be regular expressions).

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  prune {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    blacklist_names => ... # array (optional), default: ["%{[^}]+}"]
-    blacklist_values => ... # hash (optional), default: {}
-    interpolate => ... # boolean (optional), default: false
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    whitelist_names => ... # array (optional), default: []
-    whitelist_values => ... # hash (optional), default: {}
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

 If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  prune {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

 If the event has field "somefield" == "hello" this filter, on success, would add the field "foo_hello" with the value above, and the %{source} piece replaced with that value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  prune {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - blacklist_names - - -

- - - -

 Exclude fields whose names match the specified regexps; by default, exclude unresolved %{field} strings.

- -
filter { 
-  prune { 
-    tags            => [ "apache-accesslog" ]
-    blacklist_names => [ "method", "(referrer|status)", "${some}_field" ]
-  }
-}
-
- -

- - blacklist_values - - -

- - - -

 Exclude specified fields if their values match regexps. In case field values are arrays, the fields are pruned per array item; in case all array items are matched, the whole field will be deleted.

- -
filter { 
-  prune { 
-    tags             => [ "apache-accesslog" ]
-    blacklist_values => [ "uripath", "/index.php",
-                          "method", "(HEAD|OPTIONS)",
-                          "status", "^[^2]" ]
-  }
-}
-
- -
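 The per-item behavior on array fields can be sketched like this (a hypothetical helper handling a single field/pattern pair, not the filter's actual code): ```ruby # Sketch of blacklist_values on one field: matching array items are # removed individually; if every item matched, the whole field is # deleted. Scalar string fields are deleted outright on match. def blacklist_value(event, field, pattern) re = Regexp.new(pattern) value = event[field] if value.is_a?(Array) value.reject! { |v| v =~ re } event.delete(field) if value.empty? elsif value.is_a?(String) && value =~ re event.delete(field) end event end event = { "method" => ["GET", "HEAD", "POST"] } blacklist_value(event, "method", "(HEAD|OPTIONS)") event["method"] # => ["GET", "POST"] ```   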

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - interpolate - - -

- - - -

 Toggle whether configuration fields and values should be interpolated for dynamic values. This probably adds some performance overhead. Defaults to false.

- -

- - remove_field - - -

- - - -

 If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  prune {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  prune {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

 Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

- -

- - whitelist_names - - -

- - - -

 Include fields only if their names match the specified regexps; defaults to an empty list, which means include everything.

- -
filter { 
-  prune { 
-    tags            => [ "apache-accesslog" ]
-    whitelist_names => [ "method", "(referrer|status)", "${some}_field" ]
-  }
-}
-
- -

- - whitelist_values - - -

- - - -

 Include specified fields only if their values match regexps. In case field values are arrays, the fields are pruned per array item; thus only matching array items will be included.

- -
filter { 
-  prune { 
-    tags             => [ "apache-accesslog" ]
-    whitelist_values => [ "uripath", "/index.php",
-                          "method", "(GET|POST)",
-                          "status", "^[^2]" ]
-  }
-}
-
- - -
- -This is documentation from lib/logstash/filters/prune.rb diff --git a/docs/1.2.0.beta1/filters/railsparallelrequest.html b/docs/1.2.0.beta1/filters/railsparallelrequest.html deleted file mode 100644 index 179853e65..000000000 --- a/docs/1.2.0.beta1/filters/railsparallelrequest.html +++ /dev/null @@ -1,192 +0,0 @@ ---- -title: logstash docs for filters/railsparallelrequest -layout: content_right ---- -

railsparallelrequest

-

Milestone: 1

- -

parallel request filter

- -

This filter will separate out the parallel requests into separate events.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  railsparallelrequest {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

 If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  railsparallelrequest {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

 If the event has field "somefield" == "hello" this filter, on success, would add the field "foo_hello" with the value above, and the %{source} piece replaced with that value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  railsparallelrequest {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

 If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  railsparallelrequest {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  railsparallelrequest {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

 Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

- - -
- -This is documentation from lib/logstash/filters/railsparallelrequest.rb diff --git a/docs/1.2.0.beta1/filters/range.html b/docs/1.2.0.beta1/filters/range.html deleted file mode 100644 index 324647bc1..000000000 --- a/docs/1.2.0.beta1/filters/range.html +++ /dev/null @@ -1,251 +0,0 @@ ---- -title: logstash docs for filters/range -layout: content_right ---- -

range

-

Milestone: 1

- -

 This filter is used to check that certain fields are within expected size/length ranges. Supported types are numbers and strings. Numbers are checked to be within a numeric value range. Strings are checked to be within a string length range. More than one range can be specified for the same field name; actions will be applied incrementally. When a field value is within a specified range, an action will be taken. Supported actions are: drop the event, add a tag, or add a field with a specified value.

- -

 Example use cases include histogram-like tagging of events, finding anomalous values in fields, or dropping events that are too big.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  range {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    negate => ... # boolean (optional), default: false
-    ranges => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

 If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  range {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

 If the event has field "somefield" == "hello" this filter, on success, would add the field "foo_hello" with the value above, and the %{source} piece replaced with that value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  range {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - negate - - -

- - - -

 Negate the range match logic; events must be outside of the specified range to match.

- -

- - ranges - - -

- - - -

 An array of field, min, max, action tuples. Example:

- -
filter {
-  range {
-    ranges => [ "message", 0, 10, "tag:short",
-                "message", 11, 100, "tag:medium",
-                "message", 101, 1000, "tag:long",
-                "message", 1001, 1e1000, "drop",
-                "duration", 0, 100, "field:latency:fast",
-                "duration", 101, 200, "field:latency:normal",
-                "duration", 201, 1000, "field:latency:slow",
-                "duration", 1001, 1e1000, "field:latency:outlier" 
-                "requests", 0, 10, "tag:to_few_%{source}_requests" ]
-  }
-}
-
- -

 Supported actions are: drop the event, add a tag, or add a field with a specified value. Added tag names, field names, and field values can have %{dynamic} values.

- -
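 The tuple handling can be sketched like this (a hypothetical helper; numbers are compared by value, strings by length): ```ruby # Sketch of range: each (field, min, max, action) tuple is checked; # numbers match by value, strings by length. Actions: "drop" cancels # the event, "tag:NAME" adds a tag, "field:NAME:VALUE" adds a field. def check_ranges(event, ranges, negate = false) ranges.each_slice(4) do |field, min, max, action| value = event[field] next if value.nil? measure = value.is_a?(String) ? value.length : value in_range = measure.between?(min, max) next unless in_range ^ negate # negate flips the match logic case action when "drop" then return nil # event cancelled when /\Atag:(.*)\z/ then (event["tags"] ||= []) << $1 when /\Afield:([^:]+):(.*)\z/ then event[$1] = $2 end end event end event = { "message" => "hello", "duration" => 150 } check_ranges(event, ["message", 0, 10, "tag:short", "duration", 101, 200, "field:latency:normal"]) event["tags"] # => ["short"] event["latency"] # => "normal" ```   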

 TODO(piavlo): The action syntax is ugly at the moment due to logstash grammar limitations - the arrays grammar should support simple non-nested hashes as values, in addition to numeric and string values, to prettify the syntax.

- -

- - remove_field - - -

- - - -

 If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  range {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  range {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

 Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

- - -
- -This is documentation from lib/logstash/filters/range.rb diff --git a/docs/1.2.0.beta1/filters/ruby.html b/docs/1.2.0.beta1/filters/ruby.html deleted file mode 100644 index 8614dec9e..000000000 --- a/docs/1.2.0.beta1/filters/ruby.html +++ /dev/null @@ -1,231 +0,0 @@ ---- -title: logstash docs for filters/ruby -layout: content_right ---- -

ruby

-

Milestone: 1

- -

Execute ruby code.

- -

For example, to cancel 90% of events, you can do this:

- -
filter {
-  ruby {
-    # Cancel 90% of events
-    code => "event.cancel if rand <= 0.90"
-  } 
-} 
-
- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  ruby {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    code => ... # string (required)
-    init => ... # string (optional)
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

 If this filter is successful, add any arbitrary fields to this event. Fields can be dynamic and include parts of the event using the %{field} syntax. Example:

- -
filter {
-  ruby {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

 If the event has field "somefield" == "hello" this filter, on success, would add the field "foo_hello" with the value above, and the %{source} piece replaced with that value from the event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  ruby {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - code (required setting) - - -

- - - -

The code to execute for every event. -You will have an 'event' variable available that is the event itself.
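For instance, a sketch of using the event variable (the "duration" field here is illustrative, not part of the plugin):

```
filter {
  ruby {
    # Derive a milliseconds field from a seconds field, if present
    code => "event['duration_ms'] = event['duration'].to_f * 1000 if event['duration']"
  }
}
```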

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - init - - -

- - - -

Any code to execute at logstash startup time.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  ruby {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  ruby {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/ruby.rb diff --git a/docs/1.2.0.beta1/filters/sleep.html b/docs/1.2.0.beta1/filters/sleep.html deleted file mode 100644 index f82d0b98e..000000000 --- a/docs/1.2.0.beta1/filters/sleep.html +++ /dev/null @@ -1,286 +0,0 @@ ---- -title: logstash docs for filters/sleep -layout: content_right ---- -

sleep

-

Milestone: 1

- -

Sleep a given amount of time. This will cause logstash -to stall for the given amount of time. This is useful -for rate limiting, etc.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  sleep {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    every => ... # string (optional), default: 1
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    replay => ... # boolean (optional), default: false
-    time => ... # string (optional)
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  sleep {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello", then on success this filter -would add the field "foo_hello", with the -value above and the %{source} piece replaced with the corresponding value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  sleep {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - every - - -

- - - -

Sleep only on every Nth event. This option is ignored in replay mode.

- -

Example:

- -
filter {
-  sleep {
-    time => "1"   # Sleep 1 second 
-    every => 10   # on every 10th event
-  }
-}
-
- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  sleep {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  sleep {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - replay - - -

- - - -

Enable replay mode.

- -

Replay mode tries to sleep based on timestamps in each event.

- -

The amount of time to sleep is computed by subtracting the -previous event's timestamp from the current event's timestamp. -This helps you replay events in the same timeline as original.

- -

If you specify a time setting as well, this filter will -use the time value as a speed modifier. For example, -a time value of 2 will replay at double speed, while a -value of 0.25 will replay at 1/4th speed.

- -

For example:

- -
filter {
-  sleep {
-    time => 2
-    replay => true
-  }
-}
-
- -

The above will sleep in such a way that events are -replayed two times faster than the original speed.

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - time - - -

- - - -

The length of time to sleep, in seconds, for every event.

- -

This can be a number (eg, 0.5), or a string (eg, "%{foo}") -The second form (string with a field value) is useful if -you have an attribute of your event that you want to use -to indicate the amount of time to sleep.

- -

Example:

- -
filter {
-  sleep {
-    # Sleep 1 second for every event.
-    time => "1"
-  }
-}
-
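The field-based form might look like this (a sketch, assuming each event carries a numeric "sleep_time" field; the field name is illustrative):

```
filter {
  sleep {
    # Sleep for the number of seconds given in each event's "sleep_time" field
    time => "%{sleep_time}"
  }
}
```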
- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/sleep.rb diff --git a/docs/1.2.0.beta1/filters/split.html b/docs/1.2.0.beta1/filters/split.html deleted file mode 100644 index 02b8ee6b1..000000000 --- a/docs/1.2.0.beta1/filters/split.html +++ /dev/null @@ -1,228 +0,0 @@ ---- -title: logstash docs for filters/split -layout: content_right ---- -

split

-

Milestone: 2

- -

The split filter is for splitting multiline messages into separate events.

- -

An example use case of this filter is for taking output from the 'exec' input -which emits one event for the whole output of a command and splitting that -output by newline - making each line an event.

- -

The end result of each split is a complete copy of the event -with only the current split section of the given field changed.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  split {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    field => ... # string (optional), default: "message"
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    terminator => ... # string (optional), default: "\n"
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  split {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello", then on success this filter -would add the field "foo_hello", with the -value above and the %{source} piece replaced with the corresponding value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  split {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - field - - -

- - - -

The field whose value is split by the terminator.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  split {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  split {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - terminator - - -

- - - -

The string to split on. This is usually a line terminator, but can be any -string.
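For example, to split a comma-separated "message" field into one event per item (a sketch):

```
filter {
  split {
    field      => "message"
    terminator => ","
  }
}
```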

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/split.rb diff --git a/docs/1.2.0.beta1/filters/syslog_pri.html b/docs/1.2.0.beta1/filters/syslog_pri.html deleted file mode 100644 index dc65ea3af..000000000 --- a/docs/1.2.0.beta1/filters/syslog_pri.html +++ /dev/null @@ -1,256 +0,0 @@ ---- -title: logstash docs for filters/syslog_pri -layout: content_right ---- -

syslog_pri

-

Milestone: 1

- -

Filter plugin for logstash to parse the PRI field from the front -of a Syslog (RFC3164) message. If no priority is set, it will -default to 13 (per RFC).

- -

This filter is based on the original syslog.rb code shipped -with logstash.
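A minimal use might look like this (a sketch; by default the filter reads the PRI value from the syslog_pri field and attaches human-readable labels):

```
filter {
  syslog_pri {
    use_labels => true
  }
}
```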

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  syslog_pri {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    facility_labels => ... # array (optional), default: ["kernel", "user-level", "mail", "daemon", "security/authorization", "syslogd", "line printer", "network news", "uucp", "clock", "security/authorization", "ftp", "ntp", "log audit", "log alert", "clock", "local0", "local1", "local2", "local3", "local4", "local5", "local6", "local7"]
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    severity_labels => ... # array (optional), default: ["emergency", "alert", "critical", "error", "warning", "notice", "informational", "debug"]
-    syslog_pri_field_name => ... # string (optional), default: "syslog_pri"
-    use_labels => ... # boolean (optional), default: true
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  syslog_pri {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello", then on success this filter -would add the field "foo_hello", with the -value above and the %{source} piece replaced with the corresponding value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  syslog_pri {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - facility_labels - - -

- - - -

Labels for facility levels. This comes from RFC3164.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  syslog_pri {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  syslog_pri {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - severity_labels - - -

- - - -

Labels for severity levels. This comes from RFC3164.

- -

- - syslog_pri_field_name - - -

- - - -

Name of the field which contains the extracted PRI part of the syslog message.

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- -

- - use_labels - - -

- - - -

Add human-readable names after parsing the severity and facility from the PRI field.

- - -
- -This is documentation from lib/logstash/filters/syslog_pri.rb diff --git a/docs/1.2.0.beta1/filters/translate.html b/docs/1.2.0.beta1/filters/translate.html deleted file mode 100644 index 383c1b3a4..000000000 --- a/docs/1.2.0.beta1/filters/translate.html +++ /dev/null @@ -1,340 +0,0 @@ ---- -title: logstash docs for filters/translate -layout: content_right ---- -

translate

-

Milestone: 1

- -

Originally written to translate HTTP response codes, -but turned into a general translation tool which uses -a configured hash and/or YAML files as a dictionary. -Response codes in the default dictionary were scraped from -'gem install cheat; cheat status_codes'.

- -

Alternatively, for simple string search and replacement of just a few values, -use the gsub function of the mutate filter.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  translate {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    destination => ... # string (optional), default: "translation"
-    dictionary => ... # hash (optional), default: {}
-    dictionary_path => ... # a valid filesystem path (optional)
-    exact => ... # boolean (optional), default: true
-    fallback => ... # string (optional)
-    field => ... # string (required)
-    override => ... # boolean (optional), default: false
-    regex => ... # boolean (optional), default: false
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  translate {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello", then on success this filter -would add the field "foo_hello", with the -value above and the %{source} piece replaced with the corresponding value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  translate {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - destination - - -

- - - -

The destination field you wish to populate with the translated value. -The default is "translation". -Set this to the same value as the source field if you want to do a substitution; in this case the filter will always succeed.

- -

- - dictionary - - -

- - - -

Dictionary to use for translation. -Example:

- -
filter {
-  translate {
-    dictionary => [ "100", "Continue",
-                    "101", "Switching Protocols",
-                    "200", "OK",
-                    "201", "Created",
-                    "202", "Accepted" ]
-  }
-}
-
- -

- - dictionary_path - - -

- - - -

The name, with full path, of the external dictionary file.
-The file should be in YAML format and will be merged with the @dictionary. -Make sure you encase any integer-based keys in quotes. -The YAML file should look something like this:

- -
100: Continue
-101: Switching Protocols
-
- -

- - exact - - -

- - - -

Set to false if you want to match multiple terms. -A large dictionary could get expensive if this is set to false.

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - fallback - - -

- - - -

In case no translation was made, add a default translation string.
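A sketch of how fallback might be used (the "response_code" field and values are illustrative):

```
filter {
  translate {
    field      => "response_code"
    dictionary => [ "200", "OK", "404", "Not Found" ]
    fallback   => "unknown status code"
  }
}
```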

- -

- - field (required setting) - - -

- - - -

The field containing the value to translate. If this field is an -array, only the first value will be used.

- -

- - override - - -

- - - -

If the destination field already exists, should we skip the translation (default) or override it with the new translation?

- -

- - regex - - -

- - - -

Treat dictionary keys as regular expressions to match against; used only when @exact is enabled.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  translate {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  translate {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/translate.rb diff --git a/docs/1.2.0.beta1/filters/urldecode.html b/docs/1.2.0.beta1/filters/urldecode.html deleted file mode 100644 index 4e1031b96..000000000 --- a/docs/1.2.0.beta1/filters/urldecode.html +++ /dev/null @@ -1,220 +0,0 @@ ---- -title: logstash docs for filters/urldecode -layout: content_right ---- -

urldecode

-

Milestone: 2

- -

The urldecode filter is for decoding fields that are urlencoded.
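A minimal configuration might look like this (a sketch, decoding the default "message" field):

```
filter {
  urldecode {
    field => "message"
  }
}
```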

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  urldecode {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    all_fields => ... # boolean (optional), default: false
-    field => ... # string (optional), default: "message"
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  urldecode {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello", then on success this filter -would add the field "foo_hello", with the -value above and the %{source} piece replaced with the corresponding value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  urldecode {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - all_fields - - -

- - - -

URL-decode all fields.

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - field - - -

- - - -

The field whose value is urldecoded.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  urldecode {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  urldecode {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/urldecode.rb diff --git a/docs/1.2.0.beta1/filters/useragent.html b/docs/1.2.0.beta1/filters/useragent.html deleted file mode 100644 index 133e76734..000000000 --- a/docs/1.2.0.beta1/filters/useragent.html +++ /dev/null @@ -1,266 +0,0 @@ ---- -title: logstash docs for filters/useragent -layout: content_right ---- -

useragent

-

Milestone: 1

- -

Parse user agent strings into structured data, based on BrowserScope data.

- -

The useragent filter adds information about the user agent, such as the family, operating -system, version, and device.

- -

Logstash releases ship with the regexes.yaml database made available from -ua-parser with an Apache 2.0 license. For more details on ua-parser, see -https://github.com/tobie/ua-parser/.
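A typical configuration might look like this (a sketch; "agent" is an illustrative source field name, such as one produced by common web access log patterns):

```
filter {
  useragent {
    source => "agent"
    target => "useragent"
  }
}
```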

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  useragent {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    prefix => ... # string (optional), default: ""
-    regexes => ... # string (optional)
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    source => ... # string (required)
-    target => ... # string (optional)
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  useragent {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello", then on success this filter -would add the field "foo_hello", with the -value above and the %{source} piece replaced with the corresponding value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  useragent {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - prefix - - -

- - - -

A string to prepend to all of the extracted keys

- -

- - regexes - - -

- - - -

regexes.yaml file to use

- -

If not specified, this will default to the regexes.yaml that ships -with logstash.

- -

You can find the latest version of this here: -https://github.com/tobie/ua-parser/blob/master/regexes.yaml

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  useragent {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  useragent {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - source (required setting) - - -

- - - -

The field containing the user agent string. If this field is an -array, only the first value will be used.

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - target - - -

- - - -

The name of the field to assign user agent data into.

- -

If not specified user agent data will be stored in the root of the event.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/useragent.rb diff --git a/docs/1.2.0.beta1/filters/uuid.html b/docs/1.2.0.beta1/filters/uuid.html deleted file mode 100644 index ba320eed7..000000000 --- a/docs/1.2.0.beta1/filters/uuid.html +++ /dev/null @@ -1,246 +0,0 @@ ---- -title: logstash docs for filters/uuid -layout: content_right ---- -

uuid

-

Milestone: 2

- -

The uuid filter allows you to add a UUID field to messages. -This is useful for controlling the _id that messages are indexed into Elasticsearch -with, so that you can insert the same message multiple times -without creating duplicates - useful for log pipeline reliability.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  uuid {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    field => ... # string (optional)
-    overwrite => ... # boolean (optional), default: false
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  uuid {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello", then on success this filter -would add the field "foo_hello", with the -value above and the %{source} piece replaced with the corresponding value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  uuid {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - field - - -

- - - -

Add a UUID to a field.

- -

Example:

- -
filter {
-  uuid {
-    field => "@uuid"
-  }
-}
-
- -

- - overwrite - - -

- - - -

Whether the current value of the field (if any) should be overridden -by the generated UUID. Defaults to false (i.e. if the field is -present, with ANY value, it won't be overridden).

- -

Example:

- -

filter {
-  uuid {
-    field     => "@uuid"
-    overwrite => true
-  }
-}

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  uuid {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  uuid {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/uuid.rb diff --git a/docs/1.2.0.beta1/filters/xml.html b/docs/1.2.0.beta1/filters/xml.html deleted file mode 100644 index 2a6ce4ff6..000000000 --- a/docs/1.2.0.beta1/filters/xml.html +++ /dev/null @@ -1,294 +0,0 @@ ---- -title: logstash docs for filters/xml -layout: content_right ---- -

xml

-

Milestone: 1

- -

XML filter. Takes a field that contains XML and expands it into -an actual data structure.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  xml {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    source => ... # string (optional)
-    store_xml => ... # boolean (optional), default: true
-    target => ... # string (optional)
-    xpath => ... # hash (optional), default: {}
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  xml {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  xml {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  xml {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  xml {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - source - - -

- - - -

Config for xml to hash is:

- -
source => source_field
-
- -

For example, if you have the whole xml document in your @message field:

- -
filter {
-  xml {
-    source => "message"
-  }
-}
-
- -

The above would parse the xml from the @message field

- -

- - store_xml - - -

- - - -

By default the filter will store the whole parsed xml in the destination -field as described above. Setting this to false will prevent that.

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - target - - -

- - - -

Define target for placing the data

- -

for example if you want the data to be put in the 'doc' field:

- -
filter {
-  xml {
-    target => "doc"
-  }
-}
-
- -

XML in the value of the source field will be expanded into a -data structure in the "target" field. -Note: if the "target" field already exists, it will be overwritten. -Required.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- -

- - xpath - - -

- - - -

xpath will additionally select string values (.to_s on whatever is selected) -from parsed XML (using each source field defined using the method above) -and place those values in the destination fields. Configuration:

- -

xpath => [ "xpath-syntax", "destination-field" ]

- -

Values returned by XPath parsing of the xpath-syntax expression will be put in the -destination field. Multiple values returned will be pushed onto the -destination field as an array. As such, multiple matches across -multiple source fields will produce duplicate entries in the field.
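Putting the pieces together, a sketch of an xpath configuration (the field names "message" and "doc_author" and the XPath expression are illustrative, not prescribed by this plugin):

```
filter {
  xml {
    source    => "message"
    store_xml => false
    # put the string value of the matching node into the "doc_author" field
    xpath     => [ "/doc/author/text()", "doc_author" ]
  }
}
```

Setting store_xml => false, as here, keeps the event small when only the xpath-extracted values are needed.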

- -

More on xpath: http://www.w3schools.com/xpath/

- -

The xpath functions are particularly powerful: -http://www.w3schools.com/xpath/xpath_functions.asp

- - -
- -This is documentation from lib/logstash/filters/xml.rb diff --git a/docs/1.2.0.beta1/filters/zeromq.html b/docs/1.2.0.beta1/filters/zeromq.html deleted file mode 100644 index 376ef13dd..000000000 --- a/docs/1.2.0.beta1/filters/zeromq.html +++ /dev/null @@ -1,280 +0,0 @@ ---- -title: logstash docs for filters/zeromq -layout: content_right ---- -

zeromq

-

Milestone: 1

- -

ZeroMQ filter. This is the best way to send an event externally for filtering. -It works much like an exec filter would, by sending the event "offsite" -for processing and waiting for a response.

- -

The protocol here is: - * REQ sent with JSON-serialized logstash event - * REP read expected to be the full JSON 'filtered' event - * if the reply read is an empty string, it will cancel the event.

- -

Note that this is a limited subset of the zeromq functionality in -inputs and outputs. The only topology that makes sense here is: -REQ/REP.

- - -

Synopsis

- -This is what it might look like in your config file: - -
filter {
-  zeromq {
-    add_field => ... # hash (optional), default: {}
-    add_tag => ... # array (optional), default: []
-    address => ... # string (optional), default: "tcp://127.0.0.1:2121"
-    field => ... # string (optional)
-    mode => ... # string, one of ["server", "client"] (optional), default: "client"
-    remove_field => ... # array (optional), default: []
-    remove_tag => ... # array (optional), default: []
-    sockopt => ... # hash (optional)
-}
-
-}
-
- -

Details

- -

- - add_field - - -

- - - -

If this filter is successful, add any arbitrary fields to this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  zeromq {
-    add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.

- -

- - add_tag - - -

- - - -

If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  zeromq {
-    add_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"

- -

- - address - - -

- - - -

0mq socket address to connect or bind to. -Please note that inproc:// will not work with logstash, -as we use a context per thread. -By default, filters connect.

- -

- - exclude_tags - DEPRECATED - -

- - - -

Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.

- -

- - field - - -

- - - -

The field to send off-site for processing. -If this is unset, the whole event will be sent. -TODO (lusis): -Allow filtering multiple fields.

- -

- - mode - - -

- - - -

0mq mode: -server mode binds/listens, -client mode connects.
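As a sketch, a filter that listens (binds) rather than connecting might look like this (the address is illustrative):

```
filter {
  zeromq {
    mode    => "server"              # bind/listen rather than connect
    address => "tcp://0.0.0.0:2121"  # socket address to bind to
  }
}
```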

- -

- - remove_field - - -

- - - -

If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  zeromq {
-    remove_field => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present

- -

- - remove_tag - - -

- - - -

If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:

- -
filter {
-  zeromq {
-    remove_tag => [ "foo_%{somefield}" ]
-  }
-}
-
- -

If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present

- -

- - sockopt - - -

- - - -

0mq socket options -This exposes zmq_setsockopt -for advanced tuning -see http://api.zeromq.org/2-1:zmq-setsockopt for details

- -

This is where you would set values like: -ZMQ::HWM - high water mark -ZMQ::IDENTITY - named queues -ZMQ::SWAP_SIZE - space for disk overflow -ZMQ::SUBSCRIBE - topic filters for pubsub

- -

example: sockopt => ["ZMQ::HWM", 50, "ZMQ::IDENTITY", "mynamedqueue"]

- -

- - tags - DEPRECATED - -

- - - -

Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.

- -

- - type - DEPRECATED - -

- - - -

Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

- - -
- -This is documentation from lib/logstash/filters/zeromq.rb diff --git a/docs/1.2.0.beta1/flags.md b/docs/1.2.0.beta1/flags.md deleted file mode 100644 index 8ebcf1457..000000000 --- a/docs/1.2.0.beta1/flags.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: Command-line flags - logstash -layout: content_right ---- -# Command-line flags - -## Agent - -The logstash agent has the following flags (also try using the '--help' flag) - -
-
-f, --config CONFIGFILE
-
Load the logstash config from a specific file, directory, or a -wildcard. If given a directory or wildcard, config files will be read -from the directory in alphabetical order.
-
-e CONFIGSTRING
-
Use the given string as the configuration data. Same syntax as the -config file. If no input is specified, 'stdin { type => stdin }' is -the default. If no output is specified, 'stdout { debug => true }' is -the default.
-
-w, --filterworkers COUNT
-
Run COUNT filter workers (default: 1)
-
--watchdog-timeout TIMEOUT
-
Set watchdog timeout value in seconds. Default is 10.
-
-l, --log FILE
-
Log to a given path. Default is to log to stdout
-
-v
-
Increase verbosity. There are multiple levels of verbosity available with -'-vv' currently being the highest
-
--pluginpath PLUGIN_PATH
-
A colon-delimited path in which to find other logstash plugins.
-
- -Note: Plugins can provide additional command-line flags, such as the -[grok](filters/grok) filter. Plugin-specific flags always start with the plugin -name, like --grok-foo. - ## Web UI - The logstash web interface has the following flags (also try using the '--help' -flag) - 
-
--log FILE
-
Log to a given path. Default is stdout.
-
--address ADDRESS
-
Address on which to start webserver. Default is 0.0.0.0.
-
--port PORT
-
Port on which to start webserver. Default is 9292.
-
-B, --elasticsearch-bind-host ADDRESS
-
Address on which to bind elastic search node.
-
-b, --backend URL
-
The backend URL to use. Default is elasticsearch:/// (assumes multicast discovery). -You can specify elasticsearch://[host][:port]/[clustername]
-
diff --git a/docs/1.2.0.beta1/generate_index.rb b/docs/1.2.0.beta1/generate_index.rb deleted file mode 100644 index 6e7bed8e4..000000000 --- a/docs/1.2.0.beta1/generate_index.rb +++ /dev/null @@ -1,28 +0,0 @@ -#!/usr/bin/env ruby - -require "erb" - -if ARGV.size != 1 - $stderr.puts "No path given to search for plugin docs" - $stderr.puts "Usage: #{$0} plugin_doc_dir" - exit 1 -end - -def plugins(glob) - files = Dir.glob(glob) - names = files.collect { |f| File.basename(f).gsub(".html", "") } - return names.sort -end # def plugins - -basedir = ARGV[0] -docs = { - "inputs" => plugins(File.join(basedir, "inputs/*.html")), - "codecs" => plugins(File.join(basedir, "codecs/*.html")), - "filters" => plugins(File.join(basedir, "filters/*.html")), - "outputs" => plugins(File.join(basedir, "outputs/*.html")), -} - -template_path = File.join(File.dirname(__FILE__), "index.html.erb") -template = File.new(template_path).read -erb = ERB.new(template, nil, "-") -puts erb.result(binding) diff --git a/docs/1.2.0.beta1/index.html b/docs/1.2.0.beta1/index.html deleted file mode 100644 index d893ab758..000000000 --- a/docs/1.2.0.beta1/index.html +++ /dev/null @@ -1,462 +0,0 @@ ---- -title: logstash docs index -layout: content_right ---- -
- -

for users

- - -

for developers

-
  • writing your own plugins
  • - - -

    use cases and tutorials

    - - - -

    books and articles

    - - - -

    plugin documentation

    -
    -

    inputs

    - -
    -
    -

    codecs

    - -
    -
    -

    filters

    - -
    -
    -

    outputs

    - -
    -
    -
    diff --git a/docs/1.2.0.beta1/index.html.erb b/docs/1.2.0.beta1/index.html.erb deleted file mode 100644 index 50e2a1b96..000000000 --- a/docs/1.2.0.beta1/index.html.erb +++ /dev/null @@ -1,53 +0,0 @@ ---- -title: logstash docs index -layout: content_right ---- -
    - -

    for users

    - - -

    for developers

    -
  • writing your own plugins
  • - - -

    use cases and tutorials

    - - - -

    books and articles

    - - - -

    plugin documentation

    -<% docs.each do |type, paths| -%> -
    -

    <%= type %>

    - -
    -<% end -%> -
    -
    diff --git a/docs/1.2.0.beta1/inputs/amqp.html b/docs/1.2.0.beta1/inputs/amqp.html deleted file mode 100644 index 7d4dfa88f..000000000 --- a/docs/1.2.0.beta1/inputs/amqp.html +++ /dev/null @@ -1,445 +0,0 @@ ---- -title: logstash docs for inputs/amqp -layout: content_right ---- -

    amqp

    -

    Milestone: 2

    - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  amqp {
    -    ack => ... # boolean (optional), default: true
    -    add_field => ... # hash (optional), default: {}
    -    arguments => ... # array (optional), default: {}
    -    auto_delete => ... # boolean (optional), default: true
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    durable => ... # boolean (optional), default: false
    -    exchange => ... # string (optional)
    -    exclusive => ... # boolean (optional), default: true
    -    host => ... # string (required)
    -    key => ... # string (optional), default: "logstash"
    -    passive => ... # boolean (optional), default: false
    -    password => ... # password (optional), default: "guest"
    -    port => ... # number (optional), default: 5672
    -    prefetch_count => ... # number (optional), default: 256
    -    queue => ... # string (optional), default: ""
    -    ssl => ... # boolean (optional), default: false
    -    tags => ... # array (optional)
    -    threads => ... # number (optional), default: 1
    -    type => ... # string (optional)
    -    user => ... # string (optional), default: "guest"
    -    verify_ssl => ... # boolean (optional), default: false
    -    vhost => ... # string (optional), default: "/"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - ack - - -

    - - - - - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - arguments - - -

    - - - - - -

    - - auto_delete - - -

    - - - - - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - durable - - -

    - - - - - -

    - - exchange - - -

    - - - - - -

    - - exclusive - - -

    - - - - - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host (required setting) - - -

    - - - - - -

    - - key - - -

    - - - - - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - passive - - -

    - - - - - -

    - - password - - -

    - - - - - -

    - - port - - -

    - - - - - -

    - - prefetch_count - - -

    - - - - - -

    - - queue - - -

    - - - - - -

    - - ssl - - -

    - - - - - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - threads - - -

    - - - - - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - -

    - - user - - -

    - - - - - -

    - - verify_ssl - - -

    - - - - - -

    - - vhost - - -

    - - - - - - -
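Since most of the options above are optional, a minimal amqp input might look like this (the host, queue, and key values are illustrative):

```
input {
  amqp {
    host  => "amqp.example.com"  # required: the AMQP broker to connect to
    queue => "logstash"          # queue to consume from
    key   => "logstash"          # routing key to bind with
  }
}
```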
    - -This is documentation from lib/logstash/inputs/amqp.rb diff --git a/docs/1.2.0.beta1/inputs/drupal_dblog.html b/docs/1.2.0.beta1/inputs/drupal_dblog.html deleted file mode 100644 index e24106ded..000000000 --- a/docs/1.2.0.beta1/inputs/drupal_dblog.html +++ /dev/null @@ -1,250 +0,0 @@ ---- -title: logstash docs for inputs/drupal_dblog -layout: content_right ---- -

    drupal_dblog

    -

    Milestone: 1

    - -

    Retrieve watchdog log events from a Drupal installation with DBLog enabled. -The events are pulled out directly from the database. -The original events are not deleted, and on every consecutive run only new -events are pulled.

    - -

The last watchdog event id that was processed is stored in the Drupal -variable table with the name "logstash_last_wid". Delete this variable or -set it to 0 if you want to re-import all events.

    - -

    More info on DBLog: http://drupal.org/documentation/modules/dblog

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  drupal_dblog {
    -    add_field => ... # hash (optional), default: {}
    -    add_usernames => ... # boolean (optional), default: false
    -    bulksize => ... # number (optional), default: 5000
    -    codec => ... # codec (optional), default: "plain"
    -    databases => ... # hash (optional)
    -    debug => ... # boolean (optional), default: false
    -    interval => ... # number (optional), default: 10
    -    tags => ... # array (optional)
    -    type => ... # string (optional), default: "watchdog"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - add_usernames - - -

    - - - -

By default, the event only contains the current user id as a field. -If you wish to add the username as an additional field, set this to true.

    - -

    - - bulksize - - -

    - - - -

    The amount of log messages that should be fetched with each query. -Bulk fetching is done to prevent querying huge data sets when lots of -messages are in the database.

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - databases - - -

    - - - -

Specify all drupal databases that you wish to import from. -This can be as many as you wish. -The format is a hash, with a unique site name as the key, and a database -url as the value.

    - -

    Example: -[ - "site1", "mysql://user1:password@host1.com/databasename", - "other_site", "mysql://user2:password@otherhost.com/databasename", - ... -]

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - interval - - -

    - - - -

    Time between checks in minutes.

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Label this input with a type. -Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - - -
    - -This is documentation from lib/logstash/inputs/drupal_dblog.rb diff --git a/docs/1.2.0.beta1/inputs/elasticsearch.html b/docs/1.2.0.beta1/inputs/elasticsearch.html deleted file mode 100644 index c90162114..000000000 --- a/docs/1.2.0.beta1/inputs/elasticsearch.html +++ /dev/null @@ -1,254 +0,0 @@ ---- -title: logstash docs for inputs/elasticsearch -layout: content_right ---- -

    elasticsearch

    -

    Milestone: 1

    - -

    Read from elasticsearch.

    - -

    This is useful for replay testing logs, reindexing, etc.

    - -

    Example:

    - -
    input {
    -  # Read all documents from elasticsearch matching the given query
    -  elasticsearch {
    -    host => "localhost"
    -    query => "ERROR"
    -  }
    -}
    -
    - - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  elasticsearch {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    host => ... # string (required)
    -    index => ... # string (optional), default: "logstash-*"
    -    port => ... # number (optional), default: 9200
    -    query => ... # string (optional), default: "*"
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host (required setting) - - -

    - - - -

    The address of your elasticsearch server

    - -

    - - index - - -

    - - - -

    The index to search

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - port - - -

    - - - -

    The http port of your elasticsearch server's REST interface

    - -

    - - query - - -

    - - - -

    The query to use

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/elasticsearch.rb diff --git a/docs/1.2.0.beta1/inputs/eventlog.html b/docs/1.2.0.beta1/inputs/eventlog.html deleted file mode 100644 index d43feb834..000000000 --- a/docs/1.2.0.beta1/inputs/eventlog.html +++ /dev/null @@ -1,200 +0,0 @@ ---- -title: logstash docs for inputs/eventlog -layout: content_right ---- -

    eventlog

    -

    Milestone: 2

    - -

    Pull events from a Windows Event Log

    - -

    To collect Events from the System Event Log, use a config like:

    - -
    input {
    -  eventlog {
    -    type  => 'Win32-EventLog'
    -    logfile  => 'System'
    -  }
    -}
    -
    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  eventlog {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    logfile => ... # array (optional), default: ["Application", "Security", "System"]
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - logfile - - -

    - - - -

    Event Log Name

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/eventlog.rb diff --git a/docs/1.2.0.beta1/inputs/exec.html b/docs/1.2.0.beta1/inputs/exec.html deleted file mode 100644 index 91f78a280..000000000 --- a/docs/1.2.0.beta1/inputs/exec.html +++ /dev/null @@ -1,214 +0,0 @@ ---- -title: logstash docs for inputs/exec -layout: content_right ---- -

    exec

    -

    Milestone: 2

    - -

    Run command line tools and capture the whole output as an event.

    - -

    Notes:

    - - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  exec {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    command => ... # string (required)
    -    debug => ... # boolean (optional), default: false
    -    interval => ... # number (required)
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - command (required setting) - - -

    - - - -

    Command to run. For example, "uptime"

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - interval (required setting) - - -

    - - - -

    Interval to run the command. Value is in seconds.
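Taken together with command, a minimal sketch of this input (the command and interval values are illustrative):

```
input {
  exec {
    command  => "uptime"  # required: run this command...
    interval => 30        # required: ...every 30 seconds
  }
}
```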

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/exec.rb diff --git a/docs/1.2.0.beta1/inputs/file.html b/docs/1.2.0.beta1/inputs/file.html deleted file mode 100644 index 54f6c2ee0..000000000 --- a/docs/1.2.0.beta1/inputs/file.html +++ /dev/null @@ -1,311 +0,0 @@ ---- -title: logstash docs for inputs/file -layout: content_right ---- -

    file

    -

    Milestone: 2

    - -

    Stream events from files.

    - -

    By default, each event is assumed to be one line. If you -want to join lines, you'll want to use the multiline filter.

    - -

    Files are followed in a manner similar to "tail -0F". File rotation -is detected and handled by this input.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  file {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    discover_interval => ... # number (optional), default: 15
    -    exclude => ... # array (optional)
    -    path => ... # array (required)
    -    sincedb_path => ... # string (optional)
    -    sincedb_write_interval => ... # number (optional), default: 15
    -    start_position => ... # string, one of ["beginning", "end"] (optional), default: "end"
    -    stat_interval => ... # number (optional), default: 1
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - discover_interval - - -

    - - - -

    How often we expand globs to discover new files to watch.

    - -

    - - exclude - - -

    - - - -

    Exclusions (matched against the filename, not full path). Globs -are valid here, too. For example, if you have

    - -
    path => "/var/log/*"
    -
    - -

    you might want to exclude gzipped files:

    - -
    exclude => "*.gz"
    -
    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - path (required setting) - - -

    - - - -

    The path to the file to use as an input. -You can use globs here, such as /var/log/*.log -Paths must be absolute and cannot be relative.

    - -

    - - sincedb_path - - -

    - - - -

    Where to write the since database (keeps track of the current -position of monitored log files). The default will write -sincedb files to some path matching "$HOME/.sincedb*"

    - -

    - - sincedb_write_interval - - -

    - - - -

    How often to write a since database with the current position of -monitored log files.

    - -

    - - start_position - - -

    - - - -

    Choose where logstash starts initially reading files - at the beginning or -at the end. The default behavior treats files like live streams and thus -starts at the end. If you have old data you want to import, set this -to 'beginning'

    - -

    This option only modifies "first contact" situations where a file is new -and not seen before. If a file has already been seen before, this option -has no effect.

    - -
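    For instance, a hedged sketch of importing an existing file from the top (the path is only illustrative):

```
input {
  file {
    path           => "/var/log/old-app.log"  # must be an absolute path
    start_position => "beginning"             # only applies on first contact
  }
}
```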

    - - stat_interval - - -

    - - - -

    How often we stat files to see if they have been modified. Increasing -this interval will decrease the number of system calls we make, but -increase the time to detect new log lines.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/file.rb diff --git a/docs/1.2.0.beta1/inputs/ganglia.html b/docs/1.2.0.beta1/inputs/ganglia.html deleted file mode 100644 index 339c6ed92..000000000 --- a/docs/1.2.0.beta1/inputs/ganglia.html +++ /dev/null @@ -1,206 +0,0 @@ ---- -title: logstash docs for inputs/ganglia -layout: content_right ---- -

    ganglia

    -

    Milestone: 1

    - -

    Read ganglia packets from the network via udp

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  ganglia {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    host => ... # string (optional), default: "0.0.0.0"
    -    port => ... # number (optional), default: 8649
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host - - -

    - - - -

    The address to listen on

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - port - - -

    - - - -

    The port to listen on. Remember that ports less than 1024 (privileged -ports) may require root to use.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/ganglia.rb diff --git a/docs/1.2.0.beta1/inputs/gelf.html b/docs/1.2.0.beta1/inputs/gelf.html deleted file mode 100644 index 2ec66420a..000000000 --- a/docs/1.2.0.beta1/inputs/gelf.html +++ /dev/null @@ -1,239 +0,0 @@ ---- -title: logstash docs for inputs/gelf -layout: content_right ---- -

    gelf

    -

    Milestone: 2

    - -

    Read gelf messages as events over the network.

    - -

    This input is a good choice if you already use graylog2 today.

    - -

    The main reasoning for this input is to leverage existing GELF -logging libraries such as the gelf log4j appender

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  gelf {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    host => ... # string (optional), default: "0.0.0.0"
    -    port => ... # number (optional), default: 12201
    -    remap => ... # boolean (optional), default: true
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host - - -

    - - - -

    The address to listen on

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - port - - -

    - - - -

    The port to listen on. Remember that ports less than 1024 (privileged -ports) may require root to use.

    - -

    - - remap - - -

    - - - -

    Whether or not to remap the gelf message fields to logstash event fields or -leave them intact.

    - -

    Default is true

    - -

    Remapping converts the following gelf fields to logstash equivalents:

    - - - - -
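    If you would rather keep the raw gelf fields than remap them, a sketch might look like this (host and port are shown at their defaults):

```
input {
  gelf {
    host  => "0.0.0.0"
    port  => 12201
    remap => false  # leave gelf fields (e.g. short_message) intact
  }
}
```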

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/gelf.rb diff --git a/docs/1.2.0.beta1/inputs/gemfire.html b/docs/1.2.0.beta1/inputs/gemfire.html deleted file mode 100644 index aaf65282a..000000000 --- a/docs/1.2.0.beta1/inputs/gemfire.html +++ /dev/null @@ -1,306 +0,0 @@ ---- -title: logstash docs for inputs/gemfire -layout: content_right ---- -

    gemfire

    -

    Milestone: 1

    - -

    Read events from a GemFire region.

    - -

    GemFire is an object database.

    - -

    To use this plugin you need to add gemfire.jar to your CLASSPATH. -Using format=json requires jackson.jar too; use of continuous -queries requires antlr.jar.

    - -

    Note: this plugin has only been tested with GemFire 7.0.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  gemfire {
    -    add_field => ... # hash (optional), default: {}
    -    cache_name => ... # string (optional), default: "logstash"
    -    cache_xml_file => ... # string (optional), default: nil
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    interest_regexp => ... # string (optional), default: ".*"
    -    query => ... # string (optional), default: nil
    -    region_name => ... # string (optional), default: "Logstash"
    -    serialization => ... # string (optional), default: nil
    -    tags => ... # array (optional)
    -    threads => ... # number (optional), default: 1
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - cache_name - - -

    - - - -

    Your client cache name

    - -

    - - cache_xml_file - - -

    - - - -

    The path to a GemFire client cache XML file.

    - -

    Example:

    - -
     <client-cache>
    -   <pool name="client-pool" subscription-enabled="true" subscription-redundancy="1">
    -       <locator host="localhost" port="31331"/>
    -   </pool>
    -   <region name="Logstash">
    -       <region-attributes refid="CACHING_PROXY" pool-name="client-pool" >
    -       </region-attributes>
    -   </region>
    - </client-cache>
    -
    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - interest_regexp - - -

    - - - -

    A regexp to use when registering interest for cache events. -Ignored if a :query is specified.

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - query - - -

    - - - -

    A query to run as a GemFire "continuous query"; if specified it takes -precedence over :interest_regexp, which will be ignored.

    - -

    Important: use of continuous queries requires subscriptions to be enabled on the client pool.

    - -
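    As an illustration only (the OQL string and region name are hypothetical), a continuous-query configuration might look like:

```
input {
  gemfire {
    # Hypothetical continuous query; requires subscriptions
    # to be enabled on the client pool
    query         => "SELECT * FROM /Logstash"
    serialization => "json"
  }
}
```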

    - - region_name - - -

    - - - -

    The region name

    - -

    - - serialization - - -

    - - - -

    How the message is serialized in the cache. Can be one of "json" or "plain"; default is plain

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - threads - - -

    - - - -

    Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/gemfire.rb diff --git a/docs/1.2.0.beta1/inputs/generator.html b/docs/1.2.0.beta1/inputs/generator.html deleted file mode 100644 index 12a8885fe..000000000 --- a/docs/1.2.0.beta1/inputs/generator.html +++ /dev/null @@ -1,266 +0,0 @@ ---- -title: logstash docs for inputs/generator -layout: content_right ---- -

    generator

    -

    Milestone: 3

    - -

    Generate random log events.

    - -

    The general intention of this is to test performance of plugins.

    - -

    An event is generated first

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  generator {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    count => ... # number (optional), default: 0
    -    debug => ... # boolean (optional), default: false
    -    lines => ... # array (optional)
    -    message => ... # string (optional), default: "Hello world!"
    -    tags => ... # array (optional)
    -    threads => ... # number (optional), default: 1
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - count - - -

    - - - -

    Set how many messages should be generated.

    - -

    The default, 0, means generate an unlimited number of events.

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - lines - - -

    - - - -

    The lines to emit, in order. This option cannot be used with the 'message' -setting.

    - -

    Example:

    - -
    input {
-  generator {
-    lines => [
-      "line 1",
-      "line 2",
-      "line 3"
-    ]
-
-    # Emit all lines 3 times.
-    count => 3
-  }
-}
-
    - -

    The above will emit "line 1" then "line 2" then "line 3", then "line 1", etc...

    - -

    - - message - - -

    - - - -

    The message string to use in the event.

    - -

    If you set this to 'stdin' then this plugin will read a single line from -stdin and use that as the message string for every event.

    - -

    Otherwise, this value will be used verbatim as the event message.

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - threads - - -

    - - - -

    Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/generator.rb diff --git a/docs/1.2.0.beta1/inputs/graphite.html b/docs/1.2.0.beta1/inputs/graphite.html deleted file mode 100644 index 4a783c182..000000000 --- a/docs/1.2.0.beta1/inputs/graphite.html +++ /dev/null @@ -1,325 +0,0 @@ ---- -title: logstash docs for inputs/graphite -layout: content_right ---- -

    graphite

    -

    Milestone: 1

    - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  graphite {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    data_timeout => ... # number (optional), default: -1
    -    debug => ... # boolean (optional), default: false
    -    host => ... # string (optional), default: "0.0.0.0"
    -    mode => ... # string, one of ["server", "client"] (optional), default: "server"
    -    port => ... # number (required)
    -    ssl_cacert => ... # a valid filesystem path (optional)
    -    ssl_cert => ... # a valid filesystem path (optional)
    -    ssl_enable => ... # boolean (optional), default: false
    -    ssl_key => ... # a valid filesystem path (optional)
    -    ssl_key_passphrase => ... # password (optional), default: nil
    -    ssl_verify => ... # boolean (optional), default: false
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - data_timeout - - -

    - - - - - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host - - -

    - - - - - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - mode - - -

    - - - - - -

    - - port (required setting) - - -

    - - - - - -

    - - ssl_cacert - - -

    - - - - - -

    - - ssl_cert - - -

    - - - - - -

    - - ssl_enable - - -

    - - - - - -

    - - ssl_key - - -

    - - - - - -

    - - ssl_key_passphrase - - -

    - - - - - -

    - - ssl_verify - - -

    - - - - - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/graphite.rb diff --git a/docs/1.2.0.beta1/inputs/heroku.html b/docs/1.2.0.beta1/inputs/heroku.html deleted file mode 100644 index 03abc75f9..000000000 --- a/docs/1.2.0.beta1/inputs/heroku.html +++ /dev/null @@ -1,204 +0,0 @@ ---- -title: logstash docs for inputs/heroku -layout: content_right ---- -

    heroku

    -

    Milestone: 1

    - -

    Stream events from a heroku app's logs.

    - -

    This will read events in a manner similar to how the heroku logs -t command -fetches logs.

    - -

    Recommended filters:

    - -
    filter {
    -  grok {
    -    pattern => "^%{TIMESTAMP_ISO8601:timestamp} %{WORD:component}\[%{WORD:process}(?:\.%{INT:instance:int})?\]: %{DATA:message}$"
    -  }
    -  date { timestamp => ISO8601 }
    -}
    -
    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  heroku {
    -    add_field => ... # hash (optional), default: {}
    -    app => ... # string (required)
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - app (required setting) - - -

    - - - -

    The name of your heroku application. This is usually the first part of -the domain name 'my-app-name.herokuapp.com'

    - -
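    So for a hypothetical app served at 'my-app-name.herokuapp.com', a minimal configuration would be:

```
input {
  heroku {
    app => "my-app-name"  # hypothetical app name
  }
}
```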

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/heroku.rb diff --git a/docs/1.2.0.beta1/inputs/imap.html b/docs/1.2.0.beta1/inputs/imap.html deleted file mode 100644 index 88cda1cb4..000000000 --- a/docs/1.2.0.beta1/inputs/imap.html +++ /dev/null @@ -1,313 +0,0 @@ ---- -title: logstash docs for inputs/imap -layout: content_right ---- -

    imap

    -

    Milestone: 1

    - -

    Read mail from IMAP servers

    - -

    Periodically scans INBOX and moves any read messages -to the trash.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  imap {
    -    add_field => ... # hash (optional), default: {}
    -    check_interval => ... # number (optional), default: 300
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    delete => ... # boolean (optional), default: false
    -    fetch_count => ... # number (optional), default: 50
    -    host => ... # string (required)
    -    lowercase_headers => ... # boolean (optional), default: true
    -    password => ... # password (required)
    -    port => ... # number (optional)
    -    secure => ... # boolean (optional), default: true
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -    user => ... # string (required)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - check_interval - - -

    - - - - - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - delete - - -

    - - - - - -

    - - fetch_count - - -

    - - - - - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host (required setting) - - -

    - - - - - -

    - - lowercase_headers - - -

    - - - - - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - password (required setting) - - -

    - - - - - -

    - - port - - -

    - - - - - -

    - - secure - - -

    - - - - - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - -

    - - user (required setting) - - -

    - - - - - - -
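As a concrete illustration, here is a minimal sketch of an imap input polling a mailbox every five minutes; the host, user, and password values are hypothetical placeholders:

```
input {
  imap {
    host => "imap.example.com"        # hypothetical IMAP server
    user => "logstash@example.com"    # hypothetical account
    password => "changeme"            # hypothetical password
    secure => true                    # connect over SSL (the default)
    check_interval => 300             # poll every 300 seconds (the default)
    type => "mail"
  }
}
```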
    - -This is documentation from lib/logstash/inputs/imap.rb diff --git a/docs/1.2.0.beta1/inputs/irc.html b/docs/1.2.0.beta1/inputs/irc.html deleted file mode 100644 index 16bd9ac09..000000000 --- a/docs/1.2.0.beta1/inputs/irc.html +++ /dev/null @@ -1,298 +0,0 @@ ---- -title: logstash docs for inputs/irc -layout: content_right ---- -

    irc

    -

    Milestone: 1

    - -

    Read events from an IRC Server.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  irc {
    -    add_field => ... # hash (optional), default: {}
    -    channels => ... # array (required)
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    host => ... # string (required)
    -    nick => ... # string (optional), default: "logstash"
    -    password => ... # password (optional)
    -    port => ... # number (optional), default: 6667
    -    real => ... # string (optional), default: "logstash"
    -    secure => ... # boolean (optional), default: false
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -    user => ... # string (optional), default: "logstash"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - channels (required setting) - - -

    - - - -

    Channels to join and read messages from.

    - -

    These should be full channel names including the '#' symbol, such as -"#logstash".

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host (required setting) - - -

    - - - -

    Host of the IRC Server to connect to.

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - nick - - -

    - - - -

    IRC Nickname

    - -

    - - password - - -

    - - - -

    IRC Server password

    - -

    - - port - - -

    - - - -

    Port for the IRC Server

    - -

    - - real - - -

    - - - -

    IRC Real name

    - -

    - - secure - - -

    - - - -

    Set this to true to enable SSL.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - -

    - - user - - -

    - - - -

    IRC Username

    - - -
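Putting the required settings together, a minimal sketch of an irc input reading two channels (the server name and channel names are hypothetical):

```
input {
  irc {
    host => "irc.example.org"           # hypothetical IRC server
    channels => ["#logstash", "#ops"]   # full channel names including '#'
    nick => "logstash"                  # the default nickname
    type => "irc"
  }
}
```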
    - -This is documentation from lib/logstash/inputs/irc.rb diff --git a/docs/1.2.0.beta1/inputs/log4j.html b/docs/1.2.0.beta1/inputs/log4j.html deleted file mode 100644 index 37d9ca843..000000000 --- a/docs/1.2.0.beta1/inputs/log4j.html +++ /dev/null @@ -1,244 +0,0 @@ ---- -title: logstash docs for inputs/log4j -layout: content_right ---- -

    log4j

    -

    Milestone: 1

    - -

Read events over a TCP socket from a Log4j SocketAppender.

    - -

Can either accept connections from clients or connect to a server, depending on mode. You need a matching SocketAppender or SocketHubAppender on the remote side.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  log4j {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    data_timeout => ... # number (optional), default: 5
    -    debug => ... # boolean (optional), default: false
    -    host => ... # string (optional), default: "0.0.0.0"
    -    mode => ... # string, one of ["server", "client"] (optional), default: "server"
    -    port => ... # number (required)
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - data_timeout - - -

    - - - -

    Read timeout in seconds. If a particular tcp connection is -idle for more than this timeout period, we will assume -it is dead and close it. -If you never want to timeout, use -1.

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host - - -

    - - - -

    When mode is server, the address to listen on. -When mode is client, the address to connect to.

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - mode - - -

    - - - -

    Mode to operate in. server listens for client connections, -client connects to a server.

    - -

    - - port (required setting) - - -

    - - - -

    When mode is server, the port to listen on. -When mode is client, the port to connect to.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
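For example, a minimal sketch of a log4j input in the default server mode, waiting for SocketAppender connections; the port is an arbitrary choice, not a plugin default:

```
input {
  log4j {
    mode => "server"      # listen for client connections (the default)
    host => "0.0.0.0"     # listen on all interfaces (the default)
    port => 4560          # arbitrary example port; must match the SocketAppender
    type => "log4j"
  }
}
```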
    - -This is documentation from lib/logstash/inputs/log4j.rb diff --git a/docs/1.2.0.beta1/inputs/lumberjack.html b/docs/1.2.0.beta1/inputs/lumberjack.html deleted file mode 100644 index 59112ec22..000000000 --- a/docs/1.2.0.beta1/inputs/lumberjack.html +++ /dev/null @@ -1,253 +0,0 @@ ---- -title: logstash docs for inputs/lumberjack -layout: content_right ---- -

    lumberjack

    -

    Milestone: 1

    - -

    Receive events using the lumberjack protocol.

    - -

This input is mainly intended to receive events shipped with lumberjack: http://github.com/jordansissel/lumberjack

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  lumberjack {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    host => ... # string (optional), default: "0.0.0.0"
    -    port => ... # number (required)
    -    ssl_certificate => ... # a valid filesystem path (required)
    -    ssl_key => ... # a valid filesystem path (required)
    -    ssl_key_passphrase => ... # password (optional)
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host - - -

    - - - -

The address to listen on.

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - port (required setting) - - -

    - - - -

The port to listen on.

    - -

    - - ssl_certificate (required setting) - - -

    - - - -

SSL certificate to use.

    - -

    - - ssl_key (required setting) - - -

    - - - -

SSL key to use.

    - -

    - - ssl_key_passphrase - - -

    - - - -

SSL key passphrase to use.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
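Since the port, certificate, and key are all required, a minimal lumberjack input looks like the sketch below; the port and filesystem paths are hypothetical:

```
input {
  lumberjack {
    port => 5043                                   # arbitrary example port
    ssl_certificate => "/etc/ssl/logstash.crt"     # hypothetical certificate path
    ssl_key => "/etc/ssl/logstash.key"             # hypothetical key path
    type => "lumberjack"
  }
}
```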
    - -This is documentation from lib/logstash/inputs/lumberjack.rb diff --git a/docs/1.2.0.beta1/inputs/pipe.html b/docs/1.2.0.beta1/inputs/pipe.html deleted file mode 100644 index b13663211..000000000 --- a/docs/1.2.0.beta1/inputs/pipe.html +++ /dev/null @@ -1,199 +0,0 @@ ---- -title: logstash docs for inputs/pipe -layout: content_right ---- -

    pipe

    -

    Milestone: 1

    - -

    Stream events from a long running command pipe.

    - -

    By default, each event is assumed to be one line. If you -want to join lines, you'll want to use the multiline filter.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  pipe {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    command => ... # string (required)
    -    debug => ... # boolean (optional), default: false
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - command (required setting) - - -

    - - - -

Command to run and read events from, one line at a time. TODO(sissel): This should switch to use the 'line' codec by default once we switch away from doing 'readline'.

    - -

    Example:

    - -

    command => "echo hello world"

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
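To make this concrete, a minimal sketch of a pipe input reading a long-running command, one line per event; the command and type values are hypothetical:

```
input {
  pipe {
    command => "tail -F /var/log/messages"   # hypothetical long-running command
    type => "syslog-pipe"
  }
}
```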
    - -This is documentation from lib/logstash/inputs/pipe.rb diff --git a/docs/1.2.0.beta1/inputs/rabbitmq.html b/docs/1.2.0.beta1/inputs/rabbitmq.html deleted file mode 100644 index 1c98171b8..000000000 --- a/docs/1.2.0.beta1/inputs/rabbitmq.html +++ /dev/null @@ -1,479 +0,0 @@ ---- -title: logstash docs for inputs/rabbitmq -layout: content_right ---- -

    rabbitmq

    -

    Milestone: 1

    - -

    Pull events from a RabbitMQ exchange.

    - -

The default settings create an entirely transient queue and listen for all messages. If you need durability or other advanced settings, set the appropriate options.

    - -

This has been tested with Bunny 0.9.x, which supports RabbitMQ 2.x and 3.x.

    - - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  rabbitmq {
    -    ack => ... # boolean (optional), default: true
    -    add_field => ... # hash (optional), default: {}
    -    arguments => ... # array (optional), default: {}
    -    auto_delete => ... # boolean (optional), default: true
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    durable => ... # boolean (optional), default: false
    -    exchange => ... # string (optional)
    -    exclusive => ... # boolean (optional), default: true
    -    host => ... # string (required)
    -    key => ... # string (optional), default: "logstash"
    -    passive => ... # boolean (optional), default: false
    -    password => ... # password (optional), default: "guest"
    -    port => ... # number (optional), default: 5672
    -    prefetch_count => ... # number (optional), default: 256
    -    queue => ... # string (optional), default: ""
    -    ssl => ... # boolean (optional), default: false
    -    tags => ... # array (optional)
    -    threads => ... # number (optional), default: 1
    -    type => ... # string (optional)
    -    user => ... # string (optional), default: "guest"
    -    verify_ssl => ... # boolean (optional), default: false
    -    vhost => ... # string (optional), default: "/"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - ack - - -

    - - - -

    Enable message acknowledgement

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - arguments - - -

    - - - -

    Extra queue arguments as an array. -To make a RabbitMQ queue mirrored, use: {"x-ha-policy" => "all"}

    - -

    - - auto_delete - - -

    - - - -

    Should the queue be deleted on the broker when the last consumer -disconnects? Set this option to 'false' if you want the queue to remain -on the broker, queueing up messages until a consumer comes along to -consume them.

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Enable or disable logging

    - -

    - - durable - - -

    - - - -

Is this queue durable? (aka: Should it survive a broker restart?)

    - -

    - - exchange - - -

    - - - -

    (Optional, backwards compatibility) Exchange binding

    - -

    Optional.

    - -

    The name of the exchange to bind the queue to.

    - -

    - - exclusive - - -

    - - - -

Is the queue exclusive? (aka: Should the queue be usable only by the connection that declared it?)

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host (required setting) - - -

    - - - -

    Connection

    - -

    RabbitMQ server address

    - -

    - - key - - -

    - - - -

    Optional.

    - -

    The routing key to use when binding a queue to the exchange. -This is only relevant for direct or topic exchanges.

    - - - - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - passive - - -

    - - - -

Passive queue creation? Useful for checking queue existence without modifying server state.

    - -

    - - password - - -

    - - - -

    RabbitMQ password

    - -

    - - port - - -

    - - - -

    RabbitMQ port to connect on

    - -

    - - prefetch_count - - -

    - - - -

Prefetch count: the number of messages to prefetch.

    - -

    - - queue - - -

    - - - -

    Queue & Consumer

    - -

    The name of the queue Logstash will consume events from.

    - -

    - - ssl - - -

    - - - -

    Enable or disable SSL

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - threads - - -

    - - - -

    Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - -

    - - user - - -

    - - - -

    RabbitMQ username

    - -

    - - verify_ssl - - -

    - - - -

    Validate SSL certificate

    - -

    - - vhost - - -

    - - - -

    The vhost to use. If you don't know what this is, leave the default.

    - - -
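Combining the durability-related options described above, here is a sketch of a rabbitmq input consuming from a durable, named queue bound to an exchange; the host, queue, and exchange names are hypothetical:

```
input {
  rabbitmq {
    host => "rabbitmq.example.com"   # hypothetical broker address
    queue => "logstash"              # named queue instead of the transient default
    exchange => "logs"               # hypothetical exchange to bind the queue to
    key => "logstash"                # routing key (the default)
    durable => true                  # survive broker restarts
    auto_delete => false             # keep the queue when consumers disconnect
    exclusive => false               # allow other consumers on this queue
    type => "rabbitmq"
  }
}
```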
    - -This is documentation from lib/logstash/inputs/rabbitmq.rb diff --git a/docs/1.2.0.beta1/inputs/redis.html b/docs/1.2.0.beta1/inputs/redis.html deleted file mode 100644 index 9742e064f..000000000 --- a/docs/1.2.0.beta1/inputs/redis.html +++ /dev/null @@ -1,355 +0,0 @@ ---- -title: logstash docs for inputs/redis -layout: content_right ---- -

    redis

    -

    Milestone: 2

    - -

Read events from a Redis server. Supports both Redis channels and Redis lists (using BLPOP).

    - -

    For more information about redis, see http://redis.io/

    - -

    batch_count note

    - -

If you use the 'batch_count' setting, you must use Redis version 2.6.0 or newer. Anything older does not support the operations used by batching.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  redis {
    -    add_field => ... # hash (optional), default: {}
    -    batch_count => ... # number (optional), default: 1
    -    codec => ... # codec (optional), default: "plain"
    -    data_type => ... # string, one of ["list", "channel", "pattern_channel"] (optional)
    -    db => ... # number (optional), default: 0
    -    debug => ... # boolean (optional), default: false
    -    host => ... # string (optional), default: "127.0.0.1"
    -    key => ... # string (optional)
    -    password => ... # password (optional)
    -    port => ... # number (optional), default: 6379
    -    tags => ... # array (optional)
    -    threads => ... # number (optional), default: 1
    -    timeout => ... # number (optional), default: 5
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - batch_count - - -

    - - - -

How many events to return from Redis using EVAL.

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - data_type - - -

    - - - -

Either list, channel, or pattern_channel. If data_type is list, then we will BLPOP the key. If data_type is channel, then we will SUBSCRIBE to the key. If data_type is pattern_channel, then we will PSUBSCRIBE to the key. TODO: change required to true

    - -

    - - db - - -

    - - - -

The Redis database number.

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host - - -

    - - - -

The hostname of your Redis server.

    - -

    - - key - - -

    - - - -

    The name of a redis list or channel. -TODO: change required to true

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - name - DEPRECATED - -

    - - - -

    Name is used for logging in case there are multiple instances. -This feature has no real function and will be removed in future versions.

    - -

    - - password - - -

    - - - -

    Password to authenticate with. There is no authentication by default.

    - -

    - - port - - -

    - - - -

    The port to connect on.

    - -

    - - queue - DEPRECATED - -

    - - - -

    The name of the redis queue (we'll use BLPOP against this). -TODO: remove soon.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - threads - - -

    - - - -

    Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times

    - -

    - - timeout - - -

    - - - -

    Initial connection timeout in seconds.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
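As an example, a minimal sketch of a redis input doing BLPOP against a list; the key name is hypothetical:

```
input {
  redis {
    host => "127.0.0.1"      # the default Redis host
    data_type => "list"      # BLPOP events off a list
    key => "logstash"        # hypothetical list name
    type => "redis"
  }
}
```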
    - -This is documentation from lib/logstash/inputs/redis.rb diff --git a/docs/1.2.0.beta1/inputs/relp.html b/docs/1.2.0.beta1/inputs/relp.html deleted file mode 100644 index 1e52995c5..000000000 --- a/docs/1.2.0.beta1/inputs/relp.html +++ /dev/null @@ -1,214 +0,0 @@ ---- -title: logstash docs for inputs/relp -layout: content_right ---- -

    relp

    -

    Milestone: 1

    - -

    Read RELP events over a TCP socket.

    - -

    For more information about RELP, see -http://www.rsyslog.com/doc/imrelp.html

    - -

    This protocol implements application-level acknowledgements to help protect -against message loss.

    - -

Message acks only cover delivery up to the point where messages are put into the queue for filters; anything lost after that point will not be retransmitted.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  relp {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    host => ... # string (optional), default: "0.0.0.0"
    -    port => ... # number (required)
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host - - -

    - - - -

    The address to listen on.

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - port (required setting) - - -

    - - - -

    The port to listen on.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
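A minimal sketch of a relp input listening on all interfaces; the port is an arbitrary example, not a plugin default:

```
input {
  relp {
    host => "0.0.0.0"   # listen on all interfaces (the default)
    port => 2514        # arbitrary example port; must match the rsyslog omrelp target
    type => "relp"
  }
}
```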
    - -This is documentation from lib/logstash/inputs/relp.rb diff --git a/docs/1.2.0.beta1/inputs/s3.html b/docs/1.2.0.beta1/inputs/s3.html deleted file mode 100644 index eef753ebf..000000000 --- a/docs/1.2.0.beta1/inputs/s3.html +++ /dev/null @@ -1,322 +0,0 @@ ---- -title: logstash docs for inputs/s3 -layout: content_right ---- -

    s3

    -

    Milestone: 1

    - -

Stream events from files in an S3 bucket.

    - -

Each line from each file generates an event. Files ending in '.gz' are handled as gzipped files.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  s3 {
    -    add_field => ... # hash (optional), default: {}
    -    backup_to_bucket => ... # string (optional), default: nil
    -    backup_to_dir => ... # string (optional), default: nil
    -    bucket => ... # string (required)
    -    codec => ... # codec (optional), default: "plain"
    -    credentials => ... # array (optional), default: nil
    -    debug => ... # boolean (optional), default: false
    -    delete => ... # boolean (optional), default: false
    -    interval => ... # number (optional), default: 60
    -    prefix => ... # string (optional), default: nil
    -    region => ... # string (optional), default: "us-east-1"
    -    sincedb_path => ... # string (optional), default: nil
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - backup_to_bucket - - -

    - - - -

    Name of a S3 bucket to backup processed files to.

    - -

    - - backup_to_dir - - -

    - - - -

    Path of a local directory to backup processed files to.

    - -

    - - bucket (required setting) - - -

    - - - -

    The name of the S3 bucket.

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - credentials - - -

    - - - -

The credentials of the AWS account used to access the bucket. -Credentials can be specified: -- As an ["id","secret"] array -- As a path to a file containing AWS_ACCESS_KEY_ID=... and AWS_SECRET_ACCESS_KEY=... -- In the environment (variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY)

    - -
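For example, a minimal sketch of the inline array form (the bucket name and key values are placeholders):

```
input {
  s3 {
    bucket      => "my-bucket"               # placeholder bucket name
    credentials => ["AKIAEXAMPLE", "secret"] # the ["id","secret"] array form
  }
}
```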

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - delete - - -

    - - - -

    Whether to delete processed files from the original bucket.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - interval - - -

    - - - -

Interval, in seconds, to wait before checking the file list again after a run finishes.

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - prefix - - -

    - - - -

    If specified, the prefix the filenames in the bucket must match (not a regexp)

    - -
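A sketch, assuming files are laid out by date under a "logs/" key prefix:

```
prefix => "logs/2013/"   # matched as a literal string prefix, not a regexp
```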

    - - region - - -

    - - - -

    The AWS region for your bucket.

    - -

    - - sincedb_path - - -

    - - - -

    Where to write the since database (keeps track of the date -the last handled file was added to S3). The default will write -sincedb files to some path matching "$HOME/.sincedb*"

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you -can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/s3.rb diff --git a/docs/1.2.0.beta1/inputs/snmptrap.html b/docs/1.2.0.beta1/inputs/snmptrap.html deleted file mode 100644 index d3d2bc842..000000000 --- a/docs/1.2.0.beta1/inputs/snmptrap.html +++ /dev/null @@ -1,242 +0,0 @@ ---- -title: logstash docs for inputs/snmptrap -layout: content_right ---- -

    snmptrap

    -

    Milestone: 1

    - -

    Read snmp trap messages as events

    - -

    Resulting @message looks like : - #<SNMP::SNMPv1Trap:0x6f1a7a4 @varbindlist=[#<SNMP::VarBind:0x2d7bcd8f @value="teststring", - @name=[1.11.12.13.14.15]>], @timestamp=#<SNMP::TimeTicks:0x1af47e9d @value=55>, @generictrap=6, - @enterprise=[1.2.3.4.5.6], @sourceip="127.0.0.1", @agentaddr=#<SNMP::IpAddress:0x29a4833e @value="\xC0\xC1\xC2\xC3">, - @specifictrap=99>

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  snmptrap {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    community => ... # string (optional), default: "public"
    -    debug => ... # boolean (optional), default: false
    -    host => ... # string (optional), default: "0.0.0.0"
    -    port => ... # number (optional), default: 1062
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -    yamlmibdir => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - community - - -

    - - - -

    SNMP Community String to listen for.

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host - - -

    - - - -

    The address to listen on

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - port - - -

    - - - -

The port to listen on. Remember that ports less than 1024 (privileged -ports) may require root to use; hence the default of 1062.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - -

    - - yamlmibdir - - -

    - - - -

Directory of YAML MIB maps (the same format ruby-snmp uses).

    - - -
    - -This is documentation from lib/logstash/inputs/snmptrap.rb diff --git a/docs/1.2.0.beta1/inputs/sqlite.html b/docs/1.2.0.beta1/inputs/sqlite.html deleted file mode 100644 index 1ef042b09..000000000 --- a/docs/1.2.0.beta1/inputs/sqlite.html +++ /dev/null @@ -1,274 +0,0 @@ ---- -title: logstash docs for inputs/sqlite -layout: content_right ---- -

    sqlite

    -

    Milestone: 1

    - -

    Read rows from an sqlite database.

    - -

    This is most useful in cases where you are logging directly to a table. -Any tables being watched must have an 'id' column that is monotonically -increasing.

    - -

All tables are read by default except: -* ones matching 'sqlite%' - these are internal/administrative tables for sqlite -* 'sincetable' - this is used by this plugin to track state.

    - -

    Example

    - -
    % sqlite /tmp/example.db
    -sqlite> CREATE TABLE weblogs (
    -    id INTEGER PRIMARY KEY AUTOINCREMENT,
    -    ip STRING,
    -    request STRING,
    -    response INTEGER);
    -sqlite> INSERT INTO weblogs (ip, request, response) 
    -    VALUES ("1.2.3.4", "/index.html", 200);
    -
    - -

    Then with this logstash config:

    - -
    input {
    -  sqlite {
    -    path => "/tmp/example.db"
    -    type => weblogs
    -  }
    -}
    -output {
    -  stdout {
    -    debug => true
    -  }
    -}
    -
    - -

    Sample output:

    - -
    {
    -  "@source"      => "sqlite://sadness/tmp/x.db",
    -  "@tags"        => [],
    -  "@fields"      => {
    -    "ip"       => "1.2.3.4",
    -    "request"  => "/index.html",
    -    "response" => 200
    -  },
    -  "@timestamp"   => "2013-05-29T06:16:30.850Z",
    -  "@source_host" => "sadness",
    -  "@source_path" => "/tmp/x.db",
    -  "@message"     => "",
    -  "@type"        => "foo"
    -}
    -
    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  sqlite {
    -    add_field => ... # hash (optional), default: {}
    -    batch => ... # number (optional), default: 5
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    exclude_tables => ... # array (optional), default: []
    -    path => ... # string (required)
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - batch - - -

    - - - -

    How many rows to fetch at a time from each SELECT call.

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - exclude_tables - - -

    - - - -

    Any tables to exclude by name. -By default all tables are followed.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - path (required setting) - - -

    - - - -

    The path to the sqlite database file.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you -can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/sqlite.rb diff --git a/docs/1.2.0.beta1/inputs/sqs.html b/docs/1.2.0.beta1/inputs/sqs.html deleted file mode 100644 index 3933d1c48..000000000 --- a/docs/1.2.0.beta1/inputs/sqs.html +++ /dev/null @@ -1,396 +0,0 @@ ---- -title: logstash docs for inputs/sqs -layout: content_right ---- -

    sqs

    -

    Milestone: 1

    - -

    Pull events from an Amazon Web Services Simple Queue Service (SQS) queue.

    - -

    SQS is a simple, scalable queue system that is part of the -Amazon Web Services suite of tools.

    - -

Although SQS is similar to other queuing systems like AMQP, it -uses a custom API and requires that you have an AWS account. -See http://aws.amazon.com/sqs/ for more details on how SQS works, -what the pricing schedule looks like and how to set up a queue.

    - -

    To use this plugin, you must:

    - - - - -

    The "consumer" identity must have the following permissions on the queue:

    - - - - -

    Typically, you should setup an IAM policy, create a user and apply the IAM policy to the user. -A sample policy is as follows:

    - -
    {
    -  "Statement": [
    -    {
    -      "Action": [
    -        "sqs:ChangeMessageVisibility",
    -        "sqs:ChangeMessageVisibilityBatch",
    -        "sqs:GetQueueAttributes",
    -        "sqs:GetQueueUrl",
    -        "sqs:ListQueues",
    -        "sqs:SendMessage",
    -        "sqs:SendMessageBatch"
    -      ],
    -      "Effect": "Allow",
    -      "Resource": [
    -        "arn:aws:sqs:us-east-1:123456789012:Logstash"
    -      ]
    -    }
    -  ]
    -} 
    -
    - -

    See http://aws.amazon.com/iam/ for more details on setting up AWS identities.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  sqs {
    -    access_key_id => ... # string (optional)
    -    add_field => ... # hash (optional), default: {}
    -    aws_credentials_file => ... # string (optional)
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    id_field => ... # string (optional)
    -    md5_field => ... # string (optional)
    -    queue => ... # string (required)
    -    region => ... # string, one of ["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"] (optional), default: "us-east-1"
    -    secret_access_key => ... # string (optional)
    -    sent_timestamp_field => ... # string (optional)
    -    tags => ... # array (optional)
    -    threads => ... # number (optional), default: 1
    -    type => ... # string (optional)
    -    use_ssl => ... # boolean (optional), default: true
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - access_key_id - - -

    - - - -

    This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order...
    -1. Static configuration, using access_key_id and secret_access_key params in logstash plugin config
    -2. External credentials file specified by aws_credentials_file
    -3. Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
    -4. Environment variables AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY
    -5. IAM Instance Profile (available when running inside EC2)

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - aws_credentials_file - - -

    - - - -

    Path to YAML file containing a hash of AWS credentials.
    -This file will only be loaded if access_key_id and -secret_access_key aren't set. The contents of the -file should look like this:

    - -
    :access_key_id: "12345"
    -:secret_access_key: "54321"
    -
    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - id_field - - -

    - - - -

    Name of the event field in which to store the SQS message ID

    - -

    - - md5_field - - -

    - - - -

    Name of the event field in which to store the SQS message MD5 checksum

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - queue (required setting) - - -

    - - - -

Name of the SQS queue to pull messages from. Note that this is just the name of the queue, not the URL or ARN.

    - -

    - - region - - -

    - - - -

    The AWS Region

    - -

    - - secret_access_key - - -

    - - - -

    The AWS Secret Access Key

    - -

    - - sent_timestamp_field - - -

    - - - -

    Name of the event field in which to store the SQS message Sent Timestamp

    - -
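As an illustrative sketch, the three metadata fields can be captured together (the target field names are arbitrary choices, and "Logstash" is a placeholder queue name):

```
input {
  sqs {
    queue                => "Logstash"
    id_field             => "sqs_message_id"
    md5_field            => "sqs_md5"
    sent_timestamp_field => "sqs_sent_at"
  }
}
```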

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - threads - - -

    - - - -

Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you -can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - -

    - - use_ssl - - -

    - - - -

    Should we require (true) or disable (false) using SSL for communicating with the AWS API
-The AWS SDK for Ruby defaults to SSL, so we preserve that default.

    - - -
    - -This is documentation from lib/logstash/inputs/sqs.rb diff --git a/docs/1.2.0.beta1/inputs/stdin.html b/docs/1.2.0.beta1/inputs/stdin.html deleted file mode 100644 index d70542aba..000000000 --- a/docs/1.2.0.beta1/inputs/stdin.html +++ /dev/null @@ -1,178 +0,0 @@ ---- -title: logstash docs for inputs/stdin -layout: content_right ---- -

    stdin

    -

    Milestone: 3

    - -

    Read events from standard input.

    - -

    By default, each event is assumed to be one line. If you -want to join lines, you'll want to use the multiline filter.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  stdin {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you -can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
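The type-matching behavior described above can be sketched as follows (the type value and grok pattern are placeholders):

```
input {
  stdin { type => "foobar" }
}
filter {
  # This filter only acts on events whose type is "foobar".
  grok {
    type    => "foobar"
    pattern => "%{GREEDYDATA:data}"
  }
}
```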
    - -This is documentation from lib/logstash/inputs/stdin.rb diff --git a/docs/1.2.0.beta1/inputs/stomp.html b/docs/1.2.0.beta1/inputs/stomp.html deleted file mode 100644 index d201e18ca..000000000 --- a/docs/1.2.0.beta1/inputs/stomp.html +++ /dev/null @@ -1,267 +0,0 @@ ---- -title: logstash docs for inputs/stomp -layout: content_right ---- -

    stomp

    -

    Milestone: 2

    - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  stomp {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    destination => ... # string (required)
    -    host => ... # string (required), default: "localhost"
    -    password => ... # password (optional), default: ""
    -    port => ... # number (optional), default: 61613
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -    user => ... # string (optional), default: ""
    -    vhost => ... # string (optional), default: nil
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Enable debugging output?

    - -

    - - destination (required setting) - - -

    - - - -

    The destination to read events from.

    - -

    Example: "/topic/logstash"

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host (required setting) - - -

    - - - -

    The address of the STOMP server.

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - password - - -

    - - - -

    The password to authenticate with.

    - -

    - - port - - -

    - - - -

The port to connect to on your STOMP server.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - -

    - - user - - -

    - - - -

    The username to authenticate with.

    - -

    - - vhost - - -

    - - - -

    The vhost to use

    - - -
    - -This is documentation from lib/logstash/inputs/stomp.rb diff --git a/docs/1.2.0.beta1/inputs/syslog.html b/docs/1.2.0.beta1/inputs/syslog.html deleted file mode 100644 index 23d8c0525..000000000 --- a/docs/1.2.0.beta1/inputs/syslog.html +++ /dev/null @@ -1,265 +0,0 @@ ---- -title: logstash docs for inputs/syslog -layout: content_right ---- -

    syslog

    -

    Milestone: 1

    - -

    Read syslog messages as events over the network.

    - -

    This input is a good choice if you already use syslog today. -It is also a good choice if you want to receive logs from -appliances and network devices where you cannot run your own -log collector.

    - -

    Of course, 'syslog' is a very muddy term. This input only supports RFC3164 -syslog with some small modifications. The date format is allowed to be -RFC3164 style or ISO8601. Otherwise the rest of the RFC3164 must be obeyed. -If you do not use RFC3164, do not use this input.

    - -

    Note: this input will start listeners on both TCP and UDP

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  syslog {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    facility_labels => ... # array (optional), default: ["kernel", "user-level", "mail", "system", "security/authorization", "syslogd", "line printer", "network news", "UUCP", "clock", "security/authorization", "FTP", "NTP", "log audit", "log alert", "clock", "local0", "local1", "local2", "local3", "local4", "local5", "local6", "local7"]
    -    host => ... # string (optional), default: "0.0.0.0"
    -    port => ... # number (optional), default: 514
    -    severity_labels => ... # array (optional), default: ["Emergency", "Alert", "Critical", "Error", "Warning", "Notice", "Informational", "Debug"]
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -    use_labels => ... # boolean (optional), default: true
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - facility_labels - - -

    - - - -

    Labels for facility levels -This comes from RFC3164.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host - - -

    - - - -

    The address to listen on

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - port - - -

    - - - -

    The port to listen on. Remember that ports less than 1024 (privileged -ports) may require root to use.

    - -

    - - severity_labels - - -

    - - - -

    Labels for severity levels -This comes from RFC3164.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you -can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - -

    - - use_labels - - -

    - - - -

    Use label parsing for severity and facility levels

    - - -
    - -This is documentation from lib/logstash/inputs/syslog.rb diff --git a/docs/1.2.0.beta1/inputs/tcp.html b/docs/1.2.0.beta1/inputs/tcp.html deleted file mode 100644 index 653f93779..000000000 --- a/docs/1.2.0.beta1/inputs/tcp.html +++ /dev/null @@ -1,338 +0,0 @@ ---- -title: logstash docs for inputs/tcp -layout: content_right ---- -

    tcp

    -

    Milestone: 2

    - -

    Read events over a TCP socket.

    - -

    Like stdin and file inputs, each event is assumed to be one line of text.

    - -

    Can either accept connections from clients or connect to a server, -depending on mode.

    - - -
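A sketch of both modes (the host and port values are placeholders):

```
# Accept connections from clients (the default):
input { tcp { port => 3333 mode => "server" } }

# Or connect out to a remote listener:
input { tcp { host => "192.168.0.10" port => 3333 mode => "client" } }
```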

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  tcp {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    data_timeout => ... # number (optional), default: -1
    -    debug => ... # boolean (optional), default: false
    -    host => ... # string (optional), default: "0.0.0.0"
    -    mode => ... # string, one of ["server", "client"] (optional), default: "server"
    -    port => ... # number (required)
    -    ssl_cacert => ... # a valid filesystem path (optional)
    -    ssl_cert => ... # a valid filesystem path (optional)
    -    ssl_enable => ... # boolean (optional), default: false
    -    ssl_key => ... # a valid filesystem path (optional)
    -    ssl_key_passphrase => ... # password (optional), default: nil
    -    ssl_verify => ... # boolean (optional), default: false
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - data_timeout - - -

    - - - -

    The 'read' timeout in seconds. If a particular tcp connection is idle for -more than this timeout period, we will assume it is dead and close it.

    - -

    If you never want to timeout, use -1.

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host - - -

    - - - -

    When mode is server, the address to listen on. -When mode is client, the address to connect to.

    - -

    - - message_format - DEPRECATED - -

    - - - -

If format is "json", an sprintf format string used to build the displayed @message (defaults to the raw JSON). sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - mode - - -

    - - - -

Mode to operate in. "server" listens for client connections; "client" connects to a server.

    - -

    - - port (required setting) - - -

    - - - -

    When mode is server, the port to listen on. -When mode is client, the port to connect to.

    - -

    - - ssl_cacert - - -

    - - - -

SSL CA certificate, chainfile, or CA path. The system CA path is automatically included.

    - -

    - - ssl_cert - - -

    - - - -

SSL certificate.

    - -

    - - ssl_enable - - -

    - - - -

Enable SSL (must be set for the other ssl_* options to take effect).

    - -

    - - ssl_key - - -

    - - - -

SSL key.

    - -

    - - ssl_key_passphrase - - -

    - - - -

SSL key passphrase.

    - -

    - - ssl_verify - - -

    - - - -

Verify the identity of the other end of the SSL connection against the CA. For input, this sets the @field.sslsubject to that of the client certificate.

    - -
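Putting several of these settings together, a hypothetical server-mode TCP listener with SSL enabled might look like this (the certificate paths are placeholders):

```
input {
  tcp {
    port       => 5000
    mode       => "server"
    ssl_enable => true                      # required for other ssl_* options
    ssl_cert   => "/etc/ssl/logstash.crt"   # placeholder path
    ssl_key    => "/etc/ssl/logstash.key"   # placeholder path
    ssl_verify => true                      # check clients against the CA
    ssl_cacert => "/etc/ssl/ca.crt"         # placeholder path
  }
}
```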

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/tcp.rb diff --git a/docs/1.2.0.beta1/inputs/twitter.html b/docs/1.2.0.beta1/inputs/twitter.html deleted file mode 100644 index 713b8f8b4..000000000 --- a/docs/1.2.0.beta1/inputs/twitter.html +++ /dev/null @@ -1,273 +0,0 @@ ---- -title: logstash docs for inputs/twitter -layout: content_right ---- -

    twitter

    -

    Milestone: 1

    - -

    Read events from the twitter streaming api.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  twitter {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    consumer_key => ... # string (required)
    -    consumer_secret => ... # password (required)
    -    debug => ... # boolean (optional), default: false
    -    keywords => ... # array (required)
    -    oauth_token => ... # string (required)
    -    oauth_token_secret => ... # password (required)
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - consumer_key (required setting) - - -

    - - - -

    Your twitter app's consumer key

    - -

    Don't know what this is? You need to create an "application" -on twitter, see this url: https://dev.twitter.com/apps/new

    - -

    - - consumer_secret (required setting) - - -

    - - - -

    Your twitter app's consumer secret

    - -

    If you don't have one of these, you can create one by -registering a new application with twitter: -https://dev.twitter.com/apps/new

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - keywords (required setting) - - -

    - - - -

    Any keywords to track in the twitter stream

    - -

    - - message_format - DEPRECATED - -

    - - - -

If format is "json", an sprintf format string used to build the displayed @message (defaults to the raw JSON). sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - oauth_token (required setting) - - -

    - - - -

    Your oauth token.

    - -

    To get this, login to twitter with whatever account you want, -then visit https://dev.twitter.com/apps

    - -

Click on your app (the one used with the consumer_key and consumer_secret settings), then at the bottom of the page click 'Create my access token', which will create an oauth token and secret bound to your account and that application.

    - -

    - - oauth_token_secret (required setting) - - -

    - - - -

    Your oauth token secret.

    - -

    To get this, login to twitter with whatever account you want, -then visit https://dev.twitter.com/apps

    - -

Click on your app (the one used with the consumer_key and consumer_secret settings), then at the bottom of the page click 'Create my access token', which will create an oauth token and secret bound to your account and that application.

    - -
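A hypothetical minimal configuration using credentials from a registered Twitter application might look like this (all credential values are placeholders):

```
input {
  twitter {
    consumer_key       => "YOUR_CONSUMER_KEY"        # placeholder
    consumer_secret    => "YOUR_CONSUMER_SECRET"     # placeholder
    oauth_token        => "YOUR_OAUTH_TOKEN"         # placeholder
    oauth_token_secret => "YOUR_OAUTH_TOKEN_SECRET"  # placeholder
    keywords           => ["logstash", "elasticsearch"]
    type               => "tweet"                    # hypothetical type name
  }
}
```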

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/twitter.rb diff --git a/docs/1.2.0.beta1/inputs/udp.html b/docs/1.2.0.beta1/inputs/udp.html deleted file mode 100644 index b4a10ebd5..000000000 --- a/docs/1.2.0.beta1/inputs/udp.html +++ /dev/null @@ -1,221 +0,0 @@ ---- -title: logstash docs for inputs/udp -layout: content_right ---- -

    udp

    -

    Milestone: 2

    - -

    Read messages as events over the network via udp.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  udp {
    -    add_field => ... # hash (optional), default: {}
    -    buffer_size => ... # number (optional), default: 8192
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    host => ... # string (optional), default: "0.0.0.0"
    -    port => ... # number (required)
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - buffer_size - - -

    - - - -

Buffer size, in bytes.

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host - - -

    - - - -

    The address to listen on

    - -

    - - message_format - DEPRECATED - -

    - - - -

If format is "json", an sprintf format string used to build the displayed @message (defaults to the raw JSON). sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - port (required setting) - - -

    - - - -

    The port to listen on. Remember that ports less than 1024 (privileged -ports) may require root or elevated privileges to use.

    - -
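As a sketch, listening for UDP datagrams on an unprivileged port might look like this (the port and type name are placeholders):

```
input {
  udp {
    port        => 5141          # > 1024, so no root privileges needed
    host        => "0.0.0.0"     # listen on all interfaces (the default)
    buffer_size => 8192          # the default buffer size
    type        => "udp-events"  # hypothetical type name
  }
}
```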

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/udp.rb diff --git a/docs/1.2.0.beta1/inputs/unix.html b/docs/1.2.0.beta1/inputs/unix.html deleted file mode 100644 index 1cfb46f96..000000000 --- a/docs/1.2.0.beta1/inputs/unix.html +++ /dev/null @@ -1,245 +0,0 @@ ---- -title: logstash docs for inputs/unix -layout: content_right ---- -

    unix

    -

    Milestone: 2

    - -

    Read events over a UNIX socket.

    - -

    Like stdin and file inputs, each event is assumed to be one line of text.

    - -

    Can either accept connections from clients or connect to a server, -depending on mode.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  unix {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    data_timeout => ... # number (optional), default: -1
    -    debug => ... # boolean (optional), default: false
    -    force_unlink => ... # boolean (optional), default: false
    -    mode => ... # string, one of ["server", "client"] (optional), default: "server"
    -    path => ... # string (required)
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - data_timeout - - -

    - - - -

    The 'read' timeout in seconds. If a particular connection is idle for -more than this timeout period, we will assume it is dead and close it.

    - -

    If you never want to timeout, use -1.

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - force_unlink - - -

    - - - -

    Remove socket file in case of EADDRINUSE failure

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - message_format - DEPRECATED - -

    - - - -

If format is "json", an sprintf format string used to build the displayed @message (defaults to the raw JSON). sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - mode - - -

    - - - -

Mode to operate in. "server" listens for client connections; "client" connects to a server.

    - -

    - - path (required setting) - - -

    - - - -

    When mode is server, the path to listen on. -When mode is client, the path to connect to.

    - -
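A hypothetical server-mode listener on a UNIX socket might look like this (the socket path is a placeholder):

```
input {
  unix {
    mode         => "server"
    path         => "/var/run/logstash.sock"  # placeholder socket path
    force_unlink => true                      # reclaim a stale socket file
                                              # on EADDRINUSE
  }
}
```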

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/unix.rb diff --git a/docs/1.2.0.beta1/inputs/varnishlog.html b/docs/1.2.0.beta1/inputs/varnishlog.html deleted file mode 100644 index 761e032e2..000000000 --- a/docs/1.2.0.beta1/inputs/varnishlog.html +++ /dev/null @@ -1,191 +0,0 @@ ---- -title: logstash docs for inputs/varnishlog -layout: content_right ---- -

    varnishlog

    -

    Milestone: 1

    - -

Read from Varnish Cache's shared memory log.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  varnishlog {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    tags => ... # array (optional)
    -    threads => ... # number (optional), default: 1
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - message_format - DEPRECATED - -

    - - - -

If format is "json", an sprintf format string used to build the displayed @message (defaults to the raw JSON). sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - threads - - -

    - - - -

Set this to the number of threads you want this input to spawn. This is the same as declaring the input multiple times.

    - -
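As a hypothetical sketch, spawning two reader threads (the type name is a placeholder):

```
input {
  varnishlog {
    threads => 2          # equivalent to declaring this input twice
    type    => "varnish"  # hypothetical type name
  }
}
```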

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/varnishlog.rb diff --git a/docs/1.2.0.beta1/inputs/websocket.html b/docs/1.2.0.beta1/inputs/websocket.html deleted file mode 100644 index 21da97c25..000000000 --- a/docs/1.2.0.beta1/inputs/websocket.html +++ /dev/null @@ -1,212 +0,0 @@ ---- -title: logstash docs for inputs/websocket -layout: content_right ---- -

    websocket

    -

    Milestone: 1

    - -

    Read events over the websocket protocol.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  websocket {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    mode => ... # string, one of ["server", "client"] (optional), default: "client"
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -    url => ... # string (optional), default: "0.0.0.0"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - message_format - DEPRECATED - -

    - - - -

If format is "json", an sprintf format string used to build the displayed @message (defaults to the raw JSON). sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - mode - - -

    - - - -

    Operate as a client or a server.

    - -

    Client mode causes this plugin to connect as a websocket client -to the URL given. It expects to receive events as websocket messages.

    - -

    (NOT IMPLEMENTED YET) Server mode causes this plugin to listen on -the given URL for websocket clients. It expects to receive events -as websocket messages from these clients.

    - -
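Since only client mode is implemented, a sketch would connect out to an existing websocket server (the URL is a placeholder):

```
input {
  websocket {
    mode => "client"
    url  => "ws://example.com:3232/"  # placeholder URL of a websocket server
  }
}
```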

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - -

    - - url - - -

    - - - -

    The url to connect to or serve from

    - - -
    - -This is documentation from lib/logstash/inputs/websocket.rb diff --git a/docs/1.2.0.beta1/inputs/wmi.html b/docs/1.2.0.beta1/inputs/wmi.html deleted file mode 100644 index efbb3a8ae..000000000 --- a/docs/1.2.0.beta1/inputs/wmi.html +++ /dev/null @@ -1,221 +0,0 @@ ---- -title: logstash docs for inputs/wmi -layout: content_right ---- -

    wmi

    -

    Milestone: 1

    - -

Collect data from a WMI query.

    - -

    This is useful for collecting performance metrics and other data -which is accessible via WMI on a Windows host

    - -

    Example:

    - -
    input {
    -  wmi {
    -    query => "select * from Win32_Process"
    -    interval => 10
    -  }
    -  wmi {
    -    query => "select PercentProcessorTime from Win32_PerfFormattedData_PerfOS_Processor where name = '_Total'"
    -  }
    -}
    -
    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  wmi {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    interval => ... # number (optional), default: 10
    -    query => ... # string (required)
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - interval - - -

    - - - -

Polling interval, in seconds.

    - -

    - - message_format - DEPRECATED - -

    - - - -

If format is "json", an sprintf format string used to build the displayed @message (defaults to the raw JSON). sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - query (required setting) - - -

    - - - -

    WMI query

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/wmi.rb diff --git a/docs/1.2.0.beta1/inputs/xmpp.html b/docs/1.2.0.beta1/inputs/xmpp.html deleted file mode 100644 index 4edd6b087..000000000 --- a/docs/1.2.0.beta1/inputs/xmpp.html +++ /dev/null @@ -1,242 +0,0 @@ ---- -title: logstash docs for inputs/xmpp -layout: content_right ---- -

    xmpp

    -

    Milestone: 2

    - -

    This input allows you to receive events over XMPP/Jabber.

    - -

This plugin can be used to accept events from humans or applications over XMPP, or for PubSub or general message passing from logstash to logstash.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  xmpp {
    -    add_field => ... # hash (optional), default: {}
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    host => ... # string (optional)
    -    password => ... # password (required)
    -    rooms => ... # array (optional)
    -    tags => ... # array (optional)
    -    type => ... # string (optional)
    -    user => ... # string (required)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

Set to true to enable greater debugging in XMPP. Useful for debugging network/authentication errors.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host - - -

    - - - -

The xmpp server to connect to. This is optional. If you omit this setting, the host portion of the user/identity is used (foo.com for user@foo.com).

    - -

    - - message_format - DEPRECATED - -

    - - - -

If format is "json", an sprintf format string used to build the displayed @message (defaults to the raw JSON). sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - password (required setting) - - -

    - - - -

    The xmpp password for the user/identity.

    - -

    - - rooms - - -

    - - - -

If MUC (multi-user chat) is required, give the name of the room you want to join: room@conference.domain/nick

    - -
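A hypothetical configuration that joins a chat room to receive events (account, password, and room are all placeholders):

```
input {
  xmpp {
    user     => "logstash@example.com"                   # placeholder account
    password => "secret"                                 # placeholder
    rooms    => ["ops@conference.example.com/logstash"]  # placeholder MUC room
  }
}
```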

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

    The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - -

    - - user (required setting) - - -

    - - - -

    The user or resource ID, like foo@example.com.

    - - -
    - -This is documentation from lib/logstash/inputs/xmpp.rb diff --git a/docs/1.2.0.beta1/inputs/zenoss.html b/docs/1.2.0.beta1/inputs/zenoss.html deleted file mode 100644 index b04b6fcaf..000000000 --- a/docs/1.2.0.beta1/inputs/zenoss.html +++ /dev/null @@ -1,452 +0,0 @@ ---- -title: logstash docs for inputs/zenoss -layout: content_right ---- -

    zenoss

    -

    Milestone: 1

    - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  zenoss {
    -    ack => ... # boolean (optional), default: true
    -    add_field => ... # hash (optional), default: {}
    -    arguments => ... # array (optional), default: {}
    -    auto_delete => ... # boolean (optional), default: true
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    durable => ... # boolean (optional), default: false
    -    exchange => ... # string (optional), default: "zenoss.zenevents"
    -    exclusive => ... # boolean (optional), default: true
    -    host => ... # string (optional), default: "localhost"
    -    key => ... # string (optional), default: "zenoss.zenevent.#"
    -    passive => ... # boolean (optional), default: false
    -    password => ... # password (optional), default: "zenoss"
    -    port => ... # number (optional), default: 5672
    -    prefetch_count => ... # number (optional), default: 256
    -    queue => ... # string (optional), default: ""
    -    ssl => ... # boolean (optional), default: false
    -    tags => ... # array (optional)
    -    threads => ... # number (optional), default: 1
    -    type => ... # string (optional)
    -    user => ... # string (optional), default: "zenoss"
    -    verify_ssl => ... # boolean (optional), default: false
    -    vhost => ... # string (optional), default: "/zenoss"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - ack - - -

    - - - - - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - arguments - - -

    - - - - - -

    - - auto_delete - - -

    - - - - - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - durable - - -

    - - - - - -

    - - exchange - - -

    - - - -

The name of the exchange to bind the queue to. This is analogous to the 'rabbitmq -output' config 'name'

    - -

    - - exclusive - - -

    - - - - - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - host - - -

    - - - -

    Your rabbitmq server address

    - -

    - - key - - -

    - - - -

    The routing key to use. This is only valid for direct or fanout exchanges

    - - - - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - passive - - -

    - - - - - -

    - - password - - -

    - - - -

    Your rabbitmq password

    - -

    - - port - - -

    - - - - - -

    - - prefetch_count - - -

    - - - - - -

    - - queue - - -

    - - - - - -

    - - ssl - - -

    - - - - - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - threads - - -

    - - - - - -

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you -can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - -

    - - user - - -

    - - - -

    Your rabbitmq username

    - -

    - - verify_ssl - - -

    - - - - - -

    - - vhost - - -

    - - - -

    The vhost to use. If you don't know what this is, leave the default.

    - - -
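Since every setting above has a default, a minimal sketch only needs to override what differs from your environment. The host and credentials below are placeholders:

```
input {
  zenoss {
    # placeholders -- point these at your own Zenoss RabbitMQ broker
    host     => "rabbitmq.example.com"
    user     => "zenoss"
    password => "zenoss"
    type     => "zenoss"
  }
}
```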
    - -This is documentation from lib/logstash/inputs/zenoss.rb diff --git a/docs/1.2.0.beta1/inputs/zeromq.html b/docs/1.2.0.beta1/inputs/zeromq.html deleted file mode 100644 index 33febad16..000000000 --- a/docs/1.2.0.beta1/inputs/zeromq.html +++ /dev/null @@ -1,305 +0,0 @@ ---- -title: logstash docs for inputs/zeromq -layout: content_right ---- -

    zeromq

    -

    Milestone: 2

    - -

    Read events over a 0MQ SUB socket.

    - -

    You need to have the 0mq 2.1.x library installed to be able to use -this input plugin.

    - -

    The default settings will create a subscriber binding to tcp://127.0.0.1:2120 -waiting for connecting publishers.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    input {
    -  zeromq {
    -    add_field => ... # hash (optional), default: {}
    -    address => ... # array (optional), default: ["tcp://*:2120"]
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    mode => ... # string, one of ["server", "client"] (optional), default: "server"
    -    sender => ... # string (optional)
    -    sockopt => ... # hash (optional)
    -    tags => ... # array (optional)
    -    topic => ... # array (optional)
    -    topology => ... # string, one of ["pushpull", "pubsub", "pair"] (required)
    -    type => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - add_field - - -

    - - - -

    Add a field to an event

    - -

    - - address - - -

    - - - -

0mq socket address to connect or bind -Please note that inproc:// will not work with logstash -as we use a context per thread. -By default, inputs bind/listen -and outputs connect

    - -

    - - charset - DEPRECATED - -

    - - - -

    The character encoding used in this input. Examples include "UTF-8" -and "cp1252"

    - -

    This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.

    - -

    This only affects "plain" format logs since json is UTF-8 already.

    - -

    - - codec - - -

    - - - -

    The codec used for input data

    - -

    - - debug - - -

    - - - -

    Set this to true to enable debugging on an input.

    - -

    - - format - DEPRECATED - -

    - - - -

    The format of input data (plain, json, json_event)

    - -

    - - message_format - DEPRECATED - -

    - - - -

    If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}

    - -

    If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.

    - -

    - - mode - - -

    - - - -

    mode -server mode binds/listens -client mode connects

    - -

    - - sender - - -

    - - - -

    sender -overrides the sender to -set the source of the event -default is "zmq+topology://type/"

    - -

    - - sockopt - - -

    - - - -

    0mq socket options -This exposes zmq_setsockopt -for advanced tuning -see http://api.zeromq.org/2-1:zmq-setsockopt for details

    - -

    This is where you would set values like: -ZMQ::HWM - high water mark -ZMQ::IDENTITY - named queues -ZMQ::SWAP_SIZE - space for disk overflow

    - -

    example: sockopt => ["ZMQ::HWM", 50, "ZMQ::IDENTITY", "mynamedqueue"]

    - -

    - - tags - - -

    - - - -

    Add any number of arbitrary tags to your event.

    - -

    This can help with processing later.

    - -

    - - topic - - -

    - - - -

    0mq topic -This is used for the 'pubsub' topology only -On inputs, this allows you to filter messages by topic -On outputs, this allows you to tag a message for routing -NOTE: ZeroMQ does subscriber side filtering. -NOTE: All topics have an implicit wildcard at the end -You can specify multiple topics here

    - -

    - - topology (required setting) - - -

    - - - -

0mq topology -The default logstash topologies work as follows: -* pushpull - inputs are pull, outputs are push -* pubsub - inputs are subscribers, outputs are publishers -* pair - inputs are clients, outputs are servers

    - -

    If the predefined topology flows don't work for you, -you can change the 'mode' setting -TODO (lusis) add req/rep MAYBE -TODO (lusis) add router/dealer

    - -
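As a sketch of the pubsub topology described above, a subscriber that connects out to a publisher, filters on two topic prefixes, and raises the high water mark might look like this (the address and topic names are placeholders):

```
input {
  zeromq {
    topology => "pubsub"
    mode     => "client"                            # connect out instead of binding
    address  => ["tcp://logs.example.com:2120"]     # placeholder publisher address
    topic    => ["nginx", "haproxy"]                # subscriber-side prefix filtering
    sockopt  => ["ZMQ::HWM", 50]                    # as in the sockopt example above
  }
}
```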

    - - type - - -

    - - - -

    Add a 'type' field to all events handled by this input.

    - -

    Types are used mainly for filter activation.

    - -

    If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.

    - -

The type is also stored as part of the event itself, so you -can also use the type to search for it in the web interface.

    - -

    If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.

    - - -
    - -This is documentation from lib/logstash/inputs/zeromq.rb diff --git a/docs/1.2.0.beta1/learn.md b/docs/1.2.0.beta1/learn.md deleted file mode 100644 index d900b42f3..000000000 --- a/docs/1.2.0.beta1/learn.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Learn - logstash -layout: content_right ---- -# What is logstash? - -logstash is a tool for managing your logs. - -It helps you take logs and other event data from your systems and move it into -a central place. logstash is open source and completely free. You can find -support on the mailing list and on IRC. - -For an overview of logstash and why you would use it, you should watch the -presentation I gave at CarolinaCon 2011: -[video here](http://carolinacon.blip.tv/file/5105901/). This presentation covers -logstash, how you can use it, some alternatives, logging best practices, -parsing tools, etc. Video also below: - - - - -The slides are available online here: [slides](http://semicomplete.com/presentations/logstash-puppetconf-2012/). - -## Getting Help - -There's [documentation](.) here on this site. If that isn't sufficient, you can -email the mailing list (logstash-users@googlegroups.com). Further, there is also -an IRC channel - #logstash on irc.freenode.org. - -If you find a bug or have a feature request, file them -on . (Honestly though, if you prefer email or irc -for such things, that works for me, too.) - -## Download It - -[Download logstash-1.2.0.beta1](https://logstash.objects.dreamhost.com/release/logstash-1.2.0.beta1-flatjar.jar) - -## What's next? - -Try the [standalone logstash guide](tutorials/getting-started-simple) for a simple -real-world example getting started using logstash. 
diff --git a/docs/1.2.0.beta1/life-of-an-event.md b/docs/1.2.0.beta1/life-of-an-event.md deleted file mode 100644 index 90865019c..000000000 --- a/docs/1.2.0.beta1/life-of-an-event.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -title: the life of an event - logstash -layout: content_right ---- -# the life of an event - -The logstash agent is an event pipeline. - -The logstash agent is 3 parts: inputs -> filters -> outputs. Inputs generate -events, filters modify them, outputs ship them elsewhere. - -Internal to logstash, events are passed from each phase using internal queues. -It is implemented with a 'SizedQueue' in Ruby. SizedQueue allows a bounded -maximum of items in the queue such that any writes to the queue will block if -the queue is full at maximum capacity. - -Logstash sets each queue size to 20. This means only 20 events can be pending -into the next phase - this helps reduce any data loss and in general avoids -logstash trying to act as a data storage system. These internal queues are not -for storing messages long-term. - -Starting at outputs, here's what happens when a queue fills up. - -If an output is failing, the output thread will wait until this output is -healthy again and able to successfully send the message. Therefore, the output -queue will stop being read from by this output and will eventually fill up with -events and cause write blocks. - -A full output queue means filters will block trying to write to the output -queue. Because filters will be stuck, blocked writing to the output queue, they -will stop reading from the filter queue which will eventually cause the filter -queue (input -> filter) to fill up. - -A full filter queue will cause inputs to block when writing to the filters. -This will cause each input to block, causing each input to stop processing new -data from wherever that input is getting new events. 
- -In ideal circumstances, this will behave similarly to when the tcp window -closes to 0, no new data is sent because the receiver hasn't finished -processing the current queue of data. - -## Thread Model - -The thread model in logstash is currently: - - input threads | filter threads | output threads - -Filters are optional, so you will have this model if you have no filters defined: - - input threads | output threads - -Each input runs in a thread by itself. This allows busier inputs to not be -blocked by slower ones, etc. It also allows for easier containment of scope -because each input has a thread. - -The filter thread model is a 'worker' model where each worker receives an event -and applies all filters, in order, before emitting that to the output queue. -This allows scalability across CPUs because many filters are CPU intensive -(provided we have thread safety). Currently, logstash forces the number -of filter worker threads to be 1, but this will be tunable in the future once -we analyze the thread safety of each filter. - -The output thread model is one thread per output. Each output has its own queue -receiving events. This is implemented in logstash with LogStash::MultiQueue. - -## Consequences and Expectations - -Small queue sizes mean that logstash simply blocks and stalls safely during -times of load or other temporary pipeline problems. The alternative is -unlimited queues which grow unbounded and eventually exceed memory causing a -crash which loses all of those messages. - -At a minimum, logstash will have probably 3 threads (2 if you have no filters). -One input, one filter worker, and one output thread each. - -If you see logstash using multiple CPUs, this is likely why. If you want to -know more about what each thread is doing, you should read this: -. - -Threads in Java have names, and you can use jstack and top to figure out who is -using what resources. The URL above will help you learn how to do this. 
diff --git a/docs/1.2.0.beta1/logging-tool-comparisons.md b/docs/1.2.0.beta1/logging-tool-comparisons.md deleted file mode 100644 index a39fea054..000000000 --- a/docs/1.2.0.beta1/logging-tool-comparisons.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -title: Logging tools comparisons - logstash -layout: content_right ---- -# Logging tools comparison - -The information below is provided as "best effort" and is not strictly intended -as a complete source of truth. If the information below is unclear or incorrect, please -email the logstash-users list (or send a pull request with the fix) :) - -Where feasible, this document will also provide information on how you can use -logstash with these other projects. - -# logstash - -Primary goal: Make log/event data and analytics accessible. - -Overview: Where your logs come from, how you store them, or what you do with -them is up to you. Logstash exists to help make such actions easier and faster. - -It provides you a simple event pipeline for taking events and logs from any -input, manipulating them with filters, and sending them to any output. Inputs -can be files, network, message brokers, etc. Filters are date and string -parsers, grep-like, etc. Outputs are data stores (elasticsearch, mongodb, etc), -message systems (rabbitmq, stomp, etc), network (tcp, syslog), etc. - -It also provides a web interface for doing search and analytics on your -logs. - -# graylog2 - -[http://graylog2.org/](http://graylog2.org) - -_Overview to be written_ - -You can use graylog2 with logstash by using the 'gelf' output to send logstash -events to a graylog2 server. This gives you logstash's excellent input and -filter features while still being able to use the graylog2 web interface. 
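Wiring the gelf output mentioned above into a config is a one-liner; this is a sketch, and the hostname is a placeholder for your own graylog2 server:

```
output {
  gelf {
    host => "graylog2.example.com"  # placeholder -- your graylog2 server
  }
}
```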
- -# whoops - -[whoops site](http://www.whoopsapp.com/) - -_Overview to be written_ - -A logstash output to whoops is coming soon - - -# flume - -[flume site](https://github.com/cloudera/flume/wiki) - -Flume is primarily a transport system aimed at reliably copying logs from -application servers to HDFS. - -You can use it with logstash by having a syslog sink configured to shoot logs -at a logstash syslog input. - -# scribe - -_Overview to be written_ diff --git a/docs/1.2.0.beta1/outputs/amqp.html b/docs/1.2.0.beta1/outputs/amqp.html deleted file mode 100644 index 25484c540..000000000 --- a/docs/1.2.0.beta1/outputs/amqp.html +++ /dev/null @@ -1,287 +0,0 @@ ---- -title: logstash docs for outputs/amqp -layout: content_right ---- -

    amqp

    -

    Milestone: 2

    - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  amqp {
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    durable => ... # boolean (optional), default: true
    -    exchange => ... # string (required)
    -    exchange_type => ... # string, one of ["fanout", "direct", "topic"] (required)
    -    host => ... # string (required)
    -    key => ... # string (optional), default: "logstash"
    -    password => ... # password (optional), default: "guest"
    -    persistent => ... # boolean (optional), default: true
    -    port => ... # number (optional), default: 5672
    -    ssl => ... # boolean (optional), default: false
    -    user => ... # string (optional), default: "guest"
    -    verify_ssl => ... # boolean (optional), default: false
    -    vhost => ... # string (optional), default: "/"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - debug - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - - - -

    - - durable - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - - - -

    - - exchange (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - - - -

    - - exchange_type (required setting) - - -

    - -
      -
    • Value can be any of: "fanout", "direct", "topic"
    • -
    • There is no default value for this setting.
    • -
    - - - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - - - -

    - - key - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash"
    • -
    - - - -

    - - password - - -

    - -
      -
    • Value type is password
    • -
    • Default value is "guest"
    • -
    - - - -

    - - persistent - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - - - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 5672
    • -
    - - - -

    - - ssl - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - - - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - -

    - - user - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "guest"
    • -
    - - - -

    - - verify_ssl - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - - - -

    - - vhost - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "/"
    • -
    - - - - -
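Putting the required settings together, a minimal sketch looks like this (the host and exchange names are placeholders; `key` is omitted since it is ignored by fanout exchanges):

```
output {
  amqp {
    host          => "rabbitmq.example.com"  # placeholder broker address
    exchange      => "logstash"              # placeholder exchange name
    exchange_type => "fanout"
  }
}
```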
    - -This is documentation from lib/logstash/outputs/amqp.rb diff --git a/docs/1.2.0.beta1/outputs/boundary.html b/docs/1.2.0.beta1/outputs/boundary.html deleted file mode 100644 index 5a7a72aa0..000000000 --- a/docs/1.2.0.beta1/outputs/boundary.html +++ /dev/null @@ -1,235 +0,0 @@ ---- -title: logstash docs for outputs/boundary -layout: content_right ---- -

    boundary

    -

    Milestone: 1

    - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  boundary {
    -    api_key => ... # string (required)
    -    auto => ... # boolean (optional), default: false
    -    bsubtype => ... # string (optional)
    -    btags => ... # array (optional)
    -    btype => ... # string (optional)
    -    codec => ... # codec (optional), default: "plain"
    -    end_time => ... # string (optional)
    -    org_id => ... # string (required)
    -    start_time => ... # string (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - api_key (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    This output lets you send annotations to -Boundary based on Logstash events

    - -

    Note that since Logstash maintains no state -these will be one-shot events

    - -

    By default the start and stop time will be -the event timestamp

    - -

    Your Boundary API key

    - -

    - - auto - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Auto -If set to true, logstash will try to pull boundary fields out -of the event. Any field explicitly set by config options will -override these. -['type', 'subtype', 'creationtime', 'endtime', 'links', 'tags', 'loc']

    - -

    - - bsubtype - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Sub-Type

    - -

    - - btags - - -

    - -
      -
    • Value type is array
    • -
    • There is no default value for this setting.
    • -
    - -

    Tags -Set any custom tags for this event -Default are the Logstash tags if any

    - -

    - - btype - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Type

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - end_time - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    End time -Override the stop time -Note that Boundary requires this to be seconds since epoch -If overriding, it is your responsibility to type this correctly -By default this is set to event.unix_timestamp.to_i

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - org_id (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Your Boundary Org ID

    - -

    - - start_time - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Start time -Override the start time -Note that Boundary requires this to be seconds since epoch -If overriding, it is your responsibility to type this correctly -By default this is set to event.unix_timestamp.to_i

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
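A minimal sketch using the two required settings plus `auto` to pull annotation fields from the event itself (the credential values are placeholders):

```
output {
  boundary {
    api_key => "YOUR_API_KEY"  # placeholder -- your Boundary API key
    org_id  => "YOUR_ORG_ID"   # placeholder -- your Boundary Org ID
    auto    => true            # pull type/subtype/tags etc. from event fields
  }
}
```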
    - -This is documentation from lib/logstash/outputs/boundary.rb diff --git a/docs/1.2.0.beta1/outputs/circonus.html b/docs/1.2.0.beta1/outputs/circonus.html deleted file mode 100644 index 43cdb0151..000000000 --- a/docs/1.2.0.beta1/outputs/circonus.html +++ /dev/null @@ -1,155 +0,0 @@ ---- -title: logstash docs for outputs/circonus -layout: content_right ---- -

    circonus

    -

    Milestone: 1

    - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  circonus {
    -    annotation => ... # hash (required), default: {}
    -    api_token => ... # string (required)
    -    app_name => ... # string (required)
    -    codec => ... # codec (optional), default: "plain"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - annotation (required setting) - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {}
    • -
    - -

Annotations -Registers an annotation with Circonus -The only required fields are title and description. -start and stop will be set to event.unix_timestamp -You can add any other optional annotation values as well. -All values will be passed through event.sprintf

    - -

    Example: - ["title":"Logstash event", "description":"Logstash event for %{host}"] -or - ["title":"Logstash event", "description":"Logstash event for %{host}", "parent_id", "1"]

    - -

    - - api_token (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    This output lets you send annotations to -Circonus based on Logstash events

    - -

    Your Circonus API Token

    - -

    - - app_name (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Your Circonus App name -This will be passed through event.sprintf -so variables are allowed here:

    - -

    Example: - app_name => "%{myappname}"

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
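A sketch of the three required settings together; the token and app name are placeholders, and the annotation hash is written in the key/value array form used elsewhere in these docs:

```
output {
  circonus {
    api_token  => "YOUR_API_TOKEN"   # placeholder -- your Circonus API token
    app_name   => "logstash"         # placeholder app name
    annotation => ["title", "Logstash event",
                   "description", "Logstash event for %{host}"]
  }
}
```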
    - -This is documentation from lib/logstash/outputs/circonus.rb diff --git a/docs/1.2.0.beta1/outputs/cloudwatch.html b/docs/1.2.0.beta1/outputs/cloudwatch.html deleted file mode 100644 index 6acc5f36f..000000000 --- a/docs/1.2.0.beta1/outputs/cloudwatch.html +++ /dev/null @@ -1,457 +0,0 @@ ---- -title: logstash docs for outputs/cloudwatch -layout: content_right ---- -

    cloudwatch

    -

    Milestone: 1

    - -

    This output lets you aggregate and send metric data to AWS CloudWatch

    - -

    Summary:

    - -

    This plugin is intended to be used on a logstash indexer agent (but that -is not the only way, see below.) In the intended scenario, one cloudwatch -output plugin is configured, on the logstash indexer node, with just AWS API -credentials, and possibly a region and/or a namespace. The output looks -for fields present in events, and when it finds them, it uses them to -calculate aggregate statistics. If the metricname option is set in this -output, then any events which pass through it will be aggregated & sent to -CloudWatch, but that is not recommended. The intended use is to NOT set the -metricname option here, and instead to add a CW_metricname field (and other -fields) to only the events you want sent to CloudWatch.

    - -

    When events pass through this output they are queued for background -aggregation and sending, which happens every minute by default. The -queue has a maximum size, and when it is full aggregated statistics will be -sent to CloudWatch ahead of schedule. Whenever this happens a warning -message is written to logstash's log. If you see this you should increase -the queue_size configuration option to avoid the extra API calls. The queue -is emptied every time we send data to CloudWatch.

    - -

    Note: when logstash is stopped the queue is destroyed before it can be processed. -This is a known limitation of logstash and will hopefully be addressed in a -future version.

    - -

    Details:

    - -

    There are two ways to configure this plugin, and they can be used in -combination: event fields & per-output defaults

    - -

    Event Field configuration... -You add fields to your events in inputs & filters and this output reads -those fields to aggregate events. The names of the fields read are -configurable via the field_* options.

    - -

    Per-output defaults... -You set universal defaults in this output plugin's configuration, and -if an event does not have a field for that option then the default is -used.

    - -

    Notice, the event fields take precedence over the per-output defaults.

    - -

    At a minimum events must have a "metric name" to be sent to CloudWatch. -This can be achieved either by providing a default here OR by adding a -CW_metricname field. By default, if no other configuration is provided -besides a metric name, then events will be counted (Unit: Count, Value: 1) -by their metric name (either a default or from their CW_metricname field)

    - -

    Other fields which can be added to events to modify the behavior of this -plugin are, CW_namespace, CW_unit, CW_value, and -CW_dimensions. All of these field names are configurable in -this output. You can also set per-output defaults for any of them. -See below for details.

    - -
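The intended pattern above -- add CW_* fields to the events you want shipped and keep the output itself minimal -- might be sketched like this (the metric name is illustrative, and the field is added unconditionally here for brevity):

```
filter {
  mutate {
    # illustrative: count these events under one metric name;
    # with only a metric name set, events are counted (Unit: Count, Value: 1)
    add_field => ["CW_metricname", "EventCount"]
  }
}
output {
  cloudwatch {
    # AWS credentials are resolved from config, env vars, or IAM profile
    region => "us-east-1"
  }
}
```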

    Read more about AWS CloudWatch, -and the specific of API endpoint this output uses, -PutMetricData

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  cloudwatch {
    -    access_key_id => ... # string (optional)
    -    aws_credentials_file => ... # string (optional)
    -    codec => ... # codec (optional), default: "plain"
    -    dimensions => ... # hash (optional)
    -    field_dimensions => ... # string (optional), default: "CW_dimensions"
    -    field_metricname => ... # string (optional), default: "CW_metricname"
    -    field_namespace => ... # string (optional), default: "CW_namespace"
    -    field_unit => ... # string (optional), default: "CW_unit"
    -    field_value => ... # string (optional), default: "CW_value"
    -    metricname => ... # string (optional)
    -    namespace => ... # string (optional), default: "Logstash"
    -    queue_size => ... # number (optional), default: 10000
    -    region => ... # string, one of ["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"] (optional), default: "us-east-1"
    -    secret_access_key => ... # string (optional)
    -    timeframe => ... # string (optional), default: "1m"
    -    unit => ... # string, one of ["Seconds", "Microseconds", "Milliseconds", "Bytes", "Kilobytes", "Megabytes", "Gigabytes", "Terabytes", "Bits", "Kilobits", "Megabits", "Gigabits", "Terabits", "Percent", "Count", "Bytes/Second", "Kilobytes/Second", "Megabytes/Second", "Gigabytes/Second", "Terabytes/Second", "Bits/Second", "Kilobits/Second", "Megabits/Second", "Gigabits/Second", "Terabits/Second", "Count/Second", "None"] (optional), default: "Count"
    -    use_ssl => ... # boolean (optional), default: true
    -    value => ... # string (optional), default: "1"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - access_key_id - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order...
    -1. Static configuration, using access_key_id and secret_access_key params in logstash plugin config
    -2. External credentials file specified by aws_credentials_file
    -3. Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
    -4. Environment variables AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY
    -5. IAM Instance Profile (available when running inside EC2)
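For option 1, static configuration, a sketch might look like the following. The key values are hypothetical placeholders, not real credentials; where possible, prefer an IAM instance profile so no keys appear in the config file at all.

```
output {
  cloudwatch {
    # Hypothetical placeholder credentials -- substitute your own,
    # or omit both and rely on an IAM instance profile.
    access_key_id => "AKIAEXAMPLE"
    secret_access_key => "examplesecretkey"
    region => "us-east-1"
  }
}
```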

    - -

    - - aws_credentials_file - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Path to YAML file containing a hash of AWS credentials.
    -This file will only be loaded if access_key_id and -secret_access_key aren't set. The contents of the -file should look like this:

    - -
    :access_key_id: "12345"
    -:secret_access_key: "54321"
    -
    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - dimensions - - -

    - -
      -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

    The default dimensions [ name, value, ... ] to use for events which do not have a CW_dimensions field

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - field_dimensions - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "CW_dimensions"
    • -
    - -

    The name of the field used to set the dimensions on an event metric
    -The field named here, if present in an event, must have an array of -one or more key & value pairs, for example...

    - -
    add_field => [ "CW_dimensions", "Environment", "CW_dimensions", "prod" ]
    -
    - -

    or, equivalently...

    - -
    add_field => [ "CW_dimensions", "Environment" ]
    -add_field => [ "CW_dimensions", "prod" ]
    -
    - -

    - - field_metricname - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "CW_metricname"
    • -
    - -

    The name of the field used to set the metric name on an event
-The author of this plugin recommends adding this field to events in inputs & -filters rather than using the per-output default setting, so that one output -plugin on your logstash indexer can serve all events (whose fields were, of -course, set on your logstash shippers.)

    - -

    - - field_namespace - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "CW_namespace"
    • -
    - -

    The name of the field used to set a different namespace per event
-Note: Only one namespace can be sent to CloudWatch per API call, -so setting different namespaces will increase the number of API calls, -and those calls cost money.

    - -

    - - field_unit - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "CW_unit"
    • -
    - -

    The name of the field used to set the unit on an event metric

    - -

    - - field_value - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "CW_value"
    • -
    - -

    The name of the field used to set the value (float) on an event metric

    - -

    - - metricname - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The default metric name to use for events which do not have a CW_metricname field.
    -Beware: If this is provided then all events which pass through this output will be aggregated and -sent to CloudWatch, so use this carefully. Furthermore, when providing this option, you -will probably want to also restrict events from passing through this output using event -type, tag, and field matching

    - -
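One way to restrict which events reach this output, as the warning above suggests, is a conditional around the output block. This is a sketch that assumes a logstash version supporting conditionals in the output section; the type and tag values are hypothetical.

```
output {
  # Only aggregate apache events tagged for monitoring (hypothetical
  # type and tag values shown for illustration).
  if [type] == "apache" and "monitor" in [tags] {
    cloudwatch {
      metricname => "ApacheHits"
    }
  }
}
```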

    - - namespace - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "Logstash"
    • -
    - -

    The default namespace to use for events which do not have a CW_namespace field

    - -

    - - queue_size - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 10000
    • -
    - -

    How many events to queue before forcing a call to the CloudWatch API ahead of timeframe schedule
    -Set this to the number of events-per-timeframe you will be sending to CloudWatch to avoid extra API calls

    - -

    - - region - - -

    - -
      -
    • Value can be any of: "us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"
    • -
    • Default value is "us-east-1"
    • -
    - -

    The AWS Region

    - -

    - - secret_access_key - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The AWS Secret Access Key

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - timeframe - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "1m"
    • -
    - -

How often to send data to CloudWatch
    -This does not affect the event timestamps, events will always have their -actual timestamp (to-the-minute) sent to CloudWatch.

    - -

    We only call the API if there is data to send.

    - -

    See the Rufus Scheduler docs for an explanation of allowed values

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - -

    - - unit - - -

    - -
      -
    • Value can be any of: "Seconds", "Microseconds", "Milliseconds", "Bytes", "Kilobytes", "Megabytes", "Gigabytes", "Terabytes", "Bits", "Kilobits", "Megabits", "Gigabits", "Terabits", "Percent", "Count", "Bytes/Second", "Kilobytes/Second", "Megabytes/Second", "Gigabytes/Second", "Terabytes/Second", "Bits/Second", "Kilobits/Second", "Megabits/Second", "Gigabits/Second", "Terabits/Second", "Count/Second", "None"
    • -
    • Default value is "Count"
    • -
    - -

    The default unit to use for events which do not have a CW_unit field
    -If you set this option you should probably set the "value" option along with it

    - -

    - - use_ssl - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - -

    Should we require (true) or disable (false) using SSL for communicating with the AWS API
    -The AWS SDK for Ruby defaults to SSL so we preserve that

    - -

    - - value - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "1"
    • -
    - -

    The default value to use for events which do not have a CW_value field
    -If provided, this must be a string which can be converted to a float, for example...

    - -
    "1", "2.34", ".5", and "0.67"
    -
    - -

    If you set this option you should probably set the unit option along with it

    - - -
    - -This is documentation from lib/logstash/outputs/cloudwatch.rb diff --git a/docs/1.2.0.beta1/outputs/datadog.html b/docs/1.2.0.beta1/outputs/datadog.html deleted file mode 100644 index 7f8dd7254..000000000 --- a/docs/1.2.0.beta1/outputs/datadog.html +++ /dev/null @@ -1,220 +0,0 @@ ---- -title: logstash docs for outputs/datadog -layout: content_right ---- -

    datadog

    -

    Milestone: 1

    - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  datadog {
    -    alert_type => ... # string, one of ["info", "error", "warning", "success"] (optional)
    -    api_key => ... # string (required)
    -    codec => ... # codec (optional), default: "plain"
    -    date_happened => ... # string (optional)
    -    dd_tags => ... # array (optional)
    -    priority => ... # string, one of ["normal", "low"] (optional)
    -    source_type_name => ... # string, one of ["nagios", "hudson", "jenkins", "user", "my apps", "feed", "chef", "puppet", "git", "bitbucket", "fabric", "capistrano"] (optional), default: "my apps"
    -    text => ... # string (optional), default: "%{message}"
    -    title => ... # string (optional), default: "Logstash event for %{source}"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - alert_type - - -

    - -
      -
    • Value can be any of: "info", "error", "warning", "success"
    • -
    • There is no default value for this setting.
    • -
    - -

    Alert type

    - -

    - - api_key (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

This output lets you send events (for now; metrics are coming soon) to -DataDogHQ based on Logstash events

    - -

    Note that since Logstash maintains no state -these will be one-shot events

    - -

    Your DatadogHQ API key

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - date_happened - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Date Happened

    - -

    - - dd_tags - - -

    - -
      -
    • Value type is array
    • -
    • There is no default value for this setting.
    • -
    - -

Set any custom tags for this event. -Defaults to the event's Logstash tags, if any.

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - priority - - -

    - -
      -
    • Value can be any of: "normal", "low"
    • -
    • There is no default value for this setting.
    • -
    - -

    Priority

    - -

    - - source_type_name - - -

    - -
      -
    • Value can be any of: "nagios", "hudson", "jenkins", "user", "my apps", "feed", "chef", "puppet", "git", "bitbucket", "fabric", "capistrano"
    • -
    • Default value is "my apps"
    • -
    - -

    Source type name

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - text - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{message}"
    • -
    - -

    Text

    - -

    - - title - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "Logstash event for %{source}"
    • -
    - -

    Title

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/datadog.rb diff --git a/docs/1.2.0.beta1/outputs/datadog_metrics.html b/docs/1.2.0.beta1/outputs/datadog_metrics.html deleted file mode 100644 index 162c2617e..000000000 --- a/docs/1.2.0.beta1/outputs/datadog_metrics.html +++ /dev/null @@ -1,232 +0,0 @@ ---- -title: logstash docs for outputs/datadog_metrics -layout: content_right ---- -

    datadog_metrics

    -

    Milestone: 1

    - -

    This output lets you send metrics to -DataDogHQ based on Logstash events. -Default queue_size and timeframe are low in order to provide near realtime alerting. -If you do not use Datadog for alerting, consider raising these thresholds.

    - - -
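A minimal configuration reflecting the note above might look like this sketch. The api_key is a hypothetical placeholder, and the raised queue_size and timeframe values are illustrative only.

```
output {
  datadog_metrics {
    api_key => "0123456789abcdef"  # hypothetical placeholder key
    # The low defaults (10 events / 10 seconds) favor near-realtime
    # alerting; raise both if you do not alert from Datadog:
    queue_size => 100
    timeframe => 60
  }
}
```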

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  datadog_metrics {
    -    api_key => ... # string (required)
    -    codec => ... # codec (optional), default: "plain"
    -    dd_tags => ... # array (optional)
    -    device => ... # string (optional), default: "%{metric_device}"
    -    host => ... # string (optional), default: "%{source}"
    -    metric_name => ... # string (optional), default: "%{metric_name}"
    -    metric_type => ... # string, one of ["gauge", "counter"] (optional), default: "%{metric_type}"
    -    metric_value => ... #  (optional), default: "%{metric_value}"
    -    queue_size => ... # number (optional), default: 10
    -    timeframe => ... # number (optional), default: 10
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - api_key (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Your DatadogHQ API key. https://app.datadoghq.com/account/settings#api

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - dd_tags - - -

    - -
      -
    • Value type is array
    • -
    • There is no default value for this setting.
    • -
    - -

Set any custom tags for this event. -Defaults to the event's Logstash tags, if any.

    - -

    - - device - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{metric_device}"
    • -
    - -

    The name of the device that produced the metric.

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{source}"
    • -
    - -

    The name of the host that produced the metric.

    - -

    - - metric_name - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{metric_name}"
    • -
    - -

    The name of the time series.

    - -

    - - metric_type - - -

    - -
      -
    • Value can be any of: "gauge", "counter"
    • -
    • Default value is "%{metric_type}"
    • -
    - -

    The type of the metric.

    - -

    - - metric_value - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{metric_value}"
    • -
    - -

    The value.

    - -

    - - queue_size - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 10
    • -
    - -

How many events to queue before flushing to Datadog -ahead of the schedule set in @timeframe

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - timeframe - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 10
    • -
    - -

    How often (in seconds) to flush queued events to Datadog

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/datadog_metrics.rb diff --git a/docs/1.2.0.beta1/outputs/elasticsearch.html b/docs/1.2.0.beta1/outputs/elasticsearch.html deleted file mode 100644 index e260bf428..000000000 --- a/docs/1.2.0.beta1/outputs/elasticsearch.html +++ /dev/null @@ -1,329 +0,0 @@ ---- -title: logstash docs for outputs/elasticsearch -layout: content_right ---- -

    elasticsearch

    -

    Milestone: 3

    - -

This output lets you store logs in elasticsearch and is the recommended -output for logstash. If you plan on using the logstash web interface, you'll -need to use this output.

    - -

    VERSION NOTE: Your elasticsearch cluster must be running elasticsearch - 0.90.3. If you use any other version of elasticsearch, - you should consider using the elasticsearch_http - output instead.

    - -

    If you want to set other elasticsearch options that are not exposed directly -as config options, there are two options:

    - -
      -
    • create an elasticsearch.yml file in the $PWD of the logstash process
    • -
    • pass in es.* java properties (java -Des.node.foo= or ruby -J-Des.node.foo=)
    • -
    - - -

    This plugin will join your elasticsearch cluster, so it will show up in -elasticsearch's cluster health status.

    - -

    You can learn more about elasticsearch at http://elasticsearch.org

    - -

    Operational Notes

    - -

    Your firewalls will need to permit port 9300 in both directions (from -logstash to elasticsearch, and elasticsearch to logstash)

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  elasticsearch {
    -    bind_host => ... # string (optional)
    -    cluster => ... # string (optional)
    -    codec => ... # codec (optional), default: "plain"
    -    document_id => ... # string (optional), default: nil
    -    embedded => ... # boolean (optional), default: false
    -    embedded_http_port => ... # string (optional), default: "9200-9300"
    -    flush_size => ... # number (optional), default: 100
    -    host => ... # string (optional)
    -    idle_flush_time => ... # number (optional), default: 1
    -    index => ... # string (optional), default: "logstash-%{+YYYY.MM.dd}"
    -    index_type => ... # string (optional)
    -    node_name => ... # string (optional)
    -    port => ... # string (optional), default: "9300-9400"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - bind_host - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The name/address of the host to bind to for ElasticSearch clustering

    - -

    - - cluster - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The name of your cluster if you set it on the ElasticSearch side. Useful -for discovery.

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - document_id - - -

    - -
      -
    • Value type is string
    • -
    • Default value is nil
    • -
    - -

    The document ID for the index. Useful for overwriting existing entries in -elasticsearch with the same ID.

    - -

    - - embedded - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Run the elasticsearch server embedded in this process. -This option is useful if you want to run a single logstash process that -handles log processing and indexing; it saves you from needing to run -a separate elasticsearch process.

    - -
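A minimal single-process setup using the embedded server might be sketched as follows; note the operational caveats above (port 9300, cluster membership) still apply.

```
output {
  elasticsearch {
    # Run elasticsearch inside the logstash process; no separate
    # elasticsearch daemon is needed for this single-node setup.
    embedded => true
  }
}
```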

    - - embedded_http_port - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "9200-9300"
    • -
    - -

    If you are running the embedded elasticsearch server, you can set the http -port it listens on here; it is not common to need this setting changed from -default.

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - flush_size - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 100
    • -
    - -

    The maximum number of events to spool before flushing to elasticsearch.

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The name/address of the host to use for ElasticSearch unicast discovery -This is only required if the normal multicast/cluster discovery stuff won't -work in your environment.

    - -

    - - idle_flush_time - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 1
    • -
    - -

    The amount of time since last flush before a flush is forced.

    - -

    - - index - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash-%{+YYYY.MM.dd}"
    • -
    - -

    The index to write events to. This can be dynamic using the %{foo} syntax. -The default value will partition your indices by day so you can more easily -delete old data or only search specific date ranges.

    - -
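As a sketch of the dynamic %{foo} syntax mentioned above, the daily index could also be partitioned by event type (this assumes each event carries a "type" field; the pattern is illustrative, not a recommendation):

```
output {
  elasticsearch {
    # Hypothetical pattern: one index per event type per day.
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
  }
}
```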

    - - index_type - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The index type to write events to. Generally you should try to write only -similar events to the same 'type'. String expansion '%{foo}' works here.

    - -

    - - max_inflight_requests - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is number
    • -
    • Default value is 50
    • -
    - -

    This setting no longer does anything. It exists to keep config validation -from failing. It will be removed in future versions.

    - -

    - - node_name - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The node name ES will use when joining a cluster.

    - -

    By default, this is generated internally by the ES client.

    - -

    - - port - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "9300-9400"
    • -
    - -

    The port for ElasticSearch transport to use. This is not the ElasticSearch -REST API port (normally 9200).

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/elasticsearch.rb diff --git a/docs/1.2.0.beta1/outputs/elasticsearch_http.html b/docs/1.2.0.beta1/outputs/elasticsearch_http.html deleted file mode 100644 index 2c022a57d..000000000 --- a/docs/1.2.0.beta1/outputs/elasticsearch_http.html +++ /dev/null @@ -1,207 +0,0 @@ ---- -title: logstash docs for outputs/elasticsearch_http -layout: content_right ---- -

    elasticsearch_http

    -

    Milestone: 2

    - -

    This output lets you store logs in elasticsearch.

    - -

    This plugin uses the HTTP/REST interface to ElasticSearch, which usually -lets you use any version of elasticsearch server. It is known to work -with elasticsearch 0.90.3

    - -

    You can learn more about elasticsearch at http://elasticsearch.org

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  elasticsearch_http {
    -    codec => ... # codec (optional), default: "plain"
    -    document_id => ... # string (optional), default: nil
    -    flush_size => ... # number (optional), default: 100
    -    host => ... # string (optional)
    -    idle_flush_time => ... # number (optional), default: 1
    -    index => ... # string (optional), default: "logstash-%{+YYYY.MM.dd}"
    -    index_type => ... # string (optional)
    -    port => ... # number (optional), default: 9200
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - document_id - - -

    - -
      -
    • Value type is string
    • -
    • Default value is nil
    • -
    - -

    The document ID for the index. Useful for overwriting existing entries in -elasticsearch with the same ID.

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - flush_size - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 100
    • -
    - -

    Set the number of events to queue up before writing to elasticsearch.

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The hostname or ip address to reach your elasticsearch server.

    - -
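Pointing this output at a remote server might look like the following sketch; the hostname is a hypothetical placeholder, and the port shown is simply the default REST port.

```
output {
  elasticsearch_http {
    host => "es.example.com"  # hypothetical hostname
    port => 9200              # default ES REST port
  }
}
```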

    - - idle_flush_time - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 1
    • -
    - -

    The amount of time since last flush before a flush is forced.

    - -

    - - index - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash-%{+YYYY.MM.dd}"
    • -
    - -

    The index to write events to. This can be dynamic using the %{foo} syntax. -The default value will partition your indices by day so you can more easily -delete old data or only search specific date ranges.

    - -

    - - index_type - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The index type to write events to. Generally you should try to write only -similar events to the same 'type'. String expansion '%{foo}' works here.

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 9200
    • -
    - -

    The port for ElasticSearch HTTP interface to use.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/elasticsearch_http.rb diff --git a/docs/1.2.0.beta1/outputs/elasticsearch_river.html b/docs/1.2.0.beta1/outputs/elasticsearch_river.html deleted file mode 100644 index db29fe7c4..000000000 --- a/docs/1.2.0.beta1/outputs/elasticsearch_river.html +++ /dev/null @@ -1,404 +0,0 @@ ---- -title: logstash docs for outputs/elasticsearch_river -layout: content_right ---- -

    elasticsearch_river

    -

    Milestone: 2

    - -

    This output lets you store logs in elasticsearch. It's similar to the -'elasticsearch' output but improves performance by using a queue server, -rabbitmq, to send data to elasticsearch.

    - -

    Upon startup, this output will automatically contact an elasticsearch cluster -and configure it to read from the queue to which we write.

    - -

You can learn more about elasticsearch at http://elasticsearch.org -More about the elasticsearch rabbitmq river plugin: https://github.com/elasticsearch/elasticsearch-river-rabbitmq/blob/master/README.md

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  elasticsearch_river {
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    document_id => ... # string (optional), default: nil
    -    durable => ... # boolean (optional), default: true
    -    es_bulk_size => ... # number (optional), default: 1000
    -    es_bulk_timeout_ms => ... # number (optional), default: 100
    -    es_host => ... # string (required)
    -    es_ordered => ... # boolean (optional), default: false
    -    es_port => ... # number (optional), default: 9200
    -    exchange => ... # string (optional), default: "elasticsearch"
    -    exchange_type => ... # string, one of ["fanout", "direct", "topic"] (optional), default: "direct"
    -    index => ... # string (optional), default: "logstash-%{+YYYY.MM.dd}"
    -    index_type => ... # string (optional), default: "%{type}"
    -    key => ... # string (optional), default: "elasticsearch"
    -    password => ... # string (optional), default: "guest"
    -    persistent => ... # boolean (optional), default: true
    -    queue => ... # string (optional), default: "elasticsearch"
    -    rabbitmq_host => ... # string (required)
    -    rabbitmq_port => ... # number (optional), default: 5672
    -    user => ... # string (optional), default: "guest"
    -    vhost => ... # string (optional), default: "/"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - debug - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - - - -

    - - document_id - - -

    - -
      -
    • Value type is string
    • -
    • Default value is nil
    • -
    - -

    The document ID for the index. Useful for overwriting existing entries in -elasticsearch with the same ID.

    - -

    - - durable - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - -

    RabbitMQ durability setting. Also used for ElasticSearch setting

    - -

    - - es_bulk_size - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 1000
    • -
    - -

    ElasticSearch river configuration: bulk fetch size

    - -

    - - es_bulk_timeout_ms - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 100
    • -
    - -

    ElasticSearch river configuration: bulk timeout in milliseconds

    - -

    - - es_host (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The name/address of an ElasticSearch host to use for river creation

    - -

    - - es_ordered - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    ElasticSearch river configuration: is ordered?

    - -

    - - es_port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 9200
    • -
    - -

    ElasticSearch API port

    - -

    - - exchange - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "elasticsearch"
    • -
    - -

    RabbitMQ exchange name

    - -

    - - exchange_type - - -

    - -
      -
    • Value can be any of: "fanout", "direct", "topic"
    • -
    • Default value is "direct"
    • -
    - -

    The exchange type (fanout, topic, direct)

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - index - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash-%{+YYYY.MM.dd}"
    • -
    - -

The index to write events to. This can be dynamic using the %{foo} syntax. -The default value will partition your indices by day so you can more easily -delete old data or only search specific date ranges.

    - -

    - - index_type - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{type}"
    • -
    - -

    The index type to write events to. Generally you should try to write only -similar events to the same 'type'. String expansion '%{foo}' works here.

    - -

    - - key - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "elasticsearch"
    • -
    - -

    RabbitMQ routing key

    - -

    - - password - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "guest"
    • -
    - -

    RabbitMQ password

    - -

    - - persistent - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - -

    RabbitMQ persistence setting

    - -

    - - queue - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "elasticsearch"
    • -
    - -

    RabbitMQ queue name

    - -

    - - rabbitmq_host (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Hostname of RabbitMQ server

    - -

    - - rabbitmq_port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 5672
    • -
    - -

    Port of RabbitMQ server

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - -

    - - user - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "guest"
    • -
    - -

    RabbitMQ user

    - -

    - - vhost - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "/"
    • -
    - -

    RabbitMQ vhost

    - - -
    - -This is documentation from lib/logstash/outputs/elasticsearch_river.rb diff --git a/docs/1.2.0.beta1/outputs/email.html b/docs/1.2.0.beta1/outputs/email.html deleted file mode 100644 index adb45f4da..000000000 --- a/docs/1.2.0.beta1/outputs/email.html +++ /dev/null @@ -1,329 +0,0 @@ ---- -title: logstash docs for outputs/email -layout: content_right ---- -

    email

    -

    Milestone: 1

    - -

    Send email when any event is received.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  email {
    -    attachments => ... # array (optional), default: []
    -    body => ... # string (optional), default: ""
    -    cc => ... # string (optional)
    -    codec => ... # codec (optional), default: "plain"
    -    contenttype => ... # string (optional), default: "text/html; charset=UTF-8"
    -    from => ... # string (optional), default: "logstash.alert@nowhere.com"
    -    htmlbody => ... # string (optional), default: ""
    -    options => ... # hash (optional), default: {}
    -    replyto => ... # string (optional)
    -    subject => ... # string (optional), default: ""
    -    to => ... # string (required)
    -    via => ... # string (optional), default: "smtp"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - attachments - - -

    - -
      -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

attachments - hash of file name and file location

    - -

    - - body - - -

    - -
      -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    body for email - just plain text

    - -

    - - cc - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Who to CC on this email?

    - -

    See "to" setting for what is valid here.

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - contenttype - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "text/html; charset=UTF-8"
    • -
    - -

    contenttype : for multipart messages, set the content type and/or charset of the html part

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - from - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash.alert@nowhere.com"
    • -
    - -

    The From setting for email - fully qualified email address for the From:

    - -

    - - htmlbody - - -

    - -
      -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    body for email - can contain html markup

    - -

    - - match - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

    This setting is deprecated in favor of logstash's "conditionals" feature -If you were using this setting previously, please use conditionals instead.

    - -

If you need help converting your older 'match' setting to a conditional, -I welcome you to join the #logstash irc channel on freenode or to email -the logstash-users@googlegroups.com mailing list and ask for help! :)

    - -

    - - options - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {}
    • -
    - -

the options to use: -smtp: address, port, enable_starttls_auto, user_name, password, authentication (bool), domain -sendmail: location, arguments -If you do not specify anything, you will get the following equivalent code set in -every new mail object:

    - -

    Mail.defaults do

    - -
    delivery_method :smtp, { :address              => "localhost",
    -                         :port                 => 25,
    -                         :domain               => 'localhost.localdomain',
    -                         :user_name            => nil,
    -                         :password             => nil,
-                         :authentication       => nil, # (plain, login and cram_md5)
    -                         :enable_starttls_auto => true  }
    -
    -retriever_method :pop3, { :address             => "localhost",
    -                          :port                => 995,
    -                          :user_name           => nil,
    -                          :password            => nil,
    -                          :enable_ssl          => true }
    -
    - -

    end

    - -

Mail.delivery_method.new #=> Mail::SMTP instance - Mail.retriever_method.new #=> Mail::POP3 instance

    - -

    Each mail object inherits the default set in Mail.delivery_method, however, on -a per email basis, you can override the method:

    - -

    mail.delivery_method :sendmail

    - -

    Or you can override the method and pass in settings:

    - -

    mail.delivery_method :sendmail, { :address => 'some.host' }

    - -

    You can also just modify the settings:

    - -

    mail.delivery_settings = { :address => 'some.host' }

    - -

The passed-in hash is just merged against the defaults with +merge!+ and the result -assigned to the mail object. So the above example will change only the :address value -of the global smtp_settings to be 'some.host', keeping all other values.

    - -

    - - replyto - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The Reply-To setting for email - fully qualified email address is required -here.

    - -

    - - subject - - -

    - -
      -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    subject for email

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - to (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Who to send this email to? -A fully qualified email address to send to

    - -

This field also accepts a comma-separated list of emails like -"me@host.com, you@host.com"

    - -

You can also use dynamic fields from the event with the %{fieldname} syntax.
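Putting the recipient options together, a minimal sketch (the addresses and the `ops_contact` field are hypothetical) might look like:

```
output {
  email {
    # static recipient list plus a dynamic field from the event
    to => "me@host.com, %{ops_contact}"
    subject => "logstash alert for %{host}"
    body => "%{message}"
  }
}
```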

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - -

    - - via - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "smtp"
    • -
    - -

how to send email: either smtp or sendmail - defaults to 'smtp'

    - - -
    - -This is documentation from lib/logstash/outputs/email.rb diff --git a/docs/1.2.0.beta1/outputs/exec.html b/docs/1.2.0.beta1/outputs/exec.html deleted file mode 100644 index 8e714bc4f..000000000 --- a/docs/1.2.0.beta1/outputs/exec.html +++ /dev/null @@ -1,122 +0,0 @@ ---- -title: logstash docs for outputs/exec -layout: content_right ---- -

    exec

    -

    Milestone: 1

    - -

    This output will run a command for any matching event.

    - -

    Example:

    - -
    output {
    -  exec {
    -    type => abuse
    -    command => "iptables -A INPUT -s %{clientip} -j DROP"
    -  }
    -}
    -
    - -

Runs subprocesses via Ruby's system function

    - -

    WARNING: if you want it non-blocking you should use & or dtach or other such -techniques

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  exec {
    -    codec => ... # codec (optional), default: "plain"
    -    command => ... # string (required)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - command (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

Command line to execute via subprocess. Use dtach or screen to make it non-blocking

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/exec.rb diff --git a/docs/1.2.0.beta1/outputs/file.html b/docs/1.2.0.beta1/outputs/file.html deleted file mode 100644 index ba361fcfe..000000000 --- a/docs/1.2.0.beta1/outputs/file.html +++ /dev/null @@ -1,186 +0,0 @@ ---- -title: logstash docs for outputs/file -layout: content_right ---- -

    file

    -

    Milestone: 2

    - -

    File output.

    - -

    Write events to files on disk. You can use fields from the -event as parts of the filename.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  file {
    -    codec => ... # codec (optional), default: "plain"
    -    flush_interval => ... # number (optional), default: 2
    -    gzip => ... # boolean (optional), default: false
    -    max_size => ... # string (optional)
    -    message_format => ... # string (optional)
    -    path => ... # string (required)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - flush_interval - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 2
    • -
    - -

Flush interval for flushing writes to log files. 0 will flush on every message

    - -

    - - gzip - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Gzip output stream

    - -

    - - max_size - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The maximum size of file to write. When the file exceeds this -threshold, it will be rotated to the current filename + ".1" -If that file already exists, the previous .1 will shift to .2 -and so forth.

    - -

    NOT YET SUPPORTED

    - -

    - - message_format - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The format to use when writing events to the file. This value -supports any string and can include %{name} and other dynamic -strings.

    - -

    If this setting is omitted, the full json representation of the -event will be written as a single line.

    - -

    - - path (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The path to the file to write. Event fields can be used here, -like "/var/log/logstash/%{host}/%{application}" -One may also utilize the path option for date-based log -rotation via the joda time format. This will use the event -timestamp. -E.g.: path => "./test-%{+YYYY-MM-dd}.txt" to create -./test-2013-05-29.txt
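For example, a sketch combining an event field with the date-based rotation described above (the field names and paths are illustrative):

```
output {
  file {
    # one file per host, rotated daily via the event timestamp
    path => "/var/log/logstash/%{host}/app-%{+YYYY-MM-dd}.log"
  }
}
```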

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/file.rb diff --git a/docs/1.2.0.beta1/outputs/ganglia.html b/docs/1.2.0.beta1/outputs/ganglia.html deleted file mode 100644 index 7038b6db7..000000000 --- a/docs/1.2.0.beta1/outputs/ganglia.html +++ /dev/null @@ -1,216 +0,0 @@ ---- -title: logstash docs for outputs/ganglia -layout: content_right ---- -

    ganglia

    -

    Milestone: 2

    - -

    This output allows you to pull metrics from your logs and ship them to -ganglia's gmond. This is heavily based on the graphite output.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  ganglia {
    -    codec => ... # codec (optional), default: "plain"
    -    host => ... # string (optional), default: "localhost"
    -    lifetime => ... # number (optional), default: 300
    -    max_interval => ... # number (optional), default: 60
    -    metric => ... # string (required)
    -    metric_type => ... # string, one of ["string", "int8", "uint8", "int16", "uint16", "int32", "uint32", "float", "double"] (optional), default: "uint8"
    -    port => ... # number (optional), default: 8649
    -    units => ... # string (optional), default: ""
    -    value => ... # string (required)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "localhost"
    • -
    - -

    The address of the ganglia server.

    - -

    - - lifetime - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 300
    • -
    - -

    Lifetime in seconds of this metric

    - -

    - - max_interval - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 60
    • -
    - -

    Maximum time in seconds between gmetric calls for this metric.

    - -

    - - metric (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The metric to use. This supports dynamic strings like %{host}

    - -

    - - metric_type - - -

    - -
      -
    • Value can be any of: "string", "int8", "uint8", "int16", "uint16", "int32", "uint32", "float", "double"
    • -
    • Default value is "uint8"
    • -
    - -

    The type of value for this metric.

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 8649
    • -
    - -

    The port to connect on your ganglia server.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - -

    - - units - - -

    - -
      -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    Gmetric units for metric, such as "kb/sec" or "ms" or whatever unit -this metric uses.

    - -

    - - value (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

The value to use. This supports dynamic strings like %{bytes}. -It will be coerced to a floating point value. Values which cannot be -coerced will become zero (0).
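As a sketch, a ganglia output wiring both required settings to event fields (the host address and field names are assumptions):

```
output {
  ganglia {
    host => "gmond.example.com"
    metric => "bytes_%{host}"
    value => "%{bytes}"      # coerced to float; non-numeric values become 0
    metric_type => "uint32"
    units => "bytes"
  }
}
```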

    - - -
    - -This is documentation from lib/logstash/outputs/ganglia.rb diff --git a/docs/1.2.0.beta1/outputs/gelf.html b/docs/1.2.0.beta1/outputs/gelf.html deleted file mode 100644 index 09a5fef29..000000000 --- a/docs/1.2.0.beta1/outputs/gelf.html +++ /dev/null @@ -1,330 +0,0 @@ ---- -title: logstash docs for outputs/gelf -layout: content_right ---- -

    gelf

    -

    Milestone: 2

    - -

    GELF output. This is most useful if you want to use logstash -to output events to graylog2.

    - -

    More information at http://www.graylog2.org/about/gelf

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  gelf {
    -    chunksize => ... # number (optional), default: 1420
    -    codec => ... # codec (optional), default: "plain"
    -    custom_fields => ... # hash (optional), default: {}
    -    facility => ... # string (optional), default: "logstash-gelf"
    -    file => ... # string (optional), default: "%{path}"
    -    full_message => ... # string (optional), default: "%{message}"
    -    host => ... # string (required)
    -    ignore_metadata => ... # array (optional), default: ["@timestamp", "@version", "severity", "source_host", "source_path", "short_message"]
    -    level => ... # array (optional), default: ["%{severity}", "INFO"]
    -    line => ... # string (optional)
    -    port => ... # number (optional), default: 12201
    -    sender => ... # string (optional), default: "%{source}"
    -    ship_metadata => ... # boolean (optional), default: true
    -    ship_tags => ... # boolean (optional), default: true
    -    short_message => ... # string (optional), default: "short_message"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - chunksize - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 1420
    • -
    - -

    The GELF chunksize. You usually don't need to change this.

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - custom_fields - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {}
    • -
    - -

The GELF custom field mappings. GELF supports arbitrary attributes as custom -fields. This exposes that. Exclude the leading `_` portion of the field name, -e.g. `custom_fields => ['foo_field', 'some_value']` sets the field `_foo_field` to `some_value`.
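A minimal sketch of that mapping (the server address and field values are hypothetical):

```
output {
  gelf {
    host => "graylog.example.com"
    # shipped to graylog2 as the attribute "_environment"
    custom_fields => ["environment", "production"]
  }
}
```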

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - facility - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash-gelf"
    • -
    - -

    The GELF facility. Dynamic values like %{foo} are permitted here; this -is useful if you need to use a value from the event as the facility name.

    - -

    - - file - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{path}"
    • -
    - -

    The GELF file; this is usually the source code file in your program where -the log event originated. Dynamic values like %{foo} are permitted here.

    - -

    - - full_message - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{message}"
    • -
    - -

    The GELF full message. Dynamic values like %{foo} are permitted here.

    - -

    - - host (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    graylog2 server address

    - -

    - - ignore_metadata - - -

    - -
      -
    • Value type is array
    • -
    • Default value is ["@timestamp", "@version", "severity", "source_host", "source_path", "short_message"]
    • -
    - -

    Ignore these fields when ship_metadata is set. Typically this lists the -fields used in dynamic values for GELF fields.

    - -

    - - level - - -

    - -
      -
    • Value type is array
    • -
    • Default value is ["%{severity}", "INFO"]
    • -
    - -

    The GELF message level. Dynamic values like %{level} are permitted here; -useful if you want to parse the 'log level' from an event and use that -as the gelf level/severity.

    - -

Values here can be integers [0..7] inclusive or any of -"debug", "info", "warn", "error", "fatal" (case insensitive). -Single-character versions of these are also valid, "d", "i", "w", "e", "f", -"u". -The following additional severity labels from logstash's syslog_pri filter -are accepted: "emergency", "alert", "critical", "warning", "notice", and -"informational"

    - -

    - - line - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The GELF line number; this is usually the line number in your program where -the log event originated. Dynamic values like %{foo} are permitted here, but the -value should be a number.

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 12201
    • -
    - -

    graylog2 server port

    - -

    - - sender - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{source}"
    • -
    - -

    Allow overriding of the gelf 'sender' field. This is useful if you -want to use something other than the event's source host as the -"sender" of an event. A common case for this is using the application name -instead of the hostname.

    - -

    - - ship_metadata - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - -

    Ship metadata within event object? This will cause logstash to ship -any fields in the event (such as those created by grok) in the GELF -messages.

    - -

    - - ship_tags - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - -

    Ship tags within events. This will cause logstash to ship the tags of an -event as the field _tags.

    - -

    - - short_message - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "short_message"
    • -
    - -

    The GELF short message field name. If the field does not exist or is empty, -the event message is taken instead.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/gelf.rb diff --git a/docs/1.2.0.beta1/outputs/gemfire.html b/docs/1.2.0.beta1/outputs/gemfire.html deleted file mode 100644 index b845a1c9f..000000000 --- a/docs/1.2.0.beta1/outputs/gemfire.html +++ /dev/null @@ -1,172 +0,0 @@ ---- -title: logstash docs for outputs/gemfire -layout: content_right ---- -

    gemfire

    -

    Milestone: 1

    - -

    Push events to a GemFire region.

    - -

    GemFire is an object database.

    - -

    To use this plugin you need to add gemfire.jar to your CLASSPATH; -using format=json requires jackson.jar too.

    - -

    Note: this plugin has only been tested with GemFire 7.0.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  gemfire {
    -    cache_name => ... # string (optional), default: "logstash"
    -    cache_xml_file => ... # string (optional), default: nil
    -    codec => ... # codec (optional), default: "plain"
    -    key_format => ... # string (optional), default: "%{source}-%{@timestamp}"
    -    region_name => ... # string (optional), default: "Logstash"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - cache_name - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash"
    • -
    - -

    Your client cache name

    - -

    - - cache_xml_file - - -

    - -
      -
    • Value type is string
    • -
    • Default value is nil
    • -
    - -

    The path to a GemFire client cache XML file.

    - -

    Example:

    - -
     <client-cache>
    -   <pool name="client-pool">
    -       <locator host="localhost" port="31331"/>
    -   </pool>
    -   <region name="Logstash">
    -       <region-attributes refid="CACHING_PROXY" pool-name="client-pool" >
    -       </region-attributes>
    -   </region>
    - </client-cache>
    -
    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - key_format - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{source}-%{@timestamp}"
    • -
    - -

    A sprintf format to use when building keys
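
    For instance, a sketch of a non-default key format (the field choice is illustrative) could be:

```conf
output {
  gemfire {
    region_name => "Logstash"
    # build region keys from the event type instead of the timestamp
    key_format => "%{source}/%{type}"
  }
}
```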

    - -

    - - region_name - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "Logstash"
    • -
    - -

    The region name

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/gemfire.rb diff --git a/docs/1.2.0.beta1/outputs/graphite.html b/docs/1.2.0.beta1/outputs/graphite.html deleted file mode 100644 index 1ab0b3f7d..000000000 --- a/docs/1.2.0.beta1/outputs/graphite.html +++ /dev/null @@ -1,261 +0,0 @@ ---- -title: logstash docs for outputs/graphite -layout: content_right ---- -

    graphite

    -

    Milestone: 2

    - -

    This output allows you to pull metrics from your logs and ship them to Graphite, an open source tool for storing and graphing metrics.

    - -

    An example use case: At loggly, some of our applications emit aggregated -stats in the logs every 10 seconds. Using the grok filter and this output, -I can capture the metric values from the logs and emit them to graphite.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  graphite {
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    exclude_metrics => ... # array (optional), default: ["%{[^}]+}"]
    -    fields_are_metrics => ... # boolean (optional), default: false
    -    host => ... # string (optional), default: "localhost"
    -    include_metrics => ... # array (optional), default: [".*"]
    -    metrics => ... # hash (optional), default: {}
    -    metrics_format => ... # string (optional), default: "*"
    -    port => ... # number (optional), default: 2003
    -    reconnect_interval => ... # number (optional), default: 2
    -    resend_on_failure => ... # boolean (optional), default: false
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - debug - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Enable debug output

    - -

    - - exclude_metrics - - -

    - -
      -
    • Value type is array
    • -
    • Default value is ["%{[^}]+}"]
    • -
    - -

    Exclude metric names matched by these regexes; by default, unresolved %{field} strings are excluded.
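
    A sketch of excluding metrics by pattern (the jvm prefix is an invented example) while keeping the default guard against unresolved field references:

```conf
output {
  graphite {
    host => "localhost"
    # drop jvm-internal metrics and any unresolved %{field} names
    exclude_metrics => ["^jvm\.", "%{[^}]+}"]
    metrics => [ "%{source}/uptime", "%{uptime_1m}" ]
  }
}
```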

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - fields_are_metrics - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Indicates that the event @fields should be treated as metrics and sent as-is to Graphite.
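
    A minimal sketch, assuming events whose fields are already metric-name/value pairs (the field name patterns are invented for illustration):

```conf
output {
  graphite {
    host => "localhost"
    fields_are_metrics => true
    # only ship fields whose names look like metrics
    include_metrics => ["^response_time$", "^request_count$"]
  }
}
```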

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "localhost"
    • -
    - -

    The address of the graphite server.

    - -

    - - include_metrics - - -

    - -
      -
    • Value type is array
    • -
    • Default value is [".*"]
    • -
    - -

    Include only regex matched metric names

    - -

    - - metrics - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {}
    • -
    - -

    The metric(s) to use. This supports dynamic strings like %{source} -for metric names and also for values. This is a hash field with key -of the metric name, value of the metric value. Example:

    - -
    [ "%{source}/uptime", "%{uptime_1m}" ]
    -
    - -

    The value will be coerced to a floating point value. Values which cannot be coerced will be set to zero (0).
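
    Putting this together with the grok use case mentioned above, a sketch (the log shape, grok pattern, and field names are assumptions) of extracting a number from a log line and shipping it to Graphite:

```conf
filter {
  grok {
    # assumed log shape: "uptime_1m=42.5"
    match => [ "message", "uptime_1m=%{NUMBER:uptime_1m}" ]
  }
}
output {
  graphite {
    host => "localhost"
    metrics => [ "%{source}/uptime", "%{uptime_1m}" ]
  }
}
```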

    - -

    - - metrics_format - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "*"
    • -
    - -

    Defines format of the metric string. The placeholder '*' will be -replaced with the name of the actual metric.

    - -
    metrics_format => "foo.bar.*.sum"
    -
    - -

    NOTE: If no metrics_format is defined, the name of the metric will be used as a fallback.

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 2003
    • -
    - -

    The port to connect to on your Graphite server.

    - -

    - - reconnect_interval - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 2
    • -
    - -

    Interval between reconnect attempts to Carbon.

    - -

    - - resend_on_failure - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Should metrics be re-sent on failure?

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/graphite.rb diff --git a/docs/1.2.0.beta1/outputs/graphtastic.html b/docs/1.2.0.beta1/outputs/graphtastic.html deleted file mode 100644 index a8d96ff16..000000000 --- a/docs/1.2.0.beta1/outputs/graphtastic.html +++ /dev/null @@ -1,243 +0,0 @@ ---- -title: logstash docs for outputs/graphtastic -layout: content_right ---- -

    graphtastic

    -

    Milestone: 2

    - -

    A plugin for a newly developed Java/Spring Metrics application. I didn't really want to code this project, but I couldn't find a respectable alternative that would also run on any Windows machine, which is the problem and why I am not going with Graphite and statsd. This application provides multiple integration options to make its use possible under your network requirements, including a REST option that is always enabled in case you want to write a small script to send the occasional metric data.

    - -

    Find GraphTastic here : https://github.com/NickPadilla/GraphTastic

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  graphtastic {
    -    batch_number => ... # number (optional), default: 60
    -    codec => ... # codec (optional), default: "plain"
    -    context => ... # string (optional), default: "graphtastic"
    -    error_file => ... # string (optional), default: ""
    -    host => ... # string (optional), default: "127.0.0.1"
    -    integration => ... # string, one of ["udp", "tcp", "rmi", "rest"] (optional), default: "udp"
    -    metrics => ... # hash (optional), default: {}
    -    port => ... # number (optional)
    -    retries => ... # number (optional), default: 1
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - batch_number - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 60
    • -
    - -

    The number of metrics to send to GraphTastic at one time. 60 seems to be the ideal amount for UDP with the default packet size.

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - context - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "graphtastic"
    • -
    - -

    If using rest as your endpoint, you also need to provide the application URL; it defaults to localhost/graphtastic. You can customize the application URL by changing the name of the .war file. There are other ways to change the application context, but they vary depending on the application server in use. Please consult your application server documentation for more on application contexts.

    - -

    - - error_file - - -

    - -
      -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    This setting allows you to specify where errored transactions are saved. How these error metrics will be reintegrated has not yet been decided. NOT IMPLEMENTED!

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "127.0.0.1"
    • -
    - -

    Host for the GraphTastic server. Defaults to 127.0.0.1.

    - -

    - - integration - - -

    - -
      -
    • Value can be any of: "udp", "tcp", "rmi", "rest"
    • -
    • Default value is "udp"
    • -
    - -

    Options are udp (fastest, default), rmi (faster), rest (fast), and tcp (do not use TCP yet; it has problems and errors out on Linux).

    - -

    - - metrics - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {}
    • -
    - -

    The metrics hash: you provide a name for your metric and the metric data as key/value pairs. For example:

    - -

    metrics => { "Response" => "%{response}" }

    - -

    example for the logstash config

    - -

    metrics => [ "Response", "%{response}" ]

    - -

    NOTE: you can also use the dynamic fields for the key value as well as the actual value
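
    For example, a sketch using a dynamic field in both the metric name and its value (the field names are illustrative):

```conf
output {
  graphtastic {
    integration => "udp"
    # metric name and value are both interpolated from the event
    metrics => [ "%{host}.Response", "%{response}" ]
  }
}
```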

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • There is no default value for this setting.
    • -
    - -

    Port for the GraphTastic instance. Defaults to 1199 for RMI, 1299 for TCP, 1399 for UDP, and 8080 for REST.

    - -

    - - retries - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 1
    • -
    - -

    The number of retry attempts after a send error. This is currently the only way to handle errored transactions; they should eventually be saved to a file for later consumption, either by the GraphTastic utility or by this plugin once connectivity is re-established.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/graphtastic.rb diff --git a/docs/1.2.0.beta1/outputs/hipchat.html b/docs/1.2.0.beta1/outputs/hipchat.html deleted file mode 100644 index 8461da105..000000000 --- a/docs/1.2.0.beta1/outputs/hipchat.html +++ /dev/null @@ -1,184 +0,0 @@ ---- -title: logstash docs for outputs/hipchat -layout: content_right ---- -

    hipchat

    -

    Milestone: 1

    - -

    This output allows you to write events to HipChat.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  hipchat {
    -    codec => ... # codec (optional), default: "plain"
    -    color => ... # string (optional), default: "yellow"
    -    format => ... # string (optional), default: "%{message}"
    -    from => ... # string (optional), default: "logstash"
    -    room_id => ... # string (required)
    -    token => ... # string (required)
    -    trigger_notify => ... # boolean (optional), default: false
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - color - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "yellow"
    • -
    - -

    Background color for message. -HipChat currently supports one of "yellow", "red", "green", "purple", -"gray", or "random". (default: yellow)

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - format - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{message}"
    • -
    - -

    Message format to send, event tokens are usable here.
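
    As a sketch (the room name and token are placeholders, not real values), a format including the originating host might be:

```conf
output {
  hipchat {
    room_id => "ops-room"        # placeholder room name
    token   => "YOUR_API_TOKEN"  # placeholder auth token
    format  => "[%{host}] %{message}"
  }
}
```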

    - -

    - - from - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash"
    • -
    - -

    The name the message will appear to be sent from.

    - -

    - - room_id (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The ID or name of the room.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - token (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The HipChat authentication token.

    - -

    - - trigger_notify - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Whether or not this message should trigger a notification for people in the room.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/hipchat.rb diff --git a/docs/1.2.0.beta1/outputs/http.html b/docs/1.2.0.beta1/outputs/http.html deleted file mode 100644 index 4fe8f1926..000000000 --- a/docs/1.2.0.beta1/outputs/http.html +++ /dev/null @@ -1,239 +0,0 @@ ---- -title: logstash docs for outputs/http -layout: content_right ---- -

    http

    -

    Milestone: 1

    - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  http {
    -    codec => ... # codec (optional), default: "plain"
    -    content_type => ... # string (optional)
    -    format => ... # string, one of ["json", "form", "message"] (optional), default: "json"
    -    headers => ... # hash (optional)
    -    http_method => ... # string, one of ["put", "post"] (required)
    -    mapping => ... # hash (optional)
    -    message => ... # string (optional)
    -    url => ... # string (required)
    -    verify_ssl => ... # boolean (optional), default: true
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - content_type - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Content type

    - -

    If not specified, this defaults to the following:

    - -
      -
    • if format is "json", "application/json"
    • -
    • if format is "form", "application/x-www-form-urlencoded"
    • -
    - - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - format - - -

    - -
      -
    • Value can be any of: "json", "form", "message"
    • -
    • Default value is "json"
    • -
    - -

    Set the format of the http body.

    - -

    If form, then the body will be the mapping (or whole event) converted -into a query parameter string (foo=bar&baz=fizz...)

    - -

    If message, then the body will be the result of formatting the event according to message

    - -

    Otherwise, the event is sent as json.
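
    A sketch combining these settings (the URL is a placeholder) to POST selected event fields as a form body:

```conf
output {
  http {
    url => "http://example.com/endpoint"  # placeholder endpoint
    http_method => "post"
    format => "form"
    # only these two parameters are sent: foo=...&bar=...
    mapping => ["foo", "%{source}", "bar", "%{type}"]
  }
}
```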

    - -

    - - headers - - -

    - -
      -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

    Custom headers to use. The format is `headers => ["X-My-Header", "%{source}"]`.

    - -

    - - http_method (required setting) - - -

    - -
      -
    • Value can be any of: "put", "post"
    • -
    • There is no default value for this setting.
    • -
    - -

    The HTTP verb to use. Only put and post are supported for now.

    - -

    - - mapping - - -

    - -
      -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

    This lets you choose the structure and parts of the event that are sent.

    - -

    For example:

    - -

    mapping => ["foo", "%{source}", "bar", "%{type}"]

    - -

    - - message - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - - - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - -

    - - url (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    This output lets you PUT or POST events to a -generic HTTP(S) endpoint

    - -

    Additionally, you can customize the headers sent, as well as perform basic customization of the event JSON itself. This setting is the URL to send events to.

    - -

    - - verify_ssl - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - -

    Validate SSL certificates?

    - - -
    - -This is documentation from lib/logstash/outputs/http.rb diff --git a/docs/1.2.0.beta1/outputs/irc.html b/docs/1.2.0.beta1/outputs/irc.html deleted file mode 100644 index 9830be3ea..000000000 --- a/docs/1.2.0.beta1/outputs/irc.html +++ /dev/null @@ -1,245 +0,0 @@ ---- -title: logstash docs for outputs/irc -layout: content_right ---- -

    irc

    -

    Milestone: 1

    - -

    Write events to IRC

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  irc {
    -    channels => ... # array (required)
    -    codec => ... # codec (optional), default: "plain"
    -    format => ... # string (optional), default: "%{message}"
    -    host => ... # string (required)
    -    messages_per_second => ... # number (optional), default: 0.5
    -    nick => ... # string (optional), default: "logstash"
    -    password => ... # password (optional)
    -    port => ... # number (optional), default: 6667
    -    real => ... # string (optional), default: "logstash"
    -    secure => ... # boolean (optional), default: false
    -    user => ... # string (optional), default: "logstash"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - channels (required setting) - - -

    - -
      -
    • Value type is array
    • -
    • There is no default value for this setting.
    • -
    - -

    Channels to broadcast to.

    - -

    These should be full channel names including the '#' symbol, such as -"#logstash".
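
    A minimal sketch (the server and channel names are invented):

```conf
output {
  irc {
    host => "irc.example.org"   # hypothetical IRC server
    channels => ["#logstash", "#ops"]
  }
}
```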

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - format - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{message}"
    • -
    - -

    Message format to send, event tokens are usable here

    - -

    - - host (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Address of the host to connect to

    - -

    - - messages_per_second - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 0.5
    • -
    - -

    Limit the rate of messages sent to IRC in messages per second.

    - -

    - - nick - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash"
    • -
    - -

    IRC Nickname

    - -

    - - password - - -

    - -
      -
    • Value type is password
    • -
    • There is no default value for this setting.
    • -
    - -

    IRC server password

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 6667
    • -
    - -

    Port on host to connect to.

    - -

    - - real - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash"
    • -
    - -

    IRC Real name

    - -

    - - secure - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Set this to true to enable SSL.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - -

    - - user - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash"
    • -
    - -

    IRC Username

    - - -
    - -This is documentation from lib/logstash/outputs/irc.rb diff --git a/docs/1.2.0.beta1/outputs/juggernaut.html b/docs/1.2.0.beta1/outputs/juggernaut.html deleted file mode 100644 index e6d2d40b6..000000000 --- a/docs/1.2.0.beta1/outputs/juggernaut.html +++ /dev/null @@ -1,208 +0,0 @@ ---- -title: logstash docs for outputs/juggernaut -layout: content_right ---- -

    juggernaut

    -

    Milestone: 1

    - -

    Push messages to the juggernaut websockets server:

    - -
      -
    • https://github.com/maccman/juggernaut
    • -
    - - -

    Wraps WebSockets and supports other methods (including XHR long-polling). This is basically just an extension of the redis output (Juggernaut pulls messages from redis), but it pushes messages to a particular channel and formats the messages in the way Juggernaut expects.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  juggernaut {
    -    channels => ... # array (required)
    -    codec => ... # codec (optional), default: "plain"
    -    db => ... # number (optional), default: 0
    -    host => ... # string (optional), default: "127.0.0.1"
    -    message_format => ... # string (optional)
    -    password => ... # password (optional)
    -    port => ... # number (optional), default: 6379
    -    timeout => ... # number (optional), default: 5
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - channels (required setting) - - -

    - -
      -
    • Value type is array
    • -
    • There is no default value for this setting.
    • -
    - -

    List of channels to which to publish. Dynamic names are -valid here, for example "logstash-%{type}".
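
    For example, a sketch publishing each event to a per-type channel (the redis connection settings are left at their defaults):

```conf
output {
  juggernaut {
    # one channel per event type, e.g. "logstash-apache"
    channels => ["logstash-%{type}"]
  }
}
```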

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - db - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 0
    • -
    - -

    The redis database number.

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "127.0.0.1"
    • -
    - -

    The hostname of the redis server that Juggernaut is listening to.

    - -

    - - message_format - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    How the message should be formatted before being pushed to the websocket.

    - -

    - - password - - -

    - -
      -
    • Value type is password
    • -
    • There is no default value for this setting.
    • -
    - -

    Password to authenticate with. There is no authentication by default.

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 6379
    • -
    - -

    The port to connect on.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - timeout - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 5
    • -
    - -

    Redis initial connection timeout in seconds.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/juggernaut.rb diff --git a/docs/1.2.0.beta1/outputs/librato.html b/docs/1.2.0.beta1/outputs/librato.html deleted file mode 100644 index 4dbf24e0f..000000000 --- a/docs/1.2.0.beta1/outputs/librato.html +++ /dev/null @@ -1,212 +0,0 @@ ---- -title: logstash docs for outputs/librato -layout: content_right ---- -

    librato

    -

    Milestone: 1

    - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  librato {
    -    account_id => ... # string (required)
    -    annotation => ... # hash (optional), default: {}
    -    api_token => ... # string (required)
    -    batch_size => ... # string (optional), default: "10"
    -    codec => ... # codec (optional), default: "plain"
    -    counter => ... # hash (optional), default: {}
    -    gauge => ... # hash (optional), default: {}
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - account_id (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    This output lets you send metrics, annotations and alerts to Librato based on Logstash events. This is VERY experimental and inefficient right now.

    - -

    Your Librato account, usually an email address.

    - -

    - - annotation - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {}
    • -
    - -

    Annotations: registers an annotation with Librato. The only required fields are title and name. start_time and end_time will be set to event.unix_timestamp. You can add any other optional annotation values as well; all values will be passed through event.sprintf.

    - -

    Example:
      ["title":"Logstash event on %{source}", "name":"logstashstream"]
    or
      ["title":"Logstash event", "description":"%{message}", "name":"logstashstream"]

    - -

    - - api_token (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Your Librato API Token

    - -

    - - batch_size - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "10"
    • -
    - -

    Batch size: the number of events to batch up before sending to Librato.

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - counter - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {}
    • -
    - -

    Counters: send data to Librato as a counter.

    - -

    Example:
      ["value", "1", "source", "%{source}", "name", "messages_received"]
    Additionally, you can override the measure_time for the event. Must be a unix timestamp:
      ["value", "1", "source", "%{source}", "name", "messages_received", "measure_time", "%{my_unixtime_field}"]
    Default is to use the event's timestamp.

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - gauge - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {}
    • -
    - -

    Gauges: send data to Librato as a gauge.

    - -

    Example:
      ["value", "%{bytes_recieved}", "source", "%{source}", "name", "apache_bytes"]
    Additionally, you can override the measure_time for the event. Must be a unix timestamp:
      ["value", "%{bytes_recieved}", "source", "%{source}", "name", "apache_bytes", "measure_time", "%{my_unixtime_field}"]
    Default is to use the event's timestamp.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - - -
    This is documentation from lib/logstash/outputs/librato.rb

---
title: logstash docs for outputs/loggly
layout: content_right
---

    loggly

    -

    Milestone: 2

    - -

    Got a loggly account? Use logstash to ship logs to Loggly!

    - -

    This is most useful so you can use logstash to parse and structure your logs and ship structured JSON events to your account at Loggly.

    - -

    To use this, you'll need to use a Loggly input with type 'http' and 'json logging' enabled.

    - - -

    Synopsis

    This is what it might look like in your config file:
    output {
      loggly {
        codec => ... # codec (optional), default: "plain"
        host => ... # string (optional), default: "logs.loggly.com"
        key => ... # string (required)
        proto => ... # string (optional), default: "http"
        proxy_host => ... # string (optional)
        proxy_password => ... # password (optional), default: ""
        proxy_port => ... # number (optional)
        proxy_user => ... # string (optional)
      }
    }
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logs.loggly.com"
    • -
    - -

    The hostname to send logs to. This should target the loggly http input server, which is usually "logs.loggly.com".

    - -

    - - key (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The loggly http input key to send to. This is usually visible in the Loggly 'Inputs' page as something like this:

    - -
    https://logs.hoover.loggly.net/inputs/abcdef12-3456-7890-abcd-ef0123456789
                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                          \---------->   key   <-------------/
    - -

    You can use %{foo} field lookups here if you need to pull the api key from the event. This is mainly aimed at multitenant hosting providers who want to offer shipping a customer's logs to that customer's loggly account.
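
    As a sketch of that multitenant pattern (the `customer_key` field name is hypothetical, assumed to be set on each event by an earlier filter):

    ```
    output {
      loggly {
        # pull each customer's Loggly input key from the event itself
        key => "%{customer_key}"
      }
    }
    ```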

    - -

    - - proto - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "http"
    • -
    - -

    Whether the log event should be sent over https instead of plain http.
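
    A minimal sketch of switching the transport to https (the key shown is a placeholder):

    ```
    output {
      loggly {
        key => "abcdef12-3456-7890-abcd-ef0123456789"  # placeholder input key
        proto => "https"
      }
    }
    ```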

    - -

    - - proxy_host - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Proxy Host

    - -

    - - proxy_password - - -

    - -
      -
    • Value type is password
    • -
    • Default value is ""
    • -
    - -

    Proxy Password

    - -

    - - proxy_port - - -

    - -
      -
    • Value type is number
    • -
    • There is no default value for this setting.
    • -
    - -

    Proxy Port

    - -

    - - proxy_user - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Proxy Username

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - - -
    This is documentation from lib/logstash/outputs/loggly.rb

---
title: logstash docs for outputs/lumberjack
layout: content_right
---

    lumberjack

    -

    Milestone: 1

    - - - - -

    Synopsis

    This is what it might look like in your config file:
    output {
      lumberjack {
        codec => ... # codec (optional), default: "plain"
        hosts => ... # array (required)
        port => ... # number (required)
        ssl_certificate => ... # a valid filesystem path (required)
        window_size => ... # number (optional), default: 5000
      }
    }
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - hosts (required setting) - - -

    - -
      -
    • Value type is array
    • -
    • There is no default value for this setting.
    • -
    - -

    The list of addresses the lumberjack output can send to.

    - -

    - - port (required setting) - - -

    - -
      -
    • Value type is number
    • -
    • There is no default value for this setting.
    • -
    - -

    The port to connect to.

    - -

    - - ssl_certificate (required setting) - - -

    - -
      -
    • Value type is path
    • -
    • There is no default value for this setting.
    • -
    - -

    The SSL certificate to use.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - -

    - - window_size - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 5000
    • -
    - -

    The window size.

    - - -
    This is documentation from lib/logstash/outputs/lumberjack.rb

---
title: logstash docs for outputs/metriccatcher
layout: content_right
---

    metriccatcher

    -

    Milestone: 2

    - -

    This output ships metrics to MetricCatcher, allowing you to utilize Coda Hale's Metrics.

    - -

    More info on MetricCatcher: https://github.com/clearspring/MetricCatcher

    - -

    At Clearspring, we use it to count the response codes from Apache logs:

    - -
    metriccatcher {
      host => "localhost"
      port => "1420"
      type => "apache-access"
      fields => [ "response" ]
      meter => [ "%{source}.apache.response.%{response}", "1" ]
    }
    - - -

    Synopsis

    This is what it might look like in your config file:
    output {
      metriccatcher {
        biased => ... # hash (optional)
        codec => ... # codec (optional), default: "plain"
        counter => ... # hash (optional)
        gauge => ... # hash (optional)
        host => ... # string (optional), default: "localhost"
        meter => ... # hash (optional)
        port => ... # number (optional), default: 1420
        timer => ... # hash (optional)
        uniform => ... # hash (optional)
      }
    }
    - -

    Details

    - -

    - - biased - - -

    - -
      -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

    The metrics to send. This supports dynamic strings like %{source} for metric names and also for values. This is a hash field with the metric name as key and the metric value as value.

    - -

    The value will be coerced to a floating point value. Values which cannot be coerced will be set to zero (0).

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - counter - - -

    - -
      -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

    The metrics to send. This supports dynamic strings like %{source} for metric names and also for values. This is a hash field with the metric name as key and the metric value as value. Example:

    - -

    counter => [ "%{source}.apache.hits.%{response}", "1" ]

    - -

    The value will be coerced to a floating point value. Values which cannot be coerced will be set to zero (0).

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - gauge - - -

    - -
      -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

    The metrics to send. This supports dynamic strings like %{source} for metric names and also for values. This is a hash field with the metric name as key and the metric value as value.

    - -

    The value will be coerced to a floating point value. Values which cannot be coerced will be set to zero (0).

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "localhost"
    • -
    - -

    The address of the MetricCatcher

    - -

    - - meter - - -

    - -
      -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

    The metrics to send. This supports dynamic strings like %{source} for metric names and also for values. This is a hash field with the metric name as key and the metric value as value.

    - -

    The value will be coerced to a floating point value. Values which cannot be coerced will be set to zero (0).

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 1420
    • -
    - -

    The port to connect to on your MetricCatcher server.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - timer - - -

    - -
      -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

    The metrics to send. This supports dynamic strings like %{source} for metric names and also for values. This is a hash field with the metric name as key and the metric value as value. Example:

    - -

    timer => [ "%{source}.apache.responsetime", "%{responsetime}" ]

    - -

    The value will be coerced to a floating point value. Values which cannot be coerced will be set to zero (0).

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - -

    - - uniform - - -

    - -
      -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

    The metrics to send. This supports dynamic strings like %{source} for metric names and also for values. This is a hash field with the metric name as key and the metric value as value.

    - -

    The value will be coerced to a floating point value. Values which cannot be coerced will be set to zero (0).

    - - -
    This is documentation from lib/logstash/outputs/metriccatcher.rb

---
title: logstash docs for outputs/mongodb
layout: content_right
---

    mongodb

    -

    Milestone: 2

    - - - - -

    Synopsis

    This is what it might look like in your config file:
    output {
      mongodb {
        codec => ... # codec (optional), default: "plain"
        collection => ... # string (required)
        database => ... # string (required)
        generateId => ... # boolean (optional), default: false
        isodate => ... # boolean (optional), default: false
        retry_delay => ... # number (optional), default: 3
        uri => ... # string (required)
      }
    }
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - collection (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The collection to use. This value can use %{foo} values to dynamically select a collection based on data in the event.
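
    For example, a sketch that routes events into per-type collections (the local uri, the database name, and the presence of a `type` field on each event are assumptions):

    ```
    output {
      mongodb {
        uri => "mongodb://localhost:27017"   # assumed local MongoDB
        database => "logstash"               # hypothetical database name
        collection => "logs_%{type}"         # one collection per event type
      }
    }
    ```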

    - -

    - - database (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The database to use

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - generateId - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    If true, an _id field will be added to the document before insertion. The _id field will use the timestamp of the event and overwrite any existing _id field in the event.

    - -

    - - isodate - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    If true, store the @timestamp field in mongodb as an ISODate type instead of an ISO8601 string. For more information about this, see http://www.mongodb.org/display/DOCS/Dates

    - -

    - - retry_delay - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 3
    • -
    - -

    Number of seconds to wait after failure before retrying

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - -

    - - uri (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    A MongoDB URI to connect to. See http://docs.mongodb.org/manual/reference/connection-string/

    - - -
    This is documentation from lib/logstash/outputs/mongodb.rb

---
title: logstash docs for outputs/nagios
layout: content_right
---

    nagios

    -

    Milestone: 2

    - -

    The nagios output is used for sending passive check results to nagios via the nagios command file.

    - -

    For this output to work, your event must have the following fields:

    - -
      -
    • "nagios_host"
    • -
    • "nagios_service"
    • -
    - - -

    These fields are supported, but optional:

    - -
      -
    • "nagios_annotation"
    • -
    • "nagios_level"
    • -
    - - -

    There are two configuration options:

    - -
      -
    • commandfile - The location of the Nagios external command file
    • -
    • nagios_level - Specifies the level of the check to be sent. Defaults to CRITICAL and can be overridden by setting the "nagios_level" field to one of "OK", "WARNING", "CRITICAL", or "UNKNOWN"

      - -
       match => [ "message", "(error|ERROR|CRITICAL)" ]
      -
      - -

    output {
      if [message] =~ /(error|ERROR|CRITICAL)/ {
        nagios {
          # your config here
        }
      }
    }

    • -
    - - - -

    Synopsis

    This is what it might look like in your config file:
    output {
      nagios {
        codec => ... # codec (optional), default: "plain"
        commandfile => ... # a valid filesystem path (optional), default: "/var/lib/nagios3/rw/nagios.cmd"
        nagios_level => ... # string, one of ["0", "1", "2", "3"] (optional), default: "2"
      }
    }
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - commandfile - - -

    - -
      -
    • Value type is path
    • -
    • Default value is "/var/lib/nagios3/rw/nagios.cmd"
    • -
    - -

    The path to your nagios command file

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - nagios_level - - -

    - -
      -
    • Value can be any of: "0", "1", "2", "3"
    • -
    • Default value is "2"
    • -
    - -

    The Nagios check level. Should be one of 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN. Defaults to 2 - CRITICAL.
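
    As a sketch, submitting matching events as WARNING rather than the default CRITICAL (the match pattern is illustrative; events still need the nagios_host and nagios_service fields described above):

    ```
    output {
      if [message] =~ /deprecated/ {
        nagios {
          nagios_level => "1"  # 1 = WARNING
        }
      }
    }
    ```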

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - - -
    This is documentation from lib/logstash/outputs/nagios.rb

---
title: logstash docs for outputs/nagios_nsca
layout: content_right
---

    nagios_nsca

    -

    Milestone: 1

    - -

    The nagios_nsca output is used for sending passive check results to Nagios through the NSCA protocol.

    - -

    This is useful if your Nagios server is not the same as the source host from which you want to send logs or alerts. If you only have one server, this output is probably overkill for you; take a look at the 'nagios' output instead.

    - -

    Here is a sample config using the nagios_nsca output:

    - -
    output {
      nagios_nsca {
        # specify the hostname or ip of your nagios server
        host => "nagios.example.com"

        # specify the port to connect to
        port => 5667
      }
    }
    - - -

    Synopsis

    This is what it might look like in your config file:
    output {
      nagios_nsca {
        codec => ... # codec (optional), default: "plain"
        host => ... # string (optional), default: "localhost"
        message_format => ... # string (optional), default: "%{@timestamp} %{source}: %{message}"
        nagios_host => ... # string (optional), default: "%{host}"
        nagios_service => ... # string (optional), default: "LOGSTASH"
        nagios_status => ... # string (required)
        port => ... # number (optional), default: 5667
        send_nsca_bin => ... # a valid filesystem path (optional), default: "/usr/sbin/send_nsca"
        send_nsca_config => ... # a valid filesystem path (optional)
      }
    }
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "localhost"
    • -
    - -

    The nagios host or IP to send logs to. It should have a NSCA daemon running.

    - -

    - - message_format - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{@timestamp} %{source}: %{message}"
    • -
    - -

    The format to use when writing events to nagios. This value supports any string and can include %{name} and other dynamic strings.

    - -

    - - nagios_host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{host}"
    • -
    - -

    The nagios 'host' you want to submit a passive check result to. This parameter accepts interpolation, e.g. you can use @source_host or other logstash internal variables.

    - -

    - - nagios_service - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "LOGSTASH"
    • -
    - -

    The nagios 'service' you want to submit a passive check result to. This parameter accepts interpolation, e.g. you can use @source_host or other logstash internal variables.

    - -

    - - nagios_status (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The status to send to nagios. Should be one of 0 = OK, 1 = WARNING, 2 = CRITICAL, or 3 = UNKNOWN.
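
    A sketch of wiring this up with a conditional so that only critical-looking events are submitted (the hostname and match pattern are illustrative):

    ```
    output {
      if [message] =~ /CRITICAL/ {
        nagios_nsca {
          host => "nagios.example.com"
          port => 5667
          nagios_status => "2"  # 2 = CRITICAL
        }
      }
    }
    ```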

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 5667
    • -
    - -

    The port where the NSCA daemon on the nagios host listens.

    - -

    - - send_nsca_bin - - -

    - -
      -
    • Value type is path
    • -
    • Default value is "/usr/sbin/send_nsca"
    • -
    - -

    The path to the 'send_nsca' binary on the local host.

    - -

    - - send_nsca_config - - -

    - -
      -
    • Value type is path
    • -
    • There is no default value for this setting.
    • -
    - -

    The path to the send_nsca config file on the local host. Leave blank if you don't want to provide a config file.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - - -
    This is documentation from lib/logstash/outputs/nagios_nsca.rb

---
title: logstash docs for outputs/null
layout: content_right
---

    null

    -

    Milestone: 3

    - -

    A null output. This is useful for testing logstash inputs and filters for performance.

    - - -

    Synopsis

    This is what it might look like in your config file:
    output {
      null {
        codec => ... # codec (optional), default: "plain"
      }
    }
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - - -
    This is documentation from lib/logstash/outputs/null.rb

---
title: logstash docs for outputs/opentsdb
layout: content_right
---

    opentsdb

    -

    Milestone: 1

    - -

This output allows you to pull metrics from your logs and ship them to OpenTSDB. OpenTSDB is an open source tool for storing and graphing metrics.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
output {
  opentsdb {
    codec => ... # codec (optional), default: "plain"
    debug => ... # boolean (optional)
    host => ... # string (optional), default: "localhost"
    metrics => ... # array (required)
    port => ... # number (optional), default: 4242
  }
}
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - debug - - -

    - -
      -
    • Value type is boolean
    • -
    • There is no default value for this setting.
    • -
    - -

    Enable debugging. Tries to pretty-print the entire event object.

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "localhost"
    • -
    - -

    The address of the opentsdb server.

    - -

    - - metrics (required setting) - - -

    - -
      -
    • Value type is array
    • -
    • There is no default value for this setting.
    • -
    - -

The metric(s) to use. This supports dynamic strings like %{source_host} for metric names and also for values. This is an array field with the metric name as the key, followed by the metric value and any number of tag name/value pairs. Example:

    - -
[
  "%{host}/uptime",
  "%{uptime_1m}",
  "hostname",
  "%{host}",
  "anotherhostname",
  "%{host}"
]
    - -

The value will be coerced to a floating point value. Values which cannot be coerced will be set to zero (0).
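Putting this together, a minimal configuration might look like the sketch below. The hostname and the uptime fields are placeholders for illustration, not values from this document:

```
output {
  opentsdb {
    host => "tsdb.example.com"   # placeholder address of your opentsdb server
    port => 4242
    # metric name, metric value, then tag name/value pairs
    metrics => [ "%{host}/uptime", "%{uptime_1m}", "hostname", "%{host}" ]
  }
}
```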

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 4242
    • -
    - -

The port to connect to on your opentsdb server.

    - -

    - - tags - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/opentsdb.rb diff --git a/docs/1.2.0.beta1/outputs/pagerduty.html b/docs/1.2.0.beta1/outputs/pagerduty.html deleted file mode 100644 index 3eddad5c4..000000000 --- a/docs/1.2.0.beta1/outputs/pagerduty.html +++ /dev/null @@ -1,190 +0,0 @@ ---- -title: logstash docs for outputs/pagerduty -layout: content_right ---- -

    pagerduty

    -

    Milestone: 1

    - -

PagerDuty output. Send specific events to PagerDuty for alerting.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
output {
  pagerduty {
    codec => ... # codec (optional), default: "plain"
    description => ... # string (optional), default: "Logstash event for %{host}"
    details => ... # hash (optional), default: {"timestamp"=>"%{@timestamp}", "message"=>"%{message}"}
    event_type => ... # string, one of ["trigger", "acknowledge", "resolve"] (optional), default: "trigger"
    incident_key => ... # string (optional), default: "logstash/%{host}/%{type}"
    pdurl => ... # string (optional), default: "https://events.pagerduty.com/generic/2010-04-15/create_event.json"
    service_key => ... # string (required)
  }
}
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - description - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "Logstash event for %{host}"
    • -
    - -

    Custom description

    - -

    - - details - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {"timestamp"=>"%{@timestamp}", "message"=>"%{message}"}
    • -
    - -

Event details. These might be keys from the logstash event that you wish to include. Tags are automatically included if detected, so there is no need to add them here.

    - -

    - - event_type - - -

    - -
      -
    • Value can be any of: "trigger", "acknowledge", "resolve"
    • -
    • Default value is "trigger"
    • -
    - -

    Event type

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - incident_key - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash/%{host}/%{type}"
    • -
    - -

The incident key to use. PagerDuty uses this key to group events into a single incident.
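As a sketch, overriding the default incident_key lets you group alerts by service rather than by host. The service_key value is a placeholder, and the %{service} field is assumed to exist on your events:

```
output {
  pagerduty {
    service_key => "YOUR_PAGERDUTY_SERVICE_KEY"   # placeholder, obtain from PagerDuty
    incident_key => "logstash/%{service}"         # assumes a "service" field on the event
    description => "Logstash alert for %{service}"
  }
}
```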

    - -

    - - pdurl - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "https://events.pagerduty.com/generic/2010-04-15/create_event.json"
    • -
    - -

PagerDuty API URL. You shouldn't need to change this; it allows for flexibility should PagerDuty iterate the API before Logstash has been updated.

    - -

    - - service_key (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Service API Key

    - -

    - - tags - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/pagerduty.rb diff --git a/docs/1.2.0.beta1/outputs/pipe.html b/docs/1.2.0.beta1/outputs/pipe.html deleted file mode 100644 index 96cc66f28..000000000 --- a/docs/1.2.0.beta1/outputs/pipe.html +++ /dev/null @@ -1,146 +0,0 @@ ---- -title: logstash docs for outputs/pipe -layout: content_right ---- -

    pipe

    -

    Milestone: 1

    - -

    Pipe output.

    - -

Pipe events to the stdin of another program. You can use fields from the event as parts of the command. WARNING: This feature can cause logstash to fork off multiple child processes if you are not careful with per-event command lines.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
output {
  pipe {
    codec => ... # codec (optional), default: "plain"
    command => ... # string (required)
    message_format => ... # string (optional)
    ttl => ... # number (optional), default: 10
  }
}
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - command (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Command line to launch and pipe to

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - message_format - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

The format to use when writing events to the pipe. This value supports any string and can include %{name} and other dynamic strings.

    - -

If this setting is omitted, the full JSON representation of the event will be written as a single line.
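For example, a sketch that pipes one formatted line per event to a hypothetical logging script (the script path and its flag are illustrative, not part of this plugin):

```
output {
  pipe {
    command => "/usr/local/bin/log-sink --host %{host}"   # hypothetical receiving script
    message_format => "%{@timestamp} %{message}"
  }
}
```

Because the command interpolates %{host}, logstash may fork one child process per distinct host value, which is exactly the forking behaviour the warning above describes.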

    - -

    - - tags - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - ttl - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 10
    • -
    - -

    Close pipe that hasn't been used for TTL seconds. -1 or 0 means never close.

    - -

    - - type - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/pipe.rb diff --git a/docs/1.2.0.beta1/outputs/rabbitmq.html b/docs/1.2.0.beta1/outputs/rabbitmq.html deleted file mode 100644 index 7cae50ec6..000000000 --- a/docs/1.2.0.beta1/outputs/rabbitmq.html +++ /dev/null @@ -1,306 +0,0 @@ ---- -title: logstash docs for outputs/rabbitmq -layout: content_right ---- -

    rabbitmq

    -

    Milestone: 1

    - -

Push events to a RabbitMQ exchange. Requires RabbitMQ 2.x or later (3.x is recommended).

    - -

    Relevant links:

    - - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
output {
  rabbitmq {
    codec => ... # codec (optional), default: "plain"
    debug => ... # boolean (optional), default: false
    durable => ... # boolean (optional), default: true
    exchange => ... # string (required)
    exchange_type => ... # string, one of ["fanout", "direct", "topic"] (required)
    host => ... # string (required)
    key => ... # string (optional), default: "logstash"
    password => ... # password (optional), default: "guest"
    persistent => ... # boolean (optional), default: true
    port => ... # number (optional), default: 5672
    ssl => ... # boolean (optional), default: false
    user => ... # string (optional), default: "guest"
    verify_ssl => ... # boolean (optional), default: false
    vhost => ... # string (optional), default: "/"
  }
}
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - debug - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Enable or disable logging

    - -

    - - durable - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - -

Is this exchange durable? (i.e., should it survive a broker restart?)

    - -

    - - exchange (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The name of the exchange

    - -

    - - exchange_type (required setting) - - -

    - -
      -
    • Value can be any of: "fanout", "direct", "topic"
    • -
    • There is no default value for this setting.
    • -
    - -

    Exchange

    - -

    The exchange type (fanout, topic, direct)

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Connection

    - -

    RabbitMQ server address

    - -

    - - key - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash"
    • -
    - -

    Key to route to by default. Defaults to 'logstash'

    - -
      -
    • Routing keys are ignored on fanout exchanges.
    • -
    - - -
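For instance, a sketch using a topic exchange with a dynamic routing key; the hostname and exchange name are placeholders:

```
output {
  rabbitmq {
    host => "rabbit.example.com"   # placeholder broker address
    exchange => "logstash"         # placeholder exchange name
    exchange_type => "topic"
    key => "logs.%{type}"          # routing key; would be ignored on a fanout exchange
  }
}
```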

    - - password - - -

    - -
      -
    • Value type is password
    • -
    • Default value is "guest"
    • -
    - -

    RabbitMQ password

    - -

    - - persistent - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - -

    Should RabbitMQ persist messages to disk?

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 5672
    • -
    - -

    RabbitMQ port to connect on

    - -

    - - ssl - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Enable or disable SSL

    - -

    - - tags - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - -

    - - user - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "guest"
    • -
    - -

    RabbitMQ username

    - -

    - - verify_ssl - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Validate SSL certificate

    - -

    - - vhost - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "/"
    • -
    - -

    The vhost to use. If you don't know what this is, leave the default.

    - - -
    - -This is documentation from lib/logstash/outputs/rabbitmq.rb diff --git a/docs/1.2.0.beta1/outputs/redis.html b/docs/1.2.0.beta1/outputs/redis.html deleted file mode 100644 index 3ec7abb36..000000000 --- a/docs/1.2.0.beta1/outputs/redis.html +++ /dev/null @@ -1,362 +0,0 @@ ---- -title: logstash docs for outputs/redis -layout: content_right ---- -

    redis

    -

    Milestone: 2

    - -

Send events to a Redis database using RPUSH.

    - -

    For more information about redis, see http://redis.io/

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
output {
  redis {
    batch => ... # boolean (optional), default: false
    batch_events => ... # number (optional), default: 50
    batch_timeout => ... # number (optional), default: 5
    codec => ... # codec (optional), default: "plain"
    congestion_interval => ... # number (optional), default: 1
    congestion_threshold => ... # number (optional), default: 0
    data_type => ... # string, one of ["list", "channel"] (optional)
    db => ... # number (optional), default: 0
    host => ... # array (optional), default: ["127.0.0.1"]
    key => ... # string (optional)
    password => ... # password (optional)
    port => ... # number (optional), default: 6379
    reconnect_interval => ... # number (optional), default: 1
    shuffle_hosts => ... # boolean (optional), default: true
    timeout => ... # number (optional), default: 5
  }
}
    - -

    Details

    - -

    - - batch - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

Set to true if you want redis to batch up values and send one RPUSH command instead of one command per value to push on the list. Note that this only works with data_type="list" mode right now.

    - -

If true, we send an RPUSH every "batch_events" events or "batch_timeout" seconds (whichever comes first). Only supported for the list redis data_type.
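A sketch of a batched list configuration under these settings might look like this (key name and thresholds are illustrative):

```
output {
  redis {
    host => ["127.0.0.1"]
    data_type => "list"
    key => "logstash"      # illustrative list name
    batch => true
    batch_events => 100    # flush after 100 queued events...
    batch_timeout => 2     # ...or after 2 seconds, whichever comes first
  }
}
```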

    - -

    - - batch_events - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 50
    • -
    - -

    If batch is set to true, the number of events we queue up for an RPUSH.

    - -

    - - batch_timeout - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 5
    • -
    - -

If batch is set to true, the maximum amount of time between RPUSH commands when there are pending events to flush.

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - congestion_interval - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 1
    • -
    - -

How often to check for congestion; defaults to 1 second. Zero means check on every event.

    - -

    - - congestion_threshold - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 0
    • -
    - -

If the redis data_type is list and the list holds more than congestion_threshold items, block until consumers reduce the congestion; otherwise, if there are no consumers, redis will run out of memory unless it was configured with OOM protection. Even with OOM protection, a single redis list can block all other users of redis, and redis CPU consumption becomes high as the list approaches the maximum allowed RAM size. The default value of 0 means this limit is disabled. Only supported for the list redis data_type.

    - -

    - - data_type - - -

    - -
      -
    • Value can be any of: "list", "channel"
    • -
    • There is no default value for this setting.
    • -
    - -

Either list or channel. If data_type is list, then we will RPUSH to the key. If data_type is channel, then we will PUBLISH to the key. TODO: set required true.

    - -

    - - db - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 0
    • -
    - -

    The redis database number.

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host - - -

    - -
      -
    • Value type is array
    • -
    • Default value is ["127.0.0.1"]
    • -
    - -

The hostname(s) of your redis server(s). Ports may be specified on any hostname, which will override the global port config.

    - -

    For example:

    - -
"127.0.0.1"
["127.0.0.1", "127.0.0.2"]
["127.0.0.1:6380", "127.0.0.1"]
    - -

    - - key - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

The name of a redis list or channel. Dynamic names are valid here, for example "logstash-%{type}". TODO: set required true.

    - -

    - - name - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is string
    • -
    • Default value is "default"
    • -
    - -

Name is used for logging in case there are multiple instances. TODO: delete.

    - -

    - - password - - -

    - -
      -
    • Value type is password
    • -
    • There is no default value for this setting.
    • -
    - -

    Password to authenticate with. There is no authentication by default.

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 6379
    • -
    - -

    The default port to connect on. Can be overridden on any hostname.

    - -

    - - queue - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

The name of the redis queue (we'll use RPUSH on this). Dynamic names are valid here, for example "logstash-%{type}". TODO: delete.

    - -

    - - reconnect_interval - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 1
    • -
    - -

    Interval for reconnecting to failed redis connections

    - -

    - - shuffle_hosts - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - -

    Shuffle the host list during logstash startup.

    - -

    - - tags - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - timeout - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 5
    • -
    - -

    Redis initial connection timeout in seconds.

    - -

    - - type - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/redis.rb diff --git a/docs/1.2.0.beta1/outputs/riak.html b/docs/1.2.0.beta1/outputs/riak.html deleted file mode 100644 index dff8a89c0..000000000 --- a/docs/1.2.0.beta1/outputs/riak.html +++ /dev/null @@ -1,265 +0,0 @@ ---- -title: logstash docs for outputs/riak -layout: content_right ---- -

    riak

    -

    Milestone: 1

    - -

Riak is a distributed k/v store from Basho. It's based on the Dynamo model.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
output {
  riak {
    bucket => ... # array (optional), default: ["logstash-%{+YYYY.MM.dd}"]
    bucket_props => ... # hash (optional)
    codec => ... # codec (optional), default: "plain"
    enable_search => ... # boolean (optional), default: false
    enable_ssl => ... # boolean (optional), default: false
    indices => ... # array (optional)
    key_name => ... # string (optional)
    nodes => ... # hash (optional), default: {"localhost"=>"8098"}
    proto => ... # string, one of ["http", "pb"] (optional), default: "http"
    ssl_opts => ... # hash (optional)
  }
}
    - -

    Details

    - -

    - - bucket - - -

    - -
      -
    • Value type is array
    • -
    • Default value is ["logstash-%{+YYYY.MM.dd}"]
    • -
    - -

The bucket name to write events to. Expansion is supported here, as values are passed through event.sprintf. Multiple buckets can be specified, but any bucket-specific settings defined apply to ALL the buckets.

    - -

    - - bucket_props - - -

    - -
      -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

Bucket properties (NYI). A Logstash hash of properties for the bucket, e.g. bucket_props => ["r", "one", "w", "one", "dw", "one"] or bucket_props => ["n_val", "3"]. Note that the Logstash config language cannot support hash or array values; properties will be passed as-is.

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - enable_search - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

Enable search on the bucket defined above.

    - -

    - - enable_ssl - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

Enable SSL.

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - indices - - -

    - -
      -
    • Value type is array
    • -
    • There is no default value for this setting.
    • -
    - -

An array of fields to add secondary indexes (2i) on, e.g. indices => ["source_host", "type"]. Off by default, as not everyone runs eLevelDB.

    - -

    - - key_name - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

The event key name; variables are valid here.

    - -

Choose this carefully. It's best to let Riak decide.

    - -

    - - nodes - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {"localhost"=>"8098"}
    • -
    - -

The nodes of your Riak cluster. This can be a single host, or a Logstash hash of node/port pairs, e.g. ["node1", "8098", "node2", "8098"].
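For example, a minimal sketch pointing at a two-node cluster; the node names are placeholders:

```
output {
  riak {
    # node/port pairs, flattened as the Logstash config language requires
    nodes => ["riak1.example.com", "8098", "riak2.example.com", "8098"]
    bucket => ["logstash-%{+YYYY.MM.dd}"]
  }
}
```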

    - -

    - - proto - - -

    - -
      -
    • Value can be any of: "http", "pb"
    • -
    • Default value is "http"
    • -
    - -

The protocol to use: HTTP or Protocol Buffers. Applies to ALL nodes listed above; no mixing and matching.

    - -

    - - ssl_opts - - -

    - -
      -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

SSL options for SSL connections, only applied if SSL is enabled. A Logstash hash that maps to the riak-client options described at https://github.com/basho/riak-ruby-client/wiki/Connecting-to-Riak. You'll likely want something like this: ssl_opts => ["pem", "/etc/riak.pem", "ca_path", "/usr/share/certificates"]. Per the riak client docs, the above sample options will turn on SSL_VERIFY_PEER.

    - -

    - - tags - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/riak.rb diff --git a/docs/1.2.0.beta1/outputs/riemann.html b/docs/1.2.0.beta1/outputs/riemann.html deleted file mode 100644 index 3f566c4d2..000000000 --- a/docs/1.2.0.beta1/outputs/riemann.html +++ /dev/null @@ -1,225 +0,0 @@ ---- -title: logstash docs for outputs/riemann -layout: content_right ---- -

    riemann

    -

    Milestone: 1

    - -

    Riemann is a network event stream processing system.

    - -

While Riemann is conceptually very similar to Logstash, it offers much more as a monitoring system replacement.

    - -

Riemann is used in Logstash much like statsd or other metric-related outputs.

    - -

    You can learn about Riemann here:

    - - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
output {
  riemann {
    codec => ... # codec (optional), default: "plain"
    debug => ... # boolean (optional), default: false
    host => ... # string (optional), default: "localhost"
    port => ... # number (optional), default: 5555
    protocol => ... # string, one of ["tcp", "udp"] (optional), default: "tcp"
    riemann_event => ... # hash (optional)
    sender => ... # string (optional), default: "%{host}"
  }
}
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - debug - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Enable debugging output?

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
• DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "localhost"
    • -
    - -

    The address of the Riemann server.

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 5555
    • -
    - -

    The port to connect to on your Riemann server.

    - -

    - - protocol - - -

    - -
      -
    • Value can be any of: "tcp", "udp"
    • -
    • Default value is "tcp"
    • -
    - -

The protocol to use. UDP is non-blocking; TCP is blocking.

    - -

Logstash's default output behaviour is to never lose events; as such, we use TCP as the default here.

    - -

    - - riemann_event - - -

    - -
      -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

A hash used to set Riemann event fields (http://aphyr.github.com/riemann/concepts.html).

    - -

The following event fields are supported: description, state, metric, ttl, service.

    - -

    Example:

    - -
riemann {
    riemann_event => [
        "metric", "%{metric}",
        "service", "%{service}"
    ]
}
    - -

metric and ttl values will be coerced to a floating point value. Values which cannot be coerced will be set to zero (0.0).

    - -

description, by default, will be set to the event message, but can be overridden here.

    - -

    - - sender - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{host}"
    • -
    - -

    The name of the sender. This sets the host value in the Riemann event.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/riemann.rb diff --git a/docs/1.2.0.beta1/outputs/s3.html b/docs/1.2.0.beta1/outputs/s3.html deleted file mode 100644 index ea930ac71..000000000 --- a/docs/1.2.0.beta1/outputs/s3.html +++ /dev/null @@ -1,342 +0,0 @@ ---- -title: logstash docs for outputs/s3 -layout: content_right ---- -

    s3

    -

    Milestone: 1

    - -

    TODO integrate aws_config in the future.
    INFORMATION:
    This plugin stores Logstash events in Amazon Simple Storage Service (Amazon S3). To use it you need AWS credentials and an S3 bucket. Make sure you have permission to write to the bucket, and run Logstash with sufficient privileges to establish the connection.
    The S3 output writes temporary files to "/opt/logstash/S3temp/"; you can change this path at the start of the register method. These files have a special name, for example:
    ls.s3.ip-10-228-27-95.2013-04-18T10.00.taghello.part0.txt
    "ls.s3" identifies the Logstash S3 plugin.
    "ip-10-228-27-95" is the machine's IP address, useful when several Logstash instances write to the same bucket.
    "2013-04-18T10.00" is the time window, present whenever you specify time_file.
    "taghello" is the event's tag, so events with the same tag are collected together.
    "part0" means that, when size_file is set, additional parts are generated whenever the file size exceeds size_file.

    - -
          When a file is full it is pushed to the bucket and deleted from the temporary directory. 
-      If a file is empty it is not pushed, only deleted.
-
    - -

    This plugin can restore the temporary files left by a previous run if something crashes.
    INFORMATION ABOUT THE CLASS:
    The class is commented as well as I could manage. There is still much to improve; if you want some points to develop, here is a list:
    TODO Integrate aws_config in the future.
    TODO Find a way to push all remaining files when Logstash closes the session.
    TODO Interpolate @field values into the file path.
    TODO Permanent connection or on demand? For now on demand, but that isn't a good implementation.

    - -
     Use a loop or a thread to retry the connection before timing out and signaling an error.
    -
    - -

    TODO If you have bug reports or helpful advice, contact me, but remember that this code is as much yours as it is mine,

    - -
     try to work on it if you want :)
    -
    - -

    A fair question: why the ls.s3.... naming scheme? The answer is simple: S3 does not allow special characters such as "/" or "[,]" in keys, even though they are very useful in date formats; if you use them, S3 can no longer resolve the key. For example, "/" in S3 denotes a subfolder within a bucket.
    USAGE:
    This is an example of a Logstash config:
    output {
      s3 {

    - -
     access_key_id => "crazy_key"             (required)
    - secret_access_key => "monkey_access_key" (required)
    - endpoint_region => "eu-west-1"           (required)
    - bucket => "boss_please_open_your_bucket" (required)         
    - size_file => 2048                        (optional)
    - time_file => 5                           (optional)
    - format => "plain"                        (optional) 
    -
    - -

    }
    }
    Let's analyze this:
    access_key_id => "crazy_key"
    Amazon will give you this key to use their service when you buy or trial it. (Not very open source, anyway.)
    secret_access_key => "monkey_access_key"
    Amazon will likewise give you the secret access key. (Not very open source, anyway.)
    endpoint_region => "eu-west-1"
    When you sign up with Amazon, you should know in which region your services run.
    bucket => "boss_please_open_your_bucket"
    Make sure you know the bucket name and have permission to write to it.
    size_file => 2048
    The size, in KB, a file may reach in the temporary directory before it is pushed to the bucket. Useful if you have a small server with little disk space and don't want unnecessary temporary log files to pile up.
    time_file => 5
    The time, in minutes, before files are pushed to the bucket. Useful if you want to push files at a fixed interval.
    format => "plain"
    The format of the events you want to store in the files.
    LET'S ROCK AND ROLL ON THE CODE!

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  s3 {
    -    access_key_id => ... # string (optional)
    -    aws_credentials_file => ... #  (optional)
    -    bucket => ... # string (optional)
    -    codec => ... # codec (optional), default: "plain"
    -    endpoint_region => ... # string, one of ["us_east_1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"] (optional), default: "us_east_1"
    -    format => ... # string, one of ["json", "plain", "nil"] (optional), default: "plain"
    -    region => ... #  (optional)
    -    restore => ... # boolean (optional), default: false
    -    secret_access_key => ... # string (optional)
    -    size_file => ... # number (optional), default: 0
    -    time_file => ... # number (optional), default: 0
    -    use_ssl => ... #  (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - access_key_id - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    include LogStash::PluginMixins::AwsConfig
    The AWS access key.

    - -

    - - aws_credentials_file - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Path to YAML file containing a hash of AWS credentials.
    -This file will only be loaded if access_key_id and secret_access_key aren't set. The contents of the file should look like this:

    - -
    :access_key_id: "12345"
    -:secret_access_key: "54321"
    -
    - -

    - - bucket - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    S3 bucket

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - endpoint_region - - -

    - -
      -
    • Value can be any of: "us_east_1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"
    • -
    • Default value is "us_east_1"
    • -
    - -

    Aws endpoint_region

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - format - - -

    - -
      -
    • Value can be any of: "json", "plain", "nil"
    • -
    • Default value is "plain"
    • -
    - -

    The event format you want to store in files. Defaults to plain text.

    - -

    - - region - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The AWS Region

    - -

    - - restore - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - - - -

    - - secret_access_key - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Aws secretaccesskey

    - -

    - - size_file - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 0
    • -
    - -

    Set the file size in KB. When a file on the bucket would exceed size_file, its contents are stored in two or more files. If you use tags, a separate size-limited file is generated for every tag.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - time_file - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 0
    • -
    - -

    Set the time, in minutes, after which the current time section of the bucket is closed. If you also define size_file, you get multiple files per section and tag. A value of 0 keeps listening indefinitely; beware of setting both time_file and size_file to 0, because then the file is never pushed to the bucket: for now the plugin only pushes such files when Logstash restarts.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - -

    - - use_ssl - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Should we require (true) or disable (false) using SSL for communicating with the AWS API?
-The AWS SDK for Ruby defaults to SSL, so we preserve that.

    - - -
    - -This is documentation from lib/logstash/outputs/s3.rb diff --git a/docs/1.2.0.beta1/outputs/sns.html b/docs/1.2.0.beta1/outputs/sns.html deleted file mode 100644 index a8fd8ed73..000000000 --- a/docs/1.2.0.beta1/outputs/sns.html +++ /dev/null @@ -1,251 +0,0 @@ ---- -title: logstash docs for outputs/sns -layout: content_right ---- -

    sns

    -

    Milestone: 1

    - -

    SNS output.

    - -

    Send events to Amazon's Simple Notification Service, a hosted pub/sub framework. It supports subscribers of type email, HTTP/S, SMS, and SQS.

    - -

    For further documentation about the service see:

    - -

    http://docs.amazonwebservices.com/sns/latest/api/

    - -

    This plugin looks for the following fields on events it receives:

    - -
      -
    • sns - If no ARN is found in the configuration file, this will be used as the ARN to publish.
    • -
    • sns_subject - The subject line that should be used. Optional. "%{source}" will be used if not present, truncated at MAX_SUBJECT_SIZE_IN_CHARACTERS.
    • -
    • sns_message - The message that should be sent. Optional. The event serialized as JSON will be used if not present, with @message truncated so that the length of the JSON fits in MAX_MESSAGE_SIZE_IN_BYTES.
    • -
    - - - -
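
    Putting the fields above together, here is a minimal sketch of an SNS output; the ARN and region values are hypothetical placeholders, not taken from this document:

```
output {
  sns {
    # Hypothetical topic ARN; replace with your own
    arn => "arn:aws:sns:us-east-1:123456789012:logstash-events"
    region => "us-east-1"
  }
}
```

    With no per-event sns field, every event is published to the configured ARN; an event's sns_subject and sns_message fields override the defaults described above.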

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  sns {
    -    access_key_id => ... # string (optional)
    -    arn => ... # string (optional)
    -    aws_credentials_file => ... # string (optional)
    -    codec => ... # codec (optional), default: "plain"
    -    format => ... # string, one of ["json", "plain"] (optional), default: "plain"
    -    publish_boot_message_arn => ... # string (optional)
    -    region => ... # string, one of ["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"] (optional), default: "us-east-1"
    -    secret_access_key => ... # string (optional)
    -    use_ssl => ... # boolean (optional), default: true
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - access_key_id - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order...
    -1. Static configuration, using access_key_id and secret_access_key params in logstash plugin config
    -2. External credentials file specified by aws_credentials_file
    -3. Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
    -4. Environment variables AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY
    -5. IAM Instance Profile (available when running inside EC2)

    - -

    - - arn - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    SNS topic ARN.

    - -

    - - aws_credentials_file - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Path to YAML file containing a hash of AWS credentials.
    -This file will only be loaded if access_key_id and secret_access_key aren't set. The contents of the file should look like this:

    - -
    :access_key_id: "12345"
    -:secret_access_key: "54321"
    -
    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - format - - -

    - -
      -
    • Value can be any of: "json", "plain"
    • -
    • Default value is "plain"
    • -
    - -

    Message format. Defaults to plain text.

    - -

    - - publish_boot_message_arn - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    When an ARN for an SNS topic is specified here, the message "Logstash successfully booted" will be sent to it when this plugin is registered.

    - -

    Example: arn:aws:sns:us-east-1:770975001275:logstash-testing

    - -

    - - region - - -

    - -
      -
    • Value can be any of: "us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"
    • -
    • Default value is "us-east-1"
    • -
    - -

    The AWS Region

    - -

    - - secret_access_key - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The AWS Secret Access Key

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - -

    - - use_ssl - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - -

    Should we require (true) or disable (false) using SSL for communicating with the AWS API?
-The AWS SDK for Ruby defaults to SSL, so we preserve that.

    - - -
    - -This is documentation from lib/logstash/outputs/sns.rb diff --git a/docs/1.2.0.beta1/outputs/sqs.html b/docs/1.2.0.beta1/outputs/sqs.html deleted file mode 100644 index bde8ebbf2..000000000 --- a/docs/1.2.0.beta1/outputs/sqs.html +++ /dev/null @@ -1,299 +0,0 @@ ---- -title: logstash docs for outputs/sqs -layout: content_right ---- -

    sqs

    -

    Milestone: 1

    - -

    Push events to an Amazon Web Services Simple Queue Service (SQS) queue.

    - -

    SQS is a simple, scalable queue system that is part of the -Amazon Web Services suite of tools.

    - -

    Although SQS is similar to other queuing systems like AMQP, it -uses a custom API and requires that you have an AWS account. -See http://aws.amazon.com/sqs/ for more details on how SQS works, -what the pricing schedule looks like and how to setup a queue.

    - -

    To use this plugin, you must:

    - -
      -
    • Have an AWS account
    • -
    • Setup an SQS queue
    • -
    • Create an identity that has access to publish messages to the queue.
    • -
    - - -

    The "consumer" identity must have the following permissions on the queue:

    - -
      -
    • sqs:ChangeMessageVisibility
    • -
    • sqs:ChangeMessageVisibilityBatch
    • -
    • sqs:GetQueueAttributes
    • -
    • sqs:GetQueueUrl
    • -
    • sqs:ListQueues
    • -
    • sqs:SendMessage
    • -
    • sqs:SendMessageBatch
    • -
    - - -

    Typically, you should set up an IAM policy, create a user, and apply the IAM policy to the user. A sample policy is as follows:

    - -
     {
    -   "Statement": [
    -     {
    -       "Sid": "Stmt1347986764948",
    -       "Action": [
    -         "sqs:ChangeMessageVisibility",
    -         "sqs:ChangeMessageVisibilityBatch",
    -         "sqs:DeleteMessage",
    -         "sqs:DeleteMessageBatch",
    -         "sqs:GetQueueAttributes",
    -         "sqs:GetQueueUrl",
    -         "sqs:ListQueues",
    -         "sqs:ReceiveMessage"
    -       ],
    -       "Effect": "Allow",
    -       "Resource": [
    -         "arn:aws:sqs:us-east-1:200850199751:Logstash"
    -       ]
    -     }
    -   ]
    - }
    -
    - -

    See http://aws.amazon.com/iam/ for more details on setting up AWS identities.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  sqs {
    -    access_key_id => ... # string (optional)
    -    aws_credentials_file => ... # string (optional)
    -    batch => ... # boolean (optional), default: true
    -    batch_events => ... # number (optional), default: 10
    -    batch_timeout => ... # number (optional), default: 5
    -    codec => ... # codec (optional), default: "plain"
    -    queue => ... # string (required)
    -    region => ... # string, one of ["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"] (optional), default: "us-east-1"
    -    secret_access_key => ... # string (optional)
    -    use_ssl => ... # boolean (optional), default: true
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - access_key_id - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order...
    -1. Static configuration, using access_key_id and secret_access_key params in logstash plugin config
    -2. External credentials file specified by aws_credentials_file
    -3. Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
    -4. Environment variables AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY
    -5. IAM Instance Profile (available when running inside EC2)

    - -

    - - aws_credentials_file - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Path to YAML file containing a hash of AWS credentials.
    -This file will only be loaded if access_key_id and secret_access_key aren't set. The contents of the file should look like this:

    - -
    :access_key_id: "12345"
    -:secret_access_key: "54321"
    -
    - -

    - - batch - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - -

    Set to true if you want to send messages to SQS in batches using batch_send from the Amazon SDK.

    - -

    - - batch_events - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 10
    • -
    - -

    If batch is set to true, the number of events we queue up for a batch_send.

    - -

    - - batch_timeout - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 5
    • -
    - -

    If batch is set to true, the maximum amount of time between batch_send commands when there are pending events to flush.

    - -
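
    Taken together, the batch settings above can be sketched as follows; the queue name is a hypothetical example:

```
output {
  sqs {
    queue => "Logstash"    # name only, not the URL or ARN
    batch => true          # use batch_send from the AWS SDK
    batch_events => 10     # flush once 10 events are queued...
    batch_timeout => 5     # ...or once batch_timeout elapses with pending events
  }
}
```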

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - queue (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    Name of SQS queue to push messages into. Note that this is just the name of the queue, not the URL or ARN.

    - -

    - - region - - -

    - -
      -
    • Value can be any of: "us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"
    • -
    • Default value is "us-east-1"
    • -
    - -

    The AWS Region

    - -

    - - secret_access_key - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The AWS Secret Access Key

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - -

    - - use_ssl - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is true
    • -
    - -

    Should we require (true) or disable (false) using SSL for communicating with the AWS API?
-The AWS SDK for Ruby defaults to SSL, so we preserve that.

    - - -
    - -This is documentation from lib/logstash/outputs/sqs.rb diff --git a/docs/1.2.0.beta1/outputs/statsd.html b/docs/1.2.0.beta1/outputs/statsd.html deleted file mode 100644 index f489d96ab..000000000 --- a/docs/1.2.0.beta1/outputs/statsd.html +++ /dev/null @@ -1,299 +0,0 @@ ---- -title: logstash docs for outputs/statsd -layout: content_right ---- -

    statsd

    -

    Milestone: 2

    - -

    statsd is a server for aggregating counters and other metrics to ship to -graphite.

    - -

    The most basic coverage of this plugin is that the 'namespace', 'sender', and -'metric' names are combined into the full metric path like so:

    - -
    namespace.sender.metric
    -
    - -

    The general idea is that you send statsd count or latency data, and every few seconds it emits the aggregated values (average, max, stddev, etc.) to graphite.

    - -

    You can learn about statsd here:

    - - - - -

    A simple example usage of this is to count HTTP hits by response code; to learn more about that, check out the log metrics tutorial.

    - - -
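
    Tying the pieces above together, here is a sketch of counting HTTP hits by response code; the apache.response metric name and the %{response} event field are illustrative assumptions, not taken from this document:

```
output {
  statsd {
    host => "localhost"
    namespace => "logstash"
    # With defaults, increments e.g. logstash.<sender>.apache.response.200
    increment => [ "apache.response.%{response}" ]
  }
}
```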

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  statsd {
    -    codec => ... # codec (optional), default: "plain"
    -    count => ... # hash (optional), default: {}
    -    debug => ... # boolean (optional), default: false
    -    decrement => ... # array (optional), default: []
    -    gauge => ... # hash (optional), default: {}
    -    host => ... # string (optional), default: "localhost"
    -    increment => ... # array (optional), default: []
    -    namespace => ... # string (optional), default: "logstash"
    -    port => ... # number (optional), default: 8125
    -    sample_rate => ... # number (optional), default: 1
    -    sender => ... # string (optional), default: "%{source}"
    -    set => ... # hash (optional), default: {}
    -    timing => ... # hash (optional), default: {}
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - count - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {}
    • -
    - -

    A count metric. metric_name => count as hash

    - -

    - - debug - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    The final metric sent to statsd will look like the following (assuming defaults): logstash.sender.file_name

    - -

    Enable debugging output?

    - -

    - - decrement - - -

    - -
      -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    A decrement metric. metric names as array.

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - gauge - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {}
    • -
    - -

    A gauge metric. metric_name => gauge as hash

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "localhost"
    • -
    - -

    The address of the Statsd server.

    - -

    - - increment - - -

    - -
      -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    An increment metric. metric names as array.

    - -

    - - namespace - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "logstash"
    • -
    - -

    The statsd namespace to use for this metric

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 8125
    • -
    - -

    The port to connect to on your statsd server.

    - -

    - - sample_rate - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 1
    • -
    - -

    The sample rate for the metric

    - -

    - - sender - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{source}"
    • -
    - -

    The name of the sender. Dots will be replaced with underscores.

    - -

    - - set - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {}
    • -
    - -

    A set metric. metric_name => string to append as hash

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - timing - - -

    - -
      -
    • Value type is hash
    • -
    • Default value is {}
    • -
    - -

    A timing metric. metric_name => duration as hash

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/statsd.rb diff --git a/docs/1.2.0.beta1/outputs/stdout.html b/docs/1.2.0.beta1/outputs/stdout.html deleted file mode 100644 index aef3fb7a3..000000000 --- a/docs/1.2.0.beta1/outputs/stdout.html +++ /dev/null @@ -1,137 +0,0 @@ ---- -title: logstash docs for outputs/stdout -layout: content_right ---- -

    stdout

    -

    Milestone: 3

    - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  stdout {
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    message => ... # string (optional), default: "%{+yyyy-MM-dd'T'HH:mm:ss.SSSZ} %{host}: %{message}"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - debug - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Enable debugging. Tries to pretty-print the entire event object.

    - -

    - - debug_format - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value can be any of: "ruby", "dots"
    • -
    • Default value is "ruby"
    • -
    - -

    Debug output format: ruby (default), dots.

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - message - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{+yyyy-MM-dd'T'HH:mm:ss.SSSZ} %{host}: %{message}"
    • -
    - -

    The message to emit to stdout.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/stdout.rb diff --git a/docs/1.2.0.beta1/outputs/stomp.html b/docs/1.2.0.beta1/outputs/stomp.html deleted file mode 100644 index ecffa5ab9..000000000 --- a/docs/1.2.0.beta1/outputs/stomp.html +++ /dev/null @@ -1,200 +0,0 @@ ---- -title: logstash docs for outputs/stomp -layout: content_right ---- -

    stomp

    -

    Milestone: 2

    - - - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  stomp {
    -    codec => ... # codec (optional), default: "plain"
    -    debug => ... # boolean (optional), default: false
    -    destination => ... # string (required)
    -    host => ... # string (required)
    -    password => ... # password (optional), default: ""
    -    port => ... # number (optional), default: 61613
    -    user => ... # string (optional), default: ""
    -    vhost => ... # string (optional), default: nil
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - debug - - -

    - -
      -
    • Value type is boolean
    • -
    • Default value is false
    • -
    - -

    Enable debugging output?

    - -

    - - destination (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The destination to write events to. Supports string expansion, meaning -%{foo} values will expand to the field value.

    - -

    Example: "/topic/logstash"
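    A sketch of a complete stomp output; the broker address and credentials are placeholders, and the destination shows per-type routing via string expansion:

    ```
    output {
      stomp {
        host => "stomp.example.com"           # hypothetical broker address
        port => 61613                         # the documented default port
        destination => "/topic/logstash-%{type}"
        user => "logstash"                    # placeholder credentials
        password => "secret"
      }
    }
    ```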

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The address of the STOMP server.

    - -

    - - password - - -

    - -
      -
    • Value type is password
    • -
    • Default value is ""
    • -
    - -

    The password to authenticate with.

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 61613
    • -
    - -

    The port to connect to on your STOMP server.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - -

    - - user - - -

    - -
      -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The username to authenticate with.

    - -

    - - vhost - - -

    - -
      -
    • Value type is string
    • -
    • Default value is nil
    • -
    - -

    The vhost to use

    - - -
    - -This is documentation from lib/logstash/outputs/stomp.rb diff --git a/docs/1.2.0.beta1/outputs/syslog.html b/docs/1.2.0.beta1/outputs/syslog.html deleted file mode 100644 index 436af3967..000000000 --- a/docs/1.2.0.beta1/outputs/syslog.html +++ /dev/null @@ -1,260 +0,0 @@ ---- -title: logstash docs for outputs/syslog -layout: content_right ---- -

    syslog

    -

    Milestone: 1

    - -

    Send events to a syslog server.

    - -

    You can send messages compliant with RFC 3164 or RFC 5424. -Both UDP and TCP syslog transports are supported.
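    For example, to send RFC 5424 messages over TCP (the server address and appname are placeholders; facility, severity, host, and port are the required settings):

    ```
    output {
      syslog {
        host => "syslog.example.com"   # placeholder server address
        port => 514
        protocol => "tcp"              # default is "udp"
        rfc => "rfc5424"               # default is "rfc3164"
        facility => "daemon"
        severity => "error"
        appname => "myapp"             # hypothetical application name
      }
    }
    ```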

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  syslog {
    -    appname => ... # string (optional), default: "LOGSTASH"
    -    codec => ... # codec (optional), default: "plain"
    -    facility => ... # string, one of ["kernel", "user-level", "mail", "daemon", "security/authorization", "syslogd", "line printer", "network news", "uucp", "clock", "security/authorization", "ftp", "ntp", "log audit", "log alert", "clock", "local0", "local1", "local2", "local3", "local4", "local5", "local6", "local7"] (required)
    -    host => ... # string (required)
    -    msgid => ... # string (optional), default: "-"
    -    port => ... # number (required)
    -    procid => ... # string (optional), default: "-"
    -    protocol => ... # string, one of ["tcp", "udp"] (optional), default: "udp"
    -    rfc => ... # string, one of ["rfc3164", "rfc5424"] (optional), default: "rfc3164"
    -    severity => ... # string, one of ["emergency", "alert", "critical", "error", "warning", "notice", "informational", "debug"] (required)
    -    sourcehost => ... # string (optional), default: "%{source}"
    -    timestamp => ... # string (optional), default: "%{@timestamp}"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - appname - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "LOGSTASH"
    • -
    - -

    application name for syslog message

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - facility (required setting) - - -

    - -
      -
    • Value can be any of: "kernel", "user-level", "mail", "daemon", "security/authorization", "syslogd", "line printer", "network news", "uucp", "clock", "security/authorization", "ftp", "ntp", "log audit", "log alert", "clock", "local0", "local1", "local2", "local3", "local4", "local5", "local6", "local7"
    • -
    • There is no default value for this setting.
    • -
    - -

    facility label for syslog message

    - -

    - - host (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    syslog server address to connect to

    - -

    - - msgid - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "-"
    • -
    - -

    message id for syslog message

    - -

    - - port (required setting) - - -

    - -
      -
    • Value type is number
    • -
    • There is no default value for this setting.
    • -
    - -

    syslog server port to connect to

    - -

    - - procid - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "-"
    • -
    - -

    process id for syslog message

    - -

    - - protocol - - -

    - -
      -
    • Value can be any of: "tcp", "udp"
    • -
    • Default value is "udp"
    • -
    - -

    Syslog server protocol. You can choose between udp and tcp.

    - -

    - - rfc - - -

    - -
      -
    • Value can be any of: "rfc3164", "rfc5424"
    • -
    • Default value is "rfc3164"
    • -
    - -

    Syslog message format. You can choose between rfc3164 and rfc5424.

    - -

    - - severity (required setting) - - -

    - -
      -
    • Value can be any of: "emergency", "alert", "critical", "error", "warning", "notice", "informational", "debug"
    • -
    • There is no default value for this setting.
    • -
    - -

    severity label for syslog message

    - -

    - - sourcehost - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{source}"
    • -
    - -

    source host for syslog message

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - timestamp - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "%{@timestamp}"
    • -
    - -

    timestamp for syslog message

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/syslog.rb diff --git a/docs/1.2.0.beta1/outputs/tcp.html b/docs/1.2.0.beta1/outputs/tcp.html deleted file mode 100644 index 54f91d15c..000000000 --- a/docs/1.2.0.beta1/outputs/tcp.html +++ /dev/null @@ -1,180 +0,0 @@ ---- -title: logstash docs for outputs/tcp -layout: content_right ---- -

    tcp

    -

    Milestone: 2

    - -

    Write events over a TCP socket.

    - -

    Each event's JSON representation is separated by a newline.

    - -

    Can either accept connections from clients or connect to a server, -depending on mode.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  tcp {
    -    codec => ... # codec (optional), default: "plain"
    -    host => ... # string (required)
    -    mode => ... # string, one of ["server", "client"] (optional), default: "client"
    -    port => ... # number (required)
    -    reconnect_interval => ... # number (optional), default: 10
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    When mode is server, the address to listen on. -When mode is client, the address to connect to.

    - -

    - - message_format - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The format to use when writing events to the file. This value -supports any string and can include %{name} and other dynamic -strings.

    - -

    If this setting is omitted, the full json representation of the -event will be written as a single line.

    - -

    - - mode - - -

    - -
      -
    • Value can be any of: "server", "client"
    • -
    • Default value is "client"
    • -
    - -

    Mode to operate in. server listens for client connections, -client connects to a server.
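    For instance, the default client mode connects out to a remote collector (the address is a placeholder); switching mode to "server" would instead listen on that host/port for incoming connections:

    ```
    output {
      tcp {
        host => "collector.example.com"  # placeholder collector address
        port => 9999
        mode => "client"                 # the default; "server" listens instead
      }
    }
    ```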

    - -

    - - port (required setting) - - -

    - -
      -
    • Value type is number
    • -
    • There is no default value for this setting.
    • -
    - -

    When mode is server, the port to listen on. -When mode is client, the port to connect to.

    - -

    - - reconnect_interval - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 10
    • -
    - -

    Interval in seconds to wait before retrying a failed connection.

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/tcp.rb diff --git a/docs/1.2.0.beta1/outputs/udp.html b/docs/1.2.0.beta1/outputs/udp.html deleted file mode 100644 index 09e20980d..000000000 --- a/docs/1.2.0.beta1/outputs/udp.html +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: logstash docs for outputs/udp -layout: content_right ---- -

    udp

    -

    Milestone: 1

    - -

    Send events over UDP

    - -

    Keep in mind that UDP offers no delivery guarantees, so messages may be lost.
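    A minimal sketch, with a placeholder destination (both settings are required):

    ```
    output {
      udp {
        host => "metrics.example.com"  # placeholder receiver address
        port => 9125
      }
    }
    ```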

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  udp {
    -    codec => ... # codec (optional), default: "plain"
    -    host => ... # string (required)
    -    port => ... # number (required)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The address to send messages to

    - -

    - - port (required setting) - - -

    - -
      -
    • Value type is number
    • -
    • There is no default value for this setting.
    • -
    - -

    The port to send messages on

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/udp.rb diff --git a/docs/1.2.0.beta1/outputs/websocket.html b/docs/1.2.0.beta1/outputs/websocket.html deleted file mode 100644 index f4a408073..000000000 --- a/docs/1.2.0.beta1/outputs/websocket.html +++ /dev/null @@ -1,127 +0,0 @@ ---- -title: logstash docs for outputs/websocket -layout: content_right ---- -

    websocket

    -

    Milestone: 1

    - -

    This output runs a websocket server and publishes any -messages to all connected websocket clients.

    - -

    You can connect to it with ws://<host>:<port>/

    - -

    If no clients are connected, any messages received are ignored.
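    A minimal server configuration using the documented defaults; clients would then connect to ws://<host>:3232/:

    ```
    output {
      websocket {
        host => "0.0.0.0"  # listen on all interfaces (the default)
        port => 3232       # the default port
      }
    }
    ```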

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  websocket {
    -    codec => ... # codec (optional), default: "plain"
    -    host => ... # string (optional), default: "0.0.0.0"
    -    port => ... # number (optional), default: 3232
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "0.0.0.0"
    • -
    - -

    The address to serve websocket data from

    - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 3232
    • -
    - -

    The port to serve websocket data from

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/websocket.rb diff --git a/docs/1.2.0.beta1/outputs/xmpp.html b/docs/1.2.0.beta1/outputs/xmpp.html deleted file mode 100644 index ee8eb3773..000000000 --- a/docs/1.2.0.beta1/outputs/xmpp.html +++ /dev/null @@ -1,187 +0,0 @@ ---- -title: logstash docs for outputs/xmpp -layout: content_right ---- -

    xmpp

    -

    Milestone: 2

    - -

    This output allows you to ship events over XMPP/Jabber.

    - -

    This plugin can be used for posting events to humans over XMPP, or you can -use it for PubSub or general message passing for logstash to logstash.

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  xmpp {
    -    codec => ... # codec (optional), default: "plain"
    -    host => ... # string (optional)
    -    message => ... # string (required)
    -    password => ... # password (required)
    -    rooms => ... # array (optional)
    -    user => ... # string (required)
    -    users => ... # array (optional)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The xmpp server to connect to. This is optional. If you omit this setting, -the host on the user/identity is used. (foo.com for user@foo.com)

    - -

    - - message (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The message to send. This supports dynamic strings like %{source}

    - -

    - - password (required setting) - - -

    - -
      -
    • Value type is password
    • -
    • There is no default value for this setting.
    • -
    - -

    The xmpp password for the user/identity.

    - -

    - - rooms - - -

    - -
      -
    • Value type is array
    • -
    • There is no default value for this setting.
    • -
    - -

    If MUC (multi-user chat) is required, give the name of the room that -you want to join: room@conference.domain/nick
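    A sketch posting alerts to a chat room; the identity, password, and room are placeholders, and the message uses dynamic string expansion as documented:

    ```
    output {
      xmpp {
        user => "logstash@example.com"               # placeholder identity
        password => "secret"
        message => "alert on %{host}: %{message}"
        rooms => ["ops@conference.example.com/logstash"]  # hypothetical MUC room
      }
    }
    ```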

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - -

    - - user (required setting) - - -

    - -
      -
    • Value type is string
    • -
    • There is no default value for this setting.
    • -
    - -

    The user or resource ID, like foo@example.com.

    - -

    - - users - - -

    - -
      -
    • Value type is array
    • -
    • There is no default value for this setting.
    • -
    - -

    The users to send messages to

    - - -
    - -This is documentation from lib/logstash/outputs/xmpp.rb diff --git a/docs/1.2.0.beta1/outputs/zabbix.html b/docs/1.2.0.beta1/outputs/zabbix.html deleted file mode 100644 index 00b318058..000000000 --- a/docs/1.2.0.beta1/outputs/zabbix.html +++ /dev/null @@ -1,186 +0,0 @@ ---- -title: logstash docs for outputs/zabbix -layout: content_right ---- -

    zabbix

    -

    Milestone: 2

    - -

    The zabbix output is used for sending item data to zabbix via the -zabbix_sender executable.

    - -

    For this output to work, your event must have the following fields:

    - -
      -
    • "zabbix_host" (the host configured in Zabbix)
    • -
    • "zabbix_item" (the item key on the host in Zabbix)
    • -
    - - -

    In Zabbix, create your host with the same name (spaces in the host name are -not supported) and create your item with the specified key as a -Zabbix Trapper item.

    - -

    The easiest way to use this output is with the grep filter. -Presumably, you only want certain events matching a given pattern -to send events to zabbix, so use grep to match and also to add the required -fields.

    - -
     filter {
    -   grep {
    -     type => "linux-syslog"
    -     match => [ "@message", "(error|ERROR|CRITICAL)" ]
    -     add_tag => [ "zabbix-sender" ]
    -     add_field => [
    -       "zabbix_host", "%{source_host}",
    -       "zabbix_item", "item.key"
    -     ]
    -  }
    -}
    -
    -output {
    -  zabbix {
    -    # only process events with this tag
    -    tags => "zabbix-sender"
    -
    -    # specify the hostname or ip of your zabbix server
    -    # (defaults to localhost)
    -    host => "localhost"
    -
    -    # specify the port to connect to (default 10051)
    -    port => "10051"
    -
    -    # specify the path to zabbix_sender
    -    # (defaults to "/usr/local/bin/zabbix_sender")
    -    zabbix_sender => "/usr/local/bin/zabbix_sender"
    -  }
    -}
    -
    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  zabbix {
    -    codec => ... # codec (optional), default: "plain"
    -    host => ... # string (optional), default: "localhost"
    -    port => ... # number (optional), default: 10051
    -    zabbix_sender => ... # a valid filesystem path (optional), default: "/usr/local/bin/zabbix_sender"
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - host - - -

    - -
      -
    • Value type is string
    • -
    • Default value is "localhost"
    • -
    - - - -

    - - port - - -

    - -
      -
    • Value type is number
    • -
    • Default value is 10051
    • -
    - - - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - -

    - - zabbix_sender - - -

    - -
      -
    • Value type is path
    • -
    • Default value is "/usr/local/bin/zabbix_sender"
    • -
    - - - - -
    - -This is documentation from lib/logstash/outputs/zabbix.rb diff --git a/docs/1.2.0.beta1/outputs/zeromq.html b/docs/1.2.0.beta1/outputs/zeromq.html deleted file mode 100644 index 8e7f9e02b..000000000 --- a/docs/1.2.0.beta1/outputs/zeromq.html +++ /dev/null @@ -1,204 +0,0 @@ ---- -title: logstash docs for outputs/zeromq -layout: content_right ---- -

    zeromq

    -

    Milestone: 2

    - -

    Write events to a 0MQ PUB socket.

    - -

    You need to have the 0mq 2.1.x library installed to be able to use -this output plugin.

    - -

    The default settings will create a publisher connecting to a subscriber -bound to tcp://127.0.0.1:2120

    - - -

    Synopsis

    - -This is what it might look like in your config file: - -
    output {
    -  zeromq {
    -    address => ... # array (optional), default: ["tcp://127.0.0.1:2120"]
    -    codec => ... # codec (optional), default: "plain"
    -    mode => ... # string, one of ["server", "client"] (optional), default: "client"
    -    sockopt => ... # hash (optional)
    -    topic => ... # string (optional), default: ""
    -    topology => ... # string, one of ["pushpull", "pubsub", "pair"] (required)
    -}
    -
    -}
    -
    - -

    Details

    - -

    - - address - - -

    - -
      -
    • Value type is array
    • -
    • Default value is ["tcp://127.0.0.1:2120"]
    • -
    - -

    0mq socket address to connect or bind to. -Please note that inproc:// will not work with logstash, -because we use a context per thread. -By default, inputs bind/listen and outputs connect.

    - -

    - - codec - - -

    - -
      -
    • Value type is codec
    • -
    • Default value is "plain"
    • -
    - -

    The codec used for output data

    - -

    - - exclude_tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events without any of these tags. Note this check is additional to type and tags.

    - -

    - - mode - - -

    - -
      -
    • Value can be any of: "server", "client"
    • -
    • Default value is "client"
    • -
    - -

    Server mode binds/listens. Client mode connects.

    - -

    - - sockopt - - -

    - -
      -
    • Value type is hash
    • -
    • There is no default value for this setting.
    • -
    - -

    This exposes zmq_setsockopt for advanced tuning. -See http://api.zeromq.org/2-1:zmq-setsockopt for details.

    - -

    This is where you would set values like:

    - -
      -
    • ZMQ::HWM - high water mark
    • -
    • ZMQ::IDENTITY - named queues
    • -
    • ZMQ::SWAP_SIZE - space for disk overflow
    • -
    - - -

    Example: sockopt => ["ZMQ::HWM", 50, "ZMQ::IDENTITY", "mynamedqueue"]
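    Putting the settings together, a pubsub publisher might look like this; the topic value is illustrative (it is sprintf-expanded per event), and address/sockopt use the documented defaults and example values:

    ```
    output {
      zeromq {
        topology => "pubsub"                 # required setting
        address => ["tcp://127.0.0.1:2120"]  # the documented default
        topic => "logs.%{type}"              # hypothetical per-type topic
        sockopt => ["ZMQ::HWM", 50]          # advanced tuning, optional
      }
    }
    ```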

    - -

    - - tags - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is array
    • -
    • Default value is []
    • -
    - -

    Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.

    - -

    - - topic - - -

    - -
      -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    This is used for the 'pubsub' topology only. -On inputs, this allows you to filter messages by topic. -On outputs, this allows you to tag a message for routing. -NOTE: ZeroMQ does subscriber-side filtering -NOTE: Topic is evaluated with event.sprintf so macros are valid here.

    - -

    - - topology (required setting) - - -

    - -
      -
    • Value can be any of: "pushpull", "pubsub", "pair"
    • -
    • There is no default value for this setting.
    • -
    - -

    The default logstash topologies work as follows:

    - -
      -
    • pushpull - inputs are pull, outputs are push
    • -
    • pubsub - inputs are subscribers, outputs are publishers
    • -
    • pair - inputs are clients, outputs are servers
    • -
    - - -

    If the predefined topology flows don't work for you, -you can change the 'mode' setting. -TODO (lusis): maybe add req/rep -TODO (lusis): add router/dealer

    - -

    - - type - DEPRECATED - -

    - -
      -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a further version.
    • -
    • Value type is string
    • -
    • Default value is ""
    • -
    - -

    The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.

    - - -
    - -This is documentation from lib/logstash/outputs/zeromq.rb diff --git a/docs/1.2.0.beta1/plugin-doc.html.erb b/docs/1.2.0.beta1/plugin-doc.html.erb deleted file mode 100644 index 5389ffdf4..000000000 --- a/docs/1.2.0.beta1/plugin-doc.html.erb +++ /dev/null @@ -1,91 +0,0 @@ ---- -title: logstash docs for <%= section %>s/<%= name %> -layout: content_right ---- -

    <%= name %>

    -

    Milestone: <%= @milestone %>

    - -<%= description %> - -<% if !@flags.empty? -%> - - -<% end -%> - -

    Synopsis

    - -This is what it might look like in your config file: - -
    <% if section == "codec" -%>
    -# with an input plugin:
    -# you can also use this codec with an output.
    -input { 
    -  file { 
    -    codec => <%= synopsis.split("\n").map { |l| "  #{l}" }.join("\n") %>
    -  }
    -}
    -<% else -%>
    -<%= section %> {
    -  <%= synopsis %>
    -}
    -<% end -%>
    - -

    Details

    - -<% sorted_attributes.each do |name, config| -%> -<% - if name.is_a?(Regexp) - name = "/" + name.to_s.gsub(/^\(\?-mix:/, "").gsub(/\)$/, "") + "/" - is_regexp = true - else - is_regexp = false - end --%> -

    - - <%= name %><%= " (required setting)" if config[:required] %> - <%= " DEPRECATED" if config[:deprecated] %> - -

    - -
      -<% if config[:deprecated] -%> -
    • DEPRECATED WARNING: This config item is deprecated. It may be removed in a future version.
    • -<% end -%> -<% if is_regexp -%> -
    • The configuration attribute name here is anything that matches the above regular expression.
    • -<% end -%> -<% if config[:validate].is_a?(Symbol) -%> -
    • Value type is <%= config[:validate] %>
    • -<% elsif config[:validate].nil? -%> -
    • Value type is string
    • -<% elsif config[:validate].is_a?(Array) -%> -
    • Value can be any of: <%= config[:validate].map(&:inspect).join(", ") %>
    • -<% end -%> -<% if config.include?(:default) -%> -
    • Default value is <%= config[:default].inspect %>
    • -<% else -%> -
    • There is no default value for this setting.
    • -<% end -%> -
    - -<%= config[:description] %> - -<% end -%> - -
    - -This is documentation from <%= file %> diff --git a/docs/1.2.0.beta1/plugin-milestones.md b/docs/1.2.0.beta1/plugin-milestones.md deleted file mode 100644 index 5d72e9ac4..000000000 --- a/docs/1.2.0.beta1/plugin-milestones.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: Plugin Milestones - logstash -layout: content_right ---- -# Plugin Milestones - -Plugins (inputs/outputs/filters/codecs) have a milestone label in logstash. -This is to provide an indicator to the end-user as to the kinds of changes -a given plugin could have between logstash releases. - -The desire here is to allow plugin developers to quickly iterate on possible -new plugins while conveying to the end-user a set of expectations about that -plugin. - -## Milestone 1 - -Plugins at this milestone need your feedback to improve! Plugins at this -milestone may change between releases as the community figures out the best way -for the plugin to behave and be configured. - -## Milestone 2 - -Plugins at this milestone are more likely to have backwards-compatibility to -previous releases than do Milestone 1 plugins. This milestone also indicates -a greater level of in-the-wild usage by the community than the previous -milestone. - -## Milestone 3 - -Plugins at this milestone have strong promises towards backwards-compatibility. -This is enforced with automated tests to ensure behavior and configuration are -consistent across releases. - -## Milestone 0 - -This milestone appears at the bottom of the page because it is very -infrequently used. - -This milestone marker is used to generally indicate that a plugin has no -active code maintainer nor does it have support from the community in terms -of getting help. 
diff --git a/docs/1.2.0.beta1/plugin-synopsis.html.erb b/docs/1.2.0.beta1/plugin-synopsis.html.erb deleted file mode 100644 index 139a37e75..000000000 --- a/docs/1.2.0.beta1/plugin-synopsis.html.erb +++ /dev/null @@ -1,24 +0,0 @@ -<%= name %> { -<% sorted_attributes.each do |name, config| - next if config[:deprecated] - if config[:validate].is_a?(Array) - annotation = "string, one of #{config[:validate].inspect}" - elsif config[:validate] == :path - annotation = "a valid filesystem path" - else - annotation = "#{config[:validate]}" - end - - if name.is_a?(Regexp) - name = "/" + name.to_s.gsub(/^\(\?-mix:/, "").gsub(/\)$/, "") + "/" - end - if config[:required] - annotation += " (required)" - else - annotation += " (optional)" - end - annotation += ", default: #{config[:default].inspect}" if config.include?(:default) --%> - <%= name %> => ... # <%= annotation %> -<% end -%> -} diff --git a/docs/1.2.0.beta1/release-engineering.md b/docs/1.2.0.beta1/release-engineering.md deleted file mode 100644 index 828f30011..000000000 --- a/docs/1.2.0.beta1/release-engineering.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Release Engineering - logstash -layout: content_right ---- - -# logstash rel-eng. - -The version patterns for logstash are x.y.z - -* In the same x.y release, no backwards-incompatible changes will be made. -* Between x.y.z and x.y.(z+1), deprecations are allowed but should be - functional through the next release. -* Any backwards-incompatible changes should be well-documented and, if - possible, should include tools to help in migrating. -* It is OK to add features, plugins, etc, in minor releases as long as they do - not break existing functionality. - -I do not suspect the 'x' (currently 1) will change frequently. It should only change -if there are major, backwards-incompatible changes made to logstash, and I'm -trying to not make those changes, so logstash should forever be at 1.y,z, -right? ;) - -# building a release. 
- -* Make sure all tests pass (make test) - * `ruby bin/logstash test` - * `java -jar logstash-x.y.z-flatjar.jar test` -* Update VERSION.rb - * VERSION=$(ruby -r./VERSION -e 'puts LOGSTASH_VERSION') -* Ensure CHANGELOG is up-to-date -* `git tag v$VERSION; git push origin master; git push --tags` -* Build binaries - * `make jar` -* make docs - * copy build/docs to ../logstash.github.com/docs/$VERSION - * Note: you will need to use C-ruby 1.9.2 for this. - * You'll need 'bluecloth' and 'cabin' rubygems installed. -* cd ../logstash.github.com - * `make clean update VERSION=$VERSION` - * `git add docs/$VERSION docs/latest.html index.html _layouts/*` - * `git commit -m "version $VERSION docs" && git push origin master` -* Publish binaries - * Stage binaries at `carrera.databits.net:/home/jls/s/files/logstash/` -* Update #logstash IRC /topic -* Send announcement email to logstash-users@, include relevant download URLs & - changelog (see past emails for a template) diff --git a/docs/1.2.0.beta1/release-test-results.md b/docs/1.2.0.beta1/release-test-results.md deleted file mode 100644 index edcd3349f..000000000 --- a/docs/1.2.0.beta1/release-test-results.md +++ /dev/null @@ -1,14 +0,0 @@ -# Testing for a release - -* exec + split + stdout -* tcp input (server and client modes) -* tcp output (server and client modes) -* graphite output (tested server failure conditions, netcat receiver) -* statsd output (increment, netcat receiver) - -## Test Suite - - Finished in 16.826 seconds. 
- - 29 tests, 119 assertions, 0 failures, 0 errors - diff --git a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/apache-elasticsearch.conf b/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/apache-elasticsearch.conf deleted file mode 100644 index 9c360d236..000000000 --- a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/apache-elasticsearch.conf +++ /dev/null @@ -1,35 +0,0 @@ -input { - tcp { - type => "apache" - port => 3333 - } -} - -filter { - grok { - type => "apache" - # See the following URL for a complete list of named patterns - # logstash/grok ships with by default: - # https://github.com/logstash/logstash/tree/master/patterns - # - # The grok filter will use the below pattern and on successful match use - # any captured values as new fields in the event. - pattern => "%{COMBINEDAPACHELOG}" - } - - date { - type => "apache" - # Try to pull the timestamp from the 'timestamp' field (parsed above with - # grok). The apache time format looks like: "18/Aug/2011:05:44:34 -0700" - match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ] - } -} - -output { - elasticsearch { - # Setting 'embedded' will run a real elasticsearch server inside logstash. - # This option below saves you from having to run a separate process just - # for ElasticSearch, so you can get started quicker! 
- embedded => true - } -} diff --git a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/apache-parse.conf b/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/apache-parse.conf deleted file mode 100644 index 9d07ef23e..000000000 --- a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/apache-parse.conf +++ /dev/null @@ -1,33 +0,0 @@ -input { - tcp { - type => "apache" - port => 3333 - } -} - -filter { - grok { - type => "apache" - # See the following URL for a complete list of named patterns - # logstash/grok ships with by default: - # https://github.com/logstash/logstash/tree/master/patterns - # - # The grok filter will use the below pattern and on successful match use - # any captured values as new fields in the event. - pattern => "%{COMBINEDAPACHELOG}" - } - - date { - type => "apache" - # Try to pull the timestamp from the 'timestamp' field (parsed above with - # grok). The apache time format looks like: "18/Aug/2011:05:44:34 -0700" - match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ] - } -} - -output { - # Use stdout in debug mode again to see what logstash makes of the event. 
- stdout { - debug => true - } -} diff --git a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/apache_log.1 b/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/apache_log.1 deleted file mode 100644 index f7911a7eb..000000000 --- a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/apache_log.1 +++ /dev/null @@ -1 +0,0 @@ -129.92.249.70 - - [18/Aug/2011:06:00:14 -0700] "GET /style2.css HTTP/1.1" 200 1820 "http://www.semicomplete.com/blog/geekery/bypassing-captive-portals.html" "Mozilla/5.0 (iPad; U; CPU OS 4_3_5 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8L1 Safari/6533.18.5" diff --git a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/apache_log.2.bz2 b/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/apache_log.2.bz2 deleted file mode 100644 index 841e7b6b1..000000000 Binary files a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/apache_log.2.bz2 and /dev/null differ diff --git a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/hello-search.conf b/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/hello-search.conf deleted file mode 100644 index 5e2cc7c2b..000000000 --- a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/hello-search.conf +++ /dev/null @@ -1,25 +0,0 @@ -input { - stdin { - # A type is a label applied to an event. It is used later with filters - # to restrict what filters are run against each event. - type => "human" - } -} - -output { - # Print each event to stdout. - stdout { - # Enabling 'debug' on the stdout output will make logstash pretty-print the - # entire event as something similar to a JSON representation. - debug => true - } - - # You can have multiple outputs. All events generally to all outputs. - # Output events to elasticsearch - elasticsearch { - # Setting 'embedded' will run a real elasticsearch server inside logstash. - # This option below saves you from having to run a separate process just - # for ElasticSearch, so you can get started quicker! 
- embedded => true - } -} diff --git a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/hello.conf b/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/hello.conf deleted file mode 100644 index 0a44f9ddf..000000000 --- a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/hello.conf +++ /dev/null @@ -1,16 +0,0 @@ -input { - stdin { - # A type is a label applied to an event. It is used later with filters - # to restrict what filters are run against each event. - type => "human" - } -} - -output { - # Print each event to stdout. - stdout { - # Enabling 'debug' on the stdout output will make logstash pretty-print the - # entire event as something similar to a JSON representation. - debug => true - } -} diff --git a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/index.md b/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/index.md deleted file mode 100644 index 08319a6da..000000000 --- a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/index.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: Logstash 10-Minute Tutorial -layout: content_right ---- -# Logstash 10-minute Tutorial - -## Step 1 - Download - -### Download logstash: - -* [logstash-1.2.0.beta1-flatjar.jar](http://logstash.objects.dreamhost.com/release/logstash-1.2.0.beta1-flatjar.jar) - -### Requirements: - -* java - -### The Secret: - -logstash is written in JRuby, but I release standalone jar files for easy -deployment, so you don't need to download JRuby or most any other dependencies. - -I bake as much as possible into the single release file. - -## Step 2 - A hello world. - -### Download this config file: - -* [hello.conf](hello.conf) - -### Run it: - - java -jar logstash-1.2.0.beta1-flatjar.jar agent -f hello.conf - -Type stuff on standard input. Press enter. Watch what event logstash sees. -Press ^C to kill it. 
- -## Step 3 - Add ElasticSearch - -### Download this config file: - -* [hello-search.conf](hello-search.conf) - -### Run it: - - java -jar logstash-1.2.0.beta1-flatjar.jar agent -f hello-search.conf - -Same config as step 2, but now we are also writing events to ElasticSearch. Do -a search for `*` (all): - - curl 'http://localhost:9200/_search?pretty=1&q=*' - -## Step 4 - logstash web - -The previous step is good, but a better frontend on elasticsearch would help! - -The same config as step 3 is used. - -### Run it: - - java -jar logstash-1.2.0.beta1-flatjar.jar agent -f hello-search.conf -- web --backend 'elasticsearch://localhost/' - -The above runs both the agent and the logstash web interface in the same -process. Useful for simple deploys. - -### Use it: - -Go to the logstash web interface in browser: - -Type stuff on stdin on the agent, then search for it in the web interface. - -## Step 5 - real world example - -Let's backfill some old apache logs. First, let's use grok. - -Use the ['grok'](../../filters/grok) logstash filter to parse logs. - -### Download - -* [apache-parse.conf](apache-parse.conf) -* [apache_log.1](apache_log.1) (a single apache log line) - -### Run it - - java -jar logstash-1.2.0.beta1-flatjar.jar agent -f apache-parse.conf - -Logstash will now be listening on TCP port 3333. Send an apache log message at it: - - nc localhost 3333 < apache_log.1 - -The expected output can be viewed here: [step-5-output.txt](step-5-output.txt) - -## Step 6 - real world example + search - -Same as the previous step, but we'll output to ElasticSearch now. - -### Download - -* [apache-elasticsearch.conf](apache-elasticsearch.conf) -* [apache_log.2.bz2](apache_log.2.bz2) (2 days of apache logs) - -### Run it - - java -jar logstash-1.2.0.beta1-flatjar.jar agent -f apache-elasticsearch.conf -- web --backend 'elasticsearch://localhost/' - -Logstash should be all set for you now. 
Start feeding it logs: - - bzip2 -d apache_log.2.bz2 - - nc localhost 3333 < apache_log.2 - -Go to the logstash web interface in browser: - -Try some search queries. To see all the data, search for `*` (no quotes). Click -on some results, drill around in some logs. - -## Want more? - -For further learning, try these: - -* [Watch a presentation on logstash](http://www.youtube.com/embed/RuUFnog29M4) -* [Getting started 'standalone' guide](http://logstash.net/docs/1.2.0.beta1/tutorials/getting-started-simple) -* [Getting started 'centralized' guide](http://logstash.net/docs/1.2.0.beta1/tutorials/getting-started-centralized) - - learn how to build out your logstash infrastructure and centralize your logs. -* [Dive into the docs](http://logstash.net/docs/1.2.0.beta1/) diff --git a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/step-5-output.txt b/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/step-5-output.txt deleted file mode 100644 index 2c14f4a08..000000000 --- a/docs/1.2.0.beta1/tutorials/10-minute-walkthrough/step-5-output.txt +++ /dev/null @@ -1,107 +0,0 @@ -{ - "@source" => "tcp://0.0.0.0:3333/client/127.0.0.1:35019", - "@type" => "apache", - "@tags" => [], - "@fields" => { - "COMBINEDAPACHELOG" => [ - [0] "129.92.249.70 - - [18/Aug/2011:06:00:14 -0700] \"GET /style2.css HTTP/1.1\" 200 1820 \"http://www.semicomplete.com/blog/geekery/bypassing-captive-portals.html\" \"Mozilla/5.0 (iPad; U; CPU OS 4_3_5 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8L1 Safari/6533.18.5\"" - ], - "clientip" => [ - [0] "129.92.249.70" - ], - "HOSTNAME" => [ - [0] "129.92.249.70", - [1] "www.semicomplete.com" - ], - "IP" => [], - "ident" => [ - [0] "-" - ], - "USERNAME" => [ - [0] "-", - [1] "-" - ], - "auth" => [ - [0] "-" - ], - "timestamp" => [ - [0] "18/Aug/2011:06:00:14 -0700" - ], - "MONTHDAY" => [ - [0] "18" - ], - "MONTH" => [ - [0] "Aug" - ], - "YEAR" => [ - [0] "2011" - ], - "TIME" => [ - [0] "06:00:14" - ], - "HOUR" => [ - [0] "06" 
- ], - "MINUTE" => [ - [0] "00" - ], - "SECOND" => [ - [0] "14" - ], - "ZONE" => [ - [0] "-0700" - ], - "verb" => [ - [0] "GET" - ], - "request" => [ - [0] "/style2.css" - ], - "URIPATH" => [ - [0] "/style2.css", - [1] "/blog/geekery/bypassing-captive-portals.html" - ], - "URIPARAM" => [], - "httpversion" => [ - [0] "1.1" - ], - "BASE10NUM" => [ - [0] "1.1", - [1] "200", - [2] "1820" - ], - "response" => [ - [0] "200" - ], - "bytes" => [ - [0] "1820" - ], - "referrer" => [ - [0] "http://www.semicomplete.com/blog/geekery/bypassing-captive-portals.html" - ], - "URIPROTO" => [ - [0] "http" - ], - "USER" => [], - "URIHOST" => [ - [0] "www.semicomplete.com" - ], - "IPORHOST" => [ - [0] "www.semicomplete.com" - ], - "POSINT" => [], - "URIPATHPARAM" => [ - [0] "/blog/geekery/bypassing-captive-portals.html" - ], - "agent" => [ - [0] "\"Mozilla/5.0 (iPad; U; CPU OS 4_3_5 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8L1 Safari/6533.18.5\"" - ], - "QUOTEDSTRING" => [ - [0] "\"Mozilla/5.0 (iPad; U; CPU OS 4_3_5 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8L1 Safari/6533.18.5\"" - ] - }, - "@timestamp" => "2011-08-18T13:00:14.000Z", - "@source_host" => "0.0.0.0", - "@source_path" => "/client/127.0.0.1:35019", - "@message" => "129.92.249.70 - - [18/Aug/2011:06:00:14 -0700] \"GET /style2.css HTTP/1.1\" 200 1820 \"http://www.semicomplete.com/blog/geekery/bypassing-captive-portals.html\" \"Mozilla/5.0 (iPad; U; CPU OS 4_3_5 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8L1 Safari/6533.18.5\"\n" -} diff --git a/docs/1.2.0.beta1/tutorials/getting-started-centralized-overview-diagram.png b/docs/1.2.0.beta1/tutorials/getting-started-centralized-overview-diagram.png deleted file mode 100644 index a865e6eff..000000000 Binary files a/docs/1.2.0.beta1/tutorials/getting-started-centralized-overview-diagram.png and /dev/null differ diff --git 
a/docs/1.2.0.beta1/tutorials/getting-started-centralized-overview-diagram.xml b/docs/1.2.0.beta1/tutorials/getting-started-centralized-overview-diagram.xml deleted file mode 100644 index f17ff9d1c..000000000 --- a/docs/1.2.0.beta1/tutorials/getting-started-centralized-overview-diagram.xml +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/docs/1.2.0.beta1/tutorials/getting-started-centralized.md b/docs/1.2.0.beta1/tutorials/getting-started-centralized.md deleted file mode 100644 index 84698a84b..000000000 --- a/docs/1.2.0.beta1/tutorials/getting-started-centralized.md +++ /dev/null @@ -1,218 +0,0 @@ ---- -title: Getting Started (Centralized Server) - logstash -layout: content_right ---- - -# Getting Started - -## Centralized Setup with Event Parsing - -This guide shows how to get you going quickly with logstash with multiple -servers. This guide is for folks who want to ship all their logstash logs to a -central location for indexing and search. - -We'll have two classes of server. First, one that ships logs. Second, one that -collects and indexes logs. - -It's important to note that logstash itself has no concept of "shipper" and -"collector" - the behavior of an agent depends entirely on how you configure -it. - -This diagram gives you an overview of the architecture: - -![Centralized setup overview](getting-started-centralized-overview-diagram.png) - -On servers shipping logs: - -* Download and run logstash (See section 'logstash log shipper' below) - -On the server collecting and indexing your logs: - -* Download and run Elasticsearch -* Download and run Redis -* Download and run Logstash - -## ElasticSearch - -Requirements: java - -You'll most likely want the version of ElasticSearch specified by the -[elasticsearch output](../outputs/elasticsearch) docs. 
Modify this in your shell -for easy downloading of ElasticSearch: - - ES_PACKAGE=elasticsearch-0.90.3.zip - ES_DIR=${ES_PACKAGE%%.zip} - SITE=https://download.elasticsearch.org/elasticsearch/elasticsearch - if [ ! -d "$ES_DIR" ] ; then - wget --no-check-certificate $SITE/$ES_PACKAGE - unzip $ES_PACKAGE - fi - -ElasticSearch requires Java (uses Lucene on the backend; if you want to know -more read the elasticsearch docs). - -To start the service, run `bin/elasticsearch -f`. This will run it in the foreground. We want to keep it this way for debugging for now. - -## Redis - -Previous versions of this guide used AMQP via RabbitMQ. Due to the complexity of AMQP as well as performance issues related to the Bunny driver we use, we're now recommending Redis instead. - -Redis has no external dependencies and has a much simpler configuration in Logstash. - -Building and installing Redis is fairly straightforward. While normally this would be out of the scope of this document, as the instructions are so simple we'll include them here: - -- Download Redis from http://redis.io/download (The latest stable release is likely what you want) -- Extract the source, change to the directory and run `make` -- Run Redis with `src/redis-server --loglevel verbose` - -That's it. - -## logstash - -Once you have elasticsearch and redis running, you're -ready to configure logstash. - -Download the logstash release jar file. The package contains all -required dependencies to save you time chasing down requirements. - -Follow [this link to download logstash-1.2.0.beta1](http://logstash.objects.dreamhost.com/release/logstash-1.2.0.beta1-flatjar.jar). - -Since we're doing a centralized configuration, you'll have two main -logstash agent roles: a shipper and an indexer. You will ship logs from -all servers via Redis and have another agent receive those messages, -parse them, and index them in elasticsearch. 
- -### logstash log shipper - -As with the simple example, we're going to start simple to ensure that events are flowing - - input { - stdin { - type => "stdin-type" - } - } - - output { - stdout { debug => true debug_format => "json"} - redis { host => "127.0.0.1" data_type => "list" key => "logstash" } - } - -Put this in a file and call it 'shipper.conf' (or anything, really), and run: - - java -jar logstash-1.2.0.beta1-flatjar.jar agent -f shipper.conf - -This will take anything you type into this console and display it on the console. Additionally it will save events to Redis in a `list` named after the `key` value you provided. - -### Testing the Redis output - -To verify that the message made it into Redis, check your Redis window. You should see something like the following: - - [83019] 02 Jul 12:51:02 - Accepted 127.0.0.1:58312 - [83019] 02 Jul 12:51:06 - Client closed connection - [83019] 02 Jul 12:51:06 - DB 0: 1 keys (0 volatile) in 4 slots HT. - -The redis application ships with a CLI application that you can use to query the data. From your Redis source directory, run the following: - -`src/redis-cli` - -Once connected, run the following commands: - - redis 127.0.0.1:6379> llen logstash - (integer) 1 - redis 127.0.0.1:6379> lpop logstash - "{\"@source\":\"stdin://jvstratusmbp.local/\",\"@type\":\"stdin-type\",\"@tags\":[],\"@fields\":{},\"@timestamp\":\"2012-07-02T17:01:12.278000Z\",\"@source_host\":\"jvstratusmbp.local\",\"@source_path\":\"/\",\"@message\":\"test\"}" - redis 127.0.0.1:6379> llen logstash - (integer) 0 - redis 127.0.0.1:6379> - -What we've just done is check the length of the list, read and removed the oldest item in the list, and checked the length again. - -This behavior is what Logstash does when it reads from a Redis input (technically logstash performs a blocking lpop). We're essentially using Redis to simulate a queue via the `list` data type. 
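The string returned by `lpop` above is just a serialized logstash event. A minimal Ruby sketch of decoding one (the field values are abbreviated from the example output; this is not how logstash itself consumes the queue):

```ruby
require "json"

# Abbreviated copy of the event shown in the redis-cli lpop output above.
raw = '{"@type":"stdin-type","@timestamp":"2012-07-02T17:01:12.278000Z","@message":"test"}'

event = JSON.parse(raw)
puts event["@message"]    # => test
puts event["@timestamp"]  # => 2012-07-02T17:01:12.278000Z
```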
- -Go ahead and type a few more entries in the agent window: - -- test 1 -- test 2 -- test 3 - -As you `lpop` you should get them in the correct order of insertion. - -### logstash indexer - -This agent will parse and index your logs as they come in over Redis. Here's a -sample config based on the previous section. Save this as `indexer.conf` - - input { - redis { - host => "127.0.0.1" - type => "redis-input" - # these settings should match the output of the agent - data_type => "list" - key => "logstash" - - # We use json_event here since the sender is a logstash agent - format => "json_event" - } - } - - output { - stdout { debug => true debug_format => "json"} - - elasticsearch { - host => "127.0.0.1" - } - } - -The above configuration will attach to Redis and issue a `BLPOP` against the `logstash` list. When an event is received, it will be pulled off and sent to Elasticsearch for indexing. - -Start the indexer the same way as the agent but specifying the `indexer.conf` file: - -`java -jar logstash-1.2.0.beta1-flatjar.jar agent -f indexer.conf` - -To verify that your Logstash indexer is connecting to Elasticsearch properly, you should see a message in your Elasticsearch window similar to the following: - -`[2012-07-02 13:14:27,008][INFO ][cluster.service ] [Baron Samedi] added {[Bes][JZQBMR21SUWRNtTMsDV3_g][inet[/192.168.1.194:9301]]{client=true, data=false},}` - -The names `Bes` and `Baron Samedi` may differ as ES uses random names for nodes. - -### Testing the flow -Now we want to test the flow. In your agent window, type something to generate an event. -The indexer should read this and persist it to Elasticsearch. It will also display the event to stdout.
- -In your Elasticsearch window, you should see something like the following: - - [2012-07-02 13:21:58,982][INFO ][cluster.metadata ] [Baron Samedi] [logstash-2012.07.02] creating index, cause [auto(index api)], shards [5]/[1], mappings [] - [2012-07-02 13:21:59,495][INFO ][cluster.metadata ] [Baron Samedi] [logstash-2012.07.02] update_mapping [stdin-type] (dynamic) - -Since indexes are created dynamically, this is the first sign that Logstash was able to write to ES. Let's use curl to verify our data is there: -Using our curl command from the simple tutorial should let us see the data: - -`curl -s -XGET http://localhost:9200/logstash-2012.07.02/_search?q=@type:stdin-type` - -You may need to modify the date as this is based on the date this guide was written. - -Now we can move on to the final step... -## logstash web interface - -Run this on the same server as your elasticsearch server. - -To run the logstash web server, just run the jar with 'web' as the first -argument. - - java -jar logstash-1.2.0.beta1-flatjar.jar web - -Just point your browser at the http://127.0.0.1:9292/ and start searching -logs! - -The web interface is called 'kibana' - you can learn more about kibana at - -# Distributing the load -At this point we've been simulating a distributed environment on a single machine. If only the world were so easy. -In all of the example configurations, we've been explicitly setting the connection to connect to `127.0.0.1` despite the fact in most network-related plugins, that's the default host. - -Since Logstash is so modular, you can install the various components on different systems. - -- If you want to give Redis a dedicated host, simply ensure that the `host` attribute in configurations points to that host. -- If you want to give Elasticsearch a dedicated host, simple ensure that the `host` attribute is correct as well (in both web and indexer). - -As with the simple input example, reading from stdin is fairly useless. 
Check the Logstash documentation for the various inputs offered and mix and match to taste! diff --git a/docs/1.2.0.beta1/tutorials/getting-started-simple.md b/docs/1.2.0.beta1/tutorials/getting-started-simple.md deleted file mode 100644 index 701fa4afe..000000000 --- a/docs/1.2.0.beta1/tutorials/getting-started-simple.md +++ /dev/null @@ -1,195 +0,0 @@ ---- -title: Getting Started (Standalone server) - logstash -layout: content_right ---- -# Getting started with logstash (standalone server example) - -This guide shows how to get you going quickly with logstash on a single, -standalone server. We'll begin by showing you how to read events from standard -input (your keyboard) and emit them to standard output. After that, we'll start -collecting actual log files. - -By standalone, I mean that everything happens on a single server: log collection, indexing, and the web interface. - -logstash can be run on multiple servers (collect from many servers to a single -indexer) if you want, but this example shows simply a standalone configuration. - -Steps detailed in this guide: - -* Download and run logstash - -## Problems? - -If you have problems, feel free to email the users list -(logstash-users@googlegroups.com) or join IRC (#logstash on irc.freenode.org) - -## logstash - -You should download the logstash jar file - if you haven't yet, -[download it -now](http://logstash.objects.dreamhost.com/release/logstash-1.2.0.beta1-flatjar.jar). -This package includes most of the dependencies for logstash in it and -helps you get started quicker. - -The configuration of any logstash agent consists of specifying inputs, filters, -and outputs. For this example, we will not configure any filters. - -The inputs are your log files. The output will be elasticsearch. The config -format should be simple to read and write. The bottom of this document includes -links for further reading (config, etc) if you want to learn more. 
- -Here is the simplest Logstash configuration you can work with: - - input { stdin { type => "stdin-type"}} - output { stdout { debug => true debug_format => "json"}} - -Save this to a file called `logstash-simple.conf` and run it like so: - - java -jar logstash-1.2.0.beta1-flatjar.jar agent -f logstash-simple.conf - -After a few seconds, type something in the console where you started logstash. Maybe `test`. -You should get some output like so: - - { - "@source":"stdin://jvstratusmbp.local/", - "@type":"stdin", - "@tags":[], - "@fields":{}, - "@timestamp":"2012-07-02T05:20:16.092000Z", - "@source_host":"jvstratusmbp.local", - "@source_path":"/", - "@message":"test" - } - -If everything is okay, let's move on to a more complex version: - -### Saving to Elasticsearch -The recommended storage engine for Logstash is Elasticsearch. If you're running Logstash from the jar file or via jruby, you can use an embedded version of Elasticsearch for storage. - -Using our configuration above, let's change it to look like so: - - input { stdin { type => "stdin-type"}} - output { - stdout { debug => true debug_format => "json"} - elasticsearch { embedded => true } - } - -We're going to KEEP the existing configuration but add a second output - embedded Elasticsearch. -Restart your Logstash (CTRL-C and rerun the java command). Depending on the horsepower of your machine, this could take some time. -Logstash needs to extract the jar contents to a working directory AND start an instance of Elasticsearch. - -Let's do our test again by simply typing `test`. You should get the same output to the console. -Now let's verify that Logstash stored the message in Elasticsearch: - - curl -s http://127.0.0.1:9200/_status?pretty=true | grep logstash - -_This assumes you have the `curl` command installed._ - -You should get back some output like so: - - "logstash-2012.07.02" : { - "index" : "logstash-2012.07.02" - -This means Logstash created a new index based on today's date. 
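The index name is simply the event date formatted into a daily pattern. A small Ruby sketch of deriving such a name (the pattern is inferred from the index name shown above, not from logstash's source):

```ruby
# Daily indexes follow "logstash-YYYY.MM.DD" in UTC, matching the
# "logstash-2012.07.02" index seen in the curl output above.
event_time = Time.utc(2012, 7, 2, 5, 20, 16)
index_name = event_time.strftime("logstash-%Y.%m.%d")
puts index_name  # => logstash-2012.07.02
```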
Likely your data is in there as well: - -`curl -s -XGET http://localhost:9200/logstash-2012.07.02/_search?q=@type:stdin` - -This will return a rather large JSON output. We're only concerned with a subset: - - "_index": "logstash-2012.07.02", - "_type": "stdin", - "_id": "JdRaI5R6RT2do_WhCYM-qg", - "_score": 0.30685282, - "_source": { - "@source": "stdin://dist/", - "@type": "stdin", - "@tags": [ - "tag1", - "tag2" - ], - "@fields": {}, - "@timestamp": "2012-07-02T06:17:48.533000Z", - "@source_host": "dist", - "@source_path": "/", - "@message": "test" - } - -Your output may look a little different. -The reason we're going about it this way is to make absolutely sure that we have all the bits working before adding more complexity. - -If you are unable to get these steps working, you likely have something interfering with multicast traffic. This has been known to happen when connected to VPNs for instance. -For best results, test on a Linux VM or system with less complicated networking. If in doubt, rerun the command with the options `-vv` and paste the output to Github Gist or Pastie. -Hop on the logstash IRC channel or mailing list and ask for help with that output as reference. - -Obviously this is fairly useless this way. Let's add the final step and test with the builtin logstash web ui: - -### Testing the webui -We've already proven that events can make it into Elasticsearch. However using curl for everything is less than ideal. -Logstash ships with a built-in web interface. It's fairly spartan but it's a good proof-of-concept. Let's restart our logstash process with an additional option: - - java -jar logstash-1.2.0.beta1-flatjar.jar agent -f logstash-simple.conf -- web - -One important thing to note is that the `web` option is actually its own set of commmand-line options. We're essentially starting two programs in one. -This is worth remembering as you move to an external Elasticsearch server. 
The options you specify in your logstash.conf have no bearing on the web ui. It has its own options.
-
-Again, the reason for testing without the web interface is to ensure that the logstash agent itself is getting events into Elasticsearch. This is different from the Logstash web ui being able to read them.
-As before, we'll need to wait a bit for everything to spin up. You can verify that everything is running (assuming you aren't running with any `-v` options) by checking the output of `netstat`:
-
-    netstat -napt | grep -i LISTEN
-
-You should see the following ports in use:
-
-- 9200
-- 9300
-- 9301
-- 9302
-- 9292
-
-The `9200` and `9300` ports are the embedded ES listening ports. The `9301` and `9302` ports are the agent and web interfaces talking to ES. `9292` is the port the web ui listens on.
-
-If you open a browser to http://localhost:9292/ and click on the link in the body, you should see results. If not, switch back to your console, type some text and hit return.
-Refresh the browser page and you should have results!
-
-### Continuing on
-At this point you have a working self-contained Logstash instance. However, typing things into stdin is likely not what you want.
-
-Here is a sample config you can start with. It defines some basic inputs
-grouped by type and two outputs.
-
-    input {
-      stdin {
-        type => "stdin-type"
-      }
-
-      file {
-        type => "syslog"
-
-        # Wildcards work, here :)
-        path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
-      }
-    }
-
-    output {
-      stdout { }
-      elasticsearch { embedded => true }
-    }
-
-Put this in a file called "logstash-complex.conf".
-
-Now run it all again (be sure to stop your previous Logstash tests!):
-
-    java -jar logstash-1.2.0.beta1-flatjar.jar agent -f logstash-complex.conf -- web
-
-Point your browser at http://localhost:9292/ and start searching! 
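If you would rather script this check than eyeball `netstat`, here is a rough sketch (our own helper, not a logstash tool) that probes the ports listed above:

```python
import socket

# Ports the self-contained setup should have listening: embedded ES HTTP and
# transport (9200/9300), the agent's and web ui's ES transports (9301/9302),
# and the web ui itself (9292).
EXPECTED_PORTS = [9200, 9300, 9301, 9302, 9292]

def is_listening(port, host="127.0.0.1", timeout=0.5):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in EXPECTED_PORTS:
    print(port, "open" if is_listening(port) else "CLOSED")
```

Any `CLOSED` line points you at the piece (embedded ES, agent, or web ui) that has not come up yet.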
-
-*Note*: If things are not working (for example, you get a 'SERVICE_UNAVAILABLE'
-or some other elasticsearch error while searching), check that your firewall
-(including the local one) is not blocking multicast.
-
-## Further reading
-
-Want to know more about the configuration language? Check out the
-[configuration](../configuration) documentation.
-
-You may have logs on many servers you want to centralize through logstash. To
-learn how to do that, [read this](getting-started-centralized). diff --git a/docs/1.2.0.beta1/tutorials/just-enough-rabbitmq-for-logstash.md b/docs/1.2.0.beta1/tutorials/just-enough-rabbitmq-for-logstash.md deleted file mode 100644 index 060fa6f0a..000000000 --- a/docs/1.2.0.beta1/tutorials/just-enough-rabbitmq-for-logstash.md +++ /dev/null @@ -1,201 +0,0 @@ ---- -title: Just Enough RabbitMQ - logstash -layout: content_right ----
-
-While configuring your RabbitMQ broker is out of scope for logstash, it's important
-to understand how logstash uses RabbitMQ. To do that, we need to understand a
-little about AMQP.
-
-You should also consider reading
-[this](http://www.rabbitmq.com/tutorials/amqp-concepts.html) at the RabbitMQ
-website.
-
-# Exchanges, queues and bindings; OH MY!
-
-You can get a long way by understanding a few key terms.
-
-## Exchanges
-
-Exchanges are for message **producers**. In Logstash, we map these to
-**outputs**. Logstash puts messages on exchanges. There are many types of
-exchanges and they are discussed below.
-
-## Queues
-
-Queues are for message **consumers**. In Logstash, we map these to **inputs**.
-Logstash reads messages from queues. Optionally, queues can consume only a
-subset of messages. This is done with "routing keys".
-
-## Bindings
-
-Just having a producer and a consumer is not enough. We must `bind` a queue to
-an exchange. When we bind a queue to an exchange, we can optionally provide a
-routing key. Routing keys are discussed below. 
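In client code, these three pieces map onto a handful of calls. A sketch with the Python `pika` client (the exchange and queue names here are made up for illustration, and a broker on `localhost` is assumed):

```python
def bind_example(broker="localhost"):
    import pika  # RabbitMQ client library; an optional dependency here

    conn = pika.BlockingConnection(pika.ConnectionParameters(broker))
    ch = conn.channel()

    # Producer side: declare an exchange (what a logstash output publishes to).
    ch.exchange_declare(exchange="logs", exchange_type="topic")

    # Consumer side: declare a queue (what a logstash input reads from).
    ch.queue_declare(queue="logstash_queue")

    # Binding: connect the two, optionally filtered by a routing key.
    ch.queue_bind(exchange="logs", queue="logstash_queue",
                  routing_key="logs.servers.production.*")
    conn.close()
```

Logstash's RabbitMQ input and output issue equivalent declarations for you; the sketch just makes the exchange/queue/binding split visible.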
-
-## Broker
-
-A broker is simply the AMQP server software. There are several brokers, but this
-tutorial will cover the most common (and arguably most popular), [RabbitMQ](http://www.rabbitmq.com).
-
-# Routing Keys
-
-Simply put, routing keys are somewhat like tags for messages. In practice, they
-are hierarchical in nature, with each level separated by a dot:
-
-- `messages.servers.production`
-- `sports.atlanta.baseball`
-- `company.myorg.mydepartment`
-
-Routing keys are really handy with a tool like logstash, where you
-can programmatically define the routing key for a given event using the metadata that logstash provides:
-
-- `logs.servers.production.host1`
-- `logs.servers.development.host1.syslog`
-- `logs.servers.application_foo.critical`
-
-From a consumer/queue perspective, routing keys also support two types of wildcards - `#` and `*`.
-
-- `*` (asterisk) matches any single word.
-- `#` (hash) matches any number of words and behaves like a traditional wildcard.
-
-Using the above examples, if you wanted to bind to an exchange and see messages
-for just production, you would use the routing key `logs.servers.production.*`.
-If you wanted to see messages for host1, regardless of environment, you could
-use `logs.servers.*.host1.#`.
-
-Wildcards can be a bit confusing, but a good general rule to follow is to use
-`*` in places where you need a wildcard for a known element. Use `#` when you
-need to match any remaining placeholders. Note that wildcards in routing keys
-only make sense on the consumer/queue binding, not on the publishing/exchange
-side.
-
-We'll get into some of that neat stuff below. For now, it's enough to
-understand the general idea behind routing keys. 
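The wildcard rules can be made concrete in a few lines of Python. This is our own illustration of the matching semantics, not code that logstash or RabbitMQ ships:

```python
import re

def binding_matches(pattern, routing_key):
    """AMQP-style binding match: '*' is one word, '#' is zero or more words."""
    parts = []
    for word in pattern.split("."):
        if word == "#":
            parts.append("#")          # placeholder, expanded below
        elif word == "*":
            parts.append("[^.]+")      # exactly one dot-separated word
        else:
            parts.append(re.escape(word))
    regex = r"\.".join(parts)
    regex = regex.replace(r"#\.", r"(?:[^.]+\.)*")  # '#' with words after it
    regex = regex.replace(r"\.#", r"(?:\.[^.]+)*")  # trailing '#'
    regex = regex.replace("#", ".*")                # '#' on its own
    return re.match("^" + regex + "$", routing_key) is not None

print(binding_matches("logs.servers.production.*", "logs.servers.production.host1"))       # True
print(binding_matches("logs.servers.*.host1.#", "logs.servers.development.host1.syslog"))  # True
print(binding_matches("logs.servers.production.*", "logs.servers.development.host1"))      # False
```

Note how the `*` binding rejects keys with extra trailing words, while `#` happily absorbs them - the same behavior you will see from a RabbitMQ topic exchange.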
You
-can have multiple queues bound to the same direct exchange. The best way to
-understand this pattern is a pool of workers (queues) that read from a direct
-exchange to get units of work. Only one consumer will see a given message in a
-direct exchange.
-
-You can set routing keys on messages published to a direct exchange. This
-allows you to have workers that do different tasks read from the same global
-pool of messages yet consume only the ones they know how to handle.
-
-The RabbitMQ concepts guide (linked earlier) does a good job of describing this
-visually
-[here](http://www.rabbitmq.com/img/tutorials/intro/exchange-direct.png).
-
-## Fanout
-
-Fanouts are another type of exchange. Unlike direct exchanges, every queue
-bound to a fanout exchange will see the same messages. This is best described
-as a PUB/SUB pattern. This is helpful when you need to broadcast messages to
-multiple interested parties.
-
-Fanout exchanges do NOT support routing keys. All bound queues see all
-messages.
-
-## Topic
-
-Topic exchanges are a special type of fanout exchange. Fanout exchanges don't
-support routing keys. Topic exchanges do support them. Just like a fanout
-exchange, all bound queues see all messages, with the additional filter of the
-routing key.
-
-# RabbitMQ in logstash
-
-As stated earlier, in Logstash, Outputs publish to Exchanges. Inputs read from
-Queues that are bound to Exchanges. Logstash uses the `bunny` RabbitMQ library for
-interaction with a broker. Logstash endeavors to expose as much of the
-configuration for both exchanges and queues as possible. There are many different tunables
-that you might be concerned with setting - including things like message
-durability or persistence of declared queues/exchanges. See the relevant input
-and output documentation for RabbitMQ for a full list of tunables. 
-
-# Sample configurations, tips, tricks and gotchas
-
-There are several examples of RabbitMQ usage in the logstash source directory;
-however, a few general rules might help eliminate any issues.
-
-## Check your bindings
-
-If logstash is publishing the messages and logstash is consuming the messages,
-the `exchange` value for the input should match the `exchange` in the output.
-
-sender agent
-
-    input { stdin { type => "test" } }
-    output {
-      rabbitmq {
-        exchange => "test_exchange"
-        host => "my_rabbitmq_server"
-        exchange_type => "fanout"
-      }
-    }
-
-receiver agent
-
-    input {
-      rabbitmq {
-        queue => "test_queue"
-        host => "my_rabbitmq_server"
-        exchange => "test_exchange" # This matches the exchange declared above
-      }
-    }
-    output { stdout { debug => true }}
-
-## Message persistence
-
-By default, logstash will attempt to ensure that you don't lose any messages.
-This is reflected in the RabbitMQ default settings as well. However, there are
-cases where you might not want this. A good example is where RabbitMQ is not your
-primary method of shipping.
-
-In the following example, we use RabbitMQ as a sniffing interface. Our primary
-destination is the embedded Elasticsearch instance. We have a secondary RabbitMQ
-output that we use for duplicating messages. However, we disable persistence and
-durability on this interface so that messages don't pile up waiting for
-delivery. We only use RabbitMQ when we want to watch messages in realtime.
-Additionally, we're going to leverage routing keys so that we can optionally
-filter incoming messages to subsets of hosts. The exercise of getting messages
-to this logstash agent is left up to the user. 
- - input { - # some input definition here - } - - output { - elasticsearch { embedded => true } - rabbitmq { - exchange => "logtail" - host => "my_rabbitmq_server" - exchange_type => "topic" # We use topic here to enable pub/sub with routing keys - key => "logs.%{host}" - durable => false # If rabbitmq restarts, the exchange disappears. - auto_delete => true # If logstash disconnects, the exchange goes away - persistent => false # Messages are not persisted to disk - } - } - -Now if you want to stream logs in realtime, you can use the programming -language of your choice to bind a queue to the `logtail` exchange. If you do -not specify a routing key, you will see every message that comes in to -logstash. However, you can specify a routing key like `logs.apache1` and see -only messages from host `apache1`. - -Note that any logstash variable is valid in the key definition. This allows you -to create really complex routing key hierarchies for advanced filtering. - -Note that RabbitMQ has specific rules about durability and persistence matching -on both the queue and exchange. You should read the RabbitMQ documentation to -make sure you don't crash your RabbitMQ server with messages awaiting someone -to pick them up. diff --git a/docs/1.2.0.beta1/tutorials/media/frontend-response-codes.png b/docs/1.2.0.beta1/tutorials/media/frontend-response-codes.png deleted file mode 100644 index e5b0ed47e..000000000 Binary files a/docs/1.2.0.beta1/tutorials/media/frontend-response-codes.png and /dev/null differ diff --git a/docs/1.2.0.beta1/tutorials/metrics-from-logs.md b/docs/1.2.0.beta1/tutorials/metrics-from-logs.md deleted file mode 100644 index 1da416725..000000000 --- a/docs/1.2.0.beta1/tutorials/metrics-from-logs.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -title: Metrics from Logs - logstash -layout: content_right ---- -# Pull metrics from logs - -Logs are more than just text. How many customers signed up today? How many HTTP -errors happened this week? 
When was your last puppet run?
-
-Apache logs give you the http response code and bytes sent - that's useful in a
-graph. Metrics occur in logs so frequently that there are piles of tools available to
-help process them.
-
-Logstash can help (and even replace some tools you might already be using).
-
-## Example: Replacing Etsy's Logster
-
-[Etsy](https://github.com/etsy) has some excellent open source tools. One of
-them, [logster](https://github.com/etsy/logster), is meant to help you pull
-metrics from logs and ship them to [graphite](http://graphite.wikidot.com/) so
-you can make pretty graphs of those metrics.
-
-One sample logster parser pulls http response codes out of your
-apache logs: [SampleLogster.py](https://github.com/etsy/logster/blob/master/logster/parsers/SampleLogster.py)
-
-The above code is roughly 50 lines of python and only solves one specific
-problem in only apache logs: count http response codes by major number (1xx,
-2xx, 3xx, etc). To be completely fair, you could shrink the code required for
-a Logster parser, but size is not strictly the point here.
-
-## Keep it simple
-
-Logstash can do more than the above, more simply, and without much coding skill:
-
-    input {
-      file {
-        path => "/var/log/apache/access.log"
-        type => "apache-access"
-      }
-    }
-
-    filter {
-      grok {
-        type => "apache-access"
-        pattern => "%{COMBINEDAPACHELOG}"
-      }
-    }
-
-    output {
-      statsd {
-        # Count one hit for every event, by response
-        increment => "apache.response.%{response}"
-      }
-    }
-
-The above uses grok to parse fields out of apache logs and uses the statsd
-output to increment counters based on the response code. Of course, now that we
-are parsing apache logs fully, we can trivially add additional metrics:
-
-    output {
-      statsd {
-        # Count one hit for every event, by response
-        increment => "apache.response.%{response}"
-
-        # Use the 'bytes' field from the apache log as the count value. 
-        count => [ "apache.bytes", "%{bytes}" ]
-      }
-    }
-
-Now adding additional metrics is just one more line in your logstash config
-file. By the way, the 'statsd' output writes to another Etsy tool,
-[statsd](https://github.com/etsy/statsd), which helps build counter/latency
-data and ship it to graphite for graphing.
-
-Using the logstash config above and a bunch of apache access requests, you might end up
-with a graph that looks like this:
-
-![apache response codes graphed with graphite, fed data with logstash](media/frontend-response-codes.png)
-
-The point made above is not "logstash is better than Logster" - the point is
-that logstash is a general-purpose log management and pipelining tool and that
-while you can centralize logs with logstash, you can read, modify, and write
-them to and from just about anywhere.
-
-## A full use case
-
-TODO(sissel): include sample logs, show custom grok format, output to statsd and/or graphite. diff --git a/docs/1.2.0.beta1/tutorials/media/frontend-response-codes.png b/docs/1.2.0.beta1/tutorials/media/frontend-response-codes.png deleted file mode 100644 index e5b0ed47e..000000000 Binary files a/docs/1.2.0.beta1/tutorials/media/frontend-response-codes.png and /dev/null differ diff --git a/docs/1.2.0.beta1/tutorials/zeromq.md b/docs/1.2.0.beta1/tutorials/zeromq.md deleted file mode 100644 index 796ec0ea3..000000000 --- a/docs/1.2.0.beta1/tutorials/zeromq.md +++ /dev/null @@ -1,118 +0,0 @@ ---- -title: ZeroMQ - logstash -layout: content_right ----
-
-*ZeroMQ support in Logstash is currently in an experimental phase. As such, parts of this document are subject to change.*
-
-# ZeroMQ
-Simply put, ZeroMQ (0mq) is a socket on steroids. This makes it a perfect complement to Logstash - a pipe on steroids.
-
-ZeroMQ allows you to easily create sockets of various types for moving data around. These sockets are referred to in ZeroMQ by the behavior of each side of the socket pair:
-
-* PUSH/PULL
-* REQ/REP
-* PUB/SUB
-* ROUTER/DEALER
-
-There is also a `PAIR` socket type.
-
-Additionally, the socket type is independent of the connection method. A PUB/SUB socket pair could have the SUB side of the socket be a listener and the PUB side a connecting client. 
This makes it very easy to fit ZeroMQ into various firewalled architectures.
-
-Note that this is not a full-fledged tutorial on ZeroMQ. It is a tutorial on how Logstash uses ZeroMQ.
-
-# ZeroMQ and logstash
-In the spirit of ZeroMQ, Logstash takes these socket type pairs and uses them to create topologies with some very simple rules that make usage very easy to understand:
-
-* The receiving end of a socket pair is always a logstash input
-* The sending end of a socket pair is always a logstash output
-* By default, inputs `bind`/listen and outputs `connect`
-* Logstash refers to the socket pairs as topologies and mirrors the naming scheme from ZeroMQ
-* By default, ZeroMQ inputs listen on all interfaces on port 2120, ZeroMQ outputs connect to `localhost` on port 2120
-
-The currently understood Logstash topologies for ZeroMQ inputs and outputs are:
-
-* `pushpull`
-* `pubsub`
-* `pair`
-
-We have found from various discussions that these three topologies will cover most users' needs. We hope to expose the full span of ZeroMQ socket types as time goes on.
-
-Keeping the options simple allows you to get started VERY easily with what are normally complex message flows. No more confusion over `exchanges` and `queues` and `brokers`. 
If you need to add fanout capability to your flow, you can simply use the following configs:
-
-* _node agent lives at 192.168.1.2_
-* _indexer agent lives at 192.168.1.1_
-
-    # Node agent config
-    input { stdin { type => "test-stdin-input" } }
-    output { zeromq { topology => "pubsub" address => "tcp://192.168.1.1:2120" } }
-
-    # Indexer agent config
-    input { zeromq { topology => "pubsub" } }
-    output { stdout { debug => true }}
-
-If for some reason you need connections to initiate from the indexer because of firewall rules:
-
-    # Node agent config - now listening on all interfaces port 2120
-    input { stdin { type => "test-stdin-input" } }
-    output { zeromq { topology => "pubsub" address => "tcp://*:2120" mode => "server" } }
-
-    # Indexer agent config
-    input { zeromq { topology => "pubsub" address => "tcp://192.168.1.2:2120" mode => "client" } }
-    output { stdout { debug => true }}
-
-As stated above, by default `inputs` always start as listeners and `outputs` always start as initiators. Please don't confuse what happens once the socket is connected with the direction of the connection. ZeroMQ separates connection from topology. In the second of the above configs, once the two sockets are connected, regardless of who initiated the connection, the message flow itself is fixed: the indexer reads events from the node.
-
-# Which topology to use
-The choice of topology can be broken down very easily based on need:
-
-## one to one
-Use the `pair` topology. On the output side, specify the IP address and port of the input side.
-
-## broadcast
-Use `pubsub`
-If you need to broadcast ALL messages to multiple hosts that each need to see all events, use `pubsub`. Note that all events are broadcast to all subscribers. When using `pubsub` you might also want to investigate the `topic` configuration option, which allows subscribers to see only a subset of messages. 
-
-## Filter workers
-Use `pushpull`
-In `pushpull`, ZeroMQ automatically load balances to all connected peers. This means that no peer sees the same message as any other peer.
-
-# What's with the address format?
-ZeroMQ supports multiple types of transports:
-
-* inproc:// (unsupported by logstash due to threading)
-* tcp:// (exactly what it sounds like)
-* ipc:// (probably useless in logstash)
-* pgm:// and epgm:// (a multicast format - only usable with PUB and SUB socket types)
-
-For pretty much all cases, you'll be using `tcp://` transports with Logstash.
-
-## Topic - applies to `pubsub`
-This option mimics the routing key functionality in AMQP. Imagine you have a network of receivers, but only a subset of the messages needs to be seen by a subset of the hosts. You can use this option as a routing key to facilitate that:
-
-    # This output is a PUB
-    output {
-      zeromq { topology => "pubsub" topic => "logs.production.%{host}" }
-    }
-
-    # This input is a SUB
-    # I only care about db1 logs
-    input { zeromq { type => "db1logs" address => "tcp://:2120" topic => "logs.production.db1"}}
-
-One important thing to note about 0mq PUBSUB and topics is that all filtering is done on the subscriber side. The subscriber will get ALL messages but discard any that don't match the topic.
-
-Also important to note is that 0mq doesn't do topics in the same sense as an AMQP broker might. When a SUB socket gets a message, it compares the first bytes of the message against the topic. However, this isn't always flexible, depending on the format of your message. The common practice, then, is to send a 0mq multipart message and make the first part the topic. The next parts become the actual message body.
-
-This approach is how logstash handles it. When using PUBSUB, Logstash will send a multipart message where the first part is the name of the topic and the second part is the event. This is important to know if you are sending to a SUB input from sources other than Logstash. 
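For example, a minimal publisher feeding a logstash SUB input could build its frames like this (a sketch using `pyzmq`; the endpoint, topic, and helper names are illustrative, and the exact event fields logstash expects depend on your input configuration):

```python
import json
import time

def frames_for(topic, event):
    """Multipart frames for a pubsub SUB input: [topic, event body]."""
    return [topic.encode("utf-8"), json.dumps(event).encode("utf-8")]

def publish(endpoint, topic, event):
    import zmq  # pyzmq; an optional dependency here

    ctx = zmq.Context.instance()
    pub = ctx.socket(zmq.PUB)
    pub.connect(endpoint)
    time.sleep(0.2)  # PUB/SUB "slow joiner": give subscriptions time to arrive
    pub.send_multipart(frames_for(topic, event))
    pub.close()

# Hypothetical usage against a logstash pubsub input listening on 2120:
# publish("tcp://127.0.0.1:2120", "logs.production.db1", {"@message": "hi"})
```

The key point is the two-frame structure: topic first, body second, matching what logstash itself sends when publishing.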
-
-# sockopts
-Sockopts is not you choosing between blue or black socks. ZeroMQ supports setting various flags or options on sockets. In the interest of minimizing configuration syntax, these are _hidden_ behind a logstash configuration element called `sockopts`. You probably won't need to tune these for most cases. If you do need to tune them, you'll probably set the following:
-
-## ZMQ::HWM - sets the high water mark
-The high water mark is the maximum number of messages a given socket pair can hold in its internal queue. Essentially, use this to throttle.
-
-## ZMQ::SWAP_SIZE
-TODO
-
-## ZMQ::IDENTITY
-TODO