diff --git a/docs/1.2.0.beta1/codecs/dots.html b/docs/1.2.0.beta1/codecs/dots.html
deleted file mode 100644
index ecdf6a55b..000000000
--- a/docs/1.2.0.beta1/codecs/dots.html
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: logstash docs for codecs/dots
-layout: content_right
----
-<div class="content">
# with an input plugin:
-# you can also use this codec with an output.
-input {
- file {
- codec => dots {
- }
- }
-}
-
-
-This is the base class for logstash codecs.
- - -# with an input plugin:
-# you can also use this codec with an output.
-input {
- file {
- codec => json {
- }
- }
-}
-
-
-# with an input plugin:
-# you can also use this codec with an output.
-input {
- file {
- codec => json_spooler {
- spool_size => ... # number (optional), default: 50
- }
- }
-}
-
-
-Line-oriented text data.
- -Decoding behavior: Only whole line events will be emitted.
- -Encoding behavior: Each event will be emitted with a trailing newline.
- - -# with an input plugin:
-# you can also use this codec with an output.
-input {
- file {
- codec => line {
- charset => ... # string, one of ["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-1251", "BINARY", "IBM437", "CP437", "IBM737", "CP737", "IBM775", "CP775", "CP850", "IBM850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "CP857", "IBM860", "CP860", "IBM861", "CP861", "IBM862", "CP862", "IBM863", "CP863", "IBM864", "CP864", "IBM865", "CP865", "IBM866", "CP866", "IBM869", "CP869", "Windows-1258", "CP1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "Big5-HKSCS:2008", "CP951", "stateless-ISO-2022-JP", "eucJP", "eucJP-ms", "euc-jp-ms", "CP51932", "eucKR", "eucTW", "GB2312", "EUC-CN", "eucCN", "GB12345", "CP936", "ISO-2022-JP", "ISO2022-JP", "ISO-2022-JP-2", "ISO2022-JP2", "CP50220", "CP50221", "ISO8859-1", "Windows-1252", "CP1252", "ISO8859-2", "Windows-1250", "CP1250", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "Windows-1256", "CP1256", "ISO8859-7", "Windows-1253", "CP1253", "ISO8859-8", "Windows-1255", "CP1255", "ISO8859-9", "Windows-1254", "CP1254", "ISO8859-10", "ISO8859-11", "TIS-620", "Windows-874", "CP874", "ISO8859-13", "Windows-1257", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "Windows-31J", "CP932", "csWindows31J", "SJIS", "PCK", "MacJapanese", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "UTF-7", "CP65000", "CP65001", "UTF8-MAC", "UTF-8-MAC", "UTF-8-HFS", "UTF-16", "UTF-32", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP1251", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "locale", "external", "filesystem", "internal"] (optional), default: "UTF-8"
- format => ... # string (optional)
- }
- }
-}
-
-
-The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
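-For instance (an illustrative sketch with an assumed file path, not from the original reference), a line codec reading Latin-1 files would be configured like this:
-
-input {
-  file {
-    path => "/var/log/legacy.log"   # hypothetical path, for illustration only
-    codec => line {
-      charset => "ISO-8859-1"       # a.k.a. Latin-1; use "CP1252" for typical Windows logs
-    }
-  }
-}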
- -Set the desired text format for encoding.
- - -# with an input plugin:
-# you can also use this codec with an output.
-input {
- file {
- codec => msgpack {
- format => ... # string (optional), default: nil
- }
- }
-}
-
-
- -The multiline codec is for taking line-oriented text and merging multiple lines into a -single event.</p>
- -The original goal of this codec was to allow joining of multi-line messages -from files into a single event. For example - joining java exception and -stacktrace messages into a single event.
- -The config looks like this:
- -input {
- stdin {
- codec => multiline {
- pattern => "pattern, a regexp"
- negate => true or false
- what => "previous" or "next"
- }
- }
-}
-
-
-The 'pattern' should match what you believe to be an indicator that the field -is part of a multi-line event.
- -The 'what' must be "previous" or "next" and indicates the relation -to the multi-line event.
- -The 'negate' can be "true" or "false" (defaults false). If true, a -message not matching the pattern will constitute a match of the multiline -filter and the what will be applied. (vice-versa is also true)
- -For example, java stack traces are multiline and usually have the message -starting at the far-left, then each subsequent line indented. Do this:
- -input {
- stdin {
- codec => multiline {
- pattern => "^\s"
- what => "previous"
- }
- }
-}
-
-
-This says that any line starting with whitespace belongs to the previous line.
- -Another example is C line continuations (backslash). Here's how to do that:
- -filter {
- multiline {
- type => "somefiletype "
- pattern => "\\$"
- what => "next"
- }
-}
-
-
-This is the base class for logstash codecs.
- - -# with an input plugin:
-# you can also use this codec with an output.
-input {
- file {
- codec => multiline {
- charset => ... # string, one of ["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-1251", "BINARY", "IBM437", "CP437", "IBM737", "CP737", "IBM775", "CP775", "CP850", "IBM850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "CP857", "IBM860", "CP860", "IBM861", "CP861", "IBM862", "CP862", "IBM863", "CP863", "IBM864", "CP864", "IBM865", "CP865", "IBM866", "CP866", "IBM869", "CP869", "Windows-1258", "CP1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "Big5-HKSCS:2008", "CP951", "stateless-ISO-2022-JP", "eucJP", "eucJP-ms", "euc-jp-ms", "CP51932", "eucKR", "eucTW", "GB2312", "EUC-CN", "eucCN", "GB12345", "CP936", "ISO-2022-JP", "ISO2022-JP", "ISO-2022-JP-2", "ISO2022-JP2", "CP50220", "CP50221", "ISO8859-1", "Windows-1252", "CP1252", "ISO8859-2", "Windows-1250", "CP1250", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "Windows-1256", "CP1256", "ISO8859-7", "Windows-1253", "CP1253", "ISO8859-8", "Windows-1255", "CP1255", "ISO8859-9", "Windows-1254", "CP1254", "ISO8859-10", "ISO8859-11", "TIS-620", "Windows-874", "CP874", "ISO8859-13", "Windows-1257", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "Windows-31J", "CP932", "csWindows31J", "SJIS", "PCK", "MacJapanese", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "UTF-7", "CP65000", "CP65001", "UTF8-MAC", "UTF-8-MAC", "UTF-8-HFS", "UTF-16", "UTF-32", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP1251", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "locale", "external", "filesystem", "internal"] (optional), default: "UTF-8"
- negate => ... # boolean (optional), default: false
- pattern => ... # string (required)
- patterns_dir => ... # array (optional), default: []
- what => ... # string, one of ["previous", "next"] (required)
- }
- }
-}
-
-
-The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -Negate the regexp pattern ('if not matched')
- -The regular expression to match
- -logstash ships by default with a bunch of patterns, so you don't -necessarily need to define this yourself unless you are adding additional -patterns.
- -Pattern files are plain text with format:
- -NAME PATTERN
-
-
-For example:
- -NUMBER \d+
-
-
-If the pattern matched, does event belong to the next or previous event?
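-As an illustrative sketch (assumed pattern directory and values, not from the original reference), negate and patterns_dir can be combined so that any line not starting with a timestamp is folded into the preceding event:
-
-input {
-  stdin {
-    codec => multiline {
-      patterns_dir => ["/opt/logstash/extra_patterns"]  # hypothetical directory
-      pattern => "^%{TIMESTAMP_ISO8601}"
-      negate => true       # lines NOT matching the pattern...
-      what => "previous"   # ...are merged into the previous event
-    }
-  }
-}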
- - -# with an input plugin:
-# you can also use this codec with an output.
-input {
- file {
- codec => noop {
- }
- }
-}
-
-
-# with an input plugin:
-# you can also use this codec with an output.
-input {
- file {
- codec => oldlogstashjson {
- }
- }
-}
-
-
-The "plain" codec is for plain text with no delimiting between events.
- -This is mainly useful on inputs and outputs that already have a defined -framing in their transport protocol (such as zeromq, rabbitmq, redis, etc)
- - -# with an input plugin:
-# you can also use this codec with an output.
-input {
- file {
- codec => plain {
- charset => ... # string, one of ["ASCII-8BIT", "UTF-8", "US-ASCII", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-1251", "BINARY", "IBM437", "CP437", "IBM737", "CP737", "IBM775", "CP775", "CP850", "IBM850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "CP857", "IBM860", "CP860", "IBM861", "CP861", "IBM862", "CP862", "IBM863", "CP863", "IBM864", "CP864", "IBM865", "CP865", "IBM866", "CP866", "IBM869", "CP869", "Windows-1258", "CP1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "Big5-HKSCS:2008", "CP951", "stateless-ISO-2022-JP", "eucJP", "eucJP-ms", "euc-jp-ms", "CP51932", "eucKR", "eucTW", "GB2312", "EUC-CN", "eucCN", "GB12345", "CP936", "ISO-2022-JP", "ISO2022-JP", "ISO-2022-JP-2", "ISO2022-JP2", "CP50220", "CP50221", "ISO8859-1", "Windows-1252", "CP1252", "ISO8859-2", "Windows-1250", "CP1250", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "Windows-1256", "CP1256", "ISO8859-7", "Windows-1253", "CP1253", "ISO8859-8", "Windows-1255", "CP1255", "ISO8859-9", "Windows-1254", "CP1254", "ISO8859-10", "ISO8859-11", "TIS-620", "Windows-874", "CP874", "ISO8859-13", "Windows-1257", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "Windows-31J", "CP932", "csWindows31J", "SJIS", "PCK", "MacJapanese", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "UTF-7", "CP65000", "CP65001", "UTF8-MAC", "UTF-8-MAC", "UTF-8-HFS", "UTF-16", "UTF-32", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP1251", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "locale", "external", "filesystem", "internal"] (optional), default: "UTF-8"
- format => ... # string (optional)
- }
- }
-}
-
-
-The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -Set the message you wish to emit for each event. This supports sprintf -strings.</p>
- -This setting only affects outputs (encoding of events).
- - -# with an input plugin:
-# you can also use this codec with an output.
-input {
- file {
- codec => rubydebug {
- }
- }
-}
-
-
-# with an input plugin:
-# you can also use this codec with an output.
-input {
- file {
- codec => spool {
- spool_size => ... # number (optional), default: 50
- }
- }
-}
-
-
-INFORMATION:</p>
-The advisor filter is designed to capture and correlate events.
-Events must first be selected by a grep filter; advisor can then push out a copy of the first occurrence of each distinct event, tagged "advisor_first",
-much like the clone filter.
-After time_adv has elapsed, advisor pushes out an event tagged "advisor_info" that reports how many identical events were seen during that interval.
-INFORMATION ABOUT CLASS:
-To do this job, the filter uses a thread that sleeps for time_adv. It assumes that events arriving at advisor are tagged, and it uses an array to store the distinct events.
-If an event is not yet present in the array, it is the first of its kind; if the option is enabled, advisor pushes out a copy of it.
-Otherwise the event repeats one already seen, and advisor simply counts it.
-USAGE:
-This is an example of logstash config:
-filter {
-  advisor {
-    time_adv => 1       # (optional)
-    send_first => true  # (optional)
-  }
-}
-</p>
-</pre>
-Let's analyze this:
-time_adv => 1
-Means the interval after which the matched and collected events are pushed to outputs with the tag "advisor_info".
-send_first => true
-Means that the first occurrence of each distinct event entering advisor is pushed out as a clone-like copy, tagged "advisor_first".</p>
- - -filter {
- advisor {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- send_first => ... # boolean (optional), default: true
- time_adv => ... # number (optional), default: 0
-}
-
-}
-
-
-If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- advisor {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- advisor {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. -Fields names can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- advisor {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- advisor {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -If you want the first different event will be pushed out like a copy
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -If you do not set time_adv the plugin does nothing.
-Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.</p>
- - -The alter filter allows you to do general alterations to fields -that are not included in the normal mutate filter.
- -NOTE: The functionality provided by this plugin is likely to -be merged into the 'mutate' filter in future versions.
- - -filter {
- alter {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- coalesce => ... # array (optional)
- condrewrite => ... # array (optional)
- condrewriteother => ... # array (optional)
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
-If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- alter {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- alter {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Sets the value of field_name to the first nonnull expression among its arguments.
- -Example:
- -filter {
- alter {
- coalesce => [
- "field_name", "value1", "value2", "value3", ...
- ]
- }
-}
-
-
-Change the content of the field to the specified value -if the actual content is equal to the expected one.
- -Example:
- -filter {
- alter {
- condrewrite => [
- "field_name", "expected_value", "new_value"
- "field_name2", "expected_value2, "new_value2"
- ....
- ]
- }
-}
-
-
-Change the content of the field to the specified value -if the content of another field is equal to the expected one.
- -Example:
- -filter {
- alter {
- condrewriteother => [
- "field_name", "expected_value", "field_name_to_change", "value",
- "field_name2", "expected_value2, "field_name_to_change2", "value2",
- ....
- ]
- }
-}
-
-
-Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. -Fields names can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- alter {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- alter {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
-Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.</p>
- -Anonymize fields by replacing values with a consistent hash.</p>
- - -filter {
- anonymize {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- algorithm => ... # string, one of ["SHA1", "SHA256", "SHA384", "SHA512", "MD5", "MURMUR3", "IPV4_NETWORK"] (required), default: "SHA1"
- fields => ... # array (required)
- key => ... # string (required)
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
-If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- anonymize {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- anonymize {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -digest/hash type
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -The fields to be anonymized
- -Hashing key -When using MURMUR3 the key is ignored but must still be set. -When using IPV4_NETWORK the key is the subnet prefix length</p>
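-For example (an illustrative sketch with an assumed field name, not from the original reference), truncating client addresses to their /24 network:
-
-filter {
-  anonymize {
-    fields => ["clientip"]        # hypothetical field name
-    algorithm => "IPV4_NETWORK"
-    key => "24"                   # for IPV4_NETWORK the key is the subnet prefix length
-  }
-}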
- -If this filter is successful, remove arbitrary fields from this event. -Fields names can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- anonymize {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- anonymize {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
-Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.</p>
- -This filter lets you create a checksum based on various parts -of the logstash event. -This can be useful for deduplication of messages or simply to provide -a custom unique identifier.</p>
- -This is VERY experimental and is largely a proof-of-concept
- - -filter {
- checksum {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- algorithm => ... # string, one of ["md5", "sha128", "sha256", "sha384"] (optional), default: "sha256"
- keys => ... # array (optional), default: ["message", "@timestamp", "type"]
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
-If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- checksum {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- checksum {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -A list of keys to use in creating the string to checksum. -Keys will be sorted before building the string; -keys and values will then be concatenated with pipe delimiters -and checksummed</p>
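-A minimal sketch (illustrative key choice, not from the original reference) that checksums on message and host only:
-
-filter {
-  checksum {
-    algorithm => "md5"
-    keys => ["message", "host"]   # sorted, then pipe-joined, then hashed
-  }
-}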
- -If this filter is successful, remove arbitrary fields from this event. -Fields names can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- checksum {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- checksum {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
-Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.</p>
- -This filter parses a source and applies a cipher or decipher before -storing it in the target.</p>
- - -filter {
- cipher {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- algorithm => ... # string (required)
- base64 => ... # boolean (optional), default: true
- cipher_padding => ... # string (optional)
- iv => ... # string (optional)
- key => ... # string (optional)
- key_pad => ... # (optional), default: "\x00"
- key_size => ... # number (optional), default: 32
- mode => ... # string (required)
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- source => ... # string (optional), default: "message"
- target => ... # string (optional), default: "message"
-}
-
-}
-
-
-If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- cipher {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- cipher {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -The cipher algorithm</p>
- -A list of supported algorithms can be obtained by
- -puts OpenSSL::Cipher.ciphers
-
-
-Do we have to perform a base64 decode or encode?
- -If we are decrypting, base64 decode will be done before. -If we are encrypting, base64 will be done after.
- -Cipher padding to use. Enables or disables padding.</p>
- -By default encryption operations are padded using standard block padding -and the padding is checked and removed when decrypting. If the pad -parameter is zero then no padding is performed, the total amount of data -encrypted or decrypted must then be a multiple of the block size or an -error will occur.
- -See EVP_CIPHER_CTX_set_padding for further information.</p>
- -We are using JRuby's OpenSSL, which defaults to PKCS5Padding. -If you want to change it, set this parameter. If you want to disable -it, set this parameter to 0</p>
-filter { cipher { cipher_padding => 0 } }
-
-
-Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -The initialization vector to use
- -The cipher modes CBC, CFB, OFB and CTR all need an "initialization -vector", or short, IV. ECB mode is the only mode that does not require -an IV, but there is almost no legitimate use case for this mode -because of the fact that it does not sufficiently hide plaintext patterns.
- -The key to use
- -The character used to pad the key
- -The key size to pad
- -It depends on the cipher algorithm. If your key doesn't need -padding, don't set this parameter.</p>
- -For example, AES-256 requires a key 32 characters long</p>
- -filter { cipher { key_size => 32 } }
-
-
-Encrypting or decrypting some data
- -Valid values are encrypt or decrypt
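-Putting the required options together, a sketch of an encrypting configuration (illustrative key, IV, and field names only; never reuse these values in production) might be:
-
-filter {
-  cipher {
-    mode => "encrypt"
-    algorithm => "aes-256-cbc"                    # one of OpenSSL::Cipher.ciphers
-    key => "0123456789abcdef0123456789abcdef"     # 32 chars, as AES-256 requires
-    iv => "1234567890123456"                      # 16-byte IV for CBC mode; illustrative only
-    source => "message"
-    target => "message_crypted"                   # hypothetical target field
-  }
-}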
- -If this filter is successful, remove arbitrary fields from this event. -Fields names can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- cipher {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- cipher {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -The field to perform the filter on</p>
- -For example, to use the message field (the default):</p>
- -filter { cipher { source => "message" } }
-
-
-Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -The name of the container to put the result
- -Example, to place the result into crypt :
- -filter { cipher { target => "crypt" } }
-
-
-Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.</p>
- - -The clone filter is for duplicating events. -A clone will be made for each type in the clone list. -The original event is left unchanged.
- - -filter {
- clone {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- clones => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
-If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- clone {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- clone {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -A new clone will be created with the given type for each type in this list.
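-For example (a sketch with illustrative type names, not from the original reference), the following emits two extra copies of every event alongside the unchanged original:
-
-filter {
-  clone {
-    clones => ["cloned-for-archive", "cloned-for-metrics"]  # hypothetical types
-  }
-}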
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. -Fields names can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- clone {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- clone {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
-Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.</p>
- - -CSV filter. Takes an event field containing CSV data, parses it, -and stores it as individual fields (can optionally specify the names).
- - -filter {
- csv {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- columns => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- separator => ... # string (optional), default: ","
- source => ... # string (optional), default: "message"
- target => ... # string (optional)
-}
-
-}
-
-
-If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- csv {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- csv {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Define a list of column names (in the order they appear in the CSV, -as if it were a header line). If this is not specified or there -are not enough columns specified, the default column name is "columnX" -(where X is the field number, starting from 1).
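-A sketch (illustrative column names and separator, not from the original reference) parsing a semicolon-separated line into named fields:
-
-filter {
-  csv {
-    source => "message"
-    separator => ";"
-    columns => ["timestamp", "level", "msg"]  # applied in order of appearance
-  }
-}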
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. -Fields names can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- csv {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- csv {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Define the column separator value. If this is not specified the default -is a comma ',' -Optional.
- -The CSV data in the value of the source field will be expanded into a -datastructure.
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Define target for placing the data -Defaults to writing to the root of the event.
-Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.</p>
- - -The date filter is used for parsing dates from fields and using that -date or timestamp as the timestamp for the event.
- -For example, syslog events usually have timestamps like this:
- -"Apr 17 09:32:01"
-
-
-You would use the date format "MMM dd HH:mm:ss" to parse this.
- -The date filter is especially important for sorting events and for -backfilling old data. If you don't get the date correct in your -event, then searching for them later will likely sort out of order.
- -In the absence of this filter, logstash will choose a timestamp based on the -first time it sees the event (at input time), if the timestamp is not already -set in the event. For example, with file input, the timestamp is set to the -time of each read.
- - -filter {
- date {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- locale => ... # string (optional)
- match => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- timezone => ... # string (optional)
-}
-
-}
-
-
-If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- date {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- date {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -specify a locale to be used for date parsing. If this is not specified the -platform default will be used
- -The locale is mostly necessary to be set for parsing month names and -weekday names
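-For instance (a sketch, assuming English month names in the source logs):
-
-filter {
-  date {
-    locale => "en"
-    match => [ "logdate", "MMM dd YYYY HH:mm:ss" ]
-  }
-}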
- -The date formats allowed are anything allowed by Joda-Time (java time -library): You can see the docs for this format here:
- -joda.time.format.DateTimeFormat
- -An array with field name first, and format patterns following, [ field,
-formats... ]
-If your time field has multiple possible formats, you can do this:
- -match => [ "logdate", "MMM dd YYY HH:mm:ss",
- "MMM d YYY HH:mm:ss", "ISO8601" ]
-
-
-The above will match a syslog (rfc3164) or iso8601 timestamp.
- -There are a few special exceptions, the following format literals exist -to help you save time and ensure correctness of date parsing.
- -For example, if you have a field 'logdate' with a value that looks like -'Aug 13 2010 00:03:44', you would use this configuration:</p>
- -filter {
- date {
- match => [ "logdate", "MMM dd YYYY HH:mm:ss" ]
- }
-}
-
-
-If this filter is successful, remove arbitrary fields from this event. -Fields names can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- date {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- date {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Specify a timezone canonical ID to be used for date parsing. -The valid IDs are listed on http://joda-time.sourceforge.net/timezones.html -Useful in case the timezone cannot be extracted from the value -and is not the platform default. -If this is not specified the platform default will be used. -Canonical IDs are good as they take care of daylight saving time for you. -For example, America/Los_Angeles or Europe/Paris are valid IDs</p>
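-A sketch (illustrative values) for logs written in Pacific time with no timezone marker in the timestamp itself:
-
-filter {
-  date {
-    match => [ "logdate", "MMM dd YYYY HH:mm:ss" ]
-    timezone => "America/Los_Angeles"
-  }
-}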
-Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.</p>
- - -DNS Filter
- -This filter will resolve any IP addresses from a field of your choosing.
- -The DNS filter performs a lookup (either an A record/CNAME record lookup -or a reverse lookup at the PTR record) on records specified under the -"reverse" and "resolve" arrays.
- -The config should look like this:
- -filter {
- dns {
- type => 'type'
- reverse => [ "source_host", "field_with_address" ]
- resolve => [ "field_with_fqdn" ]
- action => "replace"
- }
-}
-
-
-Caveats: at the moment, there's no way to tune the timeout with the 'resolv' -core library. It does seem to be fixed in here:
- -http://redmine.ruby-lang.org/issues/5100
- -but isn't currently in JRuby.
- - -filter {
- dns {
- action => ... # string, one of ["append", "replace"] (optional), default: "append"
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- nameserver => ... # string (optional)
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- resolve => ... # array (optional)
- reverse => ... # array (optional)
- timeout => ... # int (optional), default: 2
-}
-
-}
-
-
-Determine what action to do: append or replace the values in the fields -specified under "reverse" and "resolve."
- -If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- dns {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- dns {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -Use custom nameserver.
- -If this filter is successful, remove arbitrary fields from this event. -Fields names can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- dns {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- dns {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Forward resolve one or more fields.
- -Reverse resolve one or more fields.
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -TODO(sissel): make 'action' required? This was always the intent, but it -due to a typo it was never enforced. Thus the default behavior in past -versions was 'append' by accident. -resolv calls will be wrapped in a timeout instance
-Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.</p>
- - -Drop filter.
- -Drops everything that gets to this filter.
- -This is best used in combination with conditionals, for example:
- -filter {
- if [loglevel] == "debug" {
- drop { }
- }
-}
-
-
-The above will only pass events to the drop filter if the loglevel field is -"debug". This will cause all events matching to be dropped.
- - -filter {
- drop {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
-If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- drop {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- drop {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. -Fields names can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- drop {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- drop {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
-Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.</p>
- - -Set fields from environment variables
- - -filter {
- environment {
- add_field => ... # hash (optional), default: {}
- add_field_from_env => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
-If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- environment {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -Specify a hash mapping event fields to environment variables: -a hash of matches of field => environment variable name</p>
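-A sketch (hypothetical variable and field names, not from the original reference) copying the HOSTNAME environment variable into each event:
-
-filter {
-  environment {
-    add_field_from_env => [ "source_host", "HOSTNAME" ]
-  }
-}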
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- environment {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. -Fields names can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- environment {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- environment {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
-Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.</p>
- -The GELFify filter translates RFC3164 severity levels to -the corresponding GELF levels.</p>
- - -filter {
- gelfify {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
-If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- gelfify {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- gelfify {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. -Fields names can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- gelfify {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- gelfify {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
-Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.</p>
- - -Add GeoIP fields from Maxmind database
- -The GeoIP filter adds information about the geographical location of IP addresses. -This filter uses Maxmind GeoIP databases; have a look at -https://www.maxmind.com/app/geolite</p>
- -Logstash releases ship with the GeoLiteCity database made available from -Maxmind with a CCA-ShareAlike 3.0 license. For more details on geolite, see -http://www.maxmind.com/en/geolite.
- - -filter {
- geoip {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- database => ... # a valid filesystem path (optional)
- fields => ... # array (optional)
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- source => ... # string (optional)
- target => ... # string (optional), default: "geoip"
-}
-
-}
-
-
-If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- geoip {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- geoip {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -GeoIP database file to use. Country, City, ASN, ISP and Organization -databases are supported</p>
- -If not specified, this will default to the GeoLiteCity database that ships -with logstash.
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -Array of geoip fields that we want to be included in our event.
- -Possible fields depend on the database type. By default, all geoip fields -are included in the event.
- -For the built in GeoLiteCity database, the following are available: -city_name, continent_code, country_code2, country_code3, country_name, -dma_code, ip, latitude, longitude, postal_code, region_name, timezone
- -If this filter is successful, remove arbitrary fields from this event. -Fields names can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- geoip {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- geoip {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -The field containing the IP address; a hostname is also OK. If this field is an -array, only the first value will be used.</p>
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Specify into what field you want the geoip data. -This can be useful, for example, if you have a src_ip and dst_ip and want -information for both IPs</p>
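-A sketch (illustrative field names) resolving both ends of a connection into separate subtrees:
-
-filter {
-  geoip {
-    source => "src_ip"      # hypothetical field names
-    target => "src_geoip"
-  }
-  geoip {
-    source => "dst_ip"
-    target => "dst_geoip"
-  }
-}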
-Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.</p>
- - -Grep filter. Useful for dropping events you don't want to pass, or -adding tags or fields to events that match.
- -Events not matched are dropped. If 'negate' is set to true (defaults false), -then matching events are dropped.
- - -filter {
- grep {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- drop => ... # boolean (optional), default: true
- ignore_case => ... # boolean (optional), default: false
- match => ... # hash (optional), default: {}
- negate => ... # boolean (optional), default: false
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
-If this filter is successful, add any arbitrary fields to this event. -Tags can be dynamic and include parts of the event using the %{field} -Example:
- -filter {
- grep {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- grep {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Drop events that don't match
- -If this is set to false, no events will be dropped at all. Rather, the -requested tags and fields will be added to matching events, and -non-matching events will be passed through unchanged.
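-For example (a sketch with a hypothetical pattern, not from the original reference), tagging matching events while letting everything through:
-
-filter {
-  grep {
-    drop => false
-    match => [ "message", "login failed" ]   # hypothetical pattern
-    add_tag => [ "auth_failure" ]
-  }
-}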
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -Use case-insensitive matching. Similar to 'grep -i'
- -If enabled, ignore case distinctions in the patterns.
- -A hash of matches of field => regexp. If multiple matches are specified, -all must match for the grep to be considered successful. Normal regular -expressions are supported here.
- -For example:
- -filter {
- grep {
- match => [ "message", "hello world" ]
- }
-}
-
-
-The above will drop all events with a message not matching "hello world" as -a regular expression.
- -Negate the match. Similar to 'grep -v'
- -If this is set to true, then any positive matches will result in the -event being cancelled and dropped. Non-matching will be allowed -through.
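- -For example, a sketch that drops any event whose message matches, letting -everything else through:
- -filter {
- grep {
- match => [ "message", "DEBUG" ]
- negate => true
- }
-}
-
-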
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- grep {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- grep {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -Parse arbitrary text and structure it.
- -Grok is currently the best way in logstash to parse crappy unstructured log -data into something structured and queryable.
- -This tool is perfect for syslog logs, apache and other webserver logs, mysql -logs, and in general, any log format that is generally written for humans -and not computer consumption.
- -Logstash ships with about 120 patterns by default. You can find them here: -https://github.com/logstash/logstash/tree/v1.2.0.beta1/patterns. You can add -your own trivially. (See the patterns_dir setting)
- -If you need help building patterns to match your logs, you will find the -tool at http://grokdebug.herokuapp.com quite useful!
- -Grok works by combining text patterns into something that matches your -logs.
- -The syntax for a grok pattern is %{SYNTAX:SEMANTIC}.
- -The SYNTAX is the name of the pattern that will match your text. For -example, "3.44" will be matched by the NUMBER pattern and "55.3.244.1" will -be matched by the IP pattern. The syntax is how you match.
- -The SEMANTIC is the identifier you give to the piece of text being matched. -For example, "3.44" could be the duration of an event, so you could call it -simply 'duration'. Further, a string "55.3.244.1" might identify the client -making a request.
- -Optionally you can add a data type conversion to your grok pattern. By default -all semantics are saved as strings. If you wish to convert a semantic's data type, -for example change a string to an integer, then suffix it with the target data type. -For example, %{NUMBER:num:int} converts the 'num' semantic from a string to an -integer. Currently the only supported conversions are int and float.
- -With that idea of a syntax and semantic, we can pull out useful fields from a -sample log like this fictional http request log:
- -55.3.244.1 GET /index.html 15824 0.043
-
-
-The pattern for this could be:
- -%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}
-
-
-A more realistic example, let's read these logs from a file:
- -input {
- file {
- path => "/var/log/http.log"
- type => "examplehttp"
- }
-}
-filter {
- grok {
- type => "examplehttp"
- match => [ "message", "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" ]
- }
-}
-
-
- -After the grok filter, the event will have a few extra fields in it:
- - client: 55.3.244.1
- - method: GET
- - request: /index.html
- - bytes: 15824
- - duration: 0.043
- -Grok sits on top of regular expressions, so any regular expressions are valid -in grok as well. The regular expression library is Oniguruma, and you can see -the full supported regexp syntax on the Oniguruma -site.
- -Sometimes logstash doesn't have a pattern you need. For this, you have -a few options.
- -First, you can use the Oniguruma syntax for 'named capture' which will -let you match a piece of text and save it as a field:
- -(?<field_name>the pattern here)
-
-
-For example, postfix logs have a 'queue id' that is an 11-character -hexadecimal value. I can capture that easily like this:
- -(?<queue_id>[0-9A-F]{11})
-
-
-Alternately, you can create a custom patterns file.
- -First, create a directory called patterns with a file in it called extra -(the file name doesn't matter, but name it meaningfully for yourself).
- -For example, doing the postfix queue id example as above:
- -# in ./patterns/postfix
-POSTFIX_QUEUEID [0-9A-F]{11}
-
-
-Then use the patterns_dir
setting in this plugin to tell logstash where
-your custom patterns directory is. Here's a full example with a sample log:
- -Jan 1 06:25:43 mailserver14 postfix/cleanup[21403]: BEF25A72965: message-id=<20130101142543.5828399CCAF@mailserver14.example.com>
-
-filter {
- grok {
- patterns_dir => "./patterns"
- match => [ "message", "%{SYSLOGBASE} %{POSTFIX_QUEUEID:queue_id}: %{GREEDYDATA:message}" ]
- }
-}
-
-
- -The above will match and result in the following fields:
- - timestamp: Jan 1 06:25:43
- - logsource: mailserver14
- - program: postfix/cleanup
- - pid: 21403
- - queue_id: BEF25A72965
- - message: message-id=<20130101142543.5828399CCAF@mailserver14.example.com>
- -The timestamp, logsource, program, and pid fields come from the -SYSLOGBASE pattern which itself is defined by other patterns.
- - -filter {
- grok {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- break_on_match => ... # boolean (optional), default: true
- drop_if_match => ... # boolean (optional), default: false
- keep_empty_captures => ... # boolean (optional), default: false
- match => ... # hash (optional), default: {}
- named_captures_only => ... # boolean (optional), default: true
- overwrite => ... # array (optional), default: []
- patterns_dir => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- singles => ... # boolean (optional), default: true
- tag_on_failure => ... # array (optional), default: ["_grokparsefailure"]
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- grok {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- grok {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Break on first match. The first successful match by grok will result in the -filter being finished. If you want grok to try all patterns (maybe you are -parsing different things), then set this to false.
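- -For example, a sketch that makes grok try the pattern for each listed field on -every event rather than stopping at the first success (assuming your events -carry both a message and a path field):
- -filter {
- grok {
- match => [ "message", "%{IP:client}",
- "path", "%{GREEDYDATA:file}" ]
- break_on_match => false
- }
-}
-
-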
- -Drop if matched. Note, this feature may not stay. It is preferable to combine -grok + grep filters to do parsing + dropping.
- -requested in: googlecode/issue/26
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If true, keep empty captures as event fields.
- -A hash of matches of field => value
- -For example:
- -filter {
- grok {
- match => [ "message", "Duration: %{NUMBER:duration}" ]
- }
-}
-
-
-If true, only store named captures from grok.
- -The fields to overwrite.
- -This allows you to overwrite a value in a field that already exists.
- -For example, if you have a syslog line in the 'message' field, you can -overwrite the 'message' field with part of the match like so:
- -filter {
- grok {
- match => [
- "message",
- "%{SYSLOGBASE} %{DATA:message}
- ]
- overwrite => [ "message" ]
- }
-}
-
-
-In this case, a line like "May 29 16:37:11 sadness logger: hello world" - will be parsed and 'hello world' will overwrite the original message.
- -Specify a pattern to parse with. This will match the 'message' field.
- -If you want to match other fields than message, use the 'match' setting. -Multiple patterns are fine.
- -logstash ships by default with a bunch of patterns, so you don't -necessarily need to define this yourself unless you are adding additional -patterns.
- -Pattern files are plain text with format:
- -NAME PATTERN
-
-
-For example:
- -NUMBER \d+
-
-
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- grok {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- grok {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -If true, make single-value fields simply that value, not an array -containing that one value.
- -If true, ensure the '_grokparsefailure' tag is present when there has been no -successful match
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -TODO(sissel): This is not supported yet. There is a bug in grok discovery -that causes segfaults in libgrok.
- - -filter {
- grokdiscovery {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- grokdiscovery {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- grokdiscovery {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- grokdiscovery {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- grokdiscovery {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -JSON filter. Takes a field that contains JSON and expands it into -an actual datastructure.
- - -filter {
- json {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- source => ... # string (required)
- target => ... # string (optional)
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- json {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- json {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- json {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- json {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Config for json is:
- -source => source_field
-
-
- -For example, if you have json data in the message field:
- -filter {
- json {
- source => "message"
- }
-}
-
-
- -The above would parse the json from the message field.
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Define target for placing the data. If this setting is omitted, -the json data will be stored at the root of the event.
- -For example if you want the data to be put in the 'doc' field:
- -filter {
- json {
- target => "doc"
- }
-}
-
-
-json in the value of the source field will be expanded into a -datastructure in the "target" field.
- -Note: if the "target" field already exists, it will be overwritten.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -JSON encode filter. Takes a field and serializes it into JSON
- - -filter {
- json_encode {
- /[A-Za-z0-9_@-]+/ => ... # string (optional)
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
-Config for json_encode is:
- -For example, if you have a field named 'foo', and you want to store the -JSON encoded string in 'bar', do this:
- -filter {
- json_encode {
- "foo" => "bar"
- }
-}
-
-
-Note: if the "dest" field already exists, it will be overridden.
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- json_encode {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- json_encode {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- json_encode {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- json_encode {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -This filter helps automatically parse messages which are of the 'foo=bar' -variety.
- -For example, if you have a log message which contains 'ip=1.2.3.4 -error=REFUSED', you can parse those automatically by doing:
- -filter {
- kv { }
-}
-
-
- -The above will result in a message of "ip=1.2.3.4 error=REFUSED" having -the fields:
- - ip: 1.2.3.4
- - error: REFUSED
- -This is great for postfix, iptables, and other types of logs that -tend towards 'key=value' syntax.
- -Further, this can often be used to parse query parameters like -'foo=bar&baz=fizz' by setting the field_split to "&"
- - -filter {
- kv {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- default_keys => ... # hash (optional), default: {}
- exclude_keys => ... # array (optional), default: []
- field_split => ... # string (optional), default: " "
- include_keys => ... # array (optional), default: []
- prefix => ... # string (optional), default: ""
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- source => ... # string (optional), default: "message"
- target => ... # string (optional)
- trim => ... # string (optional)
- trimkey => ... # string (optional)
- value_split => ... # string (optional), default: "="
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- kv {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- kv {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -A hash that specifies the default keys and their values that should be added to the event -in case these keys do not exist in the source field being parsed.
- -filter {
- kv {
- default_keys => [ "from", "logstash@example.com",
- "to", "default@dev.null" ]
- }
-}
-
-
- -An array that specifies the parsed keys which should not be added to the event. -By default no keys will be excluded.
- -Example, to exclude the "from" and "to" keys from the parsed result:
- -filter {
- kv {
- exclude_keys => [ "from", "to" ]
- }
-}
-
-
-Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -A string of characters to use as delimiters for parsing out key-value pairs.
- -These characters form a regex character class and thus you must escape special regex -characters like [ or ] using \.
- -Example, to split out the args from a url query string such as -'?pin=12345~0&d=123&e=foo@bar.com&oq=bobo&ss=12345':
- -filter {
- kv {
- field_split => "&?"
- }
-}
-
-
- -The above splits on both "&" and "?" characters, giving you the following -fields:
- - pin: 12345~0
- - d: 123
- - e: foo@bar.com
- - oq: bobo
- - ss: 12345
- -An array that specifies the parsed keys which should be added to the event. -By default all keys will be added.
- -Example, to include only the "from" and "to" keys from the parsed result:
- -filter {
- kv {
- include_keys => [ "from", "to" ]
- }
-}
-
-
-A string to prepend to all of the extracted keys
- -Example, to prepend arg_ to all keys:
- -filter { kv { prefix => "arg_" } }
-
-
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- kv {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- kv {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -The field to perform 'key=value' searching on.
- -Example, to use the message field:
- -filter { kv { source => "message" } }
-
-
-Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -The name of the container to put all of the key-value pairs into
- -If this setting is omitted, fields will be written to the root of the -event.
- -Example, to place all keys into field kv:
- -filter { kv { target => "kv" } }
-
-
-A string of characters to trim from the value. This is useful if your -values are wrapped in brackets or are terminated by comma (like postfix -logs)
- -These characters form a regex character class and thus you must escape special regex -characters like [ or ] using \.
- -Example, to strip '<' '>' '[' ']' and ',' characters from values:
- -filter {
- kv {
- trim => "<>\[\],"
- }
-}
-
-
- -A string of characters to trim from the key. This is useful if your -keys are wrapped in brackets or start with spaces.
- -These characters form a regex character class and thus you must escape special regex -characters like [ or ] using \.
- -Example, to strip '<' '>' '[' ']' and ',' characters from keys:
- -filter {
- kv {
- trimkey => "<>\[\],"
- }
-}
-
-
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -A string of characters to use as delimiters for identifying key-value relations.
- -These characters form a regex character class and thus you must escape special regex -characters like [ or ] using \.
- -Example, to identify key-values such as -'key1:value1 key2:value2':
- -filter { kv { value_split => ":" } }
-
-
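- -Putting several of these settings together, a sketch that parses a -postfix-style line such as "from=<a@example.com>, size=2048" (the sample line -is illustrative):
- -filter {
- kv {
- field_split => ", "
- value_split => "="
- trim => "<>"
- }
-}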
-
-filter {
- metaevent {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- followed_by_tags => ... # array (required)
- period => ... # number (optional), default: 5
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- metaevent {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- metaevent {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -syntax: followed_by_tags => [ "tag", "tag" ]
syntax: period => 60
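- -A minimal configuration sketch assembling the two options above (the tag -names are placeholders):
- -filter {
- metaevent {
- followed_by_tags => [ "success", "failure" ]
- period => 60
- }
-}
-
-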
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- metaevent {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- metaevent {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -The metrics filter is useful for aggregating metrics.
- -For example, if you have a field 'response' that is -a http response code, and you want to count each -kind of response, you can do this:
- -filter {
- metrics {
- meter => [ "http.%{response}" ]
- add_tag => "metric"
- }
-}
-
-
-Metrics are flushed every 5 seconds. Metrics appear as -new events in the event stream and go through any filters -that occur after as well as outputs.
- -In general, you will want to add a tag to your metrics and have an output -explicitly look for that tag.
- -The event that is flushed will include every 'meter' and 'timer' -metric in the following way:
- -For a meter => "something"
you will receive the following fields:
For a timer => [ "thing", "%{duration}" ]
you will receive the following fields:
For a simple example, let's track how many events per second are running -through logstash:
- -input {
- generator {
- type => "generated"
- }
-}
-
-filter {
- metrics {
- type => "generated"
- meter => "events"
- add_tag => "metric"
- }
-}
-
-output {
- stdout {
- # only emit events with the 'metric' tag
- tags => "metric"
- message => "rate: %{events.rate_1m}"
- }
-}
-
-
-Running the above:
- -% java -jar logstash.jar agent -f example.conf
-rate: 23721.983566819246
-rate: 24811.395722536377
-rate: 25875.892745934525
-rate: 26836.42375967113
-
-
-We see the output includes our 'events' 1-minute rate.
- -In the real world, you would emit this to graphite or another metrics store, -like so:
- -output {
- graphite {
- metrics => [ "events.rate_1m", "%{events.rate_1m}" ]
- }
-}
-
-
-
-filter {
- metrics {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- ignore_older_than => ... # number (optional), default: 0
- meter => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- timer => ... # hash (optional), default: {}
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- metrics {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- metrics {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -Don't track events that have @timestamp older than some number of seconds.
- -This is useful if you want to only include events that are near real-time -in your metrics.
- -Example, to only count events that are within 10 seconds of real-time, you -would do this:
- -filter {
- metrics {
- meter => [ "hits" ]
- ignore_older_than => 10
- }
-}
-
-
-syntax: meter => [ "name of metric", "name of metric" ]
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- metrics {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- metrics {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -syntax: timer => [ "name of metric", "%{time_value}" ]
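- -For example, a sketch that times each request using a duration field from the -event (the duration field is an assumption about your events):
- -filter {
- metrics {
- timer => [ "request_time", "%{duration}" ]
- add_tag => "metric"
- }
-}
-
-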
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -See the multiline codec instead.
- - -filter {
- multiline {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- negate => ... # boolean (optional), default: false
- pattern => ... # string (required)
- patterns_dir => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- source => ... # string (optional), default: "message"
- stream_identity => ... # string (optional), default: "%{host}-%{path}-%{type}"
- what => ... # string, one of ["previous", "next"] (required)
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- multiline {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- multiline {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -Leave these config settings until we remove this filter entirely. -The idea is that we want the register method to cause an abort, -giving the user a clue to use the codec instead of the filter.
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- multiline {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- multiline {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -The mutate filter allows you to do general mutations to fields. You -can rename, remove, replace, and modify fields in your events.
- -TODO(sissel): Support regexp replacements like String#gsub ?
- - -filter {
- mutate {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- convert => ... # hash (optional)
- gsub => ... # array (optional)
- join => ... # hash (optional)
- lowercase => ... # array (optional)
- merge => ... # hash (optional)
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- rename => ... # hash (optional)
- replace => ... # hash (optional)
- split => ... # hash (optional)
- strip => ... # array (optional)
- update => ... # hash (optional)
- uppercase => ... # array (optional)
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- mutate {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- mutate {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Convert a field's value to a different type, like turning a string to an -integer. If the field value is an array, all members will be converted. -If the field is a hash, no action will be taken.
- -Valid conversion targets are: integer, float, string
- -Example:
- -filter {
- mutate {
- convert => [ "fieldname", "integer" ]
- }
-}
-
-
-Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -Convert a string field by applying a regular expression and a replacement. -If the field is not a string, no action will be taken.
- -This configuration takes an array consisting of 3 elements per -field/substitution.
- -Be aware of escaping any backslash in the config file.
- -For example:
- -filter {
- mutate {
- gsub => [
- # replace all forward slashes with underscore
- "fieldname", "/", "_",
-
- # replace backslashes, question marks, hashes, and minuses with
- # dot
- "fieldname2", "[\\?#-]", "."
- ]
- }
-}
-
-
- -Join an array with a separator character. This does nothing on non-array fields.
- -Example:
- -filter {
- mutate {
- join => ["fieldname", ","]
- }
-}
-
-
- -Convert a string to its lowercase equivalent
- -Example:
- -filter {
- mutate {
- lowercase => [ "fieldname" ]
- }
-}
-
-
- -Merge two fields, of arrays or hashes. -String fields will be converted into an array, so: - array + string will work - string + string will result in a 2-entry array in dest_field - array and hash will not work
- -Example:
- -filter {
- mutate {
- merge => ["dest_field", "added_field"]
- }
-}
-
-
-Remove one or more fields.
- -Example:
- -filter {
- mutate {
- remove => [ "client" ] # Removes the 'client' field
- }
-}
-
-
-This option is deprecated, instead use remove_field option available in all -filters.
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- mutate {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- mutate {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Rename one or more fields.
- -Example:
- -filter {
- mutate {
- # Renames the 'HOSTORIP' field to 'client_ip'
- rename => [ "HOSTORIP", "client_ip" ]
- }
-}
-
-
-Replace a field with a new value. The new value can include %{foo} strings -to help you build a new value from other parts of the event.
- -Example:
- -filter {
- mutate {
- replace => [ "message", "%{source_host}: My new message" ]
- }
-}
-
-
-Split a field to an array using a separator character. Only works on string -fields.
- -Example:
- -filter {
- mutate {
- split => ["fieldname", ","]
- }
-}
-
-
- -Strip whitespace from fields.
- -Example:
- -filter {
- mutate {
- strip => ["field1", "field2"]
- }
-}
-
-
-Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -Update an existing field with a new value. If the field does not exist, -then no action will be taken.
- -Example:
- -filter {
- mutate {
- update => [ "sample", "My new message" ]
- }
-}
-
-
-Convert a string to its uppercase equivalent
- -Example:
- -filter {
- mutate {
- uppercase => [ "fieldname" ]
- }
-}
-
-
-
-No-op filter. This is used generally for internal/dev testing.
- - -filter {
- noop {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- noop {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- noop {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- noop {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- noop {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -The prune filter is for pruning event data from @fields based on a whitelist/blacklist -of field names or their values (names and values can also be regular expressions).
- - -filter {
- prune {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- blacklist_names => ... # array (optional), default: ["%{[^}]+}"]
- blacklist_values => ... # hash (optional), default: {}
- interpolate => ... # boolean (optional), default: false
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- whitelist_names => ... # array (optional), default: []
- whitelist_values => ... # hash (optional), default: {}
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- prune {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- prune {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Exclude fields whose names match the specified regexps; by default, exclude unresolved %{field} strings.
- -filter {
- prune {
- tags => [ "apache-accesslog" ]
- blacklist_names => [ "method", "(referrer|status)", "${some}_field" ]
- }
-}
-
-
- -Exclude specified fields if their values match regexps. -In case field values are arrays, the fields are pruned per array item; -in case all array items are matched, the whole field will be deleted.
- -filter {
- prune {
- tags => [ "apache-accesslog" ]
- blacklist_values => [ "uripath", "/index.php",
- "method", "(HEAD|OPTIONS)",
- "status", "^[^2]" ]
- }
-}
-
-
-Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -Trigger whether configuration fields and values should be interpolated for -dynamic values. -Probably adds some performance overhead. Defaults to false.
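- -For example, a sketch that enables interpolation so a dynamic entry in the -whitelist is resolved per event (the "${some}_field" entry mirrors the -examples above):
- -filter {
- prune {
- whitelist_names => [ "method", "${some}_field" ]
- interpolate => true
- }
-}
-
-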
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- prune {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- prune {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -Include fields only if their names match the specified regexps; defaults to an empty list, which means include everything.
- -filter {
- prune {
- tags => [ "apache-accesslog" ]
- whitelist_names => [ "method", "(referrer|status)", "${some}_field" ]
- }
-}
-
-
- -Include specified fields only if their values match regexps. -In case field values are arrays, the fields are pruned per array item; -thus only matching array items will be included.
- -filter {
- prune {
- tags => [ "apache-accesslog" ]
- whitelist_values => [ "uripath", "/index.php",
- "method", "(GET|POST)",
- "status", "^[^2]" ]
- }
-}
-
-
-
- -Parallel request filter.
- -This filter will separate out the parallel requests into separate events.
- - -filter {
- railsparallelrequest {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- railsparallelrequest {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- railsparallelrequest {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- railsparallelrequest {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- railsparallelrequest {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -This filter is used to check that certain fields are within expected size/length ranges. -Supported types are numbers and strings. -Numbers are checked to be within a numeric value range. -Strings are checked to be within a string length range. -More than one range can be specified for the same fieldname; actions will be applied incrementally. -When a field value is within a specified range, an action will be taken. -Supported actions are: drop the event, add a tag, or add a field with a specified value.
- -Example use cases are histogram-like tagging of events, -finding anomalous values in fields, or dropping events that are too big.
- - -filter {
- range {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- negate => ... # boolean (optional), default: false
- ranges => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- range {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- range {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -Negate the range match logic; events should be outside of the specified range to match.
- -An array of field, min, max, action tuples. -Example:
- -filter {
- range {
- ranges => [ "message", 0, 10, "tag:short",
- "message", 11, 100, "tag:medium",
- "message", 101, 1000, "tag:long",
- "message", 1001, 1e1000, "drop",
- "duration", 0, 100, "field:latency:fast",
- "duration", 101, 200, "field:latency:normal",
- "duration", 201, 1000, "field:latency:slow",
- "duration", 1001, 1e1000, "field:latency:outlier"
- "requests", 0, 10, "tag:to_few_%{source}_requests" ]
- }
-}
-
-
- -Supported actions are drop, tag, or field with a specified value. -Added tag names, field names, and field values can have %{dynamic} values.
- -TODO(piavlo): The action syntax is ugly at the moment due to logstash grammar limitations - arrays grammar should support -TODO(piavlo): simple non-nested hashes as values in addition to numeric and string values to prettify the syntax.
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- range {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- range {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -Execute ruby code.
- -For example, to cancel 90% of events, you can do this:
- -filter {
- ruby {
- # Cancel 90% of events
- code => "event.cancel if rand <= 0.90"
- }
-}
-
-
-
-filter {
- ruby {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- code => ... # string (required)
- init => ... # string (optional)
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- ruby {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- ruby {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -The code to execute for every event. -You will have an 'event' variable available that is the event itself.
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -Any code to execute at logstash startup-time
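- -For example, a sketch that uses init to set up state once and code to use it -on every event (the event_number field name is illustrative):
- -filter {
- ruby {
- init => "@counter = 0"
- code => "@counter += 1; event['event_number'] = @counter"
- }
-}
-
-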
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- ruby {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- ruby {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) -must be met in order for the event to be handled by the filter. -The type to act on. If a type is given, then this filter will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -Sleep a given amount of time. This will cause logstash -to stall for the given amount of time. This is useful -for rate limiting, etc.
- - -filter {
- sleep {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- every => ... # string (optional), default: 1
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- replay => ... # boolean (optional), default: false
- time => ... # string (optional)
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. -Fields can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- sleep {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add field "foo_hello" if it is present, with the -value above and the %{source} piece replaced with that value from the -event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- sleep {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Sleep on every N'th. This option is ignored in replay mode.
- -Example:
- -filter {
- sleep {
- time => "1" # Sleep 1 second
- every => 10 # on every 10th event
- }
-}
-
-
-Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. -Field names can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- sleep {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- sleep {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Enable replay mode.
- -Replay mode tries to sleep based on timestamps in each event.
- -The amount of time to sleep is computed by subtracting the -previous event's timestamp from the current event's timestamp. -This helps you replay events in the same timeline as original.
- -If you specify a time setting as well, this filter will use the time value as a speed modifier. For example, a time value of 2 will replay at double speed, while a value of 0.25 will replay at 1/4th speed.
- -For example:
- -filter {
- sleep {
- time => 2
- replay => true
- }
-}
-
-
- -The above will sleep in such a way that it will perform the replay 2 times faster than the original time speed.
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -The length of time to sleep, in seconds, for every event.
- -This can be a number (eg, 0.5), or a string (eg, "%{foo}") -The second form (string with a field value) is useful if -you have an attribute of your event that you want to use -to indicate the amount of time to sleep.
- -Example:
- -filter {
- sleep {
- # Sleep 1 second for every event.
- time => "1"
- }
-}
-
-
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.
- - -The split filter is for splitting multiline messages into separate events.
- -An example use case of this filter is for taking output from the 'exec' input -which emits one event for the whole output of a command and splitting that -output by newline - making each line an event.
- -The end result of each split is a complete copy of the event -with only the current split section of the given field changed.
- - -filter {
- split {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- field => ... # string (optional), default: "message"
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- terminator => ... # string (optional), default: "\n"
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- split {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
- -If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello" with the value above, the %{source} piece replaced with that value from the event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- split {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -The field whose value is split by the terminator.
- -If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- split {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- split {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -The string to split on. This is usually a line terminator, but can be any -string.
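- -For example, a minimal sketch (field name and terminator hypothetical) that splits a comma-separated 'message' into one event per item:
- -filter {
- split {
- # Each comma-separated item in 'message' becomes its own event.
- field => "message"
- terminator => ","
- }
-}
-
-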
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.
- - -Filter plugin for logstash to parse the PRI field from the front -of a Syslog (RFC3164) message. If no priority is set, it will -default to 13 (per RFC).
- -This filter is based on the original syslog.rb code shipped -with logstash.
- - -filter {
- syslog_pri {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- facility_labels => ... # array (optional), default: ["kernel", "user-level", "mail", "daemon", "security/authorization", "syslogd", "line printer", "network news", "uucp", "clock", "security/authorization", "ftp", "ntp", "log audit", "log alert", "clock", "local0", "local1", "local2", "local3", "local4", "local5", "local6", "local7"]
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- severity_labels => ... # array (optional), default: ["emergency", "alert", "critical", "error", "warning", "notice", "informational", "debug"]
- syslog_pri_field_name => ... # string (optional), default: "syslog_pri"
- use_labels => ... # boolean (optional), default: true
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- syslog_pri {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
- -If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello" with the value above, the %{source} piece replaced with that value from the event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- syslog_pri {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -Labels for facility levels. This comes from RFC3164.
- -If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- syslog_pri {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- syslog_pri {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Labels for severity levels. This comes from RFC3164.
- -Name of the field which contains the extracted PRI part of the syslog message.
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.
- -Add human-readable names after parsing severity and facility from PRI.
- - -Originally written to translate HTTP response codes, but turned into a general translation tool which uses a configured hash and/or YAML files as a dictionary. Response codes in the default dictionary were scraped from 'gem install cheat; cheat status_codes'.
- -Alternatively, for simple string searches and replacements over just a few values, use the gsub function of the mutate filter.
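- -For instance, a minimal mutate sketch (field and patterns hypothetical) that replaces every "foo" with "bar":
- -filter {
- mutate {
- # gsub takes triples of field name, pattern, replacement.
- gsub => [ "message", "foo", "bar" ]
- }
-}
-
-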
- - -filter {
- translate {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- destination => ... # string (optional), default: "translation"
- dictionary => ... # hash (optional), default: {}
- dictionary_path => ... # a valid filesystem path (optional)
- exact => ... # boolean (optional), default: true
- fallback => ... # string (optional)
- field => ... # string (required)
- override => ... # boolean (optional), default: false
- regex => ... # boolean (optional), default: false
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- translate {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
- -If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello" with the value above, the %{source} piece replaced with that value from the event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- translate {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -The destination field you wish to populate with the translation. The default is "translation". Set this to the same value as the source field if you want to do a substitution, in which case the filter will always succeed.
- -Dictionary to use for translation. -Example:
- -filter {
- translate {
- dictionary => [ "100", "Continue",
- "101", "Switching Protocols",
- "200", "OK",
- "201", "Created",
- "202", "Accepted" ]
- }
-}
-
-
-The name, with full path, of the external dictionary file. The format of the table should be a YAML file which will be merged with the @dictionary. Make sure you encase any integer-based keys in quotes. The YAML file should look something like this:
-100: Continue
-101: Switching Protocols
-
-
- -Set to false if you want to match multiple terms. A large dictionary could get expensive if set to false.
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -In case no translation was made, add a default translation string.
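- -A minimal sketch (field name and fallback string hypothetical):
- -filter {
- translate {
- field => "response"
- # Used when no dictionary entry matches.
- fallback => "unknown"
- }
-}
-
-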
- -The field containing a response code. If this field is an array, only the first value will be used.
- -If the destination field already exists, should we skip the translation (default) or override it with the new translation?
- -Treat dictionary keys as regular expressions to match against; used only when @exact is enabled.
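- -A sketch of regex matching (field name and pattern hypothetical):
- -filter {
- translate {
- field => "response"
- exact => true
- regex => true
- # Dictionary keys are treated as regular expressions.
- dictionary => [ "^5[0-9][0-9]", "server error" ]
- }
-}
-
-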
- -If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- translate {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- translate {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.
- - -The urldecode filter is for decoding fields that are urlencoded.
- - -filter {
- urldecode {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- all_fields => ... # boolean (optional), default: false
- field => ... # string (optional), default: "message"
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- urldecode {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
- -If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello" with the value above, the %{source} piece replaced with that value from the event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- urldecode {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Urldecode all fields
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -The field whose value is urldecoded.
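- -A minimal sketch (field name hypothetical):
- -filter {
- urldecode {
- field => "request_uri"
- }
-}
-
-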
- -If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- urldecode {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- urldecode {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.
- - -Parse user agent strings into structured data based on BrowserScope data
- -The useragent filter adds information about the user agent, such as family, operating system, version, and device.
- -Logstash releases ship with the regexes.yaml database made available from -ua-parser with an Apache 2.0 license. For more details on ua-parser, see -https://github.com/tobie/ua-parser/.
- - -filter {
- useragent {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- prefix => ... # string (optional), default: ""
- regexes => ... # string (optional)
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- source => ... # string (required)
- target => ... # string (optional)
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- useragent {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
- -If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello" with the value above, the %{source} piece replaced with that value from the event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- useragent {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -A string to prepend to all of the extracted keys
- -regexes.yaml file to use
- -If not specified, this will default to the regexes.yaml that ships -with logstash.
- -You can find the latest version of this here: -https://github.com/tobie/ua-parser/blob/master/regexes.yaml
- -If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- useragent {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- useragent {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -The field containing the user agent string. If this field is an -array, only the first value will be used.
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -The name of the field to assign user agent data into.
- -If not specified, user agent data will be stored in the root of the event.
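- -A sketch placing the parsed data under a 'ua' field (source field name hypothetical):
- -filter {
- useragent {
- source => "agent"
- target => "ua"
- }
-}
-
-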
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.
- - -The uuid filter allows you to add a UUID field to messages. This is useful for controlling the _id that messages are indexed into Elasticsearch with, so that you can insert the same message multiple times without creating duplicates, for log pipeline reliability.
- - -filter {
- uuid {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- field => ... # string (optional)
- overwrite => ... # boolean (optional), default: false
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- uuid {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
- -If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello" with the value above, the %{source} piece replaced with that value from the event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- uuid {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -Add a UUID to a field.
- -Example:
- -filter {
- uuid {
- field => "@uuid"
- }
-}
-
-
- -Whether the current value of the field (if any) should be overridden by the generated UUID. Defaults to false (i.e. if the field is present, with ANY value, it won't be overridden).
- -Example:
- -filter {
- uuid {
- field => "@uuid"
- overwrite => true
- }
-}
-
-
- -If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- uuid {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- uuid {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.
- - -XML filter. Takes a field that contains XML and expands it into an actual data structure.
- - -filter {
- xml {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- source => ... # string (optional)
- store_xml => ... # boolean (optional), default: true
- target => ... # string (optional)
- xpath => ... # hash (optional), default: {}
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- xml {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
- -If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello" with the value above, the %{source} piece replaced with that value from the event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- xml {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- xml {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- xml {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -Config for xml to hash is:
- -source => source_field
-
-
-For example, if you have the whole xml document in your @message field:
- -filter {
- xml {
- source => "message"
- }
-}
-
-
-The above would parse the xml from the @message field
- -By default the filter will store the whole parsed xml in the destination -field as described above. Setting this to false will prevent that.
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Define target for placing the data
- -for example if you want the data to be put in the 'doc' field:
- -filter {
- xml {
- target => "doc"
- }
-}
-
-
- -XML in the value of the source field will be expanded into a data structure in the "target" field. Note: if the "target" field already exists, it will be overridden. Required.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.
- -xpath will additionally select string values (.to_s on whatever is selected) -from parsed XML (using each source field defined using the method above) -and place those values in the destination fields. Configuration:
- -xpath => [ "xpath-syntax", "destination-field" ]
- -Values returned by XPath parsing from the xpath syntax will be put in the destination field. Multiple values returned will be pushed onto the destination field as an array. As such, multiple matches across multiple source fields will produce duplicate entries in the field.
- -More on xpath: http://www.w3schools.com/xpath/
- -The xpath functions are particularly powerful: -http://www.w3schools.com/xpath/xpath_functions.asp
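- -A sketch combining source and xpath (XML layout and field names hypothetical):
- -filter {
- xml {
- source => "message"
- # Put the text of each /event/title node into the 'title' field.
- xpath => [ "/event/title/text()", "title" ]
- }
-}
-
-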
- - -ZeroMQ filter. This is the best way to send an event externally for filtering. It works much like an exec filter would, by sending the event "offsite" for processing and waiting for a response.
- -The protocol here is:
- * REQ sent with JSON-serialized logstash event
- * REP read expected to be the full JSON 'filtered' event
- * if the reply read is an empty string, it will cancel the event
- -Note that this is a limited subset of the zeromq functionality in -inputs and outputs. The only topology that makes sense here is: -REQ/REP.
- - -filter {
- zeromq {
- add_field => ... # hash (optional), default: {}
- add_tag => ... # array (optional), default: []
- address => ... # string (optional), default: "tcp://127.0.0.1:2121"
- field => ... # string (optional)
- mode => ... # string, one of ["server", "client"] (optional), default: "client"
- remove_field => ... # array (optional), default: []
- remove_tag => ... # array (optional), default: []
- sockopt => ... # hash (optional)
-}
-
-}
-
-
- -If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- zeromq {
- add_field => [ "foo_%{somefield}", "Hello world, from %{source}" ]
- }
-}
-
-
- -If the event has field "somefield" == "hello", this filter, on success, would add the field "foo_hello" with the value above, the %{source} piece replaced with that value from the event.
- -If this filter is successful, add arbitrary tags to the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- zeromq {
- add_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would add a tag "foo_hello"
- -0mq socket address to connect or bind to. Please note that inproc:// will not work with logstash, as we use a context per thread. By default, filters connect.
- -Only handle events without all/any (controlled by exclude_any config -option) of these tags. -Optional.
- -The field to send off-site for processing. If this is unset, the whole event will be sent. TODO (lusis): Allow filtering multiple fields.
- -0mq mode: server mode binds/listens, client mode connects.
- -If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
- -filter {
- zeromq {
- remove_field => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the field with name "foo_hello" if it is present
- -If this filter is successful, remove arbitrary tags from the event. -Tags can be dynamic and include parts of the event using the %{field} -syntax. Example:
- -filter {
- zeromq {
- remove_tag => [ "foo_%{somefield}" ]
- }
-}
-
-
-If the event has field "somefield" == "hello" this filter, on success, -would remove the tag "foo_hello" if it is present
- -0mq socket options. This exposes zmq_setsockopt for advanced tuning; see http://api.zeromq.org/2-1:zmq-setsockopt for details.
- -This is where you would set values like: -ZMQ::HWM - high water mark -ZMQ::IDENTITY - named queues -ZMQ::SWAP_SIZE - space for disk overflow -ZMQ::SUBSCRIBE - topic filters for pubsub
- -example: sockopt => ["ZMQ::HWM", 50, "ZMQ::IDENTITY", "mynamedqueue"]
- -Only handle events with all/any (controlled by include_any config option) of these tags. -Optional.
- -Note that all of the specified routing options (type, tags, exclude_tags, include_fields, exclude_fields) must be met in order for the event to be handled by the filter. The type to act on. If a type is given, then this filter will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.
- - -input {
- amqp {
- ack => ... # boolean (optional), default: true
- add_field => ... # hash (optional), default: {}
- arguments => ... # array (optional), default: {}
- auto_delete => ... # boolean (optional), default: true
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- durable => ... # boolean (optional), default: false
- exchange => ... # string (optional)
- exclusive => ... # boolean (optional), default: true
- host => ... # string (required)
- key => ... # string (optional), default: "logstash"
- passive => ... # boolean (optional), default: false
- password => ... # password (optional), default: "guest"
- port => ... # number (optional), default: 5672
- prefetch_count => ... # number (optional), default: 256
- queue => ... # string (optional), default: ""
- ssl => ... # boolean (optional), default: false
- tags => ... # array (optional)
- threads => ... # number (optional), default: 1
- type => ... # string (optional)
- user => ... # string (optional), default: "guest"
- verify_ssl => ... # boolean (optional), default: false
- vhost => ... # string (optional), default: "/"
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- -Retrieve watchdog log events from a Drupal installation with DBLog enabled. -The events are pulled out directly from the database. -The original events are not deleted, and on every consecutive run only new -events are pulled.
- -The last watchdog event id that was processed is stored in the Drupal variable table with the name "logstash_last_wid". Delete this variable or set it to 0 if you want to re-import all events.
- -More info on DBLog: http://drupal.org/documentation/modules/dblog
- - -input {
- drupal_dblog {
- add_field => ... # hash (optional), default: {}
- add_usernames => ... # boolean (optional), default: false
- bulksize => ... # number (optional), default: 5000
- codec => ... # codec (optional), default: "plain"
- databases => ... # hash (optional)
- debug => ... # boolean (optional), default: false
- interval => ... # number (optional), default: 10
- tags => ... # array (optional)
- type => ... # string (optional), default: "watchdog"
-}
-
-}
-
-
-Add a field to an event
- -By default, the event only contains the current user id as a field. If you wish to add the username as an additional field, set this to true.
- -The number of log messages that should be fetched with each query. Bulk fetching is done to prevent querying huge data sets when lots of messages are in the database.
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Specify all drupal databases that you wish to import from. This can be as many as you wish. The format is a hash, with a unique site name as the key, and a database url as the value.
- -Example:
- -[
- "site1", "mysql://user1:password@host1.com/databasename",
- "other_site", "mysql://user2:password@otherhost.com/databasename",
- ...
-]
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -Time between checks in minutes.
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Label this input with a type. -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- - -Read from elasticsearch.
- -This is useful for replay testing logs, reindexing, etc.
- -Example:
- -input {
- # Read all documents from elasticsearch matching the given query
- elasticsearch {
- host => "localhost"
- query => "ERROR"
- }
-}
-
-
-input {
- elasticsearch {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- host => ... # string (required)
- index => ... # string (optional), default: "logstash-*"
- port => ... # number (optional), default: 9200
- query => ... # string (optional), default: "*"
- tags => ... # array (optional)
- type => ... # string (optional)
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -The address of your elasticsearch server
- -The index to search
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -The http port of your elasticsearch server's REST interface
- -The query to use
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- - -Pull events from a Windows Event Log
- -To collect Events from the System Event Log, use a config like:
- -input {
- eventlog {
- type => 'Win32-EventLog'
- logfile => 'System'
- }
-}
-
-
-
-input {
- eventlog {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- logfile => ... # array (optional), default: ["Application", "Security", "System"]
- tags => ... # array (optional)
- type => ... # string (optional)
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -Event Log Name
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- - -Run command line tools and capture the whole output as an event.
- -Notes:
- -input {
- exec {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- command => ... # string (required)
- debug => ... # boolean (optional), default: false
- interval => ... # number (required)
- tags => ... # array (optional)
- type => ... # string (optional)
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Command to run. For example, "uptime"
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -Interval to run the command. Value is in seconds.
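- -A minimal sketch using the 'uptime' example above (interval value hypothetical):
- -input {
- exec {
- command => "uptime"
- # Run the command every 30 seconds.
- interval => 30
- }
-}
-
-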
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- - -Stream events from files.
- -By default, each event is assumed to be one line. If you -want to join lines, you'll want to use the multiline filter.
- -Files are followed in a manner similar to "tail -0F". File rotation -is detected and handled by this input.
- - -input {
- file {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- discover_interval => ... # number (optional), default: 15
- exclude => ... # array (optional)
- path => ... # array (required)
- sincedb_path => ... # string (optional)
- sincedb_write_interval => ... # number (optional), default: 15
- start_position => ... # string, one of ["beginning", "end"] (optional), default: "end"
- stat_interval => ... # number (optional), default: 1
- tags => ... # array (optional)
- type => ... # string (optional)
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -How often we expand globs to discover new files to watch.
- -Exclusions (matched against the filename, not full path). Globs -are valid here, too. For example, if you have
- -path => "/var/log/*"
-
-
-you might want to exclude gzipped files:
- -exclude => "*.gz"
-
-
-The format of input data (plain, json, json_event)
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -TODO(sissel): This should switch to use the 'line' codec by default once file following
- -The path to the file to use as an input. You can use globs here, such as /var/log/*.log. Paths must be absolute and cannot be relative.
- -Where to write the since database (keeps track of the current position of monitored log files). The default will write sincedb files to some path matching "$HOME/.sincedb*"
- -How often to write a since database with the current position of -monitored log files.
- -Choose where logstash starts initially reading files - at the beginning or -at the end. The default behavior treats files like live streams and thus -starts at the end. If you have old data you want to import, set this -to 'beginning'
- -This option only modifies "first contact" situations where a file is new and not seen before. If a file has already been seen before, this option has no effect.
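- -A minimal sketch for importing old data (path hypothetical):
- -input {
- file {
- path => "/var/log/apache2/access.log"
- # Read the file from the beginning on first contact.
- start_position => "beginning"
- }
-}
-
-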
- -How often we stat files to see if they have been modified. Increasing -this interval will decrease the number of system calls we make, but -increase the time to detect new log lines.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- - -Read ganglia packets from the network via udp
- - -input {
- ganglia {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- host => ... # string (optional), default: "0.0.0.0"
- port => ... # number (optional), default: 8649
- tags => ... # array (optional)
- type => ... # string (optional)
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -The address to listen on
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -The port to listen on. Remember that ports less than 1024 (privileged -ports) may require root to use.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- - -Read gelf messages as events over the network.
- -This input is a good choice if you already use graylog2 today.
- -The main reasoning for this input is to leverage existing GELF -logging libraries such as the gelf log4j appender
- - -input {
- gelf {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- host => ... # string (optional), default: "0.0.0.0"
- port => ... # number (optional), default: 12201
- remap => ... # boolean (optional), default: true
- tags => ... # array (optional)
- type => ... # string (optional)
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -The address to listen on
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -The port to listen on. Remember that ports less than 1024 (privileged -ports) may require root to use.
- -Whether or not to remap the gelf message fields to logstash event fields or -leave them intact.
- -Default is true
- -Remapping converts the following gelf fields to logstash equivalents:
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- - -Read events from a GemFire region.
- -GemFire is an object database.
- -To use this plugin you need to add gemfire.jar to your CLASSPATH. -Using format=json requires jackson.jar too; use of continuous -queries requires antlr.jar.
- -Note: this plugin has only been tested with GemFire 7.0.
- - -input {
- gemfire {
- add_field => ... # hash (optional), default: {}
- cache_name => ... # string (optional), default: "logstash"
- cache_xml_file => ... # string (optional), default: nil
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- interest_regexp => ... # string (optional), default: ".*"
- query => ... # string (optional), default: nil
- region_name => ... # string (optional), default: "Logstash"
- serialization => ... # string (optional), default: nil
- tags => ... # array (optional)
- threads => ... # number (optional), default: 1
- type => ... # string (optional)
-}
-
-}
-
-
-Add a field to an event
- -Your client cache name
- -The path to a GemFire client cache XML file.
- -Example:
- - <client-cache>
- <pool name="client-pool" subscription-enabled="true" subscription-redundancy="1">
- <locator host="localhost" port="31331"/>
- </pool>
- <region name="Logstash">
- <region-attributes refid="CACHING_PROXY" pool-name="client-pool" >
- </region-attributes>
- </region>
- </client-cache>
-
-
-The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -A regexp to use when registering interest for cache events. -Ignored if a :query is specified.
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -A query to run as a GemFire "continuous query"; if specified it takes precedence over :interest_regexp, which will be ignored.
- -Important: use of continuous queries requires subscriptions to be enabled on the client pool.
- -The region name
- -How the message is serialized in the cache. Can be one of "json" or "plain"; default is plain
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- - -Generate random log events.
- -The general intention of this is to test performance of plugins.
- -An event is generated first
- - -input {
- generator {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- count => ... # number (optional), default: 0
- debug => ... # boolean (optional), default: false
- lines => ... # array (optional)
- message => ... # string (optional), default: "Hello world!"
- tags => ... # array (optional)
- threads => ... # number (optional), default: 1
- type => ... # string (optional)
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set how many messages should be generated.
- -The default, 0, means generate an unlimited number of events.
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -The lines to emit, in order. This option cannot be used with the 'message' -setting.
- -Example:
- -input {
- generator {
- lines => [
- "line 1",
- "line 2",
- "line 3"
- ]
- # Emit all lines 3 times.
- count => 3
- }
-}
-
-
- -The above will emit "line 1", then "line 2", then "line 3", then "line 1" again, etc...
- -The message string to use in the event.
- -If you set this to 'stdin' then this plugin will read a single line from -stdin and use that as the message string for every event.
- -Otherwise, this value will be used verbatim as the event message.
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- - -input {
- graphite {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- data_timeout => ... # number (optional), default: -1
- debug => ... # boolean (optional), default: false
- host => ... # string (optional), default: "0.0.0.0"
- mode => ... # string, one of ["server", "client"] (optional), default: "server"
- port => ... # number (required)
- ssl_cacert => ... # a valid filesystem path (optional)
- ssl_cert => ... # a valid filesystem path (optional)
- ssl_enable => ... # boolean (optional), default: false
- ssl_key => ... # a valid filesystem path (optional)
- ssl_key_passphrase => ... # password (optional), default: nil
- ssl_verify => ... # boolean (optional), default: false
- tags => ... # array (optional)
- type => ... # string (optional)
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
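- -As a minimal sketch, a server-mode listener might look like this (2003 is the conventional Carbon plaintext port, an assumption rather than a plugin default):
- -input {
- graphite {
- mode => "server"
- host => "0.0.0.0"
- port => 2003
- type => "graphite"
- }
-}
-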
- - -Stream events from a heroku app's logs.
- -This will read events in a manner similar to how the heroku logs -t command fetches logs.
- -Recommended filters:
- -filter {
- grok {
- pattern => "^%{TIMESTAMP_ISO8601:timestamp} %{WORD:component}\[%{WORD:process}(?:\.%{INT:instance:int})?\]: %{DATA:message}$"
- }
- date { timestamp => ISO8601 }
-}
-
-
-
-input {
- heroku {
- add_field => ... # hash (optional), default: {}
- app => ... # string (required)
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- tags => ... # array (optional)
- type => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -The name of your heroku application. This is usually the first part of the -the domain name 'my-app-name.herokuapp.com'
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
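- -A minimal sketch, assuming a hypothetical app named "my-app-name":
- -input {
- heroku {
- app => "my-app-name"
- type => "heroku"
- }
-}
-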
- - -Read mail from IMAP servers
- -Periodically scans INBOX and moves any read messages -to the trash.
- - -input {
- imap {
- add_field => ... # hash (optional), default: {}
- check_interval => ... # number (optional), default: 300
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- delete => ... # boolean (optional), default: false
- fetch_count => ... # number (optional), default: 50
- host => ... # string (required)
- lowercase_headers => ... # boolean (optional), default: true
- password => ... # password (required)
- port => ... # number (optional)
- secure => ... # boolean (optional), default: true
- tags => ... # array (optional)
- type => ... # string (optional)
- user => ... # string (required)
- }
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
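- -A minimal sketch; the host and account details are hypothetical placeholders:
- -input {
- imap {
- host => "imap.example.com"
- user => "logs@example.com"
- password => "secret"
- secure => true
- type => "mail"
- }
-}
-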
- -Read events from an IRC Server.
- - -input {
- irc {
- add_field => ... # hash (optional), default: {}
- channels => ... # array (required)
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- host => ... # string (required)
- nick => ... # string (optional), default: "logstash"
- password => ... # password (optional)
- port => ... # number (optional), default: 6667
- real => ... # string (optional), default: "logstash"
- secure => ... # boolean (optional), default: false
- tags => ... # array (optional)
- type => ... # string (optional)
- user => ... # string (optional), default: "logstash"
- }
-}
-
-
-Add a field to an event
- -Channels to join and read messages from.
- -These should be full channel names including the '#' symbol, such as -"#logstash".
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -Host of the IRC Server to connect to.
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -IRC Nickname
- -IRC Server password
- -Port for the IRC Server
- -IRC Real name
- -Set this to true to enable SSL.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- -IRC Username
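- -A minimal sketch; the server name is a hypothetical placeholder:
- -input {
- irc {
- host => "irc.example.com"
- channels => ["#logstash"]
- nick => "logstash"
- type => "irc"
- }
-}
-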
- - -Read events over a TCP socket from Log4j SocketAppender.
- -Can either accept connections from clients or connect to a server, depending on mode. Depending on the mode, you need a matching SocketAppender or a SocketHubAppender on the remote side.
- - -input {
- log4j {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- data_timeout => ... # number (optional), default: 5
- debug => ... # boolean (optional), default: false
- host => ... # string (optional), default: "0.0.0.0"
- mode => ... # string, one of ["server", "client"] (optional), default: "server"
- port => ... # number (required)
- tags => ... # array (optional)
- type => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Read timeout in seconds. If a particular tcp connection is -idle for more than this timeout period, we will assume -it is dead and close it. -If you never want to timeout, use -1.
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -When mode is server, the address to listen on. When mode is client, the address to connect to.
- -If format is "json", an event sprintf string to build what the display @message should be given (defaults to the raw JSON). sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Mode to operate in. server listens for client connections, client connects to a server.
- -When mode is server, the port to listen on. When mode is client, the port to connect to.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
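- -A minimal server-mode sketch; 4560 is the conventional log4j SocketAppender port, an assumption rather than a plugin default:
- -input {
- log4j {
- mode => "server"
- host => "0.0.0.0"
- port => 4560
- type => "log4j"
- }
-}
-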
- - -Receive events using the lumberjack protocol.
- -This is mainly to receive events shipped with lumberjack, -http://github.com/jordansissel/lumberjack
- - -input {
- lumberjack {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- host => ... # string (optional), default: "0.0.0.0"
- port => ... # number (required)
- ssl_certificate => ... # a valid filesystem path (required)
- ssl_key => ... # a valid filesystem path (required)
- ssl_key_passphrase => ... # password (optional)
- tags => ... # array (optional)
- type => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -the address to listen on.
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -the port to listen on.
- -ssl certificate to use
- -ssl key to use
- -ssl key passphrase to use
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
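- -A minimal sketch; the port and certificate paths are hypothetical placeholders:
- -input {
- lumberjack {
- port => 5043
- ssl_certificate => "/etc/ssl/logstash.crt"
- ssl_key => "/etc/ssl/logstash.key"
- type => "lumberjack"
- }
-}
-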
- - -Stream events from a long running command pipe.
- -By default, each event is assumed to be one line. If you -want to join lines, you'll want to use the multiline filter.
- - -input {
- pipe {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- command => ... # string (required)
- debug => ... # boolean (optional), default: false
- tags => ... # array (optional)
- type => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -TODO(sissel): This should switch to use the 'line' codec by default once we switch away from doing 'readline'
- -Command to run and read events from, one line at a time.
- -Example:
- -command => "echo hello world"
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
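- -A minimal sketch; the command is a hypothetical example:
- -input {
- pipe {
- command => "tail -f /var/log/messages"
- type => "syslog"
- }
-}
-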
- - -Pull events from a RabbitMQ exchange.
- -The default settings will create an entirely transient queue and listen for all messages by default. -If you need durability or any other advanced settings, please set the appropriate options
- -This has been tested with Bunny 0.9.x, which supports RabbitMQ 2.x and 3.x.
- -input {
- rabbitmq {
- ack => ... # boolean (optional), default: true
- add_field => ... # hash (optional), default: {}
- arguments => ... # array (optional), default: {}
- auto_delete => ... # boolean (optional), default: true
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- durable => ... # boolean (optional), default: false
- exchange => ... # string (optional)
- exclusive => ... # boolean (optional), default: true
- host => ... # string (required)
- key => ... # string (optional), default: "logstash"
- passive => ... # boolean (optional), default: false
- password => ... # password (optional), default: "guest"
- port => ... # number (optional), default: 5672
- prefetch_count => ... # number (optional), default: 256
- queue => ... # string (optional), default: ""
- ssl => ... # boolean (optional), default: false
- tags => ... # array (optional)
- threads => ... # number (optional), default: 1
- type => ... # string (optional)
- user => ... # string (optional), default: "guest"
- verify_ssl => ... # boolean (optional), default: false
- vhost => ... # string (optional), default: "/"
- }
-}
-
-
-Enable message acknowledgement
- -Add a field to an event
- -Extra queue arguments as an array. -To make a RabbitMQ queue mirrored, use: {"x-ha-policy" => "all"}
- -Should the queue be deleted on the broker when the last consumer -disconnects? Set this option to 'false' if you want the queue to remain -on the broker, queueing up messages until a consumer comes along to -consume them.
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Enable or disable logging
- -Is this queue durable? (aka: Should it survive a broker restart?)
- -(Optional, backwards compatibility) Exchange binding
- -Optional.
- -The name of the exchange to bind the queue to.
- -Is the queue exclusive? (aka: Will other clients connect to this named queue?)
- -The format of input data (plain, json, json_event)
- -Connection
- -RabbitMQ server address
- -Optional.
- -The routing key to use when binding a queue to the exchange. -This is only relevant for direct or topic exchanges.
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Passive queue creation? Useful for checking queue existence without modifying server state
- -RabbitMQ password
- -RabbitMQ port to connect on
- -Prefetch count. Number of messages to prefetch
- -Queue & Consumer
- -The name of the queue Logstash will consume events from.
- -Enable or disable SSL
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- -RabbitMQ username
- -Validate SSL certificate
- -The vhost to use. If you don't know what this is, leave the default.
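- -A minimal sketch of a durable setup; the exchange and queue names are hypothetical placeholders:
- -input {
- rabbitmq {
- host => "localhost"
- exchange => "logstash"
- key => "logstash"
- queue => "logstash-input"
- durable => true
- auto_delete => false
- exclusive => false
- type => "rabbitmq"
- }
-}
-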
- - -Read events from a redis. Supports both redis channels and also redis lists -(using BLPOP)
- -For more information about redis, see http://redis.io/
- -batch_count note: If you use the 'batch_count' setting, you must use a redis version 2.6.0 or newer. Anything older does not support the operations used by batching.
- - -input {
- redis {
- add_field => ... # hash (optional), default: {}
- batch_count => ... # number (optional), default: 1
- codec => ... # codec (optional), default: "plain"
- data_type => ... # string, one of ["list", "channel", "pattern_channel"] (optional)
- db => ... # number (optional), default: 0
- debug => ... # boolean (optional), default: false
- host => ... # string (optional), default: "127.0.0.1"
- key => ... # string (optional)
- password => ... # password (optional)
- port => ... # number (optional), default: 6379
- tags => ... # array (optional)
- threads => ... # number (optional), default: 1
- timeout => ... # number (optional), default: 5
- type => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -How many events to return from redis using EVAL
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Either list or channel. If redis_type is list, then we will BLPOP the -key. If redis_type is channel, then we will SUBSCRIBE to the key. -If redis_type is pattern_channel, then we will PSUBSCRIBE to the key. -TODO: change required to true
- -The redis database number.
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -The hostname of your redis server.
- -The name of a redis list or channel. -TODO: change required to true
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Name is used for logging in case there are multiple instances. -This feature has no real function and will be removed in future versions.
- -Password to authenticate with. There is no authentication by default.
- -The port to connect on.
- -The name of the redis queue (we'll use BLPOP against this). -TODO: remove soon.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times
- -Initial connection timeout in seconds.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
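- -A minimal sketch using BLPOP against a list; the key name is a hypothetical choice:
- -input {
- redis {
- host => "127.0.0.1"
- data_type => "list"
- key => "logstash"
- type => "redis"
- }
-}
-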
- - -Read RELP events over a TCP socket.
- -For more information about RELP, see -http://www.rsyslog.com/doc/imrelp.html
- -This protocol implements application-level acknowledgements to help protect -against message loss.
- -Message acks only function as far as messages being put into the queue for -filters; anything lost after that point will not be retransmitted
- - -input {
- relp {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- host => ... # string (optional), default: "0.0.0.0"
- port => ... # number (required)
- tags => ... # array (optional)
- type => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -The address to listen on.
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -The port to listen on.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
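- -A minimal sketch; the port is a hypothetical choice, not a plugin default:
- -input {
- relp {
- host => "0.0.0.0"
- port => 2514
- type => "relp"
- }
-}
-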
- - -Stream events from files from a S3 bucket.
- -Each line from each file generates an event. -Files ending in '.gz' are handled as gzip'ed files.
- - -input {
- s3 {
- add_field => ... # hash (optional), default: {}
- backup_to_bucket => ... # string (optional), default: nil
- backup_to_dir => ... # string (optional), default: nil
- bucket => ... # string (required)
- codec => ... # codec (optional), default: "plain"
- credentials => ... # array (optional), default: nil
- debug => ... # boolean (optional), default: false
- delete => ... # boolean (optional), default: false
- interval => ... # number (optional), default: 60
- prefix => ... # string (optional), default: nil
- region => ... # string (optional), default: "us-east-1"
- sincedb_path => ... # string (optional), default: nil
- tags => ... # array (optional)
- type => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -Name of an S3 bucket to backup processed files to.
- -Path of a local directory to backup processed files to.
- -The name of the S3 bucket.
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -TODO(sissel): refactor to use 'line' codec (requires removing both gzip support and readline usage). Support gzip through a gzip codec! ;)
- -The credentials of the AWS account used to access the bucket. Credentials can be specified:
-* As an ["id","secret"] array
-* As a path to a file containing AWS_ACCESS_KEY_ID=... and AWS_SECRET_ACCESS_KEY=...
-* In the environment (variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY)
- -Set this to true to enable debugging on an input.
- -Whether to delete processed files from the original bucket.
- -The format of input data (plain, json, json_event)
- -Interval to wait before checking the file list again after a run is finished. Value is in seconds.
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -If specified, the prefix the filenames in the bucket must match (not a regexp)
- -The AWS region for your bucket.
- -Where to write the since database (keeps track of the date -the last handled file was added to S3). The default will write -sincedb files to some path matching "$HOME/.sincedb*"
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
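- -A minimal sketch; the bucket name, prefix, and credentials are hypothetical placeholders:
- -input {
- s3 {
- bucket => "my-logs-bucket"
- prefix => "access/"
- region => "us-east-1"
- credentials => ["ACCESS_KEY_ID", "SECRET_ACCESS_KEY"]
- type => "s3"
- }
-}
-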
- - -Read snmp trap messages as events
- -Resulting @message looks like : - #<SNMP::SNMPv1Trap:0x6f1a7a4 @varbindlist=[#<SNMP::VarBind:0x2d7bcd8f @value="teststring", - @name=[1.11.12.13.14.15]>], @timestamp=#<SNMP::TimeTicks:0x1af47e9d @value=55>, @generictrap=6, - @enterprise=[1.2.3.4.5.6], @sourceip="127.0.0.1", @agentaddr=#<SNMP::IpAddress:0x29a4833e @value="\xC0\xC1\xC2\xC3">, - @specifictrap=99>
- - -input {
- snmptrap {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- community => ... # string (optional), default: "public"
- debug => ... # boolean (optional), default: false
- host => ... # string (optional), default: "0.0.0.0"
- port => ... # number (optional), default: 1062
- tags => ... # array (optional)
- type => ... # string (optional)
- yamlmibdir => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -SNMP Community String to listen for.
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -The address to listen on
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -The port to listen on. Remember that ports less than 1024 (privileged ports) may require root to use; hence the default of 1062.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- -directory of YAML MIB maps (same format ruby-snmp uses)
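- -A minimal sketch using the default host, port, and community string:
- -input {
- snmptrap {
- community => "public"
- type => "snmptrap"
- }
-}
-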
- - -Read rows from an sqlite database.
- -This is most useful in cases where you are logging directly to a table. -Any tables being watched must have an 'id' column that is monotonically -increasing.
- -All tables are read by default except:
-* ones matching 'sqlite%' - these are internal/administrative tables for sqlite
-* 'sincetable' - this is used by this plugin to track state
- -% sqlite /tmp/example.db
-sqlite> CREATE TABLE weblogs (
- id INTEGER PRIMARY KEY AUTOINCREMENT,
- ip STRING,
- request STRING,
- response INTEGER);
-sqlite> INSERT INTO weblogs (ip, request, response)
- VALUES ("1.2.3.4", "/index.html", 200);
-
-
-Then with this logstash config:
- -input {
- sqlite {
- path => "/tmp/example.db"
- type => weblogs
- }
-}
-output {
- stdout {
- debug => true
- }
-}
-
-
-Sample output:
- -{
- "@source" => "sqlite://sadness/tmp/x.db",
- "@tags" => [],
- "@fields" => {
- "ip" => "1.2.3.4",
- "request" => "/index.html",
- "response" => 200
- },
- "@timestamp" => "2013-05-29T06:16:30.850Z",
- "@source_host" => "sadness",
- "@source_path" => "/tmp/x.db",
- "@message" => "",
- "@type" => "foo"
-}
-
-
-
-input {
- sqlite {
- add_field => ... # hash (optional), default: {}
- batch => ... # number (optional), default: 5
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- exclude_tables => ... # array (optional), default: []
- path => ... # string (required)
- tags => ... # array (optional)
- type => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -How many rows to fetch at a time from each SELECT call.
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -Any tables to exclude by name. -By default all tables are followed.
- -The format of input data (plain, json, json_event)
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -The path to the sqlite database file.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- - -Pull events from an Amazon Web Services Simple Queue Service (SQS) queue.
- -SQS is a simple, scalable queue system that is part of the -Amazon Web Services suite of tools.
- -Although SQS is similar to other queuing systems like AMQP, it -uses a custom API and requires that you have an AWS account. -See http://aws.amazon.com/sqs/ for more details on how SQS works, -what the pricing schedule looks like and how to setup a queue.
- -To use this plugin, you must have an AWS account, an SQS queue, and an identity that is allowed to consume messages from that queue.
- -The "consumer" identity must have the following permissions on the queue:
- -Typically, you should setup an IAM policy, create a user and apply the IAM policy to the user. -A sample policy is as follows:
- -{
- "Statement": [
- {
- "Action": [
- "sqs:ChangeMessageVisibility",
- "sqs:ChangeMessageVisibilityBatch",
- "sqs:GetQueueAttributes",
- "sqs:GetQueueUrl",
- "sqs:ListQueues",
- "sqs:SendMessage",
- "sqs:SendMessageBatch"
- ],
- "Effect": "Allow",
- "Resource": [
- "arn:aws:sqs:us-east-1:123456789012:Logstash"
- ]
- }
- ]
-}
-
-
-See http://aws.amazon.com/iam/ for more details on setting up AWS identities.
- - -input {
- sqs {
- access_key_id => ... # string (optional)
- add_field => ... # hash (optional), default: {}
- aws_credentials_file => ... # string (optional)
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- id_field => ... # string (optional)
- md5_field => ... # string (optional)
- queue => ... # string (required)
- region => ... # string, one of ["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"] (optional), default: "us-east-1"
- secret_access_key => ... # string (optional)
- sent_timestamp_field => ... # string (optional)
- tags => ... # array (optional)
- threads => ... # number (optional), default: 1
- type => ... # string (optional)
- use_ssl => ... # boolean (optional), default: true
- }
-}
-
-
-This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order:
-1. Static configuration, using access_key_id and secret_access_key params in the logstash plugin config
-2. External credentials file specified by aws_credentials_file
-3. Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
-4. Environment variables AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY
-5. IAM Instance Profile (available when running inside EC2)
- -Add a field to an event
- -Path to YAML file containing a hash of AWS credentials.
-This file will only be loaded if access_key_id and secret_access_key aren't set. The contents of the file should look like this:
-:access_key_id: "12345"
-:secret_access_key: "54321"
-
-
-The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -Name of the event field in which to store the SQS message ID
- -Name of the event field in which to store the SQS message MD5 checksum
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Name of the SQS Queue name to pull messages from. Note that this is just the name of the queue, not the URL or ARN.
- -The AWS Region
- -The AWS Secret Access Key
- -Name of the event field in which to store the SQS message Sent Timestamp
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- -Should we require (true) or disable (false) using SSL for communicating with the AWS API?
-The AWS SDK for Ruby defaults to SSL, so we preserve that.
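- -A minimal sketch that relies on environment or IAM credentials; the queue name matches the hypothetical ARN in the sample policy above:
- -input {
- sqs {
- # credentials come from the environment or an IAM instance profile
- queue => "Logstash"
- region => "us-east-1"
- threads => 4
- type => "sqs"
- }
-}
-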
- - -Read events from standard input.
- -By default, each event is assumed to be one line. If you -want to join lines, you'll want to use the multiline filter.
- - -input {
- stdin {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- tags => ... # array (optional)
- type => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- - -input {
- stomp {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- destination => ... # string (required)
- host => ... # string (required), default: "localhost"
- password => ... # password (optional), default: ""
- port => ... # number (optional), default: 61613
- tags => ... # array (optional)
- type => ... # string (optional)
- user => ... # string (optional), default: ""
- vhost => ... # string (optional), default: nil
- }
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Enable debugging output?
- -The destination to read events from.
- -Example: "/topic/logstash"
- -The format of input data (plain, json, json_event)
- -The address of the STOMP server.
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -The password to authenticate with.
- -The port to connect to on your STOMP server.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- -The username to authenticate with.
- -The vhost to use
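- -A minimal sketch; the destination is a hypothetical topic name:
- -input {
- stomp {
- host => "localhost"
- destination => "/topic/logstash"
- type => "stomp"
- }
-}
-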
- - -Read syslog messages as events over the network.
- -This input is a good choice if you already use syslog today. -It is also a good choice if you want to receive logs from -appliances and network devices where you cannot run your own -log collector.
- -Of course, 'syslog' is a very muddy term. This input only supports RFC3164 -syslog with some small modifications. The date format is allowed to be -RFC3164 style or ISO8601. Otherwise the rest of the RFC3164 must be obeyed. -If you do not use RFC3164, do not use this input.
- -Note: this input will start listeners on both TCP and UDP
- - -input {
- syslog {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- facility_labels => ... # array (optional), default: ["kernel", "user-level", "mail", "system", "security/authorization", "syslogd", "line printer", "network news", "UUCP", "clock", "security/authorization", "FTP", "NTP", "log audit", "log alert", "clock", "local0", "local1", "local2", "local3", "local4", "local5", "local6", "local7"]
- host => ... # string (optional), default: "0.0.0.0"
- port => ... # number (optional), default: 514
- severity_labels => ... # array (optional), default: ["Emergency", "Alert", "Critical", "Error", "Warning", "Notice", "Informational", "Debug"]
- tags => ... # array (optional)
- type => ... # string (optional)
- use_labels => ... # boolean (optional), default: true
- }
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -Labels for facility levels -This comes from RFC3164.
- -The format of input data (plain, json, json_event)
- -The address to listen on
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -The port to listen on. Remember that ports less than 1024 (privileged -ports) may require root to use.
- -Labels for severity levels -This comes from RFC3164.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- -Use label parsing for severity and facility levels
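- -A minimal sketch listening on the defaults (remember that port 514 may require root):
- -input {
- syslog {
- host => "0.0.0.0"
- port => 514
- type => "syslog"
- }
-}
-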
- - -Read events over a TCP socket.
- -Like stdin and file inputs, each event is assumed to be one line of text.
- -Can either accept connections from clients or connect to a server, depending on mode.
- - -input {
- tcp {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- data_timeout => ... # number (optional), default: -1
- debug => ... # boolean (optional), default: false
- host => ... # string (optional), default: "0.0.0.0"
- mode => ... # string, one of ["server", "client"] (optional), default: "server"
- port => ... # number (required)
- ssl_cacert => ... # a valid filesystem path (optional)
- ssl_cert => ... # a valid filesystem path (optional)
- ssl_enable => ... # boolean (optional), default: false
- ssl_key => ... # a valid filesystem path (optional)
- ssl_key_passphrase => ... # password (optional), default: nil
- ssl_verify => ... # boolean (optional), default: false
- tags => ... # array (optional)
- type => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -The 'read' timeout in seconds. If a particular tcp connection is idle for -more than this timeout period, we will assume it is dead and close it.
- -If you never want to timeout, use -1.
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -When mode is server, the address to listen on. When mode is client, the address to connect to.
- -If format is "json", an event sprintf string to build what the display @message should be given (defaults to the raw JSON). sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Mode to operate in. server listens for client connections, client connects to a server.
- -When mode is server, the port to listen on. When mode is client, the port to connect to.
- -ssl CA certificate, chainfile or CA path. The system CA path is automatically included.
- -ssl certificate
- -Enable ssl (must be set for other ssl_ options to take effect)
- -ssl key
- -ssl key passphrase
- -Verify the identity of the other end of the ssl connection against the CA.
-For input, sets the @field.sslsubject to that of the client certificate.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
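- -A minimal server-mode sketch; the port is a hypothetical choice:
- -input {
- tcp {
- mode => "server"
- host => "0.0.0.0"
- port => 5000
- type => "tcp"
- }
-}
-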
- - -Read events from the twitter streaming api.
- - -input {
- twitter {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- consumer_key => ... # string (required)
- consumer_secret => ... # password (required)
- debug => ... # boolean (optional), default: false
- keywords => ... # array (required)
- oauth_token => ... # string (required)
- oauth_token_secret => ... # password (required)
- tags => ... # array (optional)
- type => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Your twitter app's consumer key
- -Don't know what this is? You need to create an "application" -on twitter, see this url: https://dev.twitter.com/apps/new
- -Your twitter app's consumer secret
- -If you don't have one of these, you can create one by -registering a new application with twitter: -https://dev.twitter.com/apps/new
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -Any keywords to track in the twitter stream
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Your oauth token.
- -To get this, login to twitter with whatever account you want, -then visit https://dev.twitter.com/apps
- -Click on your app (used with the consumer_key and consumer_secret settings). Then at the bottom of the page, click 'Create my access token', which will create an oauth token and secret bound to your account and that application.
- -Your oauth token secret.
- -To get this, login to twitter with whatever account you want, -then visit https://dev.twitter.com/apps
- -Click on your app (used with the consumer_key and consumer_secret settings). Then at the bottom of the page, click 'Create my access token', which will create an oauth token and secret bound to your account and that application.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
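- -A minimal sketch; all credential values are hypothetical placeholders for the ones issued by twitter:
- -input {
- twitter {
- consumer_key => "YOUR_CONSUMER_KEY"
- consumer_secret => "YOUR_CONSUMER_SECRET"
- oauth_token => "YOUR_OAUTH_TOKEN"
- oauth_token_secret => "YOUR_OAUTH_TOKEN_SECRET"
- keywords => ["logstash"]
- type => "tweet"
- }
-}
-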
- - -Read messages as events over the network via udp.
- - -input {
- udp {
- add_field => ... # hash (optional), default: {}
- buffer_size => ... # number (optional), default: 8192
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- host => ... # string (optional), default: "0.0.0.0"
- port => ... # number (required)
- tags => ... # array (optional)
- type => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -Buffer size
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -The address to listen on
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -The port to listen on. Remember that ports less than 1024 (privileged -ports) may require root or elevated privileges to use.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
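- -A minimal sketch; the port is a hypothetical unprivileged choice:
- -input {
- udp {
- port => 5514
- type => "udp"
- }
-}
-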
- - -Read events over a UNIX socket.
- -Like stdin and file inputs, each event is assumed to be one line of text.
- -Can either accept connections from clients or connect to a server, depending on mode.
- - -input {
- unix {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- data_timeout => ... # number (optional), default: -1
- debug => ... # boolean (optional), default: false
- force_unlink => ... # boolean (optional), default: false
- mode => ... # string, one of ["server", "client"] (optional), default: "server"
- path => ... # string (required)
- tags => ... # array (optional)
- type => ... # string (optional)
- }
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -The 'read' timeout in seconds. If a particular connection is idle for -more than this timeout period, we will assume it is dead and close it.
- -If you never want to timeout, use -1.
- -Set this to true to enable debugging on an input.
- -Remove socket file in case of EADDRINUSE failure
- -The format of input data (plain, json, json_event)
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Mode to operate in. server listens for client connections, client connects to a server.
- -When mode is server, the path to listen on. -When mode is client, the path to connect to.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
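- -For example, a minimal sketch of a server-mode listener; the socket path here is illustrative, not a default:
- -input {
- unix {
- mode => "server"
- path => "/var/run/logstash.sock" # any writable path works
- }
-}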
- - -Read from varnish cache's shared memory log
- - -input {
- varnishlog {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- tags => ... # array (optional)
- threads => ... # number (optional), default: 1
- type => ... # string (optional)
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
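- -For example, a minimal sketch; the type value is illustrative:
- -input {
- varnishlog {
- type => "varnish"
- }
-}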
- - -Read events over the websocket protocol.
- - -input {
- websocket {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- mode => ... # string, one of ["server", "client"] (optional), default: "client"
- tags => ... # array (optional)
- type => ... # string (optional)
- url => ... # string (optional), default: "0.0.0.0"
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Operate as a client or a server.
- -Client mode causes this plugin to connect as a websocket client -to the URL given. It expects to receive events as websocket messages.
- -(NOT IMPLEMENTED YET) Server mode causes this plugin to listen on -the given URL for websocket clients. It expects to receive events -as websocket messages from these clients.
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- -The url to connect to or serve from
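- -For example, a sketch of client mode; the url is illustrative and assumes a websocket server already running on localhost:
- -input {
- websocket {
- mode => "client"
- url => "ws://127.0.0.1:3232/"
- }
-}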
- - -Collect data from WMI query
- -This is useful for collecting performance metrics and other data -which is accessible via WMI on a Windows host
- -Example:
- -input {
- wmi {
- query => "select * from Win32_Process"
- interval => 10
- }
- wmi {
- query => "select PercentProcessorTime from Win32_PerfFormattedData_PerfOS_Processor where name = '_Total'"
- }
-}
-
-
-
-input {
- wmi {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- interval => ... # number (optional), default: 10
- query => ... # string (required)
- tags => ... # array (optional)
- type => ... # string (optional)
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -Polling interval
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -WMI query
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- - -This input allows you to receive events over XMPP/Jabber.
- -This plugin can be used for accepting events from humans or applications -connected via XMPP, or you can use it for PubSub or general message passing -from logstash to logstash.
- - -input {
- xmpp {
- add_field => ... # hash (optional), default: {}
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- host => ... # string (optional)
- password => ... # password (required)
- rooms => ... # array (optional)
- tags => ... # array (optional)
- type => ... # string (optional)
- user => ... # string (required)
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set to true to enable greater debugging in XMPP. Useful for debugging -network/authentication errors.
- -The format of input data (plain, json, json_event)
- -The xmpp server to connect to. This is optional. If you omit this setting, -the host on the user/identity is used. (foo.com for user@foo.com)
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -The xmpp password for the user/identity.
- -If MUC (multi-user chat) is required, give the name of the room that -you want to join: room@conference.domain/nick
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- -The user or resource ID, like foo@example.com.
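- -For example, a sketch that joins a chat room; the user, password, and room are illustrative:
- -input {
- xmpp {
- user => "logstash@example.com"
- password => "secret"
- rooms => ["logs@conference.example.com/logstash"]
- }
-}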
- - -input {
- zenoss {
- ack => ... # boolean (optional), default: true
- add_field => ... # hash (optional), default: {}
- arguments => ... # array (optional), default: {}
- auto_delete => ... # boolean (optional), default: true
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- durable => ... # boolean (optional), default: false
- exchange => ... # string (optional), default: "zenoss.zenevents"
- exclusive => ... # boolean (optional), default: true
- host => ... # string (optional), default: "localhost"
- key => ... # string (optional), default: "zenoss.zenevent.#"
- passive => ... # boolean (optional), default: false
- password => ... # password (optional), default: "zenoss"
- port => ... # number (optional), default: 5672
- prefetch_count => ... # number (optional), default: 256
- queue => ... # string (optional), default: ""
- ssl => ... # boolean (optional), default: false
- tags => ... # array (optional)
- threads => ... # number (optional), default: 1
- type => ... # string (optional)
- user => ... # string (optional), default: "zenoss"
- verify_ssl => ... # boolean (optional), default: false
- vhost => ... # string (optional), default: "/zenoss"
-}
-
-}
-
-
-Add a field to an event
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The name of the exchange to bind the queue. This is analogous to the 'rabbitmq -output' config 'name'
- -The format of input data (plain, json, json_event)
- -Your rabbitmq server address
- -The routing key to use. This is only valid for direct or fanout exchanges
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -Your rabbitmq password
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
- -Your rabbitmq username
- -The vhost to use. If you don't know what this is, leave the default.
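- -For example, a minimal sketch; the host is illustrative, and the remaining defaults (exchange, key, user, password, vhost) are left as-is:
- -input {
- zenoss {
- host => "zenoss.example.com"
- type => "zenoss"
- }
-}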
- - -Read events over a 0MQ SUB socket.
- -You need to have the 0mq 2.1.x library installed to be able to use -this input plugin.
- -The default settings will create a subscriber binding to tcp://127.0.0.1:2120 -waiting for connecting publishers.
- - -input {
- zeromq {
- add_field => ... # hash (optional), default: {}
- address => ... # array (optional), default: ["tcp://*:2120"]
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- mode => ... # string, one of ["server", "client"] (optional), default: "server"
- sender => ... # string (optional)
- sockopt => ... # hash (optional)
- tags => ... # array (optional)
- topic => ... # array (optional)
- topology => ... # string, one of ["pushpull", "pubsub", "pair"] (required)
- type => ... # string (optional)
-}
-
-}
-
-
-Add a field to an event
- -0mq socket address to connect or bind.
-Please note that inproc:// will not work with logstash, -as we use a context per thread.
-By default, inputs bind/listen -and outputs connect.
- -The character encoding used in this input. Examples include "UTF-8" -and "cp1252"
- -This setting is useful if your log files are in Latin-1 (aka cp1252) -or in another character set other than UTF-8.
- -This only affects "plain" format logs since json is UTF-8 already.
- -The codec used for input data
- -Set this to true to enable debugging on an input.
- -The format of input data (plain, json, json_event)
- -If format is "json", an event sprintf string to build what -the display @message should be given (defaults to the raw JSON). -sprintf format strings look like %{fieldname}
- -If format is "json_event", ALL fields except for @type -are expected to be present. Not receiving all fields -will cause unexpected results.
- -mode -server mode binds/listens -client mode connects
- -sender -overrides the sender to -set the source of the event -default is "zmq+topology://type/"
- -0mq socket options -This exposes zmq_setsockopt -for advanced tuning -see http://api.zeromq.org/2-1:zmq-setsockopt for details
- -This is where you would set values like: -ZMQ::HWM - high water mark -ZMQ::IDENTITY - named queues -ZMQ::SWAP_SIZE - space for disk overflow
- -example: sockopt => ["ZMQ::HWM", 50, "ZMQ::IDENTITY", "mynamedqueue"]
- -Add any number of arbitrary tags to your event.
- -This can help with processing later.
- -0mq topic -This is used for the 'pubsub' topology only -On inputs, this allows you to filter messages by topic -On outputs, this allows you to tag a message for routing -NOTE: ZeroMQ does subscriber side filtering. -NOTE: All topics have an implicit wildcard at the end -You can specify multiple topics here
- -0mq topology -The default logstash topologies work as follows: -* pushpull - inputs are pull, outputs are push -* pubsub - inputs are subscribers, outputs are publishers -* pair - inputs are clients, outputs are servers
- -If the predefined topology flows don't work for you, -you can change the 'mode' setting -TODO (lusis) add req/rep MAYBE -TODO (lusis) add router/dealer
- -Add a 'type' field to all events handled by this input.
- -Types are used mainly for filter activation.
- -If you create an input with type "foobar", then only filters -which also have type "foobar" will act on them.
- -The type is also stored as part of the event itself, so you -can also use the type to search for in the web interface.
- -If you try to set a type on an event that already has one (for -example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at -the shipper stays with that event for its life even -when sent to another LogStash server.
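- -For example, a sketch of a pubsub subscriber; the address and topic are illustrative:
- -input {
- zeromq {
- topology => "pubsub"
- address => ["tcp://127.0.0.1:2120"]
- topic => ["logstash"]
- }
-}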
- - -output {
- amqp {
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- durable => ... # boolean (optional), default: true
- exchange => ... # string (required)
- exchange_type => ... # string, one of ["fanout", "direct", "topic"] (required)
- host => ... # string (required)
- key => ... # string (optional), default: "logstash"
- password => ... # password (optional), default: "guest"
- persistent => ... # boolean (optional), default: true
- port => ... # number (optional), default: 5672
- ssl => ... # boolean (optional), default: false
- user => ... # string (optional), default: "guest"
- verify_ssl => ... # boolean (optional), default: false
- vhost => ... # string (optional), default: "/"
-}
-
-}
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
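- -For example, a sketch publishing to a topic exchange; the host and exchange name are illustrative:
- -output {
- amqp {
- host => "amqp.example.com"
- exchange => "logstash"
- exchange_type => "topic"
- }
-}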
- -output {
- boundary {
- api_key => ... # string (required)
- auto => ... # boolean (optional), default: false
- bsubtype => ... # string (optional)
- btags => ... # array (optional)
- btype => ... # string (optional)
- codec => ... # codec (optional), default: "plain"
- end_time => ... # string (optional)
- org_id => ... # string (required)
- start_time => ... # string (optional)
-}
-
-}
-
-
-This output lets you send annotations to -Boundary based on Logstash events
- -Note that since Logstash maintains no state -these will be one-shot events
- -By default the start and stop time will be -the event timestamp
- -Your Boundary API key
- -Auto -If set to true, logstash will try to pull boundary fields out -of the event. Any field explicitly set by config options will -override these. -['type', 'subtype', 'creationtime', 'endtime', 'links', 'tags', 'loc']
- -Sub-Type
- -Tags -Set any custom tags for this event -Default are the Logstash tags if any
- -Type
- -The codec used for output data
- -End time
-Override the stop time
-Note that Boundary requires this to be seconds since epoch
-If overriding, it is your responsibility to type this correctly
-By default this is set to event.unix_timestamp.to_i
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Your Boundary Org ID
- -Start time
-Override the start time
-Note that Boundary requires this to be seconds since epoch
-If overriding, it is your responsibility to type this correctly
-By default this is set to event.unix_timestamp.to_i
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
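- -For example, a sketch with auto mode pulling boundary fields from events; the credentials are illustrative:
- -output {
- boundary {
- api_key => "YOUR_API_KEY"
- org_id => "YOUR_ORG_ID"
- auto => true
- }
-}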
- - -output {
- circonus {
- annotation => ... # hash (required), default: {}
- api_token => ... # string (required)
- app_name => ... # string (required)
- codec => ... # codec (optional), default: "plain"
-}
-
-}
-
-
- -Annotations
-Registers an annotation with Circonus
-The only required fields are title and description.
-start and stop will be set to event.unix_timestamp
-You can add any other optional annotation values as well.
-All values will be passed through event.sprintf
- -Example: - ["title":"Logstash event", "description":"Logstash event for %{host}"] -or - ["title":"Logstash event", "description":"Logstash event for %{host}", "parent_id", "1"]
- -This output lets you send annotations to -Circonus based on Logstash events
- -Your Circonus API Token
- -Your Circonus App name
-This will be passed through event.sprintf -so variables are allowed here:
- -Example: - app_name => "%{myappname}"
- -The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
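- -For example, a sketch registering an annotation; the token and app name are illustrative, and the annotation hash uses the flattened key/value form shown above:
- -output {
- circonus {
- api_token => "YOUR_API_TOKEN"
- app_name => "logstash"
- annotation => ["title", "Logstash event", "description", "Logstash event for %{host}"]
- }
-}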
- - -This output lets you aggregate and send metric data to AWS CloudWatch
- -This plugin is intended to be used on a logstash indexer agent (but that
-is not the only way, see below.) In the intended scenario, one cloudwatch
-output plugin is configured, on the logstash indexer node, with just AWS API
-credentials, and possibly a region and/or a namespace. The output looks
-for fields present in events, and when it finds them, it uses them to
-calculate aggregate statistics. If the metricname
option is set in this
-output, then any events which pass through it will be aggregated & sent to
-CloudWatch, but that is not recommended. The intended use is to NOT set the
-metricname option here, and instead to add a CW_metricname
field (and other
-fields) to only the events you want sent to CloudWatch.
When events pass through this output they are queued for background
-aggregation and sending, which happens every minute by default. The
-queue has a maximum size, and when it is full aggregated statistics will be
-sent to CloudWatch ahead of schedule. Whenever this happens a warning
-message is written to logstash's log. If you see this you should increase
-the queue_size
configuration option to avoid the extra API calls. The queue
-is emptied every time we send data to CloudWatch.
Note: when logstash is stopped the queue is destroyed before it can be processed. -This is a known limitation of logstash and will hopefully be addressed in a -future version.
- -There are two ways to configure this plugin, and they can be used in -combination: event fields & per-output defaults
- -Event Field configuration...
-You add fields to your events in inputs & filters and this output reads
-those fields to aggregate events. The names of the fields read are
-configurable via the field_* options.
- -Per-output defaults... -You set universal defaults in this output plugin's configuration, and -if an event does not have a field for that option then the default is -used.
- -Notice, the event fields take precedence over the per-output defaults.
- -At a minimum events must have a "metric name" to be sent to CloudWatch.
-This can be achieved either by providing a default here OR by adding a
-CW_metricname field. By default, if no other configuration is provided
-besides a metric name, then events will be counted (Unit: Count, Value: 1)
-by their metric name (either a default or from their CW_metricname field)
- -Other fields which can be added to events to modify the behavior of this
-plugin are CW_namespace, CW_unit, CW_value, and
-CW_dimensions. All of these field names are configurable in
-this output. You can also set per-output defaults for any of them.
-See below for details.
- -Read more about AWS CloudWatch, -and the specifics of the API endpoint this output uses, -PutMetricData
- - -output {
- cloudwatch {
- access_key_id => ... # string (optional)
- aws_credentials_file => ... # string (optional)
- codec => ... # codec (optional), default: "plain"
- dimensions => ... # hash (optional)
- field_dimensions => ... # string (optional), default: "CW_dimensions"
- field_metricname => ... # string (optional), default: "CW_metricname"
- field_namespace => ... # string (optional), default: "CW_namespace"
- field_unit => ... # string (optional), default: "CW_unit"
- field_value => ... # string (optional), default: "CW_value"
- metricname => ... # string (optional)
- namespace => ... # string (optional), default: "Logstash"
- queue_size => ... # number (optional), default: 10000
- region => ... # string, one of ["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"] (optional), default: "us-east-1"
- secret_access_key => ... # string (optional)
- timeframe => ... # string (optional), default: "1m"
- unit => ... # string, one of ["Seconds", "Microseconds", "Milliseconds", "Bytes", "Kilobytes", "Megabytes", "Gigabytes", "Terabytes", "Bits", "Kilobits", "Megabits", "Gigabits", "Terabits", "Percent", "Count", "Bytes/Second", "Kilobytes/Second", "Megabytes/Second", "Gigabytes/Second", "Terabytes/Second", "Bits/Second", "Kilobits/Second", "Megabits/Second", "Gigabits/Second", "Terabits/Second", "Count/Second", "None"] (optional), default: "Count"
- use_ssl => ... # boolean (optional), default: true
- value => ... # string (optional), default: "1"
-}
-
-}
-
-
-This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order...
-1. Static configuration, using access_key_id and secret_access_key params in logstash plugin config
-2. External credentials file specified by aws_credentials_file
-3. Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
-4. Environment variables AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY
-5. IAM Instance Profile (available when running inside EC2)
- -Path to YAML file containing a hash of AWS credentials.
-This file will only be loaded if access_key_id and
-secret_access_key aren't set. The contents of the
-file should look like this:
- -:access_key_id: "12345"
-:secret_access_key: "54321"
-
-
-The codec used for output data
- -The default dimensions [ name, value, ... ] to use for events which do not have a CW_dimensions field
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The name of the field used to set the dimensions on an event metric
-The field named here, if present in an event, must have an array of
-one or more key & value pairs, for example...
- -add_field => [ "CW_dimensions", "Environment", "CW_dimensions", "prod" ]
-
-
-or, equivalently...
- -add_field => [ "CW_dimensions", "Environment" ]
-add_field => [ "CW_dimensions", "prod" ]
-
-
-The name of the field used to set the metric name on an event
-The author of this plugin recommends adding this field to events in inputs &
-filters rather than using the per-output default setting so that one output
-plugin on your logstash indexer can serve all events (which of course had
-fields set on your logstash shippers.)
- -The name of the field used to set a different namespace per event
-Note: Only one namespace can be sent to CloudWatch per API call
-so setting different namespaces will increase the number of API calls
-and those cost money.
- -The name of the field used to set the unit on an event metric
- -The name of the field used to set the value (float) on an event metric
- -The default metric name to use for events which do not have a CW_metricname field.
-Beware: If this is provided then all events which pass through this output will be aggregated and
-sent to CloudWatch, so use this carefully. Furthermore, when providing this option, you
-will probably want to also restrict events from passing through this output using event
-type, tag, and field matching
- -The default namespace to use for events which do not have a CW_namespace field
- -How many events to queue before forcing a call to the CloudWatch API ahead of the timeframe schedule
-Set this to the number of events-per-timeframe you will be sending to CloudWatch to avoid extra API calls
- -The AWS Region
- -The AWS Secret Access Key
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -How often to send data to CloudWatch
-This does not affect the event timestamps, events will always have their
-actual timestamp (to-the-minute) sent to CloudWatch.
- -We only call the API if there is data to send.
- -See the Rufus Scheduler docs for an explanation of allowed values
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -The default unit to use for events which do not have a CW_unit field
-If you set this option you should probably set the "value" option along with it
- -Should we require (true) or disable (false) using SSL for communicating with the AWS API
-The AWS SDK for Ruby defaults to SSL so we preserve that
- -The default value to use for events which do not have a CW_value field
-If provided, this must be a string which can be converted to a float, for example...
- -"1", "2.34", ".5", and "0.67"
- -If you set this option you should probably set the unit option along with it
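- -For example, a sketch of the recommended event-field approach: a mutate filter marks only the interesting events with the default CW_metricname field, and a single cloudwatch output carries just credentials; the key values are illustrative:
- -filter {
- mutate {
- add_field => [ "CW_metricname", "Errors" ]
- }
-}
-output {
- cloudwatch {
- access_key_id => "YOUR_ACCESS_KEY_ID"
- secret_access_key => "YOUR_SECRET_ACCESS_KEY"
- }
-}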
- - -output {
- datadog {
- alert_type => ... # string, one of ["info", "error", "warning", "success"] (optional)
- api_key => ... # string (required)
- codec => ... # codec (optional), default: "plain"
- date_happened => ... # string (optional)
- dd_tags => ... # array (optional)
- priority => ... # string, one of ["normal", "low"] (optional)
- source_type_name => ... # string, one of ["nagios", "hudson", "jenkins", "user", "my apps", "feed", "chef", "puppet", "git", "bitbucket", "fabric", "capistrano"] (optional), default: "my apps"
- text => ... # string (optional), default: "%{message}"
- title => ... # string (optional), default: "Logstash event for %{source}"
-}
-
-}
-
-
-Alert type
- -This output lets you send events (for now; metrics soon) to -DataDogHQ based on Logstash events
- -Note that since Logstash maintains no state -these will be one-shot events
- -Your DatadogHQ API key
- -The codec used for output data
- -Date Happened
- -Tags -Set any custom tags for this event -Default are the Logstash tags if any
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Priority
- -Source type name
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -Text
- -Title
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
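- -For example, a sketch sending one-shot events; the api_key is illustrative:
- -output {
- datadog {
- api_key => "YOUR_API_KEY"
- alert_type => "error"
- title => "Logstash event for %{source}"
- text => "%{message}"
- }
-}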
- - -This output lets you send metrics to -DataDogHQ based on Logstash events. -Default queue_size and timeframe are low in order to provide near realtime alerting. -If you do not use Datadog for alerting, consider raising these thresholds.
- - -output {
- datadog_metrics {
- api_key => ... # string (required)
- codec => ... # codec (optional), default: "plain"
- dd_tags => ... # array (optional)
- device => ... # string (optional), default: "%{metric_device}"
- host => ... # string (optional), default: "%{source}"
- metric_name => ... # string (optional), default: "%{metric_name}"
- metric_type => ... # string, one of ["gauge", "counter"] (optional), default: "%{metric_type}"
- metric_value => ... # (optional), default: "%{metric_value}"
- queue_size => ... # number (optional), default: 10
- timeframe => ... # number (optional), default: 10
-}
-
-}
-
-
-Your DatadogHQ API key. https://app.datadoghq.com/account/settings#api
- -The codec used for output data
- -Set any custom tags for this event, -default are the Logstash tags if any.
- -The name of the device that produced the metric.
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The name of the host that produced the metric.
- -The name of the time series.
- -The type of the metric.
- -The value.
- -How many events to queue before flushing to Datadog -prior to schedule set in @timeframe
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -How often (in seconds) to flush queued events to Datadog
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
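- -For example, a sketch shipping a gauge; the metric name is illustrative and %{response_time} assumes your events carry that field:
- -output {
- datadog_metrics {
- api_key => "YOUR_API_KEY"
- metric_name => "app.response_time"
- metric_type => "gauge"
- metric_value => "%{response_time}"
- }
-}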
- - -This output lets you store logs in elasticsearch and is the most recommended -output for logstash. If you plan on using the logstash web interface, you'll -need to use this output.
- -VERSION NOTE: Your elasticsearch cluster must be running elasticsearch - 0.90.3. If you use any other version of elasticsearch, - you should consider using the elasticsearch_http - output instead.
- -If you want to set other elasticsearch options that are not exposed directly -as config options, there are two options:
- -This plugin will join your elasticsearch cluster, so it will show up in -elasticsearch's cluster health status.
- -You can learn more about elasticsearch at http://elasticsearch.org
- -Your firewalls will need to permit port 9300 in both directions (from -logstash to elasticsearch, and elasticsearch to logstash)
- - -output {
- elasticsearch {
- bind_host => ... # string (optional)
- cluster => ... # string (optional)
- codec => ... # codec (optional), default: "plain"
- document_id => ... # string (optional), default: nil
- embedded => ... # boolean (optional), default: false
- embedded_http_port => ... # string (optional), default: "9200-9300"
- flush_size => ... # number (optional), default: 100
- host => ... # string (optional)
- idle_flush_time => ... # number (optional), default: 1
- index => ... # string (optional), default: "logstash-%{+YYYY.MM.dd}"
- index_type => ... # string (optional)
- node_name => ... # string (optional)
- port => ... # string (optional), default: "9300-9400"
-}
-
-}
-
-
-The name/address of the host to bind to for ElasticSearch clustering
- -The name of your cluster if you set it on the ElasticSearch side. Useful -for discovery.
- -The codec used for output data
- -The document ID for the index. Useful for overwriting existing entries in -elasticsearch with the same ID.
- -Run the elasticsearch server embedded in this process. -This option is useful if you want to run a single logstash process that -handles log processing and indexing; it saves you from needing to run -a separate elasticsearch process.
- -If you are running the embedded elasticsearch server, you can set the http -port it listens on here; it is not common to need this setting changed from -default.
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The maximum number of events to spool before flushing to elasticsearch.
- -The name/address of the host to use for ElasticSearch unicast discovery -This is only required if the normal multicast/cluster discovery stuff won't -work in your environment.
- -The amount of time since last flush before a flush is forced.
- -The index to write events to. This can be dynamic using the %{foo} syntax. -The default value will partition your indices by day so you can more easily -delete old data or only search specific date ranges.
- -The index type to write events to. Generally you should try to write only -similar events to the same 'type'. String expansion '%{foo}' works here.
- -This setting no longer does anything. It exists to keep config validation -from failing. It will be removed in future versions.
- -The node name ES will use when joining a cluster.
- -By default, this is generated internally by the ES client.
- -The port for ElasticSearch transport to use. This is not the ElasticSearch -REST API port (normally 9200).
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
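- -For example, a minimal sketch joining an existing cluster; the cluster name is illustrative:
- -output {
- elasticsearch {
- cluster => "my-es-cluster"
- }
-}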
- - -This output lets you store logs in elasticsearch.
- -This plugin uses the HTTP/REST interface to ElasticSearch, which usually -lets you use any version of elasticsearch server. It is known to work -with elasticsearch 0.90.3
- -You can learn more about elasticsearch at http://elasticsearch.org
- - -output {
- elasticsearch_http {
- codec => ... # codec (optional), default: "plain"
- document_id => ... # string (optional), default: nil
- flush_size => ... # number (optional), default: 100
- host => ... # string (optional)
- idle_flush_time => ... # number (optional), default: 1
- index => ... # string (optional), default: "logstash-%{+YYYY.MM.dd}"
- index_type => ... # string (optional)
- port => ... # number (optional), default: 9200
-}
-
-}
-
-
-The codec used for output data
- -The document ID for the index. Useful for overwriting existing entries in -elasticsearch with the same ID.
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Set the number of events to queue up before writing to elasticsearch.
- -The hostname or ip address to reach your elasticsearch server.
- -The amount of time since last flush before a flush is forced.
- -The index to write events to. This can be dynamic using the %{foo} syntax. -The default value will partition your indices by day so you can more easily -delete old data or only search specific date ranges.
- -The index type to write events to. Generally you should try to write only -similar events to the same 'type'. String expansion '%{foo}' works here.
- -The port for ElasticSearch HTTP interface to use.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
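- -For example, a minimal sketch; the host is illustrative and the port is left at its default of 9200:
- -output {
- elasticsearch_http {
- host => "es.example.com"
- }
-}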
- - -This output lets you store logs in elasticsearch. It's similar to the -'elasticsearch' output but improves performance by using a queue server, -rabbitmq, to send data to elasticsearch.
- -Upon startup, this output will automatically contact an elasticsearch cluster -and configure it to read from the queue to which we write.
- -You can learn more about elasticseasrch at http://elasticsearch.org -More about the elasticsearch rabbitmq river plugin: https://github.com/elasticsearch/elasticsearch-river-rabbitmq/blob/master/README.md
- - -output {
- elasticsearch_river {
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- document_id => ... # string (optional), default: nil
- durable => ... # boolean (optional), default: true
- es_bulk_size => ... # number (optional), default: 1000
- es_bulk_timeout_ms => ... # number (optional), default: 100
- es_host => ... # string (required)
- es_ordered => ... # boolean (optional), default: false
- es_port => ... # number (optional), default: 9200
- exchange => ... # string (optional), default: "elasticsearch"
- exchange_type => ... # string, one of ["fanout", "direct", "topic"] (optional), default: "direct"
- index => ... # string (optional), default: "logstash-%{+YYYY.MM.dd}"
- index_type => ... # string (optional), default: "%{type}"
- key => ... # string (optional), default: "elasticsearch"
- password => ... # string (optional), default: "guest"
- persistent => ... # boolean (optional), default: true
- queue => ... # string (optional), default: "elasticsearch"
- rabbitmq_host => ... # string (required)
- rabbitmq_port => ... # number (optional), default: 5672
- user => ... # string (optional), default: "guest"
- vhost => ... # string (optional), default: "/"
-}
-
-}
-
-
-The codec used for output data
- -The document ID for the index. Useful for overwriting existing entries in -elasticsearch with the same ID.
- -RabbitMQ durability setting. Also used for ElasticSearch setting
- -ElasticSearch river configuration: bulk fetch size
- -ElasticSearch river configuration: bulk timeout in milliseconds
- -The name/address of an ElasticSearch host to use for river creation
- -ElasticSearch river configuration: is ordered?
- -ElasticSearch API port
- -RabbitMQ exchange name
- -The exchange type (fanout, topic, direct)
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The index to write events to. This can be dynamic using the %{foo} syntax. -The default value will partition your indeces by day so you can more easily -delete old data or only search specific date ranges.
- -The index type to write events to. Generally you should try to write only -similar events to the same 'type'. String expansion '%{foo}' works here.
- -RabbitMQ routing key
- -RabbitMQ password
- -RabbitMQ persistence setting
- -RabbitMQ queue name
- -Hostname of RabbitMQ server
- -Port of RabbitMQ server
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -RabbitMQ user
- -RabbitMQ vhost
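- -For example, a minimal sketch with only the two required options; both hosts are illustrative:
- -output {
- elasticsearch_river {
- es_host => "es.example.com"
- rabbitmq_host => "rabbitmq.example.com"
- }
-}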
- - -Send email when any event is received.
- - -output {
- email {
- attachments => ... # array (optional), default: []
- body => ... # string (optional), default: ""
- cc => ... # string (optional)
- codec => ... # codec (optional), default: "plain"
- contenttype => ... # string (optional), default: "text/html; charset=UTF-8"
- from => ... # string (optional), default: "logstash.alert@nowhere.com"
- htmlbody => ... # string (optional), default: ""
- options => ... # hash (optional), default: {}
- replyto => ... # string (optional)
- subject => ... # string (optional), default: ""
- to => ... # string (required)
- via => ... # string (optional), default: "smtp"
-}
-
-}
-
-
- -attachments - hash of name of file and file location
- -body for email - just plain text
- -Who to CC on this email?
- -See "to" setting for what is valid here.
- -The codec used for output data
- -contenttype : for multipart messages, set the content type and/or charset of the html part
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The From setting for email - fully qualified email address for the From:
- -body for email - can contain html markup
- -This setting is deprecated in favor of logstash's "conditionals" feature -If you were using this setting previously, please use conditionals instead.
- -If you need help converting your older 'match' setting to a conditional, -I welcome you to join the #logstash irc channel on freenode or to email -the logstash-users@googlegroups.com mailing list and ask for help! :)
- -the options to use: -smtp: address, port, enable_starttls_auto, user_name, password, authentication(bool), domain -sendmail: location, arguments -If you do not specify anything, you will get the following equivalent code set in -every new mail object:
- -Mail.defaults do
- -delivery_method :smtp, { :address => "localhost",
- :port => 25,
- :domain => 'localhost.localdomain',
- :user_name => nil,
- :password => nil,
- :authentication => nil, # (plain, login and cram_md5)
- :enable_starttls_auto => true }
-
-retriever_method :pop3, { :address => "localhost",
- :port => 995,
- :user_name => nil,
- :password => nil,
- :enable_ssl => true }
-
-
-end
- -Mail.delivery_method.new #=> Mail::SMTP instance - Mail.retriever_method.new #=> Mail::POP3 instance
- -Each mail object inherits the default set in Mail.delivery_method, however, on -a per email basis, you can override the method:
- -mail.delivery_method :sendmail
- -Or you can override the method and pass in settings:
- -mail.delivery_method :sendmail, { :address => 'some.host' }
- -You can also just modify the settings:
- -mail.delivery_settings = { :address => 'some.host' }
- -The passed in hash is just merged against the defaults with +merge!+ and the result -is assigned to the mail object. So the above example will change only the :address value -of the global smtp_settings to be 'some.host', keeping all other values.
- -The Reply-To setting for email - fully qualified email address is required -here.
- -subject for email
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -Who to send this email to? -A fully qualified email address to send to
- -This field also accepts a comma-separated list of emails like -"me@host.com, you@host.com"
- -You can also use dynamic field from the event with the %{fieldname} syntax.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -how to send email: either smtp or sendmail - default to 'smtp'
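- -For example, a sketch mailing alerts over smtp; the addresses are illustrative:
- -output {
- email {
- to => "ops@example.com"
- from => "logstash.alert@example.com"
- subject => "Logstash alert for %{host}"
- body => "%{message}"
- }
-}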
- - -This output will run a command for any matching event.
- -Example:
- -output {
- exec {
- type => "abuse"
- command => "iptables -A INPUT -s %{clientip} -j DROP"
- }
-}
-
-
- -Run subprocesses via Ruby's system function
- -WARNING: if you want it non-blocking you should use & or dtach or other such -techniques
- - -output {
- exec {
- codec => ... # codec (optional), default: "plain"
- command => ... # string (required)
-}
-
-}
-
-
-The codec used for output data
- -Command line to execute via subprocess. Use dtach or screen to make it non-blocking
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -File output.
- -Write events to files on disk. You can use fields from the -event as parts of the filename.
- - -output {
- file {
- codec => ... # codec (optional), default: "plain"
- flush_interval => ... # number (optional), default: 2
- gzip => ... # boolean (optional), default: false
- max_size => ... # string (optional)
- message_format => ... # string (optional)
- path => ... # string (required)
-}
-
-}
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Flush interval for flushing writes to log files. 0 will flush on every message
- -Gzip output stream
- -The maximum size of file to write. When the file exceeds this -threshold, it will be rotated to the current filename + ".1" -If that file already exists, the previous .1 will shift to .2 -and so forth.
- -NOT YET SUPPORTED
- -The format to use when writing events to the file. This value -supports any string and can include %{name} and other dynamic -strings.
- -If this setting is omitted, the full json representation of the -event will be written as a single line.
- -The path to the file to write. Event fields can be used here, -like "/var/log/logstash/%{host}/%{application}" -One may also utilize the path option for date-based log -rotation via the joda time format. This will use the event -timestamp. -E.g.: path => "./test-%{+YYYY-MM-dd}.txt" to create -./test-2013-05-29.txt
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
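- -For example, a sketch using date-based rotation as described above; the path is illustrative:
- -output {
- file {
- path => "/var/log/logstash/%{host}-%{+YYYY-MM-dd}.log"
- }
-}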
- - -This output allows you to pull metrics from your logs and ship them to -ganglia's gmond. This is heavily based on the graphite output.
- - -output {
- ganglia {
- codec => ... # codec (optional), default: "plain"
- host => ... # string (optional), default: "localhost"
- lifetime => ... # number (optional), default: 300
- max_interval => ... # number (optional), default: 60
- metric => ... # string (required)
- metric_type => ... # string, one of ["string", "int8", "uint8", "int16", "uint16", "int32", "uint32", "float", "double"] (optional), default: "uint8"
- port => ... # number (optional), default: 8649
- units => ... # string (optional), default: ""
- value => ... # string (required)
-}
-
-}
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The address of the ganglia server.
- -Lifetime in seconds of this metric
- -Maximum time in seconds between gmetric calls for this metric.
- -The metric to use. This supports dynamic strings like %{host}
- -The type of value for this metric.
- -The port to connect on your ganglia server.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -Gmetric units for metric, such as "kb/sec" or "ms" or whatever unit -this metric uses.
- -The value to use. This supports dynamic strings like %{bytes}
-It will be coerced to a floating point value. Values which cannot be
-coerced will be zero (0)
- - -GELF output. This is most useful if you want to use logstash -to output events to graylog2.
- -More information at http://www.graylog2.org/about/gelf
- - -output {
- gelf {
- chunksize => ... # number (optional), default: 1420
- codec => ... # codec (optional), default: "plain"
- custom_fields => ... # hash (optional), default: {}
- facility => ... # string (optional), default: "logstash-gelf"
- file => ... # string (optional), default: "%{path}"
- full_message => ... # string (optional), default: "%{message}"
- host => ... # string (required)
- ignore_metadata => ... # array (optional), default: ["@timestamp", "@version", "severity", "source_host", "source_path", "short_message"]
- level => ... # array (optional), default: ["%{severity}", "INFO"]
- line => ... # string (optional)
- port => ... # number (optional), default: 12201
- sender => ... # string (optional), default: "%{source}"
- ship_metadata => ... # boolean (optional), default: true
- ship_tags => ... # boolean (optional), default: true
- short_message => ... # string (optional), default: "short_message"
-}
-
-}
-
-
-The GELF chunksize. You usually don't need to change this.
- -The codec used for output data
- -The GELF custom field mappings. GELF supports arbitrary attributes as custom
-fields. This exposes that. Exclude the _ portion of the field name
-e.g. custom_fields => ['foo_field', 'some_value']
-sets _foo_field = some_value
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The GELF facility. Dynamic values like %{foo} are permitted here; this -is useful if you need to use a value from the event as the facility name.
- -The GELF file; this is usually the source code file in your program where -the log event originated. Dynamic values like %{foo} are permitted here.
- -The GELF full message. Dynamic values like %{foo} are permitted here.
- -graylog2 server address
- -Ignore these fields when ship_metadata is set. Typically this lists the -fields used in dynamic values for GELF fields.
- -The GELF message level. Dynamic values like %{level} are permitted here; -useful if you want to parse the 'log level' from an event and use that -as the gelf level/severity.
- -Values here can be integers [0..7] inclusive or any of -"debug", "info", "warn", "error", "fatal" (case insensitive). -Single-character versions of these are also valid, "d", "i", "w", "e", "f", -"u" -The following additional severity_labels from logstash's syslog_pri filter -are accepted: "emergency", "alert", "critical", "warning", "notice", and -"informational"
- -The GELF line number; this is usually the line number in your program where -the log event originated. Dynamic values like %{foo} are permitted here, but the -value should be a number.
- -graylog2 server port
- -Allow overriding of the gelf 'sender' field. This is useful if you -want to use something other than the event's source host as the -"sender" of an event. A common case for this is using the application name -instead of the hostname.
- -Ship metadata within event object? This will cause logstash to ship -any fields in the event (such as those created by grok) in the GELF -messages.
- -Ship tags within events. This will cause logstash to ship the tags of an -event as the field _tags.
- -The GELF short message field name. If the field does not exist or is empty, -the event message is taken instead.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
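- -For example, a minimal sketch; the host is illustrative and the port is left at its default of 12201:
- -output {
- gelf {
- host => "graylog2.example.com"
- }
-}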
- - -Push events to a GemFire region.
- -GemFire is an object database.
- -To use this plugin you need to add gemfire.jar to your CLASSPATH; -using format=json requires jackson.jar too.
- -Note: this plugin has only been tested with GemFire 7.0.
- - -output {
- gemfire {
- cache_name => ... # string (optional), default: "logstash"
- cache_xml_file => ... # string (optional), default: nil
- codec => ... # codec (optional), default: "plain"
- key_format => ... # string (optional), default: "%{source}-%{@timestamp}"
- region_name => ... # string (optional), default: "Logstash"
-}
-
-}
-
-
-Your client cache name
- -The path to a GemFire client cache XML file.
- -Example:
- - <client-cache>
- <pool name="client-pool">
- <locator host="localhost" port="31331"/>
- </pool>
- <region name="Logstash">
- <region-attributes refid="CACHING_PROXY" pool-name="client-pool" >
- </region-attributes>
- </region>
- </client-cache>
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -A sprintf format to use when building keys
- -The region name
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
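- -For example, a sketch using a client cache XML file like the one above; the file path is illustrative:
- -output {
- gemfire {
- cache_xml_file => "/etc/logstash/gemfire-client.xml"
- region_name => "Logstash"
- }
-}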
- - -This output allows you to pull metrics from your logs and ship them to -graphite. Graphite is an open source tool for storing and graphing metrics.
- -An example use case: At loggly, some of our applications emit aggregated -stats in the logs every 10 seconds. Using the grok filter and this output, -I can capture the metric values from the logs and emit them to graphite.
- - -output {
- graphite {
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- exclude_metrics => ... # array (optional), default: ["%{[^}]+}"]
- fields_are_metrics => ... # boolean (optional), default: false
- host => ... # string (optional), default: "localhost"
- include_metrics => ... # array (optional), default: [".*"]
- metrics => ... # hash (optional), default: {}
- metrics_format => ... # string (optional), default: "*"
- port => ... # number (optional), default: 2003
- reconnect_interval => ... # number (optional), default: 2
- resend_on_failure => ... # boolean (optional), default: false
-}
-
-}
-
-
-The codec used for output data
- -Enable debug output
- -Exclude regex matched metric names, by default exclude unresolved %{field} strings
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Indicate that the event @fields should be treated as metrics and will be sent as is to graphite
- -The address of the graphite server.
- -Include only regex matched metric names
- -The metric(s) to use. This supports dynamic strings like %{source} -for metric names and also for values. This is a hash field with key -of the metric name, value of the metric value. Example:
- -[ "%{source}/uptime", "%{uptime_1m}" ]
-
-
- -The value will be coerced to a floating point value. Values which cannot be -coerced will be zero (0)
- -Defines format of the metric string. The placeholder '*' will be -replaced with the name of the actual metric.
- -metrics_format => "foo.bar.*.sum"
-
-
-NOTE: If no metrics_format is defined the name of the metric will be used as fallback.
- -The port to connect on your graphite server.
- -Interval between reconnect attempts to carbon
- -Should metrics be resent on failure?
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -A plugin for a newly developed Java/Spring Metrics application.
-I didn't really want to code this project, but I couldn't find
-a respectable alternative that would also run on any Windows
-machine, which is the problem and why I am not going with Graphite
-and statsd. This application provides multiple integration options
-so as to make its use possible under your network requirements.
-This includes a REST option that is always enabled for your use,
-in case you want to write a small script to send the occasional
-metric data.
- -Find GraphTastic here : https://github.com/NickPadilla/GraphTastic
- - -output {
- graphtastic {
- batch_number => ... # number (optional), default: 60
- codec => ... # codec (optional), default: "plain"
- context => ... # string (optional), default: "graphtastic"
- error_file => ... # string (optional), default: ""
- host => ... # string (optional), default: "127.0.0.1"
- integration => ... # string, one of ["udp", "tcp", "rmi", "rest"] (optional), default: "udp"
- metrics => ... # hash (optional), default: {}
- port => ... # number (optional)
- retries => ... # number (optional), default: 1
-}
-
-}
-
-
-the number of metrics to send to GraphTastic at one time. 60 seems to be the perfect -amount for UDP, with default packet size.
- -The codec used for output data
- -If using REST as your endpoint, you need to also provide the application url.
-It defaults to localhost/graphtastic. You can customize the application url
-by changing the name of the .war file. There are other ways to change the
-application context, but they vary depending on the Application Server in use.
-Please consult your application server documentation for more on application
-contexts.
- -This setting allows you to specify where we save errored transactions.
-This makes the most sense at this point; we will need to decide
-how to reintegrate these error metrics.
-NOT IMPLEMENTED!
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -host for the graphtastic server - defaults to 127.0.0.1
- -Options are udp (fastest, default), rmi (faster), rest (fast), and tcp
-(don't use TCP yet; it has some problems and errors out on Linux).
- -metrics hash - you will provide a name for your metric and the metric -data as key value pairs. so for example:
- -metrics => { "Response" => "%{response}" }
- -example for the logstash config
- -metrics => [ "Response", "%{response}" ]
- -NOTE: you can also use the dynamic fields for the key value as well as the actual value
- -port for the graphtastic instance - defaults to 1199 for RMI, 1299 for TCP, 1399 for UDP, and 8080 for REST
- -Number of retries attempted after a send error. Currently the only way to handle
-errored transactions; they should instead be saved to a file for later consumption,
-either by a GraphTastic utility or by this program once connectivity is
-ensured to be established.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -This output allows you to write events to HipChat.
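- -For example, a minimal sketch (the room name and token here are placeholders, not real values):
-
-output {
- hipchat {
-   room_id => "ops"
-   token => "YOUR_HIPCHAT_TOKEN"
-   color => "red"
- }
-}
-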
- - -output {
- hipchat {
- codec => ... # codec (optional), default: "plain"
- color => ... # string (optional), default: "yellow"
- format => ... # string (optional), default: "%{message}"
- from => ... # string (optional), default: "logstash"
- room_id => ... # string (required)
- token => ... # string (required)
- trigger_notify => ... # boolean (optional), default: false
-}
-
-}
-
-
-The codec used for output data
- -Background color for message. -HipChat currently supports one of "yellow", "red", "green", "purple", -"gray", or "random". (default: yellow)
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Message format to send, event tokens are usable here.
- -The name the message will appear to be sent from.
- -The ID or name of the room.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The HipChat authentication token.
- -Whether or not this message should trigger a notification for people in the room.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -output {
- http {
- codec => ... # codec (optional), default: "plain"
- content_type => ... # string (optional)
- format => ... # string, one of ["json", "form", "message"] (optional), default: "json"
- headers => ... # hash (optional)
- http_method => ... # string, one of ["put", "post"] (required)
- mapping => ... # hash (optional)
- message => ... # string (optional)
- url => ... # string (required)
- verify_ssl => ... # boolean (optional), default: true
-}
-
-}
-
-
-The codec used for output data
- -Content type
- -If not specified, this defaults to the following:
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Set the format of the http body.
- -If form, then the body will be the mapping (or whole event) converted -into a query parameter string (foo=bar&baz=fizz...)
- -If message, then the body will be the result of formatting the event according to message
- -Otherwise, the event is sent as json.
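- -For example, a sketch of posting selected fields as a form body (the url is illustrative; the mapping fields come from the mapping example below):
-
-output {
- http {
-   url => "http://example.com/endpoint"
-   http_method => "post"
-   format => "form"
-   mapping => ["foo", "%{source}", "bar", "%{type}"]
- }
-}
-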
- -Custom headers to use -format is `headers => ["X-My-Header", "%{source}"]`
- -What verb to use -only put and post are supported for now
- -This lets you choose the structure and parts of the event that are sent.
- -For example:
- -mapping => ["foo", "%{source}", "bar", "%{type}"]
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -This output lets you PUT or POST events to a
-generic HTTP(S) endpoint.
-Additionally, you are given the option to customize -the headers sent as well as basic customization of the -event json itself.
-URL to use
- -validate SSL?
- - -Write events to IRC
- - -output {
- irc {
- channels => ... # array (required)
- codec => ... # codec (optional), default: "plain"
- format => ... # string (optional), default: "%{message}"
- host => ... # string (required)
- messages_per_second => ... # number (optional), default: 0.5
- nick => ... # string (optional), default: "logstash"
- password => ... # password (optional)
- port => ... # number (optional), default: 6667
- real => ... # string (optional), default: "logstash"
- secure => ... # boolean (optional), default: false
- user => ... # string (optional), default: "logstash"
-}
-
-}
-
-
-Channels to broadcast to.
- -These should be full channel names including the '#' symbol, such as -"#logstash".
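- -For example (a sketch; the second channel name is illustrative):
-
- channels => ["#logstash", "#ops"]
-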
- -The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Message format to send, event tokens are usable here
- -Address of the host to connect to
- -Limit the rate of messages sent to IRC in messages per second.
- -IRC Nickname
- -IRC server password
- -Port on host to connect to.
- -IRC Real name
- -Set this to true to enable SSL.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -IRC Username
- - -Push messages to the juggernaut websockets server:
- -Wraps Websockets and supports other methods (including xhr longpolling) This -is basically, just an extension of the redis output (Juggernaut pulls -messages from redis). But it pushes messages to a particular channel and -formats the messages in the way juggernaut expects.
- - -output {
- juggernaut {
- channels => ... # array (required)
- codec => ... # codec (optional), default: "plain"
- db => ... # number (optional), default: 0
- host => ... # string (optional), default: "127.0.0.1"
- message_format => ... # string (optional)
- password => ... # password (optional)
- port => ... # number (optional), default: 6379
- timeout => ... # number (optional), default: 5
-}
-
-}
-
-
-List of channels to which to publish. Dynamic names are -valid here, for example "logstash-%{type}".
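- -For example, a sketch using a dynamic channel name per event type:
-
- channels => ["logstash-%{type}"]
-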
- -The codec used for output data
- -The redis database number.
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The hostname of the redis server to which juggernaut is listening.
- -How should the message be formatted before pushing to the websocket.
- -Password to authenticate with. There is no authentication by default.
- -The port to connect on.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -Redis initial connection timeout in seconds.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -output {
- librato {
- account_id => ... # string (required)
- annotation => ... # hash (optional), default: {}
- api_token => ... # string (required)
- batch_size => ... # string (optional), default: "10"
- codec => ... # codec (optional), default: "plain"
- counter => ... # hash (optional), default: {}
- gauge => ... # hash (optional), default: {}
-}
-
-}
-
-
-This output lets you send metrics, annotations and alerts to -Librato based on Logstash events
- -This is VERY experimental and inefficient right now. -Your Librato account, -usually an email address.
- -Annotations
-Registers an annotation with Librato
-The only required fields are title and name.
-start_time
and end_time
will be set to event.unix_timestamp
-You can add any other optional annotation values as well.
-All values will be passed through event.sprintf
Example: - ["title":"Logstash event on %{source}", "name":"logstashstream"] -or - ["title":"Logstash event", "description":"%{message}", "name":"logstashstream"]
- -Your Librato API Token
- -Batch size -Number of events to batch up before sending to Librato.
- -The codec used for output data
- -Counters -Send data to Librato as a counter
- -Example:
- ["value", "1", "source", "%{source}", "name", "messagesreceived"]
-Additionally, you can override the measure_time
for the event. Must be a unix timestamp:
- ["value", "1", "source", "%{source}", "name", "messagesreceived", "measuretime", "%{myunixtime_field}"]
-Default is to use the event's timestamp
Only handle events without any of these tags. Note this check is additional to type and tags.
- -Gauges -Send data to Librato as a gauge
- -Example:
- ["value", "%{bytesrecieved}", "source", "%{source}", "name", "apachebytes"]
-Additionally, you can override the measure_time
for the event. Must be a unix timestamp:
- ["value", "%{bytesrecieved}", "source", "%{source}", "name", "apachebytes","measuretime", "%{myunixtime_field}]
-Default is to use the event's timestamp
Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -Got a loggly account? Use logstash to ship logs to Loggly!
- -This is most useful so you can use logstash to parse and structure -your logs and ship structured, json events to your account at Loggly.
- -To use this, you'll need to use a Loggly input with type 'http' -and 'json logging' enabled.
- - -output {
- loggly {
- codec => ... # codec (optional), default: "plain"
- host => ... # string (optional), default: "logs.loggly.com"
- key => ... # string (required)
- proto => ... # string (optional), default: "http"
- proxy_host => ... # string (optional)
- proxy_password => ... # password (optional), default: ""
- proxy_port => ... # number (optional)
- proxy_user => ... # string (optional)
-}
-
-}
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The hostname to send logs to. This should target the loggly http input -server which is usually "logs.loggly.com"
- -The loggly http input key to send to. -This is usually visible in the Loggly 'Inputs' page as something like this
- -https://logs.hoover.loggly.net/inputs/abcdef12-3456-7890-abcd-ef0123456789
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- \----------> key <-------------/
-
-
-You can use %{foo} field lookups here if you need to pull the api key from -the event. This is mainly aimed at multitenant hosting providers who want -to offer shipping a customer's logs to that customer's loggly account.
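- -For example, a sketch of pulling the key from a hypothetical per-customer event field:
-
- key => "%{customer_loggly_key}"
-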
- -Should the log action be sent over https instead of plain http
- -Proxy Host
- -Proxy Password
- -Proxy Port
- -Proxy Username
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -output {
- lumberjack {
- codec => ... # codec (optional), default: "plain"
- hosts => ... # array (required)
- port => ... # number (required)
- ssl_certificate => ... # a valid filesystem path (required)
- window_size => ... # number (optional), default: 5000
-}
-
-}
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -list of addresses lumberjack can send to
- -the port to connect to
- -ssl certificate to use
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -window size
- - -This output ships metrics to MetricCatcher, allowing you to -utilize Coda Hale's Metrics.
- -More info on MetricCatcher: https://github.com/clearspring/MetricCatcher
- -At Clearspring, we use it to count the response codes from Apache logs:
- -metriccatcher {
- host => "localhost"
- port => "1420"
- type => "apache-access"
- fields => [ "response" ]
- meter => [ "%{source}.apache.response.%{response}", "1" ]
-}
-
-
-
-output {
- metriccatcher {
- biased => ... # hash (optional)
- codec => ... # codec (optional), default: "plain"
- counter => ... # hash (optional)
- gauge => ... # hash (optional)
- host => ... # string (optional), default: "localhost"
- meter => ... # hash (optional)
- port => ... # number (optional), default: 1420
- timer => ... # hash (optional)
- uniform => ... # hash (optional)
-}
-
-}
-
-
-The metrics to send. This supports dynamic strings like %{source} -for metric names and also for values. This is a hash field with key -of the metric name, value of the metric value.
- -The value will be coerced to a floating point value. Values which cannot be -coerced will become zero (0)
- -The codec used for output data
- -The metrics to send. This supports dynamic strings like %{source} -for metric names and also for values. This is a hash field with key -of the metric name, value of the metric value. Example:
- -counter => [ "%{source}.apache.hits.%{response}, "1" ]
- -The value will be coerced to a floating point value. Values which cannot be -coerced will become zero (0)
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The metrics to send. This supports dynamic strings like %{source} -for metric names and also for values. This is a hash field with key -of the metric name, value of the metric value.
- -The value will be coerced to a floating point value. Values which cannot be -coerced will become zero (0)
- -The address of the MetricCatcher
- -The metrics to send. This supports dynamic strings like %{source} -for metric names and also for values. This is a hash field with key -of the metric name, value of the metric value.
- -The value will be coerced to a floating point value. Values which cannot be -coerced will become zero (0)
- -The port to connect on your MetricCatcher
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The metrics to send. This supports dynamic strings like %{source} -for metric names and also for values. This is a hash field with key -of the metric name, value of the metric value. Example:
- -timer => [ "%{source}.apache.responsetime, "%{responsetime}" ]
- -The value will be coerced to a floating point value. Values which cannot be -coerced will become zero (0)
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -The metrics to send. This supports dynamic strings like %{source} -for metric names and also for values. This is a hash field with key -of the metric name, value of the metric value.
- -The value will be coerced to a floating point value. Values which cannot be -coerced will become zero (0)
- - -output {
- mongodb {
- codec => ... # codec (optional), default: "plain"
- collection => ... # string (required)
- database => ... # string (required)
- generateId => ... # boolean (optional), default: false
- isodate => ... # boolean (optional), default: false
- retry_delay => ... # number (optional), default: 3
- uri => ... # string (required)
-}
-
-}
-
-
-The codec used for output data
- -The collection to use. This value can use %{foo} values to dynamically -select a collection based on data in the event.
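- -For example, a sketch of a per-type collection:
-
- collection => "logs-%{type}"
-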
- -The database to use
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -If true, an "_id" field will be added to the document before insertion.
-The "_id" field will use the timestamp of the event and overwrite an existing
-"_id" field in the event.
- -If true, store the @timestamp field in mongodb as an ISODate type instead -of an ISO8601 string. For more information about this, see -http://www.mongodb.org/display/DOCS/Dates
- -Number of seconds to wait after failure before retrying
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -a MongoDB URI to connect to -See http://docs.mongodb.org/manual/reference/connection-string/
- - -The nagios output is used for sending passive check results to nagios via the -nagios command file.
- -For this output to work, your event must have the following fields:
- -These fields are supported, but optional:
- -There are two configuration options:
- -nagios_level - Specifies the level of the check to be sent. Defaults to
-CRITICAL and can be overridden by setting the "nagios_level" field to one
-of "OK", "WARNING", "CRITICAL", or "UNKNOWN".
- - match => [ "message", "(error|ERROR|CRITICAL)" ]
-
-
-output {
-  if [message] =~ /(error|ERROR|CRITICAL)/ {
-    nagios {
-      # your config here
-    }
-  }
-}
output {
- nagios {
- codec => ... # codec (optional), default: "plain"
- commandfile => ... # a valid filesystem path (optional), default: "/var/lib/nagios3/rw/nagios.cmd"
- nagios_level => ... # string, one of ["0", "1", "2", "3"] (optional), default: "2"
-}
-
-}
-
-
-The codec used for output data
- -The path to your nagios command file
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The Nagios check level. Should be one of 0=OK, 1=WARNING, 2=CRITICAL, -3=UNKNOWN. Defaults to 2 - CRITICAL.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -The nagios_nsca output is used for sending passive check results to Nagios -through the NSCA protocol.
- -This is useful if your Nagios server is not the same as the source host from -where you want to send logs or alerts. If you only have one server, this -output is probably overkill # for you, take a look at the 'nagios' output -instead.
- -Here is a sample config using the nagios_nsca output:
- -output {
- nagios_nsca {
- # specify the hostname or ip of your nagios server
- host => "nagios.example.com"
-
- # specify the port to connect to
- port => 5667
- }
-}
-
-
-
-output {
- nagios_nsca {
- codec => ... # codec (optional), default: "plain"
- host => ... # string (optional), default: "localhost"
- message_format => ... # string (optional), default: "%{@timestamp} %{source}: %{message}"
- nagios_host => ... # string (optional), default: "%{host}"
- nagios_service => ... # string (optional), default: "LOGSTASH"
- nagios_status => ... # string (required)
- port => ... # number (optional), default: 5667
- send_nsca_bin => ... # a valid filesystem path (optional), default: "/usr/sbin/send_nsca"
- send_nsca_config => ... # a valid filesystem path (optional)
-}
-
-}
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The nagios host or IP to send logs to. It should have a NSCA daemon running.
- -The format to use when writing events to nagios. This value -supports any string and can include %{name} and other dynamic -strings.
- -The nagios 'host' you want to submit a passive check result to. This -parameter accepts interpolation, e.g. you can use @source_host or other -logstash internal variables.
- -The nagios 'service' you want to submit a passive check result to. This -parameter accepts interpolation, e.g. you can use @source_host or other -logstash internal variables.
- -The status to send to nagios. Should be 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN
- -The port where the NSCA daemon on the nagios host listens.
- -The path to the 'send_nsca' binary on the local host.
- -The path to the send_nsca config file on the local host. -Leave blank if you don't want to provide a config file.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -A null output. This is useful for testing logstash inputs and filters for -performance.
- - -output {
- null {
- codec => ... # codec (optional), default: "plain"
-}
-
-}
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -This output allows you to pull metrics from your logs and ship them to -opentsdb. Opentsdb is an open source tool for storing and graphing metrics.
- - -output {
- opentsdb {
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional)
- host => ... # string (optional), default: "localhost"
- metrics => ... # array (required)
- port => ... # number (optional), default: 4242
-}
-
-}
-
-
-The codec used for output data
- -Enable debugging. Tries to pretty-print the entire event object.
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The address of the opentsdb server.
- -The metric(s) to use. This supports dynamic strings like %{source_host} -for metric names and also for values. This is an array field with key -of the metric name, value of the metric value, and multiple tag,values . Example:
- -[
-  "%{host}/uptime",
-  "%{uptime_1m}",
-  "hostname",
-  "%{host}",
-  "anotherhostname",
-  "%{host}"
-]
-
-
-The value will be coerced to a floating point value. Values which cannot be -coerced will zero (0)
- -The port to connect on your opentsdb server.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -PagerDuty output. -Send specific events to PagerDuty for alerting.
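- -For example, a minimal sketch (the service key is a placeholder you would get from PagerDuty):
-
-output {
- pagerduty {
-   service_key => "YOUR_PD_SERVICE_KEY"
-   description => "Logstash event for %{host}"
- }
-}
-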
- - -output {
- pagerduty {
- codec => ... # codec (optional), default: "plain"
- description => ... # string (optional), default: "Logstash event for %{host}"
- details => ... # hash (optional), default: {"timestamp"=>"%{@timestamp}", "message"=>"%{message}"}
- event_type => ... # string, one of ["trigger", "acknowledge", "resolve"] (optional), default: "trigger"
- incident_key => ... # string (optional), default: "logstash/%{host}/%{type}"
- pdurl => ... # string (optional), default: "https://events.pagerduty.com/generic/2010-04-15/create_event.json"
- service_key => ... # string (required)
-}
-
-}
-
-
-The codec used for output data
- -Custom description
- -Event details.
-These might be keys from the logstash event you wish to include.
-Tags are automatically included if detected, so there is no need to add them here.
- -Event type
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The service key to use -You'll need to set this up in PD beforehand
- -The PagerDuty API url.
-You shouldn't need to change this.
-This allows for flexibility in case PagerDuty
-iterates the API and Logstash hasn't been updated yet.
- -Service API Key
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -Pipe output.
- -Pipe events to the stdin of another program. You can use fields from the
-event as parts of the command.
-WARNING: This feature can cause logstash to fork off multiple children if you are not careful with the per-event command line.
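- -For example, a sketch that pipes each event to a hypothetical per-type consumer program:
-
-output {
- pipe {
-   command => "/usr/local/bin/consume-%{type}"
- }
-}
-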
- - -output {
- pipe {
- codec => ... # codec (optional), default: "plain"
- command => ... # string (required)
- message_format => ... # string (optional)
- ttl => ... # number (optional), default: 10
-}
-
-}
-
-
-The codec used for output data
- -Command line to launch and pipe to
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The format to use when writing events to the pipe. This value -supports any string and can include %{name} and other dynamic -strings.
- -If this setting is omitted, the full json representation of the -event will be written as a single line.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -Close pipe that hasn't been used for TTL seconds. -1 or 0 means never close.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -Push events to a RabbitMQ exchange. Requires RabbitMQ 2.x -or later version (3.x is recommended).
- -Relevant links:
- -output {
- rabbitmq {
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- durable => ... # boolean (optional), default: true
- exchange => ... # string (required)
- exchange_type => ... # string, one of ["fanout", "direct", "topic"] (required)
- host => ... # string (required)
- key => ... # string (optional), default: "logstash"
- password => ... # password (optional), default: "guest"
- persistent => ... # boolean (optional), default: true
- port => ... # number (optional), default: 5672
- ssl => ... # boolean (optional), default: false
- user => ... # string (optional), default: "guest"
- verify_ssl => ... # boolean (optional), default: false
- vhost => ... # string (optional), default: "/"
-}
-
-}
-
-
-The codec used for output data
- -Enable or disable logging
- -Is this exchange durable? (aka; Should it survive a broker restart?)
- -The name of the exchange
- -Exchange
- -The exchange type (fanout, topic, direct)
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Connection
- -RabbitMQ server address
- -Key to route to by default. Defaults to 'logstash'
- -RabbitMQ password
- -Should RabbitMQ persist messages to disk?
- -RabbitMQ port to connect on
- -Enable or disable SSL
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -RabbitMQ username
- -Validate SSL certificate
- -The vhost to use. If you don't know what this is, leave the default.
- - -send events to a redis database using RPUSH
- -For more information about redis, see http://redis.io/
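- -For example, a minimal sketch shipping events onto a list named after the event type:
-
-output {
- redis {
-   host => ["127.0.0.1"]
-   data_type => "list"
-   key => "logstash-%{type}"
- }
-}
-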
- - -output {
- redis {
- batch => ... # boolean (optional), default: false
- batch_events => ... # number (optional), default: 50
- batch_timeout => ... # number (optional), default: 5
- codec => ... # codec (optional), default: "plain"
- congestion_interval => ... # number (optional), default: 1
- congestion_threshold => ... # number (optional), default: 0
- data_type => ... # string, one of ["list", "channel"] (optional)
- db => ... # number (optional), default: 0
- host => ... # array (optional), default: ["127.0.0.1"]
- key => ... # string (optional)
- password => ... # password (optional)
- port => ... # number (optional), default: 6379
- reconnect_interval => ... # number (optional), default: 1
- shuffle_hosts => ... # boolean (optional), default: true
- timeout => ... # number (optional), default: 5
-}
-
-}
-
-
-Set to true if you want redis to batch up values and send 1 RPUSH command -instead of one command per value to push on the list. Note that this only -works with data_type="list" mode right now.
- -If true, we send an RPUSH every "batch_events" events or
-"batch_timeout" seconds (whichever comes first).
-Only supported for the list redis data_type.
- -If batch is set to true, the number of events we queue up for an RPUSH.
- -If batch is set to true, the maximum amount of time between RPUSH commands -when there are pending events to flush.
- -The codec used for output data
- -How often to check for congestion, defaults to 1 second. -Zero means to check on every event.
- -In case the redis data_type is list and holds more than @congestion_threshold items,
-block until someone consumes them and reduces the congestion; otherwise, if there are no consumers,
-redis will run out of memory, unless it was configured with OOM protection.
-But even with OOM protection, a single redis list can block all other users of redis,
-and redis CPU consumption climbs as it approaches the maximum allowed RAM size.
-The default value of 0 means that this limit is disabled.
-Only supported for the list redis data_type.
- -Either list or channel. If data_type is list, then we will RPUSH to key.
-If data_type is channel, then we will PUBLISH to key.
-TODO set required true
- -The redis database number.
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The hostname(s) of your redis server(s). Ports may be specified on any -hostname, which will override the global port config.
- -For example:
- -"127.0.0.1"
-["127.0.0.1", "127.0.0.2"]
-["127.0.0.1:6380", "127.0.0.1"]
-
-
-The name of a redis list or channel. Dynamic names are -valid here, for example "logstash-%{type}". -TODO set required true
- -Name is used for logging in case there are multiple instances. -TODO: delete
- -Password to authenticate with. There is no authentication by default.
- -The default port to connect on. Can be overridden on any hostname.
- -The name of the redis queue (we'll use RPUSH on this). Dynamic names are -valid here, for example "logstash-%{type}" -TODO: delete
- -Interval for reconnecting to failed redis connections
- -Shuffle the host list during logstash startup.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -Redis initial connection timeout in seconds.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -Riak is a distributed k/v store from Basho. -It's based on the Dynamo model.
- - -output {
- riak {
- bucket => ... # array (optional), default: ["logstash-%{+YYYY.MM.dd}"]
- bucket_props => ... # hash (optional)
- codec => ... # codec (optional), default: "plain"
- enable_search => ... # boolean (optional), default: false
- enable_ssl => ... # boolean (optional), default: false
- indices => ... # array (optional)
- key_name => ... # string (optional)
- nodes => ... # hash (optional), default: {"localhost"=>"8098"}
- proto => ... # string, one of ["http", "pb"] (optional), default: "http"
- ssl_opts => ... # hash (optional)
-}
-
-}
-
-
- -The bucket name to write events to.
-Expansion is supported here, as values are
-passed through event.sprintf.
-Multiple buckets can be specified here,
-but any bucket-specific settings defined
-apply to ALL the buckets.
- -Bucket properties (NYI)
-Logstash hash of properties for the bucket
-i.e.
-bucket_props => ["r", "one", "w", "one", "dw", "one"]
-or
-bucket_props => ["n_val", "3"]
-Note that the Logstash config language cannot support
-hash or array values
-Properties will be passed as-is
The codec used for output data
- -Search -Enable search on the bucket defined above
- -SSL -Enable SSL
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Indices
-Array of fields to add 2i on,
-e.g. `indices => ["source_host", "type"]`.
-Off by default, as not everyone runs eleveldb.
- -The event key name; variables are valid here.
- -Choose this carefully. Best to let riak decide.
- -The nodes of your Riak cluster -This can be a single host or -a Logstash hash of node/port pairs -e.g -["node1", "8098", "node2", "8098"]
- -The protocol to use:
-HTTP or ProtoBuf.
-Applies to ALL backends listed above;
-no mix and match.
- -SSL Options
-Options for SSL connections
-Only applied if SSL is enabled
-Logstash hash that maps to the riak-client options
-here: https://github.com/basho/riak-ruby-client/wiki/Connecting-to-Riak
-You'll likely want something like this:
-ssl_opts => ["pem", "/etc/riak.pem", "ca_path", "/usr/share/certificates"]
-Per the riak client docs, the above sample options
-will turn on SSL `VERIFY_PEER`
Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -Riemann is a network event stream processing system.
- -While Riemann is very similar conceptually to Logstash, it has -much more in terms of being a monitoring system replacement.
- -Riemann is used in Logstash much like statsd or other metric-related -outputs
- -You can learn about Riemann here:
- -output {
- riemann {
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- host => ... # string (optional), default: "localhost"
- port => ... # number (optional), default: 5555
- protocol => ... # string, one of ["tcp", "udp"] (optional), default: "tcp"
- riemann_event => ... # hash (optional)
- sender => ... # string (optional), default: "%{host}"
-}
-
-}
-
-
-The codec used for output data
- -Enable debugging output?
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The address of the Riemann server.
- -The port to connect to on your Riemann server.
- -The protocol to use -UDP is non-blocking -TCP is blocking
- -Logstash's default output behaviour
-is to never lose events.
-As such, we use tcp as the default here.
- -A Hash to set Riemann event fields -(http://aphyr.github.com/riemann/concepts.html).
- -The following event fields are supported:
-description
, state
, metric
, ttl
, service
Example:
- -riemann {
- riemann_event => [
- "metric", "%{metric}",
- "service", "%{service}"
- ]
-}
-
-
-metric
and ttl
values will be coerced to a floating point value.
-Values which cannot be coerced will zero (0.0).
description
, by default, will be set to the event message
-but can be overridden here.
The name of the sender.
-This sets the host
value
-in the Riemann event
Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -TODO integrate awsconfig in the future -INFORMATION: -This plugin was created for store the logstash's events into Amazon Simple Storage Service (Amazon S3). -For use it you needs authentications and an s3 bucket. -Be careful to have the permission to write file on S3's bucket and run logstash with super user for establish connection. -S3 plugin allows you to do something complex, let's explain:) -S3 outputs create temporary files into "/opt/logstash/S3temp/". If you want, you can change the path at the start of register method. -This files have a special name, for example: -ls.s3.ip-10-228-27-95.2013-04-18T10.00.taghello.part0.txt -ls.s3 : indicate logstash plugin s3 -"ip-10-228-27-95" : indicate you ip machine, if you have more logstash and writing on the same bucket for example. -"2013-04-18T10.00" : represents the time whenever you specify timefile. -"taghello" : this indicate the event's tag, you can collect events with the same tag. -"part0" : this means if you indicate sizefile then it will generate more parts if you file.size > size_file.
- - When a file is full it will pushed on bucket and will be deleted in temporary directory.
- If a file is empty is not pushed, but deleted.
-
-
-This plugin has a system to restore the previous temporary files if something crashes.
-INFORMATION ABOUT THE CLASS:
-I tried to comment the class as best I could.
-I think there is much to improve, but if you want some points to develop, here is a list:
-TODO Integrate aws_config in the future
-TODO Find a method to push all remaining files when logstash closes the session.
-TODO Integrate @field on the path file
-TODO Permanent connection or on demand? For now on demand, but that isn't a good implementation.
- - Use a loop or a thread to retry the connection before breaking on a time_out and signaling an error.
-
-
-TODO If you have bug reports or helpful advice, contact me, but remember that this code is as much yours as it is mine;
- - work on it if you want :)
-
-
-Why use a name like "ls.s3...."? The answer is simple: s3 does not allow special characters like "/" or "[,]",
-which would be very useful in date formats, because if you use them s3 no longer recognizes the key.
-For example, "/" in s3 means you can specify a subfolder on the bucket.
-USAGE:
-This is an example of a logstash config:
-output {
- s3 {
- - access_key_id => "crazy_key" (required)
- secret_access_key => "monkey_access_key" (required)
- endpoint_region => "eu-west-1" (required)
- bucket => "boss_please_open_your_bucket" (required)
- size_file => 2048 (optional)
- time_file => 5 (optional)
- format => "plain" (optional)
-
-
-}
-}
-Let's analyze this:
-access_key_id => "crazy_key"
-Amazon will give you the key to use their service if you buy or try it. (Not very open source, anyway.)
-secret_access_key => "monkey_access_key"
-Amazon will give you the secret_access_key to use their service if you buy or try it. (Not very open source, anyway.)
-endpoint_region => "eu-west-1"
-When you make a contract with Amazon, you should know where the services you use are located.
-bucket => "boss_please_open_your_bucket"
-Be careful that you have permission to write to the bucket and know its name.
-size_file => 2048
-The size, in KB, files can reach in the temporary directory before they are pushed to the bucket.
-Useful if you have a small server with little disk space and don't want to blow up the server with unnecessary temporary log files.
-time_file => 5
-The time, in minutes, before the files are pushed to the bucket. Useful if you want to push the files at a specific interval.
-format => "plain"
-The format of the events you want to store in the files.
-LET'S ROCK AND ROLL ON THE CODE!
- - -output {
- s3 {
- access_key_id => ... # string (optional)
- aws_credentials_file => ... # (optional)
- bucket => ... # string (optional)
- codec => ... # codec (optional), default: "plain"
- endpoint_region => ... # string, one of ["us_east_1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"] (optional), default: "us_east_1"
- format => ... # string, one of ["json", "plain", "nil"] (optional), default: "plain"
- region => ... # (optional)
- restore => ... # boolean (optional), default: false
- secret_access_key => ... # string (optional)
- size_file => ... # number (optional), default: 0
- time_file => ... # number (optional), default: 0
- use_ssl => ... # (optional)
-}
-
-}
-
-
-include LogStash::PluginMixins::AwsConfig
-AWS access_key_id.
- -Path to YAML file containing a hash of AWS credentials.
-This file will only be loaded if access_key_id
and
-secret_access_key
aren't set. The contents of the
-file should look like this:
:access_key_id: "12345"
-:secret_access_key: "54321"
-
-
-S3 bucket
- -The codec used for output data
- -AWS endpoint_region
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The event format you want to store in files. Defaults to plain text.
- -The AWS Region
- -Aws secretaccesskey
- -Set the size of file in KB, this means that files on bucket when have dimension > file_size, they are stored in two or more file. -If you have tags then it will generate a specific size file for every tags
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -Set the time, in minutes, to close the current sub-time-section of the bucket.
-If you define size_file, you get a number of files per section and per tag.
-0 means stay on the listener indefinitely; beware of specifying both time_file 0 and size_file 0,
-because the file will never be put on the bucket; for now the only thing this plugin
-can do then is put the file when logstash restarts.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -Should we require (true) or disable (false) using SSL for communicating with the AWS API
-The AWS SDK for Ruby defaults to SSL so we preserve that
SNS output.
- -Send events to Amazon's Simple Notification Service, a hosted pub/sub -framework. It supports subscribers of type email, HTTP/S, SMS, and SQS.
- -For further documentation about the service see:
- -http://docs.amazonwebservices.com/sns/latest/api/
- -This plugin looks for the following fields on events it receives:
- -output {
- sns {
- access_key_id => ... # string (optional)
- arn => ... # string (optional)
- aws_credentials_file => ... # string (optional)
- codec => ... # codec (optional), default: "plain"
- format => ... # string, one of ["json", "plain"] (optional), default: "plain"
- publish_boot_message_arn => ... # string (optional)
- region => ... # string, one of ["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"] (optional), default: "us-east-1"
- secret_access_key => ... # string (optional)
- use_ssl => ... # boolean (optional), default: true
-}
-
-}
-
-
-This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order...
-1. Static configuration, using access_key_id
and secret_access_key
params in logstash plugin config
-2. External credentials file specified by aws_credentials_file
-3. Environment variables AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY
-4. Environment variables AMAZON_ACCESS_KEY_ID
and AMAZON_SECRET_ACCESS_KEY
-5. IAM Instance Profile (available when running inside EC2)
SNS topic ARN.
- -Path to YAML file containing a hash of AWS credentials.
-This file will only be loaded if access_key_id
and
-secret_access_key
aren't set. The contents of the
-file should look like this:
:access_key_id: "12345"
-:secret_access_key: "54321"
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Message format. Defaults to plain text.
- -When an ARN for an SNS topic is specified here, the message -"Logstash successfully booted" will be sent to it when this plugin -is registered.
- -Example: arn:aws:sns:us-east-1:770975001275:logstash-testing
- -The AWS Region
- -The AWS Secret Access Key
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -Should we require (true) or disable (false) using SSL for communicating with the AWS API
-The AWS SDK for Ruby defaults to SSL so we preserve that
Push events to an Amazon Web Services Simple Queue Service (SQS) queue.
- -SQS is a simple, scalable queue system that is part of the -Amazon Web Services suite of tools.
- -Although SQS is similar to other queuing systems like AMQP, it -uses a custom API and requires that you have an AWS account. -See http://aws.amazon.com/sqs/ for more details on how SQS works, -what the pricing schedule looks like and how to setup a queue.
- -To use this plugin, you must:
- -The "consumer" identity must have the following permissions on the queue:
- -Typically, you should setup an IAM policy, create a user and apply the IAM policy to the user. -A sample policy is as follows:
- - {
- "Statement": [
- {
- "Sid": "Stmt1347986764948",
- "Action": [
- "sqs:ChangeMessageVisibility",
- "sqs:ChangeMessageVisibilityBatch",
- "sqs:DeleteMessage",
- "sqs:DeleteMessageBatch",
- "sqs:GetQueueAttributes",
- "sqs:GetQueueUrl",
- "sqs:ListQueues",
- "sqs:ReceiveMessage"
- ],
- "Effect": "Allow",
- "Resource": [
- "arn:aws:sqs:us-east-1:200850199751:Logstash"
- ]
- }
- ]
- }
-
-
-See http://aws.amazon.com/iam/ for more details on setting up AWS identities.
- - -output {
- sqs {
- access_key_id => ... # string (optional)
- aws_credentials_file => ... # string (optional)
- batch => ... # boolean (optional), default: true
- batch_events => ... # number (optional), default: 10
- batch_timeout => ... # number (optional), default: 5
- codec => ... # codec (optional), default: "plain"
- queue => ... # string (required)
- region => ... # string, one of ["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"] (optional), default: "us-east-1"
- secret_access_key => ... # string (optional)
- use_ssl => ... # boolean (optional), default: true
-}
-
-}
-
-
-This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order...
-1. Static configuration, using access_key_id
and secret_access_key
params in logstash plugin config
-2. External credentials file specified by aws_credentials_file
-3. Environment variables AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY
-4. Environment variables AMAZON_ACCESS_KEY_ID
and AMAZON_SECRET_ACCESS_KEY
-5. IAM Instance Profile (available when running inside EC2)
Path to YAML file containing a hash of AWS credentials.
-This file will only be loaded if access_key_id
and
-secret_access_key
aren't set. The contents of the
-file should look like this:
:access_key_id: "12345"
-:secret_access_key: "54321"
-
-
-Set to true if you want send messages to SQS in batches with batch_send -from the amazon sdk
- -If batch is set to true, the number of events we queue up for a batch_send.
- -If batch is set to true, the maximum amount of time between batch_send commands when there are pending events to flush.
- -The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Name of SQS queue to push messages into. Note that this is just the name of the queue, not the URL or ARN.
- -The AWS Region
- -The AWS Secret Access Key
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -Should we require (true) or disable (false) using SSL for communicating with the AWS API
-The AWS SDK for Ruby defaults to SSL so we preserve that
statsd is a server for aggregating counters and other metrics to ship to -graphite.
- -The most basic coverage of this plugin is that the 'namespace', 'sender', and -'metric' names are combined into the full metric path like so:
- -namespace.sender.metric
-
-
-The general idea is that you send statsd count or latency data and every few -seconds it will emit the aggregated values to graphite (aggregates like -average, max, stddev, etc)
- -You can learn about statsd here:
- -A simple example usage of this is to count HTTP hits by response code; to learn -more about that, check out the -log metrics tutorial
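- -A sketch of that response-code counting (assuming a "response" field already extracted by grok; with the defaults, the full metric path becomes logstash.<sender>.apache.response.<code>):
-
-output {
- statsd {
-   host => "localhost"
-   namespace => "logstash"
-   increment => ["apache.response.%{response}"]
- }
-}
-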
- - -output {
- statsd {
- codec => ... # codec (optional), default: "plain"
- count => ... # hash (optional), default: {}
- debug => ... # boolean (optional), default: false
- decrement => ... # array (optional), default: []
- gauge => ... # hash (optional), default: {}
- host => ... # string (optional), default: "localhost"
- increment => ... # array (optional), default: []
- namespace => ... # string (optional), default: "logstash"
- port => ... # number (optional), default: 8125
- sample_rate => ... # number (optional), default: 1
- sender => ... # string (optional), default: "%{source}"
- set => ... # hash (optional), default: {}
- timing => ... # hash (optional), default: {}
-}
-
-}
-
-
-The codec used for output data
- -A count metric. metric_name => count as hash
- -The final metric sent to statsd will look like the following (assuming defaults) -logstash.sender.file_name
- -Enable debugging output?
- -A decrement metric. metric names as array.
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -A gauge metric. metric_name => gauge as hash
- -The address of the Statsd server.
- -An increment metric. metric names as array.
- -The statsd namespace to use for this metric
- -The port to connect to on your statsd server.
- -The sample rate for the metric
- -The name of the sender. -Dots will be replaced with underscores
- -A set metric. metric_name => string to append as hash
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -A timing metric. metric_name => duration as hash
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -output {
- stdout {
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- message => ... # string (optional), default: "%{+yyyy-MM-dd'T'HH:mm:ss.SSSZ} %{host}: %{message}"
-}
-
-}
-
-
-The codec used for output data
- -Enable debugging. Tries to pretty-print the entire event object.
- -Debug output format: ruby (default), json
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The message to emit to stdout.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -output {
- stomp {
- codec => ... # codec (optional), default: "plain"
- debug => ... # boolean (optional), default: false
- destination => ... # string (required)
- host => ... # string (required)
- password => ... # password (optional), default: ""
- port => ... # number (optional), default: 61613
- user => ... # string (optional), default: ""
- vhost => ... # string (optional), default: nil
-}
-
-}
-
-
-The codec used for output data
- -Enable debugging output?
- -The destination to read events from. Supports string expansion, meaning -%{foo} values will expand to the field value.
- -Example: "/topic/logstash"
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The address of the STOMP server.
- -The password to authenticate with.
- -The port to connect to on your STOMP server.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -The username to authenticate with.
- -The vhost to use
- - -Send events to a syslog server.
- -You can send messages compliant with RFC3164 or RFC5424.
-Both UDP and TCP syslog transports are supported.
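- -For example, a minimal sketch sending RFC3164 messages over the default UDP transport (the host, port, and labels are illustrative):
-
-output {
- syslog {
-   host => "syslog.example.com"
-   port => 514
-   facility => "daemon"
-   severity => "informational"
- }
-}
-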
- - -output {
- syslog {
- appname => ... # string (optional), default: "LOGSTASH"
- codec => ... # codec (optional), default: "plain"
- facility => ... # string, one of ["kernel", "user-level", "mail", "daemon", "security/authorization", "syslogd", "line printer", "network news", "uucp", "clock", "security/authorization", "ftp", "ntp", "log audit", "log alert", "clock", "local0", "local1", "local2", "local3", "local4", "local5", "local6", "local7"] (required)
- host => ... # string (required)
- msgid => ... # string (optional), default: "-"
- port => ... # number (required)
- procid => ... # string (optional), default: "-"
- protocol => ... # string, one of ["tcp", "udp"] (optional), default: "udp"
- rfc => ... # string, one of ["rfc3164", "rfc5424"] (optional), default: "rfc3164"
- severity => ... # string, one of ["emergency", "alert", "critical", "error", "warning", "notice", "informational", "debug"] (required)
- sourcehost => ... # string (optional), default: "%{source}"
- timestamp => ... # string (optional), default: "%{@timestamp}"
-}
-
-}
-
-
-application name for syslog message
- -The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -facility label for syslog message
- -syslog server address to connect to
- -message id for syslog message
- -syslog server port to connect to
- -process id for syslog message
- -syslog server protocol. you can choose between udp and tcp
- -syslog message format: you can choose between rfc3164 or rfc5424
- -severity label for syslog message
- -source host for syslog message
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -timestamp for syslog message
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- - -Write events over a TCP socket.
- -Each event json is separated by a newline.
-Can either accept connections from clients or connect to a server,
-depending on mode.
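-For example, a sketch of a client that connects out to a remote collector (the address is illustrative):
-
-output {
- tcp {
-   host => "collector.example.com"
-   port => 9999
-   mode => "client"
- }
-}
-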
output {
- tcp {
- codec => ... # codec (optional), default: "plain"
- host => ... # string (required)
- mode => ... # string, one of ["server", "client"] (optional), default: "client"
- port => ... # number (required)
- reconnect_interval => ... # number (optional), default: 10
-}
-
-}
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -When mode is server, the address to listen on. -When mode is client, the address to connect to.
- -The format to use when writing events to the socket. This value -supports any string and can include %{name} and other dynamic -strings.
- -If this setting is omitted, the full json representation of the -event will be written as a single line.
- -Mode to operate in. server listens for client connections, -client connects to a server.
- -When mode is server, the port to listen on. -When mode is client, the port to connect to.
- -Interval in seconds to wait before retrying a failed connection.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
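- - -A minimal client-mode sketch (the collector address is a placeholder):
- - -output {
- tcp {
- host => "collector.example.com" # placeholder remote collector
- port => 9999
- mode => "client" # the default; connects out to the server
- }
-}
-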
- - -Send events over UDP
- -Keep in mind that UDP is a lossy protocol and may drop messages.
- - -output {
- udp {
- codec => ... # codec (optional), default: "plain"
- host => ... # string (required)
- port => ... # number (required)
-}
-
-}
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The address to send messages to
- -The port to send messages on
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
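- - -For example (the listener address is a placeholder):
- - -output {
- udp {
- host => "10.0.0.5" # placeholder listener address
- port => 9999
- }
-}
-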
- - -This output runs a websocket server and publishes any -messages to all connected websocket clients.
- -You can connect to it with ws://<host>:<port>/
- -If no clients are connected, any messages received are ignored.
- - -output {
- websocket {
- codec => ... # codec (optional), default: "plain"
- host => ... # string (optional), default: "0.0.0.0"
- port => ... # number (optional), default: 3232
-}
-
-}
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The address to serve websocket data from
- -The port to serve websocket data from
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
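- - -For example, with the defaults made explicit below, clients could -watch the event stream at ws://localhost:3232/:
- - -output {
- websocket {
- host => "0.0.0.0" # default: listen on all interfaces
- port => 3232 # default port
- }
-}
-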
- - -This output allows you to ship events over XMPP/Jabber.
- -This plugin can be used for posting events to humans over XMPP, or you can -use it for PubSub or general message passing for logstash to logstash.
- - -output {
- xmpp {
- codec => ... # codec (optional), default: "plain"
- host => ... # string (optional)
- message => ... # string (required)
- password => ... # password (required)
- rooms => ... # array (optional)
- user => ... # string (required)
- users => ... # array (optional)
-}
-
-}
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -The xmpp server to connect to. This is optional. If you omit this setting, -the host on the user/identity is used. (foo.com for user@foo.com)
- -The message to send. This supports dynamic strings like %{source}
- -The xmpp password for the user/identity.
- -If MUC (multi-user chat) is required, give the name of the room that -you want to join: room@conference.domain/nick
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -The user or resource ID, like foo@example.com.
- -The users to send messages to
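- - -A sketch that posts each event to a chat room (the account, room, and -message format are placeholders):
- - -output {
- xmpp {
- user => "logstash@example.com" # placeholder identity
- password => "secret" # placeholder credential
- rooms => ["ops@conference.example.com/logstash"]
- message => "logstash event from %{source}"
- }
-}
-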
- - -The zabbix output is used for sending item data to zabbix via the -zabbix_sender executable.
- -For this output to work, your event must have the following fields: -zabbix_host (the host defined in Zabbix) and zabbix_item (the item key -defined in Zabbix).
- -In Zabbix, create your host with the same name (spaces in the host -name are not supported) and create your item with the specified key as a -Zabbix Trapper item.
- -The easiest way to use this output is with the grep filter. -Presumably, you only want to send certain events matching a given pattern -to zabbix, so use grep to match and also to add the required -fields.
- - filter {
- grep {
- type => "linux-syslog"
- match => [ "@message", "(error|ERROR|CRITICAL)" ]
- add_tag => [ "zabbix-sender" ]
- add_field => [
- "zabbix_host", "%{source_host}",
- "zabbix_item", "item.key"
- ]
- }
-}
-
-output {
- zabbix {
- # only process events with this tag
- tags => "zabbix-sender"
-
- # specify the hostname or ip of your zabbix server
- # (defaults to localhost)
- host => "localhost"
-
- # specify the port to connect to (default 10051)
- port => "10051"
-
- # specify the path to zabbix_sender
- # (defaults to "/usr/local/bin/zabbix_sender")
- zabbix_sender => "/usr/local/bin/zabbix_sender"
- }
-}
-
-
-
-output {
- zabbix {
- codec => ... # codec (optional), default: "plain"
- host => ... # string (optional), default: "localhost"
- port => ... # number (optional), default: 10051
- zabbix_sender => ... # a valid filesystem path (optional), default: "/usr/local/bin/zabbix_sender"
-}
-
-}
-
-
-The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
- -Write events to a 0MQ PUB socket.
- -You need to have the 0mq 2.1.x library installed to be able to use -this output plugin.
- -The default settings will create a publisher connecting to a subscriber -bound to tcp://127.0.0.1:2120
- - -output {
- zeromq {
- address => ... # array (optional), default: ["tcp://127.0.0.1:2120"]
- codec => ... # codec (optional), default: "plain"
- mode => ... # string, one of ["server", "client"] (optional), default: "client"
- sockopt => ... # hash (optional)
- topic => ... # string (optional), default: ""
- topology => ... # string, one of ["pushpull", "pubsub", "pair"] (required)
-}
-
-}
-
-
-0mq socket address to connect or bind.
-Please note that inproc:// will not work with logstash, -since a separate 0mq context is used per thread.
-By default, inputs bind/listen and outputs connect.
- -The codec used for output data
- -Only handle events without any of these tags. Note this check is additional to type and tags.
- -Server mode binds/listens. Client mode connects.
- -This exposes zmq_setsockopt for advanced tuning. -See http://api.zeromq.org/2-1:zmq-setsockopt for details.
- -This is where you would set values like ZMQ::HWM (high water mark) -or ZMQ::IDENTITY (named queues):
- -Example: sockopt => ["ZMQ::HWM", 50, "ZMQ::IDENTITY", "mynamedqueue"]
- -Only handle events with all of these tags. Note that if you specify -a type, the event must also match that type. -Optional.
- -This is used for the 'pubsub' topology only.
-On inputs, this allows you to filter messages by topic.
-On outputs, this allows you to tag a message for routing.
-NOTE: ZeroMQ does subscriber-side filtering.
-NOTE: Topic is evaluated with event.sprintf, so macros are valid here.
- -The default logstash topologies work as follows:
- -pushpull: inputs are pull, outputs are push -pubsub: inputs are subscribers, outputs are publishers -pair: inputs are clients, outputs are servers
- -If the predefined topology flows don't work for you, -you can change the 'mode' setting. -TODO (lusis): maybe add req/rep -TODO (lusis): add router/dealer
- -The type to act on. If a type is given, then this output will only -act on messages with the same type. See any input plugin's "type" -attribute for more. -Optional.
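- - -For instance, a publisher on the pubsub topology, tagging messages -with a per-type topic (the topic value is illustrative; the address is -the documented default):
- - -output {
- zeromq {
- topology => "pubsub"
- address => ["tcp://127.0.0.1:2120"] # the documented default
- topic => "logstash.%{type}" # illustrative; sprintf'd per event
- }
-}
-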
- - -<% if section == "codec" -%>
-# with an input plugin:
-# you can also use this codec with an output.
-input {
- file {
- codec => <%= synopsis.split("\n").map { |l| " #{l}" }.join("\n") %>
- }
-}
-<% else -%>
-<%= section %> {
- <%= synopsis %>
-}
-<% end -%>
-
-